All Those Truths


He had been building toy worlds for a while now, to show himself how they behave when his personal assumptions and narratives are played out, when he stumbled upon a book review by James Gleick titled Simulating Democracy. It was not a completely unexpected shock: mimicking democracy using big data and neuropublicity techniques in what Bobbitt calls market states is, after all, perilous. He knew that, but had decided that the danger had to be faced head on, using those very same techniques. Lepore thinks differently, and voiced that in a conversation with Sunil Amrith on YouTube like this:

“[…] But I do not believe we actually learn from the computer simulation of human interaction and kindness and compassion and justice. I just think there are — and this is my commitment, kind of disciplinarily — to try to model those things computationally is to forfeit our humanity. And so for me, the novel is just going to be a more compelling source of knowledge and of wisdom and of truth, ultimately, about these things. There’s a lot that we can model, quantitatively and computationally, but there are some things that I don’t think we ought to be modeling because we have developed language. We have human languages, not computer languages. And we have literature and we have art and we have theater and forms of expression that tell us how humans survive scourges. And that’s where I want to go, to know.”

The quotation contains one element he completely disagrees with: the claim that we lose our humanity through computer simulations, resting on the idea that human language can do what computer language cannot. The latter is plainly nonsensical, given the capabilities of Google Translate and the like.

But perhaps Lepore refers to a common-sense no-brainer that is often side-stepped in the debate. Simply this: computers do not become people by speaking human language(s), and people do not become computers by speaking computer language(s).

The idea of forfeiting our humanity when trying to understand democratic processes by reenacting them in computer-driven toy worlds has two edges.

One is that of the Bernayses, the Whitakers and the Baxters of this world, who succeed all too often in unwarranted persuasion: of women to take up smoking, or of the Californian electorate to believe their framing of a candidate as a socialist. This edge succeeds by using simulation models built on big data, algorithms and knowledge of human behavioral preferences. Many naive citizens, consumers and employees succumb here without a fight. And here lies, I guess, the risk of forfeiting humanity. But my guess is that with only art, culture, decency and vulnerability as weapons, we (whatever "we" means) stand to lose in our contemporary Google / Baidu / Twitter / TikTok driven societies.

In 2018 Lepore wrote a book on the history of America, These Truths, very useful in 2020 amid the multitude of (GOP?, DEM?, Russia?, Iran?, PRC?, Science?, Racist?, Constitution?-based) political narratives of the presidential elections. It helps us make up our minds about the half-forgotten or half-misrepresented historic roots of most of our contemporary political divides. For the members of the electorate who make the effort and read the book (or an equivalent), it helps them take a stand and reason why. Those who do not, in a sense, forfeit their humanity as democratic agents. The same goes for those who do not make the effort to find out which propaganda is evidence-based and which isn't.

As long as evidence-based reasoning retains some authority, there is hope. Almost all data that matters for political decision-making is public. And there are ample platforms (R, NetLogo) with which individuals can simulate the outcomes of rule sets that mimic the political positions crucial to them.
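To make concrete what such a do-it-yourself toy world can look like, here is a minimal sketch in Python of a classic voter model, the kind of opinion-dynamics exercise that ships with NetLogo's model library. The agents, parameters and update rule are illustrative assumptions only, not a model of any real electorate.

```python
import random

def voter_model(n_agents=101, steps=20000, seed=42):
    """Minimal voter model on a ring of agents.

    Each agent holds a binary opinion (0 or 1). At every step a
    randomly chosen agent copies the opinion of one of its two
    neighbours. The run stops early if consensus is reached.
    All parameters are illustrative, not calibrated to any data.
    """
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        neighbour = (i + rng.choice([-1, 1])) % n_agents
        opinions[i] = opinions[neighbour]
        if sum(opinions) in (0, n_agents):  # full consensus
            break
    return opinions

final = voter_model()
print(f"final share holding opinion 1: {sum(final) / len(final):.2f}")
```

Even a toy like this lets you watch how local imitation can sweep a population toward consensus, and how the outcome depends on the initial mix and the random seed, which is exactly the kind of intuition the talking heads would rather keep to themselves.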

You do not lose your humanity that way; on the contrary. It is the only way to visualize and counterbalance the talking heads who take advantage of the opportunities left to them to sell dangerous nonsense based on secretive algorithms and silent selections from far too large sample and parameter spaces. Confront (and if necessary beat) them with their own weapons.

It is also the reason for one of the threads running through this blog.