I neglected this blog for a while because I was asked to give a one-hour lecture to a conference of young computer scientists interested in artificial intelligence and Big Data. Yesterday I delivered it via Zoom.
Now, of course, I have to record here, and for myself, what I’ve learned from that job.
A first impression. After a year and a half of COVID isolation in the Netherlands, giving a lecture without a visible audience is an activity that becomes oppressive as time goes on. Furthermore, when a question was asked at the end, it was a shock to find that I had more or less forgotten how to speak English spontaneously — of course it is uncertain whether this is due to age-related mental decline or more simply to having been out of practice for so long. The recording will be sent to me, so that I will be able to judge things through the eyes of a beholder.
I had written the thing out, and I was quite pleased with it. Yet afterwards I had the feeling that it had been too difficult, and too outlandish a show, for an audience involved with Big Data.
Meanwhile, it was about something important: about how we come, through intellectual manipulation, to believe that we act consciously and rationally, while that can only very partially be the case, in view of observable events such as those around COVID, around the CO2 problem, and around recent political scenes in the US and in the Netherlands.
My approach is and remains that re-enacting such sequences of events in toy worlds can indicate how those processes work. A problem with that approach is that one cannot be very precise about that kind of intellectual manipulation and its effects.
And this at a time when science distinguishes itself by being able to know and manipulate processes at the micro and nano scale, and to know about those at the macro and galactic scale.
In between, on the scale of biological life, something exceptional occurs. There, it seems, science has to accept the unpredictability of the future — at the individual, institutional, and ecological levels. Here knowledge is about the past and expectation is about the future. In this transitional area lies Heracles’ crossroads, dividing those who place their hopes on the challenges of the unpredictable (the virtuosi) from those who place their hopes on their preference for — some say addiction to — familiar certainties (the victimi).
Today I am not so sure that science still lives on the side of the virtuosi. This is partly motivated by my feeling that knowledge which shows the limits of our certainties and identifies challenges (as my lecture did) — knowledge that makes virtuosi — gets less attention than knowledge that makes our lives easier (trapping us in deep-learning-based information bubbles) — knowledge that makes victimi.
This reflection is also prompted by my having put into use a new ‘smart’ phone, on which a new version of Android manages to activate a Google assistant that is not only willing to mediate and make my choices for me almost everywhere, but is also virtually ineradicable.
Google, which was once a service that relieved me of the tedious task of walking to the library to consult journals, and which prided itself on pursuing the good, has turned into a tool that pushes me towards victimization.
An ethical dilemma arises here: should I continue to participate in a science that supports the victimization of its target audience?