In my work as a philosopher and anthropologist, I have often sought to blur the lines between humans and machines. For example, I would collect sentences from conversations with engineers that make it impossible to tell whether one is speaking about humans or about machines.
The opposition between culture and technology, between the human and the machine, is wrong and without foundation; it is an expression of ignorance as much as of resentment. To speak of an opposition is a cheap effort to render invisible, behind a naïve humanism, the rich reality of the world of technical objects.
Why this interest in blurring the lines that separate the human from the machine? Think of it (the blurring) as an artistic technique –– as an opportunity to break open the at times unbearable human exceptionalism of our age. Humans, for roughly four centuries, have consistently set themselves apart from mere machines or mere nature and have pretended that machines are mindless and nature a mere resource.
Or think of it as a flirtation –– as an invitation to a “maybe.” I am interested in maintaining the openness of this “maybe” –– in leaving it undecided whether humans are machines (or machines human). Openness allows for playfulness. It allows us to imagine things differently from how they currently are.
But recently I began to doubt that strategy of blurring the lines between humans and machines. Why? Because there is a vast army of commentators out there –– engineers, policy makers, philosophers, entrepreneurs, journalists –– who seriously think that humans are reducible to machines.
You may say they are all Dadaists. But they are not. On the contrary, they are literalists: they do not playfully embrace the machine in order to trouble the human; no, they really and seriously think that humans are machines (and that machines have human qualities).
A closer analysis of the commentaries I refer to shows that almost all of them make a similar, well, mistake: they simultaneously anthropomorphize machines and machinize humans. That is, they describe machines in terms invented to describe humans and they talk about humans in terms invented to talk about machines. The effect is that machines appear to have human traits –– and humans can be sufficiently described in machine terms.
But this is a spectacular confusion. Machines and humans, as Simondon so clearly pointed out, have different, still evolving modes of existence. They are not the same.
It is wrong –– and naïve –– to describe machines in a language invented to describe humans. To provide them with human attributes doesn’t do justice to machines, to their spectacular “non-human” capacities.
And it’s equally wrong –– and naïve –– to describe humans in the insufficient, poor vocabulary we currently have available to think about the mode of existence of machines. It doesn’t do justice to our other-than-machine possibilities.
Machines and humans may overlap. In fact, as everyone who has ever toyed with a Turing test knows, it is not very difficult to find grey areas where one cannot really tell human and machine apart. And yes, the boundaries between humans and machines are unstable. But grey areas and unstable boundaries are not sufficient to claim that humans and machines are factually reducible to each other.
Why is there so much confusion about the relation of humans and machines? Upon reflection, I think that one reason is that we have no language for thinking about machines as such.
Since we have no vocabulary for thinking and talking about machines as such –– for capturing their very own mode of existence or the unprecedented realities that machines today create –– whenever people speak about machines, they have to rely on words and concepts that were historically invented to bring into view human reality. Here human reality, there machine reality.
How come we have no separate language for talking about the latter? An interesting answer emerges from the history of philosophy. Indeed, if one reads through this history, trying to find out when philosophers first began to struggle with thinking about machines as such, one finds that reflections about machines appear only in the late 19th century –– provoked by the industrial revolution, most notably in the work of Ernst Kapp.
Before our times there was no need for a philosophy of machines. After all, there were hardly any machines, and the few that existed were well integrated into the daily rhythms of life. With the industrial revolution, however, machines emerged that not only could no longer be accommodated by the everyday life-worlds in which people lived but also created something new, something that had never before existed: a machine-based reality in which humans had to find their way.
Arguably, machines have changed many times over since the days of Kapp. And yet, the vast majority of concepts we have available to talk about machines are actually a product of Kapp’s time. We can –– and should –– change this.
Why not build a project group –– a manifesto-writing collective composed of philosophers, artists, designers, engineers, etc. –– that comes together to invent a vocabulary that can bring machines into view in their own right? A vocabulary able to capture the mode of existence that is peculiar to machines? Imagine a many-faceted vocabulary adequate to the spectacular capacities of machines –– capacities one can admire precisely insofar as machines are finally liberated from the comparison with the human!
Remember the humanists? The task at hand is akin to theirs. Just as philosophers in the 14th century began to invent a vocabulary for thinking ‘the’ human in a general, time- and place-independent manner, so we need to invent a vocabulary for speaking about machines. We need a machinism, just as the 14th century needed a humanism: a rich and detailed machinism with a complex vocabulary for describing the rich reality of machines.
Would such a machinism have effects on what it means to be human? I sure hope so. Indeed, when philosophers actually began to invent a language for thinking ‘the’ human, they all used the differentiation of humans from machines as a strategic tool: what makes humans unique, they argued, is that they, unlike machines, can think (and talk). If we revise our conception of machines –– as machines today think and talk –– then this must have an effect on our conception of ourselves.
So, however implicitly, inventing a vocabulary to think machines as such is also a way of liberating humans from the comparison with machines. This, exactly this, is the philosophical-anthropological substrate of AI: whether artificial intelligence engineers are aware of it or not, they engage in an experimental philosophy of what it means to be human.
Are Machines Really Smarter?
Let’s return to the ubiquitous suggestion that machines will soon be smarter than us –– and that they will then take over the world. Now that we have separated the human from the machine –– while continuously engaging in flirtations between them –– it becomes obvious that this suggestion results from the confusion between humans and machines described above.
There was a time when machines were modeled on the human and when the goal was to challenge the human directly. Alan Turing repeatedly suggested that building a computer ultimately amounted to building a brain. John von Neumann sought to establish a formal congruence between machines and humans (or brains), and Allen Newell and Herbert Simon used psychological concepts of intelligence to describe their thinking machine.
This obsession with the human turned out –– no one made this clearer than Marvin Minsky –– to be unproductive. It provided A.I. researchers with a measure of success that most of their work could never live up to. Instead of understanding A.I. as a full-blown taking-on of the human, Minsky suggested, A.I. researchers would do better to understand their work as building the material conditions of possibility of a separate, non-human intelligence: data processing (or pattern recognition).
And the confusion? Well, if one says that machines will soon be smarter than humans, one measures humans in terms of a non-human intelligence –– that is, in terms of a concept of intelligence that was explicitly defined as human-independent.
But does it make sense to think human intelligence is reducible to this general concept of intelligence? To ask the question is to answer it. Again the challenge is one of reflection and vocabulary: we need several different concepts of intelligence, so that we can distinguish and differentiate, so that we can bring into view similarities and differences.
Unfortunately, the fact that the comparison between humans and machines is philosophically contingent on this general conception of intelligence has remained largely hidden from the vast majority of commentators. The consequence of this error is that they warn us (wrongly, I dare say) that machines are so much smarter than us and will soon rule the world.
Machines are going to be part of our world. They are going to be smart, skillful, empathic, etc. They may produce art. However, they are not going to be human: they are going to be machines (even if we can’t draw clear lines). And it would be good if we could respect them and treat them as machines. If they challenge us every now and then, all the better.