Being Human in the Age of Intelligent Machines –– Rethinking AI100
In many households across North America, thinking and talking machines have become part of everyday life. Perhaps one has to pause a moment to appreciate how extraordinary an event this actually is: artificial intelligence, once the stuff of science fiction novels, has entered everyday life!
How did this happen? How do these intelligent machines, now partners in everyday life, change how we live? What is their effect on society? On what it means to be human? Is there any effect at all?
The Stanford One Hundred Year Study on Artificial Intelligence (AI100) has released a report that attempts to answer these questions.
In 2009, Eric Horvitz, then president of the Association for the Advancement of Artificial Intelligence (AAAI), convened a conference at Asilomar to discuss the relation between AI and society. One outcome of the workshop was the suggestion to build an institutional platform, ideally at a university close to the tech sector, that would regularly assemble AI experts from around the world and charge them with mapping recent developments in AI research and assessing their significance for society. Stanford University agreed to house the project, which now consists of a standing committee that every five years strikes an expert panel to write a report on AI’s recent past and its near future, and to do so for the next one hundred years.
The 2016 Report is the first of its kind, and it is a hugely interesting and in many ways helpful document.
The report is designed as an expert account of AI, written for “people and society.” And indeed, the authors provide a helpful overview, divided into eight domains, of how AI research has developed and where it is likely headed in the foreseeable future.
These domains are transportation (self-driving cars and the city of the future), home/service robots (vacuum cleaners), health care (big data based analysis of biological processes), education (the automation of instruction), low resource communities (gaining access to the internet), public safety (surveillance), employment (automation of work processes), and entertainment (internet of things).
However, the most important message the authors of the first AI100 Report wish to convey to “people and society” is this: don’t be afraid!
On every page, in so many different ways, readers are assured that whatever they may have read about AI in the popular press, whatever they may have seen in a movie, is likely wrong. The Report insists that singularities, that is, machines that become self-conscious and begin to think and act for themselves as if they were unique, singular beings, are likely never going to exist.
On the one hand, the report lives up to its aspirations: If you were interested in what AI is, what it has accomplished in recent decades, and what it is likely to accomplish in the near future –– there is perhaps no better source. The authors, all internationally renowned experts, have done a spectacular job.
On the other hand, however, the first AI100 report has some substantial problems. I zoom in on two.
The report is obsessed with society. On almost every page, in so many different ways, the authors emphasize how important it is to “inform society.” We are told society has to decide, that society has a choice, that society should remove regulatory obstacles, that society may profit, etc.
Given all of this emphasis on AI and society, one wonders why there is not a single social scientist on the standing committee who could actually analyze the effect of AI on society.
How is one to explain this absence?
The answer that implicitly emerges from the report is that the authors are not actually concerned with society. At least not with the society the social sciences are concerned with.
Instead they are concerned with those experts that make decisions on behalf of society: government officials, policy makers, politicians, lawyers, judges.
Differently put, what worries them are not the effects that rendering unskilled workers superfluous might have on families or on society as a whole. The suggestion that automation frees the labor force up to be creative in other domains is perhaps well meant but not without naiveté: most workers lack the education, the financial liberty, and certainly the necessary connections to explore economically creative ideas.
A large set of social and political questions emerges at this point.
What about the redistribution of the wealth generated by AI, and how will it affect the composition of society? Should the workers laid off by machines somehow profit from the wealth generated by their replacements?
And what about the huge social consequences of the changes they suggest AI will bring to education: Is access to human teachers going to be a privilege of the few who can pay for it, while the masses are left with talking machines?
A social scientist could also have informed the authors of the report that the suggestion that the poorest members of society –– or the poorest of the poorest nations –– will profit from AI because they can now use their iPads to go online and take college classes is kind of embarrassing.
If you live on a dollar per day, you are unlikely to have an iPad and/or access to the internet.
An even more significant shortcoming of the 2016 report is its refusal to critically explore what it means to be human in the age of intelligent machines.
The authors simply have nothing to say about how the emergence of intelligent machines actually challenges our established ways of thinking about ourselves.
It is as if they are utterly unaware of the actual philosophical substrate that, today, makes AI an experimental philosophy of the human.
When, in the early 17th century, philosophers began to courageously elaborate a concept of the human that would allow us to be free, autonomous beings living our lives independent of authority (political or religious), they argued that what defines humans, what sets them apart from all other things, is their capacity to think.
At stake in this new definition of humans as thinking things was the effort to liberate us humans from being a part of the natural, God given cosmos of the Middle Ages.
To accomplish this liberation, the philosophers came up with an argument that held for centuries: that machines don’t think. Unabashedly they argued that all of nature –– including animals and plants and even the human body –– is nothing but a machine, organized and explicable in purely mechanical terms.
But there is one thing, they went on, that distinguishes humans from all the machines that make up nature: their mind, that is, their capacity to think and, ultimately, to understand.
To think, the philosophers emphatically declared, means to open up spaces of deliberation, of innovation, ultimately of freedom.
In short, the modern conception of the human, on which we still in many ways rely, has been contingent on the suggestion that machines cannot think.
Certainly, this definition of the human has been challenged over the last 400 years. However, it is safe to say that no challenge was strong enough to throw it overboard. Not until now, not until AI began to enter everyday life.
What is the effect of the advent of thinking and talking machines on our self-comprehension as humans?
Remember Herbert Simon’s provocations?
“It is not my aim to surprise or shock you,” one of his most famous sentences goes, “but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until – in a visible future – the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”
Of course Simon wanted to shock. And surprise.
Simon still knew that a thinking machine was the ultimate challenge to the conception of the human his colleagues in human sciences departments –– and most of us outside of them –– relied on. What is more, he embraced the provocation, as did Marvin Minsky, and celebrated AI as an experimental philosophy of the human.
They still knew the philosophical substrate of artificial intelligence.
In the Stanford AI100 Report, however, AI is devoid of any philosophical substrate.
This really is a missed opportunity.
The question concerning the human –– what is a human being? –– occurs today primarily in areas outside of the expertise of the human sciences. For example, in AI labs.
However, this by no means implies that the human sciences are irrelevant.
AI, arguably, is simultaneously an engineering science and a philosophical-anthropological problem –– and hence AI is a powerful opportunity to bring engineers and philosophers or anthropologists together to discuss what it means to be human in the age of intelligent machines.
Could one rethink AI100 as a consortium of AI researchers, anthropologists, philosophers, and others who, together, bring AI into view as an experimental philosophy of the human, one that escapes our established concepts? One that charts the new terrain AI opens up, for ourselves as well as for machines?