Elon Musk Is Wrong About AI. Here's Why.


Will AI evolve to become an existential threat to humanity? Billionaire entrepreneur Elon Musk and physicist Stephen Hawking seem to think so. They are amongst a number of luminaries who see intelligent machines as humanity's way of committing collective suicide. But how right are they? Should we heed their alarmist views? Is there anything we can do to ensure that machine intelligence will never threaten us?

The Musk-Hawking view on AI echoes the so-called "AI Singularity" idea: a machine with the ability to learn becomes superintelligent, reflects upon the pointlessness of biological, carbon-based intelligences -- such as us -- and decides to exterminate us, or to keep us for its entertainment. It is the same idea behind the Terminator and Matrix end-of-the-world films. It is also an idea based on a series of false assumptions and misunderstandings.

Let's take the concept of "superintelligence" first. What Musk and Hawking mean by the term is an information-processing entity that surpasses "human intelligence". But what exactly is "human intelligence"? Is it a set of behaviors that humans exhibit as we deal with external or internal stimuli? Or is it something deeper, something that has to do with our consciousness, or self-awareness? Musk and Hawking seem to mean the latter; in other words, for them "superintelligence" means a conscious machine that can process information infinitely faster than a human brain.

But this idea of superintelligence is fallacious because it assumes that the brain is "like a computer", i.e. a biological information-processing machine. Our brain is neither a computer nor a "machine". It does not "store memories" and does not "process information". We use such words to describe the brain metaphorically, not literally. Musk and Hawking confuse metaphor with reality, and they do so systematically. Indeed, Hawking has repeatedly said that he believes that in the future humans will be able to "download their consciousness" into a computer and live forevermore. But human consciousness is not a software program. We have no idea what it is, but we can be certain that it cannot be reduced to symbolic reasoning. So if consciousness is not an algorithm, could an apparently non-symbolic computer -- such as AlphaGo -- which runs on a connectionist neural network architecture and emulates how the human brain works, become conscious? Again, any machine, whatever it emulates with its software, is a Turing machine that ultimately processes logical symbols; and consciousness is not symbolic. No symbolic machine will ever have an "I" to reflect upon the pointlessness, or not, of anything.
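The point that even a "connectionist" network is ultimately symbol manipulation can be made concrete. The sketch below is purely illustrative (the weights and function names are invented for this example): a neural network layer, stripped of its biological metaphors, is nothing but deterministic arithmetic on numbers -- exactly the kind of step-by-step symbol processing a Turing machine performs.

```python
# Illustrative sketch: a "connectionist" dense layer is, underneath,
# deterministic symbol manipulation -- finite arithmetic that any
# Turing machine could carry out step by step.

def relu(x):
    """A common activation function: pass positives through, zero out negatives."""
    return x if x > 0 else 0.0

def forward(inputs, weights, biases):
    """One dense layer: each output is a weighted sum of the inputs plus a bias."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs))
        outputs.append(relu(total + b))
    return outputs

# Hypothetical numbers chosen only for illustration.
inputs = [1.0, -2.0]
weights = [[0.5, -1.0], [2.0, 0.25]]
biases = [0.0, -1.0]

print(forward(inputs, weights, biases))  # same symbols in, same symbols out: [2.5, 0.5]
```

However brain-like the architecture is said to be, nothing in this computation is anything other than rule-governed symbol shuffling -- which is the author's point about why it cannot, by itself, amount to consciousness.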

But Musk and Hawking also point to the possibility that humanity could be exterminated by a non-conscious machine intelligence that goes haywire. For instance, a machine intelligence given the task of lowering the carbon dioxide in the Earth's atmosphere might fulfil its goal by destroying all industry and then all carbon-producing humans. This "extinction by accident" scenario is not theoretically impossible, but it is practically improbable. It assumes that this machine intelligence will somehow gain access to the planetary infrastructure -- weapon systems, communications, etc. -- and will be able to act on its own without hindrance. The improbability can be raised further by making AI systems safe. Extinction by accident is ultimately an engineering problem that can be solved like all other engineering problems: by identifying the safety parameters of a system and by devising safety strategies, approaches and techniques to make the system and its operation safe. Engineers have been doing this for centuries, and indeed a recent paper aims to do exactly that. AI systems can be made safe to use.
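The engineering approach described above -- identify safety parameters, then constrain the system's operation by them -- can be sketched as a toy example. Everything here is an assumption invented for illustration (the limits, the action format, the names); it is not from the article's cited paper or any real AI-safety framework:

```python
# Toy sketch of constraining an automated agent with explicit safety checks.
# All parameters and names below are illustrative assumptions.

SAFETY_LIMITS = {
    "max_co2_reduction_per_step": 5.0,     # illustrative units
    "forbidden_targets": {"industry", "humans"},
}

def is_safe(action):
    """Reject any action that touches a forbidden target or exceeds a limit."""
    if action["target"] in SAFETY_LIMITS["forbidden_targets"]:
        return False
    if action.get("co2_delta", 0.0) > SAFETY_LIMITS["max_co2_reduction_per_step"]:
        return False
    return True

def execute(action):
    """Only act when every safety check passes; otherwise block the action."""
    if not is_safe(action):
        return "blocked"
    return "executed"

print(execute({"target": "power_grid_tuning", "co2_delta": 2.0}))  # executed
print(execute({"target": "industry", "co2_delta": 2.0}))           # blocked
```

The design choice mirrors the paragraph's argument: safety is enforced as an explicit engineering constraint on the system's actions, not left to the goal-seeking behavior itself.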

So why all this alarmist talk about AI? And why do such smart people as Musk and Hawking fall into the trap of touting sci-fi scenarios as if they were scientific facts? Perhaps much has to do with how society's perception of technology has changed over the past few decades. As the 20th century dawned, technology was seen as a liberator, a force for good that promised a better future. In both the capitalist West and the communist Soviet Union, the future was imagined as a utopia of plenty where machines served humanity and aided its expansion to other planets and star systems. But by the 1990s this optimistic vision had turned dark, and the future became a dystopian place. Sci-fi movies made sure to reflect that in their stories: humanity was tested, fought, and ultimately lost to its own creations and doings, whether they were intelligent machines, genetically engineered viruses, or global warming. The future is now full of zombies, killer robots, income inequality, deserts, and social strife. With their alarmist views on AI, Musk and Hawking have lent voice to the anxieties that we all harbor about the future. And that is exactly why we must resist them. We must reclaim the future as something we can shape for the greater good, not as something that will simply happen against our will. Ensuring that AI is safe and used for good purposes is the right step forward.
