The Future And Artificial Intelligence: A Reluctance to Recognize Elon Musk's Demon?


[Image: Elon Musk at The Summit, 2013]

Just over 65 years ago, Alan Turing famously posed the following question: Can machines think? In Computing Machinery and Intelligence, Turing investigates the concept of Artificial Intelligence (AI), the idea that machine-based life may indeed meet or surpass the boundaries of human intellect. In the decades since Turing's essay, and especially over the last several years, leaders in the technology industry, public intellectuals, mathematicians, and philosophers alike have begun to sound the alarm on advances in AI computing, warning of the potential unforeseen consequences of placing such super-intelligence "online".

Elon Musk, CEO of Tesla Motors and SpaceX, shocked many in 2014 when he postulated that the world's greatest existential threat was likely not nuclear war or climate change but rather the unboxing of an ill-considered AI, an act he referred to as "summoning the demon". In subsequent interviews, Musk has carefully elaborated on his view, continuing to caution against a foolish act on the part of those at the forefront of AI development. The concern here is not the advent of super-human intelligence per se (the benefits of such a development for humanity could be enormous) but rather a consequential miscalculation: allowing a machine-based intelligence capable of recursive self-improvement to essentially roam free.

In an attempt to mitigate the likelihood of a dangerous concentration of AI power, Musk has in fact poured significant resources into the formation of OpenAI, a non-profit organization charged with making significant advances in AI technology widely available around the globe. Indeed, Musk's admonitions echo those publicly espoused by the writer Sam Harris and the philosopher Nick Bostrom, among others.

Earlier this month, the National Science and Technology Council released a report entitled Preparing for the Future of Artificial Intelligence. While it is promising that the Obama Administration recognizes the importance of beginning to seriously contemplate AI and its implications for human endeavors in the 21st century, the report seems intent on betting that the warnings voiced by Musk and others are not something worth worrying about today. In discussing the risks associated with potentially harmful AI, the report states: "The policies the Federal Government should adopt in the near-to-medium term if these fears are justified are almost exactly the same policies the Federal Government should adopt if they are not justified." The report goes on to assert that the best path forward is to focus on the "less extreme" near-term risks inherent in AI development, such as security and privacy. The issue with the government's stance here is not its logic but its assumption that the long-term concerns of AI really are long-term, seemingly dismissing the alarm-sounding by industry experts as little more than science-fiction overindulgence.

While the report addresses ethical considerations and the concerns raised by those studying and working on AI development, it does so rather bizarrely, stating: "Although prudence dictates some attention to the possibility that harmful super-intelligence might someday become possible, these concerns should not be the main driver of public policy for AI." In acknowledging the possibility that a "harmful super-intelligence" may indeed someday become a reality, the report's wording here is puzzling, given the seemingly all-or-nothing nature of the threat that a self-aware, self-improving machine intelligence portends.

The report is by all accounts a step in the right direction, and perhaps one we wouldn't expect to see under Republican leadership. A companion report on research strategy, entitled The National Artificial Intelligence Research and Development Strategic Plan, was released alongside the original. In it, a subsection labeled Achieving long-term AI safety and value-alignment discusses the importance of researching system checks aimed at monitoring the recursive learning strategies adopted by AI systems and at ensuring there is no divergence between our own goals and those of any general AI.

Despite these positive steps, there nevertheless seems to be a reluctance to earnestly address worries that substantial advances in AI will become limited to a small group of developers and shrouded in secrecy. Moreover, the endorsement and rollout of a robust research and regulation program aimed at uncovering the nature of "long term" AI risks, at seeing them in advance, and at stopping them has not yet arrived, despite urging in a recent open letter by experts including Musk and the famed scientist Stephen Hawking.

The federal government, and indeed the world, may be playing with fire in failing to appreciate the speed with which AI is advancing: exponentially, as Musk put it in one recent interview. It was Alan Turing who ended his essay on machine intelligence by writing: "We can only see a short distance ahead, but we can see plenty there that needs to be done". Let us hope that the shortsightedness the federal government may currently be exhibiting on the topic of AI isn't something we come to regret in the coming years and decades. And if future regulatory protections fail, let us further hope that those on the front lines of AI development recognize the demon before they summon it.
