Humankind’s Greatest Threat May Not Be Global Warming or North Korea

This post was published on the now-closed HuffPost Contributor platform.

Hold your thoughts about the threats to humanity from North Korea and global warming for a moment and chew on this: the greatest threat may be something altogether different – technology itself. When the average Joe claims that technology is running amok, we attribute it to the old man yelling at kids to get off his lawn. When Stephen Hawking, Bill Gates, and Elon Musk unite to give such a warning, however, we sit up and pay attention, even if we don’t all agree. Facebook CEO Mark Zuckerberg certainly doesn’t (more on that in a moment).

So, what on earth has got everybody so worked up? Artificial intelligence (AI) and machine learning – basically, any technology that can exhibit seemingly conscious, intelligent, and autonomous behavior. Much as we near a point of no return with global warming, a group of some of our planet’s most brilliant minds, none of whom work at Facebook, feel we must act with the same urgency toward this evolving technology. Otherwise, they fear, if the Earth doesn’t kill us, the robots will.

In contrast, Zuckerberg paints a much rosier picture. “I think you can build things and the world gets better. But with AI especially, I am really optimistic.” During a Facebook Live broadcast to his two billion users, he took the time, while smoking meats, to smoke Musk too. Zuckerberg characterized Musk as a Chicken Little on the matter, an irresponsible naysayer of doom and gloom. Musk fired back on Twitter with this drop-the-mic tweet: “I’ve talked to Mark about this. His understanding of the subject is limited.”

It’s rare to see industry icons participate in such a public spat. But if humankind hangs in the balance, then let them have their cat fight.

Regardless of which side you are on, almost everyone would agree that AI has crossed a technological threshold. We just don’t know yet what that means. Musk’s fear is that we’ve opened the door to losing control of our inventions; he has likened building advanced AI to “summoning the demon.” We think we can control it, but, as he points out, it never works out. Gates has said he shares the concern and doesn’t understand why others don’t. Hawking echoed a similar sentiment: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” That’s why Hawking concluded that such innovations could become “the best or worst thing to happen to humanity.” Zuckerberg would agree with half of that statement.

You could argue that losing control of innovation is nothing new. For example, we discovered atomic and nuclear energy and then lost control of them. In those instances, however, control was taken by other humans or, scarier still, the military.

The truth is that, up until now, technology has been at our beck and call. Bad technology behavior consisted of episodes such as a frozen monitor, poor Wi-Fi reception, or a lost Internet connection. Musk warned that one day it could mean robots going down the street killing people. He also wondered whether that is what it will take to wake up our collective conscience, and Zuckerberg’s, before it’s too late. Stated that way, it puts the inconvenience of a dropped Skype call in perspective.

The driving force behind current AI technology is predictive modeling and reactive response. Think of it as part fortune teller and part first responder. Applications and sensors in devices collect endless information and analyze it to foretell what you may or may not do. For example, machine learning can help financial institutions quickly identify uncharacteristic credit card behavior, which has saved the industry billions of dollars. But it can also be paired with the Internet of Things to predict which part of the house you occupy at which part of the day. In the movie “Minority Report,” technology was used to predict criminal behavior before it occurred. Such science fiction may not be far from becoming fact.
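The fraud-detection idea above can be sketched in a few lines. This is a toy illustration, not any bank’s actual model: real systems weigh far richer signals (merchant, location, timing), but even a simple statistical outlier test captures the notion of “uncharacteristic behavior.” The function name, threshold, and sample amounts are all invented for this example.

```python
# Toy sketch: flag a credit card transaction whose amount deviates
# sharply from a customer's usual spending. Purely illustrative.
from statistics import mean, stdev

def flag_uncharacteristic(history, new_amount, threshold=3.0):
    """Flag a transaction more than `threshold` standard deviations
    from the customer's historical mean purchase amount."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

usual = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]  # typical purchases
print(flag_uncharacteristic(usual, 50.0))   # False: ordinary spend
print(flag_uncharacteristic(usual, 900.0))  # True: suspicious spike
```

A model like this learns nothing about *why* a purchase is unusual; it only knows the pattern, which is exactly the fortune-teller role described above.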

Musk calls out one form of AI, artificial general intelligence, as posing the greatest threat. This is the superintelligent entity, similar to HAL 9000 from Stanley Kubrick’s “2001: A Space Odyssey.”

That movie came from the mind of legendary author Arthur C. Clarke. His contemporary, Isaac Asimov, sounded the alarm bell on robotics more than 70 years ago. In 1942, his short story “Runaround” outlined three principles for ethical behavior when developing robots and AI technology:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
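As a purely hypothetical illustration of how priority-ordered rules like Asimov’s might look if reduced to code, here is a toy sketch. The boolean flags (`harms_human`, `disobeys_order`, and so on) are invented for this example; no real robot reduces ethics to a handful of booleans, which, as the article goes on to argue, is part of the problem.

```python
# Toy sketch: Asimov's Three Laws as priority-ordered checks.
# The action flags are hypothetical and purely illustrative.

def evaluate(action):
    """Apply the Three Laws in priority order; return (allowed, reason)."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False, "violates the First Law"
    # Second Law: obey human orders unless doing so breaks the First Law.
    if action.get("disobeys_order"):
        return False, "violates the Second Law"
    # Third Law: self-preservation yields to the two higher laws.
    if action.get("endangers_self") and not action.get("required_by_order"):
        return False, "violates the Third Law"
    return True, "permitted"

print(evaluate({"harms_human": True}))     # blocked by the First Law
print(evaluate({"disobeys_order": True}))  # blocked by the Second Law
print(evaluate({}))                        # permitted
```

The ordering of the `if` statements is the whole design: a lower law can never override a higher one.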

These ideas have served as the guiding light and bible for robotics and AI ethicists ever since. AI technology shouldn’t hurt people, disobey orders, or protect itself over humans. The problem is that if you give an object thought and decision-making capabilities without a soul, that object lacks empathy, an important piece of the human condition and a pillar behind Asimov’s laws. There are clinical labels, Empathy Deficit Disorder (EDD) and Borderline Personality Disorder (BPD) for example, pertaining to such behavior in people. Serial killers, hit men, and psychopaths often suffer from a form of these ailments, as can narcissists.

Autonomous robots have EDD built into their DNA. Binary code of 1s and 0s has no integrity or moral compass. Robots can be programmed, but as we all know, our feelings, behaviors, and decisions derive from individual situations and can’t be modeled with absolute certainty.

If we hand over a copy of the keys to our kingdom to robots, we become victims of the technology and lose much of our power to design our own destiny. We’ve already started down this road. Now we must figure out how to control it.

In following Asimov’s thoughts, it all comes down to regulation. Strike that: it all comes down to proactive regulation. As Musk pointed out recently, “By the time we are reactive in AI regulation, it’s too late.” Take digital privacy as a prime example. Here in America, we’re having a crisis. Our personal information is quickly and easily available through public searches, and it can be sold by the services we use to advertisers and data miners. Fixing it is complicated; you can’t protect data retroactively when everything digital has a permanent presence. Tweets and Snaps can be deleted, but they don’t disappear: anyone can save screenshots and download videos before that happens. We may claim a right to be forgotten online, but that doesn’t mean it’s technically possible to be forgotten now.

Now compare our experience with what has transpired in Europe. Much of the Old World designed data privacy laws during the Internet’s infancy. Woven into the fabric of the technology early, those rules have proven much easier to refine and update, and they have become the model for privacy. Too bad we didn’t follow suit back then.

Technological innovation unfolds daily before our eyes. If you’re 80 years old, you witnessed the birth of television. If you’re 50, you saw the advent of the personal computer. Social media is less than 20 years old. Today, mobility and the cloud are driving a digital transformation in people’s lives and in business. Sorry, Mr. Zuckerberg, but in all of these instances, invention becomes a hindrance without proper monitoring and management.

Hawking advocates deeply researching the potential roadmap for robotics and AI. Michael Vassar, chief science officer of MetaMed Research, recently offered a similar recommendation. “If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order.” Sadly, that would go for Facebook users too.

The Future of Life Institute, a charity and outreach organization whose founders include a co-founder of Skype and an MIT professor, similarly advocates planning ahead as a better strategy than learning from mistakes. In 2017, the institute joined with 1,200 AI and robotics researchers and 2,300 additional signatories to publish an open letter on AI safety. Taking the baton passed by Asimov, the letter contained 23 principles, including the following:

· The goal of AI research should be to create not undirected intelligence, but beneficial intelligence

· AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible

· AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity

· Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization

Almost everything in life requires some form of regulation, but not everything puts humankind at risk. No one is demanding that the AI genie go back into the bottle; many simply want to ensure the technology protects all of our wishes. Otherwise, we risk an autocracy and a society of metallic terminators. The argument may sound ethereal, which doesn’t jibe with Zuckerberg’s pragmatic modus operandi, but it isn’t. One day it will become all too real, and people will either hail the technology or mourn the loss of ourselves, relegating our independence and the world we know to distant memories.
