AI has taken the stage explosively. I am getting notifications of lectures, breakfast sessions, think tank charters and meetings on AI and the future at a rate well exceeding one a day. Investment banks, tech companies, consulting firms, universities, government offices, research labs--they are all asking questions on AI cut from the same cloth: how will Artificial Intelligence change our future? Employment, food security, geopolitics, surveillance, privacy, marketing, politics, economics--all these issues are on the table at the get-togethers now swirling about in search of prognostications that give each set of players what they hope will be an edge, thanks to AI, over their competitors and over an uncertain future.
And yet it is worth reflecting on the recent experience of Uber as they moved a whole fleet of self-driving cars into San Francisco, then moved them right out again after a few days of suboptimal performance. AI is a boundary science--the very definition of what AI means and can do changes on a daily basis, and Uber ran right into the discrepancy between their engineers' view of 'good enough' and that of the general public, and that in a town where the social contract between bicyclists, pedestrians and automobiles is freighted with significance. Of course, Uber will be back someday with more, and better, cars. And of course there will be problems that catch them unawares. But the in-and-out move of Uber may itself be the harbinger of a new pattern worth our attention. AI is coming, in fits and starts, and its very experimentation may permeate our lived experiences more than we could ever have imagined. So in this post I want to clear up three common misconceptions so that we can be the most informed lab rats possible as the experiments on human society commence. So, here, for your reading pleasure, are Three Not-laws of Artificial Intelligence:
1. AI is not singular
At a recent talk an audience member remarked on how impressive it was that a major corporation's AI system can not only diagnose medical conditions but also tutor children in maths by providing individualized learning guidance. The flexibility of the AI system really impressed her. The head engineer of the AI system laughed, explaining that this actually demonstrated the flexibility of two completely separate engineering teams. The code was entirely different, and each problem had been solved in individualized ways. The company just happened to give all their solutions the same name to market their overall expertise. This is the first not-lesson of AI that we all need to fathom. AI is not a program, a code base or a strategy. It is a giant, red tool box full of diverse tools, some new and some old. There are even thirty-year-old tools in there that were once useless because computers were just too slow; now that computers are insanely fast, these long-forgotten tools have newfound value. Each time engineers make use of AI to solve a real-world problem, they are consulting the giant red toolbox. Human ingenuity and Sears Craftsman are collaboratively creating a bespoke solution to a specific challenge, and the resulting system is good at One Particular Thing. Think of AI as a technique rather than a solution--like the scientific method. When you see a single company solve five problems and boast about their AI, assume five utterly separate, balkanized solutions that do not necessarily jump ship to solve new problems particularly well. In the end, the problem-solving magic is in the hands of the computer scientists, not the machines.
2. AI does not fail humanely
After several thousand years of lived experience, metaphysics and philosophy, we humans have excellent models of how humans fail. Greed, corruption, inattention, anger--we are well aware of the mechanisms that compromise our decision-making capabilities and turn our human achievements sour. But every AI-based solution that solves a formerly human problem does so in ways that are utterly alien to the ways humans work. And, as a result, their failures have nothing to do with our human shortcomings either. This is both refreshing and disconcerting. Refreshing because a human-computer team will have complementary strengths and weaknesses, and that can be an outstanding way to create balanced and diverse teams. Disconcerting because we will see autonomous AI systems failing in completely alien ways. It's as if a new Lottery system is coming down from the heavens, picking random fights with new technical solutions in ways that non-engineers will find entirely baffling. Remember the Tim Burton comedy Mars Attacks!? Happily, AI-driven drones are unlikely to pulverize peace doves; but when they make mistakes, we will have as little common ground with AI systems as we do with giant-brained Martians. Now, this is a good time to recall rule number one: AI is not singular. When AI systems do fail in unexpected ways, that doesn't give you any more insight into a future AI system's failings, because the technologies surrounding us are so very diverse that even their unpredictable shortcomings don't match.
3. AI does not equal SI
No matter how much quantitative intelligence we afford to outstandingly engineered AI systems, keep in mind that computational know-how has never correlated well with social intelligence. And then reconsider how much social intelligence is actually critical to our human way of life. Is driving, for instance, a quantitative act of perception, cognition and motion control? Or is it also a deeply social interaction, from eye contact and intention interpretation to complex social negotiation between drivers, passengers, pedestrians, bicyclists and many other social actors? AI researchers at all the major corporations will show you computer vision systems that can read your micro-expressions even before you are aware of your own emotional state. These AI systems quantify your facial musculature, turning affect into numerics at the speed of light. And yet. Once these numerics are done, we are right back to the world of alien AI, where numbers will drive interaction, and interaction will be right until it's wrong, and it will be wrong in ways that are alien to our understanding of fringe sociality. Today, millions of families will partake of a Christmas luncheon. Reflect, as you look around, on how important social intelligence is to how we all interact. AI will surround us, sure; but it will not replace our social intelligence anytime soon.