Artificial Intelligence has been around for sixty years, and over that time it has seen many ups and downs, mostly downs. The “AI winters”, as they are commonly known, were caused by the insurmountable obstacles that declarative programming presented when building the knowledge base of an intelligent system. Hand-coding a complete description of the world proved to be an impossible task. Systems were limited by the “knowledge” that could be coded into them, and therefore unable to cope with the unexpected. And then, sometime around the late 1990s, the unexpected happened: a mostly discredited notion in AI, that one should emulate the human brain and its neural intricacies, came back into vogue. This resurrection of old ideas coincided with a steep drop in the cost of parallel processing, thanks to the Graphics Processing Units (GPUs) developed for video game graphics. The AI everyone is talking about is this new kind of brain-like AI, where you do not code the world; instead, you teach the machine how to learn about the world.
Brain-like AI, in the guise of “machine learning” or “deep learning” systems, is the heart of the debate on the future of work and humans, as intelligent machines begin to take over many cognitive tasks that used to require human intelligence. This new technology is truly unlike anything else we have ever invented. As we ponder the future, and how we will live and collaborate with these machines, it is important to identify what is particular and unique about AI. I would like to suggest that there are three unique characteristics of AI that we should consider: self-improvement, prescience, and autonomy.
Let’s look first at AI’s capacity for self-improvement. No other technological artefact has ever had this ability. The performance of every machine humans created in the past was determined in advance; wear and tear degraded that performance over time, and regular human maintenance was needed just to keep it level. Improvements occurred only when humans replaced parts of the machine or produced new, improved versions. In AI we have machines that do not need humans in order to improve their performance. With machine learning, AI systems become better every time they ingest and process a new set of data.
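To make the idea of self-improvement concrete, here is a minimal sketch of online learning: a toy linear model whose parameters improve with every data point it ingests, without any human adjusting it. The model, data, and learning rate are all illustrative inventions, not a real AI system.

```python
# A toy illustration of "self-improvement": an online learner whose
# error shrinks each time it processes the data.
# All names and numbers here are illustrative.

def train_step(w, b, x, y, lr=0.1):
    """One gradient-descent update for a 1-D linear model y ~ w*x + b."""
    pred = w * x + b
    err = pred - y
    return w - lr * err * x, b - lr * err

# Ground truth the machine must discover on its own: y = 2x + 1
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

w, b = 0.0, 0.0
for epoch in range(200):            # each pass over the data refines the model
    for x, y in data:
        w, b = train_step(w, b, x, y)

print(round(w, 2), round(b, 2))     # approaches 2 and 1 with no human tuning
```

The point of the sketch is the loop itself: performance improves purely as a function of data processed, which is exactly what no pre-AI machine could do.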
The second unique characteristic of AI is prescience: its ability to predict. This ability is often based on mathematical approaches that predate the hardware capable of executing them. But the fact that these predictive algorithms have become executable by machines is a great achievement in itself. More sophisticated approaches, such as reinforcement learning combined with convolutional neural networks, are delivering systems capable of strategizing in complex situations. Just think of what AlphaGo achieved a few months ago, beating a highly skilled and experienced human at the most difficult game of strategy ever invented. Prescience is the prerequisite for strategy. In the biological world, only highly advanced predators are capable of strategies that require prediction. AI’s prescience furnishes this technology with the ability to adapt its behaviour to unexpected events in pursuit of a final goal. No other technology is capable of such outcomes.
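The link between prediction and strategy can be sketched in a few lines with the game of Nim: by looking ahead at how each move plays out, the machine chooses the move that leaves its opponent in a losing position. (AlphaGo’s search is vastly more sophisticated; this toy example only illustrates the principle that foresight enables strategy.)

```python
# A toy illustration of "prescience as the prerequisite for strategy":
# lookahead search in Nim (take 1-3 sticks; whoever takes the last stick wins).

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(sticks):
    # With no sticks left, the player to move has already lost.
    if sticks == 0:
        return False
    # A position is winning if some move leaves the opponent in a losing one.
    return any(not wins(sticks - k) for k in range(1, min(3, sticks) + 1))

def best_move(sticks):
    """Look ahead and pick a move that leaves the opponent losing, if any."""
    for k in range(1, min(3, sticks) + 1):
        if not wins(sticks - k):
            return k
    return 1  # no winning move exists: play on and hope for a mistake

print(best_move(21))  # -> 1, leaving 20 (a multiple of 4) for the opponent
```

The program never reacts to the present alone; every choice is driven by a prediction of future states, which is the essence of the strategic behaviour described above.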
Finally, and not least because of self-improvement and prescience, AI is also capable of autonomy. Autonomy means that a system can take decisions about its future actions based on internal states that change in response to perceived sensory data. This makes AI systems similar to biological systems. The military is already testing a number of “Lethal Autonomous Weapon Systems” (LAWS) that can perform complex combat missions without human guidance. Many of the AI systems currently in use require a human in the loop, mostly because supervised learning depends on clean, labelled data sets. But AI systems are becoming increasingly independent. Soon they will be able to explore the world for themselves, purely out of “curiosity”.
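The sense-decide-act loop behind autonomy, including its “curiosity”, can be sketched as follows: an agent in a small grid world keeps an internal state (what it has seen), perceives its surroundings, and chooses its next action by preferring the least-visited cell. The grid world and the curiosity rule are invented for illustration only.

```python
# A minimal sketch of an autonomous agent: internal state changes with
# perceived sensory data, and actions are chosen by "curiosity",
# i.e. a preference for the least-explored option. Purely illustrative.

import random

GRID = 5  # a 5x5 world

def neighbours(pos):
    x, y = pos
    moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in moves if 0 <= a < GRID and 0 <= b < GRID]

def explore(steps=200, seed=0):
    random.seed(seed)
    pos = (0, 0)
    visits = {pos: 1}                      # internal state: what it has seen
    for _ in range(steps):
        options = neighbours(pos)          # sense the surroundings
        least = min(visits.get(p, 0) for p in options)
        pos = random.choice([p for p in options
                             if visits.get(p, 0) == least])  # act on curiosity
        visits[pos] = visits.get(pos, 0) + 1
    return visits

coverage = len(explore())
print(coverage)  # the agent discovers most of the grid with no human guidance
```

No human tells the agent where to go: the decision rule plus the evolving internal state is enough to produce goal-free exploration of the kind the paragraph above anticipates.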
In combination, the three characteristics of AI point to a logical eventuality: a system that can learn, and therefore self-improve, and that is also prescient will eventually maximize its autonomy. And this is why it is absolutely necessary to think about AI ethics now. The autonomous, intelligent machines of the future must have a code of ethics that limits the autonomy of their decisions.