The field of artificial intelligence has been a boom market almost from its beginning 60 years ago with the brilliant but doomed British mathematician Alan Turing. One branch of AI believes that a computer will one day duplicate how the human brain works, once the technical difficulties are worked out. Turing was more clever than that. He only asked that a computer's responses convince a thinking mind. The famous Turing test was the cyber equivalent of, "If it quacks like a duck, it is a duck." But is this really true?
AI has taken us to the verge of an Orwellian dilemma, because the spectacular advantages offered by computers weigh so heavily, and create such enormous optimism, that it's easy to overlook one flaw: AI isn't based on the truth. Computers process information at lightning speed, and their abilities improve as the algorithms programmed into them become more sophisticated. Yet, without question, life isn't algorithmic, which means that no computer can ever truly be alive. Computers cannot and will never have minds.
Artificial intelligence isn't based on the truth.
This assertion runs contrary to every part of the AI worldview, which not only foresees mind-like computers but also, in one extreme fantasy, declares that a human being can be digitized -- every memory, along with a lifetime's worth of experience, placed into computer memory -- thus creating a viable afterlife. According to this fantasy, a digitized human being would be the equal of a living one. There would be no need for a body when life comes down to nothing but information.
Such a worldview is quite peculiar, and only in this anti-philosophical age could it even gain traction. The "quacks like a duck" standard for measuring the success of AI falls far short of the real thing. When you ask your computer to translate a page of German into English, a program can do it almost instantly. Does this mean that your computer knows German? Of course not. The artificial imitation of thinking isn't the real thing. Early translator programs did their job by matching words and phrases against a dictionary; today's do it by matching statistical patterns learned from millions of human translations. Someone who knows German does neither. Thinking requires a mind, period.
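To make the point concrete, here is a toy word-for-word translator of the dictionary-matching kind -- an invented sketch, not how any real translation system works. The glossary and sentences are my own illustration:

```python
# Toy word-for-word "translator": a hypothetical illustration only.
# It maps each German word to one English word, with no grasp of
# grammar, case or meaning.
GLOSSARY = {
    "der": "the", "den": "the",   # German case endings collapse to "the"
    "hund": "dog", "mann": "man", "beisst": "bites",
}

def translate(sentence: str) -> str:
    # Look each word up; pass unknown words through unchanged.
    return " ".join(GLOSSARY.get(word, word) for word in sentence.lower().split())

print(translate("Der Hund beisst den Mann"))  # the dog bites the man
print(translate("Den Mann beisst der Hund"))  # the man bites the dog
```

In German, both input sentences mean "the dog bites the man" -- the case endings, not the word order, say who bites whom. The lookup produces fluent-looking English that reverses the meaning of the second sentence: output without understanding.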
Of course, it's marvelous to have a German-English translator at your fingertips -- even the best are only roughly accurate -- but if life doesn't operate by algorithms, why should accuracy matter? A perfect German-English translator isn't theoretically impossible; the champions of AI need only wait for one to be developed. The challenge for anyone who sees a limit to AI is to devise a convincing theory of why those champions are on the wrong track to building a mind.
Here are five ways of seeing the limitations of computers as thinkers.
- Computers can calculate anything but understand nothing.
- Computers cannot truly create -- only recombine what humans create.
- Computers are strictly rational. A human mind owes its richness largely to non-rational aspects.
- Computers have no insight. They are immune to "aha" moments.
- Computers cannot relate to human existence at levels we most cherish -- love, beauty, truth.
None of these are metaphysical objections, although AI proponents like to tar any opposition to their worldview as "metaphysical" -- meaning other-worldly, impractical and useless. A truly metaphysical objection to AI would be quite radical: the mind isn't the same as the brain. Therefore, no matter how perfectly one duplicates the machinery of the brain -- replicating biology in hard-wired technology -- the end result would be a machine, not a mind. At the opposite extreme from that metaphysical argument, which is by no means settled, lies a mathematical argument that cyber types conveniently overlook. It revolves around the incompleteness theorems of the Austrian mathematician Kurt Gödel.
AI loves Turing, because he opened up limitless vistas for computers. AI doesn't love Gödel, because he set a limit on what a totally rational, logical system can know. Even to bring up Gödel's name opens you to attack if you aren't a professional mathematician. But his argument can be grasped by referring to everyday life.
No computer can ever truly be alive.
Let me be technical for a second, in a painless way. What Gödel found is that logical systems rich enough to express arithmetic have built-in limits (and computers are essentially logic machines). They contain statements that cannot be proven within the system -- hence, Gödel's notion of incompleteness. His first theorem says that incompleteness is the fate of any such logical system; there will never be one that proves everything. His second theorem says that if you are looking at a system from the inside, it might be a consistent world, but you won't be able to prove this as long as you stay inside the system. A blind spot is built in, because certain unprovable assumptions are part of every system.
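For readers who want the compressed textbook form, the two theorems are often stated roughly as follows, where F is a consistent formal system able to express basic arithmetic (I am omitting the careful qualifications a logician would insist on):

```latex
% First incompleteness theorem: some sentence G_F is undecidable in F --
% F proves neither it nor its negation.
\exists\, G_F \;:\; F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F

% Second incompleteness theorem: F cannot prove its own consistency.
F \nvdash \mathrm{Con}(F)
```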
If you want to escape these fatal flaws, you must find a way to step outside the system. Unfortunately, logic cannot transcend itself. A computer, unaided by programmers from the outside, will be trapped inside its pre-set limits. What makes us human, on the other hand, is that consciousness can go where logic can't. We transcend logic at any moment of love, inspiration, intuition, imagination and more. We make quantum leaps. We follow ridiculously indirect paths in life -- from the perspective of a logic machine -- and find ourselves wiser for the experience. A logical "mistake" is often the only way to arrive at a breakthrough, because we needed to have a certain fund of experience, neither right nor wrong, to probe the unknown.
Boiling it down, incompleteness undermines AI's vision of an ideal rationality, and with that goes the ideal thinking machine stripped of human frailty, inconsistency and illogicality. AI cannot escape one of Gödel's main points, that logical/mathematical systems include certain statements that are accepted as true but that cannot be proven. If we boldly take this point outside the realm of numbers, Gödel is saying that unprovable things are woven into our explanation of reality. Religionists make statements based on the assumption that God exists, although they can't prove it. Materialists make statements based on the assumption that consciousness can be ignored, which they, too, cannot prove.
Why do we keep living with these unprovable X factors? Several answers come to mind.
- Faith: We believe in certain things and that's good enough.
- Necessity: We have to make sense of the world, even if there are glitches along the way.
- Habit: The unprovable assumptions haven't bothered anybody so far, and therefore we've gotten into the habit of forgetting them.
- Conformity: A given system may be flawed, but everybody else uses it so I will, too. I want to belong.
Lump all of these reasons together and lesser mortals -- even lesser mortals working in AI -- wind up defending systems that have flaws they don't want to admit. But it's not just the Achilles heel in logic that plagues us. We are trapped by the implications of Gödel's second theorem, which holds that a logical system cannot prove its own consistency from the inside. Blindness is built in. I know that I am humanizing mathematics, which marks me as a total outsider, but systems engulf us at every turn -- systems of politics, religion, morality, gender, economics and, above all, materialism.
It's vital to know that you have been conditioned to accept these systems without regard for their unprovable assumptions. (Note that unprovable isn't the same as wrong -- I can't prove that my mother loved me, but it's still true.) Rationalists warn us against investing our hopes in childish things like God, the afterlife or the soul and then expecting them to be true. But I don't think spirituality came about from wishful thinking. It came about because the world's sages, saints and seers managed to escape the limitations of the logical system in which AI has put so much faith. Think of a computer that can detect a million shades of red. If you ask it which one is the nicest, it has nothing to say. "Nice" is outside its logic. Fortunately, life cannot be expressed in algorithms or explained with models.
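A small sketch -- hypothetical, like the million-shade detector itself -- makes the contrast plain. A program can rank shades of red by any quantity it can compute; "nicest" names no such quantity:

```python
# Hypothetical sketch: a machine ranking shades of red.
# Each shade is an (R, G, B) triple; here, 200 reds of increasing intensity.
reds = [(r, 40, 40) for r in range(56, 256)]

# Well-defined questions have answers: any measurable quantity can be ranked.
brightest = max(reds, key=lambda color: sum(color))
print(brightest)  # (255, 40, 40)

# "Which shade is the nicest?" defines no function over (R, G, B) values,
# so there is nothing here for the machine to compute.
```

The machine answers "brightest" instantly because brightness reduces to a number; "nicest" never does.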
Thinking requires a mind.
When Picasso invented Cubism, when Tolstoy imagined Anna Karenina jumping in front of the train, when Keats wrote the final draft of "Ode to a Nightingale" in a few frenzied minutes, turning a promising poem into a masterpiece, creativity made leaps that were not based on mixing and matching the ingredients of what came before. Logic didn't come into it.
Of all the weird contradictions that plague modern life, the strangest is the collapse of philosophy with the triumph of science. Aesthetics, morals, love, transcendence, idealism -- all of these fields of thought, having persisted for thousands of years, in the East and in the West, mean nothing in scientific terms because they cannot be reduced to data, measurements and experimentation.
AI attempts to defy this obvious truth with its belief that absolutely everything is reducible to data. Great benefits are sure to come out of AI -- technology shows no sign of slowing down and promises endless improvements. But while that's going on, human beings live their lives according to those banished, discredited things: aesthetics, morals, love, transcendence and ideals. They are what make life meaningful and fulfilled.
So what we need is a wholesale commitment to a meaningful life, placing our best hopes there, not in logic machines and their parody of having a mind.