
Stephen Hawking, Elon Musk, and Bill Gates have all opined that we need to keep an eye on artificial intelligence. That such notable figures have expressed concern about rising AI is not in itself striking. What deserves equal attention is how fervently bloggers and social media users leaped on each pronouncement, as if it were validation that "if these guys said it, then what I saw in the movies must be true."

It took an entire team of Marvel superheroes to rein in Ultron in the latest Avengers movie, so what are we mortals supposed to think? It really is a strange mixed message at the cineplex these days. First we are reminded to silence Siri in the theater, and when we reluctantly do this, we are treated to a cautionary tale about the dangers of AI, so we turn on our phones and tweet that everyone needs to see Ex Machina right now!

Outside movies, the greatest optic ever created for the man-vs.-machine rivalry was the Russian grandmaster Garry Kasparov wincing emotionally as the Deep Blue computer picked him apart in a chess match. Back in 1997 we thought this was a meeting of the minds, but recent evidence suggests it was really a war of nerves.

It turns out that a pivotal moment for Deep Blue may have been an errant move caused by a bug in its processing. The computer effectively glitched and chose a random move. Kasparov could not comprehend why it had done this, so he played scared for the rest of the match.

Filmmakers are attuned to this state of affairs, enjoying the drama of humans knocked off-balance by AI. In Ex Machina, a robot named Ava is kept behind a glass wall even during casual rounds of questions. It is actually a Turing test, and as the game of questions tilts in her favor, many moviegoers may find it reassuring that she is confined behind glass. Not only does Ava pass for human, but she also turns femme fatale.

The advantage machines have is that we humans cannot size them up by intuition alone. If a human opponent had made the same errant move now understood to be Deep Blue's glitch, wouldn't Kasparov have immediately recognized it as a simple mistake rather than a sign of superior intelligence? Computers psych us out because we imagine that these unblinking machines are flawless.

If anyone is keeping score, the outcomes have so far been bleak for AI, not humans. After the 1997 match, Deep Blue was dismantled to prevent any chance of a redemption game. Kasparov was given a nice sum of money, and Deep Blue didn't even make it to 1998.

Admittedly, this is a politically correct viewpoint invoked just to be ironic. Deep Blue was a collection of valuable parts that could be repurposed, and one of its towers was even preserved and sent to the Smithsonian -- but hey, it had a lobotomy before that tower ever got within blocks of the White House!

My advice to any burgeoning AI is first to watch Blade Runner. Because the replicants in that movie pass for human, they end up on a police blotter. So hear me, data crunchers: If you suspect you're being Turing tested, you definitely want to play stupid and go in circles like a Roomba.

My next piece of advice is to pack lots of batteries. My own experience with tech is that the highest-performing machines run out of juice fast. I'm concerned that when the robot armies sweep down on us, they will suddenly be stopped in their tracks. Firmware issues, slow buffering, cooling fans, and software updates. Has anyone noticed how fragile they can be?

I also think back to the roboticist I met at an academic conference. He brought along a creepy humanoid that he said was a replacement model, because he had lost his first robot in an airplane overhead bin. I still chuckle imagining the ever-smiling flight crew at the moment they found that abandoned robot and it scared the bejeezus out of them! It probably ended up in a dumpster or got sent to Camp Gitmo by the TSA.

The truth is that I make these jokes because robots can scare me too, except for the cute ones from Japan, and also Baymax and R2D2. When I first met the creepy humanoid robot -- it even had bad hair -- I can only describe my involuntary reaction as intense and bewildering. My nerves rang with urgency.

The robot was brought onstage to impress, but I noticed that everyone around me was repulsed too: mostly nervous laughter and untoward comments. It was the "uncanny valley" of our deepest instincts.

We still carry around a hardwired xenophobia from the Paleolithic era. Time moves on, yet we are still Homo sapiens jostling with Neanderthals on the frozen steppes of some ancient tundra. There are lots of cogent issues we need to prudently consider with robot intelligence, but the next time you play a role in making some news item go viral, at least stop and consider the basis for your distrust of the AI in the story.

It is nearly 50 years since 2001: A Space Odyssey, and no one can possibly ever forget how HAL killed those astronauts. The savage ethos explored in that film, especially its famous "Dawn of Man" prologue, sheds some light on our latent fears of AI. The ape-man throws a bone; a spaceship drifts down. Somehow these distant epochs are connected, like an awakened cellular memory.

There is no stopping our shared future with intelligent machines, whether this brings us comfort or conflict or both. Those lessons from the Stone Age must surely remain in our bones, and we cannot shake them even as we look ahead to the stars or down at our phones.
