Can Machines Think? The Curious Case of 'Eugene Goostman,' Part I

So, over this past weekend, a computer running a program that styles itself "Eugene Goostman" is said to have successfully passed the Turing Test. Did that really happen? And, if so, what does it even mean? As the late, lamented John Belushi mused in the movie Continental Divide, "Am I pleased or frightened?"

Let's start with the easy part: the Turing Test itself.

In that connection, the first half of this blog's heading (and, in fact, the first half of this blog) is drawn from an article written by British mathematician and unsung hero of World War II Alan Turing. That's not the actual title of the article, although, to all intents and purposes, it might as well be. Few today will recall that Turing endowed his 1950 essay for the journal Mind with the rather clunky caption "Computing Machinery and Intelligence," but everyone working in the field of artificial intelligence will remember its signature first sentence -- to wit:

"I propose to consider the question 'Can machines think?'"

And with this, he was off to the races.

The Test According to Turing

Discarding the original formulation of his own question as too vague to permit a definitive answer, Turing proposed to replace it with one "closely related to it and expressed in relatively unambiguous words." This substitute question (paraphrasing now) is:

Can a machine pass what has come to be known as the "Turing Test"?

Turing himself didn't call it that, of course. To him it was just "the imitation game," which he explained by the following example:

Take a man, a woman and a human judge of either gender, and place each of them in a separate room. The object of the game is for the judge to guess which of the other rooms holds the woman and which the man. To help with that, the judge has been provided with two teletype terminals (this is 1950, after all) that communicate with the other two rooms. The judge may use the teletypes to ask questions, make statements, offer insults or compliments, generally do anything with language that one can do over such an admittedly narrow channel, and then observe the responses. Oh, did I mention that, by the rules of the game, the man and the woman will each try to convince the judge that he or she is in fact the woman?

Having established the basic structure of this imitation game, Turing then makes the following switch: for the woman, substitute a human of either gender, and for the man pretending to be a woman, substitute a computer pretending to be a human. The judge's task now becomes to identify which of the two terminals has the real human behind it, and which the machine.
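To make the mechanics concrete, here is a minimal sketch in Python (my own illustration; nothing like it appears in Turing's paper) of the post-switch game as a protocol: the judge fires questions down two anonymous teletype channels, collects the transcripts, and then names the terminal it believes conceals the machine. The function and parameter names (imitation_game, ask, guess and so on) are hypothetical stand-ins.

    import random

    def imitation_game(ask, guess, human, machine, num_questions=5):
        """One round of Turing's (post-switch) imitation game.

        ask(transcripts) returns the judge's next question; human(q) and
        machine(q) return the two players' answers; guess(transcripts)
        returns "A" or "B", the terminal the judge thinks hides the machine.
        Returns True if the judge identifies the machine correctly.
        """
        players = {"A": human, "B": machine}
        if random.random() < 0.5:                     # hide which terminal is which
            players = {"A": machine, "B": human}

        transcripts = {"A": [], "B": []}
        for _ in range(num_questions):
            for label, respond in players.items():
                question = ask(transcripts)
                transcripts[label].append((question, respond(question)))

        return players[guess(transcripts)] is machine

    # Toy stand-ins, just to make a round runnable:
    ask = lambda t: "What's your favorite color?"
    human = lambda q: "Blue, I suppose."
    machine = lambda q: "Blue, I suppose."            # a perfect imitator
    judge = lambda t: random.choice(["A", "B"])       # this judge can only flip a coin
    print("Judge caught the machine:", imitation_game(ask, judge, human, machine))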

Turing then goes into a rather lengthy description of what he has in mind by a "computer," a novel topic back when he was writing, rather old news today. But finally, he comes to the point:

I believe that in about 50 years' time it will be possible to program computers... to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.

Keep that in mind for later: 70 percent, five minutes.
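Put differently, Turing's bar is statistical: run a batch of five-minute interrogations, and the machine clears it if the average judge does no better than 70 percent at picking the machine out. A back-of-the-envelope check, with numbers I've made up purely for illustration, might look like this:

    def passes_turing_bar(correct_ids, total_sessions, bar=0.70):
        """Turing's yardstick: the machine clears the bar if judges make the
        right identification in no more than 70 percent of five-minute sessions."""
        return correct_ids / total_sessions <= bar

    # Made-up numbers: judges guessed right in 20 of 30 sessions (about 67 percent).
    print(passes_turing_bar(20, 30))   # True: under the 70 percent line, so the machine "passes"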

But meanwhile, a couple of things to note about the test before we move on. First off (and this has always struck me as somewhat odd), Turing has effectively reduced the criterion for human-level intelligence to human-level skill at using language. So, never mind if IBM's Deep Blue can trounce Garry Kasparov at chess -- unless it can also talk a good game, it's out of the running as far as Alan is concerned!

Now, admittedly, most, if not all, of the other challenges that might seem to call for human-type smarts could conceivably be recast in terms of language use. One could envision, for instance, all the moves of an entire Grand Master chess tournament being expressed in such a fashion. Turing's own field of mathematical proofs, though, would seem to be less susceptible to such an approach. Does this mean something?

A second, separate concern is that the community of artificial intelligence researchers is by no means unanimous in its acceptance of the Turing Test as a benchmark for success in its endeavor. All the more so since the test itself got some serious bucks behind it. In 1990, Hugh Loebner took out a mortgage on his house to set up the "Loebner Prize": $100,000 plus a solid gold medal for the first computer program to pass a "full" (i.e., unrestricted by subject matter) Turing Test.

The depth of the animus this has called forth may be gauged by the fact that Marvin Minsky, MIT's so-called "father of artificial intelligence," has gone so far as to brand the Loebner Prize "obnoxious and stupid," and to offer a prize (smaller by several orders of magnitude) to anyone who can convince Loebner to rescind his prize.

Be all that as it may, it's been more than 50 years from 1950 to 2014. More like 64 years -- which is, however, a nice, round (binary) number (and, coincidentally, the 60th anniversary of Alan Turing's apparent suicide by cyanide poisoning).

And now, finally, as of Saturday, June 7th, with the backing of the Royal Society in London, we have a winner... or do we?

Stay tuned for my blow-by-blow as regards l'affaire Eugene Goostman.
