Turing's Bar: The Curious Case of 'Eugene Goostman,' Part I.5

No, it's not a hot new AI nightspot. The "Turing's Bar" of the title is rather a reference to a comment I made on LinkedIn's "Turing Test breakthrough" discussion thread.

In that comment, I characterized a belief Alan Turing expressed in his 1950 article "Computing Machinery and Intelligence," namely that "in about fifty years' time it will be possible to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning," as having set a "bar" against which future progress in artificial intelligence might be measured.

Dr. Gary Pack at U Wisconsin/Madison took issue with that characterization. Referring to the fifty years/five minutes/seventy percent combo criterion explored in my previous blog post on the topic, Gary says, "That is speculation about the future, not a condition or a threshold for passing any test."

Accordingly, I thought I'd better clarify:

... but reading on (in the very next two sentences after the one in which he "set the bar", in fact), we find Turing saying: "The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

Coupling that with the passage you quote, we can derive the following argument:

1. Turing speculates that, in fifty years' time (i.e., the year 2000, given that he is writing in 1950), a machine will be capable of playing the imitation game well enough that after five minutes of questioning it will have a 30 percent chance of being judged a human.

2. He then says that "at the end of the century" (also the year 2000), we'll be able to talk about machines thinking without fear of contradiction.

3. What will justify us in making such claims on behalf of machines by the year 2000 (premise 2)? In the context of the article, such justification can only derive from the fulfilment of the prediction in premise 1.

4. Putting the two together, I don't think it's much of a stretch to paraphrase Turing as follows: "I predict that by the year 2000 we'll acknowledge that machines can think, assuming that, also by the year 2000, a machine will be capable of successfully passing itself off as a human in the imitation game not less than 30 percent of the time."
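The flip from Turing's "70 per cent chance of making the right identification" to my "30 percent chance of being judged a human" is just complement arithmetic, which can be sketched as follows (the function name, the threshold parameter, and the 0.33 figure widely reported for Eugene Goostman's 2014 run are my own illustrative choices, not anything in Turing's text):

```python
# Turing's criterion, restated numerically: if an average interrogator
# has at most a 70% chance of correctly identifying the machine after
# five minutes of questioning, then the machine is judged human in at
# least 30% of games.
P_CORRECT_IDENTIFICATION = 0.70                      # interrogator's side
P_JUDGED_HUMAN = 1.0 - P_CORRECT_IDENTIFICATION      # machine's side (~0.30)

def meets_turing_bar(pass_rate: float, threshold: float = 0.30) -> bool:
    """Clear Turing's 'bar' if the machine is judged human in at
    least `threshold` of five-minute imitation games."""
    return pass_rate >= threshold

# Illustrative only: 0.33 is the pass rate reported for Eugene Goostman
# at the 2014 Royal Society event that prompted this discussion.
print(meets_turing_bar(0.33))   # True
print(meets_turing_bar(0.25))   # False
```

Nothing here is deep, of course; the point is only that the two phrasings pick out the same numerical criterion.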

Call it a bar, or a criterion, or a threshold, or what-have-you: Turing was not simply speculating about what the future might hold; he was also pointing out how his envisioned eventuality would bear on the question that motivated him to write his article in the first place!

So, paraphrasing again: "Can machines think? Well, by the year 2000 they'll be smart enough to win a five-minute imitation game three times out of ten, and that ought to be good enough to convince us they're thinking."

Summing up in his section entitled "Learning Machines," Turing says (direct quote this time): "The only really satisfactory support that can be given for the view expressed at the beginning of section 6, will be that provided by waiting for the end of the century and then doing the experiment described."

The "view expressed at the beginning of section 6" is, of course, that we will alter our conceptions so as to accommodate the proposition that machines can think, and the "experiment described" is simply the imitation game, with the added proviso that a successful experimental outcome would be the aforementioned 30 percent success rate in a five-minute game.

(Please note, I'm only trying to clarify the nature of Turing's claims here; I'm not buying into them.)

And, having gone to the trouble to make my point clearer, I thought I'd better share it with the universe! :)

The final caveat bears repeating: I'm no particular fan of the Turing Test, and certainly not of its current grandstanding instantiations in the Loebner Prize and others of its ilk. My views on what I described in my new technothriller Dualism with the oxymoronic phrase "real artificial intelligence" are amply summed up in my characterization of "Nietzsche" -- that book's resident AI.

In my next formal post on this topic, you'll see why I'm not a Eugene fanboy.