Alan Turing (1912–1954), often regarded as the pioneer of AI, posed the following question in his 1950 paper Computing Machinery and Intelligence:

"Can machines think?"

Despite its brevity and its ancient philosophical roots, this question remains a topic of debate, sitting at the intersection of technology, philosophy, neuroscience, and theology.

In a similar vein, the philosopher Alfred Ayer addressed the standard philosophical problem of other minds in 1936: how can we know that other people have the same conscious experiences as we do? In his book Language, Truth, and Logic, Ayer proposed a protocol to distinguish between a conscious man and an unconscious machine.

Can machines think?

In 1950, Alan Turing proposed to consider the question "Can machines think?" Rather than asking whether a machine "thinks", he reframed the problem as whether a machine can act intelligently: the only thing that matters is how the machine behaves. It does not matter whether the machine is conscious, has a mind, or whether its intelligence is merely a "simulation" rather than "the real thing." As Turing pointed out, we cannot know these things about other people either, yet we take it for granted that they are "thinking." This idea is at the heart of the Turing test.

What is the Turing test?

The Turing test (originally called the imitation game) assesses how convincingly a machine can behave like a person. Turing proposed that a human evaluator judge natural-language conversations between a human and a machine designed to produce human-like responses. The evaluator knows that one of the two conversation partners is a machine, and all participants are kept apart from one another. The conversation is restricted to a text-only channel, such as a keyboard and screen, so the outcome does not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine has passed the test. The result does not depend on the machine answering questions correctly, only on how closely its answers resemble those a person would give.
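To make this text-only setup concrete, here is a minimal sketch in Python of how such a session might be wired up. Everything in it (the Respondent class, run_session, the canned replies) is invented purely for illustration and does not come from Turing's paper or from any standard library.

```python
import random

class Respondent:
    """Either a human at a keyboard or a chat program; the judge sees only text."""
    def __init__(self, name, reply_fn, is_machine):
        self.name = name            # hidden from the judge
        self.reply_fn = reply_fn    # maps a question string to an answer string
        self.is_machine = is_machine

    def reply(self, question):
        return self.reply_fn(question)

def run_session(judge_questions, respondent_1, respondent_2):
    """Run one short, text-only session and return anonymised transcripts."""
    # Randomise which respondent gets which label, so the labels X and Y
    # carry no information about who is the machine.
    labelled = {"X": respondent_1, "Y": respondent_2}
    if random.random() < 0.5:
        labelled = {"X": respondent_2, "Y": respondent_1}
    transcripts = {label: [(q, r.reply(q)) for q in judge_questions]
                   for label, r in labelled.items()}
    return labelled, transcripts

# Example usage with two stand-in respondents and a single question.
human = Respondent("Alice", lambda q: "Shingled; about nine inches at the longest.", False)
machine = Respondent("Bot", lambda q: "I would rather not discuss that.", True)
labels, logs = run_session(["What is the length of your hair?"], human, machine)
print(logs["X"], logs["Y"])  # the judge reads only these and must guess which is the machine
```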

What is the imitation game?

Consider a man (A), a woman (B), and a neutral interrogator (C). Each of them is in a separate room with no windows or any other way to see the others. C can communicate with A and B only through a screen and a keyboard.

The game is one of identification: the interrogator, C, must work out which of the two is the man (A) and which is the woman (B). C does this by putting questions to each of them. For example, C might send a text message to A (without knowing that A is the man):

"What is the length of your hair?"

A's goal is to make the interrogator fail by convincing C that he is the woman. He might, for example, answer the question above with "My hair is shingled, and the longest strands are about nine inches long," or something even vaguer.

What will happen if a machine plays A in this game?

In Turing's variation, the player in role A may be either a person or a computer. The conversation lasts only a short time, but it is otherwise completely unrestricted: the interrogator may ask about anything.

If, after this conversation, the interrogator cannot tell whether they were talking to a human or a machine, or, better still, declares that the player they were talking to was a human when it was in fact a machine, then the machine has passed the Turing test.
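As a toy illustration of this pass criterion, the snippet below tallies judges' verdicts over many such conversations. The 30% threshold is only a nod to Turing's own prediction that, by about the year 2000, an average interrogator would have no more than a 70% chance of making the right identification after five minutes of questioning; it is an assumption for this sketch, not part of the test's definition.

```python
def machine_passes(verdicts, threshold=0.3):
    """verdicts: list of booleans, True where a judge took the machine for the human.

    Returns True if the machine fooled at least `threshold` of the judges.
    The threshold is an illustrative assumption, not part of Turing's definition.
    """
    fooled_rate = sum(verdicts) / len(verdicts)
    return fooled_rate >= threshold

# Example: 4 of 10 judges mistook the machine for the human.
verdicts = [True, False, True, False, True, False, True, False, False, False]
print(machine_passes(verdicts))  # True: 40% of judges were fooled
```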

Milestones

  • Joseph Weizenbaum developed a program that appeared to pass the Turing test in 1966. The program, dubbed ELIZA, worked by searching for keywords in users' typed comments (a simplified sketch of this approach appears after this list).
  • In 1972, Kenneth Colby invented PARRY, a program dubbed "ELIZA with attitude." It aimed to model the behaviour of a paranoid schizophrenic using a method comparable to (but more advanced than) that used by Weizenbaum.
  • In his 1980 article Minds, Brains, and Programs, John Searle suggested the "Chinese room" thought experiment and stated that the Turing test could not assess whether a machine can think.
  • In November 1991, the first Loebner Prize competition was held, establishing an annual venue for practical Turing tests.
  • That first competition sparked renewed debate, in both the popular press and academia, over the Turing test's practicality and the value of pursuing it.
  • A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) won the Loebner Prize bronze award three times (in 2000, 2001, and 2004).
  • In 2005 and 2006, the learning chatbot Jabberwacky was the winner.
  • In 2022, researchers from USC Viterbi collaborated with a DARPA-funded multi-institution team to define how future machines, like humans and other animals, might keep learning throughout their lifetimes.
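As a rough sketch of the keyword-scanning idea behind ELIZA (the first milestone above), the snippet below matches a few hard-coded patterns in typed input and fills canned response templates. The real 1966 program used ranked keywords and decomposition/reassembly rules defined in a script (most famously the DOCTOR script); this is only a simplified illustration, and the rules here are made up.

```python
import re

# A highly simplified, ELIZA-style keyword matcher: scan the user's typed
# comment for a known pattern and echo part of it back in a canned template.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
]

def eliza_reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no keyword matches

print(eliza_reply("I am feeling anxious about my exams"))
# -> "How long have you been feeling anxious about my exams?"
```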

Conclusion

The Turing test does not directly address whether a machine is intelligent. It merely determines whether the computer behaves like a human. Since human behaviour and intelligent behaviour are not synonymous, the test may fall short of effectively measuring intelligence in two ways:

  • "Some human behaviour is unintelligent."
  • "Some intelligent behaviour is inhuman."

In his 2018 book Architects of Intelligence, the futurist Martin Ford interviewed 23 renowned AI specialists and asked them to forecast when artificial general intelligence (AGI) would appear. Of the 18 responses he received, the average estimate was the year 2099.

It's also unclear when AI will be able to pass the Turing test conclusively. However, if it does occur, it will almost certainly predate the emergence of AGI.
