American philosopher John Searle (born 1932) first published the argument and thought experiment now known as the Chinese Room Argument in his 1980 paper "Minds, Brains, and Programs," which appeared in Behavioral and Brain Sciences. It has since become one of the best-known philosophical arguments of recent decades. Similar arguments had been advanced earlier by Gottfried Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974), and Ned Block (1978), but it is Searle's version that has been the subject of extensive discussion. The cornerstone of Searle's argument is the Chinese room thought experiment.
What is that argument?
John Searle devised the Chinese room argument as a thought experiment. It is one of the best-known and most widely discussed arguments against strong claims of artificial intelligence (AI), that is, claims that computers think or at least can (or someday might) think.
In Searle's original presentation, the argument rests on two main claims: brains cause minds, and syntax is not sufficient for semantics. Its target is what Searle calls "strong AI." According to strong AI, Searle writes, "the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." Searle contrasts "strong AI" with "weak AI," the view that computers merely simulate thought.
Is it possible for a machine to be aware of its surroundings? To communicate in a language? Our "intelligent machine" must be able to comprehend the meanings of words and sentences: to understand, for example, the sentence "It will rain on Friday." And if it can do that, it can have beliefs ("I believe it will rain on Friday"). And if it can have beliefs, it may be able to have other mental states, such as hopes ("I hope it rains on Friday") and fears ("I fear it will rain on Friday"). But what does it take to understand the meanings of words?
The Chinese room thought experiment
Searle's thought experiment starts with a supposition: suppose that artificial intelligence research has produced a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters as output. Suppose, Searle says, that this computer performs its task so well that it easily passes the Turing test: it convinces an actual Chinese speaker that it, too, is a Chinese speaker. It gives correct answers to all of the person's questions, so that any Chinese speaker would believe they were conversing with another Chinese-speaking person.
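To make the purely syntactic character of the imagined program concrete, here is a minimal illustrative sketch (not Searle's program, and deliberately toy-sized): a rule table that maps input symbol strings to output symbol strings. The table contents and the function name are hypothetical; the point is only that every step consults the form of the symbols, never their meaning.

```python
# Toy illustration of the rule-following procedure in the Chinese room.
# The rule table is hypothetical and far too small for real conversation,
# but the principle is the same: every step is a purely formal (syntactic)
# match on symbol shapes, with no reference to what the symbols mean.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def chinese_room(input_symbols: str) -> str:
    """Return the output string paired with the input string.

    The lookup inspects only the shape of the input; no meaning is
    consulted anywhere, which is exactly the feature Searle exploits.
    """
    return RULE_BOOK.get(input_symbols, "对不起，我不明白。")  # fallback reply

if __name__ == "__main__":
    # Produces a fluent-looking reply without any "understanding" of Chinese.
    print(chinese_room("你好吗？"))
```

Searle's man in the room, described below, executes exactly this kind of lookup by hand: the same rules, followed step by step, with the same absence of comprehension.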
Searle wants to know whether the machine literally "understands" Chinese, or whether it is merely simulating the ability to understand Chinese. Searle calls the first position "strong AI" and the second "weak AI."
Searle then imagines himself in a closed room with enough paper, pencils, erasers, and filing cabinets, along with a book containing an English version of the computer program. He could receive Chinese characters through a slot in the door, process them according to the program's instructions, and return Chinese characters as output, all without understanding a word of written Chinese. If the computer passed the Turing test this way, Searle says, it stands to reason that he would pass it too, simply by running the program by hand.
Searle argues that his role in the experiment and the computer's are not meaningfully different. Each follows a program step by step, producing behavior that the user interprets as intelligent conversation. Yet Searle himself would not understand the conversation. So, he argues, it stands to reason that the computer would not understand it either.
Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking." And since the machine is not thinking, it does not have a "mind" in the usual sense of the word. He therefore concludes that the "strong AI" hypothesis is false.
Conclusion
In 1714, Gottfried Leibniz made a similar case against mechanism (the position that the mind is a machine and nothing more). In 1961, the Soviet cyberneticist Anatoly Dneprov made a very similar argument in the short story "The Game." In 1974, Lawrence Davis proposed duplicating the brain with telephone lines and offices staffed by people, and in 1978 Ned Block imagined a brain simulation involving the entire population of China. Searle's version appeared in 1980 as "Minds, Brains, and Programs" in the journal Behavioral and Brain Sciences. On Searle's view, we cannot describe what the machine is doing as "thinking" without "understanding" (or "intentionality"), and since it does not think, it does not have a "mind" in the traditional sense. On this argument, then, we cannot regard such machines as genuinely intelligent.