Why the Chinese Room Argument is Flawed

This text deals with arguments against the possibility of so-called strong artificial intelligence, with a particular focus on the Chinese Room Argument devised by the philosopher John Searle. We start with a description of the thesis that Searle wants to disprove. Then we describe Searle’s arguments. Subsequently, we look at some objections to Searle by other influential philosophers. Finally, I conclude with my own objection, which introduces a more accurate definition of strong artificial intelligence that renders Searle’s arguments inapplicable. Along the way, we will dispel some common misconceptions about artificial intelligence.

Searle’s Argument

The Semantic Argument

In his essay Can Computers Think? [11], Searle gives his own definition of strong artificial intelligence, which he subsequently tries to refute. His definition is as follows:

One could summarise this view […] by saying that the mind is to the brain, as the program is to the computer hardware.

Searle’s first attempt at refuting the possibility of strong artificial intelligence is based on the insight that mental states have, by definition, a certain semantic content or meaning. Programs, on the other hand, are purely formal and syntactical, i.e. a sequence of symbols that do not have a meaning in themselves. Therefore, a program could not be equivalent to a mind. A formal reconstruction of this argument looks as follows:

  • Syntax is not sufficient for semantics
  • Programs are completely characterized by their formal, syntactical structure
  • Human minds have semantic contents
  • Therefore, programs are not sufficient for creating a mind

Searle emphasizes that his argument rests solely on the fact that programs are defined formally, regardless of which physical system is used to run them. The argument therefore does not merely claim that we are unable to create a strong artificial intelligence today; it claims that this is impossible in principle for any conceivable machine in the future, no matter how fast it is or which other properties it might have.

The Chinese Room Argument

In order to make his first premise more plausible (“Syntax is not sufficient for semantics”), Searle describes a thought experiment – the Chinese Room. Assume there were a program that is capable of answering Chinese questions in Chinese. No matter which question you pose in Chinese, it gives you an appropriate answer that a human Chinese speaker might also give. Searle now tries to argue that a computer running this program doesn’t actually understand Chinese in the same sense as a Chinese human being understands Chinese.

To this end, he assumes that the formal instructions of the program are carried out by a person who does not understand Chinese. This person is locked in a room, and the Chinese questions are passed into the room as sequences of symbols. The room contains baskets with many other Chinese symbols, along with a list of formal instructions – purely syntactical rules that tell the person how to produce an answer to the question by assembling the symbols from the baskets. The answers generated by these instructions are then passed out of the room by the person. The person is not aware that the symbols passed into the room are questions and that the symbols passed out of the room are answers to these questions. He simply carries out the instructions, strictly and correctly. And these instructions generate meaningful Chinese sentences – answers that could not be distinguished from those a real Chinese speaker would give.
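The purely syntactic character of the room can be sketched in a few lines of code. The rule book below is a made-up toy of my own (a real conversational program would need enormously more rules); the point is only that the lookup never consults the meaning of the symbols.

```python
# Toy sketch of the Chinese Room: the "rule book" maps input symbol
# strings to output symbol strings. The rules are invented for
# illustration; nothing in them refers to what the symbols mean.
RULE_BOOK = {
    "你好吗?": "我很好。",
    "你住在哪里?": "我住在房间里。",
}

def follow_rules(symbols: str) -> str:
    """Mechanically apply the rule book, exactly like the person in the
    room: match the incoming squiggles, emit the listed squoggles."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback symbol string

print(follow_rules("你好吗?"))  # an appropriate answer, produced without understanding
```

Whether the answers are fluent or clumsy depends only on how large the table of rules is; the mechanism stays blind to meaning either way.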

Now Searle draws attention to the fact that the person in the room does not come to understand Chinese simply by following formal instructions for generating answers. He continues to argue that a computer running a program that generates Chinese answers to Chinese questions therefore does not understand Chinese either. Since the experiment can be generalized to arbitrary tasks, Searle concludes that computers are inherently incapable of understanding anything.

Replies to the Chinese Room Argument

There are numerous objections to the Chinese Room Argument by various authors, many of them similar in nature. In the following, I will present the most common ones, including Searle’s answers to these objections.

The Systems Reply

One of the most commonly raised objections is that even though the person in the Chinese Room does not understand Chinese, the system as a whole does – the room with all its constituents, including the person. This objection is called the Systems Reply, and there are various versions of it.

For example, the artificial intelligence researcher, entrepreneur and author Ray Kurzweil argues in [5] that the person is only an executive unit whose properties are not to be confused with the properties of the system. If one looks at the room as an overall system, the fact that the person does not understand Chinese does not entail that the same holds for the room.

Cognitive scientist Margaret Boden argues in [1] that the human brain is not the carrier of intelligence, but rather that it causes intelligence. Analogously, the person in the room causes an understanding of Chinese to arise, even though it does not understand Chinese itself.

Searle responds to the Systems Reply with the semantic argument: even the system as a whole cannot get from syntax to semantics and hence cannot understand the meaning of the Chinese symbols. In [9], he adds that the person in the room could in principle memorize all the formal rules and perform all the computations in his head. Then, he argues, the person is the entire system; he could answer Chinese questions without help and perhaps even hold Chinese conversations, but he still would not understand Chinese, since he only carries out formal rules and cannot associate a meaning with the formal symbols.

The Virtual Mind Reply

Similar to the Systems Reply, the Virtual Mind Reply states that while the person does not understand Chinese, the running system could create new entities that differ from both the person and the system as a whole – and the understanding of Chinese could reside in such a new entity. This standpoint is argued for by artificial intelligence researcher Marvin Minsky in [15] and philosopher Tim Maudlin in [6]. Maudlin notes that Searle has so far not provided an adequate answer to this reply.

The Robot Reply

Another reply changes the thought experiment in such a way that the program is put into a robot that can perceive the world through sensors (like cameras or microphones) and interact with the world via effectors (like motors or loudspeakers). This causal interaction with the environment, the argument goes, guarantees that the robot understands Chinese, since the formal symbols are thereby endowed with semantics: they come to stand for objects in the real world. This view presupposes an externalist semantics. The reply is raised, for example, by Margaret Boden in [1].

Searle answers this argument in [17] with the semantic argument: the robot still only has a computer as its brain and cannot get from syntax to semantics. He makes this more plausible by adapting the thought experiment so that the Chinese Room itself is integrated into a robot as its central processing unit. The Chinese symbols would then be generated by sensors and passed into the room; analogously, the symbols passed out of the room would control the effectors. Even though the robot interacts with the external world this way, the person in the room still does not understand the meaning of the symbols.

The Brain Simulator Reply

Some authors, e.g. the philosophers Patricia and Paul Churchland in [2], suggest imagining that, instead of manipulating the Chinese symbols, the computer simulates the neuronal firings in the brain of a Chinese speaker. Since the computer then operates in exactly the same way as a brain, the argument goes, it must understand Chinese.

Searle answers this argument in [10]: one could also simulate the neuronal structures by a system of water pipes and valves and put it into the Chinese Room. The person in the room would then have instructions on how to guide the water through the pipes in order to simulate the brain of a Chinese speaker. Still, he says, no understanding of Chinese is generated.

The Emergence Reply

Now I present my own reply, which I call the Emergence Reply.

I grant that Searle’s arguments prove that a mind cannot be equated with a computer program. This follows immediately from the semantic argument: since a mind has properties that a program does not have (namely semantic content), a program cannot be identical to a mind. Hence, the argument refutes the possibility of strong artificial intelligence as Searle himself defines it.

However, one can phrase another definition of strong artificial intelligence which, as I will argue, is not affected by Searle’s arguments:

A system exhibits strong artificial intelligence if it can create a mind as an emergent phenomenon by running a program.

I explicitly include any type of system, regardless of the material from which it is made – be it a computer, a Chinese Room, or a gigantic hall of falling dominoes or beer cans simulating a Turing machine.
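The claim that the material is irrelevant can be made concrete: anything that implements a Turing machine’s transition table – silicon, dominoes or beer cans – computes the same function. Here is a minimal simulator; the example machine, a unary incrementer, is my own illustration.

```python
def run_turing_machine(tape, rules, state="start", head=0, halt="halt"):
    """Simulate a one-tape Turing machine given as a transition table.

    rules maps (state, symbol) -> (new_state, new_symbol, move), where
    move is +1 (right) or -1 (left). What physically realizes this
    table is irrelevant to what is computed.
    """
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as "_"
    while state != halt:
        symbol = tape.get(head, "_")
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A machine that appends a "1" to a block of 1s (unary increment).
rules = {
    ("start", "1"): ("start", "1", +1),  # scan right over the 1s
    ("start", "_"): ("halt", "1", +1),   # write a 1 on the first blank, halt
}
print(run_turing_machine("111", rules))  # prints "1111"
```

A hall of dominoes implementing the same two rules would produce the same result – only far more slowly.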

I will not try to argue for the possibility of strong artificial intelligence according to this definition; it is doubtful whether that is even possible. However, I will argue why this definition is not affected by Searle’s arguments.

Non-Applicability of the Semantic Argument

In my proposed definition, no analogy between the program and the mind created by the program is demanded. The semantic argument therefore becomes obsolete: even though a program as a syntactical construct has no semantics (and therefore cannot be identical to a mind), it does not follow that a program cannot create semantic contents in the course of its execution.

Moreover, this definition does not state that the computer hardware is the carrier of the mental processes; the hardware is not thereby enabled to think. Rather, the computer creates the mental processes as an emergent phenomenon, similarly to how the brain creates mental processes as an emergent phenomenon. So if one considers the question in the title of Searle’s original essay, “Can Computers Think?”, the answer would be: “No, but they might create thinking.”

How a mind can be created through the execution of a program, and what sort of ontological existence this mind would have, is a discussion topic of its own. In order to make this more plausible, imagine a program that exactly simulates the trajectories and interactions of the elementary particles in the brain of a Chinese speaker. This program would not only produce the same outputs for the same inputs as the Chinese speaker’s brain, but would proceed completely analogously. There is no immediate way to exclude the possibility that the simulated brain creates a mind in exactly the same way as a real brain does. The only assumption here is that the physical processes in a brain are deterministic. There are theories claiming that a mind requires non-deterministic quantum phenomena that cannot be simulated algorithmically; one such theory is presented by the physicist Sir Roger Penrose in [7], who has founded the Penrose Institute to explore this possibility. If such theories turn out to be true, this would be a strong argument against the possibility of strong artificial intelligence.
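To illustrate the determinism assumption, consider a textbook leaky integrate-and-fire neuron: given the same input currents and parameters, the simulated spike train is always the same. This is of course a drastic simplification of my own choosing, not a claim about how a real brain would actually be simulated.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: a deterministic update rule.

    Each step, the membrane potential decays by the leak factor, the
    input current is added, and a spike is emitted (with a reset to 0)
    whenever the potential crosses the threshold.
    """
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

run1 = simulate_lif([0.4, 0.4, 0.4, 0.4, 0.4])
run2 = simulate_lif([0.4, 0.4, 0.4, 0.4, 0.4])
assert run1 == run2  # same inputs, same spike train: fully deterministic
```

If Penrose is right that essential mental phenomena are not deterministic in this sense, no update rule of this kind – however detailed – could reproduce them.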

Non-Applicability of the Chinese Room Argument

As regards the Chinese Room Argument, it convincingly shows that a system’s giving the impression of understanding something does not entail that it really understands it. Not every program that the person in the Chinese Room could execute in order to converse in Chinese does in fact create understanding. This is an important insight that refutes some common misconceptions, such as the belief that IBM’s Deep Blue understands chess in the same way a human does, or that Apple’s Siri understands spoken language. Deep Blue just calculates the payoff of certain moves, and Siri just transcribes one sequence of numbers into another (albeit in a sophisticated way). This does not create understanding or a mind.

Moreover, the Chinese Room Argument shows that the Turing Test is not a reliable indicator of strong artificial intelligence. In this test, described by Alan Turing in [12], a human subject converses with an unknown entity and must decide, solely on the basis of the answers it gives, whether it is talking to another human or to a computer. If the computer repeatedly manages to fool the subject, we call it intelligent. The test thus only measures how good a computer is at giving the impression of being intelligent, without placing any restrictions on how the computer achieves this internally – which, as argued above, is an important factor in determining whether it really exhibits strong artificial intelligence.
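That the test inspects only input–output behaviour can itself be put in code. Both responders below are placeholders of my own invention; the judge’s information is identical either way.

```python
def lookup_bot(question: str) -> str:
    """Answers from a canned table -- pure symbol lookup, no understanding."""
    table = {"What is 2+2?": "4", "Are you human?": "Of course I am!"}
    return table.get(question, "Interesting question.")

def human_stand_in(question: str) -> str:
    """Stand-in for a human who happens to give the same answers."""
    table = {"What is 2+2?": "4", "Are you human?": "Of course I am!"}
    return table.get(question, "Interesting question.")

questions = ["What is 2+2?", "Are you human?"]
# The judge sees only the transcripts. Identical transcripts are
# indistinguishable, regardless of what produced them internally.
transcripts_equal = (
    [lookup_bot(q) for q in questions] == [human_stand_in(q) for q in questions]
)
print(transcripts_equal)  # prints True: the test cannot see the internals
```

Any criterion for strong artificial intelligence that depends on how the answers are produced is therefore invisible to the test by construction.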

Additionally, Searle’s argument shows that it is not the hardware itself that understands Chinese. Even if hardware running a program creates a mind that understands Chinese, the person in the Chinese Room plays the role of the hardware and does not understand Chinese.

It does not, however, refute the possibility that the hardware can create a mind that understands Chinese by executing the program. Assume there is a program that answers Chinese questions and creates mental processes that exhibit an understanding of the Chinese questions and answers. This assumption cannot be refuted by the Chinese Room Argument. If we let the person in the room execute the program with pen and paper, it is correct that the person does not understand Chinese. But the person is only the hardware in this case; his mind is not the mind created by the execution of the program.

It might seem intuitively implausible that arithmetical operations carried out with pen and paper could give rise to a mind. But this can be made more plausible by assuming, as before, that the neuronal processes in the brain are simulated in the form of these arithmetical operations. The intuition that a mind could not arise in such a way may simply be false; there is no immediately obvious logical reason to exclude the possibility. The same holds for Searle’s system of water pipes, the beer-can dominoes, and other unorthodox hardware: if one grants that computer hardware can create a mind, one must grant that this is also possible for other, more exotic mechanical systems.

Whether it is indeed possible to create a mind by the execution of a program remains an open question. Maybe Roger Penrose will turn out to be right that consciousness is a natural phenomenon that cannot be created by the deterministic interaction of particles. Are organisms really just algorithms? How can the parallel firing of tens of billions of neurons give rise to consciousness and a mind? As of now, neuroscience does not have the slightest idea. I would, however, say with some certainty that this question cannot be answered by thought experiments alone.

If you liked this article, you may also be interested in my article Gödel’s Incompleteness Theorem And Its Implications For Artificial Intelligence.


[1] Boden, Margaret A: Escaping from the Chinese Room. University of Sussex, School of Cognitive Sciences, 1987.

[2] Churchland, Paul M and Patricia Smith Churchland: Could a Machine Think? Machine Intelligence: Perspectives on the Computational Model, 1:102, 2012.

[3] Cole, David: The Chinese Room Argument. In: Zalta, Edward N. (ed.): The Stanford Encyclopedia of Philosophy. Summer 2013. http://plato.stanford.edu/archives/sum2013/entries/chinese-room/.

[4] Dennett, Daniel C: Fast thinking. 1987.

[5] Kurzweil, Ray: Locked in his Chinese Room. Are We Spiritual Machines: Ray Kurzweil vs. the Critics of Strong AI, 2002.

[6] Maudlin, Tim: Computation and consciousness. The journal of Philosophy, pp 407–432, 1989.

[7] Penrose, Roger: The Emperor’s New Mind. Vintage, London, 1990.

[8] Russell, Stuart Jonathan et al.: Artificial Intelligence: A Modern Approach. Prentice hall Englewood Cliffs, 1995.

[9] Searle, John: The Chinese Room Argument. Encyclopedia of Cognitive Science, 2001.

[10] Searle, John R: Minds, brains, and programs. Behavioral and Brain Sciences, 3(3):417–424, 1980.

[11] Searle, John R: Minds, brains, and science. Harvard University Press, 1984.

[12] Turing, Alan M: Computing machinery and intelligence. Mind, pp 433–460, 1950.

September 3rd, 2017 | Artificial Intelligence, Cognitive Science, Philosophy