John Searle's Chinese Room Argument
The purpose of this paper is to present John Searle’s Chinese room argument, which challenges the claims of the computational paradigm, specifically its capacity for intentionality. I will then outline two of the commentaries that followed: the first, by Bruce Bridgeman, opposes Searle and uses a super robot to exemplify its point; the second, John Eccles’ response, entails a general agreement with Searle along with a few objections to his definitions and comparisons. My own argument will take a minimalist computational approach, delineating understanding and its importance to the concepts of the computational paradigm.
Searle's argument delineates what he believes to be the invalidity of the computational paradigm's and artificial intelligence's (AI) view of the human mind. He first distinguishes between strong and weak AI. Searle finds weak AI a perfectly acceptable line of investigation, in that it uses the computer as a powerful tool for studying the mind. Weak AI makes no contentions about how the mind actually operates; the computer serves simply as another psychological, investigative instrument. In contrast, strong AI holds that a computer can be created such that it actually is a mind. We must first describe what exactly this entails. In order to be a mind, the computer must be able not only to understand but to have cognitive states. Moreover, the programs by which the computer operates are the focus of the computational paradigm, and these programs are taken to be the explanations of mental states. Searle's argument is directed against the claims of Schank and other computationalists, including the creators of programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribe...
... middle of paper ...
... ha, I will say, this is just my point. Our brain does not simply receive input strings, process them, and output strings; there is a very specific and nonrandom association going on, based on the motivations and inclinations of the moment. In other words, it is directly influenced by those hormonal levels which Bridgeman is so eager to disregard. For instance, I may think, “yum, a banana tastes very good,” because I am hungry right then. At another moment, I might refer to a visual representation of the banana, because I am painting a still life and a banana will do well for my composition. So my fourth point would be that understanding is hormonally and motivationally specific, changing perhaps even from moment to moment. In summary, I feel computational understanding can be achieved at a secondary level, but the primary motivations are lacking.
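To make the contrast concrete, consider the following minimal sketch. It is purely illustrative: the stimuli, motivational states, and associations are invented stand-ins, not a model of any actual brain process or of Bridgeman's proposal.

```python
# Toy contrast between the two pictures at issue. Everything here
# (stimuli, states, associations) is invented for illustration only.

def stateless_reply(stimulus: str) -> str:
    # The "input string -> output string" picture: one fixed association,
    # the same answer no matter what the organism currently wants.
    return {"banana": "a yellow fruit"}.get(stimulus, "unknown")

def motivated_reply(stimulus: str, motivation: str) -> str:
    # The alternative picture: which association gets selected depends on
    # the current motivational (hormonal) state, so the same input can
    # yield different outputs at different moments.
    associations = {
        ("banana", "hungry"):   "yum, a banana tastes very good",
        ("banana", "painting"): "a curved yellow form for the still life",
    }
    return associations.get((stimulus, motivation), "no salient association")

print(stateless_reply("banana"))             # always the same answer
print(motivated_reply("banana", "hungry"))   # answer shifts with state
print(motivated_reply("banana", "painting"))
```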
Computationalism is the view that computation, an abstract, materialist notion lacking semantics and real-world interaction, offers an explanatory basis for human comprehension. The main purpose of this paper is to discuss and compare different views regarding computationalism and the arguments associated with them. The two arguments I find strongest are proposed by Andy Clark, in “Mindware: Meat Machines,” and John Searle, in “Minds, Brains, and Programs.”
It is easy for Searle to respond to this claim, since there is no evidence that he needs to refute. He even says that he is "embarrassed" to respond to the idea that a whole system apart from the human brain could be capable of understanding. He asks the key question that Lycan never answers: "Where is the understanding in this system?" Although Lycan tries to refute Searle's views, his arguments are not backed by proof. Lycan responds that Searle is looking only at the "fine details" of the system and not at the system as a whole. Even if Searle is not looking at the system as a whole, this still neither explains nor offers any proof of where the thinking in the system is.
... in the 21st century, and it might already dominate human life. Jastrow predicted that the computer would become part of human society in the future, and Levy’s real-life examples match Jastrow’s prediction. The computer intelligence that Jastrow described imitated the human brain and its reasoning mechanisms. According to Levy, however, computer intelligence today is about developing AI’s own reasoning patterns and handling complicated tasks using data sets and algorithms, which is nothing like a human. From Levy’s view of today’s AI technology, Jastrow’s prediction about AI evolution is not going to happen. Since computer intelligence does not aim to recreate a human brain, the whole idea of the computer substituting for humans does not hold. Levy also says it is pointless to fear that AI may control humans, since people in today’s society cannot live without computer intelligence.
Searle’s argument is one against humans having free will. The conclusion follows from his views on determinism and on substances. His view of substances is a materialist one: to him, the entire world is composed of material substances, and all occurrences can be explained by these materials.
John Searle’s Chinese room argument, from his work “Minds, Brains, and Programs,” is a thought experiment directed against the premises of strong Artificial Intelligence (AI): that a suitably programmed machine can literally understand, and that its program explains how human understanding works. I will argue that the Chinese room argument successfully disproves the conclusion of strong AI; however, it does not provide an explanation of what understanding is, which becomes problematic when drawing a distinction between humans and machines.
Since antiquity the human mind has been intrigued by artificial intelligence; hence, the rapid growth of computer science has raised many issues concerning the isolation of the human mind.
Artificial Intelligence is a term not too widely used in today’s society. With today’s technology we haven’t found a way to enable someone to leave their physical body and let their mind survive within a computer. Could it be possible? Maybe someday, but for now it’s just a theory. William Gibson’s novel Neuromancer touches greatly on the idea of artificial intelligence. He describes a world where many things are possible, where simply logging on to the computer opens up a world we could never otherwise comprehend. The possibilities are endless in the world of William Gibson.
The “human sense of self control and purposefulness, is a user illusion” (261); therefore, if computational systems are comparable to human consciousness, the question arises whether such artificial systems should be treated as humans. Such programs are even capable of learning like children, with time and experience; the programs “[get] better at their jobs with experience.” Many would argue, however, that the difference is self-awareness, since there are many organisms that can carry out similarly complex behavior yet have no sense of identity.
In “Can Computers Think?”, Searle argues that computers are unable to think as humans do. He argues this
In this paper I will present and evaluate A.M. Turing’s test for machine intelligence and describe how the test works. I will explain how the Turing test is a good way to answer whether machines can think. I will also discuss Objection (4), the Argument from Consciousness, and Objection (6), Lady Lovelace’s Objection, and how Turing responded to each. Lastly, I will give my opinion on whether the Turing test is a good way to answer whether a machine can think.
At the end of chapter two, Searle summarizes his criticism of functionalism in the following way. The mental processes of a mind are caused entirely by processes occurring inside the brain; there is no external cause that determines what a mental process will be. Furthermore, there is a distinction between the identification of symbols and the understanding of what the symbols mean. Computer programs are defined by symbol identification, whereas minds define mental processes by the understanding of what a symbol means. The conclusion that follows is that computer programs by themselves are not minds and do not have minds; moreover, a mind cannot be the result of merely running a computer program. Minds and computer programs are therefore not entities with the same mental state. Although both are capable of input and output interactions, only the mind is capable of truly thinking and understanding, and this quality is what distinguishes the mental state of a mind from the systemic state of a digital computer.
One of the hottest topics in modern science has long been the field of artificial intelligence, the study of intelligence in machines or, according to Minsky, “the science of making machines do things that would require intelligence if done by men” (qtd. in Copeland 1). Artificial Intelligence has many applications and is used in many areas. “We often don’t notice it but AI is all around us. It is present in computer games, in the cruise control in our cars and the servers that route our email.” (BBC 1). Different goals have been set for the science of Artificial Intelligence, but according to Whitby the most frequently cited statement of the goal of AI is provided by the Turing Test. This test is also called the imitation game, since it is basically a game in which a computer imitates a conversing human. In my analysis of the Turing Test I will focus on its features, its historical background, and the evaluation of its validity and importance.
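To show the structure of the imitation game in miniature, here is a hedged sketch. The probe question, the canned answers, and the random guess are invented placeholders; Turing's actual protocol involves a live interrogator asking free-form questions over many exchanges.

```python
import random

# Schematic sketch of the imitation game. The single question and the
# canned answers below are invented placeholders, not Turing's protocol.

def human_respond(question: str) -> str:
    return "Count me out; I never could write poetry."

def machine_respond(question: str) -> str:
    # A machine that imitates the human answer perfectly.
    return "Count me out; I never could write poetry."

def one_round(question: str) -> bool:
    """Return True if the interrogator fails to pick out the machine.

    The interrogator sees only the two answers, in a hidden random
    order, and must say which one came from the machine.
    """
    answers = [("human", human_respond(question)),
               ("machine", machine_respond(question))]
    random.shuffle(answers)
    # With indistinguishable answers there is nothing to go on,
    # so the interrogator can only guess.
    guess = random.choice([0, 1])
    return answers[guess][0] != "machine"

if __name__ == "__main__":
    trials = 1000
    misses = sum(one_round("Write me a sonnet on the Forth Bridge.")
                 for _ in range(trials))
    print(f"Machine misidentified {misses}/{trials} times (about chance).")
```

When the interrogator's success hovers around chance, the machine has, on this operational criterion, passed the test.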
Specifically, the theory likens conscious intelligence to a mimicry of consciousness. In Alan Turing’s study of computing and consciousness, he developed the Turing Test, which essentially led to the notion that if a computing machine or artificial intelligence could perfectly mimic human communication, it was deemed ‘conscious’. However, many do not agree, and instead argue that while computers may be able to portray consciousness and semantics, this is not commensurable with actual thought and consciousness. Simulation is not the same as conscious thinking, nor as having a conscious understanding of the semantic properties of the symbols being manipulated. This flaw was portrayed in John Searle’s thought experiment, ‘The Chinese Room’. Searle places a person who cannot speak Chinese in a room with various Chinese characters and a book of instructions, while a person outside the room who speaks Chinese communicates through written Chinese messages passed into the room. The non-Chinese speaker responds by manipulating the uninterpreted Chinese characters, or symbols, in accordance with the syntactical instruction book, giving the illusion that they can speak Chinese. This process simulated the operation of a computer program, yet the non-Chinese speaker clearly had no understanding of the messages, or of Chinese, and was still able to produce
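The room's rulebook logic can be sketched in a few lines of code. This is only an illustration under invented assumptions: the lookup table stands in for Searle's instruction book, and the handful of stored strings are not a real conversational program.

```python
# Minimal illustration of the Chinese Room's logic: replies are chosen
# by matching symbol shapes against a rulebook, never by meaning. The
# rules below are invented stand-ins for Searle's instruction book.

RULEBOOK = {
    "你好吗": "我很好",      # matched purely as shapes by the operator
    "你会说中文吗": "会",
}

def chinese_room(incoming: str) -> str:
    """Return whatever string the rulebook pairs with the input.

    Nothing here represents what any symbol means; the function only
    checks which stored string the input matches. That is the point:
    syntax (symbol matching) without semantics (understanding).
    """
    return RULEBOOK.get(incoming, "请再说一遍")  # fallback: "please say that again"

print(chinese_room("你好吗"))  # fluent-looking output, zero understanding
```

From outside, the exchange looks competent; inside, there is only matching, which is exactly the asymmetry the thought experiment trades on.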
...lligent, intentional activity taking place inside the room and the digital computer. The proponents of Searle’s argument, however, would counter that any entity which merely does computation, whether a human being or a computer, cannot understand the meanings of the symbols it uses; they maintain that digital computers understand neither the input given in nor the output given out. But it cannot be claimed that digital computers as whole systems cannot understand. Someone who only inputs data, being only a part of the system, cannot know the system as a whole. If the person inside the Chinese room manipulating the symbols is already intentional and has a mental state, then, by the same reasoning, digital computers, whose hardware and software are seamlessly integrated into whole systems that handle inputs and outputs, would likewise have states of mind.
First off, let’s get something straight: when I refer to computers in this essay, I am not referring only to the microprocessor sitting on your desk but also to the microprocessors that control robots of various structures.