Argument Reconstruction and Objection on Searle's Essay

American philosopher John Searle wrote "Minds, Brains, and Programs" in 1980 to discredit the notion of strong artificial intelligence. He starts by drawing a clear line between strong artificial intelligence and weak artificial intelligence, the latter of which he has no objection to. Searle uses the work of Roger Schank as an example of what strong artificial intelligence tries to accomplish. Simply put, the purpose of Schank's program is to "simulate the human ability to understand stories"; through this, it should be able to understand a story and answer questions about it, while expressing metacognition. On the other hand, weak A.I. would be used as a "very ..."
While the symbol manipulation may have some connection to the way humans understand, it is unnecessary. P4 details what gives something the capacity to think. Searle says, "My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains." This means that he sees humans as machines that can think. He goes on to state that strong artificial intelligence is about programs, and programs are not machines. He also states that brains are "digital computers" just like many other things. The main difference is that the brain does not rely on pure symbol translation in order to think; it is far more complex than that.
For a computer to run a program, it needs to translate English words into something the computer can work with. This is traditionally done through ASCII (the American Standard Code for Information Interchange), an encoding that maps every letter and symbol to an appropriate binary string. This translation is crucial for a computer to work with non-binary input. The brain also does symbol manipulation, and much faster than we realize. Pictures and words flashed very briefly at a person can still be picked up without much loss of information. When the brain reads a word, it doesn't look at each letter individually; rather, it takes in the whole word, essentially translating the symbol (the word) into something the brain understands. This all happens within milliseconds and allows us to do things that a computer simply cannot.
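The ASCII translation described above can be sketched in a few lines of Python; the word chosen here is only an illustration:

```python
# Encode a word the way the paragraph describes: each character
# becomes its ASCII code point, rendered as an 8-bit binary string.
word = "mind"
binary = [format(ord(ch), "08b") for ch in word]
print(binary)
# 'm' is ASCII 109, which is 01101101 in 8-bit binary
```

Decoding is just the reverse lookup: `chr(int("01101101", 2))` gives back `'m'`.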
Computationalism is the view that computation, an abstract formal notion lacking semantics and real-world interaction, offers an explanatory basis for human comprehension. The main purpose of this paper is to discuss and compare different views of computationalism and the arguments associated with them. The two arguments I find strongest are proposed by Andy Clark in "Mindware: Meat Machines" and by John Searle in "Minds, Brains, and Programs."
Temple Grandin uses many metaphors to explain her way of thinking. She uses terms like "web browser," "tape recorder," and "computer." She says, "I hypothesize that the frontal cortex of my brain is the operator and the rest of my brain is the computer" (Grandin 404). This is a perfect example of her way of describing her mind in computational terms.
To sum up his article, Carr mentions scientists at Google who are trying to build an artificial intelligence to supplement our brains. He wants us to feel scared and frightened because, with an artificial intelligence inside us, we would be more like computers: no longer able to think on our own, our brains instead running like a machine.
...mysterious technology. When referencing the new technology he states, "They supply the stuff for thought, but they also shape the process of thought" (6). Carr's main point is that technology, especially the Internet, is changing the programming of the brain.
Searle's argument delineates what he believes to be the invalidity of the view of the human mind held by the computational paradigm and by artificial intelligence (AI). He first distinguishes between strong and weak AI. Searle finds weak AI a perfectly acceptable line of investigation, in that it uses the computer as a powerful tool for studying the mind. This approach does not make any claims about the operation of the mind, but serves as another psychological, investigative mechanism. In contrast, strong AI claims that a computer can be created that actually is a mind. We must first describe what exactly this entails. In order to be a mind, the computer must be able not only to understand, but to have cognitive states. Moreover, the programs by which the computer operates are the focus of the computational paradigm, and these programs are taken to be the explanations of mental states. Searle's argument is against the claims of Schank and other computationalists, authors of programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribed...
Computers are well known for their ability to perform computations and follow a list of instructions, but can a computer be a mind? There are varying philosophical theories about what constitutes a mind. Some believe that the mind must be a physical object; others believe in dualism, the idea that the mind is separate from the brain. I am a firm believer in dualism, and this belief is part of the argument I will use in favor of Dennett. The materialist view, however, would likely not consider Hubert to be a mind. On that view, all objects are physical objects, so the mind is a physical part of the human brain; the mind and body are not two separate things but parts of one object. The materialist would likely reject Hubert as a mind, even though circuit boards are physical objects, although even a materialist would likely agree that Yorick's being separated from Dennett does not disqualify Yorick as a mind. If one adopts a dualist view and accepts the idea that the mind does not have to be connected to a particular physical object, then one can make sense of Hubert acting as the mind of Dennett. The story Dennett tells us is that when the switch on the little box attached to his body is flipped, the entity that controls Dennett changes to the other entity. Since the switches are not labeled, it is never known which entity is in control.
Carr starts off his argument by referencing 2001: A Space Odyssey, released in 1968, about a computer named HAL that tries to kill the astronauts aboard the spaceship HAL controls. Carr uses an excerpt from this movie to instill fear in his readers, and fear clouds judgment and encourages irrational ideas. The movie is an exaggerated sci-fi thriller, not a realistic representation of what computers are becoming. At the conclusion of his argument, Carr does not forget to leave his readers the way he greeted them; he again quotes 2001: A Space Odyssey: "I can feel it. I'm afraid" (Carr 328). Although emotions are a strong way to engage a reader, strong emotions also distract readers from the actual argument and encourage them to decide based on feeling rather than reason. The fact that Carr uses emotion to convince his readers is quite ironic, considering he is arguing that new technology is limiting our ability to use our brains. In contrast, Thompson's article uses logic and reason to make its argument, while still engaging readers and remaining just as interesting to read as Carr's essay. Thompson's article starts off by pondering whether computers or humans are better at chess. To answer this...
I will begin by providing a brief overview of the thought experiment and how Searle derives his argument. Imagine there is someone in a room, say Searle himself, and he has a rulebook that explains what to write when he sees certain Chinese symbols. On the other side of the room is a Chinese speaker who writes Searle a note. After Searle receives the message, he must respond; he uses the rulebook to write a perfectly coherent reply back to the actual Chinese speaker. From an objective perspective, you would not say that Searle can actually write Chinese fluently: he does not understand Chinese, he only knows how to manipulate symbols. Searle argues that this is exactly what happens when a computer responds to the note in Chinese. He claims that computers are only able to process information without actually understanding the information they are processing. This fails the first premise of strong AI. It also fails the second premise of strong AI because, even if a computer were capable of understanding the conversation it is having in Chinese, it would not be able to explain how this understanding occurs.
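The rulebook Searle describes can be sketched as a purely syntactic lookup table. The following is a minimal illustration; the symbol strings and the `room_reply` helper are invented for the example, and a real rulebook would be vastly larger:

```python
# A toy "Chinese Room" rulebook: the program matches the shapes of
# incoming symbols and copies out the shapes the rules dictate.
# At no point does any meaning enter the process.
RULEBOOK = {
    "你好吗?": "我很好。",        # incoming shapes -> prescribed outgoing shapes
    "你会说中文吗?": "当然会。",
}

def room_reply(note: str) -> str:
    """Look up the note's symbols and return the prescribed response."""
    return RULEBOOK.get(note, "请再写一遍。")  # fallback rule: "please write it again"

print(room_reply("你好吗?"))
```

From outside the room the replies look fluent, yet the lookup never consults what any symbol means, which is exactly Searle's point about syntax versus semantics.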
Computers are machines that take in syntactical information only and then function according to a program that is itself made of syntactical information. They cannot change the function of that program unless formally instructed to by further information. That is inherently different from a human mind, in that a computer never takes semantic information into account in its programming. Searle's formal argument thus amounts to this: brains cause minds; semantics cannot be derived from syntax alone; computers are defined by a formal, that is, a syntactical structure; and minds have semantic content. The argument then concludes that the way the mind functions in the brain cannot be likened to running a program on a computer, and programs by themselves are insufficient to give a system thought (Searle, p. 682). In conclusion, a computer cannot think, and the view of strong AI is false. Further support for this argument is provided in Searle's Chinese Room thought experiment. In the Chinese Room, I, who do not know Chinese, am locked in a room that has several baskets filled with Chinese symbols. Also in the room is a rulebook that specifies various manipulations of the symbols purely on the basis of their syntax, not their semantics. For example, a rule might say to move the squiggly...
At the end of chapter two, Searle summarizes his criticism of functionalism in the following way. The mental processes of a mind are caused entirely by processes occurring inside the brain; there is no external cause that determines what a mental process will be. Also, there is a distinction between the identification of symbols and the understanding of what the symbols mean. Computer programs are defined by symbol identification rather than understanding, whereas minds define mental processes by the understanding of what a symbol means. The conclusion that follows is that computer programs by themselves are not minds and do not have minds. In addition, a mind cannot be the result of merely running a computer program. Therefore, minds and computer programs are not entities with the same kind of state: although both are capable of input and output interactions, only the mind is capable of truly thinking and understanding. This quality is what distinguishes the mental state of a mind from the systemic state of a digital computer.
It is best to begin with Turing's hypothetical position, since Searle's will later require an additional consideration (in response to Part C of this scenario). Based on Turing's argument for the possibility of artificial intelligence, Siri* would be considered a "thinking, intelligent being," because Turing's criterion for a "thinking, intelligent being" is a being that has the ability to use and understand language. This is measured by a successful passing of the Turing test, also known as the Imitation Game, in which...
John Searle developed two areas of thought concerning the independent cognition of computers: the definitions of weak AI and strong AI. In essence, the two types of AI have fundamental differences. Weak AI was defined as a system that merely simulates the human mind, while strong AI was characterized as a system fully capable of cognitive processes such as consciousness and intentionality, as well as understanding. He uses the Chinese Room argument to show that strong AI does not exist.
...equipped with such knowledge, giving them the rudimentary ability to understand the semantics that Searle describes. This, too, is reflected in Turing's test, given that language is a prominent factor in the experiment.
Specifically, the objection concerns how the theory likens conscious intelligence to a mimicry of consciousness. In his study of computing and consciousness, Alan Turing developed the Turing Test, which essentially led to the notion that if a computing machine or artificial intelligence could perfectly mimic human communication, it was deemed 'conscious'. However, many disagree, arguing that while computers may be able to portray consciousness and semantics, this is not commensurable with actual thought and consciousness. Simulation is not the same as conscious thinking, nor as having a conscious understanding of the semantic properties of the symbols being manipulated. This flaw was portrayed in John Searle's thought experiment, 'The Chinese Room'. Searle places a person who cannot speak Chinese in a room with various Chinese characters and a book of instructions, while a person outside the room who speaks Chinese communicates through written Chinese messages passed into the room. The non-Chinese speaker responds by manipulating the uninterpreted Chinese characters, or symbols, in accordance with the syntactical instruction book, giving the illusion that they can speak Chinese. This process simulates the operation of a computer program, yet the non-Chinese speaker clearly has no understanding of the messages, or of Chinese, and is still able to produce...
...intelligent, intentional activity taking place inside the room and the digital computer. The proponents of Searle's argument, however, would counter that an entity which merely does computation, whether a human being or a computer, cannot understand the meanings of the symbols it uses. They maintain that digital computers understand neither the input given in nor the output given out. But it cannot be claimed that digital computers as wholes cannot understand. Someone who only inputs data, being only a part of the system, cannot know about the system as a whole. If there is a person inside the Chinese room manipulating the symbols, that person is already intentional and has mental states; thus, owing to the seamless integration of their hardware and software into systems that handle inputs and outputs as wholes, digital computers too have states of mind.