Proving Strong AI False
I will argue that strong AI is false, using examples from Searle's Chinese Room thought experiment. Strong AI holds that a computer, if programmed correctly, literally has a mind. What is a mind? The mind enables a person to be aware of the world, to think, and to feel; it is the faculty of consciousness and thought. Humans have minds, and even animals have minds. But to say that programmed robots have minds, as strong AI claims, is quite a stretch. The human mind and strong AI differ in all sorts of ways. For example, in the Chinese Room experiment Searle says something along the lines that computers merely manipulate formal symbols according to rules in the program. The human mind, by contrast, can learn things, and it can be aware of what it is doing.
Although he admits that Searle is successful in pleading his case that strong AI is false, he gives examples to show that "…running the right program together with the satisfaction of certain conditions may not be sufficient" (Fodor 188). I can see why Fodor might reject Searle's claim that strong AI is false. He believes that even though the computer or machine does not fully understand the meaning of the symbols programmed into it, that does not mean the symbols have no meaning. What he means by this is that computers are not aware of the symbols given to them and do not know what they mean, but the pivotal difference between Fodor and Searle is that Fodor believes computers derive the meanings of the symbols through external causal relations rather than through formal manipulation alone. Fodor might accept an example like the following: a dog is given treats by its owner when it is told to roll over. The dog does not understand English, but because it is given a treat every time it rolls over, a kind of causal connection forms. The dog comes to understand that when it is told to roll over it will be given a treat, even though it has no knowledge of English. So when the dog is told to roll over, it has the correct belief that if it follows the command, a treat will be given.
A mind does more than just think or daydream. It is the control center of the human body. There would be no mind without a brain, and machines that are programmed to do certain tasks do not have brains. Functionalists, along with Fodor, believe that a thing can think even if it does not have a brain and is made of different stuff. My objection to this is that hundreds of years ago, when there were no computers or machines, humans did all the thinking that created the things we have today. Philosophy, for example, cannot be done by a machine even if it is programmed correctly. Why? Because the machine is programmed for one thing only. Philosophers think of many different claims and objections to a variety of topics. Philosophers also communicate with other philosophers to agree or disagree on something, which requires a mind that thinks rationally to decide what is right and what is wrong, unlike a machine, which is programmed to do one task at a time. A human mind is not programmed at all and can romp free to daydream or think. Whatever the mind wants to do, it can do. Another reason why I am sticking with Searle's original claim that strong AI is false is the fact that without the human mind, machines would never exist! It takes the human mind to brainstorm new inventions. My question for someone who believes that strong AI is true is this: how do the machines get the programmed material into their control system? They get it from human minds that write the programs in the first place.
Andy Clark argues strongly for the theory that computers have the potential to be intelligent beings in his work "Mindware: Meat Machines." The support Clark uses to defend his claims rests on a comparison between humans and machines, both of which use arrays of symbols to perform functions. The main argument of his work can be interpreted as follows:
It is easy for Searle to respond to this claim, as there is no evidence that he needs to refute. He even says that he is "embarrassed" to respond to the idea that a whole system apart from the human brain is capable of understanding. He asks the key question that Lycan never answers: "Where is the understanding in this system?" Although Lycan tries to refute Searle's views, his arguments are not backed by proof. Lycan responds by explaining that Searle is looking only at the "fine details" of the system and not at the system as a whole. While it is possible that Searle is not looking at the system as a whole, this still does not explain or demonstrate in any way where the thinking in the system is.
... in the 21st century, and it may already dominate human life. Jastrow predicted that computers would be part of human society in the future, and Levy's real-life examples match Jastrow's prediction. The computer intelligence that Jastrow described was based on imitating the human brain and its reasoning mechanisms. However, according to Levy, computer intelligence today is about developing AI's own reasoning patterns and handling complicated tasks using data sets and algorithms, which is nothing like the human approach. From Levy's view of today's AI technology, Jastrow's prediction about AI evolution is not going to happen. Since computer intelligence does not aim to recreate a human brain, the whole idea of computers substituting for humans does not hold. Also, Levy says it is beside the point to fear that AI may control humans, as people in today's society already cannot live without computer intelligence.
Searle's argument delineates what he believes to be the invalidity of the view of the human mind held by the computational paradigm and artificial intelligence (AI). He first distinguishes between strong and weak AI. Searle finds weak AI a perfectly acceptable line of investigation, in that it uses the computer as a powerful tool for studying the mind. This approach does not make any claims about how the mind actually operates, but serves as another psychological, investigative mechanism. In contrast, strong AI states that the computer can be created so that it actually is the mind. We must first describe what exactly this entails. In order to be the mind, the computer must be able not only to understand but to have cognitive states. Also, the programs by which the computer operates are the focus of the computational paradigm, and these are taken to explain the mental states. Searle's argument is against the claims of Schank and other computationalists, who have created programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribe...
Computers are well known for their ability to perform computations and follow lists of instructions, but can a computer be a mind? There are varying philosophical theories on what constitutes a mind. Some believe that the mind must be a physical object, and others believe in dualism, the idea that the mind is separate from the brain. I am a firm believer in dualism, and this is part of the argument that I will use in favor of Dennett. The materialist view, however, would likely not consider Hubert to be a mind. That viewpoint holds that all objects are physical objects, so the mind is a physical part of the human brain; it does not treat the mind and body as two separate things, but rather as parts of one object. The materialist would likely reject Hubert as a mind, even though circuit boards are physical objects, although even a materialist would likely agree that Yorick being separated from Dennett does not disqualify Yorick as a mind. If one adopts a dualist view and accepts the idea that the mind does not have to be connected to a physical object, then one can make sense of Hubert being able to act as the mind of Dennett. The story told to us by Dennett is that when the switch on the little box attached to his body is flipped, the entity that controls Dennett changes to the other entity. Since the switches are not labeled, it is never known which entity is
I will begin by providing a brief overview of the thought experiment and how Searle derives his argument. Imagine there is someone in a room, say Searle himself, and he has a rulebook that explains what to write when he sees certain Chinese symbols. On the other side of the room is a Chinese speaker who writes Searle a note. After Searle receives the message, he must respond, so he uses the rulebook to write a perfectly coherent response back to the actual Chinese speaker. From an objective perspective, you would not say that Searle is actually able to write Chinese fluently; he does not understand Chinese, he only knows how to manipulate symbols. Searle argues that this is exactly what happens if a computer were to respond to the note in Chinese. He claims that computers are only able to process information without actually being able to understand the information they are processing. This fails the first premise of strong AI. It also fails the second premise of strong AI, because even if a computer were capable of understanding the communication it is having in Chinese, it would not be able to explain how this understanding occurs.
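To make the structure of the room concrete, here is a minimal Python sketch of the purely syntactic "rulebook" idea: incoming symbol strings are matched against stored rules and a canned string is copied back out. The particular phrases and the dictionary lookup are my own illustrative assumptions, not anything given by Searle; the point is only that nothing in the program represents what the symbols mean.

# A minimal, hypothetical sketch of the rulebook idea: symbols in, symbols out,
# with nothing anywhere in the program representing what the symbols mean.
# The phrases below are stand-in examples chosen only for illustration.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # if this exact string arrives, copy that one back
    "你会说中文吗？": "当然会。",
}

def chinese_room_reply(note: str) -> str:
    """Return whatever reply the rulebook pairs with the incoming note.

    The function only matches and copies symbol strings; it has no access
    to, and no need for, what the symbols mean.
    """
    return RULEBOOK.get(note, "对不起，我不明白。")  # default reply for unrecognized notes

if __name__ == "__main__":
    print(chinese_room_reply("你好吗？"))  # looks fluent from outside the room

From outside the room the replies look fluent, but the lookup is exactly the kind of formal symbol manipulation described above: syntax with no semantics attached anywhere.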
This world of artificial intelligence has the power to produce many questions and theories, because we are trying to understand something that is not yet possible. "How smart's an AI, Case? Depends. Some aren't much smarter than dogs. Pets. Cost a fortune anyway. The real smart ones are as smart as the Turing heat is willing to let 'em get." (Page 95) This shows that an artificial intelligence can be programmed to only do certain ...
Computers are machines that take in syntactic information only and then function based on a program made from syntactic information. They cannot change the function of that program unless formally instructed to through more information. That is inherently different from a human mind, in that a computer never takes semantic information into account when it comes to its programming. Searle's formal argument thus amounts to this: brains cause minds. Semantics cannot be derived from syntax alone. Computers are defined by a formal structure, in other words a syntactic structure. Finally, minds have semantic content. The argument then concludes that the way the mind functions in the brain cannot be likened to running a program in a computer, and that programs by themselves are insufficient to give a system thought (Searle, p. 682). In conclusion, a computer cannot think, and the view of strong AI is false. Further evidence for this argument is provided in Searle's Chinese Room thought experiment. The Chinese Room supposes that I, who do not know Chinese, am locked in a room that has several baskets filled with Chinese symbols. Also in that room is a rulebook that specifies the various manipulations of the symbols purely based on their syntax, not their semantics. For example, a rule might say to take a "squiggle" symbol from one basket and place it next to a "squoggle" symbol from another.
In this paper I will present and evaluate A. M. Turing's test for machine intelligence and describe how the test works. I will explain how the Turing test is a good way to answer whether machines can think. I will also discuss Objection (4), the Argument from Consciousness, and Objection (6), Lady Lovelace's Objection, and how Turing responded to both objections. Lastly, I will give my opinion about the Turing test and whether it is a good way to answer whether a machine can think.
At the end of chapter two, Searle summarizes his criticism of functionalism in the following way. The mental processes of a mind are caused entirely by processes occurring inside the brain; there is no external cause that determines what a mental process will be. Also, there is a distinction between the identification of symbols and the understanding of what the symbols mean. Computer programs are defined by symbol identification rather than understanding, whereas minds define mental processes by the understanding of what a symbol means. The conclusion that follows is that computer programs by themselves are not minds and do not have minds. In addition, a mind cannot be the result of merely running a computer program. Therefore, minds and computer programs are not entities with the same mental states. They are quite different, and although both are capable of input and output interactions, only the mind is capable of truly thinking and understanding. This quality is what distinguishes the mental state of a mind from the systemic state of a digital computer.
If a machine passes the test, then it is clear that for many ordinary people this would be sufficient reason to say that it is a thinking machine. And, in fact, since it is able to converse with a human and to actually fool him and convince him that the machine is human, this would seem t...
The official foundations for "artificial intelligence" were set forth by A. M. Turing in his 1950 paper "Computing Machinery and Intelligence," wherein he also coined the term and made predictions about the field. He claimed that by 1960 a computer would be able to formulate and prove complex mathematical theorems, write music and poetry, become world chess champion, and pass his test of artificial intelligence. In his test, a computer is required to carry on a compelling conversation with humans, fooling them into believing they are speaking with another human. All of his predictions require a computer to think and reason in the same manner as a human. Despite 50 years of effort, only the chess championship has come true. By refocusing artificial intelligence research on a more humanlike, cognitive model, the field will create machines that are truly intelligent, capable of meeting Turing's goals. Currently, the only "intelligent" programs and computers are not really intelligent at all; rather, they are clever applications of different algorithms lacking expandability and versatility. The human intellect has been used in only limited ways in the artificial intelligence field; however, it is the ideal model upon which to base research. Concentrating research on a more cognitive model will allow the artificial intelligence (AI) field to create more intelligent entities and ultimately, once appropriate hardware exists, a true AI.
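As a rough way to see the shape of the test just described, here is a small Python sketch of the imitation game's structure. The ask_human and ask_machine helpers, the random-guessing judge, and the canned answers are hypothetical placeholders of my own; only the overall arrangement, a judge questioning two hidden parties and guessing which is the machine, follows Turing's setup.

# A schematic sketch of the imitation game's structure, with placeholder
# participants; it is not Turing's own formulation, just the shape of the test.
import random

def ask_human(question: str) -> str:
    # Placeholder: in a real run this would relay the question to a hidden person.
    return "Let me think about that for a moment... probably yes."

def ask_machine(question: str) -> str:
    # Placeholder: in a real run this would query the program under test.
    return "Yes, I believe so."

def imitation_game(questions, judge) -> bool:
    """Run one round: the judge sees two unlabeled transcripts and must
    guess which one came from the machine. Returns True if the machine
    escaped detection (the judge guessed wrong)."""
    players = [("human", ask_human), ("machine", ask_machine)]
    random.shuffle(players)                                   # hide which is which
    transcripts = [[answer(q) for q in questions] for _, answer in players]
    guess = judge(questions, transcripts)                     # judge picks index 0 or 1
    return players[guess][0] != "machine"

if __name__ == "__main__":
    # A trivial judge that guesses at random, only to make the sketch runnable.
    fooled = imitation_game(["Can you write me a sonnet on the subject of the Forth Bridge?"],
                            judge=lambda qs, ts: random.randrange(2))
    print("machine passed this round" if fooled else "machine was identified")

Nothing in this skeleton says anything about how a machine could hold up its end of the conversation; it only makes explicit that the test judges outward conversational behavior, which is exactly the point the objections discussed below take issue with.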
Specifically, in how the theory likens conscious intelligence to a mimicry of consciousness. In Alan Turing's study of computing and consciousness, he developed the Turing Test, which essentially led to the notion that if a computing machine or artificial intelligence could perfectly mimic human communication, it was deemed 'conscious'. REF. However, many do not agree, and instead argue that while computers may be able to portray consciousness and semantics, this is not commensurable with actual thought and consciousness. Simulation is not the same as conscious thinking, nor the same as having a conscious understanding of the semantic properties of the symbols being manipulated. This flaw was portrayed in John Searle's thought experiment, 'The Chinese Room'. Searle places a person who cannot speak Chinese in a room with various Chinese characters and a book of instructions, while a person outside the room who speaks Chinese communicates through written Chinese messages passed into the room. The non-Chinese speaker responds by manipulating the uninterpreted Chinese characters, or symbols, in accordance with the syntactical instruction book, giving the illusion that they can speak Chinese. This process simulates the operation of a computer program, yet the non-Chinese speaker clearly has no understanding of the messages, or of Chinese, and is still able to produce convincing replies.
...intelligent, intentional activity taking place inside the room and the digital computer. The proponents of Searle's argument, however, would counter that if there is an entity which does computation, such as a human being or a computer, it cannot understand the meanings of the symbols it uses. They maintain that digital computers understand neither the input given to them nor the output they produce. But it cannot be claimed that digital computers as a whole cannot understand. Someone who only inputs data, being only a part of the system, cannot know about the system as a whole. If there is a person inside the Chinese Room manipulating the symbols, that person is already intentional and has a mental state; thus, because their hardware and software are seamlessly integrated into systems that handle the inputs and outputs as wholes, digital computers too have states of mind.
In the past few decades we have seen computers becoming more and more advanced, challenging the abilities of the human brain. We have seen computers doing complex assignments like launching rockets or analyzing data from outer space. But the human brain is responsible for thought, feelings, creativity, and the other qualities that make us human, so the brain has to be more complex and more complete than any computer. Besides, if the brain created the computer, the computer cannot be better than the brain. There are many differences between the human brain and the computer, for example, the capacity to learn new things. Even the most advanced computer can never learn the way a human does. While we might be able to load new information onto a computer, it can never learn new material by itself. Computers are also limited in what they "learn" by the memory or hard disk space they have left, unlike the human brain, which is constantly learning every day. Computers can neither make judgments about what they are "learning" nor disagree with the new material. They must accept into their memory whatever is programmed onto them. Besides, everything found in a computer is based on what the human brain has acquired through experience.