At some point in our lives, we have all wondered whether a computer could think. John Searle addresses this issue in his paper, “Can Computers Think?”, where he argues that computers cannot think because they operate on purely formal information: what they process is only syntax, with no semantics behind it. In this paper, I will elaborate on Searle’s position and reasoning while critiquing his argument with the claim that it is possible to derive semantics from syntax. Finally, I will analyze the significance of my criticism and present a possible response from Searle in defense of his argument.
In “Can Computers Think?”, Searle argues that computers are unable to think like humans can. He argues this
in response to a view in philosophy which holds that “the brain is just a digital computer and the mind is just a computer program.” (Searle, p.677) This view entails that there is no biology involved in the human mind: the brain is merely a piece of hardware large enough to house the programs that make up the mind. Searle labels this view “‘strong artificial intelligence,’ or ‘strong AI,’” since it holds that the mind is to the brain as a program is to computer hardware. (Searle, p.677) It would follow that a computer could think like a human being, but Searle objects to this. He reasons that digital computers can never think as humans do, not because of any technological limitation that might someday be overcome, but because of what computers are at their core.
Computers are machines that take in only syntactical information and then function according to a program that is itself made of syntactical information. They cannot change the function of that program unless formally instructed to do so through more information. That is inherently different from a human mind, in that a computer never takes semantic information into account in its programming. Searle’s formal argument thus runs: brains cause minds; semantics cannot be derived from syntax alone; computers are defined by a formal, in other words syntactical, structure; and minds have semantic content. The argument concludes that the way the mind functions in the brain cannot be likened to running a program on a computer, and that programs by themselves are insufficient to give a system thought. (Searle, p.682) Hence a computer cannot think, and the view of strong AI is false. Further support for this argument comes from Searle’s Chinese Room thought-experiment. In the Chinese Room, I, who do not know Chinese, am locked in a room containing several baskets filled with Chinese symbols. Also in the room is a rulebook that specifies various manipulations of the symbols based purely on their syntax, not their semantics. For example, a rule might say to move the squiggly
sign from the first basket into the second basket next to the straight sign. Now suppose that I receive messages in Chinese and, by following another set of rules, I am able to send messages back. Unbeknownst to me, the messages I am receiving are questions in Chinese, and I am actually sending back replies or “answers” to those questions in perfect Chinese. We can therefore see that even though I am replying to these questions in perfect Chinese, I do not know a single word of Chinese, because I am simply rearranging the symbols of a language. This thought-experiment simulates the actions that a computer program would take. (Searle, p.679) Searle presents a formidable argument, to be sure, especially with the Chinese Room thought-experiment. In the end, I would have to concede to it: if I were trapped in a room with only the baskets of symbols and the rulebooks, I would not know the meaning of the Chinese symbols simply from arranging responses out of them. However, that does not mean the argument is perfect. The premise that semantics cannot be derived from syntax alone can be challenged with an extension of the Chinese Room experiment. Let us assume that we put a digital computer into a robot that has visual cameras, touch, taste, and smell sensors, and microphones to gather external data from the outside world, all of which would go into the computer as additional syntactical data. Let us also assume that the robot looks and acts like an average human, and that each and every action the robot takes happens in the same way as in the Chinese Room experiment: syntactical data turning into new syntactical data through the running of programs. In the same respect as the Chinese Room experiment, then, this robot reacts exactly as a human does, but only on syntactical information with no semantic information behind it.
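The room’s procedure can be sketched as a purely syntactic lookup. The following is an illustrative toy only: the rulebook entries, the function name, and the symbol strings are all invented for the example, and real Chinese content is beside the point. What matters is that the procedure consults only the shapes of the symbols and contains nothing that represents what any symbol means.

```python
# Toy sketch of Searle's Chinese Room: a "rulebook" mapping input
# symbol strings to output symbol strings. All entries are invented
# placeholders for this illustration.
RULEBOOK = {
    "你好吗": "我很好",        # rule: on seeing these shapes, emit these
    "你是谁": "我是一个房间",   # another purely syntactic pairing
}

def chinese_room(message: str) -> str:
    """Reply by pattern matching alone; no meanings are represented.

    The lookup operates on the symbols' syntax (their shapes as
    strings); there is no semantic content anywhere in the program.
    """
    # Default rule: emit a fixed "please say that again" string.
    return RULEBOOK.get(message, "请再说一遍")

print(chinese_room("你好吗"))  # fluent-looking output, zero understanding
```

The point of the sketch is exactly Searle’s: the responder can look perfectly competent from outside the room while the process inside is nothing but symbol shuffling.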
Now that the computer is placed in an external environment, as it interacts with the world it will learn the meaning of the symbols in its syntax. For example, it would see an apple and then know it as an apple, because the robot would be able to give the symbol for an apple a visual representation. That would entail that the robot is learning, that it is able to derive semantics from syntax through connections to the external world, and that it is able to think. This response is commonly known as the Robot reply. (Cole, 4.2) The significance of the Robot reply is not that it lands a direct hit on Searle’s reasoning, but that it sidesteps the claim that a computer cannot derive semantics from syntax directly: it holds that a computer can derive semantics from syntax if given enough connections to the world to assist in the derivation. However, since the reply is essentially an incomplete argument against Searle’s position, it is not a very devastating criticism. If the Chinese Room were put into the robot and I were given all of this syntactical information about the world, I still would not know what the symbols meant, despite what the robot was actually doing, because I would only be shuffling symbols. I would be unable to gain any meaning from them. (Cole, 4.2) The actions of the robot do not prove that the computer operating it is thinking; they only show that it is still able to run through its own programming. On these grounds, Searle would say that the Robot reply does not change the fact that the computer in the head of the robot still takes in information only syntactically.
He would say that it is still impossible for a computer to derive semantic information from mere syntax because the two, according to him, are mutually exclusive when kept separate. No semantic information can be gained from syntax alone, which means that even if a robot were interacting with the world, the computer inside it would only be receiving syntactical information and processing it in syntactical terms. It is also important to note, in Searle’s words, that a computer’s “operations have to be defined syntactically, whereas consciousness, thoughts, feelings, emotions, and all the rest of it involve more than syntax.” (Searle, p.681) Therefore, even though a robot would be able to simulate being a human, it cannot actually be one. With that evidence, I believe Searle would conclude that the Robot reply does not satisfy the conditions needed for a computer to be able to
think. To conclude, the line of criticism that I took, the Robot reply, does not defeat Searle’s argument about whether computers can think, but it does call into question the viability of the premise that syntax on its own is insufficient to derive semantics.
Andy Clark argues strongly for the theory that computers have the potential to be intelligent beings in his work “Mindware: Meat Machines.” Clark supports his claims by drawing a comparison between humans and machines as systems that use arrays of symbols to perform their functions. The main argument of his work can be interpreted as follows:
... in the 21st century, and it might already dominate human life. Jastrow predicted that computers will be part of human society in the future, and Levy’s real-life examples matched Jastrow’s prediction. The computer intelligence that Jastrow described was about imitating the human brain and its reasoning mechanisms. However, according to Levy, computer intelligence today is about developing AI’s own reasoning patterns and handling complicated tasks from data sets and algorithms, which is nothing like a human’s. From Levy’s view of today’s AI technology, Jastrow’s prediction about AI evolution is not going to happen. Since computer intelligence does not aim to recreate a human brain, the whole idea of computers substituting for humans does not hold. Levy also said it is beside the point to fear that AI may control humans, since people in today’s society cannot live without computer intelligence.
Searle's argument delineates what he believes to be the invalidity of the computational paradigm's and artificial intelligence's (AI) view of the human mind. He first distinguishes between strong and weak AI. Searle finds weak AI a perfectly acceptable line of investigation, in that it uses the computer as a powerful tool for studying the mind. It does not observe or formulate any contentions as to the operation of the mind, but serves as another psychological, investigative mechanism. In contrast, strong AI states that a computer can be created such that it actually is the mind. We must first describe what exactly this entails. In order to be the mind, the computer must be able not only to understand, but to have cognitive states. Moreover, the programs by which the computer operates are the focus of the computational paradigm, and these serve as the explanations of mental states. Searle's argument is against the claims of Schank and other computationalists, behind programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribe...
Computers are well known for their ability to perform computations and follow lists of instructions, but can a computer be a mind? There are varying philosophical theories on what constitutes a mind. Some believe that the mind must be a physical object, while others believe in dualism, the idea that the mind is separate from the brain. I am a firm believer in dualism, and this is part of the argument that I will use in favor of Dennett. The materialist view, however, would likely not consider Hubert to be a mind. On that view, all objects are physical objects, so the mind is a physical part of the human brain; the mind and body are not two separate things but parts of one object. The materialist would likely reject Hubert as a mind, even though circuit boards are physical objects, although even a materialist would likely agree that Yorick’s being separated from Dennett does not disqualify Yorick as a mind. If one adopts a dualist view and accepts the idea that the mind does not have to be connected to a physical object, then one can make sense of Hubert being able to act as the mind of Dennett. The story told to us by Dennett is that when the switch on the little box attached to his body is flipped, the entity that controls Dennett changes to the other entity. Since the switches are not labeled, it is never known which entity is
Searle’s argument is one against humans having free will. The conclusion comes from his view on determinism and his view on substances. His view on substances is a materialist one. To him, the entire world is composed of material substances. All occurrences can be explained by these materials.
I will begin by providing a brief overview of the thought experiment and how Searle derives his argument. Imagine there is someone in a room, say Searle himself, and he has a rulebook that explains what to write when he sees certain Chinese symbols. On the other side of the room is a Chinese speaker who writes Searle a note. After Searle receives the message, he must respond; he uses the rulebook to write a perfectly coherent reply back to the actual Chinese speaker. From an objective perspective, you would not say that Searle is actually able to write Chinese fluently: he does not understand Chinese, he only knows how to compute symbols. Searle argues that this is exactly what happens when a computer responds to the note in Chinese. He claims that computers are only able to compute information without actually understanding the information they are computing. This fails the first premise of strong AI. It also fails the second premise of strong AI, because even if a computer were capable of understanding the communication it is having in Chinese, it would not be able to explain how this understanding occurs.
This world of artificial intelligence has the power to produce many questions and theories, because we do not yet understand what is and is not possible. “How smart’s an AI, Case? Depends. Some aren’t much smarter than dogs. Pets. Cost a fortune anyway. The real smart ones are as smart as the Turing heat is willing to let ‘em get.” (Page 95) This shows that an artificial intelligence can be programmed to only do certain ...
Technology Is What You Make It
The articles “How Computers Change the Way We Think” by Sherry Turkle and “Electronic Intimacy” by Christine Rosen argue that, even though technology can at times be helpful, it is on the whole quite damaging to society. I have to both agree and disagree with this, because it really depends on how technology is used: it can damage or help the user. Progressing changes in technology, like social media, can push us, as a society, both closer to and further from each other and from personal connection, because technology has become a tool that can be manipulated to help or hurt our relationships and ourselves as human beings, who are capable of more both with and without it. Technology makes things more efficient and instantaneous.
In this paper I will present and evaluate A.M. Turing’s test for machine intelligence and describe how the test works. I will explain how the Turing test is a good way to answer whether machines can think. I will also discuss Objection (4), the Argument from Consciousness, and Objection (6), Lady Lovelace’s Objection, and how Turing responded to both. Lastly, I will give my opinion on the Turing test and on whether it is a good way to answer whether a machine can think.
At the end of chapter two, Searle summarizes his criticism of functionalism in the following way. The mental processes of a mind are caused entirely by processes occurring inside the brain. There is no external cause that determines what a mental process will be. Also, there is a distinction between the identification of symbols and the understanding of what the symbols mean. Computer programs are defined by symbol identification rather than understanding. On the other hand, minds define mental processes by the understanding of what a symbol means. The conclusion leading from this is that computer programs by themselves are not minds and do not have minds. In addition, a mind cannot be the result of running a computer program. Therefore, minds and computer programs are not entities with the same mental state. They are quite different and although they both are capable of input and output interactions, only the mind is capable of truly thinking and understanding. This quality is what distinguishes the mental state of a mind from the systemic state of a digital computer.
If a machine passes the test, then clearly, for many ordinary people, that would be sufficient reason to say it is a thinking machine. And, in fact, since it is able to converse with a human and actually fool him, convincing him that the machine is human, this would seem t...
The official foundations for "artificial intelligence" were set forth by A. M. Turing in his 1950 paper "Computing Machinery and Intelligence," wherein he made predictions about the field. He claimed that a computer would come to formulate and prove complex mathematical theorems, write music and poetry, become world chess champion, and pass his test of artificial intelligence. In his test, a computer is required to carry on a compelling conversation with humans, fooling them into believing they are speaking with another human. All of his predictions require a computer to think and reason in the same manner as a human. Despite 50 years of effort, only the chess championship has come true. By refocusing artificial intelligence research on a more humanlike, cognitive model, the field will create machines that are truly intelligent, capable of meeting Turing's goals. Currently, the only "intelligent" programs and computers are not really intelligent at all; rather, they are clever applications of different algorithms lacking expandability and versatility. The human intellect has been used in only limited ways in the artificial intelligence field; however, it is the ideal model upon which to base research. Concentrating research on a more cognitive model will allow the artificial intelligence (AI) field to create more intelligent entities and ultimately, once appropriate hardware exists, a true AI.
Specifically, in how the theory likens conscious intelligence to a mimicry of consciousness. In his study of computing and consciousness, Alan Turing developed the Turing Test, which essentially led to the notion that if a computing machine or artificial intelligence could perfectly mimic human communication, it was deemed ‘conscious’. REF. However, many disagree, arguing that while computers may be able to portray consciousness and semantics, this is not commensurate with actual thought and consciousness. Simulation is not the same as conscious thinking, nor as having a conscious understanding of the semantic properties of the symbols being manipulated. This flaw was portrayed in John Searle’s thought experiment, ‘The Chinese Room’. Searle places a person who cannot speak Chinese in a room with various Chinese characters and a book of instructions, while a person outside the room who speaks Chinese communicates through written Chinese messages passed into the room. The non-Chinese speaker responds by manipulating the uninterpreted Chinese characters, or symbols, in accordance with the syntactical instruction book, giving the illusion that they can speak Chinese. This process simulates the operation of a computer program, yet the non-Chinese speaker clearly has no understanding of the messages, or of Chinese, and is still able to produce
...lligent, intentional activity taking place inside the room and the digital computer. Proponents of Searle’s argument, however, would counter that if an entity, such as a human being or a computer, merely does computation, it cannot understand the meanings of the symbols it uses. They maintain that digital computers understand neither the input given in nor the output given out. But it cannot be claimed that digital computers as whole systems cannot understand. Someone who only inputs data, being only one part of the system, cannot know about the system as a whole. If there is a person inside the Chinese Room manipulating the symbols, that person is already intentional and has mental states; thus, through the seamless integration of their hardware and software into systems that understand inputs and outputs as wholes, digital computers too have states of mind.
In the past few decades we have seen computers become more and more advanced, challenging the abilities of the human brain. We have seen computers carry out complex assignments like launching a rocket or analyzing data from outer space. But the human brain is responsible for thought, feelings, creativity, and the other qualities that make us human, so the brain has to be more complex and more complete than any computer. Besides, if the brain created the computer, the computer cannot be better than the brain. There are many differences between the human brain and the computer, for example, the capacity to learn new things. Even the most advanced computer can never learn the way a human does. While we might be able to load new information onto a computer, it can never learn new material by itself. Computers are also limited in what they “learn” by the memory or hard-disk space they have left, unlike the human brain, which is constantly learning every day. Computers can neither make judgments about what they are “learning” nor disagree with the new material; they must accept into their memory whatever is programmed into them. Besides, everything found in a computer is based on what the human brain has acquired through experience.