Searle’s paper “Minds, Brains, and Programs” was originally published in Behavioral and Brain Sciences in 1980. It has become one of modern philosophy’s (and, more broadly, cognitive science’s) most disputed and discussed pieces because of the argument it presents. In the paper, John Searle seeks to dispute the claim that artificial intelligence in the form of computers and programs does, or even could one day, think for itself; essentially, it is a refutation of the idea that computers or programs can actually “understand” in the same way that a human can. The argument rests on two distinct claims: (1) Intentionality in humans is a product of causal features of the brain; that is, minds are a product of brains. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality; that is, just because a computer or program is given the resources to understand does not mean it can ever actually understand.

Before we examine these claims further, it is important to specify exactly what type of artificial intelligence Searle is addressing. There are two types of AI: “weak AI” and “strong AI.” The layman’s understanding of weak AI, or “cautious AI” as some call it, is that the computer or program is simply a tool, something that can extend the human mind in a more powerful way. This boils weak AI down to a resource for simulating mental abilities, useful in fields such as psychology or medicine where processes like hypothesis testing are important, something a computer can simulate better than a human. It is important to... [...] ...robot that performs its various motor functions. Still, Searle argues, the user understands nothing beyond symbol manipulation.
Running the program does not give rise to any mental state of a meaningful kind. Searle also argues that in this case there is a tacit concession to his side of the argument: the Robot Reply suggests that cognition and understanding in computers are not solely a matter of manipulating symbols, contrary to what strong AI actually supposes. The concession lies in adding a set of causal relations to the outside world (http://www.iep.utm.edu/chineser/#SH2b).

Works cited:
Searle, John. “Minds, Brains, and Programs,” p. 417.
http://dictionary.reference.com/browse/turing+test?s=ts
http://psych.utoronto.ca/users/reingold/courses/ai/turing.html
http://www.iep.utm.edu/chineser/
http://www.iep.utm.edu/chineser/#SH2b
Andy Clark argues forcefully that computers have the potential to be intelligent beings in his work “Mindware: Meat Machines.” To support this claim, Clark draws a comparison between humans and machines: both use arrays of symbols to perform their functions. The main argument of his work can be interpreted as follows:
It is easy for Searle to respond to this claim; there is no evidence he needs to refute. He even says that he is "embarrassed" to respond to the idea that a whole system apart from the human brain could be capable of understanding. He asks the key question that Lycan never answers: "Where is the understanding in this system?" Although Lycan tries to refute Searle's views, his arguments are not backed with proof. Lycan responds that Searle is looking only at the "fine details" of the system and not at the system as a whole. It may be that Searle is not looking at the system as a whole, but that still does not explain or show where the thinking in the system is.
... in the 21st century, and it might already dominate human life. Jastrow predicted that computers would become part of human society in the future, and Levy’s real-life examples match Jastrow’s prediction. The computer intelligence Jastrow described was about imitating the human brain and its reasoning mechanisms. However, according to Levy, computer intelligence today is about developing AI’s own reasoning patterns and handling complicated tasks from data sets and algorithms, which is nothing like a human. Judged from Levy’s view of today’s AI technology, Jastrow’s prediction about AI evolution is not going to happen. Since computer intelligence does not aim to recreate a human brain, the whole idea of computers substituting for humans does not hold. Levy also says it is pointless to fear that AI may come to control humans, since people in today’s society already cannot live without computer intelligence.
Searle's argument delineates what he believes to be the invalidity of the computational paradigm's and artificial intelligence's (AI) view of the human mind. He first distinguishes between strong and weak AI. Searle regards weak AI as a perfectly acceptable line of investigation in that it uses the computer as a powerful tool for studying the mind. It does not observe or formulate any contentions about the operation of the mind, but serves as another psychological, investigative mechanism. In contrast, strong AI holds that a computer can be created that actually is the mind. We must first describe what exactly this entails. In order to be the mind, the computer must be able not only to understand, but to have cognitive states. Moreover, the programs by which the computer operates are the focus of the computational paradigm, and these are taken to be the explanations of the mental states. Searle's argument is against the claims of Schank and other computationalists behind programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribe...
Searle’s argument is one against humans having free will. The conclusion follows from his views on determinism and on substances. His view on substances is a materialist one: to him, the entire world is composed of material substances, and all occurrences can be explained by these materials. This view sits very comfortably with determinism. Determinism holds that for any occurrence there must be necessary causes, and this deterministic cause-and-effect relationship is apparent in the physical world. Hard determinists see determinism as excluding free will. Searle, being a materialist, views humans as just another material substance; he accepts determinism and rejects (libertarian) free will.
I will begin by providing a brief overview of the thought experiment and how Searle derives his argument. Imagine there is someone in a room, say Searle himself, and he has a rulebook that explains what to write when he sees certain Chinese symbols. On the other side of the room is a Chinese speaker who writes Searle a note. After Searle receives the message, he must respond; he uses the rulebook to write a perfectly coherent response back to the actual Chinese speaker. Viewed objectively, you would not say that Searle is actually able to write Chinese fluently; he does not understand Chinese, he only knows how to manipulate symbols. Searle argues that this is exactly what happens if a computer were to respond to the note in Chinese. He claims that computers are only able to compute information without actually being able to understand the information they are computing. This fails the first premise of strong AI. It also fails the second premise of strong AI because even if a computer were capable of understanding the communication it is having in Chinese, it would not be able to explain how this understanding occurs.
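The rulebook procedure described above can be sketched as a short program. This is only an illustrative toy, not anything from Searle's paper: the rulebook here is a hypothetical, invented mapping from input notes to canned replies, and the point is that the program produces fluent-looking output purely by matching symbol shapes.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The RULEBOOK below is a hypothetical, invented mapping: each rule
# pairs an input pattern with a canned response. No meaning of any
# symbol is represented anywhere in the program.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(note: str) -> str:
    """Return the rulebook's response for a note, matching shapes only.

    The function never parses, translates, or interprets the symbols;
    it succeeds or fails purely on string equality, which is Searle's
    point: producing the correct output does not imply understanding.
    """
    return RULEBOOK.get(note, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # prints 我很好，谢谢。
```

The "person in the room" corresponds to the lookup in `chinese_room`: the reply is coherent to the Chinese speaker outside, yet the function manipulates uninterpreted strings and nothing more.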
Since antiquity the human mind has been intrigued by the idea of artificial intelligence; hence, the rapid growth of computer science has raised many issues concerning the isolation of the human mind.
This world of artificial intelligence has the power to produce many questions and theories, because we do not yet understand what is and is not possible. “How smart’s an AI, Case? Depends. Some aren’t much smarter than dogs. Pets. Cost a fortune anyway. The real smart ones are as smart as the Turing heat is willing to let ‘em get.” (Page 95) This shows that an artificial intelligence can be programmed to do only certain ...
The “human sense of self control and purposefulness, is a user illusion” (261); therefore, if computational systems are comparable to human consciousness, the question arises whether such artificial systems should be treated as humans. Such programs are even capable of learning like children, with time and experience; the programs “[get] better at their jobs with experience.” Many argue, however, that the difference is self-awareness, and that there are many organisms that can conduct similarly complex behavior but have no sense of identity.
In “Can Computers Think?”, Searle argues that computers are unable to think like humans can. He argues this
People love to read stories and watch movies about science-fictional societies that include robots with artificial intelligence. People are intrigued by robots that seem to demonstrate what we humans consider morality. Eando Binder’s and Isaac Asimov’s short stories, as well as the 2004 Hollywood movie, all carry the title “I, Robot” and introduce possible futuristic worlds where robots are created and integrated within society. These stories challenge our perceptions of robots, which could perhaps become an everyday commodity, or even valued assistants to human society. The different generations of “I, Robot” set out principles of robot behavior and showcase robots to people in both different and similar ways. How does the robot view itself? More importantly, how does society judge these creations? The concepts discussed in these three stories cover almost 75 years of storytelling. Why has this theme stayed so relevant for so long?
The official foundations for "artificial intelligence" were set forth by A. M. Turing in his 1950 paper "Computing Machinery and Intelligence," wherein he also coined the term and made predictions about the field. He claimed that by 1960, a computer would be able to formulate and prove complex mathematical theorems, write music and poetry, become world chess champion, and pass his test of artificial intelligence. In his test, a computer is required to carry on a compelling conversation with humans, fooling them into believing they are speaking with another human. All of his predictions require a computer to think and reason in the same manner as a human. Despite 50 years of effort, only the chess championship has come true. By refocusing artificial intelligence research on a more humanlike, cognitive model, the field will create machines that are truly intelligent, capable of meeting Turing's goals. Currently, the only "intelligent" programs and computers are not really intelligent at all; rather, they are clever applications of different algorithms lacking expandability and versatility. The human intellect has been used only in limited ways in the artificial intelligence field; however, it is the ideal model upon which to base research. Concentrating research on a more cognitive model will allow the artificial intelligence (AI) field to create more intelligent entities and ultimately, once appropriate hardware exists, a true AI.
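The structure of the test described above can be sketched schematically. Everything in this sketch is an invented stand-in, not Turing's own formalism: `ask_human` and `ask_machine` are hypothetical respondent functions, and the `judge` simply guesses from the transcripts. The sketch only shows the shape of the protocol, a judge interrogating two hidden respondents over a text channel and trying to tell which is the machine.

```python
# A schematic sketch of Turing's imitation game. All function names
# (judge, ask_human, ask_machine) are hypothetical stand-ins invented
# for illustration; only the protocol's structure is being modeled.
import random

def imitation_game(judge, ask_human, ask_machine, questions):
    """Return True if the judge mistakes the machine for the human."""
    transcripts = {"A": [], "B": []}
    # Randomly assign the machine to door A or B so the judge cannot cheat.
    machine_door = random.choice(["A", "B"])
    for q in questions:
        for door in ("A", "B"):
            respondent = ask_machine if door == machine_door else ask_human
            transcripts[door].append((q, respondent(q)))
    # The machine "passes" if the judge picks its door as the human's.
    return judge(transcripts) == machine_door

# Toy run: both respondents answer identically, so the judge's verdict
# is effectively a coin flip and the machine fools the judge about
# half the time over repeated trials.
echo = lambda q: "I would rather not say."
guess = lambda transcripts: random.choice(["A", "B"])
print(imitation_game(guess, echo, echo, ["Are you human?"]))
```

Turing's point, echoed in the paragraph above, is that the judge sees nothing but the text channel: if the machine's answers are indistinguishable from the human's, the test offers no way to tell them apart.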
Specifically, in how the theory likens conscious intelligence to a mimicry of consciousness. In Alan Turing’s study of computing and consciousness, he developed the Turing Test, which essentially led to the notion that if a computing machine or artificial intelligence could perfectly mimic human communication, it was deemed ‘conscious’. REF. However, many do not agree, and instead argue that while computers may be able to portray consciousness and semantics, this is not commensurable with actual thought and consciousness. Simulation is not the same as conscious thinking, nor the same as having a conscious understanding of the semantic properties of the symbols being manipulated. This flaw was portrayed in John Searle’s thought experiment, ‘The Chinese Room’. Searle places a person who cannot speak Chinese in a room with various Chinese characters and a book of instructions, while a person outside the room who speaks Chinese communicates through written Chinese messages passed into the room. The non-Chinese speaker responds by manipulating the uninterpreted Chinese characters, or symbols, in conjunction with the syntactical instruction book, giving the illusion that they can speak Chinese. This process simulated the operation of a computer program, yet the non-Chinese speaker clearly had no understanding of the messages, or of Chinese, and was still able to produce
...lligent, intentional activity taking place inside the room and the digital computer. The proponents of Searle’s argument, however, would counter that an entity which merely performs computation, whether a human being or a computer, cannot understand the meanings of the symbols it uses. They maintain that digital computers understand neither the input given in nor the output given out. But it cannot be claimed that digital computers as whole systems cannot understand. Someone who only inputs data, being only a part of the system, cannot know about the system as a whole. The person inside the Chinese room manipulating the symbols is already intentional and has mental states; likewise, owing to the seamless integration of their hardware and software, which handle inputs and outputs as whole systems, digital computers too have states of mind.
With the development of technology in the world, people are faced with many things they have never seen or known before. In this modern life, technology has affected people’s lives on many levels. Robots are considered important products of technology. The word “robot” was introduced by the writer Karel Čapek, from the Czech word robota, meaning “forced labor” or “serf”. Čapek used this word in his play R.U.R. (Rossum's Universal Robots), which opened in Prague in January 1921, a play in which an Englishman named Rossum mass-produces automata. The automata, robots, are meant to do the world’s work and to make a better life for humans; but in the end they rebel, wipe out humanity, and start a new race of intelligent life of their own (Asimov, 1984). “Robot” does not have a single specific definition; every dictionary defines it slightly differently. “Deciding if a machine is or is not a robot is like trying to decide if a certain shade of greenish blue is truly blue or not blue,” said Carlo Bertocchini, the owner of RobotBooks.com. “Some people will call it blue while others will vote not blue” (Branwyn, 2004). This essay will limit the meaning of robot to the definition in the Merriam-Webster Dictionary (2004): a robot is a machine that looks and acts like a human being, an efficient but insensitive person, a device that automatically performs especially repetitive tasks, or something guided by automatic controls. As technology grows more modern each day, scientists and programmers are creating and improving the functions of robots. Nevertheless, many people are still debating whether robots should be developed further and whether they should be used in everyday life. I disagree that the further development of robots should remain...