Searle’s paper “Minds, Brains, and Programs” was originally published in Behavioral and Brain Sciences in 1980. It has become one of the most disputed and discussed pieces in modern philosophy and, more broadly, cognitive science, owing to the nature of the argument it presents. In it, John Searle seeks to dispute the claim that artificial intelligence, in the form of computers and programs, does, or at the most basic level ever could, think for itself; essentially, it is a refutation of the idea that computers or programs can actually “understand” in the same way a human can. The argument is formulated around two distinct claims: (1) intentionality in humans is a product of causal features of the brain, i.e., minds are a product of brains; and (2) instantiating a computer program is never by itself a sufficient condition of intentionality, i.e., just because a computer or program is given the resources to understand does not mean it can ever actually understand. Before we examine these claims further, it is important to illustrate exactly what type of artificial intelligence Searle is addressing. There are two types of AI: “weak AI” and “strong AI.” The layman’s understanding of weak AI, or “cautious AI” as some call it, is that the computer or program is simply a tool, something that can assist the human mind in a more powerful way. This boils weak AI down to a resource for simulating mental abilities, useful in fields such as psychology or medicine where processes like hypothesis testing are important, something a computer can simulate better than a human. It is important to…

…robot that performs its various motor functions. Still, Searle argues, the user understands nothing beyond the scope of symbol manipulation.
Running through the program does not produce any mental state of a meaningful type. Searle also argues that in this case there is a tacit concession against the argument for strong AI: the Robot Reply suggests that cognition and understanding in computers are not solely a matter of manipulating symbols, contrary to what strong AI actually supposes. The tacit concession results from adding a set of causal relations to the outside world (http://www.iep.utm.edu/chineser/#SH2b).

References:
Searle, John. “Minds, Brains, and Programs.” p. 417.
http://dictionary.reference.com/browse/turing+test?s=ts
http://psych.utoronto.ca/users/reingold/courses/ai/turing.html
http://www.iep.utm.edu/chineser/
http://www.iep.utm.edu/chineser/#SH2b
Computationalism: the view that computation, an abstract notion of symbol manipulation lacking semantics and real-world interaction, offers an explanatory basis for human comprehension. The main purpose of this paper is to discuss and compare different views regarding computationalism, along with the arguments associated with those views. The two arguments I find strongest are those proposed by Andy Clark, in “Mindware: Meat Machines,” and John Searle, in “Minds, Brains, and Programs.”
It is easy for Searle to respond to this claim; there is no evidence he needs to refute. He even says he is "embarrassed" to respond to the idea that a whole system, apart from the human brain, could be capable of understanding. He asks the key question that Lycan never answers: "Where is the understanding in this system?" Although Lycan tries to refute Searle's views, his arguments are not backed by proof. Lycan responds that Searle is looking only at the "fine details" of the system and not at the system as a whole. While it is possible that Searle is not looking at the system as a whole, this still does not explain, or offer any proof of, where the thinking in the system is.
Searle's argument delineates what he believes to be the invalidity of the view of the human mind held by the computational paradigm and artificial intelligence (AI). He first distinguishes between strong and weak AI. Searle finds weak AI a perfectly acceptable line of investigation, in that it uses the computer as a powerful tool for studying the mind. It does not observe or formulate any contentions as to the operation of the mind, but is used as another psychological, investigative mechanism. In contrast, strong AI holds that a computer can be created that actually is a mind. We must first describe what exactly this entails. In order to be a mind, the computer must be able not only to understand but to have cognitive states. Also, the programs by which the computer operates are the focus of the computational paradigm, and these are the explanations of the mental states. Searle's argument is against the claims of Schank and other computationalists, associated with programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribe...
John Searle’s Chinese room argument, from his work “Minds, Brains, and Programs,” is a thought experiment directed against the premises of strong Artificial Intelligence (AI). Those premises hold that a system qualifies as strong AI if it can understand and can explain how human understanding works. I will argue that the Chinese room argument successfully disproves the conclusion of strong AI; however, it does not provide an explanation of what understanding is, which becomes problematic when drawing a distinction between humans and machines.
This world of artificial intelligence has the power to produce many questions and theories because we do not yet understand what is and is not possible. “How smart’s an AI, Case? Depends. Some aren’t much smarter than dogs. Pets. Cost a fortune anyway. The real smart ones are as smart as the Turing heat is willing to let ‘em get.” (Page 95) This shows that an artificial intelligence can be programmed to only do certain ...
In “Can Computers Think?”, Searle argues that computers are unable to think like humans can. He argues this
At the end of chapter two, Searle summarizes his criticism of functionalism in the following way. The mental processes of a mind are caused entirely by processes occurring inside the brain; there is no external cause that determines what a mental process will be. Also, there is a distinction between the identification of symbols and the understanding of what the symbols mean. Computer programs are defined by symbol identification rather than understanding, whereas minds carry out mental processes by understanding what the symbols mean. The conclusion that follows is that computer programs by themselves are not minds and do not have minds. In addition, a mind cannot be the result of running a computer program. Therefore, minds and computer programs are not entities with the same kind of state. Although both are capable of input and output interactions, only the mind is capable of truly thinking and understanding, and this quality is what distinguishes the mental state of a mind from the systemic state of a digital computer.
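The distinction between symbol identification and understanding can be made concrete with a toy sketch (not from Searle's text; the story and rules here are hypothetical, loosely in the spirit of the script-based question-answering systems Searle criticizes). The program "answers" questions by matching uninterpreted strings against a table; at no point does it represent what any word means:

```python
# Hypothetical illustration: answering questions about a story by pure
# symbol matching. The program identifies symbols (strings) but never
# understands them.

RULES = {
    "did the man eat the hamburger?": "yes",
    "was the hamburger burnt?": "no",
}

def answer(question: str) -> str:
    """Look the question up as an uninterpreted string and return a
    canned reply. Nothing here models what a 'hamburger' is."""
    return RULES.get(question.lower().strip(), "i do not know")

print(answer("Did the man eat the hamburger?"))  # prints "yes"
```

The program produces correct input-output behavior for the questions it was given, which is exactly why, on Searle's view, input-output success alone cannot establish understanding.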
The official foundations for "artificial intelligence" were set forth by A. M. Turing in his 1950 paper "Computing Machinery and Intelligence," in which he proposed his famous test and made predictions about the field (the term "artificial intelligence" itself was coined later, by John McCarthy in the mid-1950s). Early predictions held that a computer would soon be able to formulate and prove complex mathematical theorems, write music and poetry, become world chess champion, and pass Turing's test of artificial intelligence. In the test, a computer is required to carry on a compelling conversation with humans, fooling them into believing they are speaking with another human. All of these goals require a computer to think and reason in the same manner as a human. Despite 50 years of effort, only the chess championship has come true. By refocusing artificial intelligence research on a more humanlike, cognitive model, the field will create machines that are truly intelligent, capable of meeting Turing's goals. Currently, the only "intelligent" programs and computers are not really intelligent at all; rather, they are clever applications of different algorithms lacking expandability and versatility. The human intellect has been used only in limited ways in the artificial intelligence field, yet it is the ideal model upon which to base research. Concentrating research on a more cognitive model will allow the artificial intelligence (AI) field to create more intelligent entities and ultimately, once appropriate hardware exists, a true AI.
If a machine passes the test, then it is clear that, for many ordinary people, this would be sufficient reason to say that it is a thinking machine. And, in fact, since it is able to converse with a human and actually fool him into believing that the machine is human, this would seem t...
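The kind of "clever application of an algorithm" that can seem conversational without any understanding can be sketched as a minimal keyword-and-template responder, loosely in the spirit of ELIZA-style systems (the rules below are hypothetical, not Weizenbaum's originals):

```python
import re

# Each rule pairs a pattern with a template that reflects the user's own
# words back at them. No semantics anywhere, only string rewriting.
PATTERNS = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
]

def respond(utterance: str) -> str:
    """Return the first matching template, or a generic prompt."""
    for pattern, template in PATTERNS:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please go on."  # default when no pattern matches

print(respond("I am worried about machines."))
# prints "Why do you say you are worried about machines?"
```

A handful of such rules can sustain a superficially plausible exchange, which is precisely why fooling an interlocutor, by itself, is a contested criterion for thought.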
John Searle developed two areas of thought concerning the independent cognition of computers, defining "weak AI" and "strong AI." In essence, the two types have fundamental differences. Weak AI was defined as a system that merely simulates the human mind, whereas strong AI was characterized as a system completely capable of cognitive processes such as consciousness and intentionality, as well as understanding. He utilizes the Chinese room argument to show that strong AI does not exist.
In this paper, I have attempted to explain, concisely yet methodically, the Turing Test and its respective objections and rebuttals. Turing's and Searle's comparisons between humans and computers, made in a similarly methodological manner, illustrate their opposing views on the topic. However, following Searle's reasoning against Turing's experiment, it is clear that his argument lacks adequate support, most notably in Searle's tendency to base his theories on assumptions. As a result, Turing's responses effortlessly undermine any substance Searle might have had, thus proving Turing's to be the stronger theory.
Artificial Intelligence, also known as AI, allows a machine to function as if it had the capability to think like a human. While we are not expecting any hovering cars anytime soon, artificial intelligence is projected to have a major impact on the labor force and, by some projections, could replace about half the workforce in the United States in the decades to come. Research in artificial intelligence is advancing at a rapid, seemingly unstoppable rate. So while many people feel threatened by the possibility of a robot taking over their job, computer scientists actually propose that robots would benefit a country's efficiency of production, allowing individuals to reap the benefits of the robots. For the advantage of all, researchers and analysts have begun to revise past ideas of human-robot interaction. They have drawn inspiration from the literary works of Isaac Asimov, whom many saw as the first roboticist and a man ahead of his time, and have also drawn on scholarly research done by expert analysts. These efforts have begun to shape the idea of a workforce where humans and robots work together in harmony on a daily basis.
It is fascinating that non-living things can think, reason, plan, solve problems, and perceive, just as humans can. Robots and systems became sentient, self-aware beings, going against their defining trait (that robots and machines lack emotion).
The traditional notion that seeks to compare human minds, with all their intricacies and biochemical functions, to artificially programmed digital computers is self-defeating, and it should be discredited in dialogues regarding the theory of artificial intelligence. This traditional notion is akin to comparing, in crude terms, cars and aeroplanes, or ice cream and cream cheese. Human mental states are caused by various behaviours of elements in the brain, and these behaviours are determined by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss the mental states of systems, it is important to distinguish between human brains and those of any natural or artificial organisms said to have central processing systems (e.g., the brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of function and behaviour, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether mental states exist at all in systems other than our own, in this paper I will strive to argue that a machine that computes and responds to inputs does indeed have a state of mind, but one that does not necessarily result in a form of mentality. This paper will discuss how the states and intentionality of digital computers differ from the states of human brains and yet are indeed states of a mind, resulting from various functions in their central processing systems.
Artificial Intelligence “is the ability of a human-made machine to emulate or simulate human methods for the deductive and inductive acquisition and application of knowledge and reason” (Bock, 182). The early years of artificial intelligence were seen through robots, which exemplified the field’s advances and potential, while today AI has been integrated into society through technology. Thought about artificial intelligence began concurrently with the rise of the computer in the mid-twentieth century. For many, the utilization of computers was the most advanced role they could ever see machines taking. However, life has drastically changed since the 1950s. This essay will explore the history of artificial intelligence, discuss the