In “Minds, Brains, and Programs,” John Searle argues that artificial intelligence is not capable of human understanding. This paper hopes to show that although artificial intelligence may not understand in precisely the way the human mind does, it is not therefore without any capacity for understanding. The type of artificial intelligence Searle's argument focuses on is “strong AI”. “Strong AI”, in contrast to “weak AI”, which is described as being only a “very powerful tool” for use in the study of the human brain, is said to be programmed with functionality equal to that of the human mind. In this way, the programming of strong AI is said to have the capacity to understand and to have “other cognitive states”.
Intentionality is a central feature of what it means to understand in Searle's argument: by “intentionality” Searle means the ability to independently manifest changeable internal states of being. These states can be interpreted as moods, desires, ideas, and any other mental state directed at things or topics. Searle argues that because AI is composed of programs created to receive and express predetermined information (inputs and outputs), it is not in possession of intentionality. Instead, Searle argues that what AI exhibits is not intentionality but fixed
Neither does he allow room for the evolution of AI technology to a point where he might identify the computations of some AI as a kind of intentionality, albeit not a biologically based one. Nor does he in any way offer flexibility in his defining terms of understanding so that they might be applied to AI. This paper seeks to explore AI as a realm in which intentionality and understanding are at least possible in their own specific context. Searle rejects the concept of AI as capable of understanding based on his assertion that AI has no intentionality, and he is able to do this because he grounds intentionality in the biological phenomena of the brain. By rooting intentionality in biology, Searle makes it easy to ignore all other forms of intentionality, in this case, technological ones. Searle's biologically based intentionality comprises mental states, including general thinking. But, considered more broadly, thinking is just a process of organizing and accessing information. This type of action need not be limited to human minds, and can certainly be seen in a diverse array of AI. Some of the most basic AI organizes and assesses information, and when it is required to access and exhibit certain information, the processes necessary to do so can be seen as a sort of
In his work “Mindware: Meat Machines,” Andy Clark argues strongly for the theory that computers have the potential to be intelligent beings. Clark supports his claims by comparing humans and machines as systems that both use arrays of symbols to perform their functions. The main argument of his work can be interpreted as follows:
It is easy for Searle to respond to this claim, for there is no evidence he needs to refute. He even says that he is "embarrassed" to respond to the idea that a whole system apart from the human brain could be capable of understanding. He asks the key question, which Lycan never answers: "Where is the understanding in this system?" Although Lycan tries to refute Searle's views, his arguments are not backed with proof. Lycan responds that Searle is looking only at the "fine details" of the system and not at the system as a whole. While it is possible that Searle is not looking at the system as a whole, this still does not explain, or offer any proof of, where the thinking in the system is.
Searle’s argument is one against humans having free will. The conclusion follows from his views on determinism and on substances. His view of substances is a materialist one: to him, the entire world is composed of material substances, and all occurrences can be explained by these materials.
Searle's argument delineates what he believes to be the invalidity of the view of the human mind held by the computational paradigm and artificial intelligence (AI). He first distinguishes between strong and weak AI. Searle finds weak AI a perfectly acceptable investigation in that it uses the computer as a powerful tool for studying the mind. This in effect makes no claims about the operation of the mind, but serves as another psychological, investigative mechanism. In contrast, strong AI states that the computer can be created so that it actually is the mind. We must first describe what exactly this entails. In order to be the mind, the computer must be able not only to understand, but to have cognitive states. Also, the programs by which the computer operates are the focus of the computational paradigm, and these are taken as the explanations of the mental states. Searle's argument is against the claims of Schank and other computationalists, who have created programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribe...
I will begin by providing a brief overview of the thought experiment and how Searle derives his argument. Imagine someone in a room, say Searle himself, with a rulebook that explains what to write when he sees certain Chinese symbols. On the other side of the room is a Chinese speaker who writes Searle a note. After Searle receives the message, he must respond, so he uses the rulebook to write a perfectly coherent reply back to the actual Chinese speaker. Although from the outside it appears that Searle writes Chinese fluently, from an objective perspective he does not understand Chinese; he only knows how to manipulate symbols. Searle argues that this is exactly what happens when a computer responds to the note in Chinese. He claims that computers are only able to compute information without actually understanding the information they are computing. This undermines the first premise of strong AI. It also undermines the second premise, because even if a computer were capable of understanding the communication it is having in Chinese, its program would not be able to explain how this understanding occurs.
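The rulebook Searle describes can be sketched as a simple lookup table. This is only a minimal illustration, not Searle's own formulation; the symbol and reply strings are invented placeholders standing in for Chinese characters.

```python
# A minimal sketch of the Chinese Room rulebook as a lookup table.
# The "symbols" and "replies" are hypothetical placeholders: the point
# is that the program maps input strings to output strings while no
# representation of meaning appears anywhere in it.

RULEBOOK = {
    "symbol-A": "reply-X",
    "symbol-B": "reply-Y",
}

def room_occupant(message: str) -> str:
    """Return the reply the rulebook dictates for this message.

    Nothing here models what the message *means*; the function
    performs purely syntactic matching, which is Searle's point.
    """
    return RULEBOOK.get(message, "default-reply")
```

However fluent the replies look from outside the room, the occupant (the function) consults only the shapes of the symbols, never their semantics.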
Computers are machines that take in only syntactical information and then function based on a program itself made from syntactical information. They cannot change the function of that program unless formally instructed to through more information. That is inherently different from a human mind, in that a computer never takes semantic information into account in its programming. Searle’s formal argument thus amounts to this: brains cause minds; semantics cannot be derived from syntax alone; computers are defined by a formal, in other words syntactical, structure; and minds have semantic content. The argument then concludes that the way the mind functions in the brain cannot be likened to running a program in a computer, and that programs themselves are insufficient to give a system thought. (Searle, p. 682) In conclusion, a computer cannot think, and the view of strong AI is false. Further support for this argument is provided by Searle’s Chinese Room thought experiment. In the Chinese Room, I, who do not know Chinese, am locked in a room that has several baskets filled with Chinese symbols. Also in the room is a rulebook that specifies various manipulations of the symbols based purely on their syntax, not their semantics. For example, a rule might say move the squiggly
At the end of chapter two, Searle summarizes his criticism of functionalism in the following way. The mental processes of a mind are caused entirely by processes occurring inside the brain; there is no external cause that determines what a mental process will be. Also, there is a distinction between the identification of symbols and the understanding of what the symbols mean. Computer programs are defined by symbol identification rather than understanding, whereas minds define mental processes by the understanding of what a symbol means. The conclusion that follows is that computer programs by themselves are not minds and do not have minds. In addition, a mind cannot be the result of merely running a computer program. Minds and computer programs are therefore not entities with the same mental state. They are quite different, and although both are capable of input and output interactions, only the mind is capable of truly thinking and understanding. This quality is what distinguishes the mental state of a mind from the systemic state of a digital computer.
One of the topics that modern science has focused on for a long time is the field of artificial intelligence: the study of intelligence in machines or, according to Minsky, “the science of making machines do things that would require intelligence if done by men” (qtd. in Copeland 1). Artificial intelligence has many applications and is used in many areas. “We often don’t notice it but AI is all around us. It is present in computer games, in the cruise control in our cars and the servers that route our email.” (BBC 1). Different goals have been set for the science of artificial intelligence, but according to Whitby the most frequently mentioned goal of AI is the one provided by the Turing Test. This test is also called the imitation game, since it is basically a game in which a computer imitates a conversing human. In my analysis of the Turing Test I will focus on its features, its historical background, and an evaluation of its validity and importance.
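The structure of the imitation game can be sketched in a few lines. This is a deliberately toy illustration, not a real Turing Test: the two respondents and their canned replies are invented, and the "test" reduces to whether the interrogator can distinguish the machine's text from the human's.

```python
# A toy sketch of the imitation game's structure. Both respondents
# are hypothetical stand-ins; the interrogator sees only text and
# must judge which respondent is the machine.

def machine_reply(prompt: str) -> str:
    # A machine that has learned to produce human-sounding replies.
    return "That's an interesting question."

def human_reply(prompt: str) -> str:
    return "That's an interesting question."

def indistinguishable(prompt: str) -> bool:
    """True when this exchange gives the interrogator no textual
    basis for telling the machine from the human."""
    return machine_reply(prompt) == human_reply(prompt)
```

On Turing's proposal, passing enough such exchanges is what entitles the machine to be called intelligent; Searle's objection, developed below, is that indistinguishable output says nothing about understanding.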
Specifically, the objection concerns how the theory likens conscious intelligence to a mimicry of consciousness. In his study of computing and consciousness, Alan Turing developed the Turing Test, which essentially led to the notion that if a computing machine or artificial intelligence could perfectly mimic human communication, it was deemed ‘conscious’. REF. However, many do not agree, and instead argue that while computers may be able to portray consciousness and semantics, this is not commensurable with actual thought and consciousness. Simulation is not the same as conscious thinking, nor as having a conscious understanding of the semantic properties of the symbols being manipulated. This flaw was portrayed in John Searle’s thought experiment, ‘The Chinese Room’. Searle places a person who cannot speak Chinese in a room with various Chinese characters and a book of instructions, while a person outside the room who speaks Chinese communicates through written Chinese messages passed into the room. The non-Chinese speaker responds by manipulating the uninterpreted Chinese characters, or symbols, in conjunction with the syntactical instruction book, giving the illusion that they can speak Chinese. This process simulates the operation of a computer program, yet the non-Chinese speaker clearly has no understanding of the messages, or of Chinese, and is still able to produce
This world of artificial intelligence has the power to produce many questions and theories, because we do not yet understand what is and is not possible. “How smart’s an AI, Case? Depends. Some aren’t much smarter than dogs. Pets. Cost a fortune anyway. The real smart ones are as smart as the Turing heat is willing to let ‘em get.” (Page 95) This shows that an artificial intelligence can be programmed to only do certain ...
Since antiquity the human mind has been intrigued by artificial intelligence; hence, the rapid growth of computer science has raised many issues concerning the isolation of the human mind.
John Searle developed two areas of thought concerning the independent cognition of computers: the definitions of weak AI and strong AI. In essence, these two types of AI have fundamental differences. Weak AI was defined as a system that merely simulates the human mind, while strong AI was characterized as a system completely capable of cognitive processes such as consciousness and intentionality, as well as understanding. He utilizes the Chinese Room argument to show that strong AI does not exist.
The “human sense of self control and purposefulness, is a user illusion”; therefore, if computational systems are comparable to human consciousness, the question arises of whether such artificial systems should be treated as humans. (261) Such programs are even capable of learning like children, with time and experience; the programs “[get] better at their jobs with experience.” However, many can argue that the difference is self-awareness, and that there are many organisms that can conduct such complex behavior but have no sense of identity.
...lligent, intentional activity taking place inside the room and the digital computer. The proponents of Searle’s argument, however, would counter that if there is an entity which does computation, such as a human being or a computer, it cannot understand the meanings of the symbols it uses. They maintain that digital computers understand neither the input given in nor the output given out. But it cannot be claimed that digital computers as wholes cannot understand. Someone who only inputs data, being only a part of the system, cannot know about the system as a whole. If there is a person inside the Chinese room manipulating the symbols, that person is already intentional and has a mental state; thus, due to the seamless integration of their hardware and software into systems that understand the inputs and outputs as wholes, digital computers too have states of mind.
In order to see how artificial intelligence plays a role in today’s society, I believe it is important to dispel any misconceptions about what artificial intelligence is. Artificial intelligence has been defined many different ways, but the commonality among all of them is that artificial intelligence is the theory and development of computer systems able to perform tasks that would normally require human intelligence, such as decision making, visual recognition, or speech recognition. However, human intelligence is a very ambiguous term. I believe there are three main attributes an artificial intelligence system has that make it representative of human intelligence (Source 1). The first is problem solving: the ability to look ahead several steps in the decision-making process and to choose the best solution (Source 1). The second is the representation of knowledge (Source 1). While knowledge is usually gained through experience or education, intelligent agents could very well have a different form of knowledge. Access to the internet, the la...
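The "looking ahead several steps" attribute can be sketched concretely. The game below is entirely invented for illustration (a counter that each move increases by 1, 2, or 3, scored by closeness to a target of 10); the sketch simply enumerates every move sequence up to a fixed depth and picks the first move of the best-scoring sequence.

```python
# A tiny sketch of multi-step lookahead in decision making.
# The game, moves, and scoring function are hypothetical examples.
from itertools import product

MOVES = (1, 2, 3)  # each move adds this much to the state

def score(state: int) -> int:
    # Higher is better: states closer to 10 score closer to 0.
    return -abs(10 - state)

def best_first_move(state: int, depth: int = 3) -> int:
    """Exhaustively try every sequence of `depth` moves and return
    the first move of the sequence that ends in the best state."""
    best_sequence = max(
        product(MOVES, repeat=depth),
        key=lambda seq: score(state + sum(seq)),
    )
    return best_sequence[0]
```

For instance, from state 7 with three moves remaining, the only way to land exactly on 10 is three moves of 1, so the lookahead picks 1 even though a greedy single step of 3 would overshoot on the later moves.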