The traditional notion that seeks to compare the human mind, with all its intricacies and biochemical functions, to artificially programmed digital computers is self-defeating and should be discredited in dialogues regarding the theory of artificial intelligence. This notion is akin to comparing, in crude terms, cars and aeroplanes or ice cream and cream cheese. Human mental states are caused by various behaviours of elements in the brain, and those behaviours are governed by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss the mental states of systems, it is important to distinguish between human brains and those of any natural or artificial organisms said to have central processing systems (e.g. the brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of function and behaviour, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether mental states exist at all in systems other than our own, in this paper I will strive to present arguments that a machine that computes and responds to inputs does indeed have a state of mind, but one that does not necessarily result in a form of mentality. This paper will discuss how the states and intentionality of digital computers differ from the states of human brains, and yet how they are indeed states of a mind resulting from various functions in their central processing systems.
The most common refutation of the notion of mental states in digital computers is that there are inherent limits to computation, and that there are inabilities that exist in any algorithm to...
... middle of paper ...
...lligent, intentional activity taking place inside the room and the digital computer. The proponents of Searle’s argument, however, would counter that an entity that merely does computation, whether a human being or a computer, cannot understand the meanings of the symbols it uses. They maintain that digital computers understand neither the input they receive nor the output they produce. But it cannot be claimed that digital computers as whole systems cannot understand. Someone who only inputs data, being only a part of the system, cannot know about the system as a whole. If there is a person inside the Chinese room manipulating the symbols, that person is already intentional and has a mental state; thus, because their hardware and software are seamlessly integrated into systems that understand inputs and outputs as wholes, digital computers too have states of mind.
Searle's argument delineates what he believes to be the invalidity of the view of the human mind held by the computational paradigm and artificial intelligence (AI). He first distinguishes between strong and weak AI. Searle considers weak AI a perfectly acceptable line of investigation, in that it uses the computer as a powerful tool for studying the mind. This approach does not formulate any contentions as to the operation of the mind, but is used as another psychological, investigative mechanism. In contrast, strong AI states that a computer can be created so that it actually is a mind. We must first describe what exactly this entails. In order to be a mind, the computer must be able not only to understand, but to have cognitive states. Moreover, the programs by which the computer operates are the focus of the computational paradigm, and these programs are held to be the explanations of mental states. Searle's argument is against the claims of Schank and other computationalists, creators of programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribe...
Searle's claim is that any installation of a program is merely an operation. The lack of meaning, he states, means that the computer program does not have true understanding and is not truly thinking; it is simply computing and processing symbols. He presents this argument by using his famous Chinese Room. Searle begins by ta...
In the first three chapters of Kinds of Minds, Dennett introduces a variety of perspectives on what the mind is. From Cartesianism to Functionalism, Dennett outlines the evolution of thought about thought and the mind, and explains his own perspective along the way. Cartesianism, proposed by Descartes, holds that the mind is who we are, and characterizes the mind as a non-physical substance completely separate from, and in control of, the physical body. In the strictest sense, Functionalism can be defined from Alan Turing’s perspective that a mind is defined by what it can do. So, by the Turing test, if an AI can fool a human into thinking it is also human, it must be at least as intelligent as a human. Using a plethora of anecdotes and examples, Dennett makes his position clear as he denounces Cartesianism and advocates a functionalist perspective in his own evolving definition of the mind.
I will begin by providing a brief overview of the thought experiment and how Searle derives his argument. Imagine there is someone in a room, say Searle himself, and he has a rulebook that explains what to write when he sees certain Chinese symbols. On the other side of the room is a Chinese speaker who writes Searle a note. After Searle receives the message, he must respond; he uses the rulebook to write a perfectly coherent response back to the actual Chinese speaker. From an objective perspective, you would not say that Searle is actually able to write Chinese fluently: he does not understand Chinese, he only knows how to manipulate symbols. Searle argues that this is exactly what happens when a computer responds to the note in Chinese. He claims that computers are only able to compute information without actually being able to understand the information they are computing. This defeats the first premise of strong AI. It also defeats the second premise of strong AI, because even if a computer were capable of understanding the communication it is having in Chinese, it would not be able to explain how this understanding occurs.
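The purely syntactic manipulation described above can be sketched as a simple lookup table: the "rulebook" maps incoming symbol strings to outgoing ones, with nothing in the program representing what any symbol means. The phrases and replies below are invented placeholders, not Searle's actual examples; this is only a minimal illustration of the syntax-without-semantics point.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The rulebook is just a mapping from input strings to output strings;
# no part of the program represents the *meaning* of any symbol.
# The Chinese phrases here are hypothetical stand-ins.

RULEBOOK = {
    "你好吗": "我很好",        # rule: this squiggle pattern maps to that one
    "你会说中文吗": "会",      # the room "answers" without understanding
}

def chinese_room(note: str) -> str:
    """Return the rulebook's response for a note, or a default symbol."""
    return RULEBOOK.get(note, "不明白")

print(chinese_room("你好吗"))  # looks like a fluent reply from outside
```

From outside the room the reply appears fluent; inside, only string matching occurred. That gap between producing correct symbols and understanding them is exactly what Searle's argument turns on.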
The “human sense of self control and purposefulness, is a user illusion” (261); therefore, if computational systems are comparable to human consciousness, the question arises whether such artificial systems should be treated as humans. Such programs are even capable of learning like children, with time and experience; the programs “[get] better at their jobs with experience.” However, many can argue that the difference is self-awareness, and that there are many organisms that can carry out such complex behavior but have no sense of identity.
In addition, all the objects, people, and sky that we perceive, and all our experiences, are just the result of electronic impulses travelling from the computer to the nerve endings (ibid.). However, Putnam begins to raise doubts by asking: if our brains were in a vat, could we say or think that they were? (Putnam, 1981:7). He argues that we could not (ibid.). For Putnam, the statement that our brains are in a vat is self-refuting: if it were true, we could not truly say or think it (ibid.).
Computers are machines that take in syntactical information only and then function according to a program made from syntactical information. They cannot change the function of that program unless formally instructed to through more information. That is inherently different from a human mind, in that a computer never takes semantic information into account in its programming. Searle’s formal argument thus amounts to this: brains cause minds; semantics cannot be derived from syntax alone; computers are defined by a formal, in other words syntactical, structure; and minds have semantic content. The argument then concludes that the way the mind functions in the brain cannot be likened to running a program in a computer, and that programs themselves are insufficient to give a system thought (Searle, p. 682). In conclusion, a computer cannot think, and the view of strong AI is false. Further evidence for this argument is provided in Searle’s Chinese Room thought experiment. The Chinese Room supposes that I, who do not know Chinese, am locked in a room that has several baskets filled with Chinese symbols. Also in that room is a rulebook that specifies various manipulations of the symbols based purely on their syntax, not their semantics. For example, a rule might say move the squiggly
In this paper I will present and evaluate A.M. Turing’s test for machine intelligence and describe how the test works. I will explain how the Turing test is a good way to answer whether machines can think. I will also discuss Objection (4), the Argument from Consciousness, and Objection (6), Lady Lovelace’s Objection, and how Turing responded to both objections. Lastly, I will give my opinion on the Turing test and whether it is a good way to answer whether a machine can think.
The conditions of the present scenario are as follows: a machine, Siri*, capable of passing the Turing test, is being insulted by a 10-year-old boy, whose mother is questioning the appropriateness of punishing him for his behavior. We cannot answer the mother's question without speculating as to what A.M. Turing and John Searle, two 20th-century philosophers whose views on artificial intelligence are starkly contrasting, would say about this predicament. Furthermore, we must provide fair and balanced consideration of both theorists’ viewpoints because, ultimately, neither side can be “correct” in this scenario. But before we compare hypothetical opinions, we must establish operative definitions for all parties involved. The characters in this scenario are the mother, referred to as Amy; the 10-year-old boy, referred to as the Son; Turing and Searle; and Siri*, a machine that will be referred to as an “it,” to avoid an unintentional bias in favor of or against personhood. Now, to formulate plausible opinions that could emerge from Turing and Searle, we simply need to remember what tenets underlie their respective schools of thought and apply them logically to the given conditions of this scenario.
John Searle formulated the Chinese Room Argument in the early 1980s as an attempt to prove that computers are not cognitive operating systems. In short, though the emergence of artificial and computational systems has rapidly expanded the possibilities of knowledge, Searle uses the Chinese Room argument to show that computers are not cognitively independent.
Do inanimate technologies think? Do they genuinely have a consciousness and real knowledge, or are they simply machines? Are they made up of just algorithms and mathematical equations? This is the argument many philosophers and scientists have been having for years. John Searle, a professor at the University of California, Berkeley, believes that not just Watson, but all higher-level information-holding technologies lack an active consciousness. They are only products of the human brain’s ideas and programs. Even though many esteemed mechanisms may demonstrate extraordinary knowledge, even beyond human recognition, I agree with Searle. Computers do not have original thought. They are the result of high cognitive thinking
In this paper, I have attempted to explain concisely yet methodically the Turing Test and its respective objections and rebuttals. Both Turing’s and Searle’s comparisons between humans and computers, made in a similarly methodological manner, illustrate their opposing views on the topic. However, following Searle’s reasoning against Turing’s experiment, it is clear that his reasoning lacks adequate support. This is most commonly seen in Searle’s tendency to base his theories on assumptions. In light of this, Turing’s responses effortlessly undermine any substance Searle might have had, thus proving Turing’s to be the stronger theory.
John Searle is an American philosopher who is best known for his thought experiment, the Chinese Room Argument. This argument is used to show that computers cannot comprehend what they process and that what computers do does not explain human understanding. The question “Do computers have the ability to think?” is a very contentious one that causes much debate between philosophers in the study of artificial intelligence, the belief that machines can imitate human performance, and philosophers in the study of mind, who study the correlation between the mind and the physical world. Searle concludes that a computer cannot understand a language simply by running a computer program, and that in order to fully comprehend a language the computer would need both syntax and semantics.
is false. To accomplish this, Searle uses the example of “the Chinese Room” to challenge strong AI and to object to Turing’s test. Searle begins by asking us to imagine him in a room with a box of Chinese characters, which he cannot understand, along with a book of instructions in English, which he can understand. Searle then states that if a group of Chinese speakers outside the room passed him messages in Chinese, he would not understand them, but could reply with symbols by using the instructions to form an appropriate response. Furthermore, Searle states that the Chinese speakers would think that they were speaking to a Chinese speaker; however, realistically they were talking to a confused John Searle. Therefore, as Searle states, if a computer were placed in Searle’s position, with the rulebook as the “computer program” and the basket of symbols as the “database,” this would show that the machine does not understand Chinese but only simulates that knowledge, which is not truly
In the past few decades we have seen computers become more and more advanced, challenging the abilities of the human brain. We have seen computers carry out complex assignments such as launching a rocket or analyzing data from outer space. But the human brain is responsible for thought, feelings, creativity, and other qualities that make us human, so the brain has to be more complex and more complete than any computer. Besides, if the brain created the computer, the computer cannot be better than the brain. There are many differences between the human brain and the computer, for example, the capacity to learn new things. Even the most advanced computer can never learn like a human does. While we might be able to install new information onto a computer, it can never learn new material by itself. Also, computers are limited in what they “learn,” depending on the memory or hard disk space left, unlike the human brain, which is constantly learning every day. Computers can neither make judgments on what they are “learning” nor disagree with the new material. They must accept into their memory whatever is programmed into them. Besides, everything that is found in a computer is based on what the human brain has acquired through experience.