Can a Computer Think? Discuss
The traditional notion that seeks to compare the human mind, with all its intricacies and biochemical functions, to an artificially programmed digital computer is self-defeating and should be discredited in discussions of the theory of artificial intelligence. The comparison is akin, in crude terms, to comparing cars with aeroplanes or ice cream with cream cheese. Human mental states are caused by the behaviour of various elements in the brain, and that behaviour is determined by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss the mental states of systems, it is important to distinguish between human brains and those of any natural or artificial organism said to have a central processing system (e.g. the brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of function and behaviour, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether mental states exist at all in systems other than our own, in this paper I will argue that a machine which computes and responds to inputs does indeed have a state of mind, but one that does not necessarily amount to mentality. This paper will discuss how the states and intentionality of digital computers differ from the states of human brains, and yet how they are nonetheless states of a mind, resulting from various functions in their central processing systems.
The most common objection to the notion of mental states in digital computers is that there are inherent limits to computation and inabilities that exist in any algorithm to...
... middle of paper ...
...lligent, intentional activity taking place inside the room and the digital computer. The proponents of Searle’s argument, however, would counter that if there is an entity which does computation, such as a human being or a computer, it cannot understand the meanings of the symbols it uses. They maintain that digital computers understand neither the input given to them nor the output they produce. But it cannot be claimed that digital computers as a whole cannot understand. Someone who only inputs data, being only a part of the system, cannot know about the system as a whole. If there is a person inside the Chinese room manipulating the symbols, that person is already intentional and has a mental state; thus, because their hardware and software are seamlessly integrated into systems that understand inputs and outputs as wholes, digital computers too have states of mind.
One of the key questions raised by Rupert Sheldrake in Seven Experiments That Could Change the World is whether we are more than the ghost in the machine. It is perfectly acceptable to Sheldrake that humans are more than their brains, and because of this, “the mind is indeed extended beyond the brain, as most people throughout most of human history have believed” (Sheldrake, Seven Experiments 104).
Searle's claim is that any implementation of a program is merely an operation on symbols. The lack of meaning, he states, means that a computer program does not have true understanding and is not truly thinking; it is simply computing and processing symbols. He presents this argument using his famous Chinese room. Searle begins by ta...
Searle's argument delineates what he believes to be the invalidity of the computational paradigm's and artificial intelligence's (AI) view of the human mind. He first distinguishes between strong and weak AI. Searle finds weak AI a perfectly acceptable line of investigation, in that it uses the computer as a powerful tool for studying the mind. Weak AI does not make any contention as to how the mind actually operates; it is simply another psychological, investigative mechanism. In contrast, strong AI holds that a computer can be created so that it actually is a mind. We must first describe what exactly this entails. In order to be a mind, the computer must be able not only to understand but to have cognitive states. Also, the programs by which the computer operates are the focus of the computational paradigm, and these are taken to be the explanations of the mental states. Searle's argument is against the claims of Schank and other computationalists, who have created programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribe...
Since antiquity the human mind has been intrigued by artificial intelligence; hence, the rapid growth of computer science has raised many issues concerning the isolation of the human mind.
John Searle’s Chinese room argument from his work “Minds, Brains, and Programs” was a thought experiment directed against the premises of strong Artificial Intelligence (AI). Those premises hold that something is of the strong AI kind if it can understand and can explain how human understanding works. I will argue that the Chinese room argument successfully disproves the conclusion of strong AI; however, it does not provide an explanation of what understanding is, which becomes problematic when drawing a distinction between humans and machines.
In addition, all the objects, people, and sky that we perceive, and all our experiences, are just the result of electronic impulses travelling from the computer to the nerve endings (ibid.). However, Putnam starts by raising doubts, asking that if our brains were in a vat, could we say or think that we were (Putnam, 1981:7)? He further argued that we could not (ibid.). For Putnam, the claim cannot be true: if our brains are in a vat and we say or think that they are, the statement is self-refuting (ibid.).
Computers are machines that take in syntactic information only and then function according to a program built from syntactic information. They cannot change the function of that program unless formally instructed to through more information. That is inherently different from a human mind, in that a computer never takes semantic information into account when it comes to its programming. Searle’s formal argument thus amounts to the following: brains cause minds; semantics cannot be derived from syntax alone; computers are defined by a formal, that is, syntactic, structure; and minds have semantic content. The argument then concludes that the way the mind functions in the brain cannot be likened to running a program on a computer, and that programs themselves are insufficient to give a system thought (Searle, p. 682). In conclusion, a computer cannot think and the view of strong AI is false. Further support for this argument is provided by Searle’s Chinese Room thought experiment. In the Chinese Room, I, who do not know Chinese, am locked in a room that has several baskets filled with Chinese symbols. Also in that room is a rulebook that specifies various manipulations of the symbols purely based on their syntax, not their semantics. For example, a rule might say move the squiggly
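To make the purely syntactic character of that rulebook concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from Searle's text or from the essay: the symbol pairs, the RULEBOOK table, and the chinese_room function are invented placeholders that simply show a program producing plausible-looking replies by shape-matching alone.

```python
# Hypothetical illustration of the Chinese Room as pure syntax:
# the "rulebook" is a lookup table pairing input symbol strings with
# output symbol strings. Nothing in the program represents what any
# symbol means; the entries below are invented for illustration only.

RULEBOOK = {
    "你好吗": "我很好",            # "if you see these squiggles, hand back those squiggles"
    "你叫什么名字": "我叫小房间",
}


def chinese_room(input_symbols: str) -> str:
    """Return an output string by matching shapes against the rulebook.

    The function manipulates uninterpreted tokens: it would behave
    identically if every character were swapped for an arbitrary shape,
    which is the point of the thought experiment.
    """
    return RULEBOOK.get(input_symbols, "请再说一遍")  # default: another canned symbol string


if __name__ == "__main__":
    print(chinese_room("你好吗"))  # a fluent-looking reply produced with zero understanding
```

However large the table grows, the program's behaviour is exhausted by pattern lookup, which is exactly the kind of formal operation the argument above says cannot by itself amount to understanding.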
The conditions of the present scenario are as follows: a machine, Siri*, capable of passing the Turing test, is being insulted by a 10-year-old boy, whose mother is questioning the appropriateness of punishing him for his behavior. We cannot answer the mother's question without speculating as to what A.M. Turing and John Searle, two 20th-century philosophers whose views on artificial intelligence are starkly contrasting, would say about this predicament. Furthermore, we must give fair and balanced consideration to both theorists’ viewpoints because, ultimately, neither side can be “correct” in this scenario. But before we compare hypothetical opinions, we must establish working definitions for all parties involved. The characters in this scenario are the mother, referred to as Amy; the 10-year-old boy, referred to as the Son; Turing and Searle; and Siri*, a machine that will be referred to as an “it” to avoid an unintentional bias in favor of or against personhood. Now, to formulate plausible opinions that could emerge from Turing and Searle, we simply need to remember what tenets found their respective schools of thought and apply them logically to the given conditions of this scenario.
At the end of chapter two, Searle summarizes his criticism of functionalism in the following way. The mental processes of a mind are caused entirely by processes occurring inside the brain; there is no external cause that determines what a mental process will be. Also, there is a distinction between the identification of symbols and the understanding of what the symbols mean. Computer programs are defined by symbol identification rather than understanding, whereas minds define mental processes by the understanding of what a symbol means. The conclusion that follows is that computer programs by themselves are not minds and do not have minds. In addition, a mind cannot be the result of merely running a computer program. Therefore, minds and computer programs are not entities with the same kind of state. They are quite different, and although both are capable of input and output interactions, only the mind is capable of truly thinking and understanding. This quality is what distinguishes the mental state of a mind from the systemic state of a digital computer.
Do inanimate technologies think? Do they genuinely have consciousness and real knowledge, or are they simply machines? Are they made up of just algorithms and mathematical equations? This is the argument many philosophers and scientists have been having for years. John Searle, a professor at the University of California, Berkeley, believes that not just Watson but all higher-level information-holding technologies lack an active consciousness. They are only products of the human brain’s ideas and programs. Even though many esteemed mechanisms may demonstrate extraordinary knowledge, even beyond human recognition, I agree with Searle. Computers do not have original thought. They are the result of high cognitive thinking.
In this paper I will present and evaluate A.M. Turing’s test for machine intelligence and describe how the test works. I will explain how the Turing test is a good way to answer whether machines can think. I will also discuss Objection (4), the Argument from Consciousness, and Objection (6), Lady Lovelace’s Objection, and how Turing responded to both objections. Lastly, I will give my opinion about the Turing test and whether it is a good way to answer whether a machine can think.
If a machine passes the test, then it is clear that for many ordinary people this would be sufficient reason to say that it is a thinking machine. And, in fact, since it is able to converse with a human and actually fool him into believing that the machine is human, this would seem t...
The object of this essay is to examine whether or not artificial intelligence (A.I.) is possible, using arguments by Alan Turing, John Searle, and Jerry Fodor. To accomplish the task at hand I shall, firstly, describe the Turing Test and explain how it works; secondly, describe Functionalism and detail how it allows for future A.I.; thirdly, describe and explain Searle’s argument and his example of the “Chinese room”; and finally, describe and explain a few replies to Searle’s “Chinese room” argument. However, due to the time constraint I will be unable to fully analyze Searle’s reply to all of his critics; rather, I will state Searle’s counter to the objections with a simple point: they are all inadequate because they fail to come to
John Searle is an American philosopher best known for his thought experiment, the Chinese Room Argument. The argument is used to show that computers cannot comprehend what they process and that what computers do does not explain human understanding. The question “Do computers have the ability to think?” is a very contentious one that causes a great deal of debate between philosophers in the study of Artificial Intelligence (the belief that machines can imitate human performance) and philosophers in the study of mind, who examine the relation between the mind and the physical world. Searle concludes that a computer cannot understand a language simply by having a computer program applied to it, and that in order to fully comprehend a language the computer would need to handle not only syntax but also semantics.