# Essay 3

In chapter one of *The Emperor's New Mind,* Professor Roger Penrose introduces the idea of *strong Artificial Intelligence.* He writes:

> According to strong AI, not only would the devices just referred to indeed be intelligent and have minds, etc., but mental qualities of a sort can be attributed to the logical functioning of *any* computational device, even the very simplest mechanical ones, such as a thermostat. The idea is that mental activity is simply the carrying out of some well-defined sequence of operations, frequently referred to as an *algorithm.* (21-2)

> [An algorithm being] a *systematic, calculational procedure* where the procedure itself applies quite generally... But in any specific case the procedure will eventually terminate and a definite answer will be obtained in a *finite* number of steps. At each step it is perfectly clear-cut what the operation is that has to be performed, and the decision as to the moment at which the whole process has terminated is also perfectly clear-cut. Moreover, the description of the whole procedure can be presented in finite terms. (41-2)

Penrose's distaste for strong AI is bluntly apparent when he writes, "in fact I do *not* regard the idea as intrinsically an absurd one--mainly just wrong!" (29) However, strong AI …
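Penrose's definition can be illustrated with a classic example he does not give here (the choice of Euclid's algorithm is mine, for illustration only): a procedure in which every step is a clear-cut operation, the termination condition is clear-cut, and the whole description is finite.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm, an algorithm in Penrose's sense:
    the whole procedure is described in finite terms."""
    while b != 0:          # the moment of termination is perfectly clear-cut
        a, b = b, a % b    # each step is a well-defined operation
    return a               # a definite answer after finitely many steps

print(gcd(1071, 462))  # -> 21
```

However sophisticated the task, strong AI holds that mental activity is nothing more than the carrying out of procedures of this kind.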
How would the book 'know' the difference? Perhaps the book would not need to be opened, its information being retrieved by means of X-ray tomography, or some other technological wizardry. Would Einstein's awareness be enacted only when the book is being so examined? Would he be aware twice over if two people chose to ask the book the same question at two completely different times? Or would that entail two separate and temporally distinct instances of the *same* state of Einstein's awareness? Perhaps his awareness would be enacted only if the book is *changed*?
Andy Clark argues strongly for the view that computers have the potential to be intelligent beings in his work “Mindware: Meat Machines.” Clark supports his claim by comparing humans and machines as systems that use arrays of symbols to perform functions. The main argument of his work can be interpreted as follows:
In “The Magic of the Mind,” author Dr. Elizabeth Loftus explains how a witness’s perception of an accident or crime is not always correct, because people’s memories are often imperfect. “Are we aware of our minds’ distortions of our past experiences? In most cases, the answer is no.” Our minds can change the way we remember what we have seen or heard without our realizing it; uncertain witnesses “often identify the person who best matches recollection
... in the 21st century, and it might already dominate human life. Jastrow predicted that computers would become part of human society in the future, and Levy’s real-life examples match Jastrow’s prediction. The computer intelligence Jastrow described aimed at imitating the human brain and its reasoning mechanisms. However, according to Levy, computer intelligence today is about AI developing its own reasoning patterns and handling complicated tasks from data sets and algorithms, which is nothing like a human. From Levy’s view of today’s AI technology, Jastrow’s prediction about AI evolution is not going to happen. Since computer intelligence does not aim to recreate a human brain, the whole idea of computers substituting for humans does not hold. Levy also argues that it is pointless to fear AI controlling humans, as people in today’s society cannot live without computer intelligence.
The debate between those who favor strong and those who favor weak artificial intelligence (AI) is directly related to the philosophy of mind. The claim of weak AI is that it is possible to run a program on a machine that will behave as if it were a thinking thing. Believers in strong AI say that it is possible to create a program running on a machine that not only behaves as if it were thinking, but actually does think. Strong AI proponents argue that an appropriately programmed computer is a mind as real as the mind of any human.
In the first three chapters of *Kinds of Minds*, Dennett introduces a variety of perspectives on what the mind is. From Cartesianism to functionalism, Dennett outlines the evolution of thought about thought and the mind, explaining his own perspective along the way. Cartesianism, as proposed by Descartes, holds that the mind is who we are, and characterizes the mind as a non-physical substance that is completely separate from, and in control of, the physical body. In the strictest sense, functionalism can be defined from Alan Turing’s perspective: a mind is defined by what it can do. So by the Turing test, if an AI can fool a human into thinking it is also human, it must be at least as intelligent as the human. Using a plethora of anecdotes and examples, Dennett makes his position clear as he denounces Cartesianism and advocates a functionalist perspective in his own evolving definition of the mind.
Computers are well known for their ability to perform computations and follow lists of instructions, but can a computer be a mind? There are varying philosophical theories of what constitutes a mind. Some believe that the mind must be a physical object; others believe in dualism, the idea that the mind is separate from the brain. I am a firm believer in dualism, and this is part of the argument I will use in favor of Dennett. The materialist view, however, would likely not consider Hubert to be a mind. On that view, all objects are physical objects, so the mind is a physical part of the human brain; the mind and body are not two separate things but parts of one object. The materialist would likely reject Hubert as a mind, even though circuit boards are physical objects, although even a materialist would likely agree that Yorick’s being separated from Dennett does not disqualify Yorick as a mind. If one adopts a dualist view and accepts the idea that the mind does not have to be connected to a physical object, then one can make sense of Hubert acting as the mind of Dennett. The story Dennett tells is that when the switch on the little box attached to his body is flipped, the entity that controls Dennett changes to the other entity. Since the switches are not labeled, it is never known which entity is
John Searle’s Chinese room argument, from his work “Minds, Brains, and Programs,” is a thought experiment against the premises of strong Artificial Intelligence (AI). Those premises hold that something is of the strong AI nature if it can understand and can explain how human understanding works. I will argue that the Chinese room argument successfully disproves the conclusion of strong AI; however, it does not provide an explanation of what understanding is, which becomes problematic when drawing a distinction between humans and machines.
This world of artificial intelligence has the power to produce many questions and theories, because we struggle to understand something that does not yet exist. “How smart’s an AI, Case? Depends. Some aren’t much smarter than dogs. Pets. Cost a fortune anyway. The real smart ones are as smart as the Turing heat is willing to let ‘em get.” (Page 95) This shows that an artificial intelligence can be programmed to only do certain ...
I will use this article’s arguments and logic in the counter-argument section of my essay. I will address the arguments Boden utilizes and will mention the additional fears raised in the article. While the dehumanizing aspects of artificial intelligence are not a great threat given artificial intelligence’s limitations, artificial intelligence will continue to advance. Meanwhile, the issue of humans depending too heavily on inaccurate information is a concern. Artificial intelligence cannot know everything, so its decisions may not be as well thought out as humans’. The article is unbiased, as it uses strong logical arguments without employing logical fallacies. The article also addresses other fears, instead of claiming that artificial intelligence is a flawless concept. The article is limited, as it doesn’t discuss two of the arguments in my essay
If a machine passes the test, then for many ordinary people that would clearly be sufficient reason to say it is a thinking machine. And, in fact, since it is able to converse with a human and to actually fool him and convince him that the machine is human, this would seem t...
John Searle developed two areas of thought concerning the independent cognition of computers: the definitions of weak AI and strong AI. In essence, these two types of AI have fundamental differences. Weak AI was defined as a system that merely simulates the human mind, whereas strong AI was characterized as a system fully capable of cognitive processes such as consciousness and intentionality, as well as understanding. He utilizes the Chinese room argument to show that strong AI does not exist.
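The rule-following at the heart of the Chinese room can be sketched as a toy lookup program (a hypothetical illustration of the argument's structure, not Searle's own formulation, and the rulebook entries here are invented): the program returns sensible Chinese replies purely by matching symbols against rules, with no understanding of what the symbols mean.

```python
# A toy "Chinese room": questions and answers are opaque symbol strings,
# and replies are produced by pure rule lookup, not comprehension.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def room(question: str) -> str:
    # Searle's point: following these rules correctly requires no
    # understanding of Chinese on the part of the rule-follower.
    return RULEBOOK.get(question, "我不明白。")  # default: "I don't understand."

print(room("你好吗？"))  # -> 我很好，谢谢。
```

From the outside the room's replies may pass for understanding; inside, there is only symbol manipulation.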
For years philosophers have enquired into the nature of the mind, and specifically the mysteries of intelligence and consciousness (O’Brien 2017). One of these mysteries is how a material object, the brain, can produce thoughts and rational reasoning. The Computational Theory of Mind (CTM) was devised in response to this problem, and suggests that the brain is quite literally a computer, and that thinking is essentially computation. (BOOK) This idea was first theorised by philosopher Hilary Putnam and later developed by Jerry Fodor, and it continues to be investigated today as cognitive science, modern computers, and artificial intelligence advance. [REF] Computer processing machines ‘think’ by recognising information
The traditional notion that seeks to compare the human mind, with all its intricacies and biochemical functions, to artificially programmed digital computers is self-defeating, and it should be discredited in dialogues regarding the theory of artificial intelligence. This traditional notion is akin to comparing, in crude terms, cars and aeroplanes, or ice cream and cream cheese. Human mental states are caused by various behaviours of elements in the brain, and these behaviours are determined by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss the mental states of systems, it is important to distinguish between human brains and those of any natural or artificial organisms said to have central processing systems (e.g. the brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of function and behaviour, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether mental states exist at all in systems other than our own, in this paper I will strive to present arguments that a machine that computes and responds to inputs does indeed have a state of mind, but one that does not necessarily result in a form of mentality. This paper will discuss how the states and intentionality of digital computers differ from the states of human brains, and yet are indeed states of a mind resulting from various functions in their central processing systems.
Artificial intelligence is a concept that has been around for many years. The ancient Greeks had tales of robots, and Chinese and Egyptian engineers built automatons. However, the idea of actually trying to create a machine to perform useful reasoning may have begun with Ramon Llull around 1300 CE. After him came Gottfried Leibniz with his calculus ratiocinator, which extended the idea of the calculating machine: it was meant to execute operations on ideas rather than numbers. The study of mathematical logic brought the world to Alan Turing’s theory of computation. In it, Turing stated that a machine, by shuffling symbols such as “0” and “1”, would be able to imitate any possible act of mathematical
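The symbol-shuffling machine Turing described can be sketched in miniature (a hypothetical toy, not Turing's own construction): a head moves over a tape of “0”/“1” symbols, rewriting them according to a fixed, finite transition table. This particular table simply inverts every bit on the tape.

```python
# A minimal Turing-machine sketch. Each table entry maps
# (state, symbol read) -> (symbol to write, head move, next state).
TABLE = {
    ("flip", "0"): ("1", 1, "flip"),  # write "1", move right, keep flipping
    ("flip", "1"): ("0", 1, "flip"),  # write "0", move right, keep flipping
    ("flip", "_"): ("_", 0, "halt"),  # blank cell: the tape is done, halt
}

def run(tape: str) -> str:
    cells = list(tape) + ["_"]        # "_" marks the blank end of the tape
    state, head = "flip", 0
    while state != "halt":
        write, move, state = TABLE[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip("_")

print(run("0110"))  # -> 1001
```

However simple, the same scheme of states, symbols, and a finite table is, on Turing's account, enough in principle to imitate any effective procedure.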