In the first three chapters of Kinds of Minds, Dennett introduces a variety of perspectives on what the mind is. From Cartesianism to Functionalism, Dennett outlines the evolution of thought about thought and the mind, and explains his own perspective along the way. Cartesianism, as formulated by Descartes, holds that the mind is who we are and characterizes the mind as a nonphysical substance completely separate from, and in control of, the physical body. In the strictest sense, Functionalism can be defined from Alan Turing’s perspective: a mind is defined by what it can do. So, by the logic of the Turing test, if an AI can fool a human into thinking it is also human, it must be at least as intelligent as that human. Using a plethora of anecdotes and examples, Dennett makes his position clear as he denounces Cartesianism and advocates a functionalist perspective in his own evolving definition of the mind.
Dennett makes his opinion of Cartesianism known on
Dennett leaves his own definition of the mind incomplete at this point in the readings, mulling over the concepts he has reviewed and focusing on the border between sentience and sensitivity. His account of the mind centers on drawing the line between sensitivity, exemplified by reacting to the environment, and sentience, which he defines as “the lowest grade of consciousness” (p. 64). In his explanation on page 64, he proposes that while all intentional systems respond to the environment, sentient systems or “genuine minds” enjoy their sentience. Combining these theories, Dennett defines the mind as functional sensitivity in concert with an “undefined factor x” (p. 65), which allows the enjoyment and emotional aspects of thought to take place and therefore create a
Andy Clark argues strongly, in his work “Mindware: Meat Machines,” that computers have the potential to be intelligent beings. Clark defends his claims by comparing humans and machines as systems that manipulate arrays of symbols to perform functions. The main argument of his work can be interpreted as follows:
Jaegwon Kim thinks that the multiple realizability of mental properties would bring about the conclusion that psychology is most likely not a science. Several functionalists, especially Fodor, take the opposing stance to Kim, holding that the multiple realizability of mental states is one of the reasons why psychology is an autonomous and justifiable science. Essentially, Kim thinks that if mental states are multiply realizable, then psychology must be fundamentally fragmented: human psychology would encompass those properties realized in humans, alien psychology would encompass those mental states realized in the alien way, and so on. I will demonstrate that even if one grants the principles behind Kim’s argument, they do not yield his final conclusion that psychology fails to be a science. By attacking his principle of the Causal Individuation of Kinds, I will show that Kim has failed to reach the correct conclusion. Furthermore, I will consider a possible objection that Kim might raise against my stance and give a short rebuttal. I will conclude by explicating Jerry Fodor’s account of what Kim’s essential problem is. Showing that Kim’s conclusion fails will entail that Fodor’s conclusion is the more viable one.
Searle's argument delineates what he believes to be the invalidity of the computational paradigm's and artificial intelligence's (AI) view of the human mind. He first distinguishes between strong and weak AI. Searle finds weak AI a perfectly acceptable line of investigation, in that it uses the computer as a powerful tool for studying the mind. Weak AI does not make any claims about the operation of the mind itself; it is simply another psychological, investigative mechanism. In contrast, strong AI states that a computer can be created such that it actually is a mind. We must first describe what exactly this entails. In order to be a mind, the computer must be able not only to understand, but to have cognitive states. Moreover, the programs by which the computer operates are the focus of the computational paradigm, and these programs are taken to be the explanations of the mental states. Searle's argument is against the claims of Schank and other computationalists behind programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribe...
Because this theory is based upon matter alone, it does not, in my opinion, correspond well with the essay. In the essay, Dennett mentions many times that his mind feels as if it is elsewhere, apart from his brain and his body. He contemplates whether “Dennett” resides in his brain, which sits outside his body in a life-support vat, or between his ears in his empty skull. He clearly distinguishes his brain, his body, and his mind as separate from one another. For these reasons, I feel that the materialist theory does not support Dennett’s understanding of the situation he was put in.
Computers are well known for their ability to perform computations and follow a list of instructions, but can a computer be a mind? There are varying philosophical theories on what constitutes a mind. Some believe that the mind must be a physical object, while others believe in dualism, the idea that the mind is separate from the brain. I am a firm believer in dualism, and this is part of the argument that I will use in favor of Dennett. The materialist view, however, would likely not consider Hubert to be a mind. That viewpoint holds that all objects are physical objects, so the mind is a physical part of a human brain; it does not consider the mind and body as two separate things, but as parts of one object. The materialist would likely reject Hubert as a mind, even though circuit boards are physical objects, although even a materialist would likely agree that Yorick being separated from Dennett does not disqualify Yorick as a mind. If one adopts a dualist view and accepts the idea that the mind does not have to be connected to a physical object, then one can make sense of Hubert being able to act as the mind of Dennett. The story told to us by Dennett is that when the switch on the little box attached to his body is flipped, the entity that controls Dennett changes to the other entity. Since the switches are not labeled, it is never known which entity is
John Searle’s Chinese room argument from his work “Minds, Brains, and Programs” is a thought experiment against the premises of strong Artificial Intelligence (AI). Those premises hold that a program qualifies as strong AI if it can understand and can thereby explain how human understanding works. I will argue that the Chinese room argument successfully disproves the conclusion of strong AI; however, it does not provide an explanation of what understanding is, which becomes problematic when drawing a distinction between humans and machines.
Since antiquity the human mind has been intrigued by artificial intelligence; hence, the rapid growth of computer science has raised many issues concerning the isolation of the human mind.
How could Dennett breathe, talk, move, function, or even live without a brain? All human beings must have a brain to be alive, so given that premise, we know that Dennett has access to a brain somehow, or as Dennett would describe it, his brain has access to his body. So the wild idea that Dennett’s brain is in a vat sending signals through wires and then through radio signals seems more plausible now. The radio antennas that are implanted into his head further back up this
Are minds physical things, or are they nonmaterial? If your beliefs and desires are caused by physical events outside of yourself, how can it be true that you act the way you do of your own free will? Are people genuinely moved by the welfare of others, or is all behavior, in reality, selfish? (Sober 203). These are questions relevant to philosophy of mind and discussed through a variety of arguments. Two of the most important arguments within this discussion are Cartesian dualism and logical behaviorism, which approach the philosophy of mind in two completely different ways. Robert Lane, a professor at the University of West Georgia, defines the two as follows: Cartesian dualism is the theory that the mind and body are two totally different things, capable of existing separately, and logical behaviorism is the theory that our talk about beliefs, desires, and pains is not talk about ghostly or physical inner episodes, but instead about actual and potential patterns of behavior. Understanding the two arguments is essential to interpreting the decision-making process; although dualism and behaviorism are prominent arguments in the philosophy of mind, both have their strengths and weaknesses.
Theodore Millon’s theories on the biological, psychological, and interactional dimensions of the human mind evolved from different perspectives throughout the
This world of artificial intelligence has the power to produce many questions and theories, because we struggle to understand something that seems impossible. “How smart’s an AI, Case? Depends. Some aren’t much smarter than dogs. Pets. Cost a fortune anyway. The real smart ones are as smart as the Turing heat is willing to let ‘em get.” (Page 95) This shows that an artificial intelligence can be programmed to only do certain ...
In this paper I will present and evaluate A.M. Turing’s test for machine intelligence and describe how the test works. I will explain how the Turing test is a good way to answer whether machines can think. I will also discuss Objection (4), the Argument from Consciousness, and Objection (6), Lady Lovelace’s Objection, and how Turing responded to both. Lastly, I will give my opinion on the Turing test and whether it is a good way to answer whether a machine can think.
At the end of chapter two, Searle summarizes his criticism of functionalism in the following way. The mental processes of a mind are caused entirely by processes occurring inside the brain. There is no external cause that determines what a mental process will be. Also, there is a distinction between the identification of symbols and the understanding of what the symbols mean. Computer programs are defined by symbol identification rather than understanding. On the other hand, minds define mental processes by the understanding of what a symbol means. The conclusion leading from this is that computer programs by themselves are not minds and do not have minds. In addition, a mind cannot be the result of running a computer program. Therefore, minds and computer programs are not entities with the same mental state. They are quite different and although they both are capable of input and output interactions, only the mind is capable of truly thinking and understanding. This quality is what distinguishes the mental state of a mind from the systemic state of a digital computer.
Functionalism is a materialist stance in the philosophy of mind that argues that mental states are purely functional, and thus categorized by their input and output associations and causes, rather than by the physical makeup that constitutes their parts. In this manner, functionalism argues that as long as something operates as a conscious entity, it is conscious. Block describes functionalism, discusses its inherent dilemmas, and then discusses a more scientifically driven counter-solution called psychofunctionalism and its failings as well. Although Block’s assertions are cogent and well presented, the psychofunctionalist is able to provide counterarguments to support his viewpoint against Block’s criticisms. I shall argue that though neither concept is without issue, functionalism offers a more acceptable description that philosophers can admit over psychofunctionalism’s chauvinistic disposition, which attempts to limit consciousness to the human race alone.
The traditional notion that seeks to compare human minds, with all their intricacies and biochemical functions, to artificially programmed digital computers is self-defeating, and it should be discredited in dialogues regarding the theory of artificial intelligence. This traditional notion is akin to comparing, in crude terms, cars and aeroplanes or ice cream and cream cheese. Human mental states are caused by various behaviours of elements in the brain, and these behaviours are determined by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss the mental states of systems, it is important to distinguish between human brains and those of any natural or artificial organisms said to have central processing systems (i.e. the brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of function and behaviour, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether or not mental states exist at all in systems other than our own, in this paper I will strive to argue that a machine that computes and responds to inputs does indeed have a state of mind, but one that does not necessarily result in a form of mentality. This paper will discuss how the states and intentionality of digital computers differ from the states of human brains, and yet are indeed states of a mind resulting from various functions in their central processing systems.