An Alternative Means to Intelligence
Across cognitive science, computer science, and psychology, there has been an underlying question as to what qualifies as intelligent action. Allen Newell and Herbert A. Simon have proposed that a physical symbol system has the necessary and sufficient means for intelligent action. This is a view shared by many other notable figures from a variety of disciplines.
What I would like to do in this essay is present an alternative means of attributing intelligent action. I will try to show that there are limitations to the physical symbol system, and that something is missing from the theory.
Part 2: Method and Presuppositions
In order to show that the physical symbol system is not the only means for intelligent action, I am going to attempt to give examples of alternative methods. I will also point out where I feel that Newell and Simon's theory is missing a piece of the puzzle. First I will state the theory of the physical symbol system. I will then give what I feel are appropriate criticisms of the theory. Finally, I will show that there are alternative means for ascribing intelligent action. I presuppose what is meant by intelligent action: this is the underlying question, and if it is not already understood then I do not believe we should be discussing a means for describing it. I will also presuppose what qualitative laws are and how they are used in science.
Part 3: The Text's Argument
Newell and Simon believe that symbols and physical symbol systems are fundamental to explaining intelligent action. In order to understand what a physical symbol system is, one must first understand what symbols are. According to Newell and Simon, symbols lie at the root of intel...
... middle of paper ...
...ad to concepts that the digital framework cannot achieve, such as human-like learning and a strong reliance upon its environment.
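Newell and Simon's notion can be sketched in a few lines of code. The following is only an illustrative toy, not their formal definition: symbols are tokens, expressions are structures built from symbols, and the system's processes create, compare, and rewrite expressions purely by their form. The rule set and function names here are invented for illustration.

```python
# A toy physical symbol system: expressions are nested tuples of
# symbol tokens, and a process rewrites them by formal pattern alone,
# with no reference to what the symbols mean.

def rewrite(expression, rules):
    """Apply the first rewrite rule whose pattern matches the expression."""
    for pattern, replacement in rules:
        if expression == pattern:
            return replacement
    return expression  # no rule applies; expression is unchanged

# Rules that manipulate symbol structures purely syntactically.
rules = [
    (("NOT", ("NOT", "P")), "P"),   # double negation elimination
    (("AND", "P", "P"), "P"),       # idempotence
]

print(rewrite(("NOT", ("NOT", "P")), rules))  # -> P
print(rewrite(("AND", "P", "P"), rules))      # -> P
```

The point of the sketch is that everything the system does is formal token manipulation, which is exactly the feature the criticisms above target.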
Part 5: Conclusion
I do not believe that the argument I have given against the physical symbol system is fully complete. What I do claim, however, is that I have shown that there are weaknesses in the theory of physical symbol systems. Overall, I believe that the claim that anything displaying intelligent action must be a physical symbol system, such as the one described by Newell and Simon, is not fully justified, because of the examples stated above.
Word Count: 1,421
Part 6: References
Newell, Allen & Simon, Herbert. "Computer Science as Empirical Inquiry: Symbols and Search." In J. Haugeland (Ed.), Mind Design II (pp. 81-95). Cambridge, Massachusetts: The MIT Press, 1997.
The problem I hope to expose in this paper is the lack of evidence in the Argument from Analogy for Other Minds supporting the claim that A, a thought or feeling, is the only cause of B. Russell believes that there are other minds because he can see actions in others that are analogous to his own, without his thinking about them. He believes that all actions are caused by thoughts, but what happens when we have a reaction that results from something forced upon us, such as when a doctor hits the patellar tendon with a reflex hammer to test the knee-jerk reflex? Russell does not answer this question. He holds only that it is "highly probable" that other minds exist, through his postulate that A is the cause of B.
The purpose of this paper is to present John Searle's Chinese room argument, which challenges the notions of the computational paradigm, specifically the capacity for intentionality. I will then outline two of the commentaries that followed: the first by Bruce Bridgeman, who opposes Searle and uses the super robot to exemplify his point; the second by John Eccles, whose response entails general agreement with Searle along with a few objections to definitions and comparisons. My own argument will take a minimalist computational approach, delineating understanding and its importance to the concepts of the computational paradigm.
John Searle's Chinese room argument, from his work "Minds, Brains, and Programs," was a thought experiment against the premises of strong Artificial Intelligence (AI). Those premises hold that something is of the strong AI nature if it can understand and if it can explain how human understanding works. I will argue that the Chinese room argument successfully disproves the conclusion of strong AI; however, it does not provide an explanation of what understanding is, which becomes problematic when drawing a distinction between humans and machines.
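The intuition behind the thought experiment can be made concrete with a toy program. This is a hedged illustration only: the rule book, the Chinese strings, and the function name are all invented, and a real rule book would be vastly larger, but the structure is the same as Searle's room: input strings are mapped to output strings by lookup alone, with no grasp of their meaning.

```python
# A toy "Chinese room": replies are produced by rule lookup alone.
# The operator of this program need not understand a word of Chinese.

RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",   # "What is your name?" -> "My name is Xiao Ming"
}

def room(message: str) -> str:
    """Return the scripted reply for a message, or a stock fallback."""
    return RULE_BOOK.get(message, "对不起")  # fallback: "Sorry"

print(room("你好吗"))  # fluent-looking output, zero understanding
```

From the outside the replies may look competent; Searle's point is that nothing in this process constitutes understanding.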
This leaves a particularly large hole in identity theory. Given neural dependence and the causal problem, it is almost impractical to endorse any type of dualism, but multiple realizability makes identity theory suspect as well. Emotional additives, and the fact that epiphenomenalism is self-undermining but not impossible, also lead to slight suspicion of physicalism in general. In short, this paper set out to endorse and defend identity theory but has concluded nothing definitively.
Intelligence tests have been developed by scientists as tools to categorize army recruits or analyze schoolchildren. But because academics are still debating what intelligence is, they have a difficult time defining what intelligence tests should measure. According to the American researcher Thorndike, intelligence is only what intelligence tests claim it is (Comer, Gould, & Furnham, 2013). Thus, depending on what is being researched in the test and on the scientist's definition of intelligence, the meaning of the word intelligence may vary a great deal. This essay will discuss what intelligence is in order to understand the intelligence theories and the aims of intelligence tests.
In this paper I will present and evaluate A. M. Turing's test for machine intelligence and describe how the test works. I will explain how the Turing test is a good way to answer whether machines can think. I will also discuss Objection (4), the Argument from Consciousness, and Objection (6), Lady Lovelace's Objection, and how Turing responded to both. Lastly, I will give my opinion about the Turing test and whether it is a good way to answer whether a machine can think.
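The mechanics of Turing's imitation game can be sketched as a short simulation. This is only an illustrative skeleton under stated assumptions: the respondents, the prompt, and the function names are all invented stand-ins, and here both respondents give identical replies so that the interrogator has nothing to go on.

```python
import random

# A bare-bones sketch of the imitation game: an interrogator receives a
# reply from a hidden respondent and must guess whether it came from the
# human or the machine. The machine "passes" when the interrogator does
# no better than chance.

def human(prompt):
    return "I felt the sun on my face this morning."

def machine(prompt):
    return "I felt the sun on my face this morning."

def imitation_game(interrogator, trials=1000):
    """Return the fraction of trials where the interrogator guessed wrong."""
    fooled = 0
    for _ in range(trials):
        respondent = random.choice([human, machine])
        reply = respondent("Describe a recent sensation.")
        guess = interrogator(reply)  # returns "human" or "machine"
        actual = "human" if respondent is human else "machine"
        if guess != actual:
            fooled += 1
    return fooled / trials

# Against identical replies, an interrogator can only guess at random,
# so the error rate hovers near 0.5 — the machine passes.
rate = imitation_game(lambda reply: random.choice(["human", "machine"]))
```

The sketch makes plain what the test measures: indistinguishability of behavior, not any inner property, which is exactly what Objections (4) and (6) press on.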
Traditional theories of intelligence do not account for the ambiguity of classes such as philosophy or for the wide range of interests a child can have. For example, contemporary theories such as Sternberg’s Theory of Intelligence and Gardner’s Theory of Multiple Intelligences both account for more than the general intelligence accounted for in traditional intelligence theories. According to Robert Sternberg’s Successful (Triarchic) Theory of Intelligence, are Hector’s difficulties in philosophy indicative of future difficulties in the business world? According to Sternberg’s Theory of Intelligence, Hector’s difficulty in philosophy will not negatively affect his future. Sternberg would instead focus on elements of successful intelligence like Hector’s involvement and contribution as an individual, as opposed to relying on intelligence measured by tests.
Functionalism is a materialist stance in the philosophy of mind that argues that mental states are purely functional, and thus categorized by their input and output associations and causes, rather than by the physical makeup that constitutes their parts. In this manner, functionalism argues that as long as something operates as a conscious entity, then it is conscious. Block describes functionalism, discusses its inherent dilemmas, and then discusses a more scientifically driven counter-solution called psychofunctionalism and its failings as well. Although Block's assertions are cogent and well presented, the psychofunctionalist is able to provide counterarguments to support his viewpoint against Block's criticisms. I shall argue that though both concepts are not without issue, functionalism offers a more acceptable description that philosophers can admit over psychofunctionalism's chauvinistic disposition, which attempts to limit consciousness only to the human race.
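The functionalist criterion of "same inputs and outputs, same mental state" can be illustrated with a toy example. The classes, stimuli, and responses below are invented for illustration: the two implementations have entirely different internals, yet realize the same functional role, which is the multiple-realizability point functionalism trades on.

```python
# Two "realizers" of the same functional state: individuated by their
# input/output profile, not by their internal makeup.

class CarbonPain:
    """Pain as (pretend) neural firing."""
    def respond(self, stimulus):
        if stimulus == "pinprick":
            return "withdraw"
        return "ignore"

class SiliconPain:
    """The same role realized as a (pretend) lookup table."""
    TABLE = {"pinprick": "withdraw"}
    def respond(self, stimulus):
        return self.TABLE.get(stimulus, "ignore")

# Different physical stories, identical functional behavior — so for
# the functionalist, the same mental state.
assert CarbonPain().respond("pinprick") == SiliconPain().respond("pinprick")
```

Psychofunctionalism, by contrast, would restrict the roles to those described by human psychology, which is the chauvinism the paragraph above objects to.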
The traditional notion that seeks to compare human minds, with all their intricacies and biochemical functions, to artificially programmed digital computers is self-defeating, and it should be discredited in dialogues regarding the theory of artificial intelligence. This traditional notion is akin to comparing, in crude terms, cars and aeroplanes, or ice cream and cream cheese. Human mental states are caused by various behaviours of elements in the brain, and these behaviours are determined by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss the mental states of systems, it is important to distinguish between human brains and those of any natural or artificial organisms said to have central processing systems (i.e. the brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of functions and behaviour, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether or not mental states exist at all in systems other than our own, in this paper I will strive to present arguments that a machine that computes and responds to inputs does indeed have a state of mind, but one that does not necessarily result in a form of mentality. This paper will discuss how the states and intentionality of digital computers differ from the states of human brains and yet are indeed states of a mind resulting from various functions in their central processing systems.
This essay will address the question of whether computers can think, possess intelligence, or have mental states. It will proceed from two angles. Firstly, it is required to define what constitutes "thinking." An investigation into this debate, however, demonstrates that the very definition of thought is contested ground. Secondly, it requires reflection on what form artificial intelligence should take, be it a notion of "simulated intelligence," the weak AI hypothesis, or "actual thinking," the strong AI hypothesis (Russell, Norvig p 1020). The first angle informs us of the theoretical pursuit of what it means for something to think, whereas the second seeks to probe how it could be demonstrated that thinking is occurring. As a result we have two fissures: on one hand, a disagreement over what constitutes thinking, and on the other, a question of the methodological approaches to AI. However, this essay will argue that both proponents of the possibility of AI and its detractors are guilty of an anthropomorphic conception of thought. This is the idea that, implicit in the question of whether computers can think, we are really asking whether they can think like us. As a result this debate can be characterised as being concerned with a narrow human understanding of the concept of thought. I will argue that this flaw characterises the various philosophical theories of artificial intelligence.
Artificial intelligence was a figment of our imaginations in the past, but it is a reality of our future. As a kid, movies like Smart House and I, Robot were just cool ideas that I never could have imagined would be real someday. Artificial intelligence has made false realities of the past real. Joi Ito, Neil Harbisson, and the movie I, Robot all discuss different views from which we can understand artificial intelligence. Through the views of Ito, Harbisson, and I, Robot, we can analyze how artificial intelligence has changed and will change the future within the ideas and conclusions these authors have come to.
We will discuss the article "Intentional Systems Theory" by the philosopher Daniel Dennett. The argument we are going to use from this theory concerns the intentional stance, where Dennett holds that both humans and objects have beliefs and desires, and that from these their behavior can be interpreted. From the article itself, Intentional Systems Theory is defined as an analysis of the meanings of terms such as 'believe', 'desire', 'expect', 'decide', and 'intend' — the terms of 'folk psychology' that we use to interpret, explain, and predict the behavior of other human beings, including ourselves, animals, and some artifacts such as robots and computers (Dennett, 2009).
We all know that computers can help a jumbo jet land safely in the worst of weather, aid astronauts in complex maneuvers in space, guide missiles accurately over vast stretches of land, and assist doctors and physicians in creating images of the interior of the human body. We are lucky and pleased that computers can perform these functions for us. But in doing them, computers show no intelligence; they merely carry out lengthy, complex calculations while serving as our obedient helpers. Yet the question of whether computers can think, whether they are able to show any true intelligence, has been a controversial one from the day humans first realized the full potential of computers. Exactly what intelligence is, how it comes about, and how we test for it have become issues central to computer science and, more specifically, to artificial intelligence. In searching for a domain in which to study these issues, many scientists have selected the field of strategic games. Strategic games require what is generally understood to be a high level of intelligence, and through these games, researchers hope to measure the full potential of computers as thinking machines (Levy & Newborn 1).
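The tension the paragraph describes — play that looks intelligent but is "merely" calculation — is visible in the core algorithm of early game-playing programs, minimax search. The following is a minimal sketch; the toy game tree and values are invented, not drawn from Levy and Newborn.

```python
# Minimal minimax: choose moves by exhaustive lookahead, assuming the
# opponent plays optimally. A leaf is an integer score; an internal
# node is a list of child positions.

def minimax(node, maximizing):
    """Return the game-theoretic value of a position in a toy tree."""
    if isinstance(node, int):      # terminal position: its score
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny two-ply tree: the maximizer moves first, then the minimizer.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # -> 3
```

The program's "skill" is nothing but this recursion scaled up with pruning and heuristics, which is precisely why strategic games became a testbed for the question of whether calculation alone amounts to thinking.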