“Man is a robot with defects” (Emile Cioran, The Trouble With Being Born). Humans are not perfect, yet we seem to strive for perfection, so who is to say that in the future robots will not outnumber the human race on Earth? In Star Trek: The Next Generation, the character Data is a robot rather than a human: he is composed of inorganic materials but designed with a human appearance (an android). Does that make him merely a robot? The show proposes that to be a sentient being, and therefore a person, one must possess three qualities: intelligence, self-awareness, and consciousness. Measured against these three conditions, it is clear that Data is in fact a sentient being with the qualities of a person.
To begin, to be classified as a sentient being, one must exhibit intelligence. It is important first to establish what intelligence is: intelligence is the ability to acquire and apply knowledge and skills. Data is composed entirely of inorganic parts such as circuit boards, wiring, metal, and sensors; in effect, he is a walking, talking computer. On the subject of intelligence, then, it is easiest to treat Data as a computer in order to determine whether or not he is intelligent. The debate over whether computers are intelligent is well supported on both sides of the argument.
The position that computers are intelligent is supported by three points: refusing to say that computers are intelligent is prejudice against computers; being intelligent does not mean that one must be knowledgeable in all fields, since intelligence in a single area still displays intelligence; and there is no single qualification for intelligence; intelligence is measure...
...therefore Data has consciousness.
In a nutshell, the character Data from Star Trek: The Next Generation is in fact a sentient being with the status of personhood. He has satisfied all of the conditions of being a sentient person. Data is an intelligent being with the capacity to learn and apply knowledge; he has self-awareness, being aware of his own desires; and finally, Data is a conscious being, able to acknowledge his existence and his own thoughts. Though Data is only a fictional android, and the problem of determining a non-human's status of personhood has little to no application in our present day, in the not-so-distant future this may become a very serious debate with very genuine consequences.
... in the 21st century, and it may already dominate human life. Jastrow predicted that computers would become part of human society in the future, and Levy's real-life examples match Jastrow's prediction. The computer intelligence Jastrow described was based on imitating the human brain and its reasoning mechanisms. According to Levy, however, computer intelligence today is about developing AI's own reasoning patterns and handling complicated tasks from data sets and algorithms, which is nothing like a human. From Levy's view of today's version of AI technology, Jastrow's prediction about AI evolution is not going to happen. Because computer intelligence does not aim to recreate a human brain, the whole idea of computers substituting for humans does not hold. Levy also argues that it is pointless to fear that AI may control humans, since people in today's society cannot live without computer intelligence.
Andy Clark argues strongly for the theory that computers have the potential to be intelligent beings in his work “Mindware: Meat Machines.” Clark defends his claim by comparing humans and machines as systems that manipulate arrays of symbols to perform functions. The main argument of his work can be interpreted as follows:
Nowadays technology allows us to upload all of a dead person's memories to a computer and create a robot. But can we say the robot is a person? Or can we say the person is still alive? The robot does have the memories, even the personality, of that person before he passed on. But robots and humans are different: humans have flesh and blood, while robots are made of metal. Although it is technologically achievable for robots to react appropriately to different feelings such as pain and itching, these reactions are artificial and are not real “feelings”; metal does not feel the way skin does.
We live in a time when technology is at the center of our society. We use technology on a daily basis, for the simplest tasks or to aid us in our jobs, and we don't give a second thought to whether these tools are actually helping us. Writers such as Kevin Kelly and Clive Thompson argue that the use of technology actually helps us humans, while writers such as Nicholas Carr argue that technology negatively affects people's ability to learn information.
The “human sense of self control and purposefulness, is a user illusion” (261); therefore, if computational systems are comparable to human consciousness, the question arises whether such artificial systems should be treated as humans. Such programs are even capable of learning like children, with time and experience; the programs “[get] better at their jobs with experience.” However, many can argue that the difference is self-awareness, and that there are many organisms that can carry out such complex behavior yet have no sense of identity.
On p. 673, James proposed a question, wondering whether one could accept what he dubbed the “Automatic Sweetheart” (a robot) as human if it were made with no noticeable difference between machine and human. It would be a soulless body that could laugh, show emotion, and do everything a human could do, as if a soul were present in it. Could we accept it as human? James thought not: as humans, we crave attention. We crave love and admiration, and the need to be recognized.
When watching the Star Trek episode, I concluded that Data was a “person”. In the courtroom, Data revealed that he knows he is fighting for his rights and possibly his life. I believe that Data should be considered a person because he is aware of what he is, what he is on trial for, and what the results of the trial would mean for him. In addition, although Data has some oddities that humans do not have - e.g., superhuman strength - it was implied that he had an understanding of emotions. For example, it was shown that Data kept all of his medals and awards in a display case because he “wanted” them. When a person keeps accolades, it is usually because they are proud of themselves for achieving a goal or because they want to be able to look back
The conditions of the present scenario are as follows: a machine, Siri*, capable of passing the Turing test, is being insulted by a 10-year-old boy, whose mother is questioning the appropriateness of punishing him for his behavior. We cannot answer the mother's question without speculating as to what A.M. Turing and John Searle, two 20th-century philosophers whose views on artificial intelligence are starkly contrasting, would say about this predicament. Furthermore, we must provide fair and balanced consideration of both theorists' viewpoints because, ultimately, neither side can be “correct” in this scenario. But before we compare hypothetical opinions, we must establish operational definitions for all parties involved. The characters in this scenario are the mother, referred to as Amy; the 10-year-old boy, referred to as the Son; Turing and Searle; and Siri*, a machine that will be referred to as an “it,” to avoid an unintentional bias in favor of or against personhood. Now, to formulate plausible opinions that could emerge from Turing and Searle, we simply need to remember which tenets underpin their respective schools of thought and apply them logically to the given conditions of this scenario.
The great philosopher Aristotle believed that humans have a fixed nature that should not be tampered with, whereas the 20th-century philosopher Jean-Paul Sartre believed that “existence precedes essence,” meaning that humans have the freedom to choose what they wish to do. These two philosophical theories clash over whether humans should alter our natural human nature, and over the issue of cyborgs. According to Merriam-Webster's Dictionary, a cyborg is defined as “a person whose body contains mechanical or electrical devices and whose abilities are greater than the abilities of normal humans.” Due to advances in technology, today we are able to create artificial chips, organs, implants, and other “life-like” body parts which can greatly enhance humans' lives. The ethical debate we have today is whether it is morally right to artificially implant objects in humans and create cyborgs.
But what is intelligence? “Intelligence is often defined as the ability to adapt to the environment” (Sternberg). Computers can indeed adapt to their environment, as demonstrated by various evolution simulators, in which the computer gathers information about its surroundings and comes to appropriate decisions in order to best survive. How can a computer, this box of wires and electricity, even begin to come up with decisions on its own? Well, just like any other human, it has to learn, as sketched below.
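As a minimal, hypothetical sketch of this kind of trial-and-error learning (the actions, payoffs, and parameters here are invented purely for illustration), a program can start with no knowledge of its environment and still adapt by tracking which choices have worked out best so far:

```python
import random

# A toy illustration of trial-and-error adaptation: the "agent" does not know
# which of two actions is better, but it learns by tracking average rewards.
REWARDS = {"stay": 0.2, "move": 0.8}    # hidden environment payoffs (invented)

estimates = {"stay": 0.0, "move": 0.0}  # the agent's learned value estimates
counts = {"stay": 0, "move": 0}

for step in range(1000):
    # Mostly exploit what looks best so far, but occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(REWARDS))
    else:
        action = max(estimates, key=estimates.get)

    # Observe a noisy reward from the environment and update the running average.
    reward = REWARDS[action] + random.gauss(0, 0.1)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # after enough trials, "move" is recognized as the better choice
```

Nothing in this sketch “understands” anything; it simply shows how repeated feedback from an environment can shape a computer's future decisions.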
Margaret Boden's “Artificial Intelligence: Cannibal or Missionary” is a credible primary-source article rebutting common concerns about artificial intelligence. Boden uses strong logic to combat the ideas that artificial intelligence makes humans less special and that artificial intelligence causes people to be dehumanized. Boden concludes that these fears of dehumanization and of feeling less special are unfounded, and that the more genuine concern is people over-relying on AI.
In order to see how artificial intelligence plays a role in today's society, I believe it is important to dispel any misconceptions about what artificial intelligence is. Artificial intelligence has been defined in many different ways, but the commonality among them is that artificial intelligence is the theory and development of computer systems able to perform tasks that would normally require human intelligence, such as decision making, visual recognition, or speech recognition. However, human intelligence is a very ambiguous term. I believe there are three main attributes an artificial intelligence system has that make it representative of human intelligence (Source 1). The first is problem solving: the ability to look ahead several steps in the decision-making process and to choose the best solution (Source 1), as the sketch after this paragraph illustrates. The second is the representation of knowledge (Source 1). While knowledge is usually gained through experience or education, intelligent agents could very well have a different form of knowledge. Access to the internet, the la...
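As a minimal, hypothetical sketch of that first attribute, looking ahead several steps and choosing the best solution (the tree of choices and the scores below are invented purely for illustration), a depth-limited search might look like this:

```python
# A toy decision tree: from "start" the program can move to "a" or "b",
# and each of those leads to outcomes with different (invented) scores.
TREE = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
SCORES = {"a1": 3, "a2": 7, "b1": 6, "b2": 4}  # value of each final outcome

def best_value(state, depth):
    """Return the best achievable score from `state` within `depth` steps."""
    if depth == 0 or state not in TREE:
        return SCORES.get(state, 0)
    return max(best_value(child, depth - 1) for child in TREE[state])

def choose(state, depth=2):
    """Pick the immediate move whose best reachable outcome is highest."""
    return max(TREE[state], key=lambda child: best_value(child, depth - 1))

print(choose("start"))  # "a", because it leads to the best outcome (a2 = 7)
```

The program does not “know” why one branch is better; it simply evaluates the outcomes reachable within a few steps and picks the move that leads to the highest one, which is the sense of problem solving described above.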
The traditional notion that seeks to compare human minds, with all their intricacies and biochemical functions, to artificially programmed digital computers is self-defeating, and it should be discredited in dialogues regarding the theory of artificial intelligence. This traditional notion is akin to comparing, in crude terms, cars and aeroplanes, or ice cream and cream cheese. Human mental states are caused by various behaviours of elements in the brain, and these behaviours are in turn governed by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss the mental states of systems, it is important to distinguish between human brains and those of any natural or artificial organism said to have a central processing system (i.e. the brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of function and behaviourism, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether mental states exist at all in systems other than our own, in this paper I will strive to present arguments that a machine that computes and responds to inputs does indeed have a state of mind, but one that does not necessarily result in a form of mentality. This paper will discuss how the states and intentionality of digital computers differ from the states of human brains, and yet are indeed states of a mind resulting from various functions in their central processing systems.
I don't think there is any reason for these robots to have every ability that a human does. There is no way they are going to have the intelligence a human does. Artificial intelligence is only going to bring more harm to our communities. We cannot trust robots to do “everyday” human activities; they are going to lead to unemployment, and they will encourage a laziness that causes more obesity.
Would clones or cyborgs be considered nonhuman? Fukuyama believes that a computer should not be considered human because it lacks the basic sensory input and feelings of a human. Fukuyama goes on to say, “It is perfectly possible to design a robot with heat sensors in its fingers, the robot could keep itself from being burned, but it would actually be devoid of the most important quality of a human being, feelings” (199). This quote reflects that reg...