In class we watched a Star Trek episode whose main focus was Data, an android built by a man, hardware that gave a machine the ability to act like a real human. The conflict of the episode was whether or not Data would undergo an experiment to be conducted by Maddox, a Starfleet officer. Data himself did not trust Maddox to dismantle him, because there was a possibility he would not come back the same once the experiment was over. Data was not the only one who objected; Captain Jean-Luc Picard was not going to permit Data to be taken apart either. Picard and Data had worked closely together in Starfleet, and Picard believed Data was more than a simple machine.
He therefore held that Data's decision not to take part in the experiment should be respected. Yet under Starfleet's rules, Data was considered property, so Maddox was allowed to conduct his experiment without Data's approval, since Data was regarded as just a machine. Picard, however, took the case to trial in hopes of overturning that ruling. At the hearing, Riker was assigned to argue Maddox's side, while Picard defended Data. To prove that Data was just a machine, Riker removed Data's arm and emphasized that he was simply a device of wires built by a human. Picard then began his argument by questioning Data. Before the trial, Data had been hesitant to participate but was being forced into the experiment, so he had planned to resign and leave Starfleet, and he had packed all of his personal belongings. At the hearing, Picard produced those belongings and asked Data why he had chosen to take each object with him. Asked about a book, Data answered that it was a present from Picard and that he wished to keep it because it was a special gift to him. Another item was a hologram of a woman; Data expressed that he had had intimate feelings toward that person and wished to remember her wherever he went. Every item Data was questioned about showed that he did indeed have feelings, just like a regular human being. Picard then questioned Maddox, asking what he thought an individual needs in order to be conscious. Maddox responded that one needs intelligence and self-awareness. Picard showed that Data, though a machine, was highly intelligent, was very much aware of his surroundings, and was fully aware of every decision he made, including the reasoning behind his refusal to participate in the experiment. In doing so, Picard proved to Maddox that Data was conscious and had every right to refuse the procedure. The ruling came down in Data's favor, and the procedure never took place. Before watching the episode, I was skeptical about whether a machine could be conscious.
I believed that a human could not build a machine that is conscious in the way a human being is. A machine is built to follow orders and complete tasks; I never would have thought that a machine could be built to be conscious, that it could show emotion for something or someone, or that it could be aware not only of its external environment but of its internal self as well. After watching the episode, though, my perspective has changed, at least to a certain extent. My definition of consciousness is being fully aware of one's surroundings and of the information being presented. Once someone is aware, they can gather and process that information, and if they process it successfully they are capable of making decisions. A human is conscious because they make everyday decisions based on this cognitive process. As for a machine, the episode convinced me that it may be possible for a machine to be conscious as well. A machine built like Data would be very intelligent, would understand what is happening around it, and would be able to make decisions that determine what it does next. Therefore, I do believe a machine can be conscious, but only to a certain extent. A machine could be built to work like a human brain, to be conscious, and to gain more consciousness through experience. I think that for a machine to be conscious it needs not only self-awareness and intelligence but also experience, so that it can develop feelings. In the episode, as Data gained more experience in Starfleet, he became attached to the people he frequently encountered and began to determine right from wrong based on how he felt and on what he thought was the best decision to make.
Although I believe a machine can be built to have consciousness, I do not think it is possible for a machine to be fully conscious at the same level as a human being. A machine may be created to learn to be conscious to a certain extent, but not to the full capacity of a normal person. A person has things a machine can never have naturally, like values and morals. A machine is something that is built and that makes decisions based on the information it is given. It can develop emotion through experience, and that may alter its cognitive process, but by nature it has no morals or values to follow, unlike a human. For example, we humans feel sympathy for many different things because it is something we naturally carry, whereas a machine would first need to experience something, perhaps more than once, in order to develop sympathy; as mentioned before, if it has never experienced it, the machine will simply decide based on what it has been told is right or wrong. Something similar holds for humans: if our brain cells were slowly replaced with machine cells that were exact replicas, we would still be conscious, but the instincts a human has, like morals, would be affected, and we would not be the same as we were with our original brain cells. In conclusion, after watching the episode I decided that machines can be built to have consciousness. They can be built with a certain amount of consciousness, but they will not be able to carry the amount of consciousness that a normal human being does. A human has certain characteristics that a machine will not have naturally, even if it is conscious; to acquire them, it will need experience and will have to learn to develop those characteristics. Through experience a machine will be able to grow its consciousness over time, but without experience it will never reach the full capacity of a normal human consciousness.
In his work "Mindware: Meat Machines," Andy Clark argues strongly for the theory that computers have the potential to be intelligent beings. The support Clark uses to defend his claim rests on the comparison that humans and machines alike use arrays of symbols to perform their functions. The main argument of his work can be interpreted as follows:
there is a deep need to probe the mysterious space between human thought and what a machine can do.
A major failing of robots and machines when placed in a human's position is that robots cannot improvise; they can only do what they are programmed to do. If Damasio is right, emotions are "improvised" by the human brain even before someone is conscious of what they are feeling, which makes it even harder to make machines feel true emotions. An example of this appears in Ray Bradbury's short story "August 2026." A completely automated house survives after nuclear warfare has devastated the Earth. Cheerful voices go on announcing schedules and birth dates, the stove prepares steaming hot food right on time, and robotic mice keep the house spotless and free of dust, in eerie contrast to the barren, destroyed city surrounding it. The house lets nothing in, closing its shutters even to birds, yet it lets in a sick and famished stray dog, which limps into the house and dies. To the robotic mice the dead dog is nothing but a mess that needs cleaning up: "Delicately sensing decay at last, the regiments of mice hummed out as softly as blown gray leaves in an electrical wind. Two-fifteen. The dog was gone. In the cellar, the incinerator glowed suddenly and a whirl of sparks leaped up the chimney." The house, which seems so cheerful as it cares for its attendants, has no compassion or reverence for the dog. The mice were programmed to clean up messes, and nothing beyond that. This is why in science
One of the key questions Rupert Sheldrake raises in Seven Experiments That Could Change the World is whether we are more than the ghost in the machine. To Sheldrake it is perfectly acceptable that humans are more than their brains, and because of this, in actual reality, "the mind is indeed extended beyond the brain, as most people throughout most of human history have believed" (Sheldrake, Seven Experiments 104).
In the episode "The Measure of a Man", Commander Maddox walks onto the Starship Enterprise and makes a request to perform an experiment on the android named Data. During Maddox 's visit, it becomes clear his plans center on shutting down Data’s memory base. Furthermore, Maddox wants to make sure that his actions will prevent Data from being able to ascertain how scientists were able to create him. Although Maddox promises Data that all of his memory and features will be restored, Data feels like he will not be the same after the examination. As a result, Data refuses to proceed. However, Maddox states that Data is Starfleet property, and he cannot resign from the assignment.
Double consciousness is the sense of having to look at oneself through the eyes of others, making it difficult to develop a sense of self. W. E. B. Du Bois used the term mostly in reference to the Black community in the early 1900s, but today it affects many Americans, no matter what their ethnicity is. Double consciousness therefore remains a significant factor in today's society.
"Too black for the white kids, too white for the black kids." "Where do I fit in?" These are common questions one may ask oneself when struggling with double consciousness. Many people struggle with double consciousness every day without even realizing the effects it has on them or on the people around them. The term double consciousness was introduced in 1903 by W.E.B. Du Bois, who used it to describe the internal conflict experienced by subordinated groups in an oppressive society. He conveyed this idea in his book The Souls of Black Folk. As stated before, double consciousness has many different effects on a person, such as struggling to fit in, feeling forced to pick a side (the black side or the white side), or eventually losing oneself.
The reading (a), from the 1967 Preface by Georg Lukács, presented three different arguments, and some of these themes have been mentioned in previous readings. Alienation, false consciousness, and standpoint contribute to a better understanding of how society is expected to function based on socially constructed ideas. Alienation is losing your persona and becoming numb to a particular activity. For example, you can become alienated from work when it is such a constant, daily routine that you lose connection with the real world. False consciousness is being misled into believing unrealistic ideas, so that people do not see what is in front of them; they see only what they want to see. It is known that the proletarians are
If a machine passes the test, then for many ordinary people that would be sufficient reason to say that it is a thinking machine. And, in fact, since it is able to converse with a human and to actually fool him and convince him that the machine is human, this would seem t...
The Turing test, introduced by Alan Turing (1912-1954), involves placing a human in one room and an artificial intelligence, otherwise known as a computer, in another, with an observer questioning both. Turing himself suggested that as long as the observer cannot tell whether it is the human or the computer answering in either room, the computer should be regarded as having human-level intelligence (Nunez, 2016). But does "human-level" intelligence mean it should be considered conscious? Is it more important to be clever or to be aware of being clever? Is it moral to create a conscious being that just serves our purposes? Aside from the moral implications, there are technical implications and parameters to consider.
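To make the setup above concrete, here is a minimal sketch, in Python, of how the imitation-game protocol could be wired together. Everything in it is hypothetical: the function names (run_turing_test, human_reply, machine_reply, observer_guess) are invented for illustration, and the "machine" is a trivial canned-reply stand-in rather than a real conversational AI. The point is only to show the structure of the test: an observer questions two hidden rooms and then guesses which one holds the human.

```python
import random

# Minimal, hypothetical sketch of the imitation game described above.
# The observer questions two hidden rooms (one human, one machine) and
# then guesses which room holds the human.

def human_reply(question: str) -> str:
    # Stand-in for a real person; in an actual test a human would answer here.
    return f"Honestly, I'd have to think about '{question}' for a moment."

def machine_reply(question: str) -> str:
    # Trivial canned-reply "AI"; a real test would use a conversational system.
    return "That is an interesting question. Could you rephrase it?"

def run_turing_test(questions, observer_guess):
    """Run one round and report whether the observer identified the human."""
    # Randomly assign the human and the machine to rooms A and B.
    assignment = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        assignment = {"A": machine_reply, "B": human_reply}

    # The observer sees only (room label, question, answer) triples.
    transcript = [(room, q, reply(q))
                  for q in questions
                  for room, reply in assignment.items()]

    guess = observer_guess(transcript)  # observer names the room they believe is human
    return assignment[guess] is human_reply

if __name__ == "__main__":
    # A naive observer that guesses the room whose answers vary more.
    def observer_guess(transcript):
        answers = {"A": set(), "B": set()}
        for room, _, answer in transcript:
            answers[room].add(answer)
        return max(answers, key=lambda room: len(answers[room]))

    correct = run_turing_test(["What is your favorite memory?",
                               "Describe the smell of rain."], observer_guess)
    print("Observer identified the human:", correct)
```

In this toy version the observer simply picks the room whose answers vary more, which is nothing like a real judge; it only illustrates that the observer works from the transcript alone, never from seeing who is in each room.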
"Consciousness is defined as everything of which we are aware at any given time - our thoughts, feelings, sensations, and perceptions of the external environment. Physiological researchers have returned to the study of consciousness, in examining physiological rhythms, sleep, and altered states of consciousness (changes in awareness produced by sleep, meditation, hypnosis, and drugs)" (Wood, 2011, 169). There are five levels of consciousness: conscious (sensing, perceiving, and choosing), preconscious (memories that we can access), unconscious (memories that we cannot access), non-conscious (bodily functions without sensation), and subconscious (the "inner child," a self-image formed in early childhood).
The traditional notion that seeks to compare human minds, with all their intricacies and biochemical functions, to artificially programmed digital computers is self-defeating, and it should be discredited in dialogues regarding the theory of artificial intelligence. This traditional notion is akin to comparing, in crude terms, cars and aeroplanes, or ice cream and cream cheese. Human mental states are caused by various behaviours of elements in the brain, and these behaviours are governed by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss the mental states of systems, it is important to distinguish between human brains and any natural or artificial organism said to have a central processing system (e.g. the brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of function and behaviour, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether or not mental states exist at all in systems other than our own, in this paper I will strive to argue that a machine that computes and responds to inputs does indeed have a state of mind, but one that does not necessarily result in a form of mentality. This paper will discuss how the states and intentionality of digital computers differ from the states of human brains and yet are indeed states of a mind, resulting from various functions in their central processing systems.
"Man is a robot with defects" (Emile Cioran, The Trouble With Being Born). Humans are not perfect, but we seem to strive for perfection, so who is to say that in the future robots will not outnumber the human race on Earth? In Star Trek: The Next Generation, the character Data is very much a robot and not a human, being composed of inorganic materials but designed with a human appearance (an android); but does that make him just a robot? In the show it is proposed that to be a sentient being and a person, one must possess three qualities: intelligence, self-awareness, and consciousness. By these three criteria it is clear that the character Data is in fact a sentient being with the qualities of a person.
Our minds have created many remarkable things; however, the best invention we have ever created is the computer. The computer has helped us in many ways, by saving time, giving accurate and precise results, and much more. But that does not mean we should rely on the computer to do everything; we can work with the computer to improve ourselves and, at the same time, improve the computer too. A lot of people believe that robots will someday behave like humans and will be walking the earth just like us. There should be a limit to everything so that our world remains peaceful and stable. In the end, we control the computers, and they should not control us.
In the past few decades we have seen computers become more and more advanced, challenging the abilities of the human brain. We have seen computers carry out complex assignments such as launching a rocket or analyzing data from outer space. But the human brain is responsible for thought, feelings, creativity, and the other qualities that make us human, so the brain has to be more complex and more complete than any computer. Besides, if the brain created the computer, the computer cannot be better than the brain. There are many differences between the human brain and the computer, for example, the capacity to learn new things. Even the most advanced computer can never learn the way a human does. While we might be able to install new information onto a computer, it can never learn new material by itself. Computers are also limited in what they "learn" by the memory or hard-disk space they have left, unlike the human brain, which is constantly learning every day. Computers can neither make judgments about what they are "learning" nor disagree with the new material; they must accept into their memory whatever is programmed into them. Besides, everything found in a computer is based on what the human brain has acquired through experience.