With the introduction of autonomous vehicles, several social dilemmas have entered the mainstream of debate. One of the biggest questions is whether autonomous vehicles should be primarily utilitarian in nature, meaning that they reduce the total number of injuries and deaths on roadways as much as possible, or self-protective in nature, meaning that they protect the occupants of the vehicle in every scenario, no matter what. The two approaches can't be mixed without causing unrest and debate over whether the vehicle made the correct decision; it has to be one or the other. However, when taking into account the primary purpose of developing autonomous vehicles, I believe that they should serve a utilitarian purpose, minimizing the total number of roadway casualties.
A utilitarian vehicle will sacrifice the safety of its occupants to preserve what it computes as the most human life and well-being possible (Bonnefon, Shariff, & Rahwan, 2016). For example, if the car had to decide whether to hit another car with a single passenger head on, risking the occupants of both cars, or swerve into a group of four bikers, risking all four of them, a utilitarian vehicle would hit the other car head on and possibly even direct the accident away from other cars and individuals. This would theoretically preserve more life than a self-protective autonomous vehicle, which would prioritize the safety of its passenger, choosing to hit the bikers instead of the other car, most likely killing the four of them while saving its passenger. Furthermore, the vehicle forced to choose between its passenger and the bikers may not have been even remotely at fault for the circumstances of the accident, which further complicates whether it is correct to attempt to avoid the oncoming car or to deliberately hit it to reduce the total number of casualties arising from the collision.
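The contrast between the two policies can be made concrete with a toy sketch. Everything here is hypothetical: the option names, the casualty estimates, and the idea that a vehicle would expose risk numbers this way are invented for illustration, not drawn from any real system.

```python
# A minimal sketch of the two crash-response policies discussed above.
# All names and numbers below are hypothetical.

def utilitarian_choice(options):
    """Pick the maneuver with the fewest expected casualties overall."""
    return min(options, key=lambda o: o["occupant_deaths"] + o["other_deaths"])

def self_protective_choice(options):
    """Pick the maneuver that best protects the vehicle's own occupants."""
    return min(options, key=lambda o: (o["occupant_deaths"], o["other_deaths"]))

# The head-on-vs-bikers scenario from the text, with made-up risk estimates:
options = [
    {"name": "hit oncoming car", "occupant_deaths": 2, "other_deaths": 1},
    {"name": "swerve into bikers", "occupant_deaths": 0, "other_deaths": 4},
]

print(utilitarian_choice(options)["name"])      # → hit oncoming car
print(self_protective_choice(options)["name"])  # → swerve into bikers
```

With these invented numbers, the utilitarian rule accepts three expected deaths over four, while the self-protective rule accepts four deaths of outsiders over any risk to its own occupants, which is exactly the divergence the paragraph describes.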
More specifically, would consumers only buy vehicles that are self-protective? Would government legislation be required to ensure that all vehicles are utilitarian, if need be? And what would keep people from simply reprogramming a car to become self-protective? First, a study published in Science shows that while consumers would prefer that all autonomous vehicles be utilitarian, they would only buy and ride in one that was self-protective (Bonnefon, Shariff, & Rahwan, 2016). While contradictory and hypocritical, this makes sense: people view others, in utilitarian terms, as mere statistics, while they view themselves, their friends, and their family as more than that and want the absolute safest option for them. This attitude may hinder the development and integration of autonomous vehicles, since some form of agreement or legislation would be required to ensure that all vehicles are utilitarian in nature. The problem is that legislation takes time, and so do industry-wide agreements, which would further delay the rollout of autonomous vehicles, potentially costing more lives in the long run. However, such legislation would be necessary to accomplish what autonomous vehicles were originally designed to do: reduce road casualties. And, as with everything else, people can find a loophole. Currently, even cars with locked-down ECUs can be modified by determined owners.
There are a huge number of details that need to be worked out. My first thought was to go with the utilitarian approach, minimizing the loss of life and saving the greatest number of people, but upon further reflection I started to see the problems with it. The utilitarian approach is too simplistic. It raises all kinds of questions, such as whether the computer will weigh fault when it decides what to do. For example, if I am in the car with my young child, and three eighty-year-old drunks wander out in front of my car because they are drunk by their own choice, will the car choose them over me and my child because there are three of them? I would not want the computer to make that decision for me, because frankly I probably would not make that decision myself. That kind of computer decision would probably deter many people, including me, from buying a self-driving car. It is the same paradox that MIT Technology Review refers to when it says, “People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves” (“Why Self-Driving Cars”, 2015).
Ethical issues are the most notable among these. In “Why Self-Driving Cars” (2015), a typical ethical dilemma arises: a driverless car can be programmed either to save its passengers by endangering innocent bystanders or to sacrifice its owner to avoid crashing into a crowd. Knight (2015) cites Chris Gerdes, a professor at Stanford University, who gave another scenario in which an automated car can save a child’s life but injure the occupant of the car. The real problem, as Deng (2015) indicates, is that a car cannot reason through ethical choices and decisions itself the way a human does; it must be preprogrammed to respond, which raises widespread concern. In effect, programmers and designers shoulder the responsibility, since those tough choices and decisions must all be made by them prior to any specific emergency, while the public tends to tolerate such “pre-made errors” less (Knight, 2015; Lin, 2015). In addition to these subjective factors in SDC development, Bonnefon and colleagues identify a paradox in public opinion: people are positively disposed toward an automated algorithm designed to minimize casualties, yet cautious about owning a vehicle with such an algorithm, which could endanger them (“Why Self-Driving Cars”, 2015).
The goals behind self-driving cars are to decrease collisions, traffic jams, and the use of gas and harmful pollutants. The autonomous automobile is able to maneuver around objects and create swift lines of cars on roadways (How Google’s Self-Driving Car Works, 2011). The autonomous vehicle can react faster than humans can, meaning fewer accidents and the potential to save thousands of lives. Another purpose and vision for these cars is that vehicles would become a shared resource: when someone needed a car, he or she could just use a smartphone and a self-sufficient car would drive up and pick him or her up.
Now, I am very interested in cars and I love almost every aspect of them, but did you know that each year about 1 million people die from car accidents, and that 81% of these accidents are caused by human error? One million people, gone just like that. Fortunately, there’s a new technology that could dramatically decrease this number: self-driving cars. A self-driving car is a car that is capable of sensing its environment and navigating without human input. Currently, about 33 companies, including Tesla, BMW, and Google, are working to create self-driving cars that can prevent human errors and change the way people view driving. Self-driving cars have other benefits besides preventing human error, such as less traffic congestion and lower fuel consumption. However, with these benefits come costs, such as cybersecurity problems and ethical dilemmas. So, should we have self-driving cars or not?
It might be hard to see where the self-driving car could have issues with safety, but an interesting question arises when an accident is unavoidable. The question posed is: “How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?” (ArXiv). This is a very interesting ethical question. I’m not sure there is a right answer, which could stall the self-driving car industry. Before self-driving cars are mass produced, a solution needs to be found for the question of unavoidable accidents. Although this question is a problem, there may not be a need to address it. It is said that “driver error is believed to be the main reason behind over 90 percent of all crashes,” with drunk driving, distracted drivers, failure to remain in one lane, and failure to yield the right of way the main causes (Keating). Self-driving cars could eliminate those problems entirely, and perhaps with all cars on the road being self-driving, there would be no “unavoidable accidents.” Safety is the main issue the self-driving car is trying to solve in transportation, and it seems to do a good job at it.
In source #3, paragraph 4, it says “surveyed people want to ride in cars that protect passengers at all costs—even if the pedestrians would now end up dying.” This is important because self-driving cars create a conflict within society about whom the car should save. Those surveyed are also in conflict with themselves, trying to decide which outcome would be better. In source #3, paragraph 13, it states “people imagined actually buying a driverless car...people again said pedestrian-protecting cars were more moral...people admitted that they wanted their own car to be programmed to protect its passengers.” This shows that when you actually think about the reality of owning a driverless car, you wouldn’t want to die in an accident when you could have been saved. Likewise, as a pedestrian, you wouldn’t want to be hit by a car when you could have been saved. There are different perspectives to consider. In conclusion, this shows that society still isn’t sure about the self-driving car.
Inventors hope to help people with autonomous cars because “autonomous cars can do things that human drivers can’t” (qtd. in “Making Robot Cars More Human”). One of the advantages that driverless cars have is that “They can see through fog or other inclement weather, and sense a stalled car or other hazard ahead and take appropriate action” (qtd. in “Making Robot Cars More Human”). Harsh weather conditions make it difficult and dangerous for people to drive; however, the car’s ability to drive through inclement weather “frees the user’s time, creates opportunities for individuals with less mobility, and increases overall road safety” (Bose 1326). With all the technology and software in the car, it can “improve road traffic system[s] and reduces road accidents” (Kumar). One of the purposes for creating the driverless car was to help “make lives easier for senior citizens, people with disabilities, people who are ill, or people who are under influence of alcohol” (Kumar). It can be frightening to know that we share our roads with drivers who could potentially endanger our lives as well as other people’s lives. How can people not feel a sense of worry when “cars kill roughly 32,000 people a year in the U.S.” (Fisher 60)? Drivers who text while driving or drink and drive greatly impact the safety of other people, and Google hopes to reduce the risk of accidents and save lives with the driverless car.
Opponents also object to self-driving cars on the grounds of personal privacy. The obvious point is that if you use a vehicle that is entirely controlled by a computer, your movements are extremely easy for the company or a third party to track. And just as operating systems can be hacked, so can self-driving cars. Self-driving cars face serious privacy challenges.
Self-driving cars are the wave of the future. There is much debate regarding the impact a self-driving car will have on our society and economy. Some experts believe fully autonomous vehicles will be on the road in the next 5-10 years (Anderson). This means a vehicle will be able to drive on the road without a driver or any passengers. As with any groundbreaking technology, there is a fear of unforeseen problems. Therefore, there will need to be extensive testing before anyone can feel safe with a vehicle of this style on the road. It will also take time for this type of technology to become financially accessible to the masses, but, as with any technology, with time it should be possible. Once the safety concern has been fully addressed, widespread adoption can follow.
Automotive executives touting self-driving cars as a way to make commuting more productive or relaxing may want to consider another potential marketing pitch: safety (Hirschauge, 2016). The biggest reason these cars will make a safer world is that accident rates will drop enormously. There is a lot of bad behavior drivers exhibit behind the wheel, and a computer is actually an ideal motorist. Since 81 percent of car crashes are the result of human error, computers would take a lot of danger out of the equation entirely. Also, some major causes of accidents are drivers who become ill while driving: examples include a seizure, heart attack, diabetic reaction, fainting, and high or low blood pressure. Autonomous cars would surely remedy these types of occurrences, making us safer.
Regardless of whether driverless cars can figure out how to interact with human-driven cars, human drivers won’t be able to cope with driverless cars. The resulting confusion would lead to more accidents rather than fewer. Another reason driverless cars should not be sold is that their systems could fail. Driverless car technologies are not yet ready for use.
Is it ever acceptable, ethically speaking, to program a self-driving car to sacrifice its occupants? Should every self-driving car be programmed with the same algorithm, or should occupants be able to select from a multitude of algorithms? How do we compare and weigh different types and quantities of harms across different people?
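The second question above, letting occupants select among a multitude of algorithms, can be sketched as a simple policy table. This is purely illustrative: the policy names and the harm scores are assumptions for the sake of the example, not anything a real vehicle implements.

```python
# Illustrative sketch of occupant-selectable crash policies.
# Policy names and harm scores are invented for this example.

POLICIES = {
    # Minimize total expected harm, whoever bears it.
    "utilitarian": lambda option: option["total_harm"],
    # Minimize harm to the vehicle's own occupants first.
    "self_protective": lambda option: option["occupant_harm"],
}

def decide(options, policy_name):
    """Return the option that the selected policy scores lowest."""
    score = POLICIES[policy_name]
    return min(options, key=score)

options = [
    {"name": "brake hard", "occupant_harm": 3, "total_harm": 3},
    {"name": "swerve",     "occupant_harm": 0, "total_harm": 5},
]

print(decide(options, "utilitarian")["name"])      # → brake hard
print(decide(options, "self_protective")["name"])  # → swerve
```

The sketch makes the third question vivid as well: the two policies disagree precisely because there is no agreed way to weigh three units of harm to occupants against five units spread across everyone.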
The results of the first study show that people approve of sacrificing the AV’s passenger in order to save pedestrians. In addition, the results show greater willingness to see this sacrifice legally enforced when the decision is made by the car rather than by a human driver. In the second study, participants generally thought that AVs should be programmed to save their passengers at all costs. This study also showed that participants generally supported others buying cars programmed to self-sacrifice, but when asked about themselves, they were less willing to buy such cars.
Imagine a scenario in the near future where self-driving cars are a common sight. People are familiar with machines making decisions for them. Nobody questions the effectiveness of these machines. One day, a car is driving its occupant down a winding road when, all of a sudden, a child runs into the street. The car must now make a decision based on the instructions given to it upon creation. Does the car swerve and crash to miss the child, killing the passenger? Or does it kill the child to save the passenger? This is an ethical problem that has been debated for many years, ever since the first work on artificial intelligence. When we create intelligent machines that are able to make decisions on their own, it is inevitable that some of those decisions will be unfavorable to someone.
One significant advantage of this approach is increased safety. Studies on public transportation have found that it reduces injuries and fatalities, with figures such as 28 times more car passengers injured than bus passengers, and noticeably more pedestrians and cyclists injured by cars than by buses [3]. Additionally, it is widely speculated that autonomous vehicles will also drastically reduce injuries and fatalities from driving, as computers react faster than humans and do not become distracted or tired.