Science Media Centre

Who to kill? The dilemma of driverless cars – Expert reaction

Posted in Science Alert: Experts Respond on June 24th, 2016.

Driverless cars hold the promise of safer transport. But how should they react when loss of life appears inevitable?

Should a car swerve to miss a pedestrian on the road, even if doing so would kill the passenger? What if it was two people on the road? Or ten people?

New US research, published in Science, explores this ethical dilemma in a series of surveys, revealing that people generally want automated cars to be utilitarian (i.e. prevent the greatest loss of life) but when pressed, admit that they would prefer to buy a driverless car that protects the driver at all costs.

“If both self-protective and utilitarian AVs [automated vehicles] were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so,” write the authors.

“Most people want to live in a world where cars will minimize casualties,” says Iyad Rahwan, an associate professor in the MIT Media Lab and co-author of a new paper outlining the study. “But everybody wants their own car to protect them at all costs.”

“If everybody does that, then we would end up in a tragedy … whereby the cars will not minimize casualties.”

Regulation may provide a solution to this problem, the authors say, but their results also indicate that such regulation could “substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether.”
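The authors' adoption-delay argument is quantitative at heart, and a back-of-envelope sketch makes it concrete. Every figure below is hypothetical, chosen only to show the arithmetic, not drawn from the paper.

```python
# Back-of-envelope sketch of the authors' adoption-delay argument.
# Every number here is an assumption for illustration, not from the study.

road_deaths_per_year = 1000           # annual deaths under human driving (assumed)
av_risk_reduction = 0.9               # AVs assumed to prevent 90% of these
deaths_avoided_per_year = road_deaths_per_year * av_risk_reduction

extra_utilitarian_lives_per_year = 5  # extra lives saved by utilitarian programming (assumed)
delay_years = 3                       # adoption delay caused by regulation (assumed)
horizon_years = 10

# Regulated (utilitarian) AVs arrive late but save slightly more once adopted:
regulated = (horizon_years - delay_years) * (deaths_avoided_per_year
                                             + extra_utilitarian_lives_per_year)
# Unregulated (self-protective) AVs arrive immediately:
unregulated = horizon_years * deaths_avoided_per_year

print(regulated, unregulated)  # 6335.0 9000.0
```

Under these assumed numbers, the small per-year gain from utilitarian programming is swamped by the lives lost during the delay, which is the authors' point; different assumptions would of course tilt the comparison the other way.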

MIT has also created an interactive website, the Moral Machine, which allows users to work through various crash scenarios presented in the researchers’ survey.

The Science Media Centre collected the following commentary from New Zealand researchers.

Prof Hossein Sarrafzadeh, Director, Centre of Computational Intelligence for Cyber Security and High Tech Transdisciplinary Research Network, Unitec Institute of Technology, comments:

“The issue raised by this paper is an important one, but perhaps not the only one. We need to decide how much control we give to machines, and as machines are used more extensively in our lives this question becomes more central.

“This is an example of a global issue which is best studied using a transdisciplinary approach. More work needs to be done by social scientists, computer scientists, engineers, insurance companies and those involved in legislation.

“The challenge raised in the paper is similar to the decision a military jet pilot must make when a plane is crashing and risks coming down in a residential area: should the pilot risk their own life to keep flying the plane clear of that zone? The possibility of such cases arising in aviation is not high. The same may be true in the case of driverless cars.

“Many automotive companies and universities have already started research aimed at studying this life-and-death challenge. Although driverless cars will need to substantially reduce the risk of accident situations such as the one raised in this paper, such issues would need to be resolved before driverless cars become common on our roads. When we use artificial intelligence we are trusting a machine to make decisions for us. Trading shares, driving cars and flying airplanes are examples of such cases.”

Assoc Prof Ian Yeoman, School of Management, Victoria University of Wellington, comments:

“There is a built-in trigger: we fear the unknown and don’t trust science, whatever the expert opinion and scientific studies say. This is one of the reasons why, when the Docklands Light Railway system was introduced in 1987, safety fears about autonomous operation meant each train carried a safety operator. Driverless trains now operate in many cities and can be seen in most international airports, connecting us between terminals. We already have autonomous pizza delivery systems.

“Autonomous vehicles will reach a tipping point where advances in science, the economic arguments and the technology converge with a shift in human consciousness that ‘it is going to happen’ and ‘will happen’. First we will see a series of small steps. Watch out for Uber autonomous taxis in Pittsburgh, autonomous ships or autonomous cargo planes.

“We always fear the future, but without science and advancement we would still be in the cave and the wheel would never have been invented.”

Dr Carolyn Mason, Lecturer, Philosophy, University of Canterbury, comments:

“The researchers found that the majority of their research participants accept that it is morally right to program autonomous vehicles (AVs) to kill the occupant of the car to save ten pedestrians. As the researchers point out, this result agrees with utilitarianism, that is, the moral theory that holds that the right action is the one that, of all the available options, maximises happiness. It is also arguably consistent with other moral theories. Would a rational person want to live in a world where AVs were programmed to kill ten pedestrians rather than kill the occupant of the vehicle? If the answer is ‘no’, then according to Kantian ethics, programming AVs in this way would be immoral. It also seems reasonable to believe that a virtuous person would want to drive an AV that would kill the occupant of the vehicle rather than kill ten pedestrians. If so, this position is consistent with virtue ethics.

“Bonnefon et al. also found that the majority of their research participants would prefer to buy a car that would kill ten pedestrians rather than the occupant of the AV. In contrast, the majority of research participants believed that it was both ethical to program an AV to kill one pedestrian if doing so would save ten, and would prefer to own an AV that would kill one pedestrian if doing so would save ten. Bonnefon et al. conclude that “there seems to be no easy way to design algorithms that … reconcile moral values and … self-interest” (1576).

“The situation is worse than this; people are not only bad at reconciling moral values and self-interest, they are also bad at making decisions that reflect their own interests. Studies like this one encourage the research participant to identify with the occupant of the AV rather than the pedestrians who may be harmed by the vehicle, and encourage them to think of the harm to their child in the AV, rather than the harm to their child walking to school. Bonnefon et al. also found that fewer than 50% of respondents wanted other people’s cars to be programmed to kill the car’s occupant rather than killing ten pedestrians. This seems a failure of imagination.

“As Bonnefon et al. comment, people prioritising their interests over those of others is nothing new. People’s willingness to rely on others doing the morally right thing while cheating on the system has been investigated by biologists, psychologists, sociologists and economists, as well as ethicists. Bonnefon et al. mention people’s willingness to benefit from others vaccinating their children while not being willing to take the risks associated with vaccinating their own children. Attitudes like this have led some Australian states to pass legislation allowing childcare centres to refuse to enrol children who have not been immunised.

“Legislation is often the best way to prevent harm to others by those who take their own well-being to be significantly more important than the well-being of others. So, in studies five and six, Bonnefon et al. questioned people about their attitudes to AV legislation. They found that the majority of participants believe that there should not be legislation requiring AVs to be programmed to sacrifice the AV’s occupant to save ten pedestrians. Not wanting to be affected by a law is a good reason for arguing against legislation, but a poor reason for not legislating.

“Bonnefon et al. comment that “enthusiasm for self-driving cars was consistently greater for younger, male participants”, but the report does not include information about differences in responses based on age or sex. Many studies report both that males are more likely to engage in risk-taking behaviour than females and that younger drivers are more likely to engage in risky driving behaviour. (See, for example, the 2003 Queensland study by Turner and McClure, ‘Age and gender differences in risk-taking behaviour as an explanation for high incidence of motor vehicle crashes as a driver in young males’.) It seems plausible that age and sex also affect attitudes towards legislation that restricts driving. For example, in their 2011 New Zealand study, Charlene Hallett, Anthony Lambert, and Michael A. Regan found that legislation banning cellphone use was less acceptable to younger drivers than older drivers. It would be interesting to learn whether sex and age had any effect on responses in Bonnefon, Shariff and Rahwan’s study.

“Bonnefon et al. suggest that distaste for legislation requiring AVs to be programmed in the way most people believe is most ethical, combined with the majority preference to travel in an AV that would save the occupant’s life at the cost of ten pedestrians, may delay the uptake of AVs. They suggest that this is a concern because a delay in the uptake of AVs will mean a delay in the reduction in harm expected to follow from a reduction in self-driven cars. There are two reasons for thinking this may not be a genuine concern. First, the cost of purchase means that those who purchase AVs are likely to be over 24 years old, so if age affects attitudes, the study may not support this worry. Second, public education campaigns have corrected mistaken attitudes towards the acceptability of other risky transport-related practices, such as cellphone use and driving under the influence. It is reasonable to expect that public education campaigns will support more consistent thinking about morality and self-interest with AV programming.”

Copyright 2016 Science Media Centre (New Zealand)
