Some Cars Can Drive Themselves, But Should They? A Brief Outline of the Ethical Dilemmas Facing AVs

Phillip Wilcox
6 min read · Dec 29, 2020


This article contains a section from my book The Future is Autonomous: The U.S. and China Race to Develop the Driverless Car describing the nature of the competition between the U.S. and China over autonomous vehicles. I then discuss some of the ethical concerns about delegating potential life-and-death decisions to machines.

Some people may criticize my book for not containing a section on these ethical dilemmas. However, I discuss many related problems, such as issues of liability with autonomous vehicles. Also, many of the ethical dilemmas related to vehicle safety would be the same if a human were driving the vehicle. Ultimately, this debate about human-machine interaction has been the subject of countless articles in peer-reviewed journals and books. I did not feel that I could do these issues justice if I included them in my book alongside all of the other topics I explored related to autonomous vehicles. I have, however, done my best to include the basic ethical dilemmas related to AVs in this article.

On the surface, the U.S. has many advantages over China in the race to develop the autonomous vehicle. The technology sector in the U.S. is much more developed than China’s. The U.S. has been manufacturing vehicles for more than one hundred years, whereas most of the vehicles in China are imported from other countries or are new and unproven.

Finally, the DARPA Grand Challenge in 2004, a competition in which teams of engineers raced to drive vehicles autonomously across the Mojave Desert, represents the starting point for the autonomous vehicle industry in the U.S. Meanwhile, the first autonomous vehicle startup in China, Momenta, was founded in 2016. The U.S. therefore has a twelve-year head start.

Many of the proposed long-term benefits of autonomous vehicles, such as reduced traffic congestion and carbon dioxide emissions, depend on shared rides instead of individual vehicle ownership. This would represent a dramatic shift in Americans’ consumer habits. The U.S.’s long history of producing its own vehicles could actually be a disadvantage with the introduction of autonomous vehicles.

Special interest groups in favor of autonomous vehicles also need to act as a counterweight to those opposed to them in order to accelerate the political process. Companies in the industry need to expand their educational and marketing efforts as well, to convince people of the benefits of shared rides in autonomous vehicles. If these measures are not undertaken, then we could see the greater safety and all of the other benefits of autonomous vehicles arrive in Beijing instead of Washington, D.C.

Autonomous vehicles also raise ethical questions about the human relationship to machines. These questions could affect their deployment on roads in the U.S., China, and anywhere else in the world, and they existed long before autonomous vehicles arrived on the scene.

In ancient Greek mythology, it was the melting of the wax that held the feathers to his constructed wings that sent Icarus plunging to his death in the sea. Thousands of years later, in his 1942 short story “Runaround,” science fiction author Isaac Asimov introduced the famous “Three Laws of Robotics” governing human-robot interaction. The first law states that a robot may not harm a human being or, through inaction, allow a human being to come to harm.

Since then, and especially in the past thirty years with the rise of artificial intelligence, thousands of research articles and books have been published on the human-machine relationship. One of the main arguments these works explore is whether a machine can have “moral agency.” This dilemma is particularly relevant to autonomous vehicles, whose actions, or inactions, have life-and-death implications.

I interviewed Dr. Yochanan Bigman, a postdoctoral fellow at UNC-Chapel Hill who studies how humans think about machines making moral decisions. He noted that there is already a fairly standard framework for judging the moral behavior of people: “When we judge the behavior of other people, we take into account intention, for example, whether it was an accident or an intended action that causes harm.”

This ethical framework gets complicated when discussing AI or autonomous vehicles because, as he put it, “When they seem to be behaving in an autonomous way, the regular template that we use for moral judgement or moral evaluation doesn’t seem to apply. Or it applies in a very different way.”

These moral discussions are not just for philosophers to debate in academic journals. They have profound real-world implications for how a person judges the agency of autonomous vehicles, and the question of agency in turn bears on legal liability. Who is really at fault depends on a vehicle’s ability to make “informed decisions,” as Dr. Bigman described in his article “Holding Machines Responsible.”

The classic case brought up in discussions of machines making moral decisions is the “trolley car dilemma.” In this scenario, the brakes of a trolley car malfunction. The driver can switch tracks and collide with a group of people, killing all of them before the trolley stops, or continue on the current path and hit a wall, killing himself but no one else.

For autonomous vehicles, the situation is similar, except that it is not just a hypothetical dilemma. Would the car be programmed to continue on its current path, hitting and killing a person? Or would it swerve off the road, possibly hitting other people or a wall and killing the driver?
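To make the question concrete, here is a deliberately oversimplified sketch of what a purely utilitarian decision rule might look like in code. This is an illustration only, not how any real autonomous driving system chooses maneuvers; all names and casualty figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical emergency maneuver and its predicted outcome."""
    description: str
    expected_deaths: int  # illustrative guess, not a real prediction

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver predicted to kill the fewest people.

    A purely utilitarian rule: minimize total expected deaths,
    regardless of who dies (occupant or pedestrian).
    """
    return min(options, key=lambda m: m.expected_deaths)

# The AV version of the trolley dilemma, with made-up numbers:
options = [
    Maneuver("continue straight, hitting a pedestrian", expected_deaths=1),
    Maneuver("swerve into a wall, killing the occupant", expected_deaths=1),
    Maneuver("swerve toward a group of bystanders", expected_deaths=3),
]
print(utilitarian_choice(options).description)
```

Note that a pure casualty count is indifferent between the first two options; Python’s min simply returns the first tied entry. Deciding how to break that tie, between the occupant and the pedestrian, is precisely the ethical question, and no amount of engineering answers it.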

Dr. Azim Shariff conducted a study in which participants gave a paradoxical response. Participants said they would rather the vehicle take a utilitarian perspective and kill as few people as possible; having the vehicle hit a tree or other barrier and kill only the driver would therefore be the preferred option. However, they did not want to actually purchase or ride in a vehicle programmed to follow that rule. Dr. Shariff cautioned industry leaders against pushing for regulations that adopt this utilitarian approach.

Dr. Bigman differs from Dr. Shariff mainly over the way people’s attitudes are measured in Dr. Shariff’s study. Bigman’s own study found that people were fine with the utilitarian approach and that it did not affect their decision about whether to purchase or ride in an autonomous vehicle.

In our interview I pushed Dr. Bigman for policy recommendations. He told me that, while laws sometimes have a psychological moral component, he was not able to make any suggestions on matters like liability.

In my interview with Karolina Chachulska, a board member of the group Women in Automotive Technology, she said that she does not consider the “trolley car dilemma” a big issue, because it represents a very rare situation. So while autonomous vehicles will hopefully not be forced to make many of these life-and-death decisions, the issues of liability and other safety concerns still have not been resolved. They require further debate and, eventually, legislation or court decisions.

Stay tuned for more on the difficulties of consumer acceptance of the exciting new technology of the autonomous vehicle! Read more about these issues, and many more, in my book The Future is Autonomous: The U.S. and China Race to Develop the Driverless Car!

The link for the book on Amazon is below. For this month, the Kindle eBook version is ONLY 99 cents. Pick up your copy before the deal runs out! The book is also available in paperback!

Also, please don’t forget to rate and review the book on Amazon if you like it!
