author: Sven Nyholm
This article discusses the ethics of programming decisions for self-driving cars in crash scenarios and surveys ethical frameworks that could ground recommendations on this topic. Although self-driving cars are said to be much safer than regular cars, they cannot be perfectly reliable all the time. Self-driving cars will drive in unpredictable weather and on unfamiliar roads, and they are bound to face abnormal situations involving pedestrians or human drivers. Accidents in these circumstances cannot always be prevented simply by braking. This leads to the argument that all self-driving cars should be pre-programmed with responses to different kinds of accidents. We then find ourselves questioning whether the “ethics settings” should be the same in all self-driving cars or whether buyers should have a say about the ethical principles on which the car’s programming is based. On one hand, some argue that the car should have an “ethical knob,” so whoever uses the car can set it to their preferred settings. On the other hand, some argue that if all self-driving cars shared the same settings, all parties would be more likely to survive a crash, because a standard response to an accident would increase coordination and reduce the overall harm.

The author, Nyholm, then introduces the trolley problem, a classic philosophical dilemma about choosing to save or harm different individuals. He applies this problem to self-driving cars, using various examples to demonstrate the many paths to choose from: for example, swerving onto the sidewalk, killing one pedestrian but saving five passengers in the car, or, vice versa, saving the single passenger in the car but killing five pedestrians on the sidewalk in the process. Some writers also wonder whether each person’s identity would matter in this case, say crashing into a criminal versus crashing into a nun. Nyholm then questions whether debates about the trolley problem are valid sources of inspiration for handling incidents with self-driving cars and ultimately decides against heavy reliance on the trolley problem for three main reasons: real-world situations involve far more complexities and considerations; in real-world situations, responsibility (especially legal responsibility) cannot be ignored; and in the real world, we do not know the exact accident scenario the car will face, meaning that broader risk assessments of some sort are needed.

Furthermore, Nyholm extends his discussion to the world of empirical ethics, which uses research from psychological studies to create well-informed ethical theories. A group of psychologists and behavioral economists conducted such studies and found that when people were asked how other people’s cars should be programmed, they sought simply to minimize overall harm, but when asked about cars that they themselves wanted to use, they favored cars that would save the occupants in the event of an accident. The discrepancies among these findings led Nyholm to raise some skepticism. For example, we should not put too much weight on people’s current opinions about self-driving cars, as the technology is relatively new. In addition, people who are asked these moral dilemmas are usually not prompted to justify their answers; however, when really analyzing an ethical argument, it is crucial to assess the reasons behind it and the justifications that can be made.
Moreover, people are inconsistent and often display contradictory attitudes when faced with this subject, and such a double standard cannot be applied to the real world.

Lastly, Nyholm presents various traditional ethical frameworks that could help formulate arguments about how self-driving cars should handle crashes. For instance, a utilitarian argument would recommend that self-driving cars be equipped with a powerful computer enabling them to calculate utility, and that they be programmed to always crash in ways that maximize expected utility. Nyholm’s goal is to learn different lessons from each perspective. From Kantian ethics, the rules should be fair and apply to all people so that no one has an unfair advantage over others. From virtue ethics, self-driving cars could become “moral technologies” that encourage virtuous behavior. From contractualist reasoning, we should select accident algorithms designed to protect the most vulnerable individuals in any crash scenario. In all, there are many different ways to approach this underlying dilemma, and Nyholm uses certain frameworks to demonstrate some options.
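To make the utilitarian recommendation above more concrete, here is a minimal sketch of what “crash in the way that maximizes expected utility” could mean computationally. This is purely illustrative: the maneuver names, probabilities, and utility values are hypothetical assumptions of mine, not anything specified in Nyholm’s article.

# Illustrative sketch of the utilitarian accident-algorithm described above.
# Every maneuver, probability, and utility value here is a made-up assumption
# for illustration; none of these numbers come from Nyholm's article.

from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float  # chance this outcome occurs if the maneuver is chosen
    utility: float      # value assigned to the outcome (higher is better)

def expected_utility(outcomes):
    # Expected utility is the probability-weighted sum of outcome utilities.
    return sum(o.probability * o.utility for o in outcomes)

def choose_maneuver(options):
    # Pick the maneuver whose possible outcomes maximize expected utility.
    return max(options, key=lambda name: expected_utility(options[name]))

# Hypothetical crash scenario echoing the swerve-or-not example above:
options = {
    "swerve onto sidewalk": [Outcome(0.9, -1.0),   # likely kills one pedestrian
                             Outcome(0.1, 0.0)],   # small chance no one is harmed
    "stay in lane":         [Outcome(0.5, -5.0),   # even odds of killing five
                             Outcome(0.5, 0.0)],
}
print(choose_maneuver(options))  # prints "swerve onto sidewalk" with these numbers

Even this toy version makes the underlying worry visible: the output depends entirely on who gets to set the probabilities and utilities, which is precisely the “ethics settings” question raised earlier.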
I thought this article did a great job of examining different ethical theories, helping the audience better understand the possible solutions while also expanding their knowledge base in general. I also liked how the author raised skepticism and objections to a few claims, because it was very useful for grasping all points of view, especially when exploring such a controversial topic.