Features

XXVII.1 January - February 2020

The problem with trolleys: Why machines might struggle to solve the unsolvable


Authors:
Alexander Mirnig


From its invention in the late 19th century, the automobile has seen a steady stream of technological progress. After the initial efforts to make the Benz Patent-Motorwagen easier to control and to bring the technology to a level at which it could be mass-produced (which did not happen until Ford's assembly-line production of the Model T in 1913), a series of incremental advances brought us to where we are today. Our vehicles are now lighter, faster, and capable of using several sources of energy for propulsion; the act of getting from A to B no longer seems to be the primary challenge for automotive technology. Instead, development and marketing now focus on vehicle infotainment and assistance functionalities, especially the latter, which are often considered to be the future of driving. Assistance systems in vehicles range from passive systems, such as auditory distance warnings or backup cameras, to active systems, which assist in or even perform an entire driving task autonomously.


Given sufficiently mature technology, it is expected that a vehicle will eventually not only perform individual driving tasks but also complete entire journeys without a human at the controls. Assuming that a machine is not subject to the "human weaknesses" that are the source of many on-road incidents (drunk driving, drowsiness, distraction, lack of driving experience, etc.), this would greatly increase on-road safety, which is considered the primary motivator for full vehicle automation [1]. This is only true, of course, if vehicle automation technology can indeed perform better than a human at the wheel (if not perfectly) and avoid introducing too many additional errors or other potential accident causes.


Thus, in order to achieve such an improvement in on-road safety, a vehicle should be better than a human at detecting and classifying objects on the road, judging traffic situations, and performing maneuvers to resolve those situations. It stands to reason, then, that a system purportedly better than a human in all relevant regards should also be able to resolve situations that a human cannot. The so-called trolley problem (see Figure 1 for a simplified illustration) is one such situation. In it, a vehicle is presented with a limited set of morally conflicting options, all of which lead to the loss of human life. Apart from the general difficulty of resolving such a situation satisfactorily in terms of outcomes, there is also the larger issue of granting a machine authority over human life. After all, whichever decision a vehicle makes in such a situation, it will have decided who lives and who dies.

Figure 1. Sketch illustrating the "modern" trolley problem, where the trolley is replaced with an automated vehicle.

In an article presented at CHI 2019 [2], we argued that the trolley problem is in principle just as unsolvable for machines as it is for humans. Focusing on decision making within a trolley dilemma limits the design space so severely that meaningful solutions are not possible (and attempted ones might make things even worse). In addition, situations involving nonhuman agents appear to be assessed very differently from how we judge the actions of humans within moral dilemma situations. In this article, I present the essence of that argument, along with some additional context and discussion.

About Problematic Trolleys

The trolley problem is not a novel concept, nor is it limited to cars. It belongs to a family of ethical dilemmas that have served important functions in philosophy throughout history. The best-known modern account of the trolley problem was put forward by Philippa Foot in 1967 [3]:

The driver of a runaway tram...can only steer from one narrow track onto another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed.

Thus, the dilemma situation contains the following necessary components:

  • One agent capable of making and acting on a decision
  • A limited number of possible courses of action the agent can take
  • At least two human individuals or groups of human individuals possibly affected by the actions
  • A situation with fatal consequences for one of the involved individuals (or groups) brought about by every choice possible to the agent.

As we can see, whichever action is taken within the problem, at least one person will die. And since the agent is put in a position where they must take an action (which includes the conscious decision not to interact with the trolley's steering mechanism at all), they are forced to act in a way that kills other humans, which clashes with our common understanding of what is morally right and wrong.
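To make this structure concrete, consider the following minimal sketch in Python. The names and types are mine, not any standard formalization; it simply encodes the four components listed above and shows that, by construction, no search over the available actions can return a choice that preserves the norm "do not kill."

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """One course of action available to the agent (inaction included)."""
    name: str
    fatalities: int  # people killed if this action is taken; >= 1 by definition of the dilemma

def norm_preserving_choices(actions: list[Action]) -> list[Action]:
    """Return every action that does not violate the norm 'do not kill.'"""
    return [a for a in actions if a.fatalities == 0]

# Foot's tram example: stay on the current track or steer onto the other one.
dilemma = [
    Action("stay on track", fatalities=5),
    Action("switch to side track", fatalities=1),
]

# The defining property of the dilemma: the set of "right" choices is empty.
assert norm_preserving_choices(dilemma) == []
```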


One of the primary purposes of such dilemmas in philosophy was to disprove the existence of objective universal norms. In a nutshell, some believe that there are objective rules that govern how humans ought to act. A common way to dispel this belief is to take the most convincing candidate for a universal norm ("Thou shalt not kill") and show that there are instances in which a human must violate it. Thus, the norm cannot be considered universally valid.

Why is this relevant for the discourse surrounding automated vehicles? Because such dilemmas are specifically designed so that they cannot be solved from within the dilemma. Once the dilemma has occurred, it is no longer possible to make the "right" decision that would not violate the norm. It is for this reason that solutions to such dilemmas rarely consist in instructions to take action A or B; instead, they are usually attempts to argue that the dilemma does not apply or is described incompletely (i.e., that it seems like a dilemma only due to incomplete information available to the observer). It stands to reason, then, that such dilemmas make for poor benchmarks of a decision-making system, such as an automated vehicle, if there is no metric by which any action taken within the dilemma could be considered right or correct.

How Humans Solve the Trolley Problem

Now assume, horrible as it might be, that you found yourself in such a situation while operating a vehicle. Perhaps it's late at night, foggy, or rainy, and a human suddenly appears in your headlights. You swerve to the left, as your reflexes tell you to, and on the sidewalk you now see two individuals whom you will inevitably ram. You hit the brakes as hard as you can, but you are too late and the inevitable happens. As you stand before the judge, do you expect to be asked which utilitarian calculus you used to determine that the group on the left had to die instead of the other person? No, of course not. You should expect questions such as whether you were drunk, on medication, talking on your mobile phone, or otherwise distracted by nondriving tasks. In essence, you should and would expect questions related to your ability and readiness to anticipate and avoid the situation in the first place. We accept that at that point it was already too late, and that the critical decision point, if there was one, occurred earlier, when the dilemma might still have been averted. Why is it, then, that we do not apply the same reasoning to the decision-making processes of automated vehicles?

The vehicle, just like the human, should do whatever it can to prevent such situations from occurring in the first place. This is not achieved by making it better or worse at deciding who lives or dies (if that is even possible) but rather by enabling it to detect objects better and from a longer distance, to correctly assess and anticipate traffic formations and situations, and to perform driving maneuvers accurately. In other words: The vehicle needs to become better at what it is already designed to do, which is what current development efforts already focus on. Putting the focus on the trolley dilemma itself and the decision taken within the problem contributes nothing to a potential solution, as the possibility of a positive outcome never existed in the first place. Instead, focusing on what occurs outside (more specifically, before) the very limited temporal scope of any given trolley dilemma can actually contribute to "solving" the dilemma by making its occurrence less likely. Still, one could argue that we cannot exclude such situations from ever occurring, so surely we cannot ignore the trolley problem and are justified in looking for a solution. The answer to that is, of course, yes. But that is all the more reason not to look for this solution within the trolley problem itself.
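To see where the leverage actually lies, consider a back-of-the-envelope sketch (all numbers are illustrative, not drawn from any real system): whether a forced choice can arise at all is largely a function of detection range versus stopping distance, quantities that sit squarely within ordinary engineering.

```python
def stopping_distance_m(speed_ms: float, reaction_s: float, decel_ms2: float) -> float:
    """Distance covered while reacting plus braking to a standstill: v*t + v^2 / (2a)."""
    return speed_ms * reaction_s + speed_ms**2 / (2 * decel_ms2)

def dilemma_possible(detection_range_m: float, speed_ms: float,
                     reaction_s: float = 0.2, decel_ms2: float = 7.0) -> bool:
    """A forced choice can only arise if an obstacle may appear inside the stopping distance."""
    return detection_range_m < stopping_distance_m(speed_ms, reaction_s, decel_ms2)

# At 50 km/h (~13.9 m/s), stopping takes roughly 17 m. A sensor that reliably
# detects pedestrians at 60 m leaves no room for the dilemma to occur; one that
# only works out to 15 m (fog, heavy rain) makes it possible.
print(dilemma_possible(detection_range_m=60, speed_ms=13.9))  # False
print(dilemma_possible(detection_range_m=15, speed_ms=13.9))  # True
```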

What we need to understand is that the trolley dilemma is in itself a fail state, one that cannot be reversed or resolved once it has occurred. As such, limiting the design space to the point at which the fail state occurs makes it infeasible to address it in any meaningful way. To illustrate this further, consider the following analogy: Assume the case of a manually operated vehicle driving off a cliff, for whatever reason. As the vehicle manufacturer, you would probably prefer that this did not happen, so you look for solutions. It would be unreasonable to limit yourself to the time span between the vehicle driving off the cliff and its hitting the ground (unless your field of work is rocket-powered engines or parachutes). Instead, you would want to find out what made the vehicle leave the road in the first place, be it faulty brakes, a failure of the lane-keeping assistance, or the driver's overreliance on what turned out to be incorrect GPS information.

Full Speed Ahead

Understanding that the trolley problem is unsolvable in principle and by design does not mean admitting defeat or throwing one's hands up in the air and doing nothing. What it does mean, however, is that the trolley problem does not hold the special place in the world of automotive engineering that we might have assumed. It is one of many possible fail states and needs to be treated as such. The design space must be shifted to before the problem occurs, and vehicle decision-making algorithms must be developed to anticipate it.

Some might consider randomization a possible solution that can still be derived from within the problem itself. The idea here would be that as soon as a trolley problem has occurred, the vehicle no longer acts according to its regular decision-making algorithm and instead flips a coin (metaphorically speaking), randomly deciding which action to take. While this would not alter the outcomes in a statistically meaningful way (people still die), we might at least be able to sleep better at night, knowing that we did not program a machine to actively decide to take human life. However, if we do pursue such a strategy, we might actually make things worse from a consequences standpoint.

Consider the following. Assume the system is not perfect; otherwise it would be possible to avoid the dilemma situation before it occurred. Assume further that the vehicle operates better (and more safely) under normal conditions when using its regular algorithm than when using randomization. Otherwise, we might as well stop developing navigation algorithms and have all vehicles drive completely at random. If we accept these quite reasonable assumptions, then we must also accept that we introduce a likelihood of false positives: instances in which the system wrongly classifies a situation as a dilemma and activates its decision randomization as a result. Thus, we must assume there will be situations in which the vehicle acts worse than it could have, potentially causing worse consequences than if the randomization had not been active. As we can see, randomization in this sense leads to no change in outcomes at best, and could lead to even more accidents (fatal or otherwise) at worst.
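The false-positive argument can be made concrete with a minimal Monte Carlo sketch (the rates and harm values below are hypothetical, chosen purely for illustration): genuine dilemmas end badly under either policy, so all randomization adds is extra risk in the ordinary situations its detector misclassifies.

```python
import random

def expected_harm(policy_randomizes: bool, dilemma_rate: float = 1e-4,
                  false_positive_rate: float = 0.01, trials: int = 200_000,
                  seed: int = 0) -> float:
    """Average harm per driving situation under a given policy (illustrative units)."""
    rng = random.Random(seed)
    harm = 0.0
    for _ in range(trials):
        if rng.random() < dilemma_rate:
            harm += 1.0  # genuine dilemma: someone dies no matter which action is taken
        elif policy_randomizes and rng.random() < false_positive_rate:
            # Detector misfires in a normal situation: a coin-flip maneuver
            # replaces the regular algorithm's (assumed better) maneuver.
            if rng.random() < 0.5:
                harm += 0.1  # illustrative extra accident risk from the worse maneuver
    return harm / trials

print(expected_harm(policy_randomizes=False))  # baseline: harm from true dilemmas only
print(expected_harm(policy_randomizes=True))   # higher: same dilemmas plus misfires
```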

It would appear to benefit everyone to stop tilting at the windmill that is the trolley problem, as enticing as it might be. And while it might seem almost paradoxical at first, focusing on the admittedly less exciting-sounding area of decision making outside of trolley dilemmas is perhaps the only way to address them feasibly. By acknowledging trolley dilemmas as fail states, measures can be taken to avoid them, much like any other fail state, at the appropriate point via design and engineering. This is not to say that the trolley problem is unimportant or irrelevant; far from it. It serves an important function both on a societal level and within scientific discourse, as such dilemmas have for centuries. This function, however, has never consisted in putting an agent within the problem and condemning them for inevitably making the wrong choice. And this should not be any different for vehicle automation technology.

References

1. National Highway Traffic Safety Administration. Automated Driving Systems 2.0: A Vision for Safety. 2017; https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf

2. Mirnig, A.G. and Meschtscherjakov, A. Trolled by the trolley problem: On what matters for ethical decision making in automated vehicles. Proc. of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2019, Paper 509; https://doi.org/10.1145/3290605.3300739

3. Foot, P. The problem of abortion and the doctrine of the double effect. Oxford Review 5 (1967), 5–15. Reprinted in Virtues and Vices: And Other Essays in Moral Philosophy; https://doi.org/10.1093/0199252866.003.0002

Author

Alexander G. Mirnig is an HCI researcher currently focusing on human-centered aspects of vehicle automation technology. Coming from a philosophy background, he attempts to untangle some of today's multidisciplinary challenges in technology, so that we can better understand ourselves and the machines we choose to surround ourselves with. [email protected]


©2020 ACM  1072-5520/20/01  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

