Banner by Cynthia Kumaran

The Modern Reality of the Trolley Problem

By Reva Prabhune

Would you stand by and watch a trolley hurtle towards five people trapped on the tracks, or would you intervene? You see the situation unfolding from a distance and find yourself next to a lever that would divert the trolley to another set of tracks. The catch, however, is that there is one person on the alternate tracks. The situation boils down to a simple but impactful decision you must make: you could do nothing and allow the trolley to kill five people, or you could pull the lever and divert the trolley to where it will kill one person. What should you do? 

This is the famous philosophical dilemma known as the trolley problem. Since its introduction by English philosopher Philippa Foot in 1967, this thought experiment has been a staple of moral psychology. While we like to believe that the undeniably uncomfortable trolley problem is confined to the abstract, real-life versions of it do occur, everywhere from the Chernobyl Disaster to autonomous cars. The choices made in these situations force us to examine what they say about us and whether what we think we should do lines up with what we would actually do. 

There is an important distinction to make when considering your choices in the trolley problem. If you pulled the lever, would one person die instead of five people, or for five people? The difference is that instead means substituting a greater harm with a lesser harm, while for means causing a lesser harm to avoid a greater harm. Saying that one person died instead of five others means that somebody was going to die either way and you chose the lesser harm. Saying that one person died for five others means that you intentionally caused that one person's death. Since the for scenario creates a new harm, it raises questions like “Why that particular person? Why is their life worth sacrificing over anyone else's?” and “Why should their life be sacrificed at all? Do they not have a right to live?” In real-life instances of the trolley problem, people need to consider whether they are dealing with a for scenario or an instead scenario and be ready to answer the questions each one entails. 

Attitudes towards these questions can differ across cultures. In 2012, researchers Henrik Ahlenius and Torbjörn Tännsjö of Stockholm University presented trolley dilemmas to 3,000 randomly selected residents of the United States, China, and Russia. They found that while 81% of Americans and 63% of Russians agreed that you should pull the lever, only 52% of Chinese respondents agreed that pulling the lever would be “morally permissible.” Clearly, differences in moral judgement can be heavily cultural. 

Cultural attitudes towards utility-maximizing decisions gain relevance in light of the Chernobyl Disaster, a deadly explosion at a nuclear power plant in the Ukrainian SSR in 1986. It is classified as a Level 7 Major Accident, the highest level on the International Nuclear Event Scale, and is one of the worst nuclear disasters the world has ever seen. The explosion was especially dangerous because it released large quantities of dangerous fission products: radioactive isotopes such as Cesium-137, Iodine-131, and Strontium-90, all of which can cause cancer. 

It turns out that the wind near the accident site was blowing towards the highly populated areas around Moscow, carrying radioactive isotopes with it. Major Aleksei Grushin flew above the Chernobyl plant and Belarus, a small neighboring country, and used artillery shells filled with silver iodide to seed rain clouds that would wash the radioactive particles out over Belarus instead of letting them drift towards the densely populated cities. According to British scientist Alan Flowers, one of the first western scientists to examine the remains of Chernobyl, the population of Belarus was exposed to radiation doses 20 to 30 times higher than normal as a result of the radioactive rainfall. This caused severe radiation poisoning in the children of Belarus but prevented a catastrophe for the millions living in Moscow and the surrounding highly populated areas. The Chernobyl situation was essentially a trolley problem in which the Soviet government was forced to decide whether to intervene and prevent a greater harm by intentionally causing a lesser one. In the end, they pulled the lever. 

We see the trolley problem replicated again in the case of autonomous cars. Artificial intelligence needs to be trained on what to do when a crash is inevitable. Many crash scenarios require protecting one group at the expense of another: saving the driver versus the passengers, the driver versus pedestrians, one group of pedestrians versus another. In other words, the car needs to decide who will live and who will die. Edmond Awad, a computer scientist at the MIT Media Lab, extensively surveyed people on their preferences for who should be saved. The study collected 39.6 million judgement calls in 10 languages from millions of people across the globe. Overall, the strongest preferences favored saving people over pets, children and pregnant women over the elderly, and many people over just a few. There was also a tendency to favor saving women over men, athletes over the obese, and high-status individuals over people with criminal records or the homeless. When it came time to choose in a high-stakes scenario, common biases were amplified. 

Edmond Awad clarified, “We don’t suggest that [policymakers] should cater to the public’s preferences. They just need to be aware of it, to expect a possible reaction when something happens. If, in an accident, a kid does not get special treatment, there might be some public reaction.” How to weigh such preferences is another crucial moral decision that the developers of the artificial intelligence in self-driving cars need to make. Philosopher Immanuel Kant would argue that murder is unforgivable no matter whom or how many people you are able to save, so the car should not have to choose who gets hurt at all. However, if we are to encourage self-driving car makers to program emergency decisions based on a collective understanding of moral duties, we need to be clear on what that understanding is. 

The argument over whether or not one should pull the lever has gone back and forth for decades. In November of 2016, a real-life trolley problem was simulated in a lab setting. Social psychology graduate student Dries Bostyn of Ghent University studied whether students would press a button and deliver an electric shock to a single mouse in order to spare five other mice. The students were given twenty seconds to redirect the shock. Of the participants, 84% chose to zap the one mouse, reasoning that it was ultimately causing less harm. 

Interestingly, Bostyn also ran a second version of the same dilemma. Here, instead of making participants choose between real, living mice, he asked them which decision they thought they would make. Only 66% believed that they would redirect the current and shock the one mouse. When we are removed from the situation, we might want to avoid getting involved in order to keep our hands clean; in the moment, however, we may instinctively value ending one life to save five. This study implies that the ethical debate extends to the very root of our behavior, revealing a discrepancy between what we believe in the abstract and what we do in practice. 

Eric Schwitzgebel of the University of California, Riverside and Joshua Rust of Stetson University investigated this discrepancy. They ran an experiment in which they asked U.S. ethicists, non-ethicist philosophers, and a sample of professors from other departments for their perceptions of the morality of different activities, ranging from regularly eating the meat of mammals to not being an organ donor to donating 10% of one’s income to charity. The researchers then collected self-reported and observational data about the participants’ actual behavior. While ethicists exhibited a pattern of holding more stringent moral views, the study “...suggests that ethicists are neither more nor less likely than other professors to behave in accord with their expressed moral attitudes.” These findings help explain how the results of Bostyn’s study are possible: people may judge an action, such as not getting involved in the trolley’s ultimate direction, to be right, yet when actually presented with the choice, betray that judgement and kill one person to save five. 

The trolley problem is more than an uncomfortable philosophy question; it destabilizes what we think, what we believe, and who we think we are. What do you think is right: should you save five instead of one, or refuse to interfere to avoid committing murder? And would you actually be willing to follow through if the lives were in your hands? If you saw the trolley, or were placed in front of the mice, or had to program the self-driving cars, or found yourself in charge of the response to the Chernobyl Disaster, would you make the call? Chances are, what you think one should do is not what you would actually do. What that says about the human psychology behind moral perceptions, and about the unfolding trajectory of our modern reality, is a much deeper discussion.