
The trolley dilemma in the 21st century

In The Good Place episode “The Trolley Problem,” Michael (Ted Danson) places Chidi (William Jackson Harper) at the controls of a trolley and forces him to make a momentous decision: save five people or one. The episode is built on the famous trolley dilemma, one of the best-known thought experiments in ethics.

In 1967, the British philosopher Philippa Foot proposed an imaginary situation in which we are the driver of a trolley that has lost its brakes and cannot stop. Further down the tracks, five people are working. The trolley is about to run them over, but we notice a switch ahead: we need only pull a lever to divert the trolley onto a side track. However, on that track there is one person working. Either way, someone will die. Faced with this situation, the philosopher asks what the correct decision is: pull the lever and change tracks (killing one) or do nothing (allowing five to die). Most people choose the first option.


In a different version, we are standing on a footbridge and see the trolley heading towards the five people. This time, a large man stands in front of us and, if he were pushed onto the tracks, his body would stop the trolley, saving the five people, though he would die instantly. Once again, we are asked whether we would push him or do nothing. In this case, most people choose to do nothing.

Why do the answers differ when the outcome is the same in both cases? The thought experiment has many variants and has been used to try to map out our different moral intuitions. The Good Place shows one of the most human outcomes of all: Chidi is unable to decide in time and runs over the five people. In a second attempt, he pulls the lever, saving five strangers but condemning his friend Henry (another variant of the dilemma). In both cases, the moral burden is overwhelming.

New technologies throw these moral dilemmas into the real world in a way never seen before. From building web pages to developing artificial intelligence, it is necessary to program the sets of rules, known as algorithms, that guide the actions each program carries out. This is usually treated as an engineering problem rather than a social, cultural, and philosophical one. However, as technology grows more complex, each decision has a greater impact on society, since the biases and privileges of the programmers shape what each algorithm can and cannot do.

Sometimes algorithms are merely inconsistent; other times they lead to terrible mistakes, such as the 2015 incident in which an image-recognition system labeled photographs of dark-skinned people as gorillas.


The trolley dilemma becomes important in the programming of autonomous-car algorithms. Many of us expect this technology to reduce the number of accidents, ease peak-hour traffic and, probably, even lower the general stress of the population. However, when programming these cars, it will be necessary to define what the computer should do in situations similar to the dilemma: an autonomous vehicle is driving down a street and, at the last moment, detects that a person (or five, or a child) is crossing. To avoid running them over, it must swerve and crash into a wall, killing the passenger.

Admittedly, such a situation is unlikely, and the danger will shrink as the technology improves, but it cannot be ruled out entirely. There have already been cases of autonomous test vehicles with a human safety driver at the wheel in which a person still died, because both the machine and the driver failed. Therefore, a response must be programmed into the algorithm. What should the autonomous car do: swerve or continue? If the program always protects the pedestrian, then who would buy the car? If it protects the passenger, then who would allow it on the road?
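To make the point concrete, here is a deliberately oversimplified sketch of what "programming a response" could look like. Everything in it is hypothetical: real autonomous-driving software does not work this way, and the function and policy names are invented for illustration. The sketch only shows that, once a rule must be written down, the ethical trade-off becomes an explicit line of code.

```python
# Hypothetical sketch: an unavoidable-collision planner must return *some*
# action, so the moral choice ends up encoded as an explicit, named rule.
from dataclasses import dataclass


@dataclass
class Outcome:
    action: str              # "continue" or "swerve"
    pedestrians_harmed: int
    passengers_harmed: int


def choose_action(pedestrians_ahead: int, passengers: int,
                  policy: str = "minimize_total") -> Outcome:
    """Choose between staying the course (harming pedestrians) and
    swerving into a wall (harming passengers), under a named policy."""
    if policy == "minimize_total":
        # Utilitarian rule: harm the smaller group.
        if passengers < pedestrians_ahead:
            return Outcome("swerve", 0, passengers)
        return Outcome("continue", pedestrians_ahead, 0)
    if policy == "protect_passenger":
        # Self-protective rule: never sacrifice the occupants.
        return Outcome("continue", pedestrians_ahead, 0)
    raise ValueError(f"unknown policy: {policy}")


# Five pedestrians, one passenger: the two policies disagree.
print(choose_action(5, 1).action)                          # swerve
print(choose_action(5, 1, policy="protect_passenger").action)  # continue
```

Neither branch is "correct": the point of the dilemma is precisely that whichever rule we pick, someone had to write it down, and society has to live with it.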

Today, in the 21st century, we find ourselves in Chidi's position. The trolley dilemma is a reality and must be resolved if we want the advantages of autonomous vehicles. We will have to tell the machine which answer makes us, as a society, least uncomfortable.

If you want to explore your own decisions, you can visit Moral Machine and judge the car's actions in different scenarios, because, ultimately, this is not a decision we should leave to the car companies.

Adán Lerma is a screenwriter, holds a doctorate in philosophy of science, and teaches literature.