Can Cars Be Moral?

E. Christian Brugger
September 29, 2017
Reproduced with Permission
Culture of Life Foundation

You see a runaway trolley hurtling towards five people. If you pull a lever, the trolley will switch to a parallel track where there is only one person. Do you pull it?

Now reformulate this as a driving scenario. You are speeding down a highway. You round a sweeping bend and confront a tragic dilemma. A family of four with two small children is standing in one lane, and an elderly couple is in the other. They've had an accident. You have almost no time to react. Two lanes, two unavoidable outcomes: slam into the family, or run down the couple. Which would you choose?

Scenarios such as these are being used to assess what models of "decision making" should be programmed into self-driving cars to respond to situations where human harm is possible or inevitable. With over a billion cars on the world's roadways, and the era of self-driving cars upon us, the decision about which programs to adopt is of obvious importance.

Cognitive scientists in Germany recently published a study looking at patterns of human ethical decision-making during virtual driving dilemmas. They used their data to develop and evaluate models, which, they believe, may provide a basis for self-driving cars' selection of acceptable options in conflict situations.

I should say at the outset, I am cautiously optimistic about self-driving cars. Since more than 90% of car crashes are caused either completely or in part by human error, self-driving cars, if programmed rightly, could improve road safety. Still, I found the German study very disturbing.

The field is called "machine ethics," an obvious misnomer. Ethics only exists among free agents, who, of course, program machines, but whose choices, though influenced by many factors, are finally determined by nothing outside the will. Machines have no freedom in this sense. They do what they're told. What they're told may, of course, come in the form of an astonishingly-complex algorithm. Thus, a machine's "morality" comes pre-programmed.

The study

Using a 3D virtual-reality game system, researchers exposed 105 participants to 153 crisis scenarios, where pairs of obstacles on a roadway were presented. (Note: participants were drawn from attendees at a conference of the "German Society for Analytic Philosophy.")

Drivers could only move between two lanes, each of which contained an unavoidable obstacle; they had to decide which to hit and which to avoid. These included inter-species pairings (boy v. goat) and intra-species pairings (girl v. woman).

Researchers found that when people have a decent amount of time to respond (4 seconds), they use so-called "utilitarian" reasoning: they opt to run down animals before humans; smaller human groupings before larger ones; older people before younger ones; and (interestingly) males before females.

They concluded that algorithms for self-driving cars should be organized according to similar utilitarian calculations (although they reserved judgment on whether the data on hitting males was useful).

Five Criticisms

First, researchers claim to be investigating the ethics of self-driving cars - assisting car computers to make moral decisions. So as to avoid the irresponsible hype that predictably arises from such careless language, researchers should be clear that they are simply attempting to identify statistically-relevant decision-making patterns in humans to use as computational models for self-driving cars.

Second, the study failed to account for its hidden ethical assumption that utilitarian reasoning should be the basis for human choosing in crisis situations; there are other possible bases. What about Aristotelianism, natural law, divine command, "love thy neighbor," "Me Before You," etc.?

Third, a sample of 105 German analytic philosophy students (!) is hardly representative of the world's drivers in either size or diversity.

Fourth, the study's "value-of-life model," in which each obstacle is assigned a number corresponding to its apparent value, seems philosophically unsound. Since intrinsic value calculations of human lives are impossible, the study's calculations are based on instrumental factors. I suspect that if the sample's size and diversity were increased, the model would correlate less well with participants' actual decisions.
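To make the objection concrete, here is a minimal sketch, in Python, of the kind of value-of-life rule being described. The numbers are entirely hypothetical placeholders of my own, not the study's, which is precisely the problem: someone has to invent them.

```python
# Minimal sketch of a "value-of-life" style decision rule.
# The numeric values below are purely hypothetical; assigning them at all
# is exactly what the criticism above objects to.

HYPOTHETICAL_VALUES = {
    "goat": 1.0,
    "adult_male": 4.0,
    "adult_female": 4.5,
    "child": 6.0,
}

def lane_value(obstacles):
    """Sum the assigned values of everything standing in a lane."""
    return sum(HYPOTHETICAL_VALUES[o] for o in obstacles)

def choose_lane(left, right):
    """Steer into the lane whose occupants 'cost' less on the scale."""
    return "left" if lane_value(left) < lane_value(right) else "right"

# Example: family of four in one lane, elderly couple in the other.
family = ["adult_male", "adult_female", "child", "child"]   # total 20.5
couple = ["adult_male", "adult_female"]                      # total 8.5
print(choose_lane(family, couple))  # -> "right": hit the smaller group
```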

Fifth, the paired scenarios (goat v. boy) are superficial. The study (as it admits) avoids an almost incalculable number of variables that would need to be taken into consideration to accurately model human choosing. For example, in a two-person scenario, would it be relevant if you knew one of them? Or both? Or thought you did? What if you believe less harm will come to you if you hit A rather than B? How about racial or religious bias (you're Muslim and one of the "obstacles" is wearing a headscarf)? What about the influence of virtue and vice? Will a selfish woman choose similarly to a saintly man? How do you factor in a ready disposition to self-sacrifice? What if you believe that people's value diminishes as crippling disabilities increase, and you're faced with a quadriplegic youth in a wheelchair and a healthy youth on a fancy bicycle? Etc.

Moving Forward

The proposal to base self-driving algorithms on reliable models drawn from human decision-making seems reasonable. But …

The sample size needs to be much larger and its diversity much wider. Since data is likely to be used universally, it should be collected from thousands of participants of different races, ethnicities, religions and worldviews.

The logistic regression models used to predict behavior need to be truly representative of the ethical theories shaping people's choices.
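For readers unfamiliar with the technique, the sketch below (Python with scikit-learn; the features and data are invented for illustration, not taken from the study) shows what a logistic-regression behavior model amounts to: the predicted choice can depend only on whatever features the modelers decided to encode.

```python
# Illustrative sketch only: invented features and data, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes one two-lane dilemma as left-lane-minus-right-lane
# differences: [is_human, age_in_years, group_size].
X = np.array([
    [ 1.0, -30.0,  0.0],   # e.g. human on the left vs. animal on the right
    [-1.0,  20.0,  1.0],
    [ 0.0,  40.0, -2.0],
    [ 0.0, -15.0,  3.0],
])
# 1 = driver spared the left-lane obstacle, 0 = driver hit it.
y = np.array([1, 0, 0, 1])

model = LogisticRegression().fit(X, y)

# The fitted model can only weigh the three encoded features; anything the
# researchers left out (relationships, virtue, a disposition to
# self-sacrifice) cannot influence its prediction at all.
new_dilemma = np.array([[1.0, -25.0, 1.0]])
print(model.predict_proba(new_dilemma))
```

If the features do not capture the ethical theories actually shaping people's choices, the regression will faithfully reproduce a distorted picture of them.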

Drop all language about "cars making moral decisions." Humans are the moral agents here. The cars are doing what they're told.
