
Interview with Janina Loh: The ethics of self-driving cars

Driverless vehicles will have to deal with the tricky life-or-death decisions philosophers have argued about for years. So whom will the artificial intelligence behind the wheel choose to favor? Janina Loh offers a drive-through of the neighborhood.

This interview was conducted by our editor, Tim Cole.

Will autonomous vehicles represent a step toward greater safety or are they an added risk?
Since 90% of all traffic accidents are due to human error, chances are that self-driving cars will reduce the current number of pileups. But even the best autonomous vehicles will eventually be involved in serious incidents. Today, a driver decides spontaneously and reacts by reflex, because there is neither the time nor the information to make an informed ethical decision before it's too late. Essentially, the same will be true of self-driving cars, but their decisions will be largely automated, so it will actually be the algorithms and their programming that decide what actions to take. That's why we need to make sure certain ethical principles are built into our technical systems.


Janina Loh teaches philosophy at the University of Vienna. Together with her husband, she recently contributed an essay on digital ethics to Patrick Lin's anthology 'Robot Ethics 2.0' (Oxford University Press, 2017).

So in the worst case, machines will have to decide over life and death, won’t they?
It will be hard to program autonomous vehicles to include every conceivable scenario that could occur in traffic. That's why we need to make sure certain moral principles are followed. A car programmed to protect its own passengers at the cost of everyone else would be just as socially unacceptable as one that willingly sacrifices its occupants to save others.

So to which rules should a self-driving car adhere?
Let's consider the classic case. A car is driving through a residential area when a group of small children runs out from behind a parked car. To avoid them, the car would have to swerve to the left, but by doing so it would hit an 80-year-old man approaching on his bicycle.


Bentham-Bentley, Kant-Chrysler and Aristotle-Audi (left to right).

In your opinion, what should the car decide to do?
That depends on which school of ethics you choose to follow. The utilitarian school of thought, founded by Jeremy Bentham in the early 19th century, states that the best action is the one that maximizes utility. According to Bentham's Greatest Happiness Principle, we should be governed by the credo "the greatest happiness of the greatest number." This means the old man must die, because his usefulness to society is probably less than that of the children, one of whom might grow up to become the next Einstein. However, according to the school of deontological ethics, prescribed by Immanuel Kant, assigning different values to different human lives is completely unethical, because human dignity is absolute, and you can't compare absolutes. In fact, human dignity is the bedrock of most legal systems in liberal democracies today.
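To make the contrast concrete, here is a minimal, purely hypothetical sketch of how the two schools would evaluate the scenario above. Every name, field, and number is an illustrative assumption, not a description of any actual vehicle software.

```python
# Hypothetical sketch: a "Bentham" rule versus a "Kant" rule evaluating the
# same two swerve options. Names and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    lives_at_risk: int        # people endangered by this maneuver
    estimated_utility: float  # crude "benefit to society" score (a utilitarian fiction)

options = [
    Option("swerve_left_toward_cyclist", lives_at_risk=1, estimated_utility=-1.0),
    Option("stay_course_toward_children", lives_at_risk=3, estimated_utility=-3.0),
]

def utilitarian_choice(options):
    # Bentham-style: pick whichever outcome maximizes aggregate utility,
    # i.e. it is willing to trade one life against several.
    return max(options, key=lambda o: o.estimated_utility)

def deontological_choice(options):
    # Kant-style: refuses to rank human lives against each other. If every
    # option deliberately endangers someone, no permissible ranking exists.
    if all(o.lives_at_risk > 0 for o in options):
        return None  # no morally permissible selection among these options
    return min(options, key=lambda o: o.lives_at_risk)

print(utilitarian_choice(options).name)  # -> swerve_left_toward_cyclist
print(deontological_choice(options))     # -> None: lives may not be weighed
```

The point of the sketch is simply that the two schools do not merely return different answers; the deontological rule rejects the question itself.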

It will be hard to program autonomous vehicles to include every conceivable scenario that could occur in traffic

Sounds like there is no real ethical solution after all?
Philippa Foot, a famous British philosopher and ethicist, called this kind of situational dilemma a "trolley problem" [based on a dilemma similar to the one described above, using a runaway trolley rather than a car and adding further complications]. Philosophers often engage in thought experiments such as these, describing hypothetical situations, sometimes realistic and sometimes purely theoretical, designed to probe our "moral intuitions" when dealing with dilemmas. There is no right or wrong answer to a trolley problem, which means we can't simply solve it like a puzzle; instead, we have to choose wisely before we let autonomous vehicles loose on humankind.

So which school of ethics should carmakers follow?
The automotive industry is focused on making sure accidents don’t happen in the first place. Driver assistance systems are programmed to react defensively, meaning that when in doubt they will slow down or stop and ask questions later. If an accident becomes unavoidable, the European Ethics Commission has laid down that under no circumstances may a vehicle be programmed to choose between potential human victims. Instead, the steering systems must be programmed to seek to avoid an accident by all means, or at least to reduce speed (and thus collateral damage) as much as possible.
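The defensive principle described here can be illustrated with a small, hypothetical sketch: the system never selects between potential human victims; it only tries to avoid the collision or shed speed. The function and field names below are assumptions made for illustration, not an actual steering-system interface.

```python
# Hypothetical sketch of a defensive response policy: avoid if a safe path
# exists, otherwise brake hard; never steer toward a particular person.

def defensive_response(obstacle_detected: bool, clear_escape_path: bool,
                       current_speed_kmh: float) -> dict:
    """Return a conservative maneuver; never a choice between people."""
    if not obstacle_detected:
        return {"action": "continue", "target_speed_kmh": current_speed_kmh}
    if clear_escape_path:
        # An evasive path exists that endangers nobody: take it, and slow down.
        return {"action": "evade", "target_speed_kmh": min(current_speed_kmh, 30.0)}
    # No safe escape: reduce speed as much as possible to minimize impact
    # energy, rather than ranking potential victims against each other.
    return {"action": "emergency_brake", "target_speed_kmh": 0.0}

print(defensive_response(True, False, 50.0))
# -> {'action': 'emergency_brake', 'target_speed_kmh': 0.0}
```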

And if worst comes to worst, who is considered liable?
It’s certainly not the car itself, since algorithms cannot be held legally accountable.


Vital decisions: Autonomous vehicles will one day be forced to make life-or-death choices in an instant

Do we need some kind of Digital Road Traffic Act?
The European Parliament suggested that autonomous driver assistance systems should be considered legal entities – a kind of electronic persona. After all, companies can be taken to court, so why not an algorithm? This is especially true for self-learning artificial intelligence (AI) systems, which would need to be issued a digital legal personality of some kind. Of course, this also means they would have to be registered with the authorities and would need to be provided with assets with which to pay compensation, or at the very least they would require liability insurance. We need to think long and hard about what this kind of "digital personhood" means in practice, because we are heading towards a future in which autonomous and self-learning machines will play an increasingly important role.

Couldn’t the creators of such systems conceivably argue that, since they are self-learning, it wasn’t they who taught the machine to do what it does, but that it actually taught itself?
It's certainly true that the damage done by a self-learning system can't easily be traced back directly to the programmer. To act in a virtue-ethical sense, an autonomous driver assistance system would need a far greater degree of self-learning than a simple "Kant Car" or "Bentham Porsche." One could imagine a kind of "Aristotlemobile" that would require owners to drive themselves around at first so the car could learn to drive the way they do.

