25.06.2024 | AGIDE

The morality of the machine

Can and must autonomous vehicles make ethical decisions? Edmond Awad, co-founder of "The Moral Machine", will report on an unusual experiment at the Austrian Academy of Sciences on June 27. He shares some initial insights in an interview.

Self-driving cars have to make countless decisions - including ethical ones. © AdobeStock

Do you know "The Moral Machine"? It's a platform that puts moral dilemmas online in a game-like format: users have to decide the outcome of an accident involving a driverless car, for example, whether two passengers in the vehicle or five pedestrians should be killed. The scientific project, which started in 2016 at the Massachusetts Institute of Technology Media Lab, was co-founded by Edmond Awad. It has collected data on 40 million decisions made by three million people worldwide.

On June 27, Awad will talk live about the experiment and the insights gained from it as part of the "Narratives of Digital Ethics" conference at the Austrian Academy of Sciences (ÖAW). In this interview, Awad describes how machines deal with moral decision-making situations - and what universal moral preferences we humans have.

Machines assume responsibility

The Moral Machine is a science project developed in 2016 that aims to engage the public in the moral choices that autonomous cars must make. What was the idea behind this project?

Edmond Awad: The plan was twofold: one goal was to collect data to understand what factors actually influence what people think the car should do when resolving moral trade-offs. The other was to engage the public on the big issue of machines making moral decisions and to make people aware that machines may be assigned the responsibility of making such decisions.

Why do you think it's so important to engage people in the discussion about self-driving cars?

Awad: Because it embodies the values of freedom and democracy: allowing people to be involved and to say what they think. It's also potentially important because if driverless cars prove to be safer and reduce accidents, there may be a moral obligation to put them on the road. However, adoption rates could be low due to psychological barriers, so it would be worth understanding those barriers. It's also worth trying to understand where the mismatch lies between policies and public opinion.

The Moral Machine still ventures into a grey area that we usually try to avoid. What benefits can a game bring to solving such difficult problems?

Awad: The Moral Machine was a simplified version of the problem. The real issue is how driverless cars, and machines in general, change the calculus of value distribution. Imagine a judge who is biased in his or her decisions - it could be an individual bias or a systematic one. With machine learning, you're automating decisions in machines that may be trained on the decisions of such judges. Machines could actually magnify that effect, so they could make the bias worse. In the case of driverless cars, people driving today may have some kind of bias against cyclists or pedestrians - but if such biases end up being implemented in automated cars, we get into a worse situation. So even if driverless cars end up being much safer than before, what if 90% of their accidents involve cyclists? Does that mean it's OK to put them on the road just because they're so much better?

The Moral Machine has produced the largest dataset ever collected on the ethics of machines. What similarities and differences did you find across countries and cultures?

Awad: We tested several dimensions, such as age, gender, fitness level and social status. Some answers are almost universal. For example, in most countries people spared women more than men, and in most countries people spared the young more than the elderly. But how strong the preference for saving the young over the elderly was varied from country to country. This is where the cultural aspect becomes interesting. In East Asian countries or in the Middle East, for example, this difference (while still in the same direction) was less pronounced. There are cultural studies on how older people are more respected in the Eastern world - whether because the family structure is stronger or because people believe the elderly have a kind of wisdom that could not have been acquired otherwise.

The role of the rule of law

Are there other social similarities around the world?

Awad: One thing we also found is a relationship between the preference for sparing rule-followers in a country and the rule of law in that country (how much the law is respected and followed there). In some countries the rule of law is very strong, as in Germany or Japan. In those countries, people were more likely to spare the law-abiding at the expense of the law-breakers. On the other hand, there are countries where the rule of law is not as strong, as in parts of the Middle East or Latin America. In such countries, people were less inclined to penalise the rule-breakers: sacrificing them seemed like an outrageously harsh punishment for jaywalking.

To what extent does morality change over time?

Awad: Morality does change over time, and one way it does is through what some people call "the expanding moral circle". While practices like killing animals for sport or food are still common in many countries today, future generations might find these actions shocking and find it hard to believe they were once accepted.

What are your future plans for the Moral Machine?

Awad: The website is still active and attracts many daily visitors. It would be valuable to study how people's perceptions change over time. The field of automated vehicles is very volatile, with early media hype followed by disappointment and adjusted expectations.

And your prediction for when self-driving cars will become a reality?

Awad: Self-driving cars rely on deep learning models that learn from past data, but driving involves many unpredictable and novel situations, like a tree falling or an earthquake. Current technology struggles to handle these novel events because it cannot adapt to completely new scenarios. Experts now suggest that fully autonomous cars (that can drive well in all situations and environments) might not be here until around 2035, since the technology is not yet capable of learning from new experiences as humans do when driving.

At a glance

Edmond Awad is a Senior Research Fellow at the Oxford Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. He is also a Senior Lecturer in the Department of Economics and the Institute for Data Science and Artificial Intelligence at the University of Exeter.

At the conference "Narratives of Digital Ethics", organised by the Austrian Academy of Sciences (ÖAW) and the Vienna Science and Technology Fund WWTF, he will give the keynote speech.

"Narratives of Digital Ethics"

27 to 28 June 2024
Austrian Academy of Sciences, Festive Hall
Dr Ignaz Seipel-Platz 2, 1010 Vienna

Programme

Registration