The Moral Machine

In recent years everything seems to be going into automatic mode: smart meters, smart TVs, smartphones and smart appliances, to name but a few. Now driverless cars are slowly making their way onto our roads. In the US they’re already being tested in numerous states, and only a few accidents have been reported. But what if the car had to choose who to injure? What then?

Tesla, Audi, BMW, Ford, Google, General Motors, Volkswagen and Volvo are all developing driverless cars: vehicles that use a combination of sensors, cameras, radar and artificial intelligence (AI) to travel between destinations without a human operator. These vehicles are being tested across numerous US states and by companies such as Uber, and the UK government aims to have a commercial roll-out by 2021. The technology is still not quite ready, and the cars can currently only drive themselves in some circumstances (i.e. the driver must be prepared to take over if need be). So, are they safe?

Considering the volume of tests that have been conducted and the number of accidents reported, yes. Only one pedestrian has been killed by a driverless car, in March this year, along with a few instances where drivers were injured because the cars failed to detect oncoming threats. The issue arises, however, when you stop to think about the following situation: what would happen if the car had to choose between killing the passengers of the car in a crash, or oncoming pedestrians? Who decides the ethical outlook of a machine, or the answer to a question like this? How does a machine make its decision?

This question was posed by researchers at MIT, who created the Moral Machine in 2014 (found HERE, if you want to give it a go). They wanted to investigate what decisions we humans would make in various crash situations, to gauge how a driverless car might react in a similar situation; after all, the cars are meant to learn from us. This is the information MIT provides:

“We show you moral dilemmas, where a driverless car must choose between the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.”

The data collected was published in the journal Nature last week: after four years of collecting data, 40 million decisions had been made by people from 233 countries and territories across the globe, making this one of the largest studies of people’s moral preferences ever conducted. The findings were very interesting. The Moral Machine tested nine factors, asking whether people would prioritise men or women, adults or children, humans or animals, those of high social class over low, the healthy over the sickly, law-abiding citizens over law-breakers, and other lives over the participant’s own in a crash scenario.

The data showed a correlation between the decisions people made and their country’s cultural views and economy. For example, people from countries such as Japan and China were less likely to save the young over the old. Participants from countries with high levels of economic inequality showed greater gaps between their treatment of individuals of high and low social status. More individualistic cultures, like the UK and US, were more likely to spare the greater number of lives over saving themselves, whilst Japanese and Chinese participants were more likely to save themselves. A full breakdown of the results can be found HERE.
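To make that kind of analysis concrete, here is a minimal sketch in Python of how per-country preferences could be tallied from individual Moral Machine answers. The data and column names are entirely made up for illustration; this is not the study’s actual pipeline.

```python
import pandas as pd

# Hypothetical responses: one row per dilemma answered on the Moral Machine.
# 'spared_young' is True when the respondent chose the outcome that saved
# the younger characters. Columns and values are illustrative only.
responses = pd.DataFrame({
    "country":      ["UK", "UK", "US", "JP", "JP", "CN"],
    "spared_young": [True, True, True, False, True, False],
})

# Per-country preference for sparing the young: the fraction of dilemmas
# in which respondents from that country saved the younger characters.
preference = (
    responses.groupby("country")["spared_young"]
             .mean()
             .sort_values(ascending=False)
)
print(preference)
```

In a toy table like this, a lower score for Japan or China would echo the study’s finding that respondents there were less inclined to prioritise the young; the real study aggregated tens of millions of such choices across nine factors.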

The results varied across the globe. Does that mean companies developing these cars should program them depending on where they’re being sold? Is it acceptable for a Japanese car to spare the driver but kill five pedestrians because of cultural beliefs? Edmond Awad, the lead author of the research paper, hopes this information will be used to resolve the potential issues that could arise from cultural differences across the globe, rather than being used by manufacturers to develop vehicles tailored to each country’s values. He also hopes that this study will make creators of AI stop and think more carefully about the ethics of AI technology.

This isn’t to say driverless cars are evil. In fact, it is thought that they could be great for the global economy and improve our way of life. Awad said “more people have started becoming aware that AI could have different ethical consequences on different groups of people.” The Moral Machine was just one way to document how our values differ hugely from country to country. It begs the question: who is making the final decision, and should we be questioning them a bit more before putting our lives in their hands?
