
January 03, 2019 · 4 min read

    Who dies first? A self-driving car survey shows exactly who the world wants autonomous vehicles to sacrifice. Animals and the old should be sacrificed in autonomous vehicle crashes, according to a major new study. When self-driving cars arrive, they will sometimes be forced to decide who should die when they collide with members of the public. When something goes wrong, they will have to be programmed to opt for one group or another when deciding where to crash, an issue that has become a central ethical problem for those designing the cars.

    Researchers asked more than 2 million people in an attempt to establish who the public thinks should be sacrificed in those crashes. They were told to imagine a situation where a deadly crash was going to occur and the car had to choose between two sets of people – and asked to decide which of those groups would die.

    And while the results varied widely between different groups of people, the scientists found a number of common beliefs between them.

    Chief among them were three main preferences: in a crash, cars should favour outcomes in which fewer people die rather than more, in which older people are sacrificed rather than younger ones, and in which humans are spared over animals.

    They also found that people tended to believe that cars should favour law-abiding citizens over those who might be walking in the road.

    “The main preferences were to some degree universally agreed upon,” said Edmond Awad, a researcher at the MIT Media Lab and lead author of a new paper outlining the results of the project. “But the degree to which they agree with this or not varies among different groups or countries.”

    For instance, people in many eastern countries tended to agree less strongly that older people should be sacrificed for younger ones.

    The researchers hope that the discoveries can help inform the creation of new self-driving cars, so that they can represent the ethical beliefs of the people who use them.

    The researchers also said there was a vast and surprising amount of interest in the problem, and that they hoped similar exercises could be conducted in the future to involve the public in such decision making.

    More than 2 million people in over 200 countries took part in the survey, which represented an updated version of the famous “trolley problem” often used in philosophical thought experiments. In that problem, people are told to imagine a runaway trolley heading down a track and to think through various possibilities – such as whether actively diverting the trolley to kill one person is better than letting it continue and kill five – in an attempt to understand ethical decision making.

    If forced to choose, who should a self-driving car kill in an unavoidable crash? Should the passengers in the vehicle be sacrificed to save pedestrians? Or should a pedestrian be killed to save a family of four in the vehicle?

    To get closer to an answer - if that were ever possible - researchers from the MIT Media Lab have analysed more than 40 million responses to an experiment they launched in 2014. Their Moral Machine has revealed how attitudes differ across the world.

    How did the experiment work?

    Weighing up whom a self-driving car should kill is a modern twist on an old ethical dilemma known as the trolley problem. The idea was explored in an episode of the NBC series The Good Place, in which ethics professor Chidi is put in control of a runaway tram.

    If he takes no action, the tram will run over five engineers working on the tracks ahead.

    If he diverts the tram on to a different track he will save the five engineers, but the tram will hit one other engineer who would otherwise have survived.

    The Moral Machine presented several variations of this dilemma involving a self-driving car.

    Moral Machine: Should a self-driving car save passengers or pedestrians?

    People were presented with several scenarios. Should a self-driving car sacrifice its passengers or swerve to hit:

    • a successful business person?
    • a known criminal?
    • a group of elderly people?
    • a herd of cows?
    • pedestrians who were crossing the road when they were told to wait?

    Four years after launching the experiment, the researchers have published an analysis of the data in the journal Nature.

    What did they find?

    The results from 40 million decisions suggested people preferred to save humans rather than animals, spare as many lives as possible, and tended to save young over elderly people. There were also smaller trends of saving females over males, saving those of higher status over poorer people, and saving pedestrians rather than passengers.
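As a toy illustration only – not the researchers' actual model – the broad preference ordering reported above could be sketched as a simple scoring function. The weights and field names below are hypothetical, chosen just to reflect the relative strength of the trends (humans over animals strongest, the pedestrian bias weakest).

```python
# Toy sketch of the survey's reported preference ordering.
# All weights and field names are hypothetical illustrations,
# not values taken from the study.
from dataclasses import dataclass

@dataclass
class Outcome:
    humans_spared: int       # human lives saved by this choice
    animals_spared: int      # animal lives saved
    young_spared: int        # how many of the spared humans are young
    pedestrians_spared: int  # how many of the spared are pedestrians

def preference_score(o: Outcome) -> float:
    """Higher score = more preferred under the survey's broad trends:
    humans over animals, more lives over fewer, young over old,
    and a weaker bias toward pedestrians over passengers."""
    return (10.0 * o.humans_spared        # strongest trend: spare humans
            + 1.0 * o.animals_spared      # animals count far less
            + 2.0 * o.young_spared        # bias toward the young
            + 0.5 * o.pedestrians_spared) # weak pedestrian bias

# Choosing between two hypothetical outcomes:
a = Outcome(humans_spared=1, animals_spared=0, young_spared=1, pedestrians_spared=0)
b = Outcome(humans_spared=0, animals_spared=5, young_spared=0, pedestrians_spared=0)
best = max([a, b], key=preference_score)  # one spared human outranks five animals
```

Under this toy scoring, sparing a single young human outranks sparing a herd of five animals, matching the humans-over-animals trend in the data.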

    About 490,000 people also completed a demographic survey including their age, gender and religious views. The researchers said these qualities did not have a "sizeable impact" on the decisions people made.

    The researchers did find some cultural differences in the decisions people made. People in France were most likely to weigh up the number of people who would be killed, while those in Japan placed the least emphasis on this.

    Most emphasis on sparing pedestrians 

    1. Japan
    2. Norway
    3. Singapore

    Least emphasis on sparing pedestrians 

    1. China
    2. Estonia
    3. Taiwan

    The researchers acknowledge that their online game was not a controlled study and that it "could not do justice to all of the complexity of autonomous vehicle dilemmas". However, they hope the Moral Machine will spark a "global conversation" about the moral decisions self-driving vehicles will have to make. 

    Most emphasis on sparing the young 

    1. France
    2. Greece
    3. Canada
    4. UK

    Least emphasis on sparing the young 

    1. Taiwan
    2. China
    3. South Korea
    4. Japan

    "Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now," the team said in its analysis.

    "Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them."

    Germany has already introduced a law that states driverless cars must avoid injury or death at all costs. The law says algorithms must never decide what to do based on the age, gender or health of the passengers or pedestrians. If we do ever get to the point where a car really can drive fully autonomously, it will be able to make better split-second decisions, based on far more information and with more precision, than the best human driver ever could.
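A rule like the German one – that personal attributes must never enter the decision – could in principle be enforced mechanically on a decision function's inputs. The sketch below is a minimal hypothetical illustration, not actual regulatory code, and the attribute names are assumptions.

```python
# Toy sketch: reject decision inputs that contain protected personal
# attributes (age, gender, health). Names are hypothetical illustrations,
# not legal definitions from the German law.
PROTECTED_ATTRIBUTES = {"age", "gender", "health"}

def validate_decision_inputs(features: dict) -> dict:
    """Raise if any protected attribute appears among the decision inputs."""
    banned = PROTECTED_ATTRIBUTES & features.keys()
    if banned:
        raise ValueError(f"protected attributes not allowed: {sorted(banned)}")
    return features

# Allowed: only impersonal facts about the situation.
ok = validate_decision_inputs({"people_at_risk": 3, "speed_kmh": 50})
```

Passing a feature set containing, say, `"age"` would raise a `ValueError`, so a pipeline built this way could not silently condition on who the people involved are.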
