Choosing who should die
The trolley (tram, train etc.) problem is a thought experiment in ethics. The simple outline of the problem is as follows:
You see a runaway trolley moving toward five people who are tied up and lying on the tracks, unable to move. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be diverted onto another track and the five people on the main track will be saved. However, a single person is lying tied up on the other track. You have two options:
• Do nothing and allow the trolley to kill the five people on the main track.
• Pull the lever, diverting the trolley onto the other track, where it will kill one person.
Which is the more ethical option? There might appear to be an obvious answer, but once human emotions are considered the question becomes far more complex: you are choosing who will die. The dilemma can also be adjusted, for example by making the single person on the other track your child, your sibling, or someone else close to you.
A similar question is now asked in relation to automated driving vehicles (ADVs): if forced to choose, who should a self-driving car kill or injure in an unavoidable crash?
Should the passengers in the vehicle be sacrificed to save pedestrians? Or should a pedestrian be killed to save a family of four in the vehicle? Actual scenarios will, of course, be more complex, but these illustrate how difficult it is to ‘program’ an automated driving vehicle with ethics. Several car manufacturers have stated that their cars will always look to save their occupants.
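To see why ‘programming’ ethics is so hard, consider a deliberately naive sketch. This is a toy, not any manufacturer's actual logic: a purely utilitarian rule that simply picks the action expected to kill the fewest people, and ignores everything else the dilemma turns on.

```python
# Hypothetical, deliberately naive sketch -- not any manufacturer's actual logic.
# A purely utilitarian rule: choose the action expected to kill the fewest people.

def choose_action(outcomes):
    """outcomes maps an action name to its number of expected fatalities."""
    return min(outcomes, key=outcomes.get)

# Trolley-style scenario: stay on course (5 deaths) vs swerve (1 death).
print(choose_action({"stay": 5, "swerve": 1}))  # prints "swerve"
```

Even this toy rule exposes the problem: it treats every life as interchangeable and says nothing about whether the one person is a pedestrian, a passenger, or the occupant's own child.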
In an experiment called The Moral Machine, people were presented with several scenarios. Should a self-driving car sacrifice its passengers or swerve to hit (for example) a:
- male doctor
- homeless person
A few years after launching the experiment, the researchers published an analysis of the data. The results from more than 40 million decisions suggested that people preferred to save humans over animals, to spare as many lives as possible, and tended to save the young over the old.
There were also smaller trends of saving females over males, saving those of higher status over poorer people, and saving pedestrians rather than passengers. The researchers acknowledge that their online game was not a controlled study and that it could not do justice to all of the complexity of autonomous vehicle dilemmas. However, they hope it will spark a conversation about the moral decisions self-driving vehicles will have to make.
Their view is that we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and those who will regulate them.
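A ‘moral algorithm’ of the kind the researchers mention could, in principle, encode such aggregate preferences as numeric weights. The following is a minimal sketch with entirely invented weights, loosely inspired by the trends above; all names and figures are hypothetical, not the Moral Machine's published results.

```python
# Hypothetical weights, invented purely for illustration -- these are NOT
# the Moral Machine's published figures.
WEIGHTS = {
    "human": 1.0,        # humans preferred over animals
    "animal": 0.2,
    "young_human": 1.2,  # the young spared over older people
    "old_human": 0.9,
}

def harm_score(victims):
    """Total weighted 'cost' of harming this group; lower is preferred."""
    return sum(WEIGHTS[v] for v in victims)

def less_harmful(group_a, group_b):
    """Return whichever group the algorithm would choose to harm."""
    return group_a if harm_score(group_a) <= harm_score(group_b) else group_b

# Two humans (score 2.0) vs three animals (score 0.6): the animals are harmed.
print(less_harmful(["human", "human"], ["animal", "animal", "animal"]))
```

Even this toy makes the difficulty visible: someone has to choose the numbers, and that choice is exactly the conversation the researchers say we need to have.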
What is your view? Should we leave these decisions to the programmers? I think not.
Acceptance of technologies
There are a number of barriers to rolling out ADVs in the UK and other countries. Key challenges to adoption are:
- Consumer behaviour
- Connectivity infrastructure
- Business model
Acceptance in society is likely to grow as uptake gains momentum, first among influencers and then among general consumers. Younger, more technology-receptive people will make up a growing proportion of the driving population. Consumer confidence, as with everything else, will be greatly influenced by the media.
It is often perception rather than reality that sways public opinion. A self-driving car ran a red light during a trial in the US in 2016, which attracted huge media attention; imagine how many human drivers did the same during that time with no media attention at all. Coverage like this can create an impression that the technology is less safe, which it is not. Imagine a world where we had never used petrol and then somebody invented a car with 50 litres of it in a plastic tank at the back: it would be described as the most dangerous thing ever! As well as perceptions about safety, consumers will also form opinions about cost.
How do you insure an ADV?
Who is responsible for an autonomous vehicle crash? Surely not the driver! Drivers may not be overly concerned with the distinctions between the different levels of automation, but insurance companies will be.
The Association of British Insurers (ABI) has advised that driverless vehicles, which will also be fully connected vehicles, should have a sufficient level of security to guard against cyber-attacks before they are allowed to operate in fully autonomous mode. The ABI is a major supporter of autonomous vehicles because of their potential to dramatically improve road safety, but it expects the technologies to be developed with care.
There have been a few incidents around the world of ADVs causing death and injury, but statistically they are still much safer than vehicles under human control. However, there is a general expectation that they should be 100% safe. That will never happen, and it is why, alongside a few remaining technical barriers, there is still a social and political divide to cross before these vehicles are fully accepted.