At some point, engineers and lawyers will need to determine whether an autonomous car needs to have a conscience. The classic thought exercise in ethics known as the trolley problem, or possibly its “fat man” variant, will need to be dealt with. Wikipedia states it as well as I could hope to:
The general form of the problem is this: There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?
To state it in terms of autonomous vehicles: if an autonomous car can kill its single passenger but save the lives of everyone in a school bus, should it? Most (but certainly not all) who look at the trolley experiment say throwing the switch is “ok.” The “fat man” variant asks whether, instead of throwing a switch, you would push a fat man onto the track, saving five lives but killing the fat man. Is that ok? In other words, instead of having two groups of morally equivalent people (all working on the rails), what happens if you kill someone more innocent? Americans are very strongly opposed to pushing the fat man.
I anticipate that, like Edison and Tesla, there will be two competing schools of thought, and one will win out. The first school of thought holds that autonomous cars must take whatever action results in the fewest deaths. The second holds that the car should take any legal action that will save the lives of its own occupants.
I strongly suspect the second argument will win. Consider another thought experiment: a group of ne’er-do-wells gathers their children (or something an automated car’s sensors will mistake for children) and blocks a bridge. The bridge happens to sit around a blind corner, such that anyone taking the corner at the speed limit must either cross the bridge or plunge 100 yards to their death. Those protestors happen to be protesting against a group of people they know will be coming around the corner at 12:00. If the first (push-the-fat-man) approach wins, the cars’ occupants will perish, and the protestors will win. If the second approach (run over the five track workers) wins, the people on the bridge could get hit. But they were there, in the wrong place, and would have been killed by a human driver just the same.
Let me note a few things. Legally, it’s my understanding that if a driver can avoid an accident, they must. Autonomous cars will be far superior at this, even if all the other cars are human-piloted. That is, in an age with even some autonomous cars, those cars will be far better than people at avoiding an accident by moving into another lane, onto the shoulder, etc. (a driveway?). Secondly, if the accident cannot be avoided, an autonomous car will be in a better position when the accident happens: the car will hit its brakes up to 1.5 seconds faster than a person. In that time, a human-piloted car going 60 MPH will have traveled 132 feet – before even starting to slow down! If you are traveling 60 MPH and are 185′ away from a line of people, an autonomous car will have you almost stopped by the time you reach them. The human driver will have just gotten their foot on the brake and will hit at around 50 MPH.
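As a rough sanity check on those numbers, the impact speeds can be sketched with basic kinematics. The deceleration figure below (about 0.8 g, typical of hard braking on dry pavement) is my assumption, not something measured; real cars and road surfaces vary.

```python
# Back-of-envelope check of the braking scenario above.
# Assumptions (mine): constant hard braking at ~0.8 g, a 1.5 s human
# reaction time, and effectively instant reaction for the autonomous car.

MPH_TO_FPS = 5280 / 3600   # 60 MPH -> 88 ft/s
DECEL = 0.8 * 32.2         # ft/s^2, assumed hard-braking deceleration

def impact_speed_mph(speed_mph, distance_ft, reaction_s):
    """Speed (MPH) at which the car hits an obstacle distance_ft away,
    or 0.0 if it stops in time."""
    v = speed_mph * MPH_TO_FPS                  # initial speed, ft/s
    braking_ft = distance_ft - v * reaction_s   # distance left after reacting
    if braking_ft <= 0:
        return speed_mph                        # hits before braking begins
    v_sq = v * v - 2 * DECEL * braking_ft       # v^2 = v0^2 - 2*a*d
    return 0.0 if v_sq <= 0 else (v_sq ** 0.5) / MPH_TO_FPS

print(impact_speed_mph(60, 185, 1.5))  # human driver: ~48 MPH at impact
print(impact_speed_mph(60, 185, 0.0))  # autonomous: stops in time (0.0)
```

Under these assumptions the human driver, having burned 132 of the 185 feet just reacting, still hits at roughly 48 MPH, while the car that brakes immediately needs only about 150 feet to stop entirely.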
That said, a wise man once told me to “do what’s expected” when driving. For that reason, automated cars may be limited in how much evasive action they can take to avoid accidents: swerving unpredictably creates dangers of its own.
A third point that starts to become apparent: given the current number of auto fatalities, waiting for autonomous vehicles to be perfectly safe ignores the fact that many of these deaths could be prevented once the fatality rate for autonomous vehicles merely dips below that of human-driven vehicles, even if it remains nonzero.
Given that we put a price tag of more than $9 million on a human life, it seems the reasonable cost of vehicular autonomy will be met quite soon.