Let's consider an ethical problem. You, dear reader, are alone in a car, driving uphill on a one-lane mountain road. As you come out of a hairpin bend, you see several people standing casually in the middle of the steep road. The choices are stark:
- Kill them and hope you survive.
- Save them, and kill yourself, by swerving off the road into the mountainside or into space.
Most people would consider their own lives, and the well-being of those who depend on them, more important than the lives of random strangers in such circumstances.
Now suppose you're flying a pilotless drone, controlling it from the ground. There is a malfunction as the drone flies over a densely populated area. You must crash-land it within, say, a minute. Since there is no danger to yourself, you can make a clear-headed decision: you will try to land it in a spot that minimises the danger to other people.
Now let's say that you're programming a driverless car which must take such split-second decisions autonomously. You will need to programme the car in one of several ways.
- It can minimise loss of life, even if that means killing the occupants. (According to an academic study in France, this is what most people advocate when their own lives are not involved.)
- Or, it must protect its occupants, regardless of how many non-occupants it kills.
- Or perhaps, the car is to use some sort of value judgment to decide whose lives are more important. If the occupant is a doctor or a scientific researcher, who could potentially save many lives, you may programme it to kill a large number of other people in order to protect the occupant. On the other hand, if the other parties at risk include children, the scale could be tilted against the occupant.
This, of course, involves some mechanism for making judgments about the worth of the passenger. For example, if the passenger earns, say, 10 times the average per capita income, the vehicle may be programmed to kill a total of up to nine persons rather than jeopardise the passenger. Any such valuation-of-life system would obviously be deeply discriminatory, and vulnerable to hacking.
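To make the arithmetic of that rule concrete, here is a minimal sketch of how such an income-based weighting might look. The function name, the linear income-to-lives conversion, and the example figures are all hypothetical illustrations of the rule described above, not any manufacturer's actual logic; the point of the article stands, namely that any such rule is arbitrary and discriminatory.

```python
# Hypothetical sketch of the income-based valuation rule described above:
# a passenger earning N times the average per capita income "outweighs"
# up to N - 1 other people. Purely illustrative.

def max_tolerable_casualties(passenger_income: float,
                             average_income: float) -> int:
    """How many non-occupants this (hypothetical) rule would accept
    harming before it sacrifices the passenger instead."""
    if average_income <= 0:
        raise ValueError("average income must be positive")
    income_ratio = passenger_income / average_income
    # A passenger worth 10x the average outweighs up to 9 others;
    # never a negative count.
    return max(0, int(income_ratio) - 1)

# Example: a passenger earning 10 times the average per capita income.
print(max_tolerable_casualties(1_000_000, 100_000))  # 9
```

Even this toy version shows the problem: the output depends entirely on an arbitrary choice of metric (income) and an arbitrary exchange rate between that metric and human lives.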
Most people would feel a little unhappy at the thought of sitting in a vehicle that can be programmed to kill them. They would feel even more unhappy if they realised that a driverless car must be programmed to kill its occupants, or at least risk killing them, under certain circumstances.
This problem will soon be out of the ethics lab and on the streets. By 2020, around 10 million driverless vehicles are likely to be on the road.
Driverless vehicles are way, way safer than any manned car or truck can be. They have quicker reflexes; they have 360-degree "vision", including radar and infra-red. They can "talk" to other driverless vehicles, including cars that are out of the line of sight, and adjust for the actions of other driverless cars in real time.
It is guaranteed that they will obey traffic rules. They can potentially be cheaper because a lot of now-redundant equipment (steering wheels, gears, brakes, etc.) can be eliminated.
As of now, around 1.2 million people are killed every year, and somewhere between 20 and 30 million are seriously injured, in road accidents around the world. An overwhelming majority of those accidents (close to 99 per cent) occur due to human error. As the number of driverless vehicles increases, accidents are likely to fall dramatically because driver errors will be progressively eliminated.
But that ethical problem must be solved before driverless vehicles can be adopted in really large numbers. Insurers and motor vehicle licensing departments must know how the programming works in situations where an accident and loss of life appear inevitable.
What choices will a driverless car make under those circumstances? If you're a potential buyer, you'd probably like to know.