Self Driving Car Ethical Question

There is a classic ethical question, “The Trolley Problem,” which has an interesting parallel in the emerging world of self-driving vehicles. The original problem posits a situation where 5 people will be killed if you do not take action, but the action you take will directly kill one person. There are interesting variations on this outlined on the Wikipedia page for the problem.

So, we now have the situation where there are 5 passengers in a self-driving car. An oncoming vehicle swerves into the lane and will kill the passengers. The car can divert to the sidewalk, but a person there will be killed if it does. Note that the question here becomes “how do you program the car’s software for these decisions?” Which is to say, the programmer is making the decision well in advance of any actual situation.

Let’s up the ante a bit. There is only one person in the car, but 5 on the sidewalk. If the car diverts, 5 will die; if not, just the one passenger will die. Do you want your car to kill you to save those 5 people? What if it is you and your only child in the car? (Now 2 vs. 5 deaths.) Again, the software developer will be making the decision, either consciously or by default.
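
To make “deciding well in advance” concrete, here is a minimal sketch, in Python, of what a pre-programmed collision policy might look like. Everything in it (the Outcome record, choose_maneuver, and the “minimize predicted deaths” rule) is an illustrative assumption, not any manufacturer’s actual design.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str          # e.g. "stay_in_lane", "swerve_to_sidewalk"
    expected_deaths: int   # casualties the planner predicts for this maneuver

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    # The "default" policy: minimize predicted deaths, with no
    # distinction between passengers and bystanders.
    return min(outcomes, key=lambda o: o.expected_deaths)

# The one-passenger scenario above: 1 person in the car, 5 on the sidewalk.
options = [
    Outcome("stay_in_lane", expected_deaths=1),        # the passenger dies
    Outcome("swerve_to_sidewalk", expected_deaths=5),  # the bystanders die
]
print(choose_maneuver(options).maneuver)  # -> stay_in_lane
```

Whoever writes choose_maneuver has answered the ethical question before the car ever leaves the factory.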

What guidelines do we propose for software developers in this situation?

4 thoughts on “Self Driving Car Ethical Question”

  1. If the self-driving car does not swerve, it, and by extension the programmer, is not choosing to kill its passengers. It is, rather, discovering that it cannot save its passengers by any method at its disposal. The driver, who in a self-driving car is the programmer, is not authorized, for any reason, to choose to kill people (those on the sidewalk) who would not otherwise have died from the situation (an oncoming vehicle swerving into the opposing lane). (See the sketch after these comments.)

  2. Catholic moral ethics teaches that it is never permissible to take innocent life. You always strive to save all the lives involved. So my interpretation of that teaching would be that you strive to save as many lives as possible.

    To properly address your practical examples above it would be necessary to know what information the control system will have. Will it have a count of all the people in each situation? Will it know the familial relationships of all the people involved?

  3. This is not nearly as big a dilemma as it appears. Of course, the programmer would design the car to kill the fewest people possible. Of course, the vehicle would be required to be covered by insurance. Of course, the families of all the dead would sue. Of course, the settlements would be substantial. Of course, self-driving cars are not going to kill 35,000 Americans, or close to 1,000,000 people worldwide, every year the way human drivers do. Of course, the total cost of insurance will decline substantially, in the same way that airlines on rare occasions must pay out huge settlements but insure passengers for only a small portion of the per-ticket price.
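
The rule described in comment 1 (never choose to kill people who would not otherwise have died) can also be written down in advance, and it yields a different answer than simple death-minimization. This is a minimal sketch under the same illustrative assumptions as the one in the post; the Outcome fields, including kills_uninvolved, are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str
    expected_deaths: int
    kills_uninvolved: bool  # would this kill people not already endangered?

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    # Comment 1's rule: maneuvers that kill the uninvolved are excluded
    # outright, before any counting; only then minimize deaths.
    # (The sketch assumes at least one maneuver remains permissible.)
    allowed = [o for o in outcomes if not o.kills_uninvolved]
    return min(allowed, key=lambda o: o.expected_deaths)

# The 5-passengers-vs-1-bystander scenario from the post:
options = [
    Outcome("stay_in_lane", 5, kills_uninvolved=False),       # passengers die
    Outcome("swerve_to_sidewalk", 1, kills_uninvolved=True),  # bystander dies
]
print(choose_maneuver(options).maneuver)  # -> stay_in_lane, despite 5 > 1
```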
