There is a classical ethical question, “The Trolley Problem,” which has an interesting parallel in the emerging world of self-driving vehicles. The original problem posits a situation where five people will be killed if you do not act, but the action you could take will directly kill one person. There are interesting variations outlined on the Wikipedia page for the problem.
So, we now have a situation where there are five passengers in a self-driving car. An oncoming vehicle swerves into the lane and will kill the passengers in the car. The car can divert onto the sidewalk, but a person there will be killed if it does. Note that the question here becomes “how do you program the car’s software for these decisions?” Which is to say, the programmer is making the decision well in advance of any actual situation.
Let’s up the ante a bit. There is only one person in the car, but five on the sidewalk. If the car diverts, five will die; if not, just the one passenger will die. Do you want your car to kill you to save those five people? What if it is you and your only child in the car (now two deaths versus five)? Again, the software developer will be making the decision, either consciously or by default.
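To make concrete how early the decision gets made, here is a minimal, hypothetical sketch of such a policy in Python. The names (`Outcome`, `choose_action`) and the harm-minimizing rule are purely illustrative assumptions for this post, not any manufacturer’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str            # e.g. "stay_in_lane" or "swerve_to_sidewalk"
    expected_deaths: int   # projected fatalities if this action is taken

def choose_action(outcomes: list[Outcome]) -> Outcome:
    # One possible rule: pick the action projected to kill the fewest
    # people, regardless of who they are. Even this "neutral" rule is a
    # moral choice the developer commits to long before any crash occurs.
    return min(outcomes, key=lambda o: o.expected_deaths)

# The one-passenger / five-pedestrians scenario from above:
result = choose_action([
    Outcome("stay_in_lane", expected_deaths=1),       # the passenger dies
    Outcome("swerve_to_sidewalk", expected_deaths=5), # five pedestrians die
])
print(result.action)  # -> stay_in_lane
```

Under this rule the car sacrifices its own passenger; swap in a rule that prioritizes occupants and the code kills the five pedestrians instead. Either way, the ethics live in that one line.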
What guidelines do we propose for software developers in this situation?