With the rise of automation and the introduction of self-driving cars, the time has come to seriously discuss the moral dilemmas a computer must be equipped to contend with, especially when human lives must be weighed against one another. Programming this morality is excruciatingly difficult, and it is almost certain that no solution will ever wholly satisfy everyone. Nevertheless, these problems need to be examined and judged.
One such instance: the brakes of a self-driving car have failed, and the car is about to hit a mass of people crossing the street. Should the car barrel through the crowd, or instead crash into a building and likely kill the driver? Does the answer change if the pedestrians are jaywalking? What if the driver is accompanied by several children? Practically infinite variables can modify the situation, and planning for all of them is impossible. Setting certain standards, however, is not only possible but genuinely necessary.
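To see why the variables multiply beyond any enumeration, consider a minimal sketch of how such contextual factors might be encoded. Every name here is a hypothetical illustration, not part of any real vehicle's software; the point is only that each yes-or-no factor doubles the number of distinct cases a designer would have to rule on in advance.

```python
from itertools import product

# Hypothetical contextual factors an engineer might want the car to weigh.
# Each additional binary factor doubles the number of distinct cases.
factors = {
    "pedestrians_jaywalking": (False, True),
    "children_in_car": (False, True),
    "crowd_size_over_ten": (False, True),
    "occupant_survival_likely": (False, True),
}

cases = list(product(*factors.values()))
print(f"{len(factors)} binary factors -> {len(cases)} distinct cases")
# 4 factors already yield 16 cases; 30 factors would yield over a billion,
# and real situations are not even binary.
```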
Most people prefer solutions that minimize the loss of life, even at the expense of the driver. Yet while people endorse this option in the abstract, they do not want to operate such vehicles themselves. In other words, people want self-driving cars to be willing to sacrifice the driver for the sake of sparing others, but they are not enthusiastic about purchasing a car that is willing to kill them. This creates a further problem: if people are less inclined to purchase self-driving cars, fewer will be bought, more traditional cars will remain on the road, and more motor fatalities will result from human error. By wanting to protect themselves, most people would inadvertently put themselves and others at risk by trusting their own driving over a computer that is prepared to sacrifice its occupants only in the most extreme and rare of circumstances.
The issue of programming morality into self-driving cars is, in short, extremely complicated. Should we do whatever provides the most good in a strict utilitarian sense, simply decree what public opinion most strongly supports, or strike some other compromise? Deciding which group of people should die in a given circumstance will likely never be an easy decision, and the answer may change over time. Perhaps the final decision will be left to a totally random system so as to remove blame altogether, perhaps a Rawlsian system will be employed, or perhaps technology will always be made loyal to its owner. Only time will tell, but the time to decide is now.
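To make the space of candidate rules concrete, here is a minimal sketch of how three of the policies named above might be expressed as interchangeable decision functions. All of the names (Outcome, utilitarian_policy, and so on) are invented for illustration, under the assumption that the car can estimate the consequences of each available action; no real autonomous-vehicle stack exposes such an interface, and a Rawlsian rule is omitted because it would need richer outcome data than a death count.

```python
import random
from dataclasses import dataclass

# Hypothetical illustration only: every name below is invented.

@dataclass
class Outcome:
    """One available action and its estimated consequences."""
    action: str           # e.g. "swerve into building"
    expected_deaths: int  # estimated fatalities if this action is taken
    occupants_die: bool   # whether the car's own occupants are among them

def utilitarian_policy(outcomes):
    """Strict utilitarianism: pick the action with the fewest expected deaths."""
    return min(outcomes, key=lambda o: o.expected_deaths)

def owner_loyal_policy(outcomes):
    """Loyal to its owner: never kill the occupants if any alternative exists,
    then minimize deaths among the remaining options."""
    safe = [o for o in outcomes if not o.occupants_die] or outcomes
    return min(safe, key=lambda o: o.expected_deaths)

def random_policy(outcomes):
    """Remove blame entirely by choosing at random among the actions."""
    return random.choice(outcomes)

# The brake-failure dilemma from above, reduced to two actions.
outcomes = [
    Outcome("continue into crowd", expected_deaths=5, occupants_die=False),
    Outcome("swerve into building", expected_deaths=1, occupants_die=True),
]

for policy in (utilitarian_policy, owner_loyal_policy, random_policy):
    print(policy.__name__, "->", policy(outcomes).action)
```

Even in this toy form, the utilitarian and owner-loyal functions choose opposite actions on the same inputs, which is precisely the disagreement the essay describes: the hard part is not writing any one policy but deciding, and agreeing on, which function the car should run.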