the “amorality” of self-driving cars

This article looks at how a self-driving car might be programmed to make a hard decision in a split second.

Philosopher Jason Millar claims to have originated the idea of the ethically challenged self-driving car in a 2014 paper on robotics. As a grad student, he proposed “The Tunnel Problem”—a formulation that has done well online thanks to its simple name (supposedly an analog to the Philosophy 101 “Trolley Problem”).

In “The Tunnel Problem,” Millar’s driverless car (let’s call her Porsche again) is fast approaching a narrow tunnel, the entrance of which is blocked by a child who has fallen into the roadway. The car can either kill the kid or hit the wall of the tunnel, killing the driver (who is really just a passenger).

The trolley problem is fun; there is a good run-down on Wikipedia. You can adapt it to a lot of real-life problems. Is it okay to hurt the few to help the many? Is it okay to hurt bad people who do bad things? Is it wrong to damage natural ecosystems, even if people are not directly hurt, or may even be helped? What if you aren’t sure whether people will be hurt, and the people who might be hurt aren’t even alive yet? Is it enough not to directly cause harm, or are you a bad person if you are not actively trying to reduce harm caused by others? What if you are doing something to reduce harm, but not everything you could be?

As fun as these ethical puzzles are to think about, the bigger picture matters more: with predictions that self-driving vehicles could reduce the death toll on our highways and streets by 80%, there is no moral ambiguity in choosing to make that happen as quickly as possible. I think it would be unethical not to.

Back where the rubber meets the road, I think you would just program the computer to always have a plan for how it would stop if it had to. Human drivers are supposed to do this, and a computer should be much, much better at it. There are cases where swerving is the better option: if something jumps out unexpectedly from the side, like a deer, or drops from above, like a tree branch, swerving could be the right response. But with almost anything unexpected that happens with another vehicle ahead or to the side, the best option would usually be for all vehicles to stop as quickly as possible. And if all vehicles are computer controlled, unexpected things shouldn’t happen that often.
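The policy described above — keep a stopping plan, brake for anything developing ahead, swerve only for sudden intrusions from the side or above — can be sketched as a toy decision function. This is purely illustrative; the `Hazard` type, its origin labels, and the distances are hypothetical names invented for this sketch, not anything a real autonomous-driving stack uses.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    # Hypothetical fields: where the hazard appeared from, and how far
    # ahead of the car it is, in meters.
    origin: str          # "ahead", "side", or "above" (made-up labels)
    distance_m: float

def choose_maneuver(hazard: Hazard, stopping_distance_m: float) -> str:
    """Toy version of the policy in the paragraph above: always prefer
    executing the stopping plan, and swerve only when something intrudes
    from the side or above too close for braking to matter."""
    if hazard.origin in ("side", "above") and hazard.distance_m < stopping_distance_m:
        # A deer jumping out or a branch dropping inside the car's
        # stopping distance: braking can't avoid it, so swerve.
        return "swerve"
    # Anything unfolding ahead: stop as quickly as possible.
    return "brake"
```

For example, a vehicle slowing 40 m ahead when the car needs 30 m to stop would get `"brake"`, while a deer appearing 10 m to the side would get `"swerve"`. The point of the sketch is that the default action is always the stopping plan; swerving is the rare exception, not a moral calculation.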
