fatal automated vehicle crash in Arizona

An Uber automated vehicle has killed a pedestrian in Arizona, a state with some of the loosest regulations on testing such vehicles. I learned about this from the Bicycle Coalition of Philadelphia.

This is certainly a tragedy. The Bicycle Coalition seems to condemn Uber for testing the car, then goes on to make the point that human drivers kill people every day, every one of those deaths is a tragedy, and there isn’t much public outcry about it. I agree with this, and yet I find it interesting that logic and our gut feeling about the morality of the situation seem so different.

Imagine that switching every car to a self-driving one would cut the number of people killed by 50% (and I have seen estimates of much larger reductions than that). It would seem immoral not to make that change. But at the same time, it would seem immoral to unleash a fleet of robot cars knowing that a certain fraction of them are going to kill people, and that only by killing a few people will they learn how to kill fewer in the future. I don’t know the answer to this, except that the technology will gradually get better, and insurance companies may eventually decide that human drivers are not worth the risk.
