Would a Google Car Sacrifice You for the Sake of the Many?


Google self-driving cars are presumably designed for safety and programmed to protect their passengers.

But what happens when there are lots of them on the roads and too many choices for the computer to handle?

Should these cars be designed with a Prime Directive of "Prevent my passengers from coming to harm"? When the cars are networked, their Prime Directive may well become "Minimize the amount of harm to humans overall," and such a directive can lead a particular car to sacrifice its passengers in order to keep the total carnage down.

“Asimov’s Three Laws of Robotics don’t provide enough guidance when the robots are in constant and instantaneous contact and have fragile human beings inside of them.”

The networking of robotic cars may change the basic moral principles that guide their behavior. Non-networked cars are currently designed as morally blind individualists that try to save their passengers without regard for anyone else, but networked cars will probably be designed around some form of utilitarianism that tries to minimize the collective damage.
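To make the contrast concrete, here is a minimal, purely illustrative sketch of the two policies the article describes. Everything in it, the `Outcome` class, the harm numbers, and the two policy functions, is invented for this example; it is not anything Google (or anyone else) has published.

```python
# Toy illustration: an individualist car minimizes harm to its own passengers,
# while a networked, utilitarian fleet minimizes total harm to everyone.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    passenger_harm: float  # expected harm to this car's occupants (hypothetical units)
    others_harm: float     # expected harm to everyone else

def individualist_choice(outcomes: list[Outcome]) -> Outcome:
    """Non-networked policy: protect my passengers, ignore everyone else."""
    return min(outcomes, key=lambda o: o.passenger_harm)

def utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
    """Networked policy: minimize total harm, even at the passengers' expense."""
    return min(outcomes, key=lambda o: o.passenger_harm + o.others_harm)

# Hypothetical scenario: swerving hurts the passenger a little but spares a crowd.
options = [
    Outcome("stay_in_lane", passenger_harm=0.1, others_harm=5.0),
    Outcome("swerve_into_barrier", passenger_harm=2.0, others_harm=0.0),
]

print(individualist_choice(options).action)  # stay_in_lane
print(utilitarian_choice(options).action)    # swerve_into_barrier
```

With identical inputs, the two objective functions pick opposite actions, which is exactly the sacrifice-the-passenger dilemma the article is pointing at.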

Fortunately, we don’t have to make these hard moral decisions. The people programming our robot cars will make them for us.

[SOURCE](https://medium.com/@dweinberger/would-a-google-car-sacrifice-you-for-the-sake-of-the-many-e9d6abcf6fed)
