
This post started as a response to a provocative question by +David Brin on Google+.

His question was in response to this fantastically titled article: “The Case Against Autonomous Killing Machines.”

I don’t think we should be designing machines that kill people, period. But if you are going to have machines that kill sometimes (and military uses fund a lot of this research), those funders may not be willing to encode Asimov’s laws, at least not in the same ordering (i.e. following human orders would rank above saving lives).

But a bigger question right now is whether an AI could reliably evaluate moral laws at all, even if it isn’t literally a killing machine. How sure does the system need to be that there is a human nearby, or that its actions will cause harm? How much failure are we willing to accept if it’s only 99% sure that what it’s doing won’t cause harm?
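To make that question concrete, here is a minimal sketch of what “how sure is sure enough?” looks like once you have to write it down. The names (`HARM_THRESHOLD`, `may_proceed`, `estimated_harm_probability`) are hypothetical, not any real robot’s API; the point is that someone still has to pick the number.

```python
# Hypothetical sketch: a safety gate that only lets the machine act
# when its estimated chance of causing harm is below a chosen threshold.

HARM_THRESHOLD = 0.01  # accept actions with less than a 1% estimated chance of harm


def may_proceed(estimated_harm_probability: float) -> bool:
    """Return True if the action's estimated chance of causing harm
    is below the threshold we have decided to accept."""
    return estimated_harm_probability < HARM_THRESHOLD


# Whoever sets HARM_THRESHOLD is answering the moral question for the machine.
print(may_proceed(0.001))  # True: 99.9% confident the action is safe
print(may_proceed(0.02))   # False: only 98% confident, so the machine must not act
```

Lower the threshold and the machine refuses to do almost anything useful; raise it and you are explicitly accepting some rate of harm.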

Think of Google’s self-driving cars. They are the nearest and potentially deadliest robotic systems we need to worry about in our daily lives. Any wrong move while driving could harm the passengers as well as people in other cars. You can never be _sure_ you won’t get into an accident. So how sure do you have to be? And every once in a while someone will get killed anyway, so who’s to blame? The robot? The engineer? No one? If we define these laws too strictly, we may stop ourselves from creating amazing technologies that change the world. But if the laws are too loose, or absent altogether, then we enter a moral grey zone where accidents happen even though they might not have happened with more processing time.
