Machine Morality

This post started as a response to a provocative question by +David Brin on Google+:

His question was in response to this fantastically titled article: "The Case Against Autonomous Killing Machines"

I don't think we should be designing machines that kill people, period. But if you are going to have machines that sometimes kill (and military uses fund a lot of this research), their builders may not be willing to encode Asimov's laws, at least not in the same ordering (i.e. following human orders would rank above saving lives).

But a bigger question right now is whether an AI could reliably evaluate moral laws at all, even if it isn't literally a killing machine. How sure does the system need to be that there is a human nearby, or that its actions will cause harm? How much failure are we willing to accept if the system is only 99% sure that what it's doing won't cause harm?

Think of Google's self-driving cars. They are the nearest, potentially deadliest robotic systems we need to worry about in our daily lives. Any wrong move while driving could harm the passengers as well as people in other cars. You can never be _sure_ you won't get into an accident, so how sure do you have to be? Every once in a while someone will get killed anyway; who is to blame? The robot? The engineer? No one? If we define these laws too strictly, we may stop ourselves from creating amazing technologies that change the world. But if they are too loose or absent altogether, then we enter a moral grey zone where accidents happen even though they might not have happened with more processing time.
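To make the "99% sure" question concrete, here is a minimal sketch in Python of what such a rule might look like. The threshold value, the `decide` function, and the `p_harm` estimate are all hypothetical illustrations, not anyone's real system; the point is that the comparison itself is trivial, while producing a trustworthy harm estimate and justifying the threshold are not.

```python
# A minimal sketch of a confidence-threshold rule for "how sure is sure enough".
# Everything here is hypothetical: the threshold, the probability estimate, and
# the action names are illustrations only.

# Hand-picked policy threshold: act only if the estimated chance of harm is
# below 1% (the "99% sure" case from the post). Choosing this number, and
# deciding who is to blame when the 1% case happens, is the open question.
HARM_THRESHOLD = 0.01

def decide(p_harm: float) -> str:
    """Given the system's own estimate of the probability that an action
    harms a human, either proceed or hand control back to a person."""
    if p_harm < HARM_THRESHOLD:
        return "act"            # proceed autonomously
    return "defer_to_human"     # stop, slow down, or ask a human

# Toy usage: a perception stack (not shown) would produce these estimates,
# and the estimates themselves carry error bars this rule quietly ignores.
print(decide(0.002))   # "act"
print(decide(0.05))    # "defer_to_human"
```

Even this toy version shows where the hard questions hide: in the quality of the `p_harm` estimate and in whoever gets to pick `HARM_THRESHOLD`.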

2 comments:

  1. Mark,

    Robots cannot be fully moral unless they are programmed to kill humans in situations in which that's what morality requires.

    There's a recent book that discusses this and other topics, including the extension of Asimov's laws. It's called _Robot Nation: Surviving the Greatest Socioeconomic Upheaval of All Time_ by Stan Neilson.

  2. I suppose, but can we ever show that humans have morality in those situations? For humans those moral choices are not even clear, so once again we come back to a requirement for morality in machines being equivalent to a requirement that machines become conscious, intelligent beings along a similar model to human beings.

    Mark
