
The European Parliament has apparently been discussing giving rights to machines.

European lawmakers called on Thursday for EU-wide legislation to regulate the rise of robots, including an ethical framework for their development and deployment and the establishment of liability for the actions of robots including autonomous driving cars. — Reuters

So a bit more detail. The EU Parliament, bless its heart, is really trying to get ahead of the curve rather than letting other jurisdictions set rules it then has to react to. That’s fantastic, and on privacy and human rights the EU has led in many ways. The Parliament motivated this initiative this way:

“Humankind stands on the threshold of an era when ever more sophisticated robots, bots, androids and other manifestations of artificial intelligence seem poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched, it is vitally important for the legislature to consider all its implications” — Robot rights violate human rights, experts warn EU

This is generally true, and governments around the world should be discussing the issues and planning for change … but the “rise of robots” framing is misleading. The fact that they list “robots, bots, androids” as somehow important enough to distinguish is interesting. At the same time, they reserve a single vague phrase, “other manifestations of AI”, for the much larger impact that AI/ML software systems are having on automating financial trading, hiring processes, teaching, telephone call centers, and countless other jobs and industries being pressured by AI that doesn’t come in the form of a robot…or android.

(Aside: Does anyone actually know the difference? If we’re talking about the Star Trek/Star Wars universes I have strong opinions about the difference between robots and androids…but in the real world? Not so much. And what about replicants?)

The specific text of the EU Parliament’s proposal is quoted in the open letter from robotics and AI experts opposing it:

“Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently;”

What needs to be regulated, though, is how people use AI/ML for nefarious ends, or recklessly without regard for the consequences to society. These human-caused problems with how AI is used come in at least three flavours:

1 — Autonomous Weapons

This one is simple: don’t use AI/ML to kill people, or to make it far easier to kill people. I signed a letter last year on behalf of Canadian researchers asking the Canadian government to act on the use of AI for autonomous weapons (Call for an International Ban on the Weaponization of Artificial Intelligence). Recently, Google employees created a letter (Rule #417: there’s always a letter) calling on their employer to pull out of efforts to militarize its AI algorithms.

2 — Personal Data and Privacy

Maintaining huge, uncontrolled databases of all your information and then making inferences from that information to sell you stuff? That’s just how the internet works and pays for itself. If someone could come up with a better way to make so much content “free”, that would be great. But selling that information to the highest bidder without any checks on who they are? Not a good idea. Facebook’s voracious appetite for your personal data and its slow reaction to the Cambridge Analytica crisis is an example of what not to do here.

3 — Structural Unemployment

This seems to be the one the EU is worried about, and it’s a real problem. The usual argument against worrying is that, in the long term, innovation has always created more jobs than it displaces. But that adjustment usually plays out over a generation, and the pace of technological change is now much faster than that: industries are being reshaped on the scale of a decade rather than a generation. Recent advances in AI/ML may break the pattern further by reducing or replacing jobs we think of as highly skilled, and intelligent systems are being built and deployed faster than humans can retrain. But whose fault is it? The robot (or android)? No, it’s always some human’s fault.

Solutions

All of this means society does need to start considering creative solutions to help people deal with structural unemployment, such as guaranteed income, as well as bans on using AI to build better killing machines.

But whatever the problem, pretending robots or algorithms are responsible for the bad decisions humans make isn’t the answer.