Should robots be allowed to kill?

More than a hundred tech leaders have called for a global ban on lethal autonomous weapons. Killer robots could be used against innocent populations and could spiral out of control if hacked, they warn, adding that once this Pandora's box is opened it will be very difficult to close again. Europe's press shares their concerns.

Mladina (SI)

Danger can increase rapidly

Mladina endorses the tech leaders' demands:

“[Tesla founder] Elon Musk, who has long warned of the dangers of artificial intelligence, believes it is more dangerous to human beings than a nuclear war with North Korea. ... Nobody likes to be regulated. But everything that is dangerous for humans (cars, aeroplanes, food, medicine…) is regulated, and AI should be regulated too. A recent Oxford University study showed that AI will outperform humans in all areas within 45 years. Many people therefore believe that the danger to humans is substantial if the machines' objectives are not brought into line with our own.”

Gazeta Wyborcza (PL)

Algorithms make war even more gruesome

Gazeta Wyborcza also fears that autonomous weapons will usher in new and gruesome forms of war:

“Although we already have drones piloted by artificial intelligence, at the end of the day it's soldiers who decide over the life or death of those targeted by the algorithm. The soldiers who suffer from post-traumatic stress disorder as a result often wind up on the therapist's couch. Now algorithms are supposed to liberate them from such traumas by deciding for themselves which people to kill. It's not hard to imagine how murderous algorithms will divide the world into trench-like 'red and green zones'. Deploying autonomous weapons means introducing ruthless, cold-blooded calculation onto the battlefield in an entirely unprecedented way.”

El País (ES)

Bans only follow field testing

An immediate ban is unlikely, El País believes:

“The moment a machine is allowed to decide whether or not to kill a person, a highly complex red line will have been crossed. Once artificial intelligence is equipped with such power there can be no going back. History is full of examples of weapons bans, for example on mines and chemical weapons. ... Until now, however, such bans have been introduced only after the weapons have been deployed en masse. Calling for a ban and stimulating debate is important. However, this would be the first time the world's most powerful armies renounced a weapon without trying it out first.”

Le Soir (BE)

Pushing for moral self-regulation

Mary Wareham, coordinator of Human Rights Watch's "Stop Killer Robots" campaign, presents an alternative to a ban in Le Soir:

“In view of the many different ways robots can be used, I'm not sure that a ban would really be effective. But that's no reason not to try to introduce international rules. Will we be able to? I fear it won't be easy. Nevertheless, one can send a moral message by refusing to use such weapons. That would delegitimise their use by other regimes, as has been the case with chemical weapons. First in line here are the more technologically advanced countries, in particular the United States.”