Military Robots Are Already Capable of Killing — If Humans Let Them

From left to right, Herbert Marshall (1890 - 1966), Richard Egan (1921 - 1987) and Constance Dowling (1920 - 1969) battle a killer robot in a still from the science fiction film 'Gog', 1954. Photo: Archive Photos/Getty Images

When an autonomous robot kills a human being, who bears the blame? The grunt who programmed the bot? The officer who sent it into the field? The civilians who manufactured it? The researchers who designed it? The lawmakers who started the conflict in the first place?

Human Rights Watch recently dug into that question and emerged with a disconcerting answer — no one. In a report released ahead of a United Nations meeting on “autonomous weapons systems,” HRW says current laws would allow all of the people associated with the manufacture and deployment of autonomous robotic weapons to avoid liability for their actions. “Existing mechanisms for legal accountability are ill suited and inadequate to address the unlawful harms fully autonomous weapons might cause,” the report says. Consequently, its authors think the robots should be banned.

The ban would be preemptive, since autonomous killing robots have not yet been deployed in war zones. They will be soon, though. Last November, the Department of Defense began exploring how it can achieve “greater operational use of autonomy across all warfighting domains.” Semi-autonomous weapons systems are already in use, and militaries seem willing to let their robots do everything autonomously except fire on humans. The U.K.’s Taranis, for example, is a prototype stealth combat drone capable of finding, marking, and killing enemies without any human intervention. But British officials stress that’s not how it will be used. Similarly, armed robots in the DMZ between North and South Korea can autonomously identify humans and shoot them with a machine gun. For now, at least, humans are still required to make that decision.

Then there are the Russians, who are explicitly trying to develop robots that can “strike on their own.” Among their early successes is a tank-mounted machine gun built to patrol Russian missile sites and take out intruders on sight.

While the idea of autonomous killer robots may raise the specter of self-aware machines trying to take over the world, that’s not what Human Rights Watch is trying to stop. The organization is more worried about the lack of liability in a scenario where a robot kills someone it shouldn’t have. “No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party. The many obstacles to justice for potential victims show why we urgently need to ban fully autonomous weapons,” the report’s lead author, Harvard Law lecturer Bonnie Docherty, said in a statement.

The bad news for HRW is that the ban looks unlikely. A U.N. discussion just like next week’s took place last year, and only 5 out of 80 nations voiced support for a ban. Those five supporters included Cuba and the Vatican — so you know where to head for amnesty when the robot wars begin.