PARIS — We’ve hardly made peace with the idea of driverless cars, and now we’re being told that artificial intelligence could also control the rifles, missiles and bombs of the future. The warning was issued last summer by leading figures such as Tesla chief Elon Musk, physicist Stephen Hawking and MIT professor Noam Chomsky.
At July’s International Joint Conference on Artificial Intelligence (IJCAI), they and thousands of researchers co-signed an open letter warning against autonomous weapons. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” it read.
The letter had the merit of publicizing a debate that has occupied diplomats, the defense industry and NGOs for several years without, until now, gaining much public attention. Given the recent progress of robotics and artificial intelligence, their use in weaponry is no longer the stuff of Terminator-style science fiction.
The defense industry is an important testing ground for robotics engineers. Ariel, one of the first machines built by the American startup iRobot, now best known for its robotic vacuum cleaners, was developed in 1996 to detect and clear battlefield mines. Air defense systems are also widely automated. But these machines aren’t designed to put human lives in danger, and they aren’t entirely autonomous yet.
Developed to kill
What worries researchers and NGOs is that some systems may be developed specifically to kill and will be able to operate without any human supervision. They may not exist yet, but these weapons of the future already have a name: “Lethal autonomous weapons systems” (LAWS).
A November 2012 report by Human Rights Watch and the International Human Rights Clinic, a program at Harvard Law School, addressed the issue. Titled “Losing Humanity,” the report called on all countries to “prohibit the development, production and use of fully autonomous weapons,” which it said “could be developed within 20 to 30 years.”
Samsung’s SGR-A1 military robot sentry — Photo: Wikimedia Commons
In the wake of this report, the United Nations began addressing the issue. A first conference was organized in Geneva in May 2014 at the request of Jean-Hugues Simon-Michel, then France’s ambassador to the Conference on Disarmament. A second conference was held last May, but the participating countries failed to reach a consensus, in part because the exact definition of lethal autonomous weapons varies from country to country.
“The characteristic of autonomous weapons is that they can take very different shapes,” explains American researcher Peter Asaro, co-founder of the International Committee for Robot Arms Control (ICRAC) and one of the first academics to study the issue. “The most effective approach would be to consider that, whatever the system, it should be banned from the moment it can aim and fire without real human supervision.”
And indeed, the central problem with autonomous weapons is responsibility. Today, all defense systems keep a human in the loop. Tomorrow, completely autonomous systems could allow armed forces to evade accountability, leading to an increase in war crimes with no one to bring before an international court. In a 2012 directive, the U.S. Defense Department stated that “autonomous and semi-autonomous systems must be designed to allow … an appropriate level of human judgment in the use of force.”
“Robots aren’t scared”
Paradoxically, the human factor is also cited by those in favor of the development of lethal autonomous weapons. “Robots aren’t scared,” Steve Groves, from the conservative U.S. think tank Heritage Foundation, told CBS last May. “They don’t have fits of madness. They don’t react to rage.”
Peter Asaro dismisses this argument. “It can maybe be demonstrated that an autonomous system is more efficient than a human,” he says. “But what humans do goes well beyond aiming and shooting: They take the context into account and are capable of assessing whether civilian lives could be at stake. All this will not necessarily make sense to a machine.”
Asaro is unconvinced that the recent appeal to preemptively ban these weapons could lead to global consensus and the equivalent of an international non-proliferation treaty. “The United Nations acts very slowly, both for bureaucratic reasons and because they’re looking to obtain a consensus from a large number of states,” he says. “I think that if it takes two or three more years before reaching a treaty, the most advanced countries will have developed very sophisticated systems, and they won’t necessarily want to sign.”