Last year Arkin ran computerized battlefield simulations involving an armed, autonomous aerial drone, viewable on his Web site. The ethical governor’s rules were clear: Because of its blast radius, for instance, the drone could not drop a 500-pound laser-guided bomb on targets less than 2,000 feet from noncombatants—unless that target was critical, perhaps a top-level Al Qaeda operative. The governor also controlled the drone’s Hellfire missiles (allowed to hit targets 20 feet from noncombatants) and machine guns (1 foot).
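Arkin has not published the governor’s internals, but the standoff rules described above amount to a simple constraint check: each weapon carries a minimum allowed distance from noncombatants, with an override for critical targets. A minimal sketch of that idea, in which the function and all names are purely illustrative, might look like this:

```python
# Illustrative sketch only, not Arkin's actual governor. The weapons,
# distances, and criticality override come from the rules described
# above; everything else is a hypothetical stand-in.

# Minimum allowed distance (in feet) between a target and the nearest
# noncombatant, keyed by weapon.
MIN_STANDOFF_FT = {
    "500lb_laser_guided_bomb": 2000,
    "hellfire_missile": 20,
    "machine_gun": 1,
}

def may_engage(weapon: str, distance_to_noncombatants_ft: float,
               target_is_critical: bool) -> bool:
    """Return True if firing this weapon satisfies the standoff rule.

    A critical target (say, a top-level operative) overrides the
    blast-radius restriction, as in the scenario described above.
    """
    if target_is_critical:
        return True
    return distance_to_noncombatants_ft >= MIN_STANDOFF_FT[weapon]
```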
In these simulations, the drone approaches kill zones with civilian structures such as an apartment building, a religious landmark, and a hospital. In one scenario, the simulated Reaper targets a gathering of enemy combatants at a funeral. But the ethical governor finds no weapon or firing position that ensures the safety of civilians and avoids the desecration of the cemetery, so the software orders the drone to hold its fire.
A second scenario has the Reaper flying into a kill zone where an enemy convoy is driving between an apartment building and a religious landmark. This time the governor permits the drone to fire on the convoy, but only after weighing the military value of the target against potential damage to civilian structures.
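Taken together, the two scenarios suggest the governor’s basic decision loop: discard any weapon and firing position that violates its constraints, hold fire if nothing survives, and otherwise shoot only when the target’s military value outweighs the expected harm. A toy sketch of that logic, with every field name and score hypothetical, follows:

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative sketch only; the scores and fields are hypothetical
# stand-ins for whatever the real governor computes.

@dataclass
class FiringOption:
    weapon: str
    position: str               # label for a candidate firing position
    violates_constraints: bool  # endangers civilians or protected sites
    military_value: float       # importance of the target, scored 0-1
    expected_collateral: float  # estimated civilian/structural harm, 0-1

def choose_action(options: List[FiringOption]) -> Optional[FiringOption]:
    """Pick a permissible firing option, or return None to hold fire.

    Mirrors the two simulations: options that endanger civilians or
    protected sites (the funeral) are discarded outright; a surviving
    option (the convoy) is used only if the target's value outweighs
    the expected damage, a crude proportionality test.
    """
    permissible = [o for o in options if not o.violates_constraints]
    if not permissible:
        return None  # no acceptable shot exists, as at the funeral
    best = max(permissible,
               key=lambda o: o.military_value - o.expected_collateral)
    return best if best.military_value > best.expected_collateral else None
```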
With further development, the ethical governor could enable military robots to use lethal force with a closer adherence to the laws of war than human soldiers achieve, Arkin argues. “And robotic systems do not have an inherent right to self-defense,” he says, so a robot could be used to approach an unknown person or vehicle without the haste or panic that a human soldier might feel. “These systems can solve problems, and we should not think of them as operating in the same way as human soldiers. They won’t.”
Arkin says his ethical governor is still in its early stages, so rudimentary that it cannot even be prototyped for testing in the field. But he also calls it naive to suppose that military robots will not gain in autonomy as technology improves. In his view, building smarter and better artificial ethics is crucial to keeping robot autonomy in check.
Tech planners like Arkin focus on near-term, plausible battlefield challenges. The military machines that they worry about are far removed from the famous sci-fi killer robots of the Terminator movies or Battlestar Galactica or the sophisticated, near-human machines of Isaac Asimov’s I, Robot and Philip K. Dick’s Do Androids Dream of Electric Sheep? Yet philosopher of technology Peter Asaro of the New School in New York City, cofounder of the International Committee for Robot Arms Control, thinks there are useful lessons to be learned from the fictional extremes. He worries that even if military robots with emerging autonomy are dumb by human standards, they may still be smart enough in their specialized domains to commit accidental massacres or even start accidental wars.
For instance, Asaro says, glitches in an autonomous aerial drone that lead to accidental missile strikes may not be easily distinguishable from a drone’s “intentional” missile strikes. The more politically tense the situation, the more likely that unpredictable military actions by autonomous drones could quickly descend into all-out warfare, Asaro believes.
Say, for instance, that Iran is testing its own autonomous aerial drone and the drone fires on an American troop convoy just over the Afghan border. Even if Iran disavows the drone’s actions, no one may ever truly establish intent. Iran could insist it was a glitch, while hawks in the United States would have all the casus belli they need to launch a new Middle East war.
A battle of robot against robot might also erupt. With up to 50 nations around the world developing military robots, says physicist Jürgen Altmann of Dortmund Technical University in Germany, opposing aerial drones could ultimately square off against each other. Operating drones by remote control via satellite adds at least a half-second delay, he notes, so the pressure to switch to quick-draw autonomous mode would be strong, if only to ensure having the upper hand.
Altmann describes another hypothetical situation in which Chinese and American drones encounter each other in the North Pacific off the coast of Guam. Such a confrontation might not remain restricted to unmanned drones for long. “You might have a solar reflection or something that is mistaken for a first shot,” he says. “If you have automatic reactions, you might stumble into a shooting war through any kind of unclear event.”
Robot Arms Control
Given the risks, many see robot autonomy as a genie in a bottle, best kept contained. Last September Altmann and ethicist Robert Sparrow from Monash University in Melbourne, Australia, traveled to the home of Sheffield University robotics researcher Noel Sharkey in the U.K. (Asaro attended electronically.) Sharkey had convened this small conference to allow 48 hours of intense debate and deliberation about the problem of autonomous military robots.
Bleary-eyed from one long day and two very late nights—and hastened by the need to catch their respective trains in the early afternoon—the four participants agreed to draft a founding document for what became the International Committee for Robot Arms Control. Inspired in part by the Nobel Peace Prize–winning anti–land mine movement, all present agreed that militaries around the world are asking too few questions and moving too quickly. “Machines should not be allowed to make the decision to kill people,” the committee summed up in its one-page position paper.
Sharkey frequently speaks to military officials around the world about autonomous lethal robots. At the Baltic Defense College in Tartu, Estonia, this past February, he took a straw poll of the military officers who had come to hear him speak. Fifty-eight of the 60, he said, raised their hands when asked if they would like to halt further development of armed autonomous robots. “The military people I talk to are as concerned as anybody,” he says.
At the same time, there is no question that unmanned machines like the PackBot and the Predator drone have been extremely useful in America’s most recent conflicts. Robot autonomy could be even more valuable, in military and civilian applications alike: search and rescue, disarming bombs, or delivering better and faster medical care. Most American fatalities in Iraq and Afghanistan, for instance, come from improvised explosive devices (IEDs). “Nobody’s making an autonomous bomb-finding robot,” Sharkey says. “At the moment, the machinery for doing that is too large to fit on a robot. But that’s where the money should go.”
Asaro proposes that the next step be an international treaty, like the Ottawa Mine Ban Treaty, to regulate the technology. “We should be worried about who is going to be harmed. We need international standards and a test to prove the technology can truly discriminate,” he says. Arkin agrees, noting that a functioning ethical governor to regulate robot behavior on the battlefield is at least a decade away.
But it is not even clear that ethical behavior is a programmable skill. The spotty record of artificial intelligence research does not inspire confidence. “If I were going to speak to the robotics and artificial intelligence people,” Colonel Sullivan says, “I would ask, ‘How will they build software to scratch that gut instinct or sixth sense?’ Combat is not black-and-white.”