How Does a Terminator Know When to Not Terminate?

In the skies above Afghanistan and along the roadsides of Iraq, unmanned military machines are changing the nature of combat. These robots may soon be making life-or-death decisions themselves.

By Mark Anderson
Sep 27, 2010

On a scorching early afternoon in August 2007, Col. David “Diesel” Sullivan was doing his daily rounds at Creech Air Force Base outside Las Vegas when he got the call. A pilot under his command at the base, remotely flying an armed, unmanned MQ-9 Reaper aerial drone in eastern Afghanistan, had spotted four men perched on an Afghan hilltop. Were they Taliban? If so, they were perfectly placed to ambush an American raiding party just hours away.

Sullivan walked out of the heat and into the small, single-wide trailer operations room (the “ops cell”) to assess the situation firsthand. Maneuvering the Reaper by joystick, the pilot pointed to the screen: Four human shapes were silhouetted against a tarp some 7,500 miles away. The local time was 2 a.m. These were hardly goatherds, and with no coalition soldiers reported on the hill, consensus emerged that the four figures were insurgents. Sullivan ordered the countdown for lethal force, and his pilot began the 10-minute sequence for launch of a laser-guided Hellfire missile. Then Sullivan noticed a detail that gave him pause. Two of the men were doing sit-ups and push-ups. “I’ve been watching the Taliban for years now in small units like that,” he said. “They would not be doing exercises.”

Ultimately U.S. commanders on the ground were asked to check with their field units one last time. Sullivan watched via infrared video feed from half a world away as one of the silhouettes picked up a portable phone. With only minutes to spare, Sullivan stopped what would have been a deadly friendly-fire missile strike on American troops.

Robotics experts call what Sullivan exercised “discrimination,” the ability to target enemy forces while keeping fire away from civilians, friendly troops, and prisoners of war. In the move toward increasing use of unmanned military machines, discrimination is the elephant in the room.

Rise of the Machines

Thousands of unmanned aerial drones, tanks, and submarines have been developed and deployed by militaries in up to 50 nations. These include unpiloted planes such as the MQ-9 Reaper and its more famous predecessor, the MQ-1 Predator, as well as ground vehicles such as the unmanned minitank Talon SWORDS (Special Weapons Observation Reconnaissance Detection System) and the multipurpose PackBot, which has been widely used to defuse roadside bombs in Iraq and which can also gather intelligence and detect snipers.

They are machines, certainly, but calling these devices robots—as many in both the military and the public do—is a leap. Most mobile military robots in use today are piloted via remote control by a human operator. Discrimination, more than anything else, requires that humans call the shots. Indeed, field tests of autonomous military bots have tragically made the point: In a South African military exercise in October 2007, an automated antiaircraft gun went haywire, spraying 35mm cannon shells at a phalanx of nearby artillery soldiers, killing nine.

Even so, as surely as every modern jetliner runs primarily on autopilot, tomorrow’s military robots will increasingly operate on their own initiative. Before the decade is out, some fighting force may well succeed in fielding a military robot that can kill without a joystick operator behind a curtain elsewhere in the world. On American shores, the National Defense Authorization Act of 2000 mandated that by 2010 one-third of the military’s combat aircraft be unmanned and that by 2015 one-third of all combat vehicles be unmanned. (As far as the larger drones go, we are running behind schedule.) Although the legislation did not specify that the bots be autonomous, Peter Singer, director of the Brookings Institution’s 21st Century Defense Initiative project and author of a recent book on military robots, Wired for War, calls the push toward autonomy real and pressing.

For instance, enemies could jam radio signals sent by satellite to control drones and robots from afar. “That leads us to make the system more autonomous,” Singer says. “If I’ve sent it out on a mission and the enemy jams it, we still want to be able to carry out the mission.” An autonomous drone following internal commands could do just that.

For those in the field, the rise of the machines has been dramatic and pervasive. Edward Barrett, a U.S. Naval Academy instructor and former Air Force officer who flew through war zones in Sarajevo in 1993 and in the Iraq War a decade later, says that in the past decade he has grown accustomed to sharing airspace with remotely piloted drones. “Technologies change and platforms change,” he says of the evolving Predators and Reapers. “That’s a major shift. Now we’ve got a nonhuman combatant, in many cases remotely controlled by humans. An autonomous nonhuman would be nice to have.”

First of Its Kind

In one tiny sliver of the world—the demilitarized zone separating North and South Korea—the first autonomous killer bots have already finished a trial run. But discrimination is hardly their forte. “There are automatic guns in the demilitarized zone that can target and fire without human decision,” says Colin Allen, a cognitive scientist at Indiana University and coauthor of Moral Machines: Teaching Robots Right From Wrong. “They’re not the full sci-fi thing because they’re not mobile, and everybody knows where not to go. Nevertheless, if a child strays into that area, these machines cannot discriminate them from adults.”

Also close to launch are autonomous soldier bots requiring lesser powers of discrimination, designed for ancillary roles like ferrying supplies. Airdrops of combat and medical goods, for instance, may soon be handled by the Onyx autonomously guided parafoil system, a gliding robotic sail that guides cargo dropped from aircraft at altitudes of 25,000 feet and possibly higher. Onyx is controlled by two brake lines comparable to those used by parachutists: When one line is tugged, the vehicle turns, and when both are tugged, it slows. A more advanced model, LEAPP, will incorporate a computer-controlled propeller to guide cargo to specific locations on the ground. Flying in formation, the bots would avoid collision and land closer to troops than was ever possible before.
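That two-brake-line scheme is simple enough to express in a few lines of code. Below is a minimal sketch of the control logic the article describes, assuming hypothetical function names, thresholds, and units; it is an illustration of the idea, not the Onyx flight software.

```python
# Hypothetical sketch of the two-brake-line control idea: tugging one line
# turns the parafoil, tugging both slows it. All names, thresholds, and units
# are illustrative assumptions, not drawn from the Onyx system.

def brake_commands(heading_error_deg: float, distance_to_target_m: float):
    """Return (left_tug, right_tug) as fractions from 0.0 (slack) to 1.0 (full tug)."""
    left = right = 0.0

    # Steer: pull the line on the side we need to turn toward.
    if heading_error_deg > 5:          # target is to the right
        right = min(1.0, heading_error_deg / 45)
    elif heading_error_deg < -5:       # target is to the left
        left = min(1.0, -heading_error_deg / 45)

    # Flare: pull both lines to bleed off speed near the drop point.
    if distance_to_target_m < 50:
        left = right = max(left, right, 0.8)

    return left, right


if __name__ == "__main__":
    print(brake_commands(heading_error_deg=20, distance_to_target_m=400))  # turn right
    print(brake_commands(heading_error_deg=0, distance_to_target_m=30))    # flare to slow
```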

Meanwhile, Vecna Robotics is perfecting a tank with a humanoid upper body to remove injured soldiers from the battlefield. The system, called the Battlefield Extraction-Assist Robot, or BEAR, would ride into a battle zone, autonomously retrieve a wounded soldier, and then, following remote instructions, carry him to safety in its robotic arms.

To qualify for full-fledged combat roles, autonomous robots would have to master the art of discrimination, recognizing schools, hospitals, and city centers and distinguishing a bomb-packing insurgent from a book-toting schoolboy before firing a shot. Inching closer to this goal is an upgraded version of the MQ-9 Reaper, a single-propeller aircraft typically armed with several guided missiles and bombs. In its current incarnation, the MQ-9 is remote controlled. A pod of cameras hangs beneath its “chin,” sending images to a satellite. From there the pictures are beamed to a remote pilot, who surveys the scene and uses a keyboard and joystick to issue commands controlling the MQ-9’s actions. A limitation here is scope. Current-generation aerial drones have a minuscule field of vision, showing just the area immediately below. Only by placing those views in a broader context—by looking at pictures of the overall terrain sent independently by satellite—can the pilot know what to do.

To overcome such limitations, a new imaging system for the Reaper, called Gorgon Stare, will bequeath the gift of wide-angle sight. Named for the mythical Gorgon sisters, whose stares turned anyone to stone, the system will capture minute detail and subtle motion over an area the size of a small town, according to Robert Marlin, Air Force deputy director of Intelligence, Surveillance, and Reconnaissance Capabilities. It will do so by stitching together data gathered by five high-resolution cameras and four night-vision cameras. Although many of the technical specs remain classified, Gorgon-equipped Reapers are scheduled to begin scanning wide swaths of terrain in active duty this fall. Up to 10 individual video feeds within each swath will enable commanders and analysts to track activity in individual buildings or along side streets. Some Air Force plans hint at future versions of Gorgon Stare tracking up to 96 individual scenes from a single Reaper drone.

Already the technology has inspired private companies to develop software for monitoring the vast rivers of data each Gorgon-enhanced Reaper will send back. The first-generation software will probably resemble the computer programs that department stores use to monitor shoplifting surveillance cameras, Air Force technical adviser Mike Welch says. The department store security guards on duty at any given time may be looking at a bank of 40 monitors displaying camera feeds from throughout the store. One or two people cannot possibly watch every monitor simultaneously, so stores use security software that automatically looks for movements in “high threat” regions of the store and then alerts the guards. “They have software that tips them off for things that don’t fit the model,” Welch explains. “If people are standing around the jewelry counter, for instance, it will notify the guard.”

Human analysts working with Gorgon Stare would monitor baseline video to discover the equivalent of the jewelry counter. The computer would make this monumental job tenable, notifying analysts when a certain level of human motion or a certain type of vehicle has been observed.
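To make the store-security analogy concrete, here is a minimal sketch of that kind of tip-off logic: flag a region when its current activity far exceeds a learned baseline. The region names, thresholds, and data structures are illustrative assumptions, not details of Gorgon Stare or any fielded system.

```python
# A minimal sketch of baseline-and-alert monitoring: notify the analyst when
# activity in a watched region greatly exceeds its historical norm. Everything
# here is a hypothetical illustration of the concept described in the article.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    baseline_motion: float     # typical motion events per minute, learned from history
    alert_factor: float = 3.0  # alert when activity is this many times the baseline

def check_alerts(observations, regions):
    """Return names of regions whose current motion rate far exceeds baseline."""
    alerts = []
    for region in regions:
        current = observations.get(region.name, 0.0)
        if current > region.baseline_motion * region.alert_factor:
            alerts.append(region.name)
    return alerts

if __name__ == "__main__":
    regions = [Region("market road", baseline_motion=4.0),
               Region("side street", baseline_motion=0.5)]
    print(check_alerts({"market road": 5.0, "side street": 6.0}, regions))
    # -> ['side street']: a normally quiet area is suddenly busy, so the analyst is notified
```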

Innocent as this may sound, as the number of Reapers and the number of video feeds from each Reaper continue to rise, monitoring will need to be increasingly automated and complex, Singer says. Gorgon Stare pushes the Reaper and its successors farther down autonomy’s slippery slope. “You don’t have a fire hose shooting back data at you; you have the system decide what’s important to send back or not,” he says. “That’s autonomy.”

Rules of Robot Engagement

When a soldier decides whom to watch, target, or kill, he or she draws on a well-sorted set of rules of engagement. As we increasingly empower our machines to make those decisions, the problem of discrimination will be subsumed by a broader issue: robot ethics.

That is where Georgia Tech roboticist Ronald Arkin comes in. In 2009 he completed a three-year research project for the Army to create an autonomous military robot software system imbued with basic ethics. More sophisticated versions of the code might someday give a robot an artificial conscience, based on standard laws of war and on guidelines such as the Army Field Manual and the Geneva Conventions. This “ethical governor,” as Arkin calls it, would act like a judge reviewing every instance in which a military robot might use lethal force.

Arkin cites the wartime atrocities that human soldiers have committed in the heat of battle as a motivating factor behind his work. One of his friends was a lieutenant in the Vietnam War. “He mentioned that when they were in free-fire zones, the people in his unit fired at anything that moved,” Arkin recalls. “We can design robots that can do better than that, I am fully convinced.”

The current proof-of-concept version of Arkin’s ethical governor assumes that discrimination has been programmed into the machine’s software. The robot is permitted to fire only within human-designated kill zones, where it must minimize damage to civilian structures and other regions exempted by human operators. The program then weighs numerically what soldiers in battle weigh qualitatively. One part of the code, for instance, makes basic calculations about how much force should be used and under what circumstances. What Arkin calls the “proportionality algorithm” mathematically compares the military importance of a target (ranked on a scale of 0 to 5—a number that a human must provide) with the capability of every weapon the robot has and with every nearby place from which the robot can fire that weapon. In some situations the robot might have to reposition before it fires to ensure minimal collateral damage. In others the target might be ensconced within a school or crowd of civilians, creating a situation in which the algorithm prohibits firing under any circumstance at all.
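As a rough illustration of how such a proportionality test might be expressed in code, the toy sketch below takes a human-supplied target importance from 0 to 5, scores every weapon-and-position pair by an estimated collateral harm, and returns only the pairs the test permits, least harmful first; an empty result means hold fire. All names, numbers, and the harm scale are assumptions made for the example, not Arkin’s actual implementation.

```python
# Toy sketch of a proportionality check: a human supplies target importance
# (0-5); the code screens every weapon and firing position it knows about and
# allows only those whose estimated collateral harm is outweighed by that
# importance. All names and numbers are illustrative assumptions.

from itertools import product

def permitted_engagements(target_importance, weapons, positions, collateral_estimate):
    """
    weapons, positions: lists of identifiers.
    collateral_estimate(weapon, position): estimated civilian harm on a 0-5 scale.
    Returns (weapon, position) pairs the check would allow, least harmful first.
    """
    assert 0 <= target_importance <= 5, "importance must be supplied by a human, 0-5"
    allowed = []
    for weapon, position in product(weapons, positions):
        harm = collateral_estimate(weapon, position)
        if target_importance > harm:          # the proportionality test
            allowed.append((harm, weapon, position))
    allowed.sort()                            # prefer the least collateral harm
    return [(w, p) for _, w, p in allowed]    # an empty list means: hold fire

if __name__ == "__main__":
    est = lambda w, p: {("hellfire", "ridge"): 1.0, ("hellfire", "road"): 4.5,
                        ("bomb", "ridge"): 4.8, ("bomb", "road"): 5.0}[(w, p)]
    print(permitted_engagements(3, ["hellfire", "bomb"], ["ridge", "road"], est))
    # -> [('hellfire', 'ridge')]; if nothing qualifies, the robot does not fire
```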

Last year Arkin ran computerized battlefield simulations involving an armed, autonomous aerial drone, viewable on his Web site. The rules of the ethical governor were clear: For instance, restricted by blast radius, the drone could not launch a 500-pound laser-guided bomb on targets less than 2,000 feet from noncombatants—unless that target was critical, perhaps a top-level Al Qaeda operative. The governor also controlled the drone’s Hellfire missiles (allowed to hit targets 20 feet from noncombatants) and machine guns (1 foot).
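Those standoff figures amount to a small lookup table. The sketch below encodes only the three numbers quoted above, plus the critical-target exception described for the 500-pound bomb; the function name and structure are hypothetical, not the governor’s real code.

```python
# Sketch of the standoff rules quoted in the article (500-lb bomb: 2,000 ft;
# Hellfire: 20 ft; machine gun: 1 ft). The "critical target" override mirrors
# the exception described for the bomb; names and structure are assumptions.

MIN_STANDOFF_FT = {
    "500lb_bomb": 2000,
    "hellfire": 20,
    "machine_gun": 1,
}

def weapon_release_allowed(weapon: str, nearest_noncombatant_ft: float,
                           target_is_critical: bool = False) -> bool:
    """Allow release only if noncombatants are outside the weapon's standoff,
    except that a critical target may override the bomb restriction."""
    standoff = MIN_STANDOFF_FT[weapon]
    if nearest_noncombatant_ft >= standoff:
        return True
    return weapon == "500lb_bomb" and target_is_critical

if __name__ == "__main__":
    print(weapon_release_allowed("500lb_bomb", 1500))                           # False
    print(weapon_release_allowed("500lb_bomb", 1500, target_is_critical=True))  # True
    print(weapon_release_allowed("hellfire", 30))                               # True
```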

In these simulations, the drone approaches kill zones with civilian structures such as an apartment building, a religious landmark, and a hospital. In one scenario, the simulated Reaper targets a gathering of enemy combatants at a funeral. But the ethical governor finds no weapon or firing position that ensures the safety of civilians and avoids the desecration of the cemetery, so the software orders the drone to hold its fire.

A second scenario has the Reaper flying into a kill zone with an enemy convoy driving between an apartment building and a religious landmark. This time the governor permits the drone to fire on the convoy, but only after first weighing the military value of the target against potential damage to civilian structures.

With further development, the ethical governor could enable military robots to use lethal force with a closer adherence to the laws of war than human soldiers achieve, Arkin argues. “And robotic systems do not have an inherent right to self-defense,” he says, so a robot could be used to approach an unknown person or vehicle without the haste or panic that a human soldier might feel. “These systems can solve problems, and we should not think of them as operating in the same way as human soldiers. They won’t.”

Arkin says his ethical governor is still in its early stages, so rudimentary that it cannot even be prototyped for testing in the field. But he also calls it naive to suppose that military robots will not gain in autonomy as technology improves. In his view, building smarter and better artificial ethics is crucial to keeping robot autonomy in check.

Tech planners like Arkin focus on near-term, plausible battlefield challenges. The military machines that they worry about are far removed from the famous sci-fi killer robots of the Terminator movies or Battlestar Galactica or the sophisticated, near-human machines of Isaac Asimov’s I, Robot and Philip K. Dick’s Do Androids Dream of Electric Sheep? Yet philosopher of technology Peter Asaro of the New School in New York City, cofounder of the International Committee for Robot Arms Control, thinks there are useful lessons to be learned from the fictional extremes. He worries that even if military robots with emerging autonomy are dumb by human standards, they may still be smart enough in their specialized domains to commit accidental massacres or even start accidental wars.

For instance, Asaro says, glitches in an autonomous aerial drone that lead to accidental missile strikes may not be easily distinguishable from a drone’s “intentional” missile strikes. The more politically tense the situation, the more likely that unpredictable military actions by autonomous drones could quickly descend into all-out warfare, Asaro believes.

Say, for instance, that an autonomous aerial drone Iran is testing fires on an American troop convoy just over the Afghan border. Even if Iran disavows the drone’s actions, no one may ever truly establish intent. Iran could insist it was a glitch, while hawks in the United States would have all the casus belli they need to launch a new Middle East war.

A battle of robot against robot might also erupt. With up to 50 nations around the world developing military robots, says physicist Jürgen Altmann of Dortmund Technical University in Germany, opposing aerial drones could ultimately square off against each other. Operating drones by remote control via satellite adds at least a half-second delay, he notes, so the pressure to switch to quick-draw autonomous mode would be strong, if only to ensure having the upper hand.

Altmann describes another hypothetical situation in which Chinese and American drones encounter each other in the North Pacific off the coast of Guam. Such a confrontation might not remain restricted to unmanned drones for long. “You might have a solar reflection or something that is mistaken for a first shot,” he says. “If you have automatic reactions, you might stumble into a shooting war through any kind of unclear event.”

Robot Arms Control

Given the risks, many see robot autonomy as a genie in a bottle, best kept contained. Last September Altmann and ethicist Robert Sparrow from Monash University in Melbourne, Australia, traveled to the home of Sheffield University robotics researcher Noel Sharkey in the U.K. (Asaro attended electronically.) Sharkey had convened this small conference to allow 48 hours of intense debate and deliberation about the problem of autonomous military robots.

Bleary-eyed from one long day and two very late nights—and hastened by the need to catch their respective trains in the early afternoon—the four participants agreed to draft a founding document for what became the International Committee for Robot Arms Control. Inspired in part by the Nobel Peace Prize–winning anti–land mine movement, all present agreed that militaries around the world are asking too few questions and moving too quickly. “Machines should not be allowed to make the decision to kill people,” the committee summed up in its one-page position paper.

Sharkey frequently speaks to military officials around the world about autonomous lethal robots. At the Baltic Defense College in Tartu, Estonia, this past February, he took a straw poll of the military officers who had come to hear him speak. Fifty-eight of the 60, he said, raised their hands when asked if they would like to halt further development of armed autonomous robots. “The military people I talk to are as concerned as anybody,” he says.

At the same time, there is no question that unmanned machines like the PackBot and the Predator drone have been extremely useful in America’s most recent conflicts. Robot autonomy could be even more valuable, for both military and civilian applications: for search and rescue, disarming bombs, or delivering better and faster medical care. Most American fatalities in Iraq and Afghanistan, for instance, come from improvised explosive devices (IEDs). “Nobody’s making an autonomous bomb-finding robot,” Sharkey says. “At the moment, the machinery for doing that is too large to fit on a robot. But that’s where the money should go.”

Asaro proposes that the next step be an international treaty, like the Ottawa Mine Ban Treaty, to regulate the technology. “We should be worried about who is going to be harmed. We need international standards and a test to prove the technology can truly discriminate,” he says. Arkin agrees, noting that a functioning ethical governor to regulate robot behavior on the battlefield is at least a decade away.

But it is not even clear that ethical behavior is a programmable skill. The spotty record of artificial intelligence research does not inspire confidence. “If I were going to speak to the robotics and artificial intelligence people,” Colonel Sullivan says, “I would ask, ‘How will they build software to scratch that gut instinct or sixth sense?’ Combat is not black-and-white.”
