Should We Ban ‘Killer Robots’? Human Rights Group Thinks So

As if deploying drones — unmanned aerial vehicles — on the battlefield wasn’t controversial enough, here’s an even more disturbing question: Should we allow weapon-wielding robots that can “think” for themselves to attack people?

Imagine a drone that didn’t require a human controller pulling its strings from some secure remote location: a drone that could make decisions about where to go, who to surveil … or who to liquidate.

No one’s deployed a robot like that yet, but the international human rights advocacy group Human Rights Watch sees it as an issue we need to deal with before the genie’s out of the bottle. The group is calling for a preemptive ban on all such devices “because of the danger they pose to civilians in armed conflict,” and it has drafted a 50-page report, “Losing Humanity: The Case Against Killer Robots,” which lays out its case against autonomous weaponized machines.

“There’s nothing in artificial intelligence or robotics that could discriminate between a combatant and a civilian,” argues Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, in the HRW video above. “It would be impossible to tell the difference between a little girl pointing an ice cream at a robot, or someone pointing a rifle at it.”

And that’s the chief concern: a robot given the autonomy to choose whom to attack, without human input, could misjudge and injure or kill unlawful targets such as civilians. Autonomous, non-sentient robots would also, obviously, lack human compassion, as well as the ability to weigh proportionality: gauging whether the risk of harm to civilians in a given situation outweighs the military need to use force.

Autonomous weaponized robots also raise the thorny philosophical question of accountability: who would be responsible if such a robot injured or killed a civilian (or anyone else, unlawfully)? Remember, autonomous doesn’t equal conscious, so punishing the robot is out. Who, then? The personnel who programmed or deployed it? The researchers who designed it? The military or the government in general?

In a statement accompanying the report, HRW warns that we’re probably just two or three decades away — maybe even less — from weaponized, autonomous robots:

Fully autonomous weapons do not yet exist, and major powers, including the United States, have not made a decision to deploy them. But high-tech militaries are developing or have already deployed precursors that illustrate the push toward greater autonomy for machines on the battlefield. The United States is a leader in this technological development. Several other countries – including China, Germany, Israel, South Korea, Russia, and the United Kingdom – have also been involved. Many experts predict that full autonomy for weapons could be achieved in 20 to 30 years, and some think even sooner.

We’re already seeing non-weaponized autonomous robots pop up in contemporary research, such as a swarm of insect-like robots that can fly in lockstep “like escapees from Space Invaders,” or an eerily human-like robot that can climb and leap from obstacles, unaided. Boston Dynamics is even developing a robot for the Pentagon that can autonomously hunt human beings across rough terrain.

What does HRW recommend we do? Establish international as well as domestic laws prohibiting the development, production and use of such weapons; initiate reviews of existing technologies that could lead to fully autonomous weapons; and create a professional code of conduct for scientists, so the many ethical and legal issues are weighed as the field rolls forward.

Maybe it’s time we revisited author Isaac Asimov’s Three Laws of Robotics, codified (pun half-intended) in a 1942 short story and foundational in getting people talking about the ethics of artificial intelligence.

7 comments
RevaPearlston

Killer robots!  Isaac Asimov must be rolling in his grave.

sam

Absurd logic. Machines with better 'decision making' (taking great liberties in using the phrase right now) have a variety of other applications that can save lives just as easily as wage war. The machines are a means to an end, be it reconnaissance, attack or a rescue operation. The software and technology should not be blamed for their application; that is an entirely human failing.

Rio

Totally silly.  No intelligent military is going to relinquish a great advantage. 

MauriceLampl

This story reminds me of the movie Terminator and its sequels, where robots are at war with humans, trying to take over the world by exterminating the human race...

gmc

Let's see the U.S. ban stupid land mines, then worry about smart, considered death. A human with a knife or gun is essentially a cyborg anyway.

I can make microbes to kill, bugs and beetles to kill or spy, fake birds, fish, and none can be banned.

TonyStark

In the decades to come, I think it will be demonstrated that a robot will indeed be able to quickly and reliably distinguish an ice cream cone from a rifle. It will do so with better vision and without the handicaps of adrenaline and combat anxiety. Human ability to gauge a situation is a purely subjective matter, seen through the eyes of someone with preconceived beliefs and emotions. An AI construct, free from ideology and primal fear instincts, could be programmed to make an objective and well measured assessment. We can't do that ourselves right now. How many police shootings have been attributed to human mistakes or emotions? Friendly fire? I think at some point we will trust the machines more. They will be... predictable.

Right now, a moderately skilled individual can build a drone in their garage in a few days. Even if these machines are outlawed in the future, there will be rogue elements creating more complex and armed machines. We will have no choice but to create our own in defense. It's just a matter of time at this point.

Yoshi

Thinking about DOD contracts and research/development programs going out into the 2040s, yes, this is definitely the direction most defense technology is headed. Autonomy. Not very hard to hear THIS train a'comin'...

What's the worst that could happen?