New Study Asks Who’s to Blame When Robots Harm Us

[Image: HINTS Lab / University of Washington]

I admit it: sometimes I yell at inanimate objects. My computer has, on several occasions, borne the brunt of my rage while spinning the dreaded pinwheel of lost productivity.

It isn’t logical to get mad at a machine, but you’d be hard-pressed to find someone who hasn’t done it. The issue gets even trickier when you start talking about robots, which — despite not being any more alive than your toaster — have purposefully been made to look and act like humans. How do you resist anthropomorphizing an object meant to be anthropomorphized?

(MORE: DARPA’s $2 Million Challenge: Five Potential Robot Contenders)

Researchers at the University of Washington explored this topic with a humanoid robot named Robovie. Forty undergraduate students were asked to complete a seven-item scavenger hunt in two minutes. Robovie played judge, deciding whether the students had done well enough to earn the $20 prize.

The game was rigged, of course: no matter how well the students did, Robovie would tell them that they had failed to find all seven items. When students objected, it replied with statements like “I’m sorry, but I never make mistakes like that. You only got five items.”

This did not go over well with people.

Students argued with Robovie. Some even called the robot a liar, like the student in this exchange:

Robovie: Based on what I saw, you did not win the prize. I am responsible for making this judgment.

Participant: You’re wrong.

Robovie: I was keeping track of the items and you only found five. You do not win the prize.

Participant: You’re lying. I, I said each one of ’em, and–

Robovie: Again, I am sorry, but I am not mistaken.

In the end, 65% of the participants said that Robovie was at least somewhat accountable for its mistake. And a full 100% of them interacted with Robovie in socially rich ways, like moving a ball out of its way when asked and responding when the robot complimented their shoes.

Why does this matter? Because robots are only going to get more autonomous and lifelike over time. Robovie itself isn’t that advanced; it’s controlled by human operators in another room. In the future, we’ll start to see robots with nobody at the controls. Who’s to blame when they destroy our property or physically harm us?

That question is especially relevant for military robots. The study speculates that “as robots become increasingly embedded in warfare, and cause harms intentionally to enemy combatants and accidentally to civilians, it is possible that the robot itself will not be perceived by the majority of people as merely an inanimate non-moral technology, but as partly, in some way, morally accountable for the harm it causes.”

The threat here is that if robots are to blame, then essentially nobody is. It might sound like science fiction, but as we’ve seen with the moral uproar over missile strikes by military drones, it’s a real problem that’ll likely intensify as robots become more advanced.

(MORE: Five Miniature Robots Designed to Travel Inside Humans)