The road to a sentient machine intelligence that eventually eats our minds and makes us wear black robes and Silhouette shades and fight Hugo Weaving long past the point of dramatic sanity is fraught with other perils — like a robot that not only pours you a beer, but “knows” not to if you start slopping your glass around.
Consider the end nigh, friends.
Blame the wizards at Cornell’s Personal Robotics Lab for figuring out how to empower a robot to anticipate all your smooth or occasionally inebriated moves.
Take putting something in the fridge. Granted, it's not the most interesting thing for a robot to help with (yes, mind-reading would be cooler), but by using Microsoft's Kinect camera and a 3D video database of more than 100 common household activities, Cornell's robot can actually tell what you intend to do, at least some of the time. Take a step toward the fridge clutching an object and the robot notices what you're up to: it sees that you're holding something, discerns that you're moving toward the fridge, squares that with all the other possible interactions between the objects in view, hypothesizes various scenarios, then selects the one it deems most likely to happen.
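In rough pseudocode terms, that observe-hypothesize-select loop might look like the sketch below. The cue names, scenarios, and scoring are invented for illustration; the actual system learns its models from that 3D video database rather than counting hand-written cues.

```python
# A minimal sketch of the anticipation loop, with made-up cues and a
# toy scoring rule (count how many of a scenario's expected cues have
# actually been observed). The real robot learns these from video data.

def most_likely_scenario(observations, scenarios):
    """Score each candidate scenario against the observed cues and
    return the one the robot deems most likely to happen."""
    def score(scenario):
        return sum(1 for cue in scenario["cues"] if cue in observations)
    return max(scenarios, key=score)

# Hypothetical cues extracted from the Kinect feed.
observations = {"holding_object", "moving_toward_fridge"}

# Hypothetical candidate scenarios and the cues each one predicts.
scenarios = [
    {"name": "put_object_in_fridge",
     "cues": {"holding_object", "moving_toward_fridge", "fridge_door_open"}},
    {"name": "get_drink_from_shelf",
     "cues": {"moving_toward_shelf", "empty_handed"}},
    {"name": "wash_object_in_sink",
     "cues": {"holding_object", "moving_toward_sink"}},
]

print(most_likely_scenario(observations, scenarios)["name"])
# → put_object_in_fridge
```

Two of the three observed-cue matches line up with the fridge scenario, so that's the robot's best guess; in practice the scoring would be probabilistic and re-run continuously as new frames arrive.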
So a little like chess, then, only played with refrigerators. And beer.
“We extract the general principles of how people behave,” said Ashutosh Saxena, an assistant professor of computer science at Cornell who worked with computer science grad student Hema S. Koppula on the project. “Drinking coffee is a big activity, but there are several parts to it.”
Say you’re ready for a refill on that frothy mug of delicious whatever and Cornell’s robot approaches to do the honors, but as it’s twisting its booze-filled claw, tipping the bottle’s neck toward your waiting glass, you suddenly reach for the container and pull it toward you. Instead of pouring all that precious, precious liquid onto the table, the robot sees what you’re doing and stops — not after you’ve grabbed the glass, but the moment you reach for it.
Well, most of the time. According to Cornell’s tests, the robot made the right prediction 82% of the time when looking one second into the future, 71% at three seconds, and 57% at ten seconds.
Check out the video to see the robot in action, and yes, the movie reference you’re looking for is Donnie Darko when those creepy-cool lines start arcing around the room.