Most of the robots we’re vaguely familiar with are good at menial tasks, like welding car parts or assembling microchips. That isn’t to say what they do is easy: those tasks require ultra-precise movements to be programmed in beforehand. And though there are practical applications for spatially aware technology (like the Roomba), the algorithmic decision-making process is still eons away from actual human thought.
But an article in this month’s issue of National Geographic looks at some of the new robot technologies that may make their way into everyday life in the near future. In it, a new Segway-wheeled humanoid bot dubbed HERB (short for Home Exploring Robot Butler) engages in processes a bit more akin to human consciousness: HERB can visualize and analyze his shifting surroundings before acting on them.
“I call it dreaming,” says HERB’s builder, Siddhartha Srinivasa. “It helps people intuitively understand that the robot is actually visualizing itself doing something.” It’s not dreaming in the sleeping sense, per se, but, like imagination in humans, it allows the robot to adapt to a dynamic environment whose many moving parts (like other humans) are constantly changing it.
It allows HERB to envision the future much the way we do. Some might even call it a primitive form of imagination. Here’s a key passage:
In the lab one of Srinivasa’s students taps a button, issuing a command to pick up a juice box sitting on a nearby table. HERB’s laser spins, creating a 3-D grid mapping the location of nearby people and objects, and the camera locks on a likely candidate for the target juice box. The robot slowly reaches over and takes hold of the box, keeping it upright. On command, he gently puts it down. To the uninitiated, the accomplishment might seem underwhelming. “When I showed it to my mom,” Srinivasa says, “she couldn’t understand why HERB has to think so hard to pick up a cup.”
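The passage above describes a classic sense–think–act loop: scan the surroundings, build a map, locate the target, then plan a safe way to reach it. Here’s a toy sketch of that loop in Python. It is purely illustrative and is not HERB’s actual software; the grid, the obstacle coordinates, and the function names are all invented for this example, and the “planning” is a simple breadth-first search standing in for what, on a real robot, is a far harder problem.

```python
# Toy sense-think-act sketch (illustrative only, not HERB's real code):
# a laser-style scan becomes an occupancy grid, then the robot searches
# for a collision-free path to its target before moving.
from collections import deque

def build_grid(size, obstacle_points):
    """'Sense': turn scan hit points into a size x size occupancy grid."""
    grid = [[0] * size for _ in range(size)]
    for x, y in obstacle_points:
        grid[y][x] = 1  # 1 marks an occupied cell
    return grid

def plan_path(grid, start, goal):
    """'Think': breadth-first search for a collision-free route."""
    size = len(grid)
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < size and 0 <= ny < size
                    and grid[ny][nx] == 0 and (nx, ny) not in seen):
                seen.add((nx, ny))
                frontier.append(((nx, ny), path + [(nx, ny)]))
    return None  # no safe way to reach the target

# A wall of obstacles sits between the robot and its "juice box",
# so the planner has to route around it rather than go straight.
grid = build_grid(5, obstacle_points=[(2, 1), (2, 2), (2, 3)])
path = plan_path(grid, start=(0, 2), goal=(4, 2))
```

Even this stripped-down version hints at why Srinivasa’s mother’s cup is hard: the robot can’t just reach; it has to model the scene and reason about it first.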
The rest of the piece is equally fascinating, but one of the main takeaways is that machines soon won’t be rigidly assigned to rudimentary tasks; they’ll be able to adapt on the go. That has obvious practical applications in fields like home health care for the elderly or disabled, and, carried out effectively, it could improve quality of life for a lot of people who need it.
Couple that with IBM’s Watson, which we’ve covered extensively here, and its ability to take in unstructured data (like human speech), and it’s pretty easy to picture robotic “helpers” making their way into households within the next 20 or so years.
Is it scary? Kind of, yeah. But at this point, robots becoming a part of our daily lives feels more like an inevitability than anything.