Take Kinect, Add Robotics, Strap to a Human and Presto — Automatic Building Mapper!

[Photo: Patrick Gillooly / MIT News]

Mapping the insides of anything in real time is hard. Certainly harder than it looks in movies like The Dark Knight, where, near the end, a bat-suited Christian Bale dashes through a high-rise, taking out small squadrons of heavily armed dudes in the dark as Morgan Freeman’s character guides him using “sonar-vision,” whereby mobile phone signals from citizens (as well as “the bad guys”) are converted into crisp, articulate images.

The onscreen effect is one of perfect geometric verisimilitude, like pulling up a high-res wireframe model of the world, every nook and cranny rendered, every mutable body in its place, and a chance to celebrate Batman’s geek-tech MacGyver modus operandi, converting mundane assumptions like “everyone has a cellphone” into a city-saving feat of derring-do.

[youtube=http://www.youtube.com/watch?v=obYNbQXCnx8]

In reality, well, I’m not sure exactly how over-the-top The Dark Knight’s finale is, but I’m guessing it’s only around 98% narrative contraption.

(MORE: Double Robotics Lets You Turn Your iPad into a Telepresence Robot)

So how do you go about mapping a structure’s innards if you’re not a billionaire playboy living in Fantasyland, U.S.A.?

Would it surprise you to learn researchers at MIT have an answer that could eventually aid emergency responders? That it involves everyone’s favorite motion-sensing tinker toy, Microsoft’s Kinect?

According to MIT News, the MIT research team managed to cobble together a prototype system you can attach to your chest that maps your surroundings as you move through an area, even keeping track of which floor you’re on. It’s all laid out in a paper due to be presented at the Intelligent Robots and Systems conference in Portugal in early October.

[youtube=http://www.youtube.com/watch?v=SY7rScDd5h8]

The device is remarkably small, too. According to MIT researcher Maurice Fallon, the lead author of the paper, it consists of a backpack that’s worn in reverse (sort of like a mini Baby Bjorn).

According to Fallon (in the video above), the backpack harbors “onboard processing” of some sort, a Kinect depth sensor, an inertial sensor and a laser rangefinder. Here’s how it works.

As the wearer explores an area, the device emits a laser beam that sweeps the area in a 270-degree arc, measuring the time it takes for the pulses of light to return (sometimes called LIDAR, or Light Detection And Ranging). An accelerometer, a gyroscope, a camera — even a barometer, in one test — augment the mapping algorithm by incorporating visual, velocity and altitude data. All of this is processed locally within the backpack itself, after which it can be dispatched wirelessly to a remote computer, where viewers can monitor the “continuously expanding” map’s genesis in real time.
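To make that time-of-flight idea concrete, here’s a minimal sketch (in Python, and emphatically not the team’s actual code) of how one 270-degree sweep of laser return times could be turned into distances and then into 2D points around the wearer. The function names and the assumption of evenly spaced beams are mine.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second


def pulse_to_range(round_trip_seconds):
    """Convert a laser pulse's round-trip time into a one-way distance."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0


def sweep_to_points(round_trip_times, sweep_degrees=270.0):
    """Turn one sweep of return times (assumed evenly spaced, at least two beams)
    into 2D points in the sensor's own frame: x forward, y to the left."""
    n = len(round_trip_times)
    start = math.radians(-sweep_degrees / 2.0)
    step = math.radians(sweep_degrees) / (n - 1)
    points = []
    for i, t in enumerate(round_trip_times):
        r = pulse_to_range(t)
        angle = start + i * step
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points
```

Each sweep yields a slice of the room; stitching successive slices together as the wearer moves is what the onboard processing (plus the inertial and visual data) is for.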

You can even “tag” interesting points by clicking a handheld button as you’re walking, a crude proof-of-concept mechanic that could eventually be paired with voice or text input tools to allow the wearer to annotate a map.
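The article doesn’t say how those tags are stored, but the mechanic is simple enough to sketch: on each button press, record the wearer’s current estimated pose plus a placeholder note. Everything here (the class name, the pose format) is hypothetical.

```python
import time


class MapAnnotator:
    """Minimal sketch of the 'click to tag' idea: each button press records
    the wearer's current estimated pose along with an optional note."""

    def __init__(self):
        self.tags = []

    def on_button_press(self, current_pose, note=""):
        self.tags.append({
            "timestamp": time.time(),
            "pose": current_pose,  # e.g. (x, y, floor) from the mapping system
            "note": note,          # later: filled in by voice or text input
        })


# usage: annotator.on_button_press((12.3, 4.5, 2), note="blocked stairwell")
```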

“The goal of this project is to enable situational awareness by the user or an external commander in search and rescue operations,” explains Fallon.

And presumably well beyond. Imagine the detailing possibilities if you’re inspecting a partially destroyed building after a fire, trying to determine what’s salvageable and what isn’t (as well as where it’s safe — or not — to navigate). Imagine using such a device in an underground rescue operation, say lowering it down a mine shaft to map the geometric particulars of a pocket harboring trapped miners.

Another trick involved getting the device to create accurate maps while attached to a moving human. Researchers have experimented with similar mapping systems mounted on robots, but humans don’t “roll” on wheels or move with anything like a robot’s smooth, level motion (think of “head bob,” an effect added to first-person video games to make moving through an environment seem more realistic). The MIT team compensated for a human wearer’s shakier gait by using the inertial sensor.
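The article doesn’t spell out the team’s filtering, but a common way to smooth out that kind of sway with an inertial sensor is a complementary filter, which blends the gyroscope’s fast-but-drifting orientation estimate with the accelerometer’s noisy-but-stable read on gravity. A purely illustrative sketch for pitch:

```python
import math


def complementary_pitch(prev_pitch, gyro_pitch_rate, accel, dt, alpha=0.98):
    """One complementary-filter update: integrate the gyro for responsiveness,
    then nudge toward the accelerometer's gravity-based pitch to cancel drift.
    accel = (ax, ay, az) in m/s^2; gyro_pitch_rate in rad/s; dt in seconds."""
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.hypot(ay, az))   # pitch implied by gravity
    gyro_estimate = prev_pitch + gyro_pitch_rate * dt   # fast but drifts
    return alpha * gyro_estimate + (1.0 - alpha) * accel_pitch
```

With a steadier estimate of how the sensor is tilted at any instant, each laser sweep can be projected back onto a consistent, level map plane despite the wearer’s bobbing.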

Furthermore, the team had to deal with “motion drift”: in larger spaces, small measurement errors accumulate over time into broad map inaccuracies. The solution involves software that lets the wearer revisit areas already surveyed, comparing the fresh scans against the originals to produce more accurate estimates of the space.
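The article doesn’t detail the matching algorithm, but the underlying idea, spreading the accumulated error back along the path once a previously visited spot is recognized, can be sketched naively. Real systems would use scan matching and pose-graph optimization rather than this linear smear; the sketch below only illustrates the principle.

```python
def correct_drift(trajectory, revisit_index):
    """Naive loop-closure sketch: the wearer has just returned to the spot first
    recorded at trajectory[revisit_index], so the latest pose estimate "should"
    equal that earlier one. Spread the discrepancy linearly over every pose
    recorded since the first visit. Poses are (x, y) tuples."""
    n = len(trajectory) - 1 - revisit_index
    if n <= 0:
        return list(trajectory)  # nothing recorded since the first visit
    first_x, first_y = trajectory[revisit_index]
    last_x, last_y = trajectory[-1]
    drift_x, drift_y = first_x - last_x, first_y - last_y
    corrected = list(trajectory[:revisit_index + 1])
    for k, (x, y) in enumerate(trajectory[revisit_index + 1:], start=1):
        frac = k / n  # later poses absorb more of the correction
        corrected.append((x + frac * drift_x, y + frac * drift_y))
    return corrected
```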

But wouldn’t a backpack-sized device interfere with other emergency responder gear? MIT News notes that while the sensor prototype is currently about the size of a tablet, everything save the rangefinder could be downsized so that the whole thing eventually fits in a container roughly the size of a coffee cup.

MORE: Mission to Mars: 8 Amazing Tech Tools Aboard NASA’s Curiosity Rover