As much as the press — myself included — likes to talk about futuristic battle bots and advanced would-be androids, most robots are being developed with one of two tasks in mind: search and rescue or helping the disabled and elderly.
Robots are natural candidates for helping people with disabilities, largely because they have to simulate senses that most of us take for granted. For a robot to “see,” it usually has to build a virtual map of its surroundings so it can plot a course. Researchers at the Institute of Intelligent Systems and Robotics in Paris are now using that same technology to give people who can’t see a clearer picture of what is around them.
The idea is that a person would wear a pair of glasses equipped with cameras and sensors. The information gathered by the glasses would be processed and used to create a map on the wearer’s handheld electronic Braille device. Think of it as Google Maps for the visually impaired.
According to New Scientist, the system generates up to 10 maps per second. Each map forms on the Braille device when heat causes tiny springs to expand, pushing metal pins upward into the shape of the wearer’s surroundings. Accelerometers and gyroscopes in the glasses pinpoint the wearer’s location and speed, letting the system track them in real time at walking pace.
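The position tracking described above is essentially pedestrian dead reckoning: the gyroscope keeps track of which way the wearer is facing, and each detected step advances the estimated position along that heading. Here is a minimal sketch of the idea, not the Paris team’s actual algorithm; the sample format and stride length are hypothetical, and real systems must also filter noisy sensor data:

```python
import math

def dead_reckon(samples, start=(0.0, 0.0), heading=0.0):
    """Integrate (turn, stride) samples into an (x, y) track.

    samples: iterable of (turn_rad, stride_m) pairs, where turn_rad is the
    heading change reported by the gyroscope since the last step and
    stride_m is the distance covered by that step.
    Returns the list of positions after each step.
    """
    x, y = start
    track = [(x, y)]
    for turn_rad, stride_m in samples:
        heading += turn_rad                 # gyroscope: update facing direction
        x += stride_m * math.cos(heading)   # advance one stride along heading
        y += stride_m * math.sin(heading)
        track.append((x, y))
    return track

# Walk two 0.7 m strides straight ahead, turn 90 degrees left, take one more.
path = dead_reckon([(0.0, 0.7), (0.0, 0.7), (math.pi / 2, 0.7)])
print(path[-1])  # roughly (1.4, 0.7)
```

Because each step’s error compounds into the next, real dead-reckoning systems periodically correct the estimate against known landmarks, which is one reason the glasses rebuild the map so frequently.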
Another twist on similar technology comes from the University of Nevada, where researchers are working on a navigation system for the visually impaired using only a smartphone. First, the phone measures your stride. Then, using indoor maps (which are becoming more common), the phone simply gives you directions with artificial speech.
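Once the phone knows your stride length, turning an indoor route into spoken directions is mostly arithmetic: each leg’s distance becomes a step count the phone can read aloud. A toy sketch of that conversion, assuming a hypothetical map format of (instruction, distance) legs rather than any real indoor-mapping API:

```python
def route_to_directions(legs, stride_m):
    """Convert route legs into spoken-style, step-count directions.

    legs: list of (instruction, distance_m) pairs from an indoor map.
    stride_m: the user's measured stride length in metres.
    """
    directions = []
    for instruction, distance_m in legs:
        steps = max(1, round(distance_m / stride_m))  # distance -> step count
        directions.append(f"{instruction} and walk about {steps} steps")
    return directions

route = [("Head toward the lobby", 14.0), ("Turn left at the elevators", 7.0)]
for line in route_to_directions(route, stride_m=0.7):
    print(line)
```

Counting in steps rather than metres is the point: a blind user can verify progress by counting footfalls, which is far more actionable than “walk 14 metres.”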
Sometimes, however, it’s not just the blind who need help seeing. In Moscow, a boy with a heart condition uses a robot to help him experience school from inside his own room. According to the Associated Press, 16-year-old Evgeny Demidov attends class vicariously via a green robot equipped with an HD camera, microphone and loudspeaker, which he controls through his laptop. The result is similar to online schooling, but obviously much more immersive.
The key advantage is that he can interact with his teachers at the same time his peers do, making the learning experience about as organic as one could hope for via robot. The machine itself costs around $4,000 — a little too expensive for widespread adoption, but not so expensive that schools and hospitals couldn’t purchase subsidized units.
Then, of course, there’s Google’s driverless car. How you define a robot is up for debate, but a machine that can independently navigate something as complex as an active road sounds like a robot to me. A few years ago, the idea of blind drivers seemed ludicrous. Today, it seems almost inevitable that similar cars will be driving the visually impaired to Taco Bell and beyond within the next few years.
Amid all of the hubbub surrounding the release of new smartphones and tablets, I think a lot of us forget that there are people out there who need this technology more than we do. Unfortunately, these advances too often come only as a byproduct of research in other fields.
Don’t get me wrong; it’s great that technology developed by the commercial and military sectors has benefited people with disabilities. My point is that if we committed even a tiny fraction of that money solely to helping people with disabilities navigate the world, think about what we might have accomplished already.