This Device Could Help the Blind See Images with Their Ears

Daredevil time? This technology sounds even cooler.

Maybe you’ve heard of synesthesia, the conflation of one sense or body part with another — not a word that tumbles from the tongue in casual conversation.

It comes from the transliterated Greek syn, “together,” and aisthesis, “perception.” People who experience this peculiar-sounding effect might mentally conjure a certain smell when hearing a particular sound, or recall a type of texture when seeing a specific color. The celebrated jazz pianist (and longtime host of NPR’s Piano Jazz) Marian McPartland, for instance, associates musical keys with specific colors: “The key of D is daffodil yellow, B major is maroon, and B flat is blue,” she once told jazz critic Whitney Balliett.

Now imagine that we could somehow synthesize synesthesia (there’s a tongue-twister), training our brains to intentionally conflate what’s coming in through our five senses, say associating certain sounds with specific images, effectively learning to “see” with our ears. Then think what that could mean, on a practical level, for people who can’t see but have no trouble hearing.

Researchers in the University of Bath’s psychology department in the U.K. have built a device that they claim may be able to train the brain to do exactly this, teaching people to conjure mental images of what they’re hearing around them. The just-published research deals with blindfolded sighted people, but the implications for the blind and partially sighted are substantial: by exploiting neuroplasticity (the idea that the brain can essentially rewire itself to compensate for a missing limb or an absent sensory input), you might be able to train a brain to visualize its surroundings just by listening to them.

Don’t confuse that with echolocation, the ability to locate objects using echoes (say, listening to a cane tap bounce around a room to orient yourself), which some people can already do today. If you read comics you may be familiar with a wildly exaggerated version of echolocation: Stan Lee and Bill Everett’s Daredevil (Matt Murdock), blinded by a radioactive substance as a child, acquires superhuman sonar-like abilities in the bargain that allow him to “see” in ways that dramatically transcend the abilities of normal sighted people. This isn’t that. This is actually cooler.

Dubbed “vOICe” (I’ll take a wild stab that the stylized OIC is having some fun with the phrase “oh I see”), this “sensory substitution” device was designed to help people compensate for the loss of one sense, usually vision, by subbing in another, and, importantly, without invasive surgery. vOICe works by capturing live images with a video camera, then converting those images to sounds: it scans each frame (refreshed every second or two) from left to right, mapping each pixel’s brightness to loudness and its height in the frame to pitch.
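
To make that mapping concrete, here’s a minimal sketch of a vOICe-style scan in Python, assuming a grayscale image stored as a NumPy array with values from 0 (black) to 1 (white); the frequency range, one-second sweep and column-by-column synthesis are illustrative choices of mine, not the actual vOICe implementation:

```python
import numpy as np

def image_to_soundscape(img, duration=1.0, sample_rate=22050,
                        f_min=500.0, f_max=5000.0):
    """Sweep a grayscale image left to right as audio.

    Each image row is assigned a sine tone (top rows high-pitched,
    bottom rows low), and each pixel's brightness sets that tone's
    loudness while its column is being played.
    """
    n_rows, n_cols = img.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    freqs = np.linspace(f_max, f_min, n_rows)       # row 0 (top) = highest pitch
    tones = np.sin(2 * np.pi * np.outer(freqs, t))  # one sine per row
    chunks = []
    for col in range(n_cols):
        # Weight every row's tone by that pixel's brightness, then mix.
        chunks.append((img[:, col, None] * tones).sum(axis=0))
    wave = np.concatenate(chunks)
    return wave / (np.abs(wave).max() + 1e-9)       # normalize to [-1, 1]
```

Feed it a small frame, say 64×64 pixels, and you get roughly a second of audio per frame, in line with the refresh rate described above.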

We’re still learning how far, and in what ways, the brain’s neural plasticity can be pushed. So to get a baseline for vOICe, the research team, led by psychologist Michael Proulx of the University of Bath’s Crossmodal Cognition Laboratory, observed how sighted people, wearing blindfolds, responded to a visual test taken through the device.

[Figure: the vOICe test setup. Alastair Haigh, David J. Brown, Peter Meijer and Michael J. Proulx / Frontiers in Psychology]

Sitting before computer monitors, blindfolded, wearing over-the-ear headphones, participants listened as vOICe’s camera scanned images of the letter “E” displayed in various positions and sizes — something known as the Snellen Tumbling E test — then created soundscapes based on shading: dark areas produced loud sounds, while light areas (in this case, the “white” letter itself) were silent.
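
In terms of the sketch above, the study’s polarity is just the earlier mapping with brightness flipped; this fragment is my assumption about how you’d reproduce it, not the paper’s actual signal chain:

```python
# Study polarity: dark areas loud, the white "E" silent.
# `img` is a grayscale frame as in the earlier sketch (0 = black, 1 = white).
wave = image_to_soundscape(1.0 - img)
```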

In a routine eye exam, your visual acuity is determined by how far you can sit from this sort of chart while still making out the “E” in focus; normal vision is considered 20/20, and 20/400 means you see at 20 feet what a normally sighted person sees from 400. In the vOICe test, where the best possible result was equivalent to 20/400 vision, participants were able to approach that figure consistently, whether they’d had prior training with the device or not. Interestingly, study participants who’d had musical training performed better than those without.
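
For the curious, those Snellen fractions follow from the standard optometric convention that a 20/20 letter subtends 5 arcminutes of visual angle; this worked example uses my own function name and parameters, not anything from the study:

```python
import math

def snellen_denominator(letter_height_m, viewing_distance_m):
    """Return X in the Snellen fraction 20/X for a letter of the
    given physical height viewed from the given distance."""
    five_arcmin = math.radians(5.0 / 60.0)
    # Height a 20/20 letter would have at this viewing distance.
    ref_height = 2.0 * viewing_distance_m * math.tan(five_arcmin / 2.0)
    return 20.0 * letter_height_m / ref_height

# A letter 20x taller than the 20/20 reference scores 20/400:
ref = 2.0 * 1.0 * math.tan(math.radians(5.0 / 60.0) / 2.0)
print(round(snellen_denominator(20 * ref, 1.0)))  # -> 400
```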

All of which is promising, says Proulx, citing a recent study that found a sight restoration technique involving stem cell implants only yielded 20/800 visual acuity. “Although this might improve with time and provide the literal sensation of sight, the affordable and non-invasive nature of The vOICe provides another option,” he says.

Not that invasive and non-invasive vision restoration techniques need be mutually exclusive: “Sensory substitution devices are not only an alternative, but might also be best employed in combination with such invasive techniques to train the brain to see again or for the first time,” adds Proulx.

How might this work in practice, further down the road? Imagine someone wearing an unobtrusive head-mounted camera (Google Glass, for instance) and receiving wireless audio through tiny earbuds: as they turn to look around, the device scans what’s in view and converts it to soundscapes, which the brain then translates into mental images of the objects nearby. Braille for your ears, if you will.
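
As a rough sketch of that pipeline, here’s how the image_to_soundscape function from earlier could be wired to a camera with OpenCV and the sounddevice library; the camera index, frame size and one-sweep-per-frame cadence are all assumptions for illustration:

```python
import cv2                 # pip install opencv-python
import sounddevice as sd   # pip install sounddevice

SAMPLE_RATE = 22050

cap = cv2.VideoCapture(0)  # 0 = default camera; a head-mounted one in practice
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(gray, (64, 64)) / 255.0      # coarse "retina"
        wave = image_to_soundscape(small, duration=1.0)  # sketch from earlier
        sd.play(wave, SAMPLE_RATE, blocking=True)        # one sweep per frame
finally:
    cap.release()
```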

And what of those who’ve lost their vision compared with those born without sight? Common sense suggests someone who’s seen a chair with their own eyes, then lost their vision, stands a better chance of matching a soundscape of that chair to a representative mental image than, say, someone who’s never been able to see at all (we haven’t yet devised a way to accurately map three-dimensional images into the mental circuitry of those with no visual frame of reference).

Perhaps if this training involved a third sense, say touch, to help build out the audio-visual mental vocabulary, you could increase the resolution of the sensory substitution. But it may not matter: the bumps that identify words to braille readers look nothing like the characters or words they represent, yet people who assimilate information through braille are still reading. And in any case, if you’ve never seen the world, gaining a general, working, real-time understanding of what’s around you by in essence “transliterating” your environment on the fly, gleaning “images” of objects from sounds, would be an incredible feat by any measure.

(The full study, titled “How well do you see what you hear? The acuity of visual-to-auditory sensory substitution,” is published in the journal Frontiers in Psychology.)