Mind, prepare to be blown (and scanned) by a device that can almost read thoughts, or at least digitize them into almost-identifiable images. No, it’s not some artsy video project; it’s for real: a system developed by University of California, Berkeley scientists to capture visual activity in our brains and reassemble it as recognizable video data.
It sounds like science fiction, but it’s not, and according to the scientists, it represents a “critical” step toward reconstructing everything from what we’re thinking while awake to what’s happening as we dream.
(MORE: Want to Scan Your Brain? There’s an App for That!)
That’s a video of the process up top. A test subject undergoes an fMRI scan while watching random video clips from Hollywood movies. On the left, you see the clips the subject viewed. On the right, you see reconstructions generated from the fMRI data via “quantitative modeling” using “a new motion-energy encoding model,” essentially a matchup of brain activity with the viewed images. As you can see, at worst the system reconstructs what was viewed only in terms of the image’s elemental geometry: broad shapes, lights and darks, and so on. At best, you can make out identifiable human forms and even vague facial features. (Interestingly, the human-related images seem the least abstract, which perhaps, and this is wild speculation on my part, says something about species-related bias in our recognition patterns.)
Here’s the research team’s “intuitive” description of the process:
As you move through the world or you watch a movie, a dynamic, ever-changing pattern of activity is evoked in the brain. The goal of movie reconstruction is to use the evoked activity to recreate the movie you observed. To do this, we create encoding models that describe how movies are transformed into brain activity, and then we use those models to decode brain activity and reconstruct the stimulus.
“This is a major leap toward reconstructing internal imagery,” said U.C. Berkeley neuroscientist Jack Gallant, a coauthor of the study published yesterday in the journal Current Biology (via U.C. Berkeley News Center). “We are opening a window into the movies in our minds.”
How’d they do it? Volunteers remained still inside the MRI scanner for several hours while they watched two sets of Hollywood movie trailers. Blood flow through the visual cortex was measured, and their brains were divided (no, not literally, on the computer!) into tiny three-dimensional cubes called volumetric pixels…or as I first heard them described in gaming parlance decades ago, “voxels” (remember the Comanche attack helicopter series?).
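For the programming-minded, here’s a rough sense of what a “brain voxel” looks like as data. This is purely an illustrative Python sketch with made-up dimensions, not anything from the study: an fMRI recording is a 3-D grid of brain locations, each with its own activity time series.

```python
import numpy as np

# Hypothetical scan dimensions: a 64 x 64 x 30 grid of brain locations,
# each sampled at 120 points in time. None of these numbers come from the study.
x, y, z, n_timepoints = 64, 64, 30, 120
bold_signal = np.random.rand(x, y, z, n_timepoints)  # stand-in for the measured blood-flow signal

# Flatten the spatial grid so each row is one voxel's activity over time.
voxel_responses = bold_signal.reshape(-1, n_timepoints)
print(voxel_responses.shape)  # (122880, 120): one time series per "brain voxel"
```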
Researchers created a computer model that describes how visual information translates into brain activity, then applied it to each “brain voxel.” As volunteers watched the first set of movie clips, the computer processed their brain activity, “learning” to associate visual patterns in the clips with the corresponding brain activity. Brain activity recorded while viewing the second set of clips was then used to test the reconstruction algorithm. For that step, the computer drew on 18 million seconds (5,000 hours) of random YouTube videos, none of which included the movies viewed by the test subjects, to build a kind of “reference library”: for each library clip, the model predicted the brain activity it would evoke, so those predictions could be matched against what the scanner actually measured.
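Conceptually, the “learning” step is a regression from clip features to voxel responses. The sketch below is purely illustrative: it swaps the paper’s motion-energy encoding model for a generic ridge regression on placeholder data, so the feature sizes, array shapes and library choice are my assumptions, not the team’s code.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Made-up sizes: a few hundred training time points, one feature vector per
# time point, and a couple thousand visual-cortex voxels.
n_timepoints, n_features, n_voxels = 540, 500, 2000

train_features = np.random.rand(n_timepoints, n_features)  # stand-in for features extracted from the clips
train_activity = np.random.rand(n_timepoints, n_voxels)    # stand-in for the measured voxel responses

# "Learning" step: fit a regularized linear map from clip features to brain activity.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(train_features, train_activity)

# Once fit, the model runs "forward": given a brand-new clip, it predicts the
# activity pattern that clip should evoke.
new_clip_features = np.random.rand(1, n_features)
predicted_activity = encoding_model.predict(new_clip_features)  # shape: (1, n_voxels)
```

That forward prediction is the key property: the model never has to run the brain in reverse, it only has to guess what each candidate clip would do to the brain and compare.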
In the final step, the computer selected the 100 library clips whose predicted brain activity most closely matched the activity actually measured, then blended them to form the movie you see above (the right-side feed). It’s an indirect way of going about it, to be sure, but astonishing for what it accomplishes.
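Again purely as illustration, the matching-and-blending step might look something like this; the correlation-based ranking stands in for the study’s actual decoder, and every array here is random placeholder data.

```python
import numpy as np

# Made-up sizes and placeholder data throughout.
n_library_clips, n_voxels = 5000, 2000
frame_shape = (64, 64)  # one representative grayscale frame per library clip

predicted = np.random.rand(n_library_clips, n_voxels)   # predicted activity for every library clip
measured = np.random.rand(n_voxels)                      # activity actually recorded during viewing
library_frames = np.random.rand(n_library_clips, *frame_shape)

# Score each library clip by how well its predicted activity correlates with the measurement.
pred_z = (predicted - predicted.mean(axis=1, keepdims=True)) / predicted.std(axis=1, keepdims=True)
meas_z = (measured - measured.mean()) / measured.std()
scores = pred_z @ meas_z / n_voxels  # one correlation score per clip

# Blend the 100 best-matching clips into a single reconstructed frame.
top_100 = np.argsort(scores)[-100:]
reconstruction = library_frames[top_100].mean(axis=0)  # shape: (64, 64)
```

Blending 100 different clips inevitably washes out detail, which helps explain why the reconstructions capture broad shapes and light-dark structure more reliably than fine features.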
We’re probably a ways from science fiction movies like Strange Days, where “Superconducting Quantum Interference Devices” record events directly from a wearer’s cerebral cortex and play them back for others as if they were experiencing the same events firsthand. But with results like these, where computers can “learn” to accurately relate what was seen to how the brain recorded it, we’re definitely getting closer.
MORE: Scientific Study Suggests Internet Addiction May Cause Brain Damage
Matt Peckham is a reporter at TIME. Find him on Twitter at @mattpeckham or on Facebook. You can also continue the discussion on TIME’s Facebook page and on Twitter at @TIME.