My “aha” moment arrived two decades ago while I was an undergrad student (and dilettante “futurist,” though I didn’t know the word then), sitting in the Great Hall at St. John’s University in Collegeville, Minnesota. I was browsing an issue of Discover with Vitruvian Man on the cover, bathed in golden fire, beside the feature tease: “Playing God: The Making of Artificial Life.”
One of the stories was about a Japanese scientist, Masuo Aizawa, who was arranging lab-grown neurons on silicon, crafting primitive biological circuitry in hopes of approximating the brain’s massively parallel computing abilities. It was the first step, as Aizawa put it back then, toward building an artificial brain.
This was heady stuff for my mortality-obsessed 20-year-old self, most of all because it wasn’t science fiction: Someone on the other side of the planet was growing real brain cells on real computer chips. Tangible breakthroughs (on the road, say, to pie-in-the-sky human brain-machine consciousness transfer or preservation) seemed close at hand.
Take this paragraph from the article:
Lots of scientists have devoted their careers to probing the secrets of the brain. And many researchers have designed computer programs and even chips that attempt to mimic a neuron’s properties. Where Aizawa stands apart is in trying to blend the two efforts–to get one of nature’s most sophisticated cells to serve as a living electronic component in a man-made device that could make transistor technology seem like Stone Age stuff. “A neuron looks bigger than a transistor,” he says, “but it processes so many signals that it’s really more like an entire computer chip in itself. I think we can use it to make biocomputers.”
Not much came of Aizawa’s hopes of replicating a brain as such, though he went on to do other important and pioneering work in bio-interface research. Still, I think of the early 1990s as a kind of threshold — the point at which these stories started to pick up speed. The notion that we might be able to synthesize a human brain shifted from sci-fi abstraction toward something approached with scientific rigor. It was science built on a bunch of preliminary stabs in the dark, sure, but grounded in the assumptions that the brain (and its most sophisticated operation, namely consciousness) was reducible to its constituent elements, and that at some point we’d be capable of fitting the puzzle pieces together instead of fumbling around in quantum soup.
You don’t hear much about bio-interface brain tech these days, or at least not in the direction Aizawa was headed. A more recent example involved pairing neurons and computer chips to potentially “reboot” inactive areas of the brain after a stroke or other form of brain damage, but the days of trying to build a plausible bio-computer brain out of neurons glommed onto chips seem to have passed. The most ambitious brain-related endeavors today, like EPFL’s Blue Brain Project — aimed at reverse-engineering the human brain down to the last molecule — focus on innovating through software while tapping the exponential crunch-power of modern supercomputers.
But the notion that computers can get us where we need to go assumes we’re on the verge of understanding how the brain works in a reductive, replicable sense. I worry that what we’re seeing today is too much hype about artificial brain research, reminiscent of the virtual reality craze that swept the late 1980s and early 1990s (spawning crude, long-forgotten VR interfaces). Or, to put that less politely, a bunch of public relations stunts built around nebulous phrases like “brain-like,” designed more to lure investment dollars than to describe actual progress. “Brain-like” is one of those phrases that runs the gamut, too, from crudely calculative feats like IBM’s purported cat-brain “simulation” (which drew the ire of Blue Brain Project lead Henry Markram) to wishful thinking about what a new simulative approach might yield, the temporally fuzzy adjunct being “someday.”
Take IBM’s latest headline-grabbing project, a rejiggered approach to cognitive computing it’s calling TrueNorth. IBM plans to use the International Joint Conference on Neural Networks in Dallas today to tout its “neurosynaptic core” technology, specifically a new “software ecosystem” designed to harness that technology’s potential, one that might someday — there’s that pesky word again — lead to computers that function more like biology-driven brains.
According to MIT Technology Review, which spoke to the project’s lead researcher, Dharmendra Modha:
Each core of the simulated neurosynaptic computer contains its own network of 256 “neurons,” which operate using a new mathematical model. In this model, the digital neurons mimic the independent nature of biological neurons, developing different response times and firing patterns in response to input from neighboring neurons.
“Programs” are written using special blueprints called corelets. Each corelet specifies the basic functioning of a network of neurosynaptic cores. Individual corelets can be linked into more and more complex structures—nested, Modha says, “like Russian dolls.”
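Since the article doesn’t spell out how corelets actually work under the hood, here’s a minimal, purely illustrative Python sketch of the ideas in that excerpt: digital “neurons” grouped into cores, and “corelet” blueprints that wrap those cores and nest inside one another. Every class, function and parameter name below is hypothetical; none of this is IBM’s actual corelet language or API.

```python
# Toy sketch only -- NOT IBM's corelet language. It illustrates the quoted
# ideas: a "core" of 256 simple digital neurons that fire in response to
# their inputs, and "corelets" that wrap cores and nest like Russian dolls.
import random

class Core:
    """A single hypothetical neurosynaptic core: 256 digital neurons."""
    def __init__(self, n_neurons=256, seed=0):
        rng = random.Random(seed)
        # Each neuron gets its own threshold, so firing patterns differ.
        self.thresholds = [rng.uniform(0.5, 2.0) for _ in range(n_neurons)]
        self.potentials = [0.0] * n_neurons

    def step(self, inputs):
        """Integrate one input value per neuron; return 0/1 spike bits."""
        spikes = []
        for i, x in enumerate(inputs):
            self.potentials[i] += x
            if self.potentials[i] >= self.thresholds[i]:
                spikes.append(1)
                self.potentials[i] = 0.0   # reset after firing
            else:
                spikes.append(0)
        return spikes

class Corelet:
    """A blueprint wrapping either one core or a chain of sub-corelets."""
    def __init__(self, core=None, children=None):
        self.core = core
        self.children = children or []

    def run(self, inputs):
        if self.core is not None:          # leaf: drive the core directly
            return self.core.step(inputs)
        for child in self.children:        # composite: feed each child's
            inputs = child.run(inputs)     # spikes into the next child
        return inputs

# Nesting "like Russian dolls": two leaf corelets composed into a bigger one.
corelet_a = Corelet(core=Core(seed=1))
corelet_b = Corelet(core=Core(seed=2))
pipeline  = Corelet(children=[corelet_a, corelet_b])

spikes = pipeline.run([random.random() for _ in range(256)])
print(sum(spikes), "of 256 neurons fired on this tick")
```

The point of the toy isn’t fidelity to TrueNorth; it’s that once a core’s innards sit behind a corelet’s inputs and outputs, corelets can be chained and nested without anyone touching individual neurons, which is presumably the kind of composition a “software ecosystem” for this hardware has to make easy.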
TrueNorth isn’t itself new: IBM was already talking about the tech last November, when — promoting its Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory — the company announced it had run a simulation of 530 billion neurons based on a network modeled after a monkey’s brain. To IBM’s credit, its research team backed away from the notion that the simulation had much to do with modeling an actual human brain, noting in the paper: “We have not built a biologically realistic simulation of the complete human brain. Rather, we have simulated a novel modular, scalable, non-Von Neumann, ultra-low power, cognitive computing architecture.”
It would be a mistake to wave off futurists who promote this stuff as charlatans, say a lightning rod in the field like Ray Kurzweil. Whatever else you think of a guy like Kurzweil, he takes considerable care in researching and reasoning through his predictions, even if he’s sometimes taken to task by respected science writers or, in my view, resorts to semantic slipperiness to justify his claims. Likewise, IBM’s TrueNorth project is tangible and important: The notion that we’d have to restructure — and continue restructuring — programming languages as we approach synthetic brain functionality makes intuitive sense.
But in attempting to provoke a hype-numb public, even scientists can over-promise, leading to stories (or versions of stories) that make it seem like we’re closing in on major brain-related breakthroughs when we’re not. We’re too easily impressed, in other words, which culminates in articles that employ meaningless phrases like “brain-like” — tantamount to describing a pillow with a pulsing heartbeat as “human-like,” a label that depends entirely on anthropomorphic fantasy.
Don’t mistake this for dismissal. I’m as enthusiastic as I’ve ever been about trying to map the human mind, whether we’re talking about its replication (and full architectural understanding — we’ve mapped the human genome, for instance, but still don’t fully understand it) or something more like Kurzweil’s “pattern recognition theory” in How to Create a Mind, where he argues creating a mind needn’t require human brain replication.
But for now, we could stand a moratorium on phrases like “brain-like,” which give people the impression that computer science (or any kind of science) is on the cusp of doing things in brain research it’s simply not — or not yet, anyway.