CAVE2: Not a Star Trek Holodeck Yet, but Getting Closer

How much closer are we, really, to Star Trek holodeck tech with the University of Illinois at Chicago's new CAVE2 project?

[Photo: Lance Long / EVL]

If you’ve ever wondered how the humans of Star Trek managed to turn grid-lined rooms into striking replicas of Irish villages and Wild West facades, replete with swaying trees and creaky leather saddles, well, don’t think too hard about it. These far-fetched (if diverting) starship chambers used fanciful teleporter tech to beam custom-tailored matter in on the fly and employed force fields to give photon-sculpted objects illusory substance… pretty much impossible when you hold that formula up against actual physics.

Still, the idea behind holography-based virtual reality remains enticing — even without the teleportation and instantaneous matter replication fantasy, a giant wraparound room stacked with 3D panels would be seriously awesome in its own right, no?

Sort of like CAVE2, then — the “AVE” stands for “automatic virtual environment” — a “hybrid reality” system designed by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago that spreads a bunch of stereoscopic 3D liquid crystal displays around a cylindrical room to all but envelop an observer, letting them soar over a distant planet’s surface or scoot Fantastic Voyage-style through an incredibly intricate rendering of the blood vessel network coursing through a human brain.

It’s quite the enterprising (pun intended) little setup: a 24-foot-wide, eight-foot-tall, 320-degree panoramic room with 72 custom 3D micropolarized LCD panels arranged in 18 columns of four displays each. Those panels output 37 megapixels in 3D or 74 megapixels in 2D — essentially the limit of human 20/20 visual acuity.

[Update: EVL notes below that “Each thin border panel is 1366 x 768 in monoscopic viewing and 1366 x 384 for each eye in stereoscopic (interleaved) viewing, so the total resolution of the cave2 system with its 18 x 4 array of panels is 24588 x 3072 in monoscopic mode and 24588 x 1536 for each eye in stereoscopic mode; we can also split the space up and share 2D and 3D content in a hybrid mode.”]
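
If you want to see how those figures fall out of the panel counts EVL quotes above, here’s a quick back-of-the-envelope check in plain Python; the small gap between this result and the rounded ~37/74-megapixel figures cited earlier is just rounding.

```python
# Back-of-the-envelope check of CAVE2's quoted pixel counts, using the
# per-panel figures from EVL above: 1366 x 768 panels in an 18 x 4 array.

COLUMNS, ROWS = 18, 4          # 72 panels total
PANEL_W, PANEL_H = 1366, 768   # pixels per panel (monoscopic)

total_w = COLUMNS * PANEL_W    # 24,588 pixels across
total_h = ROWS * PANEL_H       # 3,072 pixels tall

mono_pixels = total_w * total_h           # full-resolution 2D mode
stereo_pixels = total_w * (total_h // 2)  # per eye, rows interleaved for 3D

print(f"2D:  {total_w} x {total_h}  ~ {mono_pixels / 1e6:.1f} megapixels")
print(f"3D:  {total_w} x {total_h // 2} per eye  ~ {stereo_pixels / 1e6:.1f} megapixels")
```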

If you’re running in 3D mode, the room needs to project its stereoscopic imagery to your glasses properly, which is where a 10-camera optical tracking system comes into play, allowing the system to render images on the screens from the point of view of a single tracked observer. Sound-wise, you’re listening to a 20-speaker ambisonic system — “ambisonic” referring to a full-sphere surround sound technique, i.e. extremely high-end surround sound — to project sounds in 3D space. And all of that’s crunched by a 36-node “high performance” computer cluster — one computer for every two panels — with a 100-gigabit-per-second network pipe to the outside world.
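
To get a feel for why that head tracking matters, here’s a toy sketch (not EVL’s actual software — the function name and the 65mm eye-separation value are illustrative assumptions) of the basic idea: the tracked head position gets split into left- and right-eye positions, and each screen is re-rendered from those viewpoints so the parallax stays correct as you walk around.

```python
# Toy illustration of viewpoint-tracked stereo: split one tracked head
# position into two eye positions. Real systems then build a separate
# off-axis projection for each eye and each screen from these points.

import numpy as np

EYE_SEPARATION = 0.065  # meters; a typical interpupillary distance (assumption)

def eye_positions(head_pos, head_right, ipd=EYE_SEPARATION):
    """Return (left_eye, right_eye) positions from a tracked head pose.

    head_pos   -- 3D head position reported by the optical tracker
    head_right -- unit vector pointing toward the viewer's right
    """
    head_pos = np.asarray(head_pos, dtype=float)
    head_right = np.asarray(head_right, dtype=float)
    offset = (ipd / 2.0) * head_right
    return head_pos - offset, head_pos + offset

# Example: a viewer standing 1 m off-center at eye height, facing the screens.
left, right = eye_positions(head_pos=[1.0, 1.6, 0.0], head_right=[1.0, 0.0, 0.0])
print("left eye:", left, "right eye:", right)
```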

Here’s the sizzle reel:

[youtube=http://www.youtube.com/watch?v=d5XDbzy7vuE]

The “2” in CAVE2 implies it had an antecedent, and sure enough, back in 1992 (Star Trek: The Next Generation had launched only five years earlier) EVL built its first CAVE: a smallish cube-shaped room in which projectors beamed images onto the walls, ceiling and floor while electromagnetic sensors tracked the viewer, delivering a primitive sort of virtual reality through special glasses. These were heady days for VR — recall that 1992 was also the year The Lawnmower Man arrived on film, and Nintendo’s doomed Virtual Boy was still three years off. EVL’s CAVE technology spawned a slew of software libraries (including an Unreal Tournament mod), and over the last two decades it’s been picked up by several universities. Brown University, for instance, recently spent $2 million renovating its implementation of a CAVE, which it uses for “volume visualization, molecular visualization, and simple 3d model manipulation.”

In 2004, EVL followed with something it dubbed OptIPortal, a scalable wall of tiled displays with very high pixel density, the idea being to create a system capable of dishing up interactive, ultra-high-res imagery for use in everything from collaborative scientific research to disaster response training.

CAVE2, which took two years to design, is essentially a synthesis of those two technologies and a radical improvement over the original CAVE: roughly three times as much space to move around in, 13 times the resolution of CAVE’s projectors, and much lower display costs ($14,000 per stereo megapixel versus $35,000) that yield 10 times the display resolution. It also boasts an almost unfathomable increase in computing speed (CAVE ran on four 100MHz MIPS processors where CAVE2 employs 36 2.9GHz 16-core Xeon CPUs), 2.3TB of memory (contrast with CAVE’s 256MB), 72TB of storage (CAVE had just 3.2GB) and wireless 3D tracking (versus wired), all for roughly half the total cost.
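
For a rough sense of just how big that compute jump is, here’s a naive back-of-the-envelope comparison using only the clock rates and core counts quoted above; it ignores two decades of per-core architectural gains, so it understates the real difference.

```python
# Naive aggregate clock-cycle comparison of the two clusters, using only
# the figures cited in the article (no per-core architecture factored in).

cave_cycles  = 4 * 100e6         # CAVE:  four MIPS processors at 100 MHz
cave2_cycles = 36 * 16 * 2.9e9   # CAVE2: 36 nodes x 16-core Xeons at 2.9 GHz

print(f"CAVE:  {cave_cycles / 1e9:,.1f} billion cycles/sec aggregate")
print(f"CAVE2: {cave2_cycles / 1e9:,.1f} billion cycles/sec aggregate")
print(f"Raw ratio: ~{cave2_cycles / cave_cycles:,.0f}x")
```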

What do you do with something like CAVE2 (you know, aside from staging the world’s coolest LAN party)? According to EVL, “CAVE2 serves as the lens of a virtual ‘telescope’ or ‘microscope’, enabling users to simultaneously see and analyze one or more e-science datasets that reside in cyberspace.” To that end, EVL says the room’s list of potential uses includes space exploration (exploring a visually accurate render of a planet’s surface, say), engineering and nanoscale material design (think designing and interacting with objects in 3D space), medical training (among other uses, Fantastic Voyage time!), architectural design and archaeology (yes really — archaeology).

So no, nothing like a Star Trek-caliber holodeck — for the closest our best and brightest have come to that so far, see physicist Michio Kaku’s comedic “Physics of the Impossible” stab back in 2010 — but still a very cool step forward on the forking road toward fully realized holography with nanotechnology, or direct sensory manipulation through the brainpan, Matrix-style.

[youtube=http://www.youtube.com/watch?v=yf0sllpZx3w]