Apple and Google have each announced visually dazzling 3D mapping software that lets you pan around major metropolitan areas rendered in beautiful, photographic detail from a bird’s eye view.
In Google’s case, it’s an update to its existing all-but-ubiquitous 2D mapping software. In Apple’s, it’s part of a strategic maneuver that includes disentangling itself from Google with the rollout of iOS 6 while attempting to one-up its former mapping partner by adding features like vector-based map manipulation and Siri voice controls.
But with all this ballyhoo about 3D, there’s another question worth asking: What exactly do we gain in functional terms from either company’s implementation of 3D maps?
(MORE: Why Google’s (and Apple’s Alleged) 3D Maps Don’t Seem That Exciting)
As long expected, Apple showed its hand at WWDC 2012 this week, revealing “an entire new mapping solution [built] from the ground up” and explaining that it was doing all of the cartography itself, on a global level. It showed how you could easily search for business locations (more than 100 million indexed so far) and interact with them on “beautiful” info cards featuring Yelp-based reviews, ratings and photos.
It also demonstrated an integrated traffic view with route incidents overlaid, and all of that being updated using “anonymous, real-time, crowd-sourced” data from iOS users. It even showed how its new Maps feature might supplant third-party GPS apps with turn-by-turn navigation and Siri voice controls. If you want to go somewhere in Apple Maps, you can simply ask Siri without having to thumb in details — you can even ask Siri questions while en route, like “Where can I get gas?” and the app will find nearby gas stations and offer to route you.
But it was the rumor that Apple had a new 3D maps feature in the offing that grabbed headlines for weeks, based in part on assumptions about the third-party mapping companies Apple had acquired over the past few years. And sure enough, the company unveiled the feature during the closing moments of its WWDC keynote to audience accolades.
Apple calls its 3D mapping feature “Flyover” and says it’s based on actual flyovers of major metropolitan areas around the world using helicopters and planes.
It works pretty much as you’d expect, letting you zoom in on a 2D map from high above down to bird’s-eye level, at which point you transition (seamlessly, during Apple’s WWDC demo) into 3D mode, the area’s structures popping up from the ground as lightly shaded, polygonal silhouettes. Enable “Flyover” mode and those block-like 3D structures suddenly acquire beautiful photographic textures, taking on the appearance of an actual camera shot of the area, which in a sense it is, only one you can pan across and rotate in real time.
(PHOTOS: Google Earth Adds Historical Photos)
As many “oohs” and “aahs” as this generated from WWDC attendees, one thing Apple didn’t demonstrate was the ability to view these cities at a practical, all-the-way-zoomed-in tactical level. We’ve been flying over photorealistic 3D cities for years now in simulations like Microsoft Flight and X-Plane, and while having that available on your phone or tablet offers novelty value, what neither Google nor Apple has yet demonstrated is the ability to replicate these cities at ground level. Such a feature would allow you to scope out an area beforehand and know exactly what it’s going to look like, from the position of a public mailbox or the entry point for a downtown parking garage, to the shape and lettering of a company marquee or whether there’s a long line to get into a restaurant.
Enter Dr. Ed Lu, a former NASA astronaut and the former Program Manager of Advanced Projects on Google Maps and Google Earth. He’s currently the CTO of Hover, a 13-employee company based in Los Altos and founded by a Navy SEAL, which has been providing advanced 3D mapping technology to special military forces for several years.
Hover has been operating in stealth mode so far, but it plans to launch “in the consumer-focused map space” later this year. As its name implies, Hover hopes to leverage crowd-sourced imagery at ground level, submitted through devices like mobile phones, to update 3D maps in real time.
“Our company is currently making 3D visualization software and the datasets that go behind it for military use,” said Lu when I caught up with him by phone. “The guys who started our company … realized that the need for replacing these flat overhead picture maps was something that could show what it really looks like from where you’re standing.”
Lu means a mapping tool that can measure the distances, sizes and heights of the environment at ground level itself.
“It’s really easy to get yourself lost in a situation because the view that you use to orient yourself, that overhead map taken from a satellite, doesn’t look like the street corner you’re standing on with a bunch of buildings around,” he said. He offered a few examples of the technology’s practical military applications to date, which include everything from troops running operations to convoy drivers figuring out where they’re supposed to make a left-hand turn.
(VIDEO: The Most Insanely Important, Mind-Blowing Tech News of the Week)
“What our company has been doing successfully is taking overhead imagery from whatever sources and turning that into three-dimensional models, kind of like what you see on the built-up areas in Google Earth and the new 3D mapping tools Google introduced the other day,” he said. “But it has some additional capabilities.”
Those abilities include taking high-resolution images and folding them into datasets as they’re acquired. “So for instance, if you can get imagery from street level or through other means, those things can be tacked onto the dataset so that it can continually be refined,” Lu explained. “Soldiers with cameras on helmets walking down the street — that imagery can be used to update and refine the models.”
And that’s where A.J. Altman, a former Marine ground intelligence officer who served in Iraq and is now Hover’s CEO, comes in. Altman said the company started out generating “immersive” 3D spaces that were based on aerial or satellite imagery and capable of being fused with incoming street-level imagery.
“That started to create a true virtual space mirroring the actual ground space,” he said, explaining the technology’s initial military-angled impetus. But the company quickly realized the technology could have much broader uses. “We see a lot of similarity in the way that people would discover a neighborhood or discover how to make a three-block walk from the restaurant to the bar with their spouse in a neighborhood that they don’t know very well,” said Altman.
As with any application, one of the biggest obstacles is usability, something Altman calls “the bajillion-dollar caveat.”
“This has to be so usable and intuitive that I’m having an easier experience with my 3D map than with my 2D map,” he said. “We’ve all been using 2D maps since we were children and we’ve hence learned the tricks of using those 2D maps and converting it in our minds to something spatial. In order for us to go to this kind of 3D spatial awareness with maps on the ground — this sort of unified view of a 3D map coupled with street level imagery and being able to fly around it, but fly very low — has to be so intuitive and the user experience so simple that people can’t really afford not to use it, because it just works.”
MORE: Google Street View to Map Australia’s Great Barrier Reef