Apple and Google have each announced visually dazzling 3D mapping software that lets you pan around major metropolitan areas rendered in beautiful, photographic detail from a bird’s eye view.
In Google’s case, it’s an update to its existing all-but-ubiquitous 2D mapping software. In Apple’s, it’s part of a strategic maneuver that includes disentangling itself from Google with the rollout of iOS 6 while attempting to one-up its former mapping partner by adding features like vector-based map manipulation and Siri voice controls.
But with all this ballyhoo about 3D, there’s another question worth asking: What exactly do we gain in functional terms from either company’s implementation of 3D maps?
As long expected, Apple showed its hand at WWDC 2012 this week, revealing “an entire new mapping solution [built] from the ground up” and explaining that it was doing all of the cartography itself, on a global level. It showed how you could easily search for business locations (more than 100 million indexed so far) and interact with them on “beautiful” info cards featuring Yelp-based reviews, ratings and photos.
It also demonstrated an integrated traffic view with route incidents overlaid, and all of that being updated using “anonymous, real-time, crowd-sourced” data from iOS users. It even showed how its new Maps feature might supplant third-party GPS apps with turn-by-turn navigation and Siri voice controls. If you want to go somewhere in Apple Maps, you can simply ask Siri without having to thumb in details — you can even ask Siri questions while en route, like “Where can I get gas?” and the app will find nearby gas stations and offer to route you.
But it was the rumor that Apple had a new 3D maps feature in the offing that carried headlines for weeks, based in part on assumptions about third-party mapping companies Apple had acquired over the past few years. And sure enough, the company unveiled the feature during the closing moments of WWDC to audience accolades.
Apple calls its 3D mapping feature “Flyover” and says it’s based on actual flyovers of major metropolitan areas around the world using helicopters and planes.
It works pretty much as you’d expect it to, allowing you to zoom in on a 2D map from way up, down to bird’s eye level, at which point you transition (seamlessly, during Apple’s WWDC demo) into 3D mode, the area’s structures popping up from the ground so you can see their lightly shaded, polygonal silhouettes. Enable “Flyover” mode and those block-like 3D structures suddenly acquire beautiful photographic textures, taking on the appearance of an actual camera shot of the area — which in a sense it is, only one you can pan across and rotate in real time.
As many “oohs” and “aahs” as this generated from WWDC attendees, one thing Apple didn’t demonstrate was an ability to view these cities at a practical, all-the-way-zoomed-in tactical level. We’ve been flying over photorealistic 3D cities for years now in simulations like Microsoft Flight and X-Plane, and while having that available on your phone or tablet offers novelty value, what neither Google nor Apple has yet demonstrated is the ability to replicate these cities at the ground level. Such a feature would allow you to scope out an area beforehand and know exactly what it’s going to look like, from the position of a public mailbox or the entry point for a downtown parking garage, to the shape and lettering of a company marquee or whether there’s a long line to get into a restaurant.
Enter Dr. Ed Lu, a former NASA astronaut and the former Program Manager of Advanced Projects on Google Maps and Google Earth. He’s currently the CTO of Hover, a 13-employee, Los Altos-based company, founded by a Navy SEAL, that has been providing advanced 3D mapping technology to military special forces for several years.
Hover’s been running in stealth mode as a company so far, but plans to launch “in the consumer-focused map space” later this year. As its name implies, Hover hopes that it can leverage crowd-sourced imagery at the ground level — submitted through devices like mobile phones — to update 3D maps in real time.
“Our company is currently making 3D visualization software and the datasets that go behind it for military use,” said Lu when I caught up with him by phone. “The guys who started our company … realized that the need for replacing these flat overhead picture maps was something that could show what it really looks like from where you’re standing.”
By that, Lu means a mapping tool able to measure the distances, sizes and heights of the environment from the ground itself.
“It’s really easy to get yourself lost in a situation because the view that you use to orient yourself, that overhead map taken from a satellite, doesn’t look like the street corner you’re standing on with a bunch of buildings around,” he said. He offered a few examples of the technology’s practical military applications to date, which include everything from troops conducting operations to convoy drivers figuring out where they’re supposed to make a left-hand turn.
“What our company has been doing successfully is taking overhead imagery from whatever sources and turning that into three-dimensional models, kind of like what you see on the built-up areas in Google Earth and the new 3D mapping tools Google introduced the other day,” he said. “But it has some additional capabilities.”
Those abilities include taking high-resolution images and folding them into datasets as they’re acquired. “So for instance, if you can get imagery from street level or through other means, those things can be tacked onto the dataset so that it can continually be refined,” Lu explained. “Soldiers with cameras on helmets walking down the street — that imagery can be used to update and refine the models.”
And that’s where A.J. Altman, a former Marine ground intelligence officer in Iraq and now Hover’s CEO, comes in. Altman said the company started out generating “immersive” 3D spaces that were based on aerial or satellite imagery and capable of being fused with incoming street-level imagery.
“That started to create a true virtual space mirroring the actual ground space,” he said, explaining the technology’s initial military-angled impetus. But the company quickly realized the technology could have much broader uses. “We see a lot of similarity in the way that people would discover a neighborhood or discover how to make a three-block walk from the restaurant to the bar with their spouse in a neighborhood that they don’t know very well,” said Altman.
As with any application, one of the biggest obstacles is usability, something Altman calls “the bajillion-dollar caveat.”
“This has to be so usable and intuitive that I’m having an easier experience with my 3D map than with my 2D map,” he said. “We’ve all been using 2D maps since we were children and we’ve hence learned the tricks of using those 2D maps and converting it in our minds to something spatial. In order for us to go to this kind of 3D spatial awareness with maps on the ground — this sort of unified view of a 3D map coupled with street level imagery and being able to fly around it, but fly very low — has to be so intuitive and the user experience so simple that people can’t really afford not to use it, because it just works.”
But there’s another equally pressing issue with mapping technology when you start deploying photorealistic representations of areas and inviting people to depend on what they’re seeing: datedness.
Existing streets rarely change, but the structures surrounding those streets can change completely over the course of a year, particularly if the mapped area was previously undeveloped. Google Maps in satellite mode, for instance, sometimes runs as much as a year behind in its imaging data. I asked Lu whether that wasn’t the biggest challenge for any mapping company.
“There’s two things involved here and I think you’ve hit on one of them — basically the ‘it’s not up to date thing’,” said Lu. “The other thing is the resolution that you can get by the process. I’m convinced that what folks really need is not the flight simulator mode, which sure, it’s kind of cool to look at it, but what’s actually useful is when there’s enough information at a small enough level that it actually affects what you do.”
Doesn’t Google Maps Street View already more or less do that? Street View is the feature in Google Maps where, if you zoom all the way in — assuming there’s data for the area in question — it transitions to an eye-level view of the area based on stitched-together photos that you can pan around using picture-warping technology that offers the crude illusion of three-dimensionality.
Altman’s response: Street View is too complicated and unfriendly to use at this point.
“You talk to any person about Street View and it’s like, the last time they used it was three weeks or three months ago, and they used it for a moment because they needed to know something about a facade, and that was the only way they could figure it out,” he said. “And of course the moment they’ve got what they wanted, they get the heck out of there. They don’t spend any more time in that environment, because that environment is very clunky and difficult.”
Lu said that Hover wants to see where its tactical-level technology — already successfully deployed in the military — “can be used by everyday people, or by folks who want to reach everyday people, especially in urban areas.” And he has a metric for determining usability.
While he was working for Google, Lu said the company was trying to determine at what level of resolution satellite imagery made a difference in user engagement. The answer: about one meter.
“We found that at that resolution, people became much more engaged with the maps,” said Lu. “They spent more time with them, they went back and used them more.” More importantly, said Lu, at one meter resolution, people began to add their own information to the maps.
“At better than one meter resolution, people started to look for stuff, because at that resolution you’re beginning to see cars and the details on houses. That’s when people started to make corrections and began to mark places on those maps.”
“The aerial flyby of the city, that isn’t what’s useful,” added Lu.
It’s a point that resonated with me during the interview — as cool and clever as Google’s and Apple’s respective 3D mapping tools look, I can’t imagine doing more than fooling around in 3D mode for novelty’s sake, or maybe for educational purposes if I want to show my children the Eiffel Tower or Taj Mahal in a metropolitan context. What’s missing from both Apple’s and Google’s 3D maps is a reason to use them for practical, everyday stuff, like seeing what’s going on in an area at the tactical level right now.
Let’s assume for a moment that Hover is actually able to collate and serve the data in a way that satisfies its usability goals. How do you capture all that data in real-time and integrate it with existing map data? Google and Apple are talking about using planes and helicopters to survey cities by grabbing information aerially at 45-degree angles, for instance, but you’re not going to want airplanes and helicopters perpetually circling cities at all hours just to keep maps up to date.
“You can build the base maps that way,” said Lu. “But the real stuff that matters to people, once you have a base map, is what’s going on down low at the sidewalk level. And what we didn’t see at [Google's or Apple's] 3D map events is the ability to fuse that type of imagery into the data set. That’s where it’s going to become game-changing.”
“Our system allows updated imagery to be added on top of existing models, so the models can be incrementally improved,” said Lu.
“The sources could be theoretically anything, any sensor on the street level that takes a photo,” said Altman, referring to everything from fixed cameras to smartphones. “Whether you’re a vendor or a consumer or a teenager who’s just doing this for fun or being social, the crowd-sourcing element is there because we can bring the images in and augment textures with that. Or certainly there are folks who collect street level imagery that aren’t Google, commercial entities that collect data with cars that have sensors on top of them, and we can integrate that as well. We’re really agnostic from an imagery-source perspective.”
What about vetting the integrity of the image, or assuring someone using the maps that what they’re looking at is representative of the actual area being modeled?
“That sensor in your pocket, every time it takes a photo, it provides not just a photo, but some sort of rough GPS information and a timestamp,” said Altman. “The data is all there, it’s just a matter of someone creating a platform that manages that data in a way that’s easy for the consumer to quickly see if the information was captured five seconds ago, five days ago, or five years ago.”
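Altman’s point can be made concrete: a smartphone photo’s EXIF header typically records a capture timestamp plus GPS coordinates stored as degree/minute/second values. A minimal sketch of what a platform like the one he describes might do with that metadata — convert the coordinates to decimal degrees and bucket the photo’s age into “seconds, days, or years” — could look like this (the helper names here are illustrative, not Hover’s or any real platform’s API):

```python
from datetime import datetime, timezone

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degree/minute/second GPS values to decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

def freshness(capture_time, now=None):
    """Bucket a photo's age the way Altman describes: seconds, days, or years ago."""
    now = now or datetime.now(timezone.utc)
    age_seconds = (now - capture_time).total_seconds()
    if age_seconds < 60:
        return "seconds"
    if age_seconds < 60 * 60 * 24 * 365:
        return "days"
    return "years"

# Example: a photo tagged at roughly 37.3852 N, 122.1141 W (near Los Altos)
lat = dms_to_decimal(37, 23, 6.7, "N")
lon = dms_to_decimal(122, 6, 50.8, "W")
shot = datetime(2012, 6, 11, 18, 30, tzinfo=timezone.utc)
```

The hard part, as Altman suggests, isn’t reading the metadata — it’s surfacing it so a consumer can tell at a glance how stale the imagery is.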
Lu admitted that issues like these — call this one “photorealism integrity” — haven’t yet been fully resolved. The current solution: “Many, many approaches,” said Altman. “Typically a summation of dozens of approaches, all added together to pick away at the problem.”
With all it hopes to bring to the table from a tactical standpoint — the missing novelty-to-practical-use link — is Hover looking to get snatched up by an Apple or a Google, like so many prior 3D mapping outfits?
“We always take the position that we’re not trying to build a company to sell it, we’re trying to build a successful company,” said Altman. “That means finding a problem, a pain point, that a customer has and solving it in a really intelligent way. We’ve already had success with that, and we want to keep doing that.”