Behind The Scenes Of The Big Google Maps Redesign And Its Technical Challenges

Google unveiled its completely redesigned Google Maps product on the web at I/O 2013. At a panel dedicated to the new Maps experience, Maps User Experience Design Lead Jonah Jones and Engineering Director for Maps on the web Yatin Chawathe took us through what went into creating the new Maps, and the engineering effort behind the considerable change is prodigious.

Specifically, Jones and Chawathe took us much deeper into two of the main driving concepts behind the redesign of Maps: “Building A Map For Every Place” and “Explore The World.” The former has to do with customizing maps every time a user clicks on a new location, in real time and with more contextually relevant information; the latter involves bringing beautiful imagery directly into Maps, including Earth integration and 3D virtual photo tours.

A Map For Every Place

In making a Maps product that is extremely adaptive to both a user’s personal input sources and to specific locales, Google had to rethink its approach to maps, and it looked to the way we casually share directions as a marker of a good system for surfacing relevant information. When you draw a map on a napkin, you are automatically filtering out all but the most important information, and doing it with your specific audience in mind. The result is a simplified map that includes maybe a few major routes, as well as smaller roads, with a prioritization that doesn’t necessarily reflect how important a road is to the general population.

“A map drawn for you is great because it highlights aspects and things personal to you,” Jones explained, adding that there’s also nostalgic value in something like a hand-drawn map. Google wanted to be able to replicate both of these, and so it took an engineering approach to automate a process that’s normally human-powered.

Google didn’t want to exactly replicate the hand-drawn map, however, since it leaves out a lot of information that you still want present in a modern, digital, interactive map. But it did want to subtly highlight and downplay certain map elements, bringing to the fore aspects that are useful and fading back others that aren’t as important. To do that, it took a big data analytics approach.

First, for a specific location, the new Maps algorithm analyzes the entire set of people looking for directions in that area, and then highlights the routes that come up most often. Then, from that subset, it focuses in even further and weighs more vs. less important routes, based again on aggregated user data. Google can see which roads are more popular, and then pop those out vs. the less important ones. Finally, the less important ones are cut away, and you’re left with something resembling the hand-drawn map.
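To make that aggregation step concrete, here is a minimal sketch in Python, assuming a hypothetical log of direction requests broken into road segments. The data shape and function names are illustrative, not Google’s actual pipeline:

```python
from collections import Counter

def rank_routes(direction_requests, keep_fraction=0.5):
    """Count how often each road segment appears in nearby direction
    requests, then keep only the most-travelled fraction for emphasis."""
    segment_counts = Counter()
    for request in direction_requests:
        for segment in request["route_segments"]:
            segment_counts[segment] += 1

    ranked = segment_counts.most_common()
    cutoff = max(1, int(len(ranked) * keep_fraction))
    # Top segments get emphasized; the rest become candidates for de-emphasis.
    return {seg for seg, _ in ranked[:cutoff]}

# Three hypothetical requests through the same neighborhood:
requests = [
    {"route_segments": ["main_st", "oak_ave"]},
    {"route_segments": ["main_st", "elm_rd"]},
    {"route_segments": ["main_st", "oak_ave", "pine_ct"]},
]
print(rank_routes(requests))  # {'main_st', 'oak_ave'}
```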

Once those are flagged, however, you could still be missing info on the ground regarding very small routes important to a specific place. Those are then targeted via a hyper-local re-labeling algorithm that addresses just the immediate surroundings, adding labels to key routes and removing them elsewhere to decrease clutter and subtly shift the focus.
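A rough sketch of what such hyper-local re-labeling could look like, assuming each road carries an importance score and a representative coordinate. Both fields, and the distance cutoff, are assumptions for illustration, not a real Maps schema:

```python
import math

def relabel_locally(roads, focus, radius_km=1.0, max_labels=5):
    """Show labels only for the most important roads within a small radius
    of the clicked place, hiding labels elsewhere to cut clutter."""
    def distance_km(a, b):
        # Equirectangular approximation; fine at neighborhood scale.
        dlat = math.radians(b[0] - a[0])
        dlng = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
        return 6371.0 * math.hypot(dlat, dlng)

    nearby = sorted(
        (r for r in roads if distance_km(r["location"], focus) <= radius_km),
        key=lambda r: r["importance"],
        reverse=True,
    )
    labeled = {r["name"] for r in nearby[:max_labels]}
    return [dict(r, show_label=r["name"] in labeled) for r in roads]
```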

That then informs the UI rendering of the map itself, which still retains the street markers for all surrounding routes. Lines along routes that matter for getting there are made bold, while lines on less important streets are thinned out but not removed, in case some users still need that information. It’s about drawing attention and changing perspective, not eliminating something altogether.
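That emphasize-don’t-eliminate rule is simple to express. The sketch below maps a normalized importance score to a stroke width, with the floor kept above zero so no street ever disappears; the score range and widths are assumptions:

```python
def stroke_width(importance, min_width=0.5, max_width=4.0):
    """Map a normalized importance score in [0, 1] to a line width.
    The floor stays above zero so low-priority streets are thinned,
    never removed, and the information remains on the map."""
    clamped = max(0.0, min(1.0, importance))
    return min_width + (max_width - min_width) * clamped
```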

All of the above takes advantage of the immense processing power in Google’s data centers to do the whole thing in real time, every second, for every one of Maps’ millions of users. Yet the impact on a user’s computing requirements is minimal; Google sends even less data than it did with the previous version of Maps, keeping bandwidth requirements low.

Explore The World

Google’s other big addition to the new Maps experience has to do with bringing beautiful imagery to the web, in the form of both Google Earth 3D flyovers and the new virtual tours that provide an up-close-and-personal view of some prime spots. Those virtual tours also represent a massive engineering effort, one which Chawathe explained in broad strokes on stage.

The virtual tours are a crowdsourced effort, which users may not even realize they’re actively contributing to. The images are drawn from pictures uploaded to Google+, Panoramio and other sources within the Google photo sharing ecosystem.

To get from that group of photos to an actual 3D tour requires a lot more than just aggregating photos, however. Google says it can map not only where every photo in its database was taken, but also tie each individual pixel in every image to a very specific location, making it much easier to stitch sets together. Once that process is complete, it’s left with a point cloud that can flesh out a region, but that’s a brute-force approach, and some art is required to make it look good.
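Chawathe didn’t detail the algorithm, but tying a pixel to a location is the core operation in structure-from-motion pipelines: once a photo’s camera pose is recovered, each pixel with a depth estimate back-projects to a single 3D point, and accumulating those points across many photos yields the kind of point cloud described above. A minimal sketch of that back-projection step, with assumed intrinsics and pose:

```python
import numpy as np

def backproject(pixel, depth, K, R, t):
    """Return the 3D world point seen at pixel (u, v) at the given depth.
    K is the 3x3 camera intrinsics; R, t map world to camera coordinates."""
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # pixel -> camera ray
    point_cam = ray_cam * depth                         # scale ray by depth
    return R.T @ (point_cam - t)                        # camera -> world

# Assumed intrinsics and pose for illustration: a 1280x720 camera at the origin.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
print(backproject((640, 360), 10.0, K, R, t))  # [0. 0. 10.]: 10m straight ahead
```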

That involves filtering the photos: picking ones that show the landmark in context with its surroundings, ones that show the landmark clearly from visually pleasing angles, pics that capture architectural detail, interesting picturesque scenes in various lighting conditions and more. It picks these photos based on visual recognition tech and their popularity and ratings on Google properties; an image that gets a lot of +1s on Google+ will be ranked above one that has none, for example.
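A toy version of that ranking might blend a visual-quality score with social signals. The weights, the log damping, and the “plus_ones” field below are all assumptions for illustration, not Google’s formula:

```python
import math

def photo_score(photo, w_quality=0.6, w_popularity=0.4):
    """Blend a 0-1 visual-quality score with a damped popularity signal."""
    popularity = math.log1p(photo.get("plus_ones", 0))  # damp runaway counts
    return w_quality * photo["quality"] + w_popularity * popularity

candidates = [
    {"quality": 0.9, "plus_ones": 0},    # gorgeous but unnoticed
    {"quality": 0.7, "plus_ones": 120},  # decent and well liked
]
best = max(candidates, key=photo_score)
```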

Once it has a set of top-quality pictures, it determines the order in which they should appear that makes the most sense. Even then it wouldn’t be smooth as a finished product, however, since there are gaps, and the transitions between angles would involve a lot of bizarre warping and image artifacts that would taint the overall experience. So finally, Google’s algorithm goes back to the larger set of images and picks ones that fit nicely in the gaps. These don’t need to be the best quality, since they’re just filling out the animation.
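That gap-filling pass could be sketched as a greedy pick from the larger candidate pool whenever two consecutive tour photos are too far apart. The code below reduces camera pose to a single heading angle for simplicity, which is a large assumption relative to a full 3D pipeline:

```python
def angular_gap(a, b):
    """Smallest difference between two headings, in degrees."""
    return abs((b["heading"] - a["heading"] + 180) % 360 - 180)

def fill_gaps(tour, candidates, max_gap_deg=30.0):
    """Between consecutive tour photos whose headings differ too much,
    insert the candidate that best bridges the transition. Quality matters
    less here; the filler only has to smooth the animation."""
    result = [tour[0]]
    for nxt in tour[1:]:
        prev = result[-1]
        if candidates and angular_gap(prev, nxt) > max_gap_deg:
            filler = min(candidates,
                         key=lambda c: angular_gap(prev, c) + angular_gap(c, nxt))
            result.append(filler)
        result.append(nxt)
    return result
```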


Jones said that what they’ve built is impressive, but still pales in comparison to what a human artist could achieve by manually stitching together their own photo tour. He hopes to bring Google’s automated process up to the point where it’s impressive regardless of the source, and comparable with what humans are capable of on their own.

In response to a question from the audience, Chawathe also said that Google could in the future look for a way to make its 3D guided tour feature a consumer tool. It sounds like it’s not something Google is currently developing, but putting that power in the hands of Google+ users, for instance, might make it more of a draw for photography enthusiasts. Google already showed that it’s making efforts in that direction with the new auto-enhance and auto-awesome features it introduced for G+ at I/O.

The World In Your Browser Changes As Fast As The Real One Does

These efforts show how Google is making use of its immense processing power to deliver experiences via Maps that reflect a continually changing world. It sounds like this is just the beginning for both projects, too, and as with every major change, we’ll probably see more refinement of these approaches as users come on board and provide more feedback.