Augmented reality seems to be all the rage this week. One day after Google revealed more details about its own take on the idea in the form of Project Glass, Microsoft got in touch earlier today to give us a heads-up on technology it has been working on: its designs for making a user's experience of a location specific to that user alone.
Called SemanticMap, the concept is technology that lets physical signage change based on a specific user, that user's location and what that person is looking for. Unlike Google's glasses, Microsoft's technology doesn't require the user to wear any special headgear or carry other equipment. It draws on three key technologies that Microsoft is developing and that will very likely become more widespread in the years ahead: face analysis, gesture recognition and proximity detection. Microsoft has already put some of these to good use in the Kinect.
SemanticMap is only in prototype form for now, says Sergio Paolantonio, senior research designer with the Human-Computer Interaction Group at Microsoft Research Asia in Beijing, where the technology was created. And, he adds, “There are no current plans to deploy the current scenario as a new Microsoft product.”
But it is a working prototype, and it very much shows the direction we might expect things to go from here. “The demo is here right now, in front of me and works very well!” he told me earlier today. “It is not science fiction! …it is Super-Real, using Microsoft Research Technology.”
As you can see in this video, the ‘Super-Real’ technology is being used to help a woman navigate her way around an anonymous, labyrinthine office space. That is a bit dull, but you can see how it could be used for more: marketing campaigns, games, and anywhere, really, that a wall full of information has never felt like quite enough, while a pair of glasses or a smartphone with an augmented reality app might feel like too much.