If you’ve seen one city, have you seen them all? Not quite: Paris is defined by a few magical characteristics – the street signs, the architecture, the balconies and lampposts – and a new system from Carnegie Mellon identifies cities based on these distinctive traits.
The project describes a fairly complex algorithm that mines distinctive visual elements from Google Street View imagery. From the paper:
Given a large repository of geotagged imagery, we seek to automatically find visual elements, e.g. windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the city of Paris. This is a tremendously difficult task as the visual features distinguishing architectural elements of different places can be very subtle. In addition, we face a hard search problem: given all possible patches in all images, which of them are both frequently occurring and geographically informative? To address these issues, we propose to use a discriminative clustering approach able to take into account the weak geographic supervision. We show that geographically representative image elements can be discovered automatically from Google Street View imagery in a discriminative manner.
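To get a feel for the idea, here is a minimal sketch of "frequent and geographically informative." The patch features, cluster locations, and nearest-neighbor scoring below are all illustrative assumptions (the actual system uses iteratively trained discriminative classifiers over HOG-like patch descriptors); the sketch just ranks candidate patches by how purely "Paris" their local neighborhood is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "patch" is a small feature vector (a stand-in
# for a real patch descriptor). Half the patches come from Paris images,
# half from elsewhere; Paris patches cluster around two recurring motifs.
paris = np.vstack([rng.normal(loc=m, scale=0.3, size=(50, 8))
                   for m in (1.0, 2.0)])
elsewhere = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
patches = np.vstack([paris, elsewhere])
is_paris = np.array([True] * len(paris) + [False] * len(elsewhere))

def distinctiveness(query, k=10):
    """Score a candidate patch: among its k nearest neighbors, what
    fraction come from Paris? A patch that is both frequent and
    geographically informative scores near 1."""
    dists = np.linalg.norm(patches - query, axis=1)
    nearest = np.argsort(dists)[1:k + 1]  # skip the patch itself
    return is_paris[nearest].mean()

scores = np.array([distinctiveness(p) for p in patches])
top5 = np.argsort(scores)[::-1][:5]
print(is_paris[top5])  # the top-ranked candidates should all be Paris patches
```

The real method replaces this one-shot neighbor count with discriminative clustering: linear classifiers are trained to separate each candidate cluster from the rest of the world and re-applied in rounds, so clusters sharpen toward elements that are genuinely unique to one place.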
The system currently works across multiple cities, using large samples of images from around the world. Given these, it can identify where a random photo was taken with some degree of accuracy. Interestingly, the approach can also be applied to everyday objects, including “discovering stylistic elements in other weakly supervised settings, e.g. ‘What makes an Apple product?’”
You can download the study PDF here.