Computer Vision Startup ThirdEye Pivots From Google Glass To Mobile

ThirdEye, a startup founded by four Penn sophomores that helps the blind “see” what is around them, launched its mobile apps this morning. The company’s product, previously available only on Google Glass, lets users point their device at an object and hear an audio description of it.

While ThirdEye’s initial product was well received, even landing the company a partnership with the National Federation of the Blind (which tested the technology with visually impaired users and provided feedback), the team felt that wearables were far from the best way to achieve real impact.

“The problem with wearables is that they’re immature in the market, and they’re expensive,” explains cofounder Rajat Bhageria. He says the decision to move to mobile apps was motivated by the ubiquity of smartphones, especially Apple devices, among the visually impaired.
Now available on both iOS and Android, the apps should help with international growth, one of the company’s goals for the coming year as it seeks to leverage its newly obtained 501(c)(3) nonprofit status.

One of ThirdEye’s more powerful features comes from how the product combines object and text recognition. Pointing the app at a book, for example, yields different results depending on the mode you choose. Tapping “Recognize This” tells you that you’re pointing at a book and reads out the title, while the more detailed text recognition mode (“Read This”) reads the finer print from the spine and cover.
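
ThirdEye hasn’t published its source, so purely as illustration, here is a minimal Swift sketch of that two-mode flow; every type, helper, and placeholder string below is an assumption, with the cloud recognition call stubbed out (a fuller sketch of it appears later in this article):

```swift
import UIKit
import AVFoundation

// Hypothetical sketch: all names here are assumptions based on the behavior
// the article describes, not ThirdEye's actual code.
enum RecognitionMode {
    case recognizeThis   // coarse object recognition: "a book," plus its title
    case readThis        // detailed OCR of the spine and cover text
}

final class SceneDescriber {
    private let speech = AVSpeechSynthesizer()

    /// Stand-in for the cloud recognition round trip sketched later on.
    private func caption(for image: UIImage, mode: RecognitionMode) -> String {
        switch mode {
        case .recognizeThis: return "a book (title read from the cover)"
        case .readThis:      return "all legible text from the spine and cover"
        }
    }

    /// Point the camera, pick a mode, and the result is spoken aloud,
    /// since the audience is visually impaired.
    func describe(_ image: UIImage, mode: RecognitionMode) {
        let text = caption(for: image, mode: mode)
        speech.speak(AVSpeechUtterance(string: text))
    }
}
```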

Most of the team’s efforts have focused on making sure the products are accessible to the visually impaired. The image and text recognition itself is outsourced to a set of commercially available tools like CamFind’s CloudSight API.

The decision to stick with off-the-shelf tools instead of building proprietary systems involved real trade-offs, explained head engineer Ben Sandler. Onboard image recognition would have been faster, he said, but it would have been constrained by the recognition engine and the small number of reference images a device can store. Using the CloudSight API and doing the processing in the cloud slows the app down, but it increases scalability.

“[We] have access to the entire internet of images that have been classified,” says Sandler, adding that keeping the processing off the device also significantly reduces ThirdEye’s battery drain.
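
The trade-off Sandler describes maps onto a request-then-poll round trip, which is roughly how CloudSight’s public v1 REST API worked. The sketch below is illustrative only: the endpoint, field names, and polling cadence are assumptions drawn from that public documentation rather than ThirdEye’s actual integration, and the API key is a placeholder.

```swift
import Foundation

let apiKey = "YOUR_CLOUDSIGHT_KEY"  // placeholder

/// Submit an image for recognition, then poll until a caption comes back.
func requestDescription(of imageURL: URL,
                        completion: @escaping (String?) -> Void) {
    var request = URLRequest(
        url: URL(string: "https://api.cloudsightapi.com/image_requests")!)
    request.httpMethod = "POST"
    request.setValue("CloudSight \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/x-www-form-urlencoded",
                     forHTTPHeaderField: "Content-Type")
    // Point the service at a hosted copy of the image; uploading raw bytes
    // would need a multipart body and is omitted to keep the sketch short.
    let encoded = imageURL.absoluteString
        .addingPercentEncoding(withAllowedCharacters: .alphanumerics) ?? ""
    request.httpBody = "image_request[remote_image_url]=\(encoded)&image_request[locale]=en-US"
        .data(using: .utf8)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // The POST returns a token; the caption arrives asynchronously.
        guard let data = data,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
              let token = json["token"] as? String else { return completion(nil) }
        poll(token: token, completion: completion)
    }.resume()
}

/// Check whether the cloud has finished classifying the image yet.
func poll(token: String, completion: @escaping (String?) -> Void) {
    var request = URLRequest(
        url: URL(string: "https://api.cloudsightapi.com/image_responses/\(token)")!)
    request.setValue("CloudSight \(apiKey)", forHTTPHeaderField: "Authorization")
    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any]
        else { return completion(nil) }
        if let name = json["name"] as? String {
            completion(name)  // e.g. a short natural-language caption
        } else {
            // Not classified yet: the latency cost of the cloud approach.
            DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
                poll(token: token, completion: completion)
            }
        }
    }.resume()
}
```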

ThirdEye leans heavily on off-the-shelf systems in many other areas, too, which could eventually limit how well it scales. In the short term, though, that reliance has its advantages. One crucial benefit of using existing tools is a known level of data privacy. With so many users potentially relying on ThirdEye for their day-to-day, there is a massive amount of data available about what users are pointing at, where they are pointing, and other identifiable information. There are obvious security concerns, too, which improper data storage and handling would only exacerbate.

Sandler says ThirdEye addresses these concerns by not handling data itself. “We use different services to store and handle data,” he says, among them Parse.

Every recognition is recorded globally, but ThirdEye says this data is never tied to user identity. “We do not keep track of what any individual is doing,” he says. “That would feel like a violation of privacy.”
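
Since ThirdEye offloads storage to Parse, that separation can be expressed at the point where the record is written. The sketch below uses the real Parse iOS SDK the article mentions, but the class and field names are hypothetical; the point is simply that nothing linking the event to a user gets saved.

```swift
import Parse

/// Log a recognition event to the global tally without tying it to anyone.
/// "Recognition," "label," and "mode" are made-up names for illustration.
func logRecognition(label: String, mode: String) {
    let record = PFObject(className: "Recognition")
    record["label"] = label   // e.g. "book"
    record["mode"] = mode     // e.g. "recognizeThis" or "readThis"
    // Deliberately omitted: PFUser.current(), device identifiers, and
    // location, so the aggregate data can't identify an individual.
    record.saveInBackground()
}
```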

As ThirdEye grows in scale and capability, one of the team’s greatest hurdles will be time: all of its founders are still studying at Penn, and so far none plans to drop out or take time off from school.