Cortica teaches autonomous vehicles with unsupervised learning

Cortica is moving into mobility by opening an automotive branch for its AI technology. The key, according to the company, is its use of unsupervised learning, which allows an autonomous system to process raw data and make sense of its environment on its own.

For those just dipping their toes into the artificial intelligence pool, in supervised machine learning the AI is given a set of examples along with labels saying what each one means, so that it can learn to recognize similar items in the future. For example, you could feed the system a thousand images of stop signs — some faded, some obscured by tree branches, etc. — so that the AI can learn to recognize stop signs.
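To make the idea concrete, here is a minimal sketch of supervised learning in pure Python. The feature vectors (a made-up "redness" score and a side count) and the nearest-centroid classifier are illustrative assumptions, not anything Cortica has described — the point is only that the labels come with the training examples.

```python
# Minimal supervised learning sketch: each training example arrives
# WITH its label, and the classifier averages labeled examples into
# per-label centroids. Features are invented stand-ins for real
# image data: [redness 0-1, number of sides].

def train_centroids(examples):
    """examples: list of (features, label). Returns label -> mean feature vector."""
    sums, counts = {}, {}
    for feats, label in examples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, feats):
    """Assign the label whose centroid is nearest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, feats))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

# Labeled training set, including a faded stop sign.
labeled = [
    ([0.9, 8], "stop sign"), ([0.7, 8], "stop sign"),
    ([0.1, 3], "yield sign"), ([0.2, 3], "yield sign"),
]
centroids = train_centroids(labeled)
print(classify(centroids, [0.6, 8]))  # a somewhat faded octagon -> "stop sign"
```

Even this toy version shows the defining trait of supervised learning: the system never has to figure out what the categories are, only where their boundaries lie.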

In unsupervised machine learning, the examples aren’t labeled. The AI has to classify and organize the examples based on common characteristics. Stop signs, for example, are red with white borders and eight sides. The AI can learn as it goes that sometimes that red is faded and sometimes a white border will be obscured by a tree branch, and it can adjust its categories as needed so that it still classifies stop signs as stop signs.
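The contrast can be sketched with a tiny k-means clustering loop — a standard unsupervised technique, chosen here for illustration; the source doesn't say which algorithms Cortica uses. The features are the same invented stand-ins as above, but now no labels are supplied: the algorithm has to discover the groups on its own.

```python
# Unsupervised learning sketch: group unlabeled points by similarity
# using a minimal k-means. Features are invented stand-ins:
# [redness 0-1, number of sides].

def kmeans(points, k, iters=20):
    """Cluster points into k groups; returns (centroids, assignments)."""
    centroids = [list(p) for p in points[:k]]  # deterministic init for the demo
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, assign

# Unlabeled observations: crisp and faded octagons, plus two triangles.
points = [[0.9, 8], [0.7, 8], [0.6, 8], [0.1, 3], [0.2, 3]]
_, groups = kmeans(points, k=2)
print(groups)  # the three octagons end up in one group, the triangles in the other
```

The faded octagon lands in the same cluster as the crisp ones because it still shares their dominant characteristics — the same intuition as the stop-sign example above.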

Cortica says unsupervised machine learning will allow autonomous cars of the future to better adapt to new situations on the road. The system Cortica developed to help manufacturers and developers takes in all the data from the sensors, processes the images, clusters them, and tags them with metadata that has already been defined. Igal Raichelgauz, Cortica’s CEO and cofounder, said in a press release that this system can process the massive amounts of data that will soon be generated by in-vehicle cameras — “enough video data to equal YouTube’s entirety every hour” in California alone.
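The pipeline described above — ingest sensor data, process the images, cluster them, tag them with predefined metadata — can be sketched as follows. Every name, the signature-extraction step, and the tag table are invented for illustration; Cortica's actual system is proprietary and far more sophisticated.

```python
# Hedged sketch of the described pipeline shape: ingest frames,
# reduce each to a feature signature (stand-in for image processing),
# cluster frames by signature, then attach metadata tags that have
# already been defined. All names here are hypothetical.

PREDEFINED_TAGS = {"octagon": "stop_sign", "triangle": "yield_sign"}

def extract_signature(frame):
    """Stand-in for image processing: reduce a frame to a shape key."""
    return "octagon" if frame["sides"] == 8 else "triangle"

def run_pipeline(frames):
    # Cluster frames that share a signature, then tag each cluster.
    clusters = {}
    for frame in frames:
        clusters.setdefault(extract_signature(frame), []).append(frame)
    return {
        PREDEFINED_TAGS.get(sig, "unknown"): members
        for sig, members in clusters.items()
    }

frames = [{"sides": 8, "id": 1}, {"sides": 3, "id": 2}, {"sides": 8, "id": 3}]
tagged = run_pipeline(frames)
print(sorted(tagged))  # ['stop_sign', 'yield_sign']
```

The key structural point is the order of operations: clustering happens on the raw signatures first, and the predefined metadata is only applied afterward, to whole clusters rather than to individual frames.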

In order to process this volume of data, the system uses a cloud/local hybrid architecture. “Non-vital” processing is offloaded to the cloud to reduce complexity and power consumption, which could become more of a factor as vehicles become more electrified and less reliant on gasoline.
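A hybrid split like this might look something like the sketch below: safety-critical work stays on the vehicle, and everything else is queued for the cloud to save on-board power. The vital/non-vital criterion and all the task names are invented placeholders, not Cortica's actual policy.

```python
# Hedged sketch of a cloud/local hybrid dispatcher. "Vital" tasks
# run on the vehicle immediately; non-vital tasks are deferred to a
# cloud queue. The routing rule here is a made-up placeholder.

from collections import deque

cloud_queue = deque()

def process_locally(task):
    return f"local result for {task['name']}"

def dispatch(task):
    """Route a task to on-board compute or the cloud queue."""
    if task.get("vital"):              # e.g. obstacle detection
        return process_locally(task)
    cloud_queue.append(task)           # e.g. map refinement, analytics
    return "deferred to cloud"

print(dispatch({"name": "pedestrian_detection", "vital": True}))
print(dispatch({"name": "route_analytics", "vital": False}))
print(len(cloud_queue))  # 1
```

The design trade-off is the one the article names: anything deferred to the cloud costs latency and bandwidth but saves on-board complexity and power, which matters more as vehicles electrify.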

The company says that this kind of machine learning is closer to the way humans see and process visuals. We can recognize a stop sign even when it’s covered in stickers, even in the rain, even at twilight. Cortica’s AI can also adapt to what it calls “next generation obstacles,” such as gestures, reading them to predict the intent of other vehicles and of humans.

If you’d like to see four minutes or so of AI learning to maneuver around city buses and not hit pedestrians in Tel Aviv crosswalks, here’s a video. It’s kind of reassuringly boring.