Oakland-based art and tech studio 💾🌵 takes critical look at A.I.

Artificial intelligence is not as advanced as you may think.

Sure, it’s technologically advanced, but it lacks a general understanding of the real world, the way humans interact and what’s socially acceptable to say.

In the last couple of months, Microsoft has had a pair of failed attempts with artificial intelligence. The first involved an image recognition app called Fetch!, which looks at photos of dogs to identify their breeds. People, of course, started using the app to determine what breeds of dog people resemble, and in doing so noticed that the app identified Asian people as either Pekingese or Chinese Crested dogs. Earlier this week, Microsoft launched an A.I.-powered bot named Tay to interact with people on the Internet. Within a day, people had pretty much hijacked the bot to tweet racist commentary and other inappropriate content. After just 16 hours’ worth of chats, Microsoft silenced Tay.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay…,” Microsoft VP of Research Peter Lee wrote in a blog post. “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.”

Oh, and let’s not forget about the time when Google screwed up royally by tagging photos of black people as gorillas. Yeah, that was really bad.

Enter AI Scry, a project from 💾🌵

💾🌵 (“disk cactus”) recently launched an iOS app called AI Scry that is designed to identify objects in the world. It relies on a neural network trained on a Microsoft data set called MS COCO to recognize and describe images. Neural networks are one type of machine learning model; when it comes to images, you can think of one as a pattern recognition system.
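
If you want a feel for what that pattern recognition looks like in code, here’s a minimal sketch of COCO-style image captioning using an off-the-shelf model. To be clear, this is not 💾🌵’s actual code; the library, the model name and the photo path are modern stand-ins for any captioning network trained on MS COCO.

```python
# A minimal image-captioning sketch, the same general technique AI Scry
# builds on. Assumptions: the transformers library is installed and
# "street_scene.jpg" is a local photo (both are illustrative choices).
from transformers import pipeline

# Load a publicly available captioning model trained on COCO-style data.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

result = captioner("street_scene.jpg")
print(result[0]["generated_text"])
# Prints something generic like "a car stopped at a traffic light" --
# the description is only as good as the objects and scenes the
# training data chose to include.
```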

The AI Scry app is intentionally fun and light-hearted, 💾🌵 co-creator Sam Kronick (pictured above through the lens of AI Scry) told me. Kronick and 💾🌵’s other co-creator Tara Shi (also pictured above) were looking for a creative, non-intimidating way to talk about the issues of A.I. Ultimately, the goal with AI Scry is to pick apart A.I. and “point out that it’s not magic,” Shi said.

“People created this and people inherently have flaws and have biases, and these are simply reproduced,” Shi said. “A.I. is an algorithm, which is a tool that we create.”

While working with the Microsoft data set, Kronick and Shi said they really started to understand the biases built into the system — the objects that it’s really good at seeing and the objects it refuses to see.

“Conceptually, this is interesting because you have this whole class of algorithms that are designed to do things that feel really, like, almost from another century,” Kronick said. But, at the same time, he said, it’s as if all of the advances we’ve made in the social sciences and the way we talk about variation in the world among people, objects or things don’t exist.

“You’re going back to creating systems that want to discriminate explicitly, that have hard binaries between different types of objects because you want to know what a rock is and what a doughnut is,” Kronick said. “You’re concerned about separating those two as far as you can rather than understanding anything about what’s in between.”

AI Scry working its “magic”

AI Scry identifies everyday, generic things like food, a car at a stoplight, a man holding a cake, and so on. What it doesn’t do well is tell you anything about the scenario. In a war zone, Kronick said, AI Scry would be wrong in a way that would be really uncomfortable.

“You might just say, it’s the wrong data set,” Kronick said. “What’s interesting here is realizing that it’s easy to feel like you’re pushing these systems to be more objective than humans would. You’re removing a person from the loop. It’s easier to say it’s an objective observer. But in reality, there’s this whole chain of human designers, human laborers, human workers that create this system, and they bring along with them all these choices of what’s going to be included versus excluded in this data, what kind of information they’re looking for and what kind of functions they’re trying to draw out of this.”

Upon first hearing about AI Scry, TechCrunch’s Matthew Panzarino wondered about potential accessibility uses. Kronick told me that he’d feel “a little concerned” using this technology to help blind people navigate a city or cross the street, in part because the software wasn’t written from the perspective of what someone who is blind might need. As it stands right now, he said, the technology prioritizes what a sighted person may find interesting about a particular scene. He went on to say that AI Scry is more of a critical project that’s trying to break apart what’s going on at the core of artificial intelligence, and that there’s a lot more critical thinking to be done on the topic. In the near future, Shi plans to embark on a project that uses neural networks to generate 3D rocks, further exploring the implications of artificial intelligence.

“The idea is to train it on a lot of images of rocks and then have it slowly learn over time to create something that it believes to be a rock,” Shi said. “And then reinserting them back into the landscape, as a comment on even the most subtle parts of this change could be the most stirring and interesting.”
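
For the technically curious, the image half of that idea boils down to a generative adversarial network: one network learns to produce rock-like pictures while a second learns to tell them apart from real photos. The sketch below is a hypothetical, stripped-down version; the architecture, the training settings and the rocks/ image folder are illustrative assumptions, not the studio’s actual pipeline.

```python
# A stripped-down generative adversarial network (GAN) that slowly
# learns to produce 64x64 rock-like images. Everything here, from the
# layer sizes to the rocks/ folder, is an illustrative assumption.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize(64), transforms.CenterCrop(64),
    transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3),
])
# Assumption: photos of rocks live in rocks/<some_subfolder>/*.jpg
data = DataLoader(datasets.ImageFolder("rocks", tfm), batch_size=64, shuffle=True)

G = nn.Sequential(  # generator: random noise in, 64x64 image out
    nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 4, 0), nn.Tanh(),
)
D = nn.Sequential(  # discriminator: image in, real-vs-fake score out
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 16), nn.Flatten(),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(50):
    for real, _ in data:
        b = real.size(0)
        fake = G(torch.randn(b, 100, 1, 1))
        # Discriminator step: tell real rocks from generated ones.
        d_loss = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: get better at fooling the discriminator.
        g_loss = bce(D(fake), torch.ones(b, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Over many epochs, the generator’s output drifts from noise toward textures the discriminator can no longer reject, which is the “slowly learn over time” Shi describes.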

Keeping the lights on at 💾🌵

With the 3D rocks project and AI Scry, the goal isn’t to make money. Although 💾🌵 charges $0.99 for the app, that money goes toward keeping the project running and covering server costs and whatnot. Some of their other projects include an emoji keyboard, which they recently started selling via Urban Outfitters, and a Wi-Fi walkman that reads off Wi-Fi network names as you walk through the city.

When 💾🌵 isn’t tinkering around with projects like AI Scry and the Wi-Fi walkman, the studio is working with big corporate clients like Google. At the last Maker Faire, 💾🌵 created an experiment for Google as part of the company’s Breaker lab, in which kids performed science experiments by breaking stuff. The kids stuffed confetti of different colors into a balloon and inflated it until it popped, while 💾🌵 watched with a high-speed camera outfitted with computer vision algorithms to track the particles and generate music from the explosions.
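
A toy version of that tracking-to-music pipeline might look like the sketch below: isolate one confetti color per video frame with OpenCV, then map each particle’s position to a pitch. The color range, the pitch mapping and the popped_balloon.mp4 file name are all assumptions for illustration, not the studio’s actual setup.

```python
# A rough sketch of the Maker Faire idea: track colored confetti in
# video frames with OpenCV and map each particle to a musical pitch.
import cv2
import numpy as np

cap = cv2.VideoCapture("popped_balloon.mp4")  # high-speed footage (assumed file)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Isolate one confetti color (here: a red-ish hue band).
    mask = cv2.inRange(hsv, np.array([0, 120, 120]), np.array([10, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 10:  # skip specks of noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        # Naive sonification: particles higher in the frame -> higher notes.
        pitch = int(np.interp(y, [0, frame.shape[0]], [84, 48]))
        print(f"particle at ({x}, {y}) -> MIDI note {pitch}")
cap.release()
```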

“The studio is a platform for us to build on all these ideas, whether they’re going to be profitable or not, rather than trying to have a startup model of building some wacky ideas and gambling on whether they’re going to end up being something big,” Kronick said.

Wacky ideas, indeed. 💾🌵’s process for coming up with ideas is also a wacky one. TL;DR: They go into a desert with Consensual Vibes, an algorithm-based device they hacked together, and sit around it in a big, carved-out dirt hole. They bust out the app that goes along with it and swipe left or right on ideas like “Meat + trial,” “drones + bodies” or “Emoji + keyboard” until they come up with a match. Consensual Vibes is like Tinder, but for ideas, Kronick said. You can check out their process here: