Facebook connects 1.5 billion people all over the world, but for the blind and visually impaired, it can be difficult to gain access to what has become a vast platform for connectivity. That’s where Facebook’s accessibility team, led by Jeff Wieland, comes in. The sole purpose of the accessibility team is to help people with disabilities have a seamless experience on Facebook, and ultimately help the social network achieve its mission of connecting the world.
Right now, blind and visually impaired people who have access to screen readers — tools that identify and read aloud what's displayed on a screen — can listen to what people are writing on Facebook, but there's no way for them to find out what's going on in the millions of photos shared on Facebook every day.
“You just think about how much of your news feed is visual — and is probably most of it — and so often people will make a comment about a photo or they’ll say something about it when they post it, but they won’t really tell you what is in the photo,” Matt King, Facebook’s first blind engineer, told TechCrunch. “So for somebody like myself, it can be really like, ‘Ok, what’s going on here? What’s the discussion all about?’”
That’s why Facebook is currently working on an artificial intelligence-based object recognition tool to help blind users get an idea of what’s in all of the photos people share on Facebook. King, who started at the company just three months ago, recently showed me how he uses a screen reader to navigate Facebook.
“My view of the page is totally sequential,” King explained to me. “I can’t see the whole thing at one time. I see a little piece.”
As he scrolled down the page, the screen reader told King that he was at a list of six items, referring to the number of notifications he had at the time. It also told him when he reached a “convo box,” which signaled that he could interact with that element and leave a comment.
King eventually scrolled to a friend’s post that featured text and a photo. His friend, Anne, wrote, “Ready for picture day of first grade” accompanied with a photo. Thanks to the object recognition technology Facebook is prototyping, King heard: “This image may contain, colon, one or more people. Child.” Without it, all King would’ve known was that Anne wrote, “Ready for picture day of first grade,” and that she posted a photo — but nothing about what was in the photo. For another photo, the tool told him: “This image may contain colon nature, outdoor, cloud, foliage, grass, tree.”
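The hedged phrasing King heard (“may contain”) suggests the tool only surfaces concepts the model is reasonably sure about, in line with Wieland's later comment that the team doesn't want to get descriptions wrong. Here is a minimal sketch of how predicted concept tags might be assembled into alt text; the function name, the confidence threshold, and the example predictions are all illustrative assumptions, not Facebook's actual implementation:

```python
# Hypothetical sketch: turn object-recognition predictions into
# screen-reader-friendly alt text. Threshold and names are assumptions.

CONFIDENCE_THRESHOLD = 0.8  # only surface concepts the model is confident about

def build_alt_text(concepts):
    """Join high-confidence (tag, score) pairs into one alt-text sentence."""
    confident = [tag for tag, score in concepts if score >= CONFIDENCE_THRESHOLD]
    if not confident:
        return "Photo."  # fall back to a generic description
    return "This image may contain: " + ", ".join(confident) + "."

# Example: a model's (made-up) predictions for an outdoor photo.
# "dog" is dropped because its confidence falls below the threshold.
predictions = [("nature", 0.95), ("outdoor", 0.91), ("tree", 0.88), ("dog", 0.42)]
print(build_alt_text(predictions))
```

Filtering by confidence trades coverage for trust: a blind user hearing “may contain: tree” can rely on it, whereas a low-confidence guess like “dog” could mislead them about what's in the photo.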
In the photo gallery below, you’ll see what a blind person hears read aloud when they’re using a screen reader to browse photos on Facebook.
“This might not be 100 percent yet, but even if it’s just halfway there, the level of engagement that’s possible, the amount of enjoyment I can get — that’s like going from zero percent to at least 50 percent of what you might get,” King said. “That’s a huge jump, and it’s only going to get better from here. I personally find Facebook’s willingness to invest in ways like that just really powerful and exciting, and just one more way to make connecting people with disabilities a great experience.”
Unfortunately, there’s a bit of a learning curve with screen readers. That’s why one of King’s goals is to make that on-boarding process a lot simpler for blind and visually impaired people. He wants access to the web to be as easy for people with disabilities as it is for people without them, regardless of where they are in the world.
“We start thinking of access to information and information technology almost like a human right,” King said. “I mean, it’s the gateway to employment, it’s the gateway to opportunity of all different kinds — participating in your government and everything. So when we can flatten that on-ramp, I see that as the ultimate goal in accessibility and I think that Facebook is really uniquely positioned to help do that. So that gets me, just really excited. It’s a way of giving dignity to every person with a disability in the world by helping them get connected to everybody else.”
Ideally, the team hopes to release this product by the end of the year on one platform — either web or iOS — and allow people to opt in to experience it.
“We want to make sure that the concepts we deliver, we feel strongly that they are in the photo,” Wieland said. “We don’t want to get that wrong. So we definitely need to continue investing in AI to make this great. We’re optimistic we can ship this in the relatively short term.”