Apparently I “like” OfficeMax, Folgers, JCPenney, Kraft and several other big-name brands. At least, according to Facebook I do. Except, it’s wrong. It’s not that I dislike these brands, really (well except maybe Folgers – I mean, yuck). But the truth is, I’m just indifferent to them. Why did I like them? I don’t recall. I probably saw a coupon. Maybe a freebie or contest. I was bored, and I had to click “like” to enter the drawing. This is a common situation among Facebook users, but not really a huge problem. For Facebook, however, it is.
With Facebook now launching a search engine on top of its own structured data – conveniently after turning off the ability for users to opt out of search – there’s now a greater need to think about the implications of Facebook’s “over one trillion connections,” and how representative they are of the people who created them.
As a whole, Facebook’s “like” data may paint a relatively accurate picture of my public self – I like technology, apps, Star Wars, TV, family activities, and jokes about how much wine working mothers have to drink to stay sane (hint: it’s a lot). But when examined one-by-one, there’s a lot of junk data in there.
Initially, when and if people begin to transition to Facebook search over Google, they will be searching for things of personal interest, like old photos, places to travel, new bars or restaurants to try, local singles with shared interests, and more. But further down the road, you can imagine Facebook graph search becoming a part of the consumer shopping experience, too.
Say I’m looking for a new dryer, an HDTV, a new credit card, or maybe even a good sushi restaurant, and I want to know what my Facebook graph can recommend. The problem, as it stands today, is that Facebook would do a poor job of extracting that information accurately. “Friends who liked X” is not a recommendation. “Friends who checked in at Y venue” isn’t either.
With check-ins, at least, Facebook is addressing that latter issue somewhat via its Nearby feature, which has recently been prompting users to recommend or rate the restaurant, bar or other venue on a five-star scale. This data isn’t perfect either, because some friends’ ratings are more valuable than others’ ratings due to a variety of factors – their palates, biases, whether they’re wine-savvy, or similarities with a user’s own tastes, for example. But it’s a start.
On the “like” front, though, the data is murkier.
Steve Cheney, GroupMe’s Head of Biz Dev, explained this problem in more detail in a thoughtful, if pessimistic, post: “Graph Search’s Dirty Promise and the Con of the Facebook ‘Like’.” It’s well worth reading in its entirety. In it, Cheney says:
The truth however is that the link between query intent and your social interactions for interests and places is much weaker than FB wants you to believe.
In computer architecture they call an out of date piece of data “dirty”. Accessing dirty data is bad, wasting time and causing more harm than good. And in this context, much of the structured data that makes up Graph Search is just that: totally irrelevant and dirty.
It turns out as much as half of the links between objects and interests contained in FB are dirty—i.e. there is no true affinity between the like and the object or it’s stale. Never mind does the data not really represent user intent… but the user did not even ‘like’ what she was liking.
He goes on to explain that the problem was created by the way the Facebook “like” system worked. The company told brands that users would see their posts in their news feeds if the users liked the brand’s page on Facebook. So the brands paid big money in terms of advertising dollars to acquire fans. “Across the board big advertisers were told to spend 50% of their ad buy solely on fan acquisition,” Cheney writes. He calls it a “dirty little secret in ad agency land.”
This is why, today, Facebook users can’t just request a coupon, get a free sample, enter a contest, hear about a limited-time sale, etc. via a brand’s page – they are forced to like the page first. This then establishes a connection between the brand and the user. And now Facebook is mining that connection to build its own search engine. Google has PageRank. Facebook has “like” data, check-ins, posts, comments and photos. Here are some of the queries you can perform with Facebook Graph Search, to get an idea of how it will work.
A “Like” Is But One Signal, Facebook Has More
That said, while Cheney has a point about Facebook’s “dirty data,” I think that point of view also discounts Facebook’s capacity to innovate. Yes, some data is bad. But not all of it.
And most importantly, it seems clear that a Facebook like, in the context of businesses and brands, will eventually have to become one signal among many in Facebook’s search results ranking algorithm. Just as engines such as Google rely on thousands of signals to determine where a link appears in search results, Facebook too could turn to other means to determine how much value any particular “like” has. For example, with a restaurant, it could also know not just whether you liked it, but when you checked in, how often you returned, who you were with, how often they return, how you rated it, what friends and friends of your friends rated it, and more.
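To make the signal-weighting idea concrete, here’s a minimal sketch of how several signals could be folded into a single relevance score for, say, a restaurant. Everything in it – the signal names, the weights, the scoring rule – is hypothetical and purely illustrative; Facebook’s actual ranking algorithm is not public.

```python
# Toy illustration (NOT Facebook's actual algorithm) of combining
# multiple hypothetical signals into one relevance score.

# Hypothetical weights: a bare "like" counts for little, repeat
# check-ins count for more, and explicit friend ratings count most.
SIGNAL_WEIGHTS = {
    "liked": 1.0,
    "checkins": 2.5,
    "friend_rating": 3.0,
}

def relevance_score(signals: dict) -> float:
    """Combine raw signal values into a single weighted score."""
    score = 0.0
    score += SIGNAL_WEIGHTS["liked"] * (1 if signals.get("liked") else 0)
    score += SIGNAL_WEIGHTS["checkins"] * signals.get("checkins", 0)
    # Friend ratings are centered on 3 stars, so a poorly rated
    # venue actually subtracts from the score.
    ratings = signals.get("friend_ratings", [])
    if ratings:
        avg = sum(ratings) / len(ratings)
        score += SIGNAL_WEIGHTS["friend_rating"] * (avg - 3.0)
    return score

# A venue the user merely "liked" once (coupon-style) scores far
# lower than one with repeat check-ins and good friend ratings.
coupon_like = relevance_score({"liked": True})
favorite_spot = relevance_score(
    {"liked": True, "checkins": 4, "friend_ratings": [5, 4]}
)
```

In a sketch like this, a junk “like” contributes almost nothing once stronger behavioral signals are present, which is exactly how a search engine could route around dirty data without discarding it.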
As TechCrunch’s Josh Constine also recently pointed out, even users’ photo uploads could translate into recommendations, thanks to the photos’ geotags (the location where the photo was taken). A photo says “I was there,” and it often implies an element of fun, too. As Josh noted, “I don’t see many people posting pics from the DMV.”
With brands, determining a like’s value is more difficult. However, through Facebook Exchange – the advertising network that brought the first cookie-based retargeted ads to the site – Facebook can gain access to other signals about user behavior in order to better examine what a “like” means. For example, Facebook could learn whether a user visited a brand’s website, when those visits occurred, whether they later led the user to the advertiser’s Facebook page (after seeing a Facebook ad, for instance), and whether that in turn provoked the “like.”
Facebook could port Facebook Exchange to mobile as well, which could bring in even more data, including, perhaps one day, geolocation. That would solve the messy “check-in” problem. (Check-ins are a decidedly manual, privacy-sensitive way of getting location data an app could simply know, if users gave it permission to run in the background.)
The trick will be in finding the proper way to massage all these various signals into an algorithm that makes sense and determines the proper relevancy. And Facebook is still critically missing information related to users’ financial transactions, which is the end result of clicking “like,” at least in brands’ eyes. It currently has some access to purchase data, through its relationship with Datalogix, but that may be limited to things such as grocery store purchases – data Facebook receives via Datalogix loyalty card data sets. Facebook clearly needs more of this kind of information.
Facebook also doesn’t necessarily know if you ever ate at that restaurant you liked, unlike when you checked in, posted geotagged photos, or reviewed it. And it doesn’t know how much you spent there, either. That’s why if Google ever gets its Google Wallet mobile payments service into the mainstream, it could best Facebook on at least this portion: closing the loop.
Still, Facebook does have access to a relatively powerful data set through Open Graph, which tells it not just what people like, but what they do. Open Graph data comes in through any third-party app that auto-shares with Facebook, providing information on things like your media consumption behavior (e.g. bands, TV shows, movies, books), as well as info gleaned from things like food or travel apps, and it may also have a decent amount of information from local businesses, too.
At the end of the day, the way I see Facebook’s graph search now is as the raw skeleton of what could one day become a fully fledged search offering. Today, dirty data will abound, perhaps. But when there are one trillion connections to examine (and growing), there’s also the possibility of finding golden nuggets of quality amid the junk.
Image credit: sofiabudapest/flickr
Additional reporting by: Josh Constine