While large companies like Facebook and publishers continue to rethink their role in disseminating news in the wake of the growing influence of ‘fake news’ and the ever-present spread of misleading clickbait, a London-based startup called Factmata has closed a $1 million seed round in its ambition to build a platform that uses AI to help fix the problem across the whole of the media industry: from the spread of biased, incorrect or just plain junk clickbait on various aggregating platforms, to the use of ad networks to help disseminate that content.
There is no product on the market yet — the company is piloting different services at the moment — and so it’s reasonable to wonder if this might ever get off the ground. But what Factmata is doing is notable anyway for a couple of reasons.
First and foremost, for the timeliness of Factmata’s mission. It’s been over a year since the US election, and nearly two years since the Brexit vote in the UK. Both events raised the profile of just how much strategically placed, biased or plainly wrong stories might have influenced people’s voting in those pivotal moments; and people (and businesses) are still talking about how to fix the problem, which started as a public relations risk but threatens to tip into a business and legal risk if left unchecked.
And secondly, because of who is backing it. The list includes Biz Stone, one of the co-founders of Twitter (which itself is grappling with its role as a ‘neutral’ player in people’s wars of words); and Craig Newmark, a longtime supporter of freedom of information and other civil liberties as they cross into the digital world. In August of last year, when Factmata announced the first close of this round, it also named Mark Cuban (the investor who is a very outspoken opponent of US President Donald Trump), Mark Pincus, Ross Mason and Sunil Paul as investors.
In an interview with TechCrunch, Factmata’s CEO and founder Dhruv Ghulati — a machine learning specialist whose field of work has included “distant supervision for statistical claim detection” (which sounds like a strong foundation for how one might model a detection system for a massive trove of news items) — would not be drawn on the specifics of how Factmata would work, except to note that it would be based on the concept of “community-driven AI: How do we take a machine learning model where you get data to train your model, perhaps pay 10,000 people to flag content? How can you build a system where [what you have and what you want] is symbiotic?”
(And you can, in fact, think of many ways this could eventually be implemented. Consider, for example, paywalls: readers could build up credits to bypass them for every report they make that’s determined to help with the fake news challenge.)
Ghulati said that Factmata’s team of machine learning and other AI specialists is building three different strands of its product at the moment.
The first of these will be a product aimed at the adtech world. Programmatic ad platforms, and the different players that feed into them, have built a system that is ripe for abuse by bad actors.
Publishers of “legitimate” stories are finding their work run alongside ads inserted with no visibility into what those ads contain, while those selling ads sometimes will not know where their ads will run. The idea is that Factmata will help detect anomalies and surface them to the different players in the field, reducing those unintended placements.
“We have a recognition system that can detect things like spoof websites,” which might use legitimate-looking ads to help further the image of their legitimacy, said Ghulati.
The success and adoption of the adtech product is predicated on the idea that most of the players in this space care more about quality than about traffic, which can seem antithetical to the business. However, as more junk infiltrates the web, people might gradually move away from using these services (Facebook’s recent traffic fall is an interesting one to ponder in that light), so the quality issue may well win out in the end.
Ghulati said that AppNexus, Trustmetrics, Sovrn and Bucksense are among the companies in the programmatic space that are already testing out Factmata’s platform.
“Sovrn is passionate about working with independent publishers of quality content. To offer further quality metrics to our buyers, we have chosen to work with Factmata to help build new whitelists of inventory that are free of hate speech, politically extreme, and fake/spoof content,” said Andy Evans, CMO at Sovrn. “This is a new offering in the programmatic advertising market, and Factmata is a strong partner in this space. We are excited to be part of Factmata’s journey to help indirect programmatic offer a cleaner, healthier environment for brands.”
The second area where Factmata is hoping to make a mark is on aggregation platforms. While news publishers’ own sites continue to be strong drivers of traffic for those businesses, platforms like Google and Facebook play an ever-bigger role in how traffic gets to those sites in the first place, and in some cases, in where a story is read, full stop.
Here, Ghulati said that Factmata is working on an “alpha” of a product that would work on these platforms to, like the programmatic ad networks, detect when something biased or incorrect is being shared and read. (He would not disclose which of these platforms might be talking with Factmata, but given that Stone again has a role at Twitter, it would be interesting to see if it’s one of them.)
Ghulati is quick to point out that this is not censorship: nothing would ever be removed based on Factmata’s determinations; questionable content would instead be flagged for readers. This sounds not unlike Facebook’s own attempts at getting people to report when something is of questionable origin, and ultimately, in my opinion, this part of the business will only succeed if it can reach that goal faster and better than the companies it will be trying to sell its services to.
He also notes that in the fight against ‘bias,’ Factmata is not trying to remove all opinion from the web, but merely to inform readers when it’s there. “We are not trying to build tech that is trying to make articles unbiased. We’re not trying to create automated machine journalism that makes the most unbiased articles. We’re trying to surface and make clear to the reader that those biases do exist. For example, they made the claim because it’s like this. It’s difficult in a fast news cycle always to know the context.”
The third area where Ghulati hopes Factmata will make its mark is as a consumer-facing service, and this might be the most plausible outcome of its efforts to work with publishers, platforms and others in the world of news and news distribution. Here you can imagine a kind of plug-in or browser extension that pops up additional information about a news piece right as you are reading it.
Factmata is just getting started and this seed round is potentially just the tip of the iceberg for what it would need to bring a full product to market. There’s certainly a will behind its mission today, and hopefully, that will not ebb away as people move on to, well, the next item in the news cycle.
“It is def a huge problem, and not one that will be solved this year or next year,” Ghulati said. “We are taking a long-term perspective on this. We think in five to ten years, will we have a new news platform that puts the user at its core? From the tech perspective, it is well known that this space has been dominated by social media platforms. That market is there, but there is a huge chunk that is not, and we think there is a huge opportunity to revamp safety in that market.”