Factmata gets backed by eyeo, maker of Adblock Plus, and takes over its Trusted News app

“Fake news” — news content that misleads people, whether with half-truths or outright lies — has become a permanent fixture of the internet. Now, as tech and media platforms continue to search for the best way to fight it, Factmata — a London startup backed by Biz Stone, Craig Newmark, Mark Cuban, Mark Pincus and more to build a platform that detects when false information is shared online — is announcing a new investor and partnership that will see it expanding its scope.

The company is picking up an investment from eyeo, the company behind Adblock Plus, and as part of it, Factmata is taking over the running of Trusted News, the Chrome extension that eyeo launched last year to give those browsing content on the web a nudge indicating whether a story is legit or not.

Dhruv Ghulati, the CEO of Factmata — who co-founded the company with Sebastian Riedel and Andreas Vlachos (Riedel’s other fake-news-fighting startup, Bloomsbury AI, was acquired by Facebook last year) — said that the financial terms of the deal were not being disclosed. He added that “eyeo invested both cash and the asset” and that “it’s a significant amount that strategically helps us accelerate development.” He pointed out that Factmata has yet to raise money from any VCs.

Trusted News today — an example of how it looks is in the screenshot above — has “tens of thousands” of users, Ghulati said, and the aim is to continue developing the product and take those numbers to the next level: hundreds of thousands of users. The plan is to build extensions for other browsers — “You can imagine a number of platforms across browsers (e.g. Brave), search engines (e.g. Mozilla), hosting companies (e.g. Cloudflare) could be interested but we haven’t engaged in discussions yet,” he said — as well as to expand what Trusted News itself provides.

“The goal… is to make it a lot more interactive where users can get involved in the process of rating articles,” he said. “We found that young people especially surprisingly really want to get involved in debating how an article is written with others and engaging in rating systems, rather than just being handed a rating to trust.”

Ghulati said that eyeo’s decision to hand off running Trusted News to Factmata was a case of horses for courses.

“They are giving it to us in return for a stake because we are the best placed and most focused natural language understanding company to make use of it, and progress it forward fast,” he said. “For Factmata, we partner with a company that has proven ability to generate large, engaged community growth.”

“Just as eyeo and Adblock Plus are protecting users from harmful, annoying ads, the partnership between Factmata and Trusted News gets us one step closer to a safer, more transparent internet. Content that is harmful gets flagged automatically, giving users more control over what kind of content they trust and want to read,” said Till Faida, CEO and co-founder, eyeo, in a statement.

Factmata has already started thinking about how it can put some of its own technology into the product, for example by adding in the eight detection algorithms that it has built (detailed in the screenshot above, which include clickbait, hate speech, racism and so on). Ghulati added that it will be swapping out the way that Trusted News looks up information. Up to now, the app has been powered by a tool from MetaCert, a database of information used to provide a steer on a source’s bias.

“We will replace MetaCert and make the system work at the content level rather than a list lookup, using machine learning,” he said, also noting that Factmata plans to add other signals “beyond just if the content is politically hyperpartisan or hate speech, and more things like if it is opinionated, one-sided, and/or could be deemed controversial.” “We won’t deploy anything into the app until it reaches 90% accuracy,” Ghulati said. “Hopefully from there, humans get it more accurate, per a public testing set we will make available for all signals.”

Ghulati himself is a machine learning specialist, and while we haven’t heard a lot from Factmata in the last year, part of that is likely because building a platform from scratch to detect a problem that seems to have endless tentacles (like the web itself) is a challenge (just look at Facebook, which is heavily resourced and still seems to let things slip through).

He said that the eight algorithms it has built “work well” — more specifically, that they are rating at more than 90% accuracy on Factmata’s evaluation sets for U.S. English-language news articles. Meanwhile, it has been refining the algorithms on short-form content such as YouTube video transcripts, tweets and blog posts, and working on adding more languages, starting with French.

“The results are promising on the expanded types of content because we have been developing proprietary techniques to allow the models to generalise across domains,” he said.

Factmata also has been working with ad exchanges — as we noted back when Factmata first raised $1 million, this was one of the big frontiers it wanted to tackle, since ad networks are so often used to disseminate false information. It’s now completed case studies with 14 major ad exchanges, SSPs and DSPs and found that up to 4.92% of a sample of pages served in some ad exchanges contain high levels of hate speech or hyperpartisan language, “despite them thinking they were clean and them using a number of sophisticated tools with bigger teams than us.”

“This for us showed us there is a lot of this type of language out there that is being inadvertently funded by brands,” he noted.

It’s also been gathering more training data to help classify content, working with people who are “experts in the fields of hate speech or journalistic bias.” He said that Factmata has “proven our hypothesis” that using “expert driven AI” makes sense for classifying things that are inherently subjective. The numbers bear this out: using experts leads to inter-annotator agreement rates above 68%, whereas with non-experts, agreement on what is or is not a claim, or what is or is not bias, falls below 50%.
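For readers unfamiliar with the metric: inter-annotator agreement simply measures how often two labelers assign the same label to the same item, sometimes corrected for chance (Cohen’s kappa). The sketch below is purely illustrative — it is not Factmata’s code, and the example labels are invented — but it shows what an agreement rate like the 68% figure above is measuring.

```python
# Illustrative only: computing inter-annotator agreement for a binary
# labelling task (e.g. "is this sentence a claim?"). Not Factmata's code.
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items on which two annotators gave the same label."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for the chance that both annotators
    happened to pick the same label at random."""
    po = percent_agreement(a, b)           # observed agreement
    n = len(a)
    ca, cb = Counter(a), Counter(b)
    # Expected agreement given each annotator's own label frequencies.
    pe = sum((ca[lbl] / n) * (cb[lbl] / n) for lbl in set(a) | set(b))
    return (po - pe) / (1 - pe)

# Hypothetical labels from two expert annotators (1 = claim, 0 = not).
expert_1 = [1, 1, 0, 1, 0, 0, 1, 1]
expert_2 = [1, 1, 0, 1, 0, 1, 1, 1]
print(round(percent_agreement(expert_1, expert_2), 3))  # 0.875
print(round(cohens_kappa(expert_1, expert_2), 3))       # 0.714
```

On subjective tasks like bias labelling, raw agreement can look inflated when one label dominates, which is why chance-corrected measures like kappa are the usual yardstick.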

“The eyeo deal along with other commercial partnerships we’re working on are a sign: though the system is not 100% accurate yet, within a year of building and testing our tech is ready to start commercialisation,” Ghulati added.