Spectrum Labs raises $32M for AI-based content moderation that monitors billions of conversations daily for toxicity

Two years into the pandemic, online conversations are still, for many of us, the primary interactions we have every day, and collectively we are having billions of them. But as many of us have discovered, not all of those are squeaky clean, positive experiences. Today, a startup called Spectrum Labs, which provides artificial intelligence technology to platform providers to detect and shut down toxic exchanges in real time (specifically, in 20 milliseconds or less), is announcing $32 million in funding. It plans to use the money to continue investing in its technology, to double down on its growing consumer business, and to forge ahead in a new area: services for enterprises covering their internal and customer-facing conversations, which not only help detect when toxicity is creeping into exchanges but also provide an audit trail of that activity for wider trust and safety tracking and initiatives.

“We aspire to be the leaders in language where civility matters,” CEO Justin Davis said in an interview.

The round is being led by Intel Capital, with Munich Re Ventures, Gaingels, OurCrowd, Harris Barton, and previous backers Wing Venture Capital, Greycroft, Ridge Ventures, Super{set} and Global Founders Capital also participating. Greycroft led Spectrum’s previous round of $10 million in September 2020, and the company has now raised $46 million in total.

Davis, who co-founded the company with Josh Newman (the CTO), said Spectrum Labs is not disclosing valuation, but the company’s business size today speaks to how it’s been doing.

Spectrum Labs today works with just over 20 big platforms, including social networking companies Pinterest and The Meet Group, dating app Grindr, Jimmy Wales' entertainment wiki Fandom, Riot Games and e-learning platform Udemy, which in turn have millions of customers sending billions of messages to each other, either in open chat rooms or in more direct, private conversations.

Its technology is based on natural language understanding and works in real time on both text-based and audio interactions.

Davis notes that audio is analyzed directly as audio, not transcribed to text first, which gives Spectrum's customers a significant head start in responding to the activity and counteracts what he called "the Wild West nature of voice." Without such technology, responses are typically slow: a platform has to wait for users to flag questionable content, then find that audio in the transcriptions, and only then can it take action, a process that can take days.
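To make that real-time constraint concrete, here is a minimal, purely hypothetical sketch (not Spectrum Labs' actual API) of how a platform might screen each message in-line against a toxicity classifier under a hard latency budget, falling back to deliver-and-review if the check runs long. Every name and threshold here is an assumption for illustration.

```python
# Hypothetical sketch (not Spectrum Labs' actual API): screening each chat
# message in-line against a toxicity classifier under a hard latency budget.

import time
from dataclasses import dataclass

LATENCY_BUDGET_MS = 20  # the article cites a real-time target of 20 ms or less


@dataclass
class Verdict:
    allowed: bool
    labels: list[str]   # e.g. ["harassment", "hate_speech"]
    confidence: float


def classify_message(text: str) -> Verdict:
    """Stand-in for a call to a hosted toxicity-detection model."""
    flagged = any(term in text.lower() for term in ("slur", "threat"))
    return Verdict(allowed=not flagged,
                   labels=["harassment"] if flagged else [],
                   confidence=0.97 if flagged else 0.05)


def screen(text: str) -> Verdict:
    start = time.perf_counter()
    verdict = classify_message(text)
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > LATENCY_BUDGET_MS:
        # The verdict arrived too late to block delivery in-line: deliver the
        # message anyway and queue it for asynchronous review.
        return Verdict(allowed=True, labels=["timed_out"], confidence=0.0)
    return verdict


if __name__ == "__main__":
    print(screen("hello, nice to meet you"))
    print(screen("this message contains a threat"))
```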

This is all the more important since voice-based services are growing in popularity, with the rise not just of podcasting but also of services like Clubhouse and Twitter Spaces.

Whether text or audio, Spectrum scans these exchanges for toxic content across more than 40 behavior profiles, which it built initially in consultation with researchers and academics around the world and continues to hone as it ingests more data from across the web. The profiles cover parameters like harassment, hate speech, violent extremism, scams, grooming, illegal solicitation and doxxing. It currently supports scanning in nearly 40 languages, Davis tells me, adding that there is no hard limit on which languages it could cover.

“We can technically cover any language in a matter of weeks,” he said.
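For illustration only, here is a hedged sketch of how model scores might be mapped onto named behavior profiles with per-platform thresholds. The profile names come from the article; the scores, thresholds and function shape are assumptions, not Spectrum Labs' implementation.

```python
# Hypothetical sketch: mapping raw model scores onto the behavior profiles the
# article names, with per-platform thresholds a trust & safety team could tune.

BEHAVIOR_PROFILES = [
    "harassment", "hate_speech", "violent_extremism",
    "scams", "grooming", "illegal_solicitation", "doxxing",
]

# A dating app and a kids' game would likely tune these very differently.
DEFAULT_THRESHOLDS = {profile: 0.8 for profile in BEHAVIOR_PROFILES}


def triggered_profiles(scores: dict[str, float],
                       thresholds: dict[str, float] = DEFAULT_THRESHOLDS) -> list[str]:
    """Return the behavior profiles whose score crosses the platform's threshold."""
    return [p for p in BEHAVIOR_PROFILES if scores.get(p, 0.0) >= thresholds[p]]


if __name__ == "__main__":
    model_scores = {"harassment": 0.91, "scams": 0.12, "doxxing": 0.05}
    print(triggered_profiles(model_scores))  # ['harassment']
```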

The most visible examples of online toxicity have been in the consumer sphere, where they have played out as bullying, hate speech and other illegal activity in both open forums and more private exchanges. Spectrum Labs will continue to work in this area and invest in technology to detect ever more complicated and sophisticated approaches from bad actors. One focus will be improving how customers themselves can play a role in deciding what they do, and definitely do not, want to see, alongside controls and tools for a platform's trust and safety team. This is a tricky area, and arguably one reason toxicity has gotten out of hand: platforms have traditionally wanted to take a hands-off, free-speech approach and not meddle in content, since the other side of that coin is being accused of censorship, a debate that is still very much playing out today.

“There is a natural tension between what the policy implements and what users want and are willing to accept,” Davis said. His company’s view is that the job of a platform “is keeping the worst of the worst off, but also to provide consumer controls to make selection over what they want to see over time.”

Alongside that, Spectrum plans to move more into enterprise services.

The opportunity in enterprise is an interesting one. It includes not just how people within a company converse with each other (which largely might take a similar form to the consumer-facing services Spectrum Labs already provides), but also how a company interfaces with the outside world in areas like sales, customer service and marketing. The information Spectrum Labs gathers in its analytics could then be used to change how each of those areas operates.

To be sure, this is not a market segment that has been ignored. Spectrum’s competitors here will include another startup in the conversation monitoring space, Aware, which focuses on enterprise exclusively. (L1ght, meanwhile, is another competitor in the consumer sphere.)

And there will certainly be others. We noted when we last wrote about Spectrum Labs that the founders and founding team came from Krux, a marketing technology company that was acquired by Salesforce (where they worked before leaving to found Spectrum Labs). I wouldn't be surprised to see Salesforce taking a greater interest in this area in the future, not least because it is building out a very wide toolset to help companies run their businesses more efficiently, not limited to CRM; but also because Bret Taylor, who once founded another social network and used to be the CTO of Facebook, is now helping to run the show there, and may well have an especially informed grip on how communications forums can be used and abused.

For now, to address both the consumer and enterprise opportunities, Intel is coming in as a strategic investor in this round, Davis tells me. The plan is to integrate Spectrum Labs' technology more closely with Intel's chip designs, which will make it work even faster, and which Intel will be able to use as a unique selling point with would-be hardware customers as they themselves give higher priority to trust and safety issues.

“We believe Spectrum Labs’ Natural Language Understanding technology has the potential to become the core platform that powers the trust initiatives of thousands of companies around the world,” said Mark Rostick, VP and senior MD at Intel Capital, in a statement. “As digital trust and ethical operations emerge as a key factor to help organizations differentiate themselves, we see a huge opportunity to build a Trust & Safety tech layer into enterprise operations.”