Atla wants to build text-generating AI models with ‘guardrails’

Today’s most capable text-generating AI models are also those most likely to make mistakes.

It’s well-established at this point that text-generating models hallucinate, or make up facts, and fall victim to all sorts of biases and toxicities — including sexism, Anglocentrism and racism. For example, without sufficient filtering, GPT-4 — OpenAI’s flagship model — dispenses advice on how to self-harm without anyone noticing, synthesize dangerous chemicals and write ethnic slurs to avoid social media moderation.

Obviously, all that’s anathema to the enterprises looking to build these models into their apps and services. According to a recent Gartner survey, 58% of companies are concerned about incorrect or biased outputs from models. A similar percentage said that they were worried about the models leaking confidential information — another notorious characteristic of text-generating models.

Work on AI models continues. But for companies eager to deploy the models — particularly the open source models — already available, there are startups like Atla. Co-founded by Maurice Burger and Roman Engeler, Atla is building what Burger describes as “guardrails” for text-analyzing and -generating models in “high-stakes” domains.

Burger previously co-founded the startup Syrup Tech, which develops AI-powered ecommerce inventory software. Engeler, meanwhile, was previously an AI researcher at Stanford, where he studied text-generating models and their existential risks.

Atla’s mission, Burger says, is to build safer AI systems by improving their truthfulness, reducing their harmfulness and increasing their reliability. The company’s first product is a model for legal research trained in collaboration with teams at Volkswagen and N26, which responds to questions with citations from “trusted” legal sources.

Why focus on AI for legal research first? The demand’s palpable, Burger says. To avoid errors, corporate counsel often leans on external law firms — which are expensive and time-consuming. It’s not uncommon for a legal professional to spend hours reviewing dozens of documents to answer a single question, Burger says — a burden a reliable AI system could, in theory, massively alleviate.

“We’re excited by the enormous potential of generative AI and by the challenge of pushing the limits of reliability of [text-analyzing] models,” Burger said in a statement. “At Atla, we’re committed to creating safer AI systems that are designed to perform reliably in high-stakes situations.”

It’s a sensible goal — if an ambitious one. Atla is revealing little about how, exactly, it’s making AI systems “safer.” I’m skeptical myself — if a safety-focused AI company as well-funded and high-profile as Anthropic can’t build a vastly less biased, less hallucination-prone text-generating model, well… Atla has its work cut out for it.

Plus, Atla isn’t the only startup working on building safer text-generating AI. There’s Protect AI, Fairly AI and Kolena, to name a few, as well as the recently-emerged-from-stealth Vera and Calypso.

But Atla’s attracted investments — which means that at least a few folks are willing to put cash behind its projects. Today, Atla announced that it secured $5 million in seed funding in a round led by Creandum with participation from Y Combinator and Rebel Fund.

Here’s Creandum partner Hanel Baveja:

“From our first interactions, we’ve been incredibly impressed by the ambition, relentless work ethic and deep AI expertise from Maurice and Roman,” Baveja said via email. “We’re excited to join the Atla team in their journey to build reliable, safe and trusted AI applications for the sectors where this matters most.”

Burger says that the new cash will be put toward scaling Atla’s tech, going live with more customers and recruiting for technical roles on its London-based team.