Will startups have a shot in the enterprise AI race?

It’s impossible to escape AI chatter as the largest tech companies race to build or partner with new large language models and integrate them into their software and search services. The underlying technology is advancing quickly enough that we have seen calls to pause the work and watched Congress interrogate technology leaders on the subject.


But while ChatGPT and other, similar tools are popular, there’s a less discussed side to the current artificial intelligence race: the enterprise.

Recent news from Appian, a public software company, and Neeva, a startup born to build a search engine that could compete with offerings from majors, makes plain that the number of participants racing to build AI tooling and services for large companies will be healthy. Given how lucrative selling software to large corporations can prove, these players are not chasing a small market.

TechCrunch+ has covered enterprise AI in the present context several times in recent weeks, providing much-needed intellectual footing. It’s important to understand what Databricks and Cisco are building, after all. But I have a different question: Do smaller tech companies also have a chance at market share?

This morning let’s remind ourselves how generative AI may fit into the enterprise, and then dig into the latest news that matters to form a clearer understanding of the direction tech companies are building toward.

Industry or company?

What makes ChatGPT and related tools so fun to use is that you can throw nearly anything at them and they will come up with a retort. Want a generative AI service to write you a haiku about Dream Theater’s discography? Sure! Here’s what ChatGPT gave me this morning:

Melodies cascade,
Dreams painted with symphonies,
Time’s journey unfolds.

I regret to inform you that that is a better — and roughly 1,000x faster — poem than I could have managed with the same prompt.

But while it’s incredibly neat to use generative AI tools built from simply massive datasets, corporations have different needs and priorities than the humble consumer population of the world. Different needs, different inputs and different outputs. As Ron Miller wrote last month, “What if each industry or even each company had its own model trained to understand the jargon, language and approach of the individual entity?”

Recent news underscores that Ron might’ve been onto something.

In its most recent earnings call, enterprise automation company Appian discussed its efforts to integrate new AI technologies into its own software corpus. Appian provides process-mining and automation tools, alongside low-code app creation capabilities, for reference. Here’s Appian CEO Matt Calkins (via a Motley Fool transcript):

We announced [a new feature] which I call low-code AI that makes it easy for customers to cultivate their own AI on Appian connected data sets. This public-private split separates Appian from its largest competition. By being a champion of private AI, we appeal to buyers who prefer not to share their data assets. Our ability to assemble large data sets to train private AI algorithms comes from a feature called data fabric.

Data fabric is a fancy term for a virtual database and it means that we can address data from across the enterprise like it was together even though it remains apart. This strategy is preferable for our clients who dislike having to relocate data. Data is the hardest part of building and running processes, so this feature constitutes a substantial advantage. Our data fabric in turn gives us a critical edge at inventing the next generation of process mining.
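
A quick aside to make “data fabric” a bit more concrete: in general industry usage, the term describes a query layer that federates separate stores instead of copying them into one warehouse. Here is a minimal, hypothetical Python sketch of that general pattern. This is my own illustration, not Appian’s implementation, and every name in it is invented:

    # Hypothetical sketch of a "data fabric": a virtual view over separate
    # data sources, queried in place rather than relocated into one store.

    # Two independent systems of record -- in real life these might be a
    # CRM database and an ERP database living in different places.
    crm_records = [
        {"customer_id": 1, "name": "Acme Corp"},
        {"customer_id": 2, "name": "Globex"},
    ]
    erp_records = [
        {"customer_id": 1, "open_invoices": 3},
        {"customer_id": 2, "open_invoices": 0},
    ]

    class DataFabric:
        """Addresses data 'like it was together even though it remains apart'."""

        def __init__(self, sources):
            self.sources = sources  # name -> records, left where they live

        def lookup(self, key_field, key_value):
            # Stitch together every source's record for one entity at
            # query time; nothing is copied into a central warehouse.
            merged = {}
            for records in self.sources.values():
                for record in records:
                    if record.get(key_field) == key_value:
                        merged.update(record)
            return merged

    fabric = DataFabric({"crm": crm_records, "erp": erp_records})
    print(fabric.lookup("customer_id", 1))
    # {'customer_id': 1, 'name': 'Acme Corp', 'open_invoices': 3}

The point of the pattern is that the join happens at query time: the CRM and ERP records never move, which maps to the “dislike having to relocate data” concern Calkins describes.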

Appian, worth a few billion dollars and en route to more than $500 million in revenue this year, is not a massive technology company. It’s simply a big one that went public back in 2017. It thinks that its place as digital connective tissue between corporate datasets — helping companies find processes that are inefficient and can be automated — will give it an edge in providing AI services to customers who don’t want to lever mass-market tools.

This is actually pretty cool. Not that I have a dog in the “who will win the enterprise AI race” fight; I don’t. But I do like a competitive market, as those tend not only to generate the quickest pace of innovation but also to allow for greater customer surplus thanks to competitive pricing. If Appian thinks that it has an edge and can bring its new tech to market quickly, it could carve out a nice slice of the future enterprise AI market (generative enterprise AI? enterprise generative AI? generative AI for enterprise?) for itself.

It’s good that Appian is going to take on the competing products that I presume will issue forth from tech giants by the legion, but what about even smaller tech companies? What about startups themselves?

Neeva is an interesting case. The search-focused startup wanted to build a new search engine that was not monetized by ads. Instead, users would pay a small monthly fee, and Neeva would be able to invest those revenues into search tech that served end users and not advertisers. The idea was neat.

But over the weekend, Neeva mothballed its consumer search engine. Why? Because what it initially built was an interesting take on the old, or classic, search model. With consumer demand and corporate search work shifting rapidly toward using LLMs to generate answers rather than lists of pertinent links, Neeva had to adapt to a new reality:

In early 2022, the upcoming impact of generative AI and LLMs became clear to us. We embarked on an ambitious effort to seamlessly blend LLMs into our search stack. We rallied the Neeva team around the vision to create an answer engine. We are proud of being the first search engine to provide cited, real-time AI answers to a majority of queries early this year.

However, the company added that user acquisition had proved difficult. It was easier, Neeva shared, to get users to pay for its service than it was to get them to try it out in the first place. With ChatGPT and LLM-powered search from majors like Bing and Google consuming headlines, Neeva decided to try something new:

Over the past year, we’ve seen the clear, pressing need to use LLMs effectively, inexpensively, safely, and responsibly. Many of the techniques we have pioneered with small models, size reduction, latency reduction, and inexpensive deployment are the elements that enterprises really want, and need, today. We are actively exploring how we can apply our search and LLM expertise in these settings, and we will provide updates on the future of our work and our team in the next few weeks.

It’s too early to say whether Neeva will manage to use its tech to create internal LLMs for enterprises, but the fact that it is trying at all is interesting. Perhaps it, too, will manage to score some market share in a new-ish market and give Big Tech an even denser competitive landscape to try to dominate. If it can, perhaps other startups will be able to as well.

A final thought: The Neeva pivot might feel like a move away from search. But if search is going the way of LLMs, and Neeva is simply taking that tech and applying it to a particular customer type, would it be fair to say that it is still pursuing search? Enterprise search, sure, but all the same it’s a question I am chewing on. Taking it one step further, is Appian building an enterprise search engine? Perhaps.

If we expand the definition of search to AI-powered answers, and we expect those same LLMs to help us create and execute tasks, perhaps search is simply evolving into a chat box that can answer questions, create on command, and help execute tasks. If so, then a lot of companies are going to be fighting for the same enterprise turf. Here’s hoping that some startups get a piece of the action.