How this VC evaluates generative AI startups

The launch of ChatGPT in November of 2022 propelled our world into the Age of AI, and the tech industry will never be the same.

Nearly every pitch deck I’ve seen since December has had AI on its first two pages.

As with any emerging technology, however, venture capitalists like me have had to quickly develop a strategy for separating high-potential startups from those that are mostly hype, or that face insurmountable obstacles to reaching venture scale.

Making that distinction requires fluency in the layers of the generative AI value stack, a view on which layers are ripe for investment, and a due diligence strategy for evaluating the risks and opportunities of a given startup.

Specifically, generative AI is composed of:

  • Data.
  • Middleware.
  • Fine-tuned specialized models.
  • The cloud and infrastructure layer.
  • Foundational models.
  • The application layer.

Within this tech stack, there are a few areas that we think are especially investable and others that are more challenging for a seed-stage company to compete in. Here’s how we break it all down.

Areas we’re interested in

Data

One of generative AI’s greatest challenges — and thus one of its greatest areas of opportunity — is the accuracy and reliability of the information it provides. Today, generative AI models are built on massive datasets, some as broad as the internet itself, containing relevant and useful information alongside a whole lot of everything else.

We believe that the galaxy of generative AI applications that will emerge in the coming years will be built on more precise data and on ensembles of smaller, more specialized models. Rather than casting a wide net, these specialized models will draw on proprietary, domain-specific data, which both personalizes the application’s output and improves its accuracy.



Middleware

Accompanying the data layer of the generative AI stack is middleware, which we define as the tooling and infrastructure that supports the development of new generative AI applications. It is the second part of our investment thesis in the sector.

Specifically, we are bullish on infrastructure and tooling companies that evaluate and ensure safety, accuracy, and privacy across model outputs; orchestrate inference across multiple models; and optimize how proprietary data is incorporated into large language models (LLMs).

Fine-tuned specialized models

Infusing proprietary data into foundational models, combined with the right middleware architecture, results in a specialized model capable of powering the application layer with which consumers and businesses interact. These applications will build major moats not only through access to proprietary data and specialized models, but also through traditional advantages like distribution and user experience.

Areas we aren’t interested in

Cloud and infrastructure

Hardware and software infrastructure players — including semiconductor and chip-making companies, and the cloud data centers that host GPU compute at scale — are vital pieces of the generative AI economy. Nearly everything in generative AI passes through a cloud-hosted GPU, and almost all of that cloud infrastructure is owned by one of the big three providers: Google, Microsoft, and Amazon. As a result, no seed-stage company will be able to compete here.

Foundational Models

Foundational models from companies like OpenAI, Cohere, and Stability AI are the pioneers of generative AI, and those companies have appropriately become household names within the technology industry. These models are built on hundreds of billions of parameters, took years to create, and cost hundreds of millions of dollars to train. Their funding histories alone underscore the sheer investment needed to build these products. No seed-stage company can overcome the head start these AI-native companies have already gained.

Theses within investment areas of the tech stack

Vertical applications

We believe the most successful generative AI–powered business and consumer applications of the future will extend beyond the foundational models and include ensembles of specialized models that are yet to be formed. These purpose-built AI models will be tailored through fine-tuning, in-context learning, or other techniques to fulfill specific parts of a use case or workflow.
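To make the distinction concrete, here is a minimal sketch of in-context learning, one of the tailoring techniques named above: instead of retraining a model’s weights, a handful of labeled domain examples are placed directly in the prompt. The legal-review use case, example texts, and function names are all hypothetical illustrations.

```python
# Hypothetical few-shot prompt for a legal-review workflow. In-context
# learning steers a general-purpose model with examples in the prompt
# rather than by fine-tuning its weights.

FEW_SHOT_EXAMPLES = [
    ("Party A shall indemnify Party B against all losses...", "indemnification clause"),
    ("This Agreement may be terminated by either party upon notice...", "termination clause"),
]

def build_prompt(examples, new_text):
    """Assemble a few-shot classification prompt from labeled examples."""
    lines = ["Classify each contract excerpt by clause type.", ""]
    for excerpt, label in examples:
        lines.append(f"Excerpt: {excerpt}")
        lines.append(f"Clause type: {label}")
        lines.append("")
    # The unlabeled excerpt goes last; the model completes the label.
    lines.append(f"Excerpt: {new_text}")
    lines.append("Clause type:")
    return "\n".join(lines)

prompt = build_prompt(FEW_SHOT_EXAMPLES, "Neither party shall be liable for delays...")
```

Fine-tuning, by contrast, would bake such examples into the model’s weights ahead of time; in-context learning trades higher per-query token cost for the flexibility to swap domain examples at runtime.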

Models will leverage proprietary data unique to their respective domains, enhancing application output, personalization, and accuracy. If the data needed for a use case is largely public, that limits the value a startup can provide. If accessing that data instead requires your startup to connect to a customer’s data warehouse or other applications, there is potential to provide lasting value. Even then, not only must your AI outputs be defensible, but the software and workflow automation you wrap around them must also be robust.

We believe there is significant potential for this kind of technology in the legal, healthcare, finance, retail, logistics, manufacturing, and hospitality sectors, specifically in supporting document analysis, HR, process automation, generative design, and agent support.

ML middleware

Foundational models do not simply work “out of the box.” In fact, most require ancillary steps that typically encompass model orchestration, model operations, prompt engineering, safety and privacy, and developer frameworks.
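Two of those ancillary steps, model orchestration and safety screening, can be sketched in a few lines. Everything here is a toy stand-in: the model functions are stubs, and a real system would call hosted or self-hosted LLM endpoints and use far more sophisticated filters.

```python
# Toy middleware sketch: route a request to the right model for the task
# (orchestration), then screen the output before returning it (safety).

def summarize_model(text):
    """Stub for a summarization-tuned model."""
    return text[:50] + "..."

def code_model(text):
    """Stub for a code-generation model."""
    return f"# generated for: {text}"

ROUTES = {"summarize": summarize_model, "code": code_model}

BLOCKLIST = {"ssn", "password"}  # toy privacy filter

def orchestrate(task, text):
    """Pick a model by task, run inference, and filter the output."""
    model = ROUTES.get(task)
    if model is None:
        raise ValueError(f"no model registered for task {task!r}")
    output = model(text)
    if any(term in output.lower() for term in BLOCKLIST):
        return "[output withheld by safety filter]"
    return output

result = orchestrate("code", "parse an invoice PDF")
```

The value middleware vendors capture sits in exactly these seams: routing, evaluation, and filtering logic that application teams would otherwise rebuild for every product.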

More powerful and flexible tooling will empower existing builders who want to leverage generative AI. Democratizing this and making the foundation model stack accessible to a much broader population of new builders will accelerate development and adoption in the space.

We believe this will have significant implications for developer frameworks, data sources and actions, evaluation, and “model operations” more broadly.

Diligence

When a startup does fall within one of these investable areas, there are nuances to conducting a thorough due diligence process that accurately captures the risks and opportunities of a generative AI company.

We’ve combined these nuances with our long-standing best practices for evaluating early-stage software companies to create a comprehensive deal evaluation framework specific to AI. A selection of these areas includes the following:

Team

AI is the new hot sector. We want to avoid “tourist” founders who simply want to build in a trendy space; instead, we seek true AI experts with demonstrated prowess.

AI stack and architecture

While foundation models will likely commoditize in the future, for now, model choice matters. Therefore, an AI product’s value depends on the architecture that developers build around it. This includes technical decisions like prompts, embeddings and their storage and retrieval mechanisms, context window management, and intuitive UX (user experience) design that guides users in their product journeys.
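Two of the architectural decisions listed above, embedding retrieval and context window management, can be illustrated with a self-contained sketch. The 3-dimensional vectors and snippet corpus are toy stand-ins; a real product would use a learned embedding model and a vector store.

```python
# Toy retrieval-and-packing sketch: rank stored snippets by embedding
# similarity to the query, then keep as many as fit in a fixed
# context-window token budget.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# (embedding, text, token_count) triples standing in for an indexed corpus.
STORE = [
    ([0.9, 0.1, 0.0], "Refund policy: 30 days.", 6),
    ([0.1, 0.9, 0.0], "Shipping takes 5-7 days.", 7),
    ([0.8, 0.2, 0.1], "Refunds exclude gift cards.", 6),
]

def retrieve_for_context(query_vec, budget_tokens):
    """Most-relevant snippets first; stop adding when the budget is full."""
    ranked = sorted(STORE, key=lambda row: cosine(query_vec, row[0]), reverse=True)
    picked, used = [], 0
    for _, text, tokens in ranked:
        if used + tokens <= budget_tokens:
            picked.append(text)
            used += tokens
    return picked

# A refund-related query vector pulls in the two refund snippets, and the
# 12-token budget excludes the shipping snippet.
context = retrieve_for_context([1.0, 0.0, 0.0], budget_tokens=12)
```

Decisions like the ranking function, the token budget, and what to drop when the budget overflows are exactly the kind of around-the-model architecture where product teams differentiate.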

Data

This includes an evaluation of the proprietary data the company has access to and how it is training its models to learn over time.

Training

How does the team benchmark its specialized models against the foundation models? How accurate are its models? A step-function improvement in either area demonstrates the opportunity for a new generative AI application.

Unit economics

This new AI stack introduces new compute costs, which demand a fresh unit-economics analysis for every generative AI business. Particularly in this macroeconomic funding environment, compelling unit economics are essential.
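A back-of-the-envelope version of that analysis fits in a few lines. Every number below is an illustrative assumption, not a real price; the point is the shape of the calculation, with inference cost scaling per token and per query.

```python
# Illustrative unit-economics sketch for an LLM-backed subscription product.
# All figures are assumptions for demonstration only.

PRICE_PER_1K_TOKENS = 0.002      # assumed blended inference cost, USD
TOKENS_PER_QUERY = 1500          # assumed average prompt + completion size
QUERIES_PER_USER_MONTH = 200     # assumed monthly usage per user
SUBSCRIPTION_PRICE = 20.0        # assumed USD per user per month

def monthly_compute_cost_per_user():
    """Inference cost per user per month: tokens x price x queries."""
    return PRICE_PER_1K_TOKENS * TOKENS_PER_QUERY / 1000 * QUERIES_PER_USER_MONTH

def gross_margin():
    """Fraction of subscription revenue left after compute costs."""
    cost = monthly_compute_cost_per_user()
    return (SUBSCRIPTION_PRICE - cost) / SUBSCRIPTION_PRICE

cost = monthly_compute_cost_per_user()   # 0.002 * 1.5 * 200 = $0.60
margin = gross_margin()                  # (20 - 0.60) / 20 = 0.97
```

Under these assumptions margins look healthy, but heavier usage, longer contexts, or multi-model pipelines can multiply the per-query token count and erode them quickly, which is why we run this analysis on every deal.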

David versus Goliath

There has rarely been a market where incumbents, with their immediate distribution advantage, can so readily launch generative AI within their existing product suites. We call this the David versus Goliath framework, and this form of defensibility analysis is something we spend a lot of time on for every generative AI deal we see.