How to build the foundation for a profitable AI startup

Investment in AI companies has entered a cautious phase. Following a year when the money directed at AI startups far outpaced any other sector, funding decisions have become more measured and grounded in fundamentals. Investors are warier of the AI hype and are looking for companies that will turn a profit.

Building a profitable AI business poses unique challenges beyond those faced when launching a typical tech startup. Systemic issues like the high cost of renting GPUs, a widening talent gap, towering salaries, and expensive API and hosting requirements can cause costs to quickly spiral out of control.

The coming months could be daunting for AI company founders as they watch their fellow leaders struggle or even fail in new businesses, but there is a proven path to profitability. I applied these steps when I joined SymphonyAI at the beginning of 2022, and we just wrapped up a year in which we grew 30% and approached $500 million in revenue run rate. The same formula worked at my previous companies (Cerence, Harman, Symphony Teleca and Aricent, among others): focusing on specific customer needs and capturing value across a particular industry. Here are the considerations that formed the foundation for those efforts along the way.

Build a realistic and accurate cost model

Startups face many challenges, but AI businesses have some unique factors that can skew financial models and revenue projections, leading to spiraling costs down the road. It’s easy to miscalculate here — decisions on big issues may have unintended consequences, while there’s a long list of non-obvious expenses to consider as well.

Let’s begin with one of the most important upfront decisions: Is it more cost-effective to use a cloud-based AI model or host your own? It’s a decision that teams must make early because as you head down your chosen path, you’ll either go deeper into the custom capabilities offered by the AI giants or you’ll begin building your own tech stack. Each of those carries significant costs.

Defining your answer begins with determining your particular use case, but generally, the cloud makes sense for training and inference if you won’t be moving vast amounts of data in and out of data stores and racking up huge egress fees. But be careful: if you expect to sell your solution for $25 per user per month with unlimited queries — and OpenAI is charging you per token behind the scenes — that model will fall flat pretty quickly as your unit economics fail to turn a profit.
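
To make that concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical assumption rather than a real price or usage figure, but plugging in your own values is worth doing before you commit to a flat per-seat price.

```python
# Back-of-the-envelope unit economics for a flat-rate seat priced at $25/month.
# All figures below are hypothetical assumptions, not real API pricing.

PRICE_PER_SEAT = 25.00              # what you charge per user per month
TOKEN_COST_PER_1K = 0.01            # assumed blended API cost per 1,000 tokens
AVG_TOKENS_PER_QUERY = 1_500        # prompt + completion, assumed
QUERIES_PER_USER_PER_MONTH = 600    # what "unlimited" usage looks like in practice, assumed

api_cost_per_user = (
    QUERIES_PER_USER_PER_MONTH * AVG_TOKENS_PER_QUERY / 1_000 * TOKEN_COST_PER_1K
)
gross_margin = PRICE_PER_SEAT - api_cost_per_user

print(f"API cost per user/month: ${api_cost_per_user:.2f}")
print(f"Gross margin per seat:   ${gross_margin:.2f} "
      f"({gross_margin / PRICE_PER_SEAT:.0%})")
```

If the assumed usage doubles, the margin shrinks fast, which is exactly the kind of sensitivity this quick check is meant to expose.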

Interestingly, one of the biggest stories of the past year, the boom in GPUs for AI, isn’t that big a factor in your ultimate gross margin equation. Most startups typically pick up a pre-deployed model and use an available API, with the onus on OpenAI to figure out the GPU allocation and give you the production capacity. It’s much more important to procure high-quality training data than to chase the latest GPU hardware — that’s the real foundation for a successful AI application built on top of an existing model.

Beyond those factors, there’s a host of other costs that can have outsized impacts. Don’t forget to factor ongoing data cleaning and PII (personally identifiable information) removal into your resource and budget allocations, as this is crucial for both model accuracy and risk mitigation. And think critically about your hiring plan — a balanced team of data scientists and industry experts, including remote roles, is essential to optimal growth and contextual decision-making.
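
For illustration, here is a deliberately simplified sketch of what a PII-scrubbing pass can look like. The patterns and placeholder names are assumptions made for the example; a production pipeline would rely on a dedicated detection library or service with much broader coverage.

```python
import re

# Illustrative patterns only; real PII removal needs far broader coverage
# (names, addresses, account numbers) and usually a dedicated tool or service.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace obvious PII with typed placeholders before text enters training data."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

print(scrub_pii("Contact Jane at jane.doe@example.com or 555-867-5309."))
```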

Go vertical, not horizontal

Building a broad AI platform or solution may be the biggest pitfall for promising AI businesses. A horizontal approach with general-purpose capabilities aims for a wide audience, but it leaves the company exposed to more focused competitors that build in specialized domain expertise and workflows, and it puts the onus on your customers to define their own use cases and fit the product to them. Other startups can take the same AI models and APIs and build a similar horizontal solution within a few months. And each new update or feature from AI giants like OpenAI and Google leaves horizontal businesses open to disruption.

A smarter approach is to go narrow and deep — identify an industry use case with urgent problems that AI can solve well and bring value to (not an easy task in itself), then channel all your efforts into building vertical models tailored and tuned to deliver maximum value for that use case. That means investing heavily in your technology and hiring subject matter experts to inform your software architecture and go-to-market strategy. Resist the temptation to scale horizontally until you have unequivocally solved your initial use case.

Fine-tune existing models

As part of this vertical approach, there’s no need to spend valuable capital training a model on massive general-purpose datasets. Once you’ve determined the specific vertical problem to solve, you can fine-tune open-source GPT-style models into domain-specific models that underpin your applications.
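
As a rough sketch of what that looks like in practice, the snippet below fine-tunes an open-source GPT-2 base model on a domain corpus using the Hugging Face Transformers and Datasets libraries. The model choice, the file name (domain_corpus.jsonl), and the hyperparameters are illustrative assumptions, not a recipe.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Assumptions for the example: GPT-2 as the open-source base model and a local
# JSONL file of domain-specific text with a "text" field.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("json", data_files="domain_corpus.jsonl", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="domain-model",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The same pattern applies whether the base model is GPT-2 or a larger open-source model; what changes is the domain data you curate and the evaluation you run against it.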

The use of digital copilots in industrial businesses, financial services, and retail illustrates this approach well. Tailored, vertically optimized predictive and generative AI together provide contextual answers to specific questions or generate and organize data for business insights.

Know when to say when

One of the most critical product decisions on your way to profitability is: How do you know when your AI solution is ready for production? The sooner you can go to market, the sooner you can monetize your hard work. Training and fine-tuning models can go on indefinitely, so creating a standardized benchmark that can serve as both an evaluation and a comparison point is essential.

Begin by comparing your model against existing rule-based engines. Does it perform the work better than what’s in the market today? Does it help upskill less experienced team members to perform more like their highest-performing peers? That’s what makes a compelling value proposition for a prospective customer. You’re aiming to measure real-world results, not hypothetical potential.
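
In practice, that comparison can be as simple as a shared benchmark harness that scores the incumbent rule-based engine and the candidate model on the same held-out cases. The function and variable names below are hypothetical; the point is the structure, not the specifics.

```python
# Minimal sketch of a standardized benchmark: both systems are scored on the same
# golden set, so releases are judged on measured results rather than demo impressions.
# `rule_based_engine` and `ai_model` are hypothetical callables; `golden_set` is a
# held-out list of (input, expected_output) pairs curated with domain experts.

def accuracy(predict, golden_set):
    correct = sum(1 for case, expected in golden_set if predict(case) == expected)
    return correct / len(golden_set)

def ready_for_production(rule_based_engine, ai_model, golden_set, required_lift=0.05):
    baseline = accuracy(rule_based_engine, golden_set)
    candidate = accuracy(ai_model, golden_set)
    print(f"baseline={baseline:.1%}  candidate={candidate:.1%}")
    return candidate >= baseline + required_lift  # ship only with a clear, measured lift
```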

There’s always a trade-off between improving the accuracy and relevance of your training data and the cost of training on it. At some point, you’ll need to decide how much data is enough and when to stop: weigh the incremental training cost against the benefit the end user will actually derive from a few additional points of inference quality for that use case. (One example: we have an industrial AI model with 10 trillion data points available for training, but we stopped at 3 trillion for our first release.)
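
One way to make that stopping decision less subjective is to track quality and cumulative cost at each data checkpoint and stop once the marginal gain per dollar falls below a threshold you set up front. The checkpoint figures below are invented purely for illustration.

```python
# Illustrative stopping rule for the data/cost trade-off: keep adding training data
# only while the incremental quality gain per dollar clears a preset threshold.
# All checkpoint figures below are invented for illustration.

checkpoints = [  # (data points used, eval quality score 0-1, cumulative training cost $)
    (1e12, 0.84, 400_000),
    (2e12, 0.88, 900_000),
    (3e12, 0.90, 1_500_000),
    (4e12, 0.905, 2_300_000),
]

MIN_GAIN_PER_MILLION = 0.01  # require at least +0.01 quality score per extra $1M spent

for (_, q0, c0), (n, q1, c1) in zip(checkpoints, checkpoints[1:]):
    gain_per_million = (q1 - q0) / ((c1 - c0) / 1_000_000)
    if gain_per_million < MIN_GAIN_PER_MILLION:
        print(f"Stop before {n:.0e} data points: only "
              f"{gain_per_million:.3f} quality points per extra $1M")
        break
```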

The road to profitability

The coming year will mark a dividing line in the growth of enterprise AI. After the hype of 2023, it will take more than an eye-popping product demo to attract investors or close a sale: AI companies will need to demonstrate a thoughtful approach to their business and more fully developed products ready for testing and deployment — with bonus points for having real customers whose feedback on requirements and testing improves the product.

AI companies still have immense potential, but those that succeed will need to stay nimble, contain costs, and resist scope creep in these final shaping stages. Profitability awaits those who move confidently forward.