How to avoid AI commoditization: 3 tactics for running successful pilot programs

With the rise of open-source AI models, the commoditization of this groundbreaking technology is upon us. It’s easy to fall into the trap of aiming a newly released model at a desirable tech demographic and hoping it catches on.

Building a moat when so many models are easily accessible poses a dilemma for early-stage AI startups, but leveraging deep relationships with customers in your domain is a simple yet effective tactic.

The real moat is a combination of AI models trained on proprietary data and a deep understanding of how an expert goes about their daily tasks, which lets you solve nuanced workflow problems.

In highly regulated industries where outcomes have real-world consequences, data storage must clear a high bar of compliance checks. Customers typically prefer companies with established track records over startups, which fragments the industry’s data: no single player has access to all of it. Today, players of all sizes hold datasets behind highly compliant, walled-garden servers.

This creates an opportunity for startups with existing relationships: approach potential customers who would typically outsource their technology, and launch a test pilot of your software to solve a specific customer problem. These relationships can arise through co-founders, investors, advisors, or prior professional networks.

Showing customers tangible credentials is an effective way to build trust. Positive indicators include team members from a university known for AI expertise, a strong demo in which the prototype lets prospective customers visualize outcomes, and a clear business case analysis of how your solution will help them save or make money.

One mistake founders commonly make at this stage is assuming that building models on client data is sufficient for product-market fit and differentiation. In reality, finding PMF is much more complex: simply throwing AI at a problem creates issues with accuracy and customer acceptance.

Augmenting experienced experts in highly regulated industries, who have intricate knowledge of day-to-day changes, typically turns out to be a tall order. Even AI models trained well on data can lack the accuracy and nuance of expert domain knowledge or, more importantly, any connection to current reality.

A risk-detection system trained on a decade of data may know nothing about expert conversations or recent news that render a formerly “risky” widget completely harmless. Likewise, a coding assistant may suggest completions for an old version of a front-end framework that has since shipped a succession of high-frequency breaking releases.
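To make the framework example concrete, here is a minimal sketch in TypeScript. The endpoint and component names are hypothetical; the deprecation itself is real: React renamed componentWillMount to UNSAFE_componentWillMount in 16.9 and has discouraged it since, while hooks (16.8+) became the current idiom. An assistant trained on older code might still suggest the stale pattern:

```tsx
import React, { useEffect, useState } from "react";

// Stale suggestion an assistant trained on pre-2019 React might produce:
// fetching data in a lifecycle method that has since been deprecated.
class WidgetListOld extends React.Component<{}, { widgets: string[] }> {
  state = { widgets: [] as string[] };

  UNSAFE_componentWillMount() {
    // "/api/widgets" is a hypothetical endpoint for illustration.
    fetch("/api/widgets")
      .then((res) => res.json())
      .then((widgets: string[]) => this.setState({ widgets }));
  }

  render() {
    return (
      <ul>
        {this.state.widgets.map((w) => (
          <li key={w}>{w}</li>
        ))}
      </ul>
    );
  }
}

// The same component in the current idiom: hooks, with the fetch in
// useEffect so it runs once after the first render.
function WidgetList() {
  const [widgets, setWidgets] = useState<string[]>([]);

  useEffect(() => {
    fetch("/api/widgets")
      .then((res) => res.json())
      .then((data: string[]) => setWidgets(data));
  }, []);

  return (
    <ul>
      {widgets.map((w) => (
        <li key={w}>{w}</li>
      ))}
    </ul>
  );
}

export { WidgetListOld, WidgetList };
```

Both components compile, but a pilot customer reviewing the first one would reasonably question whether the assistant understands their current stack.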

In situations like these, startups are better off relying on the pattern of launching and iterating, even with pilots.

There are three key tactics in managing pilots:

Shadow your customers

What users say can differ from what they do. Product designers need to develop a holistic understanding of users’ pain points instead of simply building to feature requests.

What’s the best way to study user behavior early on, especially during onsite pilots? Have product teams observe in silence and take detailed notes while customers work through their tasks.

This often generates unique insights for the startup team and, most importantly, empathy for people who struggle with complex workflows throughout their workday. Screen sharing is also effective, but observing in person is optimal.

Manage their requirements closely

Customers are likely to have a laundry list of feature requests, but it’s your team’s job to figure out the 10% that will get 90% of the job done or render most other features unnecessary.

Focus on designing a few elements instead of shipping a multitude of features.

Stage weekly demos

Every week, the team must figure out the most important things to work on and push hard to launch them that same week, even in beta. The week ends with “office hours,” where users can give feedback on existing features and live feedback on the new ones.

Iterating in this manner for a few months helps the initial pilot evolve into a product users actually want, tailored to their needs. Iterating on AI models and workflows while getting feedback from industry experts is the best way to reach PMF.

Once you reach that bar, the chances of converting your highly customized product into paid SaaS are extremely high. Initial pilot customers may even want to invest in your startup and recommend your product to other firms to ensure the stability and reliability of their new software supplier.

If you get the opportunity to go from training on a single firm’s data to training on a variety of datasets, that’s where network effects tend to kick in. Over time, the moat comes not only from features built by iterating with various players, but also from an AI model trained on the most diverse datasets. As value accrues to the model, customers seeking to adopt AI software are increasingly likely to choose the companies whose models have the most extensive training.

Startups would do well to consider a relationship-based strategy that creates a moat in addition to building better proprietary technology: it’s one more tool in the arsenal for the age of AI commoditization.