Analytics can solve generative AI apps’ product problem

Large language models (LLMs) are becoming a commodity. A year after ChatGPT’s release, there’s a straightforward formula to launch an AI assistant: Stick a wrapper around GPT-4, hook it up to a vector database, and invoke some APIs based on user inputs.

But if that’s all you do, don’t be surprised when your app struggles to stand out.

Technology alone isn’t a sustainable moat for AI products, especially as the barrier to entry keeps falling. Everyone has access to mostly the same models, and any leaps in technical knowledge quickly get replicated by the competition.

The application layer is the true differentiator. Companies that identify and address genuine user problems are best positioned to win. The solution might look like yet another chatbot, or it might look entirely different.

Experimenting with products and design is the often neglected path to innovation.

TikTok is more than “the algorithm”

While not a generative AI application, TikTok is the perfect example of product ingenuity being the unsung hero.

It’s easy to attribute the app’s success wholly to the algorithm. But other recommendation engines are incredibly powerful, too (take it from two ex-YouTube product managers).

At their core, these systems all rely on the same principles. Suggest content similar to what you already like (content-based filtering) and recommend content that people similar to you enjoy (collaborative filtering).
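
To make those two principles concrete, here’s a toy sketch of both in Python. It’s purely illustrative (numpy only, fabricated data), not how TikTok actually builds its recommender:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Content-based filtering: score items by similarity to what the user liked.
# Rows are items, columns are hand-picked features (think genre tags).
item_features = np.array([
    [1, 0, 1],  # item 0: dance, comedy
    [1, 0, 0],  # item 1: dance
    [0, 1, 0],  # item 2: cooking
])
user_profile = item_features[0]  # the user liked item 0
content_scores = [cosine(user_profile, f) for f in item_features]
# item 1 scores high because it shares the "dance" feature with item 0

# Collaborative filtering: weight other users' likes by their similarity to us.
# Rows are users, columns are items; 1 means the user liked the item.
ratings = np.array([
    [1, 1, 0],  # us
    [1, 1, 1],  # a similar user who also liked item 2
    [0, 0, 1],  # a dissimilar user
])
sims = [cosine(ratings[0], r) for r in ratings[1:]]
collab_scores = sum(s * r for s, r in zip(sims, ratings[1:]))
# item 2 now gets a nonzero score because a similar user enjoyed it
```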

TikTok wouldn’t be what it is without packaging its algorithm in a novel way: an endless stream where viewers frictionlessly vote with their swipes. With an emphasis on short-form video, this product decision amplified the rate at which TikTok could learn user preferences and feed data into its algorithm.

It wasn’t just that. TikTok also led with best-in-class creator tools. Anyone can film and edit a video directly from a smartphone; no video production experience is required.

Today, the competition for short-form video is more about the ecosystem each app offers. Having an engaging algorithm is table stakes; to stand out, you need a loyal user base, creator revenue sharing, content moderation, and other features that round out the platform.

Generative AI apps are still searching for product-market fit

The common wisdom about product-market fit (PMF) is that you’ll know when you have it. It’s that elusive quality of a product that users love and can’t get enough of. In more practical terms, apps that are growing exponentially and successfully retaining their users often have PMF.

The vast majority of generative AI apps are far from PMF. And the No. 1 reason why? They don’t solve real problems.

User needs were at the heart of product development long before AI became popular. But whenever a groundbreaking technology like LLMs emerges, the temptation to use it anywhere and everywhere kicks in. It’s a classic example of a solution in search of a problem.

In the last year, almost every major company has at least flirted with augmenting its core product with AI. And just as many AI-native startups are trying to capitalize on the momentum, too. Many of these products probably won’t find PMF, although some may stumble upon it, so there’s something to be said for experimenting.

Rather than leaving things to chance, let’s examine one of generative AI’s biggest success stories to increase our odds.

GitHub Copilot

Having exceeded $100 million in ARR (annual recurring revenue), GitHub Copilot is arguably the most successful generative AI product to date (save ChatGPT). Retool’s State of AI in 2023 report shows that Copilot is a favorite AI tool among 68% of technology professionals.

That same report noted a 58% decrease in Stack Overflow usage compared to 2022, overwhelmingly because of Copilot and ChatGPT. This is even more telling evidence of PMF. Writing and debugging code is a clear pain point for software developers, and Copilot, with its enhanced ease of use and accuracy, is displacing Stack Overflow.

The thoughtfulness behind Copilot runs deep on both a product and engineering level. As a product, Copilot is far more than a reskinned version of ChatGPT. The most common way users are likely to interact with Copilot isn’t even a chat interface. Instead, it’s through code completion suggestions that appear natively in the text editor.

And Copilot goes beyond helping write boilerplate code. It also helps refactor, document, and explain code. A lot of empathy goes into identifying those user journeys and tailoring the experience to add value beyond vanilla ChatGPT. For instance, the team at GitHub devised a technique called “neighboring tabs” to offer all the files a developer has open as context to the LLM beyond just the active area around the cursor.
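
GitHub hasn’t published Copilot’s internals beyond its blog posts, but the gist of neighboring tabs can be sketched: rank snippets from the developer’s other open files by rough relevance to the code at the cursor, then pack as many as fit into the prompt budget. Everything below (the names, the Jaccard scoring, the character budget) is a hypothetical illustration:

```python
def score_overlap(snippet: str, cursor_context: str) -> float:
    """Crude relevance score: fraction of shared tokens (Jaccard)."""
    a, b = set(snippet.split()), set(cursor_context.split())
    return len(a & b) / (len(a | b) or 1)

def build_prompt(cursor_context: str, open_tabs: dict[str, str],
                 budget_chars: int = 4000) -> str:
    """Pack the most relevant open-file snippets into a fixed prompt budget."""
    ranked = sorted(open_tabs.items(),
                    key=lambda kv: score_overlap(kv[1], cursor_context),
                    reverse=True)
    parts, remaining = [], budget_chars - len(cursor_context)
    for path, text in ranked:
        block = f"# File: {path}\n{text}\n"
        if len(block) > remaining:
            continue  # this file doesn't fit; try the next one
        parts.append(block)
        remaining -= len(block)
    # The code around the cursor goes last, closest to where generation starts.
    return "".join(parts) + cursor_context
```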

Insights like that start to blur the lines between product and engineering. Another example is Copilot’s deep investment in prompt engineering. Despite the name, prompt engineering is as much a product discipline as an engineering one: A ton of prioritization goes into what information to tell an LLM and how to phrase it.

The broader platform around Copilot is also very robust, including enterprise features, straightforward billing, and an emphasis on privacy. These wouldn’t mean much without the core product experience having found PMF. But together, the technology and application layers put GitHub in an enviable position.

So how do you incorporate product thinking into an AI app?

If you’re not addressing a legitimate user problem, no amount of engineering will dig you out of that hole. You’ll need to talk to users (or prospective customers) and understand what they’re trying to accomplish and what’s standing in their way. Generative AI may or may not be the right tool for the job.

With some conviction that you’re building in a suitable space and that LLMs are an appropriate solution, you’ll still have to offer something more than ChatGPT. This could be behind the scenes, in the form of clever prompt engineering or retrieval algorithms. Or it might be on the front end, such as a novel user interface. Either way, you must approach this as a product problem: What is genuinely helpful to your target audience that existing solutions don’t satisfy?

An iterative approach often works well when it’s grounded in data. Take prompt engineering: A common pitfall is to revise LLM prompts once, inspect the new outputs for a test input or two, and deploy if the outputs seem better. This process isn’t rigorous at all. With so few test cases, there’s no guarantee that your LLM will perform well with a broader variety of inputs. And what does it even mean for outputs to be “better”? Hopefully, that isn’t just a gut feeling.
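
A more rigorous loop treats prompt changes like regression tests: score every revision against a fixed test set, with an explicit definition of success, before shipping. Here’s a minimal sketch; the stubbed call_llm stands in for your real model client, and the keyword check stands in for your real quality metric:

```python
OLD_PROMPT = "You are a support assistant. Answer the user's question."
NEW_PROMPT = "You are a concise support assistant. Cite our refund policy."

# A fixed test set; in practice, dozens of cases covering real input variety.
TEST_CASES = [
    {"input": "I want my money back for last month.", "must_include": ["refund"]},
    {"input": "Can I return this after 30 days?", "must_include": ["refund", "30"]},
]

def call_llm(prompt: str, user_input: str) -> str:
    # Stub: swap in your actual model client here.
    return "Per our refund policy, returns within 30 days are eligible."

def passes(output: str, case: dict) -> bool:
    # An explicit definition of "better", not a gut feeling.
    return all(kw.lower() in output.lower() for kw in case["must_include"])

def score_prompt(prompt: str) -> float:
    results = [passes(call_llm(prompt, c["input"]), c) for c in TEST_CASES]
    return sum(results) / len(results)

print(f"old: {score_prompt(OLD_PROMPT):.0%}  new: {score_prompt(NEW_PROMPT):.0%}")
# Ship the revision only if it wins across the whole set, not on one example.
```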

As it stands, most AI product builders rely on their instincts to find PMF. Companies that adopt proper analytics give themselves a significant competitive advantage.

Ironically, LLM applications have unique access to authentic user feedback from all the natural language data they collect. Unlike conventional software, LLM products give a direct lens into what users ask. Yet, so many companies aren’t making full use of this information.
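
One practical way to mine that data: embed the messages users send and cluster them to surface recurring themes. Here’s a rough sketch with scikit-learn, using TF-IDF as a stand-in for LLM embeddings:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# A sample of real user messages pulled from your chat logs.
user_queries = [
    "how do I export my data",
    "export to csv not working",
    "can I change my billing plan",
    "downgrade my subscription",
    # ... thousands more in practice
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(user_queries)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Each cluster is a candidate user problem worth a spot on the roadmap.
for cluster in sorted(set(labels)):
    examples = [q for q, lbl in zip(user_queries, labels) if lbl == cluster]
    print(f"cluster {cluster}: {examples[:3]}")
```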

Analytics enable a feedback loop between product and engineering, bringing them closer together. Start with a hypothesis about a user problem, launch early to collect feedback, and synthesize that feedback with analytics. Analytics will surface the key product insights so you can strategically set your engineering roadmap and land an AI product that sticks.