How our new AI feature earned 5% adoption in its first week

Garth Griffin

Contributor

Garth Griffin is co-founder and CTO of Gigasheet, a web-based, no-code big data spreadsheet tool. He is a pioneering data scientist and an esteemed member of Sigma Xi, the Scientific Research Honor Society.

Since the launch of ChatGPT, a stampede of technology company leaders has been chasing the buzz: Everywhere I turn, another company is trumpeting its pioneering AI feature. But real business value comes from delivering product capabilities that matter to users, not just from using hot tech.

We achieved a 10x better return on engineering effort with AI by starting from core principles about what users need from the product, building an AI capability that supports that vision, and then measuring adoption to make sure it hits the mark.

Our first AI product feature was not aligned with this idea, and it took a month to reach a disappointing 0.5% adoption among returning users. After recentering on our core principles for what our users need from our product, we developed an “AI as agent” approach and shipped a new AI capability that exploded to 5% adoption in the first week. This formula for success in AI can be applied to almost any software product.

The waste of hype haste

Many startups, ours included, are tempted by the allure of integrating the latest technology without a clear strategy. So after the groundbreaking release of OpenAI's generative pretrained transformer (GPT) models, we began looking for a way to use large language model (LLM) technology in our product. Soon enough, we'd secured our spot aboard the hype train with a new AI-driven element in production.

This first AI capability was a small summarization feature that uses GPT to write a short paragraph describing each file our users upload into our product. It gave us something to talk about, and we made some marketing content, but it didn't have a meaningful impact on our user experience.

We knew this because none of our key metrics showed an appreciable change. Only 0.5% of returning users interacted with the description in the first month. Moreover, there was no improvement in user activation and no change in the pace of user signups.

When we thought about it from a wider perspective, it was clear that this feature would never move those metrics. The core value proposition of our product is about big data analysis and using data to understand the world.

Generating a few words about the uploaded file is not going to result in any significant analytical insight, which means it’s not going to do much to help our users. In our haste to deliver something AI-related, we’d missed out on delivering actual value.

Success with AI as agent: 10x better return

The AI approach that gave us success is an “AI as agent” principle that empowers our users to interact with data in our product via natural language. This recipe can be applied to just about any software product that is built on top of API calls.

After our initial AI feature, we'd checked the box, but we weren't satisfied because we knew we could do better for our users. So we did what software engineers have done since the invention of programming languages: we got together for a hackathon. Out of that hackathon came an AI agent that acts on behalf of the user.

The agent uses our own product by making API calls to the same API endpoints that our web front end calls. It constructs the API calls based on a natural language conversation with the user, attempting to fulfill what the user is asking it to do. The agent’s actions are manifested in our web user interface as a result of the API calls, just as if the user had taken the actions themselves.
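
To make the pattern concrete, here is a minimal sketch of that agent loop in Python, built on OpenAI-style tool calling. The `apply_filter` tool, the endpoint path, and `API_BASE` are hypothetical stand-ins rather than our actual API; only the overall shape of the flow reflects what is described above.

```python
import json

import requests
from openai import OpenAI

API_BASE = "https://api.example-product.com/v1"  # placeholder base URL
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Tool schemas mirror the same public API endpoints the web front end calls.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "apply_filter",
            "description": "Filter rows of the user's sheet by a column value.",
            "parameters": {
                "type": "object",
                "properties": {
                    "sheet_id": {"type": "string"},
                    "column": {"type": "string"},
                    "operator": {"type": "string", "enum": ["=", "!=", ">", "<", "contains"]},
                    "value": {"type": "string"},
                },
                "required": ["sheet_id", "column", "operator", "value"],
            },
        },
    }
]


def run_agent_turn(user_message: str, sheet_id: str, user_token: str) -> dict:
    """Translate one natural-language request into an API call and execute it."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"You analyze spreadsheet {sheet_id} by calling the available tools."},
            {"role": "user", "content": user_message},
        ],
        tools=TOOLS,
    )
    message = response.choices[0].message
    if not message.tool_calls:
        return {"reply": message.content}  # the model answered in plain text

    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)

    # Execute against the same endpoint the web front end uses, with the user's
    # own credentials -- the agent has no special access to internal systems.
    api_response = requests.post(
        f"{API_BASE}/sheets/{args['sheet_id']}/filters",
        json=args,
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=30,
    )
    api_response.raise_for_status()
    return api_response.json()
```

In a real agent there would be one tool schema per API endpoint and a dispatch on whichever tool name the model chooses, but the shape of the loop stays the same.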

This AI as agent capability had a powerful impact. The key benefit for our users is that they no longer need to build an understanding of the controls in our interface in order to analyze their data. We have put significant effort into crafting what we believe is an intuitive user interface, but even so, it’s hard to beat the simplicity of just typing what you want into a text box in your own words and letting an AI do the rest.

To validate that we’d made a positive impact with this feature, we looked at our numbers. The first indication of success was that it took less than a week for 5% of our returning users to start using the feature; this was 10x better adoption than our first feature, climbing to 20x better by the end of the first month.

We shipped without any in-product guides, hints, or marketing, so the initial adoption was pure organic uptake, and we saw an improvement in our user activation metric after the launch. We attribute some of that improvement to a training effect from the agent: as users correct the AI's actions, they come to better understand what they can do with our product. The AI as agent approach was a clear success for our users and a much better use of our engineering effort.

Validating with peers

Anecdotal reports from other professionals validate the AI as agent approach. For example, a product for cataloging bibliographic references might incorporate AI for retrieval via natural language. That applies AI to the product's core user value, and it can use the AI as agent principle to translate the natural language query into an API action that retrieves matching content.

I have heard success stories from technical leaders across multiple industry segments where this approach has had a positive impact on their users. In contrast, when the AI feature is incidental to the core value of the product, the impact is not so notable.

Benefits beyond user adoption

There are several strengths of the AI as agent approach that go beyond adoption. One important aspect for us is that it assuages the security concerns around prompt injection. In a well-designed public-facing API, all inputs are considered untrusted, no matter the origin of the request.

This is because attackers can intercept calls from the browser and send modified calls with malicious inputs, so the API itself needs to sanitize and safeguard against bad actors. Our AI agent makes API calls just as if it were the user making the calls, with no special access to internal systems. Thus, all the same API security enforcement is applied, and no amount of prompt injection will grant any kind of escalated access. Prompt injection becomes irrelevant because the API is secure.
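
To illustrate why this holds, here is a minimal sketch of a server-side handler, written with Flask and hypothetical helper stubs, that applies the same authentication, authorization, and input validation to every request, whether it originates from the browser or from the agent.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
ALLOWED_OPERATORS = {"=", "!=", ">", "<", "contains"}


def lookup_user_by_token(token: str):
    # Stub: a real implementation would resolve the token via the auth service.
    return {"id": "user-1"} if token else None


def user_owns_sheet(user: dict, sheet_id: str) -> bool:
    # Stub: a real implementation would check ownership in the database.
    return True


@app.post("/v1/sheets/<sheet_id>/filters")
def apply_filter(sheet_id: str):
    # Identical checks regardless of whether the request came from the browser
    # or from the AI agent: authenticate, authorize, then validate inputs.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    user = lookup_user_by_token(token)
    if user is None:
        abort(401)
    if not user_owns_sheet(user, sheet_id):
        abort(403)  # prompt injection cannot grant access to someone else's sheet

    payload = request.get_json(force=True)
    if payload.get("operator") not in ALLOWED_OPERATORS:
        abort(400)  # all inputs are untrusted, whoever sent them

    # ... apply the filter and return the result (core logic omitted) ...
    return jsonify({"sheet_id": sheet_id, "status": "filter applied"})
```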

Our product supports big data, like really big, as in a billion rows in a single spreadsheet. This highlights another benefit of the AI as agent architecture. In some other uses of AI for data analysis, the complete dataset must be provided to the LLM as part of the prompt.

For us, this was a nonstarter. While models with large prompt sizes might be able to handle some of the smaller datasets from our users, even the most generous context window sizes were not going to support our largest datasets, and it’s a core value of our product that you don’t have to give anything up when your data gets big. In contrast, the AI agent interacts with data in a similar way to how a human analyst would.

This means examining metadata like column headers, viewing a selected sample of row values, computing aggregates, or other analytical actions supported by our API. With this, the difference between 10 rows and a billion rows is just adding eight more zeroes in the “row count” field of the metadata, so the scale does not have a significant effect on prompt size.
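
Here is a minimal sketch of how such a compact context can be assembled, assuming hypothetical metadata and row-sampling endpoints: the prompt carries only column headers, a row count, and a small sample, so its size does not grow with the dataset.

```python
import requests

API_BASE = "https://api.example-product.com/v1"  # placeholder base URL


def build_sheet_context(sheet_id: str, user_token: str, sample_rows: int = 10) -> str:
    """Summarize a sheet for the LLM prompt: headers, row count, small sample."""
    headers = {"Authorization": f"Bearer {user_token}"}

    # Hypothetical endpoints and response shapes: metadata returns column names
    # and a row count; the rows endpoint returns a limited sample of values.
    meta = requests.get(
        f"{API_BASE}/sheets/{sheet_id}/metadata", headers=headers, timeout=30
    ).json()
    sample = requests.get(
        f"{API_BASE}/sheets/{sheet_id}/rows",
        params={"limit": sample_rows},
        headers=headers,
        timeout=30,
    ).json()

    # Whether the sheet has 10 rows or a billion, this context stays a few
    # hundred tokens: only the row-count number changes, not the prompt size.
    return (
        f"Columns: {', '.join(meta['columns'])}\n"
        f"Row count: {meta['row_count']}\n"
        f"Sample rows: {sample['rows']}"
    )
```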

A related benefit is the better privacy we can offer to our users. While some of our users don’t care much about where their data goes, others have reasonable questions about where data is shared and their privacy rights.

Even though OpenAI offers opt-out for using your inputs in training data, users can be reluctant to use AI because they worry that their entire dataset has been fed into some mysterious box and is out of their hands forever. With the AI as agent, the data is always with us, and the agent is a mere servant that is situated well outside our foundational data storage and database layers, without needing direct access to the user’s entire dataset at any point.

One last advantage to note is that it also improves our ability to be transparent about how the AI produced an answer. Oftentimes, it’s hard to trust AI output, exemplified by the “hallucinations” that were common with earlier versions of GPT-3 in particular. Our AI agent reports back a bulleted list of whatever actions it took, drawn from our API documentation.

By reviewing this readout, our user can immediately decide whether the results make sense, and if needed, the user can make adjustments to further refine the analysis. This may also contribute to the user training benefit we saw in relation to improved user activation, as the user sees what kinds of actions are possible with the product. The AI is no longer a black box because it self-reports every step it takes.
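
Here is a minimal sketch of that self-reporting step, with illustrative action descriptions standing in for text drawn from our API documentation: every executed call is recorded and rendered back to the user as a bulleted list.

```python
from dataclasses import dataclass, field


@dataclass
class ActionLog:
    """Collects every API action the agent executes during a conversation."""
    actions: list[str] = field(default_factory=list)

    def record(self, endpoint: str, description: str, **params) -> None:
        detail = ", ".join(f"{k}={v!r}" for k, v in params.items())
        self.actions.append(f"{description} ({endpoint}: {detail})")

    def readout(self) -> str:
        # Rendered back to the user alongside the results.
        return "\n".join(f"- {action}" for action in self.actions)


log = ActionLog()
log.record("/sheets/abc123/filters", "Filtered rows", column="country", operator="=", value="US")
log.record("/sheets/abc123/aggregates", "Computed sum", column="revenue")
print(log.readout())
# - Filtered rows (/sheets/abc123/filters: column='country', operator='=', value='US')
# - Computed sum (/sheets/abc123/aggregates: column='revenue')
```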

I have no doubt that the use of AI in startups will continue to evolve. In the end, I’m happy we released all the AI capabilities that we did, but it’s clear in hindsight that the AI as agent approach was always going to be much more impactful for our users, and we could have saved ourselves some trouble by going straight to that.

I invite technology leaders, product owners, and executives to consider our story as a path to empowering your users with AI as we all figure out together what the future will be.
