4 ways to show customers they can trust your generative AI enterprise tool



Luigi La Corte


Luigi La Corte is co-founder and CEO at Provision.

At the dawn of the cloud revolution, which saw enterprises move their data from on-premises infrastructure to the cloud, Amazon, Google and Microsoft succeeded at least in part because of their attention to security as a fundamental concern. No large-scale customer would even consider working with a cloud company that wasn't SOC 2 certified.

Today, another generational transformation is taking place, with 65% of workers already saying they use AI on a daily basis. Large language models (LLMs) such as ChatGPT will likely upend business in the same way cloud computing and SaaS subscription models did once before.

Yet again, with this nascent technology comes well-earned skepticism. LLMs risk "hallucinating" fabricated information, sharing real information incorrectly, and retaining sensitive company information fed to them by uninformed employees.

Any industry that LLMs touch will require an enormous level of trust between aspiring service providers and their B2B clients, who ultimately bear the risk of poor performance. They'll want to peer into your reputation, data integrity, security, and certifications. Providers that take active steps to reduce the potential for LLM "randomness" and build the most trust will be outsized winners.

For now, there are no regulating bodies that can give you a "trustworthy" stamp of approval to show off to potential clients. However, here are ways your generative AI organization can operate as an open book and thus build trust with potential customers.

Seek certifications where you can and support regulations

Although there are currently no specific certifications around data security in generative AI, it will only help your credibility to obtain as many adjacent certifications as possible, like SOC2 compliance, the ISO/IEC 27001 standard, and GDPR (General Data Protection Regulation) certification.

You also want to be up-to-date on any data privacy regulations, which differ regionally. For example, when Meta recently released its Twitter competitor Threads, it was barred from launching in the EU due to concerns over the legality of its data tracking and profiling practices.

As you're forging a brand-new path in an emerging niche, you may also be in a position to help form regulations. Unlike with Big Tech advancements of the past, organizations like the FTC are moving far more quickly to investigate the safety of generative AI platforms.

While you may not be shaking hands with global heads of state like Sam Altman, consider reaching out to local politicians and committee members to offer your expertise and collaboration. By demonstrating your willingness to create guardrails, you’re indicating you only want the best for those you intend to serve.

Set your own safety benchmarks and publish your journey

In the absence of official regulations, you should be setting your own benchmarks for safety. Create a roadmap with milestones that you consider proof of trustworthiness. This may include things like setting up a quality assurance framework, achieving a certain level of encryption, or running a number of tests.

As you achieve these milestones, share them! Draw potential customers’ attention to these attempts at self-regulation through white papers and articles. By showing that safety achievements are front of mind, you’re establishing your own credibility.

You'll also want to be open about which LLMs or APIs you're using, as this will give others a fuller understanding of how your technology functions and establish greater trust.

When possible, open source your testing plan/results. Provide highly detailed test cases, with a simple framework composed of questions, answers, and ratings for each against a benchmark.
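One minimal way to structure such a framework is sketched below. The field names and scoring scheme are hypothetical, not a standard; the point is that each test case pairs a question with an expected answer and a graded rating, and the suite as a whole is scored against a published benchmark.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Hypothetical fields; adapt to your own QA framework.
    question: str
    expected_answer: str
    model_answer: str
    rating: float  # 0.0-1.0, graded against the expected answer

def passes_benchmark(cases: list[TestCase], benchmark: float) -> bool:
    """Average the per-case ratings and compare against a target benchmark."""
    average = sum(c.rating for c in cases) / len(cases)
    return average >= benchmark

cases = [
    TestCase("What is the lien deadline?", "90 days", "90 days", 1.0),
    TestCase("Who bears delay risk?", "The contractor", "The owner", 0.0),
]
print(passes_benchmark(cases, benchmark=0.9))  # False: average rating is 0.5
```

Publishing the cases, the ratings, and the benchmark together is what makes the results auditable rather than just a claimed accuracy number.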

Open sourcing parts of your process will only build trust with your user base, and they’ll likely ask to see examples during procurement.

Back up the data integrity of your product

Liability is a complicated issue. Let’s take the example of risk in the construction industry. Construction firms can outsource risk management to lawyers — which enables the company to hold that third party accountable if something goes wrong.

But if you, as a new provider, offer AI tools that can replace a legal advisor for a 10x–100x lower price, the likely trade-off is that you’ll absorb far less liability. So the next best thing you can offer is integrity.

We think that integrity will look like an auditable quality assurance process that potential customers can peer into. Users should know which outputs are currently “in distribution” (i.e., which outputs your product can provide reliably), and which aren’t. They should also be able to audit the output from tests in order to build confidence in your product. Enabling prospective customers to do so puts you ahead of the curve.
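In practice, signaling what is "in distribution" can be as simple as checking an incoming query against the set of topics your QA process has validated. The topics and labels below are hypothetical placeholders, purely to illustrate the idea:

```python
# Hypothetical example: the set of topics the product has been
# validated on through the QA process described above.
SUPPORTED_TOPICS = {"lien deadlines", "payment terms", "delay claims"}

def classify_query(topic: str) -> str:
    """Tell the user whether an answer is backed by QA or needs review."""
    if topic in SUPPORTED_TOPICS:
        return "in distribution: answer backed by published QA results"
    return "out of distribution: flagged for human review"

print(classify_query("payment terms"))
print(classify_query("zoning disputes"))
```

Surfacing that label to the user, instead of answering everything with equal confidence, is exactly the kind of transparency that builds trust during procurement.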

Along those lines, AI providers will need to start explaining data integrity as a new “leave-behind” pillar. In traditional B2B SaaS, businesses address common questions such as “security” or “pricing” with leave-behind materials like digital pamphlets.

Providers will now have to start doing the same with data integrity, diving into why and how they can promise "no hallucinations," "no bias," edge-case testing, and so on. They will always need to backstop these claims with quality assurance.

(As an aside, we’ll likely also see underwriters creating policies for agents’ errors and omissions, once they proliferate.)

Stress test your product until your error rate is acceptable

It may be impossible to guarantee that an LLM-based platform never makes mistakes, but you've got to do whatever it takes to bring your error rate down as low as possible. Vertical AI solutions will benefit from tighter, more focused feedback loops, ideally fueled by a steady stream of early usage data, that will help them decrease their error rate over time.

In some industries, the margin for error may be more flexible than others — think caricature generators versus code generators.

But the honest answer is that the error rate the client accepts (with eyes wide open) is a good one. For certain use cases you'll want to reduce false negatives; in others, false positives. Error will need to be scrutinized more closely than a single number (e.g., "99% accurate") allows. If I were a buyer, I would instead ask:

  • “What’s your F1 score?”
  • “When designing, what type of error did you index on? Why?”
  • “In a balanced dataset, what would your error rate be for labeling data?”

These questions will really uncover the seriousness of a provider’s iteration process.
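The F1 score in the first question is worth spelling out: it is the harmonic mean of precision and recall, so it penalizes a model that trades one type of error for the other. A quick illustration with made-up counts:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)  # how many flagged items were correct
    recall = tp / (tp + fn)     # how many correct items were flagged
    return 2 * precision * recall / (precision + recall)

# 90 true positives, 10 false positives, 30 false negatives:
# precision = 0.9, recall = 0.75
print(round(f1_score(tp=90, fp=10, fn=30), 3))  # 0.818
```

A provider boasting "99% accurate" on an imbalanced dataset could still have a poor F1 score, which is why the question exposes how seriously they have thought about error.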

An absence of regulation and guidelines does not mean that customers are naive when examining your level of risk as an AI provider. A prudent customer will demand that any company prove its product can perform within an acceptable error rate and demonstrate robust safeguards. The providers that can't will surely lose.
