Best practices for developing a generative AI copilot for business


Chris Ackerson

Contributor

Chris Ackerson, formerly of IBM Watson, is currently the Vice President of Product at AlphaSense, a market intelligence and search platform, where he spearheads the development of AI and ML capabilities to deliver better data and insights to thousands of enterprise companies.

Since the launch of ChatGPT, I can’t remember a meeting with a prospect or customer where they didn’t ask me how they can leverage generative AI for their business. From internal efficiency and productivity to external products and services, companies are racing to implement generative AI technologies across every sector of the economy.

While GenAI is still in its early days, its capabilities are expanding quickly. From vertical search, to photo editing, to writing assistants, the common thread is leveraging conversational interfaces to make software more approachable and powerful. Chatbots, now rebranded as "copilots" and "assistants," are the craze once again, and while a set of best practices is starting to emerge, step one in developing a chatbot is to scope the problem down and start small.

A copilot is an orchestrator, helping a user complete many different tasks through a free text interface. There are an infinite number of possible input prompts, and all should be handled gracefully and safely. Rather than setting out to solve every task and risking falling short of user expectations, developers should start by solving a single task really well and learning along the way.

At AlphaSense, for example, we focused on earnings call summarization as our first single task, a well-scoped but high-value task for our customer base that also maps well to existing workflows in the product. Along the way, we gleaned insights into LLM development, model choice, training data generation, retrieval augmented generation and user experience design that enabled the expansion to open chat.

LLM development: Choosing open or closed

In early 2023, the leaderboard for LLM performance was clear: OpenAI was ahead with GPT-4, but well-capitalized competitors like Anthropic and Google were determined to catch up. Open source held sparks of promise, but performance on text generation tasks was not competitive with closed models.

My experience with AI over the last decade led me to believe that open source would make a furious comeback and that’s exactly what has happened. The open source community has driven performance up while lowering cost and latency. LLaMA, Mistral and other models offer powerful foundations for innovation, and the major cloud providers like Amazon, Google and Microsoft are largely adopting a multi-vendor approach, including support for and amplification of open source.

While open source hasn’t caught up on published performance benchmarks, it has clearly leapfrogged closed models on the set of trade-offs that any developer has to make when bringing a product into the real world. The 5 S’s of Model Selection can help developers decide which type of model is right for them:

  • Smarts: Through fine-tuning, open source models can absolutely outperform closed models on narrow tasks. This has been proven time and time again.
  • Spend: Open source is free apart from fixed GPU time and engineering operations. At reasonable volumes, this will always scale more efficiently than usage-based pricing.
  • Speed: By owning the full stack, developers can continuously optimize latency and the open source community is producing new ideas every day. Training small models with knowledge from large models can bring latency down from seconds to milliseconds.
  • Stability: Drifting performance is inherent to closed models. When the only lever of control is prompt engineering, this change will inevitably hurt a carefully tuned product experience. On the other hand, collecting training data and regularly retraining a fixed model baseline enables systematic evaluation of model performance over time. Larger upgrades with new open source models can also be planned and evaluated like any major product release.
  • Security: Serving the model can guarantee end-to-end control of data. (Note: I would go further and say that AI safety in general is better served with a robust and thriving open source community.)

Closed models will play an important role in bespoke enterprise use cases and for prototyping new use cases that push the boundaries of AI capability. However, I believe open source will provide the foundation for all significant products where GenAI is core to the end-user experience.

LLM development: Training your model

To develop a high-performance LLM, commit to building the best dataset in the world for the task at hand. That may sound daunting, but consider two facts: First, best does not mean biggest. Often, state-of-the-art performance on narrow tasks can be achieved with hundreds of high-quality examples. Second, for many tasks in your enterprise or product context, your unique data assets and understanding of the problem offer a leg up on closed model providers collecting training data to serve thousands of customers and use cases. At AlphaSense, AI engineers, product managers and financial analysts collaborate to develop annotation guidelines that define a process for curating and maintaining such datasets.
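A curated dataset like this is often stored as one JSON record per line. The sketch below is a hypothetical example of what a single instruction-tuning record for earnings call summarization might look like, with a basic check mirroring the kind of rules an annotation guideline would enforce; the field names and content are illustrative, not AlphaSense's actual schema.

```python
import json

# Hypothetical fine-tuning record for earnings call summarization,
# following common JSONL instruction-tuning conventions.
record = {
    "instruction": "Summarize the key takeaways from this earnings call transcript.",
    "input": "CEO: Revenue grew 12% year over year, driven by cloud adoption...",
    "output": "Revenue rose 12% YoY on cloud strength.",
}

def validate(rec: dict) -> bool:
    """Basic annotation-guideline checks: required fields present and non-empty."""
    required = ("instruction", "input", "output")
    return all(isinstance(rec.get(k), str) and rec[k].strip() for k in required)

line = json.dumps(record)  # one line of a JSONL training file
```

With a few hundred records passing checks like these, a team can fine-tune for state-of-the-art performance on a narrow task.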

Distillation is a critical tool to optimize this investment in high-quality training data. Open source models are available in multiple sizes, from 70 billion+ parameters down to 34 billion, 13 billion, 7 billion, 3 billion and smaller. For many narrow tasks, smaller models can achieve sufficient “smarts” at significantly better “spend” and “speed.” Distillation is the process of training a large model with high-quality human-generated training data and then asking that model to generate orders of magnitude more synthetic data to train smaller models. Multiple models with different performance, cost and latency characteristics provide great flexibility to optimize user experience in production.
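The distillation loop can be sketched at a toy scale: a "teacher" labels a large synthetic corpus, and a much smaller "student" is fit on the teacher's outputs. Everything here is a stand-in (keyword rules instead of an LLM teacher, word counts instead of a small model); the point is the shape of the pipeline, not the models.

```python
import random
from collections import Counter

POSITIVE = {"beat", "growth", "strong"}

def teacher_label(text: str) -> str:
    """Stand-in for a large fine-tuned model labeling sentiment."""
    return "positive" if POSITIVE & set(text.split()) else "negative"

def generate_synthetic_inputs(n: int, seed: int = 0) -> list:
    """Stand-in for sampling synthetic prompts at scale."""
    rng = random.Random(seed)
    pools = [["beat", "growth", "strong"], ["miss", "decline", "weak"]]
    return [" ".join(rng.choices(rng.choice(pools), k=4)) for _ in range(n)]

# 1) The teacher labels a large synthetic corpus.
corpus = generate_synthetic_inputs(1000)
labeled = [(text, teacher_label(text)) for text in corpus]

# 2) "Train" a trivially small student: per-word label counts.
student = {}
for text, label in labeled:
    for word in text.split():
        student.setdefault(word, Counter())[label] += 1

def student_predict(text: str) -> str:
    """The cheap student model, built entirely from teacher outputs."""
    votes = Counter()
    for word in text.split():
        votes += student.get(word, Counter())
    return votes.most_common(1)[0][0] if votes else "negative"
```

In a real pipeline, step 1 would be a 70B-class model generating synthetic completions and step 2 would fine-tune a 3B-7B model on them, trading a little "smarts" for much better "spend" and "speed."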

RAG: Retrieval augmented generation

When developing products with LLMs, developers quickly learn that the output of these systems is only as good as the quality of the input. ChatGPT, which is trained on the entire internet, maintains all of the benefits (access to all published human knowledge) and downsides (misleading, copyrighted, unsafe content) of the open internet.

In a business context, that level of risk may not be acceptable for customers making critical decisions every day, in which case developers can turn to retrieval-augmented generation, or RAG. RAG grounds the LLM in authoritative content by asking it only to reason over information retrieved from a database rather than reproduce knowledge from its training dataset. Current LLMs can effectively process thousands of words as input context for RAG, but nearly every real-life application must process many orders of magnitude more content than that. For example, AlphaSense’s database contains hundreds of billions of words. As a result, the task of retrieving the right context to feed the LLM is a critical step.
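The core RAG pattern can be shown in a few lines: retrieve the most relevant passages, then instruct the model to answer only from that retrieved context. This is a minimal sketch with a toy keyword retriever and made-up documents; a production system would use a real search index and pass `prompt` to an LLM.

```python
documents = {
    "doc-1": "Q3 revenue grew 12% year over year, driven by cloud adoption.",
    "doc-2": "The company announced a new CFO effective next quarter.",
    "doc-3": "Gross margin declined 2 points due to hardware costs.",
}

def retrieve(query: str, docs: dict, k: int = 2) -> list:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(docs[d].lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: dict, k: int = 2) -> str:
    """Ground the LLM: instruct it to use only the retrieved passages."""
    context = "\n".join(f"[{d}] {docs[d]}" for d in retrieve(query, docs, k))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
```

The "say you don't know" instruction is what separates grounded generation from the model reproducing whatever it absorbed in pretraining.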

Expect to invest more in building the information retrieval system than in training the LLM. As both keyword-based retrieval and vector-based retrieval systems have limitations today, a hybrid approach is best for most use cases. I believe grounding LLMs will be the most dynamic area of GenAI research over the next few years.
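A hybrid retriever typically blends a keyword score with a vector similarity score. The sketch below shows the blending logic with hand-made stand-ins; a real system would use BM25 for the keyword side and a learned embedding model for the vectors, with `alpha` tuned on evaluation data.

```python
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document (toy BM25 stand-in)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    """alpha weights exact keyword matching vs. semantic similarity."""
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)
```

Keyword scoring catches exact terms (tickers, product names) that embeddings blur together, while vector similarity catches paraphrases ("revenue grew" vs. "sales increased") that keywords miss; blending the two covers both failure modes.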

User experience and design: Integrate chat seamlessly

From a design perspective, a chatbot should fit seamlessly into the rest of an existing platform rather than feeling like an add-on. It should add unique value and leverage existing design patterns where they make sense. Guardrails should help a user understand how to use the system and its limitations, handle user input that can’t or shouldn’t be answered, and provide for automatic injection of application context. Here are three key points of integration to consider:

  1. Chat vs. GUI: For the most common workflows, users would prefer not to chat. Graphical user interfaces were invented because they are a great way to guide users through complex workflows. Chat is a fantastic solution for the long tail when a user needs to provide difficult-to-anticipate context in order to solve their problem. Be thoughtful about when and where to trigger chat in an app.
  2. Setting context: As discussed above, a limitation of LLMs today is how much context they can hold. A retrieval-based conversation can quickly grow to millions of words. Traditional search controls and filters are a fantastic solution to this problem: users can set the context for a conversation and know that it’s fixed over time, or adjust it along the way. This reduces cognitive load while increasing the probability of delivering accurate and useful responses in conversation.
  3. Auditability: Ensure that any GenAI output is cited to the original source documents and is auditable in context. Speed of verification is a key barrier to trust and adoption of GenAI systems in a business context, so invest in this workflow.
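Auditability in practice means every generated claim carries a marker that maps back to a source document a user can open and verify. This is an illustrative sketch; the document IDs, answer text, and citation format are made up.

```python
import re

# Numbered sources that were fed to the model as context.
sources = {
    1: {"doc_id": "earnings-q3-2023", "text": "Revenue grew 12% year over year."},
    2: {"doc_id": "earnings-q3-2023", "text": "Gross margin declined 2 points."},
}

# A generated answer where each claim cites its source with [n].
answer = "Revenue grew 12% [1], while gross margin fell two points [2]."

def resolve_citations(answer: str, sources: dict) -> list:
    """Return the source documents backing each [n] marker in the answer."""
    markers = [int(m) for m in re.findall(r"\[(\d+)\]", answer)]
    return [sources[m]["doc_id"] for m in markers]
```

In the UI, each marker becomes a link that opens the cited passage in context, so verifying a claim takes seconds rather than minutes.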

The release of ChatGPT alerted the world to the arrival of GenAI and demonstrated the potential for the next generation of AI-powered apps. As more companies and developers create, scale and implement AI chat applications, it’s important to keep these best practices in mind and focus on alignment between your tech and business strategies to build an innovative product with real, long-term impact and value. Focusing on completing one task well while looking for opportunities to expand a chatbot’s functionality will help set a developer up for success.
