5 steps to ensure startups successfully deploy LLMs

ChatGPT’s launch ushered in the age of large language models. In addition to OpenAI’s offerings, other LLMs include Google’s LaMDA family (which originally powered Bard), BLOOM (an open model from the Hugging Face-led BigScience collaboration), Meta’s LLaMA, and Anthropic’s Claude.

More will no doubt be created. In fact, an April 2023 Arize survey found that 53% of respondents planned to deploy LLMs within the next year. One approach is to create a “vertical” LLM: start with an existing model and carefully fine-tune it on knowledge specific to a particular domain. This tactic can work for life sciences, pharmaceuticals, insurance, finance, and other business sectors.

Deploying an LLM can provide a powerful competitive advantage — but only if it’s done well.

LLMs have already led to newsworthy issues, such as their tendency to “hallucinate” incorrect information. That’s a severe problem, but it can distract leadership from equally important concerns about the processes that generate those outputs, which can be just as problematic.

The challenges of training and deploying an LLM

One issue with using LLMs is their tremendous operating expense because the computational demand to train and run them is so intense (they’re not called large language models for nothing).

Cost, however, is only one of several feasibility hurdles to developing and adopting an LLM.

First, the hardware to run the models on is costly. The H100 GPU from Nvidia, a popular choice for LLM workloads, has been selling on the secondary market for about $40,000 per chip, and one source estimated it would take roughly 6,000 chips to train an LLM comparable to GPT-3.5. That’s roughly $240 million on GPUs alone.

Another significant expense is powering those chips. Merely training a model is estimated to consume about 10 gigawatt-hours (GWh) of electricity, equivalent to the yearly electricity use of 1,000 U.S. homes. Once the model is trained, its electricity costs will vary but can become exorbitant: the same source estimated that running ChatGPT consumes about 1 GWh a day, the combined daily energy usage of 33,000 households.
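To put those figures in perspective, here is a rough back-of-envelope estimate using the numbers cited above. The chip price, chip count, and energy figures are the article’s estimates, and the electricity rate is an assumed illustrative value, not a quoted price.

```python
# Back-of-envelope cost estimate using the figures cited above.
# All inputs are rough estimates; the electricity rate is an assumed value.

GPU_PRICE_USD = 40_000          # approximate secondary-market price per H100
GPU_COUNT = 6_000               # estimated chips to train a GPT-3.5-class model
TRAINING_ENERGY_GWH = 10        # estimated energy to train the model once
DAILY_INFERENCE_ENERGY_GWH = 1  # estimated energy to serve the model per day
PRICE_PER_MWH_USD = 100         # assumed illustrative electricity price

gpu_capex = GPU_PRICE_USD * GPU_COUNT
training_power_cost = TRAINING_ENERGY_GWH * 1_000 * PRICE_PER_MWH_USD
yearly_serving_power_cost = DAILY_INFERENCE_ENERGY_GWH * 365 * 1_000 * PRICE_PER_MWH_USD

print(f"GPU hardware:              ${gpu_capex:,.0f}")                 # ~$240,000,000
print(f"Training electricity:      ${training_power_cost:,.0f}")       # ~$1,000,000
print(f"Serving electricity/year:  ${yearly_serving_power_cost:,.0f}") # ~$36,500,000
```

Even with generous assumptions about electricity prices, the hardware bill dwarfs the power bill for training, while serving costs compound daily.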

Power consumption is also a potential pitfall for the user experience when running LLMs on portable devices: heavy use could drain a device’s battery very quickly, which would be a significant barrier to consumer adoption.

Integrating LLMs into devices presents another critical challenge to the user experience: effective communication between the LLM and the device. If the channel has a high latency, users will be frustrated by long lags between queries and responses.
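During development, it helps to measure that round-trip latency directly against whatever endpoint serves the model. The sketch below times a few requests to a hypothetical endpoint; the URL and payload are placeholders, not a real API.

```python
import time
import requests  # third-party HTTP client (pip install requests)

# Hypothetical endpoint and payload -- substitute your actual LLM service.
ENDPOINT = "https://example.com/v1/generate"
PAYLOAD = {"prompt": "Summarize our returns policy.", "max_tokens": 128}

def average_latency(n_requests: int = 5) -> float:
    """Return the average end-to-end latency, in seconds, over n requests."""
    timings = []
    for _ in range(n_requests):
        start = time.perf_counter()
        requests.post(ENDPOINT, json=PAYLOAD, timeout=30)
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    print(f"Average round-trip latency: {average_latency():.2f}s")
```

Tracking this number across network conditions gives an early warning before lag becomes a user-facing problem.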

Finally, privacy is a crucial component of offering an LLM-based service that customers want to use and that conforms to privacy regulations. Given that LLMs tend to memorize their training data, there is a risk of exposing sensitive data when users query the model. User interactions are also logged, which means that users’ questions — sometimes containing private information — may be vulnerable to acquisition by hackers.

The threat of data theft is not merely theoretical; several feasible backdoor attacks on LLMs are already under scrutiny. So, it’s unsurprising that over 75% of enterprises are holding off on adopting LLMs out of privacy concerns.

For all of the above reasons, from the risk of bankrupting their companies to the risk of catastrophic reputational damage, business leaders are wary of taking advantage of the early days of LLMs. To succeed, they must approach the problem holistically, because these challenges need to be conquered together before launching a viable LLM-based offering.

It’s often difficult to know where to start. Here are five crucial points tech leaders and startup founders should consider when planning a transition to LLMs:

1. Keep an eye out for new hardware optimizations

Although training and running an LLM is expensive now, market competition is already driving innovations that reduce power consumption and boost efficiency, which should bring down costs. One of these solutions is Qualcomm’s Cloud AI 100, which the company says is designed for “deep learning with low power consumption.”

Leaders need to empower management to stay abreast of hardware developments that reduce power consumption and, therefore, costs. What may not be within reach currently could soon become feasible with the next wave of efficiency breakthroughs.

2. Explore a distributed data analysis approach

The infrastructure supporting an LLM can sometimes combine edge and cloud computing for distributed data analysis. This suits several use cases, such as processing critical, highly time-sensitive data on an edge device while leaving less time-sensitive data to be processed in the cloud. This approach enables much lower latency for users interacting with the LLM than if all computations were done in the cloud.

On the other hand, offloading computations to the cloud helps preserve a device’s battery power, so there are critical trade-offs to weigh in a distributed data analysis approach. Decision-makers must determine the optimal split of computation between the device and the cloud given the needs of the moment.
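As a rough sketch of what that trade-off can look like in code, the routing logic below decides, per request, whether to run inference on-device or in the cloud based on time sensitivity and remaining battery. The thresholds and names are illustrative assumptions, not any specific product’s API.

```python
from dataclasses import dataclass

# Illustrative thresholds -- tune these for your device and workload.
LATENCY_BUDGET_MS = 200   # requests needing faster answers stay on-device
MIN_BATTERY_PCT = 20      # below this, prefer offloading to save power

@dataclass
class Request:
    prompt: str
    latency_budget_ms: int  # how quickly the user needs a response

def route(request: Request, battery_pct: float) -> str:
    """Return 'edge' or 'cloud' for a single inference request."""
    if request.latency_budget_ms <= LATENCY_BUDGET_MS and battery_pct > MIN_BATTERY_PCT:
        return "edge"   # time-critical and enough battery: run locally
    return "cloud"      # otherwise accept network latency to save power

# A time-sensitive voice command on a healthy battery stays local;
# a long document summary goes to the cloud.
print(route(Request("turn on the lights", latency_budget_ms=150), battery_pct=80))      # edge
print(route(Request("summarize this report", latency_budget_ms=5000), battery_pct=80))  # cloud
```

In practice the decision would also weigh model size, network conditions, and data sensitivity, but the principle of routing each request to the cheapest acceptable processor stays the same.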

3. Stay flexible regarding which model to use

It’s essential to stay flexible about which underlying model to use when building a vertical LLM, because each has pros and cons for any particular use case. That flexibility matters not only at the outset, when selecting a model, but throughout its use, since needs can change. In particular, open source options are worth considering because these models can be smaller and less expensive.

Building an infrastructure that can accommodate switching to a new model without operational disruption is essential. Some companies now offer “multi-LLM” solutions, such as Merlin, whose DiscoveryPartner generative AI platform uses LLMs from OpenAI, Microsoft, and Anthropic for document analysis.
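One way to build in that flexibility is a thin abstraction layer that hides the specific provider behind a common interface, so swapping models becomes a configuration change rather than a rewrite. The sketch below is a minimal example; the backend classes are placeholders, not real client libraries.

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Common interface every model provider must satisfy."""
    def complete(self, prompt: str) -> str: ...

# Placeholder backends -- in practice each would wrap a vendor SDK
# or an open source model served in-house.
class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        return f"[openai] response to: {prompt}"

class LocalLlamaBackend:
    def complete(self, prompt: str) -> str:
        return f"[local-llama] response to: {prompt}"

BACKENDS: dict[str, LLMBackend] = {
    "openai": OpenAIBackend(),
    "llama": LocalLlamaBackend(),
}

def answer(prompt: str, backend_name: str = "openai") -> str:
    """Application code depends only on the interface, not the vendor."""
    return BACKENDS[backend_name].complete(prompt)

# Switching models is a one-line configuration change:
print(answer("Summarize the claim file.", backend_name="llama"))
```

Keeping prompts, evaluation suites, and logging on your side of this boundary makes it far easier to re-run comparisons when a new model appears.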

4. Make data privacy a priority

In an era of increasing data regulation and frequent data breaches, data privacy must be a priority. One approach is sandboxing, in which a controlled computational environment confines data to a restricted system.

Another is data obfuscation (such as with data masking, tokenization, or encryption), which allows the LLM to understand the data while making it unintelligible to anyone who might tap into it. These and other techniques can assure users that privacy is baked into your LLMs.
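As a simple illustration of data obfuscation, the sketch below tokenizes obvious identifiers (emails and phone-like numbers) before a prompt ever reaches the model, keeping the mapping on your side so responses can be re-identified. The regexes here are illustrative assumptions; a real deployment would use a dedicated PII-detection tool.

```python
import re

# Illustrative patterns only -- production systems should use a dedicated
# PII-detection library instead of ad hoc regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def tokenize_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected identifiers with opaque tokens; return the mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

masked, mapping = tokenize_pii("Contact Jane at jane@example.com or 555-123-4567.")
print(masked)  # Contact Jane at <EMAIL_0> or <PHONE_0>.
# Only the masked prompt is sent to the LLM; the mapping never leaves your system.
```

The same pattern extends to names, account numbers, and other sensitive fields, and it pairs naturally with sandboxing the environment where the unmasked data lives.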

5. Looking ahead, consider analog computing

An even more radical approach to deploying hardware for LLMs is to move away from digital computing. Once considered more of a curiosity in the IT world, analog computing could ultimately prove to be a boon to LLM adoption because it could reduce the energy consumption required to train and run LLMs.

This is more than just theoretical. For example, IBM has been developing an “analog AI” chip that could be 40 to 140 times more energy efficient than GPUs for training LLMs. As similar chips enter the market from competing vendors, we will see market forces bring down their prices.

The LLM future is here — are you ready?

LLMs are exciting, but developing and adopting them requires overcoming several feasibility hurdles. Fortunately, an increasing number of tools and approaches are bringing down costs, making systems harder to hack, and ensuring a positive user experience.

So, don’t hesitate to explore how LLMs might turbocharge your business. With the right approach, your organization can be well positioned to take advantage of everything this new era offers. You’ll be glad you got started now.