Generative AI’s future in enterprise could be smaller, more focused language models

The amazing abilities of OpenAI’s ChatGPT wouldn’t be possible without large language models, which are trained on billions, sometimes trillions, of words of text. The idea is for the model to understand language so well that it can predict, in a split second, which word plausibly comes next. Getting there takes a ton of training, compute resources and developer savvy.

But maybe the future of these models is more focused than the boil-the-ocean approach we’ve seen from OpenAI and others, who want their models to answer every question under the sun. What if each industry, or even each company, had its own model trained to understand its particular jargon, language and approach? Perhaps then we would get fewer completely made-up answers, because the answers would come from a more limited universe of words and phrases.

In the AI-driven future, each company’s own data could be its most valuable asset. An insurance company has a completely different lexicon than a hospital, an automotive company or a law firm, and when you combine that lexicon with your customer data and the full body of content across the organization, you have the makings of a language model. It may not be large in the truly large language model sense, but it would be exactly the model you need: a model created for one and not for the masses.

This will also require a set of tools to collect, aggregate and constantly update the corporate dataset in a way that makes it ingestible by these smaller large language models (sLLMs).
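
As a rough illustration, here’s what a minimal version of that ingestion step might look like in Python. The directory layout, chunk size and output format are all hypothetical; a production pipeline would add deduplication, access controls and scheduling.

```python
# A minimal sketch of a corporate-document ingestion step: walk a folder of
# text files, split each into fixed-size chunks and stamp the chunks with
# metadata so a downstream model (or retrieval index) can be refreshed
# incrementally. Paths and chunk size are illustrative assumptions.
import json
import time
from pathlib import Path

CHUNK_SIZE = 1_000  # characters per chunk; tune for your model's context window

def ingest(corpus_dir: str, out_path: str) -> None:
    records = []
    for doc in Path(corpus_dir).rglob("*.txt"):
        text = doc.read_text(encoding="utf-8", errors="ignore")
        for i in range(0, len(text), CHUNK_SIZE):
            records.append({
                "source": str(doc),
                "chunk": text[i : i + CHUNK_SIZE],
                "ingested_at": time.time(),  # supports constant re-ingestion of updates
            })
    with open(out_path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

if __name__ == "__main__":
    ingest("company_docs/", "corpus.jsonl")
```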

Building these models could pose a challenge. Builders will probably start from an existing LLM, whether open source or from a private vendor, and then fine-tune it on industry or company data to bring it into focus, all in a more secure environment than the generic LLM variety.
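
To make that pattern concrete, here is a minimal sketch of such a fine-tuning loop using the open source Hugging Face libraries. The base model, dataset path and hyperparameters are illustrative assumptions, not anything these companies have published.

```python
# A sketch of fine-tuning an existing open source LLM on company text with
# Hugging Face Transformers. The base model is a small stand-in; swap in
# whatever fits your budget and data.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "EleutherAI/gpt-neo-125M"  # tiny stand-in; Dolly used a 6B-parameter GPT-J

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE)

# corpus.jsonl holds {"chunk": "..."} records, e.g. from an ingestion step
# like the one sketched above.
data = load_dataset("json", data_files="corpus.jsonl")["train"]
data = data.map(
    lambda rec: tokenizer(rec["chunk"], truncation=True, max_length=512),
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="company-slm", num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("company-slm")  # the tuned model never leaves your environment
```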

This represents a huge opportunity for the startup community, and we are seeing lots of companies with a head start on this idea.

May Habib, co-founder and CEO at Writer, a generative AI startup, says that is exactly what her firm is trying to do: customize the model to each customer’s words and way of working. She says her company is going to market “in a hyperverticalized way,” which should result in more accurate and tailored content.

“We are essentially building that last mile of allowing them to use LLMs that are informed by their data and things they’ve written before. [It’s] their information and everything that we put in our models at the retrieval layer,” Habib recently told TechCrunch+.

She says this involves a layer underneath the base Writer product that turns the firehose of a large language model into something more focused and useful for each individual customer. “The way that we talk to customers about it is that it’s like having small language models on top of large language models,” she said.
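
A toy version of that retrieval layer might look like the following, with TF-IDF standing in for a production embedding index; the documents and prompt format are invented for illustration.

```python
# A sketch of the "retrieval layer" idea: before calling a large model, look
# up the customer's own documents and feed the most relevant passages in as
# context. TF-IDF is a stand-in for a real embedding index.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

company_chunks = [
    "Our Q3 claims-processing SLA is five business days.",
    "Policy renewals are handled by the underwriting team.",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(company_chunks)

def build_prompt(question: str, k: int = 1) -> str:
    scores = cosine_similarity(vectorizer.transform([question]), index)[0]
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    context = "\n".join(company_chunks[i] for i in top)
    # The base LLM sees only the retrieved, company-specific context.
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("How fast do we process claims?"))
```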

Hello, Dolly

Databricks, best known as a hot startup with a huge valuation building a cloud data lakehouse, recently released an sLLM it calls Dolly, named after the first cloned sheep (not the musical). It is based on a 2-year-old model, and you may ask why the company built on top of something that, on its own, produces mostly garbage, according to company CEO Ali Ghodsi.

It’s because Databricks trains that model on smaller, more focused corpora of data, producing what the company claims are more accurate and focused answers. “The model underlying Dolly only has 6 billion parameters, compared to 175 billion in GPT-3, and is 2 years old, making it particularly surprising that it works so well. This suggests that much of the qualitative gains in state-of-the-art models like ChatGPT may owe to focused corpuses of instruction-following training data, rather than larger or better-tuned base models,” the company wrote in a blog post announcing the availability of Dolly.

The beauty of this approach, the company claims, is that it trained Dolly in three hours on a single machine for just $30, compared with the hundreds of thousands to millions of dollars it likely cost to train ChatGPT.

Your cost will vary with the size of your dataset, but the idea is to feed Dolly your company’s data and put it to work answering questions in ChatGPT fashion, all while keeping that data private.

“Every company on the planet has a corpus of information related to their [organization]. Maybe it’s [customer] interactions, customer service; maybe it’s documents; maybe it’s the material that they published over the years. And ChatGPT does not have all of that and can’t do all of that.”

“With Dolly you can actually train the model to understand and be specialized on your dataset, and you keep it. You don’t need to give it to the rest of the world. It’s your proprietary information that you can use in your competition with other folks in your industry,” Ghodsi said.
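
In practice, running the released Dolly checkpoint entirely on your own hardware could be as simple as the sketch below, so prompts and answers never leave your infrastructure. The model id reflects the Databricks release as published on Hugging Face; the prompt and generation settings are illustrative.

```python
# A sketch of local inference with a Dolly-style model: nothing here calls
# out to a third-party API, so company data stays in-house.
from transformers import pipeline

generator = pipeline("text-generation", model="databricks/dolly-v1-6b")

prompt = "Summarize our parental-leave policy in two sentences."
result = generator(prompt, max_new_tokens=100, do_sample=True)
print(result[0]["generated_text"])
```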

That’s important as we think about how this data gets used going forward. It’s the same point Habib makes about her customers: They don’t just want the wow factor we get from ChatGPT; they want practical application of the AI to their own data in a secure way.

Where could we go from here?

As it becomes more about the data and less about the model, and as startups and established companies continue to build the tooling, the hard part will be getting company information into a format the model can use and keeping it constantly up to date.

Jeetu Patel, executive vice president and general manager of security and collaboration at Cisco, believes the future is not necessarily sLLMs, but it definitely involves feeding your company’s data into some sort of existing LLM.

“To be clear, every company will have some sort of a custom dataset based on which they will do inference that actually gives them a unique edge that no one else can replicate. But that does not require every company to build a large language model. What it requires is [for companies to take advantage of] a language model that already exists,” he said.

He sees a future in which companies use models more specific than ChatGPT and feed them their own data, not unlike what Databricks is trying to do with Dolly.

“Where I think there’ll be a difference is that there are going to be some AI models that are going to be generic, like what you see with ChatGPT, and then there will be some which are just company specific,” he said.

Using his own company as an example, Patel suggests that in the future you could interact with Cisco applications like WebEx and get a summary of all your meetings from that day simply by asking for it. As a security executive, he is keenly aware that such an approach would need careful permissions built in, but it sketches a scenario in which this type of application could be put to work on top of a specific company’s products and services in a very practical way.
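
One hedged sketch of what those permission checks could look like: filter records by user before anything reaches the model. Every name and data structure here is hypothetical.

```python
# A sketch of permission-aware retrieval: an assistant summarizing "all your
# meetings" should only ever retrieve records the requesting user may see.
from dataclasses import dataclass

@dataclass
class Meeting:
    title: str
    transcript: str
    allowed_users: set[str]

MEETINGS = [
    Meeting("Q3 planning", "Discussed roadmap and hiring.", {"alice", "bob"}),
    Meeting("Board prep", "Reviewed confidential financials.", {"alice"}),
]

def meetings_for(user: str) -> list[Meeting]:
    # The permission filter runs *before* anything reaches the model.
    return [m for m in MEETINGS if user in m.allowed_users]

def summarize_day(user: str) -> str:
    visible = meetings_for(user)
    context = "\n\n".join(f"{m.title}:\n{m.transcript}" for m in visible)
    # In a real system this prompt would go to the company's own model.
    return f"Summarize these meetings for {user}:\n{context}"

print(summarize_day("bob"))  # bob sees Q3 planning only, never Board prep
```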

All of this is moving so fast that it’s hard to make clear predictions about where the technology goes tomorrow or next week. But there is some thinking that to work in the enterprise, models will have to be flexible enough to train on proprietary company data, and if that’s the case, the future could well involve smaller, more focused models.