The emerging types of language models and why they matter

AI systems that understand and generate text, known as language models, are the hot new thing in the enterprise. A recent survey found that 60% of tech leaders said their budgets for AI language technologies increased by at least 10% in 2020, while 33% reported a 30% increase.

But not all language models are created equal. Several types are emerging as dominant, including large, general-purpose models like OpenAI’s GPT-3 and models fine-tuned for particular tasks (think answering IT desk questions). At the edge exists a third category of model — one that tends to be highly compressed in size and limited to a few capabilities, designed specifically to run on Internet of Things devices and workstations.

These different approaches have major differences in strengths, shortcomings and requirements — here’s how they compare and where you can expect to see them deployed over the next year or two.

Large language models

Large language models are, generally speaking, tens of gigabytes in size and trained on enormous amounts of text data, sometimes at the petabyte scale. They’re also among the biggest models in terms of parameter count, where a “parameter” refers to a value the model can change independently as it learns. Parameters are the parts of the model learned from historical training data and essentially define the skill of the model on a problem, such as generating text.
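
To make “parameter count” concrete, here is a minimal Python sketch that loads a small pretrained model and tallies its learnable parameters. It assumes the Hugging Face transformers library and the public "gpt2" checkpoint, neither of which is named in this article; they are stand-ins for illustration.

# A minimal sketch: count the learnable parameters of a small pretrained model.
# Assumes the Hugging Face `transformers` library and the public "gpt2" checkpoint.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

num_params = sum(p.numel() for p in model.parameters())
print(f"Parameter count: {num_params:,}")  # roughly 124 million for GPT-2 small

By the same measure, GPT-3 has 175 billion parameters, more than a thousand times as many.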

“Large models are used for zero-shot scenarios or few-shot scenarios where little domain-[tailored] training data is available and usually work okay generating something based on a few prompts,” Fangzheng Xu, a Ph.D. student at Carnegie Mellon specializing in natural language processing, told TechCrunch via email. In machine learning, “few-shot” refers to the practice of training a model with minimal data, while “zero-shot” implies that a model can learn to recognize things it hasn’t explicitly seen during training.
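
As a rough illustration of the difference, the sketch below builds a few-shot prompt by packing a handful of labeled examples directly into the model’s input. The reviews and labels are invented for illustration; in practice the resulting string would be sent to a text-completion endpoint for a large model such as GPT-3.

# A minimal sketch of a few-shot prompt: labeled examples are packed into the
# input and the large model is asked to continue the pattern. No weights are
# updated; the "learning" happens entirely in the prompt.
examples = [
    ("The product arrived broken and support never replied.", "negative"),
    ("Setup took two minutes and it works flawlessly.", "positive"),
    ("The battery dies by lunchtime.", "negative"),
]
query = "The interface is clean and the app never crashes."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

print(prompt)
# A zero-shot prompt would drop the labeled examples and rely on the
# instruction alone.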

“A single large model could potentially enable many downstream tasks with little training data,” Xu continued.

The usage of large language models has grown dramatically over the past several years as researchers develop newer — and bigger — architectures. In June 2020, AI startup OpenAI released GPT-3, a 175 billion-parameter model that can generate text and even code given a short prompt containing instructions. Open research group EleutherAI subsequently made available GPT-J, a smaller (6 billion parameters) but nonetheless capable language model that can translate between languages, write blog posts, complete code and more. More recently, Microsoft and Nvidia open sourced a model dubbed Megatron-Turing Natural Language Generation (MT-NLG), which, at 530 billion parameters, is among the largest models for reading comprehension and natural language inference developed to date.

“One reason these large language models remain so remarkable is that a single model can be used for tasks” including question answering, document summarization, text generation, sentence completion, translation and more, Bernard Koch, a computational social scientist at UCLA, told TechCrunch via email. “A second reason is because their performance continues to scale as you add more parameters to the model and add more data … The third reason that very large pre-trained language models are remarkable is that they appear to be able to make decent predictions when given just a handful of labeled examples.”

Startups including Cohere and AI21 Labs also offer models akin to GPT-3 through APIs. Other companies, particularly tech giants like Google, have chosen to keep the large language models they’ve developed in house and under wraps. For example, Google recently detailed — but declined to release — a 540 billion-parameter model called PaLM that the company claims achieves state-of-the-art performance across language tasks.

Large language models, open source or no, all have steep development costs in common. A 2020 study from AI21 Labs pegged the expenses for developing a text-generating model with only 1.5 billion parameters at as much as $1.6 million. Inference — actually running the trained model — is another drain. One source estimates the cost of running GPT-3 on a single AWS instance (p3dn.24xlarge) at a minimum of $87,000 per year.
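
The annual figure is easy to sanity-check with back-of-the-envelope arithmetic. The roughly $10-per-hour effective rate below is an assumption, well under on-demand pricing and in the ballpark of heavily discounted long-term rates for that instance class, not a number from the article.

# Back-of-the-envelope cost of keeping one GPU instance running year-round.
# The ~$10/hour effective rate is an assumption, not a quoted AWS price.
hourly_rate_usd = 10.0      # assumed effective hourly cost for a p3dn.24xlarge
hours_per_year = 24 * 365   # 8,760 hours

annual_cost = hourly_rate_usd * hours_per_year
print(f"Estimated annual cost: ${annual_cost:,.0f}")  # about $87,600 per year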

“Large models will get larger, more powerful, versatile, more multimodal and cheaper to train. Only Big Tech and extremely well-funded startups can play this game,” Vu Ha, a technical director at the AI2 Incubator, told TechCrunch via email. “Large models are great for prototyping, building novel proof-of-concepts and assessing technical feasibility. They are rarely the right choice for real-world deployment due to cost. An application that processes tweets, Slack messages, emails and such on a regular basis would become cost prohibitive if using GPT-3.”

Large language models will continue to be the standard for cloud services and APIs, where versatility and enterprise access matter more than latency. But despite recent architectural innovations, these types of language models will remain impractical for the majority of organizations, whether in academia, the public sector or the private sector.

Fine-tuned language models

Fine-tuned models are generally smaller than their large language model counterparts. Examples include OpenAI’s Codex, a direct descendant of GPT-3 fine-tuned for programming tasks. While still containing billions of parameters, Codex is both smaller than GPT-3 and better at generating — and completing — strings of computer code.

Fine-tuning can improve a model’s ability to perform a task, for example answering questions or generating protein sequences (as in the case of Salesforce’s ProGen). But it can also bolster a model’s understanding of certain subject matter, like clinical research.

“Fine-tuned … models are good for mature tasks with lots of training data,” Xu said. “Examples include machine translation, question answering, named entity recognition, entity linking [and] information retrieval.”

The advantages don’t stop there. Because fine-tuned models are derived from existing language models, they don’t take nearly as much time — or compute — to train or run. (Larger models like those mentioned above can take weeks to train, or require far more computational power to do so in days.) They also don’t require as much data as large language models: GPT-3 was trained on 45 terabytes of text, versus the 159 gigabytes on which Codex was trained.
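
For a sense of what fine-tuning looks like in practice, here is a minimal sketch using the Hugging Face transformers and datasets libraries. The DistilBERT checkpoint and the IMDB sentiment dataset are stand-ins chosen for illustration, not models or data named in the article.

# A minimal fine-tuning sketch: start from a small pretrained model and adapt
# it to a single downstream task (sentiment classification) with modest data.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"  # illustrative stand-in
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")  # illustrative stand-in for domain-specific data

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-sentiment",
    num_train_epochs=1,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)

trainer.train()  # hours on a single GPU, versus weeks for training a large model from scratch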

Fine-tuning has been applied to many domains, but one especially strong, recent example is OpenAI’s InstructGPT. Using a technique called “reinforcement learning from human feedback,” OpenAI collected a data set of human-written demonstrations on prompts submitted to the OpenAI API and prompts written by a team of human data labelers. They leveraged these data sets to create fine-tuned offshoots of GPT-3 that — in addition to being a hundredth the size of GPT-3 — are demonstrably less likely to generate problematic text while closely aligning with a user’s intent.
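
One step in that pipeline, training a reward model on human preference comparisons, can be sketched in a few lines of PyTorch. The tiny network and random "embeddings" below are purely illustrative and are not OpenAI’s implementation.

# A toy sketch of the reward-model step in reinforcement learning from human
# feedback: the model is trained so that the response a human labeler preferred
# receives a higher scalar reward than the rejected one.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Stand-ins for sentence embeddings of a preferred and a rejected response.
preferred = torch.randn(8, 768)
rejected = torch.randn(8, 768)

optimizer.zero_grad()
r_preferred = reward_model(preferred)
r_rejected = reward_model(rejected)

# Pairwise preference loss: push the preferred reward above the rejected one.
loss = -nn.functional.logsigmoid(r_preferred - r_rejected).mean()
loss.backward()
optimizer.step()

# The language model is then fine-tuned (e.g., with a policy-gradient method)
# to produce text that this reward model scores highly.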

In another demonstration of the power of fine-tuning, Google researchers in February published a study claiming that a model far smaller than GPT-3 — fine-tuned language net (FLAN) — bests GPT-3 “by a large margin” on a number of challenging benchmarks. FLAN, which has 137 billion parameters, outperformed GPT-3 on 19 of the 25 tasks the researchers tested it on, and surpassed GPT-3’s performance by a large margin on 10 of them.

“I think fine-tuning is probably the most widely used approach in industry right now, and I don’t see that changing in the short term. For now, fine-tuning on smaller language models allows users more control to solve their specialized problems using their own domain-specific data,” Koch said. “Instead of distributing [very large language] models that users can fine tune on their own, companies are commercializing few-shot learning through API prompts where you can give the model short prompts and examples.”

Edge language models

Edge models, which are purposefully small in size, can take the form of fine-tuned models — but not always. Sometimes, they’re trained from scratch on small data sets to meet specific hardware constraints (e.g., phone or local web server hardware). In any case, edge models — while limited in some respects — offer a host of benefits that large language models can’t match.

Cost is a major one. With an edge model that runs offline and on-device, there aren’t any cloud usage fees to pay. (Even fine-tuned models are often too large to run on local machines; MT-NLG can take over a minute to generate text on a desktop processor.) Tasks like analyzing millions of tweets may rack up thousands of dollars in fees on popular cloud-based models.

Edge models also offer greater privacy than their internet-bound counterparts, in theory, because they don’t need to transmit or analyze data in the cloud. They’re also faster — a key advantage for applications like translation. Apps such as Google Translate rely on edge models to deliver offline translations.

“Edge computing is likely to be deployed in settings where immediate feedback is needed … In general, I would think these are scenarios where humans are interacting conversationally with AI or robots or something like self-driving cars reading road signs,” Koch said. “As a hypothetical example, Nvidia has a demo where an edge chatbot has a conversation with clients at a fast food restaurant. A final use case might be automated note taking in electronic medical records. Processing conversation quickly in these situations is essential.”

Of course, small models can’t accomplish everything that large models can. They’re bound by the hardware found in edge devices, which ranges from single-core processors to GPU-equipped systems-on-chips. Moreover, some research suggests that the techniques used to develop them can amplify unwanted characteristics, like algorithmic bias.
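
A common way to shrink a model toward edge-friendly size is post-training quantization. The sketch below applies PyTorch’s dynamic quantization to a small model and compares the serialized sizes; DistilBERT is again an illustrative stand-in rather than a model discussed in the article.

# A minimal compression sketch: dynamic quantization converts linear-layer
# weights from 32-bit floats to 8-bit integers, shrinking the model and
# speeding up CPU inference at some cost in accuracy.
import os
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Compare serialized sizes on disk.
torch.save(model.state_dict(), "model_fp32.pt")
torch.save(quantized.state_dict(), "model_int8.pt")
print(f"fp32: {os.path.getsize('model_fp32.pt') / 1e6:.0f} MB")
print(f"int8: {os.path.getsize('model_int8.pt') / 1e6:.0f} MB")

Distillation, which Xu mentions below, goes further by training a much smaller student model to imitate a larger one.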

“[There’s usually a] trade off between power usage and predictive power. Also, mobile device computation is not really increasing at the same pace as distributed high-performance computing clusters, so the performance may lag behind more and more,” Xu said.

Looking to the future

As large, fine-tuned and edge language models continue to evolve with new research, they’re likely to encounter roadblocks on the path to wider adoption. For example, while fine-tuning models requires less data compared to training a model from scratch, fine-tuning still requires a dataset. Depending on the domain — e.g., translating from a little-spoken language — the data might not exist.

“The disadvantage of fine-tuning is that it still requires a fair amount of data. The disadvantage of few-shot learning is that it doesn’t work as well as fine-tuning, and that data scientists and machine learning engineers have less control over the model because they are only interacting with it through an API,” Koch continued. “And the disadvantages of edge AI are that complex models cannot fit on small devices, so performance is strictly worse than models that can fit on a single desktop GPU — much less cloud-based large language models distributed across tens of thousands of GPUs.”

Xu notes that all language models, regardless of size, remain understudied in certain important aspects. She hopes that areas like explainability and interpretability — which aim to understand how and why a model works and expose this information to users — receive greater attention and investment in the future, particularly in “high-stakes” domains like medicine.

“Provenance is really an important next step that these models should have,” Xu said. “In the future, there will be more and more efficient fine-tuning techniques … to accommodate the increasing cost of fine-tuning a larger model in whole. Edge models will continue to be important, as the larger the model, the more research and development is needed to distill or compress the model to fit on edge devices.”
