Active learning is the future of generative AI: Here’s how to leverage it

During the past six months, we have witnessed some incredible developments in AI. The release of Stable Diffusion forever changed the art world, and ChatGPT shook up the internet with its ability to write songs, mimic research papers and provide thorough and seemingly intelligent answers to commonly Googled questions.

These advancements in generative AI offer further evidence that we’re on the precipice of an AI revolution.

However, most of these generative AI models are foundation models: high-capacity, unsupervised learning systems that train on vast amounts of data and consume millions of dollars of processing power to do it. Currently, only well-funded institutions with access to a massive amount of GPU power are capable of building these models.

The majority of companies developing the application-layer AI that’s driving the widespread adoption of the technology still rely on supervised learning, using large swaths of labeled training data. Despite the impressive feats of foundation models, we’re still in the early days of the AI revolution and numerous bottlenecks are holding back the proliferation of application-layer AI.

Downstream of the well-known data labeling problem exist additional data bottlenecks that will hinder the development of later-stage AI and its deployment to production environments.

These problems are why, despite the early promise and floods of investment, technologies like self-driving cars have been just one year away since 2014.

These exciting proof-of-concept models perform well on benchmarked datasets in research environments, but they struggle to predict accurately when released in the real world. A major problem is that the technology struggles to meet the higher performance threshold required in high-stakes production environments and fails to hit important benchmarks for robustness, reliability and maintainability.

For instance, these models often can't handle outliers and edge cases, so self-driving cars mistake reflections of bicycles for bicycles themselves. They aren't reliable or robust, so a robot barista makes a perfect cappuccino two out of every five times but spills the cup the other three.

As a result, the AI production gap, the gap between “that’s neat” and “that’s useful,” has been much larger and more formidable than ML engineers first anticipated.


Fortunately, as more and more ML engineers have embraced a data-centric approach to AI development, the implementation of active learning strategies has been on the rise. The most sophisticated companies will leverage this technology to leapfrog the AI production gap and build models capable of running in the wild more quickly.

What is active learning?

Active learning makes training a supervised model an iterative process. The model trains on an initial subset of labeled data from a large dataset. Then, it tries to make predictions on the rest of the unlabeled data based on what it has learned. ML engineers evaluate how certain the model is in its predictions and, by using a variety of acquisition functions, can quantify the performance benefit added by annotating one of the unlabeled samples.
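The acquisition functions mentioned above can take several common forms. As a minimal sketch (the function names and toy probabilities here are illustrative, not from any particular library), three widely used scores over a model's predicted class probabilities look like this:

```python
import numpy as np

def least_confidence(probs: np.ndarray) -> np.ndarray:
    """Uncertainty = 1 minus the probability of the most likely class."""
    return 1.0 - probs.max(axis=1)

def margin(probs: np.ndarray) -> np.ndarray:
    """Uncertainty = small gap between the top two class probabilities."""
    ordered = np.sort(probs, axis=1)
    return 1.0 - (ordered[:, -1] - ordered[:, -2])

def entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of the predicted class distribution."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

# One row per unlabeled sample, one column per class (toy values).
probs = np.array([[0.9, 0.1],
                  [0.5, 0.5],
                  [0.7, 0.3]])
```

Under all three scores, the 50/50 sample ranks as the most uncertain, so it is the most valuable one to send to a human annotator.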

By expressing uncertainty in its predictions, the model is deciding for itself what additional data will be most useful for its training. In doing so, it asks annotators to provide more examples of only that specific type of data so that it can train more intensively on that subset during its next round of training. Think of it like quizzing a student to figure out where their knowledge gap is. Once you know what problems they are missing, you can provide them with textbooks, presentations and other materials so that they can target their learning to better understand that particular aspect of the subject.

With active learning, training a model moves from being a linear process to a circular one with a strong feedback loop.
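That feedback loop can be sketched in a few lines. The following is a simplified illustration using scikit-learn and least-confidence sampling, with ground-truth labels standing in for a human annotator; a production system would route the selected samples to real labelers instead:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Start with a small labeled seed set; treat the rest as unlabeled.
labeled = list(rng.choice(len(X), size=20, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(5):
    # Train on everything labeled so far.
    model.fit(X[labeled], y[labeled])
    # Score uncertainty on the unlabeled pool (least confidence).
    probs = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - probs.max(axis=1)
    # "Annotate" the 20 samples the model is least sure about;
    # here the ground truth plays the role of the human labeler.
    worst = np.argsort(uncertainty)[-20:]
    picked = [unlabeled[i] for i in worst]
    labeled.extend(picked)
    unlabeled = [i for i in unlabeled if i not in picked]
```

Each pass through the loop labels only the samples the model itself flagged as confusing, which is the circular, feedback-driven process described above.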

Why sophisticated companies should be ready to leverage active learning

Active learning is fundamental for closing the prototype-production gap and increasing model reliability.

It’s a common mistake to think of AI systems as static pieces of software, but these systems must be constantly learning and evolving. If not, they make the same mistakes repeatedly, or, when they’re released in the wild, they encounter new scenarios, make new mistakes and don’t have an opportunity to learn from them. They need to have the ability to learn over time, making corrections based on previous mistakes as a human would. Otherwise, models will have issues of reliability and robustness, and AI systems will not work in perpetuity.

Most companies using deep learning to solve real-world problems will need to incorporate active learning into their stack. If they don’t, they’ll lag their competitors. Their models won’t respond to or learn from the shifting landscape of possible scenarios.

However, incorporating active learning is easier said than done. For years, a lack of tooling and infrastructure made it difficult to facilitate active learning. Out of necessity, companies that began taking steps to improve their models’ performance with respect to the data have had to take a Frankenstein approach, cobbling together external tools and building tools in-house.

As a result, they don’t have an integrated, comprehensive system for model training. Instead, they have modular block-like processes that can’t talk to each other. They need a flexible system made up of decomposable components in which the processes communicate with one another as they go along the pipeline and create an iterative feedback loop.

The best ways to leverage active learning

Some companies, however, have implemented active learning to great effect, and we can learn from them. Companies that have yet to put active learning in place can also do a few things to prepare for and make the most of this methodology.

The gold standard for active learning is a stack that forms a fully iterative pipeline. Every component is run with respect to optimizing the performance of the downstream model: data selection, annotation, review, training and validation are done with an integrated logic rather than as disconnected units.

Counterintuitively, the best systems also have the most human interaction. They fully embrace the human-in-the-loop nature of iterative model improvement by opening up entry points for human supervision within each subprocess while also maintaining optionality for completely automated flows when things are working.

The most sophisticated companies therefore have stacks that are iterative, granular, inspectable, automatable and coherent.
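As a rough sketch of what such a stack might look like, the hypothetical `Stage` and `run_pipeline` below (invented for illustration, not from any real tool) model decomposable components that pass state downstream, each with an optional human-in-the-loop review hook that can be dropped for fully automated runs:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Stage:
    """One decomposable pipeline component. `review` is an optional
    human-in-the-loop entry point; when it is None, the stage is
    fully automated."""
    name: str
    run: Callable[[dict], dict]
    review: Optional[Callable[[dict], dict]] = None

def run_pipeline(stages: List[Stage], state: dict) -> dict:
    # Each stage hands its output to the next, so selection, annotation,
    # training and validation share one integrated loop rather than
    # operating as disconnected units.
    for stage in stages:
        state = stage.run(state)
        if stage.review is not None:  # human supervision entry point
            state = stage.review(state)
    return state

# Hypothetical stages; real ones would wrap your data selector,
# annotation tooling, trainer and validator.
pipeline = [
    Stage("select", run=lambda s: {**s, "batch": "uncertain samples"}),
    Stage("annotate", run=lambda s: {**s, "labels": "new labels"},
          review=lambda s: {**s, "reviewed": True}),
    Stage("train", run=lambda s: {**s, "model": "updated model"}),
    Stage("validate", run=lambda s: {**s, "metrics": "eval report"}),
]
state = run_pipeline(pipeline, {})
```

The design choice here is that human review is a first-class, per-stage option rather than a separate system bolted on afterward, which is what makes the stack both inspectable and automatable.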

Companies seeking to build neural networks that take advantage of active learning should build their stacks with the future in mind. These ML teams should project the types of problems they’ll have and understand the issues they’re likely to encounter when attempting to run their models in the wild. What edge cases will they encounter? In what unreasonable way is the model likely to behave?

If ML teams don’t think through these scenarios, models will inevitably make mistakes in a way that a human never would. Those errors can be quite embarrassing for companies, and they are penalized heavily because they’re so misaligned with human behavior and intuition.

Fortunately, for companies just entering the game, there’s now plenty of know-how to be gained from companies that have broken through the production barrier. With more and more companies putting models into production, ML teams can more easily anticipate problems by studying their predecessors, as they will likely face similar issues when moving from proof of concept to production.

Another way to troubleshoot problems before they occur is to think about what a working model looks like beyond its performance metric scores. By thinking about how that model should operate in the wild and the sorts of data and scenarios it will encounter, ML teams will better understand the kinds of issues that might arise once it’s in the production stage.

Lastly, companies should make themselves aware of and understand the tools available to support an active learning and training data pipeline. Five or six years ago, companies had to build infrastructure internally and combine these in-house tools with imperfect external ones. Nowadays, every company should think before they build something internally. New tooling is being developed rapidly, and it’s likely that there’s already a tool that will save time and money while requiring no internal resourcing to maintain it.

Active learning is still in its very early days. However, every month, more companies are expressing an interest in taking advantage of this methodology. The most sophisticated ones will put the infrastructure, tooling and planning in place to harness its power.