Instead of fine-tuning an LLM as a first approach, try prompt architecting

Amid the generative AI eruption, innovation directors are bolstering their businesses’ IT departments in pursuit of customized chatbots or LLMs. They want ChatGPT, but with domain-specific information underpinning its broad functionality, with data security and compliance, and with improved accuracy and relevance.

The question often arises: Should they build an LLM from scratch, or fine-tune an existing one with their own data? For the majority of companies, both options are impractical. Here’s why.

TL;DR: Given the right sequence of prompts, LLMs are remarkably adept at bending to your will. Neither the LLM itself nor its training data needs to be modified to tailor it to your specific data or domain knowledge.

Exhaust the possibilities of a comprehensive “prompt architecture” before considering more costly alternatives. This approach is designed to maximize the value extracted from a variety of prompts, enhancing API-powered tools.

If this proves inadequate (a minority of cases), then a fine-tuning process (which is often more costly due to the data prep involved) might be considered. Building a model from scratch is almost always out of the question.

The sought-after outcome is a way to leverage your existing documents to create tailored solutions that accurately, swiftly, and securely automate frequent tasks or answer frequent queries. Prompt architecture stands out as the most efficient and cost-effective path to achieve this.

What’s the difference between prompt architecting and fine-tuning?

If you are considering prompt architecting, you have likely already explored the concept of fine-tuning. Here is the key distinction between the two:

While fine-tuning involves modifying the underlying foundational LLM, prompt architecting does not.

Fine-tuning is a substantial endeavor that entails retraining a segment of an LLM with a large new dataset — ideally your proprietary dataset. This process imbues the LLM with domain-specific knowledge, attempting to tailor it to your industry and business context.

In contrast, prompt architecting involves leveraging existing LLMs without modifying the model itself or its training data. Instead, it combines a complex and cleverly engineered series of prompts to deliver consistent output.
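
To make the distinction concrete, here is a minimal sketch of a prompt chain, assuming a hypothetical `call_llm` helper that wraps whichever chat-completion API you already use; the prompt wording and step breakdown are illustrative, not a prescribed recipe.

```python
# Minimal sketch of a prompt chain: the model's weights are never touched;
# only the sequence of prompts changes. `call_llm` is a hypothetical wrapper
# around whichever chat-completion API you already use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wrap your provider's chat-completion call here")

def answer_with_prompt_chain(question: str, document: str) -> str:
    # Step 1: constrain the model to the supplied document only.
    excerpts = call_llm(
        "Extract the passages relevant to the question below, verbatim.\n"
        f"Question: {question}\n\nDocument:\n{document}"
    )
    # Step 2: draft an answer grounded in those excerpts.
    draft = call_llm(
        f"Using only these excerpts:\n{excerpts}\n\nAnswer the question: {question}"
    )
    # Step 3: enforce house style (tone and length) in a final pass.
    return call_llm(
        f"Rewrite this answer in a neutral, professional tone, at most 150 words:\n{draft}"
    )
```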

Fine-tuning is appropriate for companies with the most stringent data privacy requirements (e.g., banks)

On the surface, fine-tuning seems efficient: You skip the ordeal of building a new model and simply retrain an existing one with your own data.

Fine-tuning’s surprising hidden cost arises from acquiring the dataset and making it compatible with your LLM and your needs. In comparison, once the dataset is ready, the fine-tuning process (uploading your prepared data, covering the API usage and computing costs) is no drama.
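
For a sense of why the mechanics are the easy part, here is a minimal sketch of the upload-and-train step, assuming the OpenAI Python SDK and a JSONL training file you have already prepared; other providers differ, and the file name and base model are illustrative.

```python
# Sketch of the straightforward part of fine-tuning, assuming the OpenAI
# Python SDK (v1.x). The hard, costly work is producing a high-quality
# training_data.jsonl in the first place.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the prepared dataset.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job; the base model name is illustrative.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```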

Given the high costs, fine-tuning is recommended only when prompt architecting–based solutions have failed.

Not to mention, a robust prompt architecture is often necessary to make optimal use of the outputs of fine-tuning anyway.

When approaching technology partners for fine-tuning activities, inquire about dataset preparation expertise and comprehensive cost estimates. If they omit them, it should raise a red flag, as it could indicate an unreliable service or a lack of practical experience in handling this task.

Generally, valuable fine-tuning cases should undergo a prompt architecture–based proof-of-concept stage before operational investment.

Build secure solutions tailored to your company’s data

Let’s jump into the example of a research tool: a solution that provides near-instant answers to questions relating to hundreds of documents. This tool can be made accessible to employees via a web interface with enterprise-grade security controls and user management. It’s built using an API and tailored to your data and objectives through prompt architecting.

Users can pose questions like “Show me all conversations between Jane Doe and John Smith referencing ‘transaction’” and the tool scans your documents to return easily readable results. It combines retrieval mechanisms with carefully engineered prompts to scan the lengthy text contained in the documents and produce a coherent response.
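
As a rough illustration of that retrieval-then-prompt flow (the naive keyword scoring below stands in for a real vector search, and `call_llm` is again a hypothetical wrapper around your chat-completion API):

```python
# Illustrative retrieval-augmented flow: find the most relevant documents,
# then ask the model to answer using only those excerpts.

def call_llm(prompt: str) -> str:  # hypothetical chat-completion wrapper
    raise NotImplementedError

def retrieve(query: str, documents: list[str], top_k: int = 5) -> list[str]:
    # Toy relevance score: number of query terms shared with each document.
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(query: str, documents: list[str]) -> str:
    context = "\n---\n".join(retrieve(query, documents))
    prompt = (
        "Answer the question using only the excerpts below. "
        "Cite the excerpt you relied on, and say 'not found' if unsure.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```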

Dentons [a Springbok AI customer] recently introduced FleetAI, its proprietary version of ChatGPT and a legal industry first, for analyzing and querying uploaded legal documents.

Building a new LLM from scratch is no small task

If, to achieve the same outcomes, you were to build “your own LLM” from scratch, expect an uphill battle. This ambition is often misguided: it can cost at least $150 million and yield experimental outcomes. Aspiring to create a proprietary LLM puts you in competition with established players like Meta, OpenAI, and Google, as well as the best university research departments.

The number of companies equipped to do this is probably only in the double digits worldwide. What executives usually mean by their “own LLM” is a secure LLM-powered solution tailored to their data; for most, the pragmatic route to that is fine-tuning or prompt architecting.

Basic best practices for prompt architecting

First, create data flow and software architecture diagrams that represent the overall design of the solution, with analytics feedback mechanisms in place.

Next, define guidelines for context-based text enhancement, with prompt templates and a specified tone and length.
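
A template of this kind might look something like the following sketch; the field names and wording are illustrative, not a standard.

```python
# Illustrative prompt template enforcing a specified tone and length.
ENHANCEMENT_TEMPLATE = (
    "You are drafting text for {audience}.\n"
    "Context:\n{context}\n\n"
    "Task: {task}\n"
    "Tone: {tone}. Length: no more than {max_words} words.\n"
    "If the context does not contain the needed information, say so."
)

prompt = ENHANCEMENT_TEMPLATE.format(
    audience="an internal compliance team",
    context="(retrieved document excerpts go here)",
    task="Summarize the key obligations in the excerpts.",
    tone="formal and neutral",
    max_words=120,
)
```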

Then the architecture should be adapted to the chosen output mode, such as a dashboard, conversational interface, or template-based document.

Integration with additional data sources is made possible: databases for efficient data retrieval, Salesforce for CRM communication, and optical character recognition (OCR) capabilities for processing text from images or scanned documents.
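
One way to keep such integrations swappable is to hide each source behind a small common interface, as in this sketch; the class and method names are hypothetical.

```python
# Hypothetical pluggable data-source interface, so the prompt architecture
# does not care whether context comes from a database, a CRM, or OCR output.
from typing import Protocol

class ContextSource(Protocol):
    def fetch(self, query: str) -> list[str]:
        """Return text snippets relevant to the query."""

class DatabaseSource:
    def fetch(self, query: str) -> list[str]:
        return []  # run a parameterized SQL query here

class OcrSource:
    def fetch(self, query: str) -> list[str]:
        return []  # search text previously extracted from scanned documents

def gather_context(query: str, sources: list[ContextSource]) -> str:
    snippets = [s for source in sources for s in source.fetch(query)]
    return "\n---\n".join(snippets)
```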

Finally, implement output quality measures: checks for offensive language, inappropriate tone or length, and false information. When an output fails these criteria, a feedback loop amends the text; once the checks pass, the message is sent to the user.
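
In code, the feedback loop can be as simple as check-then-retry. The checks below are deliberately crude placeholders for whatever moderation, tone, and fact-grounding checks you actually put in place, and `call_llm` is again a hypothetical wrapper.

```python
# Sketch of an output-quality feedback loop: regenerate until the checks
# pass or a retry budget is exhausted.
BANNED_TERMS = {"offensive-example"}  # illustrative block list
MAX_WORDS = 200

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wrap your chat-completion call here")

def check(text: str) -> str | None:
    """Return corrective feedback, or None if the text passes."""
    words = text.lower().split()
    if any(term in words for term in BANNED_TERMS):
        return "Remove inappropriate language."
    if len(words) > MAX_WORDS:
        return f"Shorten the answer to under {MAX_WORDS} words."
    return None  # a fact-grounding check against source excerpts would also go here

def generate_checked(prompt: str, max_retries: int = 3) -> str:
    draft = call_llm(prompt)
    for _ in range(max_retries):
        feedback = check(draft)
        if feedback is None:
            return draft
        draft = call_llm(f"{prompt}\n\nRevise your previous answer: {feedback}")
    return draft  # still failing after retries: flag for human review
```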

There is no guarantee that the LLM will not hallucinate or swerve off track. AI can never reach 100% accuracy. Nonetheless, these response accuracy checks strive to nip anomalous output in the bud.

Key takeaways

Innovation directors seek tailored chatbots and LLMs, facing the dilemma of building from scratch or fine-tuning. For most, both options are impractical.

LLMs are impressively adaptive through well-structured prompts. An exhaustive exploration of prompt architectures is recommended before more costly alternatives, especially given that a prompt architecture will be needed to achieve desired results even if you fine-tune or build a model.