OpenAI brings fine-tuning to GPT-3.5 Turbo

OpenAI customers can now bring custom data to GPT-3.5 Turbo, the lightweight version of GPT-3.5, making it easier to improve the text-generating AI model's reliability while building in specific behaviors.

OpenAI claims that fine-tuned versions of GPT-3.5 can match or even outperform the base capabilities of GPT-4, the company’s flagship model, on “certain narrow tasks.”

“Since the release of GPT-3.5 Turbo, developers and businesses have asked for the ability to customize the model to create unique and differentiated experiences for their users,” the company wrote in a blog post published this afternoon. “This update gives developers the ability to customize models that perform better for their use cases and run these custom models at scale.”

With fine-tuning, companies using GPT-3.5 Turbo through OpenAI's API can make the model follow instructions more reliably, such as having it always respond in a given language. Or they can improve the model's ability to consistently format responses (e.g. for completing snippets of code), as well as hone the "feel" of the model's output, like its tone, so that it better fits a brand's voice.

In addition, fine-tuning enables OpenAI customers to shorten their text prompts to speed up API calls and cut costs. “Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself,” OpenAI claims in the blog post.

Fine-tuning currently requires prepping data, uploading the necessary files and creating a fine-tuning job through OpenAI's API. All fine-tuning data must pass through a "moderation" API and a GPT-4-powered moderation system to check whether it conflicts with OpenAI's safety standards, the company says. But OpenAI plans to launch a fine-tuning UI in the future with a dashboard for checking the status of ongoing fine-tuning workloads.
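The data-prep step above amounts to assembling a JSONL file of chat-formatted training examples, which OpenAI's documentation describes for GPT-3.5 Turbo fine-tuning. The sketch below is illustrative; the example contents are invented, and the commented API calls reflect the Python SDK at launch, so exact names may vary by SDK version:

```python
import json

# Illustrative fine-tuning examples in the chat format OpenAI documents for
# GPT-3.5 Turbo: each line of the JSONL file is one {"messages": [...]} object.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant that always replies in French."},
            {"role": "user", "content": "Where is my order?"},
            {"role": "assistant", "content": "Votre commande est en route et arrivera demain."},
        ]
    },
]

# Write one JSON object per line -- the JSONL layout the upload endpoint expects.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")

# From here the file would be uploaded and a job created through the API,
# roughly like this with the launch-era Python SDK (requires an API key):
#   upload = openai.File.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
#   openai.FineTuningJob.create(training_file=upload.id, model="gpt-3.5-turbo")
```

In practice a real training file would contain many such examples, one per line, all demonstrating the behavior being fine-tuned in.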

Fine-tuning costs are as follows:

  • Training: $0.008 / 1K tokens
  • Usage input: $0.012 / 1K tokens
  • Usage output: $0.016 / 1K tokens

“Tokens” represent chunks of raw text — e.g. “fan,” “tas” and “tic” for the word “fantastic.” A GPT-3.5 Turbo fine-tuning job with a training file of 100,000 tokens, or about 75,000 words, would cost around $2.40, OpenAI says.
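Note that $0.008 per 1K tokens times 100,000 tokens comes to only $0.80; the $2.40 figure works out because OpenAI's example assumes the training data is passed over three times (three epochs, the stated default in its announcement). A minimal sketch of the arithmetic, with the rate and epoch count as assumptions:

```python
# Rough fine-tuning training-cost estimate, assuming the announced rate of
# $0.008 per 1K training tokens and a default of 3 passes (epochs) over the
# data, which is what makes OpenAI's $2.40 example for 100K tokens add up.
def training_cost(tokens: int, epochs: int = 3, rate_per_1k: float = 0.008) -> float:
    return tokens / 1000 * epochs * rate_per_1k

print(round(training_cost(100_000), 2))  # 100K-token file, 3 epochs -> 2.4
```

Actual billing depends on the epoch count the job runs, which is configurable.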

In other news, OpenAI today made available two updated GPT-3 base models (babbage-002 and davinci-002), which can be fine-tuned as well, through a new fine-tuning API endpoint that supports pagination and “more extensibility.” As previously announced, OpenAI plans to retire the original GPT-3 base models on January 4, 2024.

OpenAI said that fine-tuning support for GPT-4 — which, unlike GPT-3.5, can understand images in addition to text — will arrive sometime later this fall, but didn’t provide specifics beyond that.