Mistral AI makes its first large language model free for everyone

The most popular language models out there may be accessed via API, but open models — as far as that term can be taken seriously — are gaining ground. Mistral, a French AI startup that raised a huge seed round in June, has just taken the wraps off its first model, which it claims outperforms others of its size — and it’s totally free to use without restrictions.

The Mistral 7B model is available today for download by various means, including a 13.4-gigabyte torrent (with a few hundred seeders already). The company has also started a GitHub repository and Discord channel for collaboration and troubleshooting.
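For those who want to kick the tires rather than just read about it, here is a minimal sketch of loading and prompting Mistral 7B with the Hugging Face transformers library. The article only mentions the torrent and GitHub repository, so the Hub identifier mistralai/Mistral-7B-v0.1 and the transformers route are assumptions, not something Mistral prescribes:

```python
# Minimal sketch: loading Mistral 7B locally with Hugging Face transformers.
# Assumes the weights are mirrored on the Hugging Face Hub under the
# identifier "mistralai/Mistral-7B-v0.1" (an assumption; the release itself
# points to a torrent and GitHub repo) and that a GPU with enough memory,
# plus the `accelerate` package for device_map="auto", is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to roughly halve memory use
    device_map="auto",          # spread layers across available devices
)

prompt = "Small language models are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```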

Most importantly, the model was released under the Apache 2.0 license, a highly permissive scheme that has no restrictions on use or reproduction beyond attribution. That means the model could be used by a hobbyist, a multi-billion-dollar corporation, or the Pentagon alike, as long as they have a system capable of running it locally or are willing to pay for the requisite cloud resources.

Mistral 7B is a refinement of the “small” large language model formula established by the likes of Llama 2, offering similar capabilities (according to some standard benchmarks) at a considerably smaller compute cost. Foundation models like GPT-4 can do much more, but they are far more expensive and difficult to run, which is why they are made available only through APIs or remote access.

“Our ambition is to become the leading supporter of the open generative AI community, and bring open models to state-of-the-art performance,” wrote Mistral’s team in a blog post accompanying the model’s release. “Mistral 7B’s performance demonstrates what small models can do with enough conviction. This is the result of three months of intense work, in which we assembled the Mistral AI team, rebuilt a top-performance MLops stack, and designed a most sophisticated data processing pipeline, from scratch.”

For some (perhaps most), that list may sound like more than three months’ work, but the founders had a head start in that they had worked on similar models at Meta and Google DeepMind. That doesn’t make it easy, exactly, but at least they knew what they were doing.

Of course, although it can be downloaded and used by anyone, that is very different from being “open source” or some variation of that term, as we discussed last week at Disrupt. Though the license is highly permissive, the model itself was developed privately, with private money, and the datasets and training process remain private.

And that appears to be Mistral’s business model: the 7B model is free to use, but if you want to dig deeper, you’ll want the paid product. “[Our commercial offering] will be distributed as white-box solutions, making both weights and code sources available. We are actively working on hosted solutions and dedicated deployment for enterprises,” the blog post reads.

I asked Mistral for a little extra info on their processes and plans. CEO Arthur Mensch said that not all future models will be released under Apache 2.0, though “some will.” Larger models will be available through an API (presumably paid) rather than a DIY approach. And he declined to offer more detail on the training and dataset assembly processes, saying they were proprietary “so far.”