Google makes more Gemini models available to developers

Google is expanding the range of Gemini large language models it is making available to developers on its Vertex AI platform today.

Gemini 1.0 Pro (which was still known as Gemini Pro 1.0 only a week ago — because Google is very good at branding) is now generally available after being in public preview for a while. Meanwhile, Google says that Gemini 1.0 Ultra (which you may also remember under its previous guise of Gemini Ultra 1.0) is now generally available “via allowlist,” which isn’t exactly how general availability generally works.

Google also today announced Gemini 1.5 Pro (and not Gemini Pro 1.5, of course), an update to its existing Gemini Pro model that, the company says, performs at the level of Gemini 1.0 Ultra, its current flagship model. Perhaps more important, though, is that this model can handle a context window of one million tokens. That’s about 1 hour of video, 30,000 lines of code or more than 700,000 words. This model, which also uses what Google describes as a “new Mixture-of-Experts approach,” is currently in private preview.

In Vertex AI, Google is also now adding support for adapter-based tuning, with support for techniques like reinforcement learning from human feedback and distillation coming soon. In addition, developers can now more easily augment their models with up-to-date data for more complex workflows, and they can now also use function calling, which allows them to connect the Gemini models to external APIs.
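Function calling generally works as a loop: the developer declares the functions the model may request, the model responds with a structured call (a name plus arguments), the developer executes it against the real API and feeds the result back. A minimal sketch of that loop in plain Python, without the SDK itself; the schema format mirrors the OpenAPI-style declarations these APIs typically expect, and names like `get_weather` are purely illustrative:

```python
import json

# A function declaration: an OpenAPI-style schema the model uses to decide
# when, and with which arguments, to request a call. The name and fields
# here are illustrative, not tied to any real weather service.
GET_WEATHER = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> dict:
    # Stand-in for a call to a real external API.
    return {"city": city, "temp_c": 21, "conditions": "sunny"}

# Map declared function names to local handlers.
HANDLERS = {"get_weather": get_weather}

def dispatch(function_call: dict) -> str:
    """Execute the function the model asked for and serialize the result,
    which would then be sent back to the model as the next turn."""
    handler = HANDLERS[function_call["name"]]
    result = handler(**function_call["args"])
    return json.dumps(result)

# Simulate the model responding with a structured function call.
model_response = {"name": "get_weather", "args": {"city": "Berlin"}}
print(dispatch(model_response))
```

In a real integration the declaration would be passed to the model alongside the prompt, and `model_response` would come from the model rather than being hard-coded; the dispatch step is where the connection to external APIs actually happens.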

As for other developer tools, Google calls out that it now offers access to the Gemini API from the Dart SDK so developers can easily use it in their Dart and Flutter apps. It’s also making it easier for developers to use the Gemini API with Project IDX, its experimental web-based integrated development environment, and adding a Gemini integration to Firebase, its mobile development platform, in the form of an extension.