Google’s AlloyDB AI transforms databases to power generative AI apps

AlloyDB, Google’s fully managed PostgreSQL-compatible database service, is gaining a few AI smarts.

Google today announced the launch of AlloyDB AI, an integrated set of capabilities built into AlloyDB for PostgreSQL to support developers in building generative AI apps using their own data. AlloyDB AI, available in preview via AlloyDB Omni (which is moving from a technical preview to public preview), provides built-in support for vector embeddings — delivering the foundation for AI search apps and more.

“AlloyDB AI was built with portability and flexibility in mind … Developers [can] incorporate their real-time data into generative AI applications,” Andi Gutmans, GM and VP of database engineering at Google, wrote in a blog post shared with TechCrunch. “Not only is it PostgreSQL-compatible, but with AlloyDB Omni, customers can take advantage of [AlloyDB AI] to build enterprise-grade, AI-enabled applications everywhere: on premises, at the edge, across clouds or even on developer laptops.”

Vector embeddings — numerical representations of data, including but not limited to text, audio and image data — allow AI algorithms to better understand the relationships between different types of data and their semantic relevance to each other. That’s useful for, say, recommendation engines, which can tap embeddings to find data similar to other data (e.g. similar movies and TV shows). But the use cases extend beyond that — think things like fraud detection and typo correction.
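To make the idea concrete, here is a minimal sketch of how similarity between embeddings is typically measured. The three-dimensional vectors and cosine-similarity metric are illustrative stand-ins — real embedding models emit hundreds or thousands of dimensions, and the specific values below are invented for the example:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    # Values near 1.0 mean the vectors point in nearly the same direction,
    # i.e. the underlying items are semantically similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (purely hypothetical values).
movie_a = [0.9, 0.1, 0.3]  # a sci-fi film
movie_b = [0.8, 0.2, 0.4]  # a similar sci-fi film
movie_c = [0.1, 0.9, 0.2]  # an unrelated romance

print(cosine_similarity(movie_a, movie_b))  # high — similar items
print(cosine_similarity(movie_a, movie_c))  # much lower — dissimilar items
```

A recommendation engine built on embeddings essentially runs this comparison at scale, ranking stored vectors by their similarity to a query vector.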

AlloyDB AI, then, aims to help users transform data within databases — the databases that serve information to generative AI models — into vector embeddings with a single line of code and without a specialized data stack.

PostgreSQL already supports vectors through the popular pgvector extension. But AlloyDB AI takes this support a step further, providing access to Google’s local embeddings models for in-database embeddings generation, as well as cloud embeddings models served via Vertex AI, Google’s platform for building and deploying AI apps.

Both the local and Vertex AI models can generate embeddings on the fly in response to user inputs, Google says, or populate generated database columns automatically via in-database inference.
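In rough terms, the workflow might look like the following SQL. This is a hedged sketch, not confirmed AlloyDB AI syntax: the `embedding()` function signature, the `'textembedding-gecko'` model name, the table, and the column names are all assumptions for illustration:

```sql
-- Hypothetical: generate an embedding on the fly with a single call.
-- embedding() and the model name are assumed, not confirmed syntax.
SELECT embedding('textembedding-gecko', product_description)
FROM products
WHERE id = 42;

-- Hypothetical: a generated column that keeps embeddings in sync
-- automatically as rows are inserted or updated.
ALTER TABLE products
  ADD COLUMN description_embedding vector
  GENERATED ALWAYS AS (
    embedding('textembedding-gecko', product_description)
  ) STORED;
```

The generated-column pattern is what lets embeddings stay current without a separate data pipeline, which appears to be the point of the "single line of code" claim.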

Beyond the models, AlloyDB AI delivers up to 10x faster vector query performance than standard PostgreSQL thanks to what Google describes as “tight integrations” with the AlloyDB query processing engine. AlloyDB AI, in addition, is integrated with Vertex AI Extensions, a set of fully managed tools that help developers connect models to proprietary data or third parties, and LangChain, an open framework designed to simplify the creation of apps that leverage generative AI text models.

In addition to AlloyDB Omni, AlloyDB AI will launch later this year on the AlloyDB managed service. Google says the capabilities can be added to any AlloyDB deployment, at no additional charge, by installing the relevant extensions.

Read more about Google Cloud Next 2023 on TechCrunch