Gradient, a startup that allows developers to build and customize AI apps in the cloud using large language models (LLMs), today emerged from stealth with $10 million in funding led by Wing VC with participation from Mango Capital, Tokyo Black, The New Normal Fund, Secure Octane and Global Founders Capital.
Chris Chang, Gradient’s CEO, co-founded the company several months ago alongside Mark Huang and Forrest Moret; the trio previously worked on AI products at Big Tech firms including Netflix, Splunk and Google. They came to the realization that LLMs like OpenAI’s GPT-4 could be transformative for the enterprise, but believed that getting the most out of LLMs would require a reliable way to add private, proprietary data to them.
“Traditionally, teams have focused on improving a single, generalist model — and existing solutions support this model,” Chang told TechCrunch via email. “This is largely because it was too complex to manage multi-model systems. However, relying on a single model is suboptimal because there’s an inevitable tradeoff in task-specific performance.”
Chang, Huang and Moret designed Gradient, then, to make it easier for teams to deploy “specialized” and fine-tuned LLMs at scale. The platform runs in the cloud, allowing an organization to develop and integrate as many as “thousands” of LLMs into a single system, Chang says.
Gradient customers don’t have to train LLMs from scratch. The platform hosts a number of open source LLMs, including Meta’s Llama 2, which users can fine-tune to their needs. Gradient also offers models aimed at particular use cases (like data reconciliation, context-gathering and paperwork processing) and industries (like finance and law).
Gradient can host and serve models through an API à la Hugging Face, CoreWeave and other AI infrastructure providers. Or it can deploy AI systems in an organization’s public cloud environment, whether Google Cloud Platform, Azure or AWS.
In either case, customers maintain “full ownership” and control over their data and trained models, Chang says.
“The barriers to development are far too high for AI today,” he added. “Building high-performance, custom AI is inaccessible due to the high complexity and cost of setting up the necessary infrastructure and developing new models. We’ve seen that the vast majority of businesses understand the value AI can bring to their business, but struggle to realize the value due to the complexity of adoption. Our platform radically simplifies harnessing AI for a business, which is a tremendous value-add.”
Now, you might ask, as this reporter did, what sets Gradient apart from the other startups engineering tools to pair LLMs with in-house data? And what about the many other companies already customizing LLMs for enterprise clients as a service? They’re reasonable questions.
Take a look at Reka, for example, which recently emerged from stealth to work with companies to build custom-tailored LLM-powered apps. Writer lets customers fine-tune LLMs on their own content and style guides. Contextual AI, Fixie and LlamaIndex, which recently emerged from stealth, are developing tools to allow companies to add their own data to existing LLMs. And Cohere trains LLMs to customers’ specifications.
They’re not the only ones. OpenAI offers a range of model fine-tuning tools, as do incumbents like Google (via Vertex AI), Amazon (via Bedrock) and Microsoft (via the Azure OpenAI Service).
Chang makes the case that Gradient is one of the few platforms that lets companies “productionize” multiple models at once. And, he asserts, it’s affordable — the platform is priced on-demand such that users only pay for the infrastructure they use. (Larger customers have the option of paying for dedicated capacity.)
But even if Gradient isn’t drastically different from its rivals in the LLM dev space, it stands to benefit, and is benefiting, from the massive surge of interest in generative AI, including LLMs. Nearly a fifth of total global VC funding this year has gone to the AI sector alone, according to Crunchbase. And PitchBook expects the generative AI market to reach $42.6 billion in 2023.
“Gradient makes it much easier to develop complex AI systems that leverage many ‘expert LLMs,’” he said. “This approach ensures the AI system consistently achieves the highest performance for each task, all in a single platform … Our platform is designed to make it extremely easy for teams to deploy specialized LLMs, purpose-built for their specific problems, more effectively.”
Gradient claims to be working with around 20 enterprise customers at the moment with “thousands” of users combined. Its near-term goal is scaling the cloud back end and growing its team from 17 full-time employees to 25 by the end of the year.