Roughly two years ago, Microsoft announced a partnership with OpenAI, the AI lab with which it has a close commercial relationship, to build what the tech giant called an “AI Supercomputer” running in the Azure cloud. The machine contained over 285,000 processor cores and 10,000 graphics cards, and Microsoft claimed at the time that it ranked among the largest supercomputer clusters in the world.
Now, presumably to support even more ambitious AI workloads, Microsoft says it’s signed a “multi-year” deal with Nvidia to build a new supercomputer hosted in Azure and powered by Nvidia’s GPUs, networking and AI software for training AI systems.
“AI is fueling the next wave of automation across enterprises and industrial computing, enabling organizations to do more with less as they navigate economic uncertainties,” Scott Guthrie, executive vice president of Microsoft’s cloud and AI group, said in a statement. “Our collaboration with Nvidia unlocks the world’s most scalable supercomputer platform, which delivers state-of-the-art AI capabilities for every enterprise on Microsoft Azure.”
Details were hard to come by at press time. But in a blog post, Microsoft and Nvidia said that the upcoming supercomputer will feature hardware like Nvidia’s Quantum-2 400Gb/s InfiniBand networking technology and recently detailed H100 GPUs. Current Azure instances offer previous-gen Nvidia A100 GPUs paired with Quantum 200Gb/s InfiniBand networking.
Notably, the H100 — the flagship of Nvidia’s Hopper architecture — ships with a special “Transformer Engine” to accelerate machine learning tasks and — at least according to Nvidia — delivers between 1.5 and 6 times better performance than the A100. It’s also less power-hungry, offering the same performance as the A100 with up to 3.5 times better energy efficiency.
One of the first industrial-scale machines to sport H100 GPUs, the Lenovo-built Henri system operated by the Flatiron Institute in New York City, topped this year’s list of the most efficient supercomputers. But Henri — a relatively small system by supercomputer standards, with only a few thousand CPU cores — ranked 405th on the Top500, a ranking of the 500 most powerful supercomputers in the world, suggesting the H100 is by no means a holy grail.
As part of the Microsoft collaboration, Nvidia says that it’ll use Azure virtual machine instances to research advances in generative AI, or the self-learning algorithms that can create text, code, images, video or audio. (Think along the lines of OpenAI’s text-generating GPT-3 and image-producing DALL-E 2.) Meanwhile, Microsoft will optimize its DeepSpeed library for new Nvidia hardware, aiming to reduce computing power and memory usage during AI training workloads, and work with Nvidia to make the company’s stack of AI workflows and software development kits available to Azure enterprise customers.
Why Nvidia would opt to use Azure instances over its own in-house supercomputer, Selene, isn’t entirely clear; the company’s already tapped Selene to train generative AI like GauGAN2, a text-to-image generation model that creates art from basic sketches. Evidently, Nvidia anticipates that the scope of the AI systems that it’s working with will eventually surpass Selene’s capabilities and perhaps even those of Eos, a proof-of-concept supercomputer Nvidia is building that’ll feature 4,608 H100 GPUs and reach up to 18.4 exaflops of AI computing performance.
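The Eos figures quoted above can be sanity-checked with quick arithmetic: 18.4 exaflops spread across 4,608 GPUs works out to roughly 4 petaflops per GPU. That lines up with Nvidia’s published FP8-with-sparsity peak for the H100 — a comparison point assumed here, not one stated in the article.

```python
# Back-of-envelope check on the Eos figures quoted above.
# The ~4 petaflops-per-GPU result matches Nvidia's published
# FP8-with-sparsity peak for the H100 (an assumption on our
# part, not a number from the article).
eos_total_flops = 18.4e18   # 18.4 exaflops of "AI computing performance"
num_gpus = 4_608            # H100 GPUs in Eos

per_gpu_flops = eos_total_flops / num_gpus
print(f"{per_gpu_flops / 1e15:.2f} petaflops per GPU")  # ~3.99
```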
“AI technology advances as well as industry adoption are accelerating. The breakthrough of foundation models has triggered a tidal wave of research, fostered new startups and enabled new enterprise applications,” Manuvir Das, VP of enterprise computing at Nvidia, said in a statement. “Our collaboration with Microsoft will provide researchers and companies with state-of-the-art AI infrastructure and software to capitalize on the transformative power of AI.”
The insatiable demand for powerful AI training infrastructure has led to an arms race of sorts among cloud and hardware vendors. Just this week, Cerebras, which has raised over $720 million in venture capital to date at an over-$4 billion valuation, unveiled a 13.5 million-core AI supercomputer called Andromeda that it claims can achieve more than 1 exaflop of AI compute. Google and Amazon continue to invest in their own proprietary solutions, offering custom-designed chips — e.g. TPUs and Trainium — for accelerating AI training in the cloud.
The push for more powerful hardware will continue for the foreseeable future. A recent study found that the compute requirements for large-scale AI models doubled every 10.7 months on average between 2016 and 2022. And OpenAI once estimated that, if GPT-3 were to be trained on a single Nvidia Tesla V100 GPU, it would take around 355 years.
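The 355-year figure can be roughly reconstructed from public estimates. Assuming GPT-3’s training run consumed about 3.14e23 floating-point operations and a single V100 sustains about 28 teraflops in mixed precision — both assumptions drawn from widely cited estimates, not from the article — the arithmetic is:

```python
# Rough reconstruction of the "355 years on a single V100" estimate.
# Both input figures are assumptions from public estimates,
# not numbers stated in the article.
gpt3_training_flops = 3.14e23   # estimated total training compute, FLOPs
v100_flops_per_sec = 28e12      # ~28 TFLOPS sustained mixed-precision

seconds = gpt3_training_flops / v100_flops_per_sec
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:.0f} years")  # ~355
```

The number is a throughput-only idealization: it ignores memory limits (GPT-3’s parameters alone would not fit on one V100), so it serves purely to illustrate the scale of compute involved.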