Run:ai raises $75M for its AI infrastructure management platform

Tel Aviv-based Run:ai, a startup that makes it easier for developers and operations teams to manage and optimize their AI infrastructure, today announced that it has raised a $75 million Series C funding round led by Tiger Global Management and Insight Partners, the latter of which also led the company’s $30 million Series B round in 2021. Previous investors TLV Partners and S Capital VC also participated in this round, which brings Run:ai’s total funding to $118 million.

Run:ai’s Atlas platform helps its users virtualize and orchestrate their AI workloads, with a focus on optimizing GPU resources, whether those sit on-premises or in the cloud. The platform abstracts the underlying hardware away: developers still interact with the pooled resources through standard tools like Jupyter notebooks, while IT teams get better insight into how those resources are actually being used.
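To make the GPU-visibility piece of that concrete, here is a minimal, generic sketch using the standard Kubernetes Python client — not Run:ai’s actual API — of the kind of cluster-wide view a platform like this aims to provide out of the box: comparing the GPUs that nodes expose against the GPUs that running workloads have claimed. It assumes a reachable cluster where GPUs are advertised via the NVIDIA device plugin.

```python
# Generic illustration (not Run:ai's product API): tally GPU capacity vs.
# GPU requests across a Kubernetes cluster using the official Python client.
from kubernetes import client, config

GPU_RESOURCE = "nvidia.com/gpu"  # resource name exposed by the NVIDIA device plugin


def gpu_capacity_and_requests():
    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()

    # Total GPUs advertised by the cluster's nodes.
    capacity = 0
    for node in v1.list_node().items:
        capacity += int((node.status.capacity or {}).get(GPU_RESOURCE, 0))

    # GPUs claimed by currently running pods.
    requested = 0
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase != "Running":
            continue
        for container in pod.spec.containers:
            limits = (container.resources.limits or {}) if container.resources else {}
            requested += int(limits.get(GPU_RESOURCE, 0))

    return capacity, requested


if __name__ == "__main__":
    cap, req = gpu_capacity_and_requests()
    print(f"GPUs in cluster: {cap}, GPUs requested by running pods: {req}")
```

A plain Kubernetes setup only reports whole-GPU requests like these; the pitch of an orchestration layer such as Run:ai’s is to pool, schedule and share that capacity more efficiently than static per-pod allocations allow.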

The new round comes at a time of fast growth for the company. Its annual recurring revenue grew 9x over the last year and its staff more than tripled, the company tells me. Run:ai CEO Omri Geller attributes this to a number of factors, including the global partner network the company has built to accelerate its growth and the overall momentum for the technology within the enterprise. “As organizations leave the incubation stage and start scaling their AI initiatives, they are unable to meet their expected pace of AI innovation due to severe challenges managing their AI infrastructure,” he said.


He also believes Run:ai has an advantage here: as enterprises modernize their infrastructure, the company’s cloud-native AI orchestration platform, which plugs into Kubernetes environments, fits neatly into that trend. Geller added that these customers are increasingly moving from experiments to production, and that is where the need for an MLOps platform focused on optimizing GPU utilization, like Run:ai’s, quickly becomes apparent.

“As part of the development of our product, we added unique features to help them efficiently manage their AI production clusters and easily deploy inference at scale. Inference workloads require maximum throughput and extremely low latency,” he said. “With Run:ai coordinating job scheduling on inference servers, maximal throughput and low latency are maintained while optimizing GPU utilization to nearly 100%.”

The company plans to use the new funding to grow its team, but Geller said Run:ai will also consider strategic acquisitions to enhance its overall platform. “Our approach will always be to develop in-house and we do not have a specific area in mind for acquisition. However, if the opportunity presents itself for a strategic acquisition of a technology that will accelerate our time to market and help speed up market dominance, we will definitely seize the opportunity,” he said.

“As enterprises in every industry reimagine themselves to become learning systems powered by AI and human talent, there has been a global surge in demand for AI hardware chipsets such as GPUs,” said Lonne Jaffe, managing director at Insight Partners. “As the Forrester Wave AI Infrastructure report recently highlighted, Run:ai creates extraordinary value by bringing advanced virtualization and orchestration capabilities to AI chipsets, making training and inference systems run both much faster and more cost-effectively. Because of explosive demand since 2020, Run:ai has almost quadrupled its customer base, and we couldn’t be more excited to double down on our partnership with Omri and the incredible Run:AI team as they lean into their momentum and Scale Up.”