While some of the largest chip manufacturers are looking to shift their focus to the GPU for their biggest machine learning operations, there's a blooming ecosystem of new chip startups looking to rethink the way processing for AI works.
Graphcore, like some other startups, is looking to rethink the way AI computation works at the substrate level. There isn't a product on the market yet; CEO Nigel Toon says one is on track to reach early-access customers in the first quarter of next year. But the area has been tantalizing enough to convince companies like Google and Apple to design their own silicon to tap this kind of streamlined processing for operations like computer vision, language recognition and other tasks centered on machine learning.
“What this really does is allow us to scale,” Toon said. “We’re already working on a roadmap, we can tack on and drive the development of those really quickly. We can look at some other areas, we can expand so we can support more customers more quickly. I think it really allows us to fundamentally speed up.”
Graphcore’s core product is what the company calls the “intelligence processing unit,” or IPU. That’s more or less a way of saying it’s a new breed of processor designed to do the kinds of rapid-fire calculations machine learning requires, running through thousands or millions of weights in as little time, and with as little power consumption, as possible. It’s something a GPU is good at, but for Toon and some other startups, it’s an area ripe for rethinking and specialization.
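To make concrete what “running through thousands or millions of weights” means, here is a minimal NumPy sketch of a single dense layer’s forward pass. The layer sizes are hypothetical and aren’t tied to any Graphcore design; the point is simply that the workload is dominated by multiply-accumulate operations, one per weight.

```python
# Illustrative sketch: inference through one dense layer is mostly
# multiply-accumulates (MACs) over the learned weights.
# Sizes here are arbitrary examples, not any real chip's workload.
import numpy as np

rng = np.random.default_rng(0)

inputs = rng.standard_normal(256)           # one 256-element input vector
weights = rng.standard_normal((1000, 256))  # 1000 x 256 = 256,000 weights
biases = rng.standard_normal(1000)

# Each of the 1000 outputs needs 256 multiplies and adds,
# so this single layer costs one MAC per weight.
outputs = weights @ inputs + biases
macs = weights.size

print(macs)           # 256000 MACs for one small layer
print(outputs.shape)  # (1000,)
```

A real network stacks many such layers, which is why a chip’s throughput on exactly this kind of arithmetic, per watt, is the metric these startups are competing on.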
Should that bet pay off, the technologies built by Graphcore and startups like Cerebras Systems (which has also received significant funding from Benchmark Capital) could find themselves sitting in devices around the world that demand high-powered machine learning operations. That could mean running inference on the device itself, like a car analyzing live video as it comes in to determine whether or not you’re about to run over a squirrel, or helping optimize training to improve the accuracy of the models that tell you whether or not that’s a squirrel you’re about to run over.
So it’s no surprise that Sequoia would want to get into this game as it chases down a space that’s blossoming into one that can support several startups raising tens of millions of dollars, none of which has yet seen mass product adoption, but whose upside may prove significant enough to justify these kinds of massive early bets. Toon said Graphcore showed up on Sequoia’s radar as the firm was doing diligence in the space.
Then there’s the flurry of activity from existing companies, all of which seem interested in building technology that suits their specific AI needs. Google has the TPU, which plays nicely with TensorFlow; Apple will have its own hardware in the A11 Bionic chip (or whatever other string of modifiers you want to add to that). And there are reports suggesting Tesla may be working with AMD on its own AI chip, so it may be that the world moves to a place where the biggest, most demanding companies simply make their own hardware.
There’s also, of course, Nvidia, which has been the biggest beneficiary in this space and has a massive head start, one that’s sent the company’s stock skyrocketing in the past few years. Originally centered on gaming, the architectures Nvidia built also work well for machine learning workloads like computer vision, turning it into a massive provider of hardware for everything from machine learning to gaming to mining cryptocurrency. Nvidia, for now, serves as a one-stop shop, though it could be ripe for disruption as many massive companies find themselves amid major shifts in technology.
There are definitely going to be significant challenges when it comes to adoption. Nvidia, for example, has its ecosystem locked down with both its hardware and CUDA, its software layer. Prying developers off of CUDA may be a tall order, though Toon said Graphcore’s software will support popular frameworks like TensorFlow, since most developers and companies won’t touch the layer beneath that. Nvidia’s specialization may also help it devise a more powerful AI processing unit, but given the market opportunity (and Nvidia’s stellar run), the space seems big enough for startups like Graphcore to go after those kinds of giants.
“Having [Sequoia Capital] in, it’s really going to allow us to build a big company, which is fundamentally what we’re hoping to do,” Toon said. “This is a massive opportunity. This is the next generation of compute. This is the opportunity for a new player to build an industry standard. I see a strong parallel with what ARM was able to do in the mobile space, but I think the opportunity here is really bigger.”