This morning at the WSJ’s D.Live event, Intel formally unveiled its Nervana Neural Network Processor (NNP) family of chips designed for machine learning use cases. Intel has previously alluded to these chips using the pre-launch name Lake Crest.
The technology underlying the chips is heavily tied to Nervana Systems, a deep learning hardware startup Intel purchased last August for $350 million. Intel’s NNP chips nix the standard cache hierarchy, instead using software to manage on-chip memory, with the goal of achieving faster training times for deep learning models.
Intel has been scrambling in recent months to avoid being completely leveled by Nvidia. By refocusing on the growing AI market, the legacy chip maker surely hopes to build on its industry connections to stay afloat. To this point, Intel has been actively chasing a goal of 100 times greater AI performance by 2020.
The Intel Nervana NNP prioritizes scalability and numerical parallelism. The team is promising robust bi-directional data transfer. Using a proprietary numeric format called Flexpoint, Intel says it can achieve higher degrees of throughput. And by shrinking circuit size, the team notes it has been able to supercharge parallelism while reducing power per computation.
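Flexpoint is broadly described as a shared-exponent numeric format: a tensor is stored as integer mantissas with a single exponent for the whole tensor, so the hardware can do cheap fixed-point arithmetic while retaining floating-point-like dynamic range. Intel hasn’t published the format’s internals, so the sketch below is only an illustration of the general shared-exponent ("block floating point") idea, not Intel’s actual implementation; the function names and the 16-bit mantissa width are assumptions.

```python
import numpy as np

def to_shared_exponent(tensor, mantissa_bits=16):
    """Quantize a float tensor to integer mantissas plus one shared exponent.

    Illustrative sketch of the shared-exponent idea behind formats like
    Flexpoint; Intel's real exponent-management scheme is not public.
    """
    max_int = 2 ** (mantissa_bits - 1) - 1  # largest signed mantissa
    max_abs = np.max(np.abs(tensor))
    if max_abs == 0:
        return np.zeros(tensor.shape, dtype=np.int32), 0
    # Pick the smallest exponent such that the largest value still fits.
    exponent = int(np.ceil(np.log2(max_abs / max_int)))
    scale = 2.0 ** exponent
    mantissas = np.round(tensor / scale).astype(np.int32)
    return mantissas, exponent

def from_shared_exponent(mantissas, exponent):
    """Reconstruct approximate float values from mantissas and exponent."""
    return mantissas.astype(np.float64) * 2.0 ** exponent

values = np.array([0.5, -1.25, 3.0])
mantissas, exponent = to_shared_exponent(values)
restored = from_shared_exponent(mantissas, exponent)
```

Because every element shares one exponent, multiplies and adds on-chip reduce to integer operations, which is where the claimed gains in throughput and power per computation would come from.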
Of course, every player in the market aims to build chips that enable neural network parameters to be distributed across a large number of chips with high efficiency. We’ll know more once these chips hit the market.
Today’s announcement didn’t come with benchmarks; those should arrive in time. Intel says its chips will ship before the end of the year. Facebook has been supporting development by sharing technical insights with Intel.
Intel appears to have every intention of building a full product line around its Nervana NNP chips. A subsequent Xeon processor for AI has been rumored under the code name “Knights Crest.”