EnCharge AI, a company building hardware to accelerate AI processing at the edge, today emerged from stealth with $21.7 million in Series A funding led by Anzu Partners, with participation from AlleyCorp, Scout Ventures, Silicon Catalyst Angels, Schams Ventures, E14 Fund and Alumni Ventures. Speaking to TechCrunch via email, co-founder and CEO Naveen Verma said that the proceeds will be put toward hardware and software development as well as supporting new customer engagements.
“Now was the right time to raise because the technology has been extensively validated through previous R&D all the way up the compute stack,” Verma said. “[It] provides both a clear path to productization (with no new technology development) and basis for value proposition in customer applications at the forefront of AI, positioning EnCharge for market impact … Many edge applications are in an emerging phase, with the greatest opportunities for value from AI still being defined.”
EnCharge AI was co-founded by Verma, Echere Iroaga and Kailash Gopalakrishnan. Verma is the director of Princeton’s Keller Center for Innovation in Engineering Education while Gopalakrishnan was (until recently) an IBM fellow, having worked at the tech giant for nearly 18 years. Iroaga, for his part, previously led semiconductor company Macom’s connectivity business unit as both VP and GM.
EnCharge has its roots in federal grants that Verma received in 2017 alongside collaborators at the University of Illinois at Urbana-Champaign. As part of DARPA’s ongoing Electronics Resurgence Initiative, which aims to broadly advance computer chip tech, Verma led an $8.3 million effort to investigate new types of non-volatile memory devices.
In contrast to the “volatile” memory prevalent in today’s computers, non-volatile memory can retain data without a continuous power supply, making it theoretically more energy efficient. Flash memory and most magnetic storage devices, including hard disks and floppy disks, are examples of non-volatile memory.
DARPA also funded Verma’s research into in-memory computing for machine learning — “in-memory,” here, referring to performing calculations directly within the memory array itself, which avoids the energy and latency cost of shuttling data back and forth between memory and processor.
EnCharge was launched to commercialize Verma’s research with hardware built on a standard PCIe form factor. Using in-memory computing, EnCharge’s custom plug-in hardware can accelerate AI applications in servers and “network edge” machines, Verma claims, while reducing power consumption relative to standard computer processors.
In iterating on the hardware, EnCharge’s team had to overcome several engineering challenges. In-memory computing tends to be sensitive to voltage fluctuations and temperature spikes, so EnCharge designed its chips around capacitors rather than transistors; capacitors, which store an electrical charge, can be manufactured with greater precision and aren’t as affected by shifts in voltage.
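EnCharge hasn’t published its implementation details, but the basic charge-domain idea can be illustrated with a toy numerical model: each weight is held as a discrete capacitor value, a multiply charges that capacitor in proportion to the input, and the accumulate sums charge on a shared read-out line. The sketch below is purely illustrative — the function name, the number of capacitor levels and the noise figure are all assumptions, not EnCharge’s design.

```python
import numpy as np

def analog_matvec(weights, x, cap_levels=256, noise_sigma=0.001, rng=None):
    """Toy model of a charge-domain in-memory matrix-vector multiply.

    Each weight is stored as one of `cap_levels` discrete capacitor
    sizes; a multiply charges the capacitor in proportion to the input,
    and the accumulate sums charge along a shared read-out line.
    Gaussian noise stands in for analog read-out imprecision.
    """
    rng = rng or np.random.default_rng(0)
    # Quantize weights to the capacitor's discrete levels in [-1, 1].
    step = 2.0 / (cap_levels - 1)
    w_stored = np.round(weights / step) * step
    # Charge contributed by each cell ~ stored weight * input value.
    charge = w_stored * x  # broadcasting: (rows, cols) * (cols,)
    # Summing along each row models charge accumulation on the shared
    # line; added noise models read-out error.
    return charge.sum(axis=1) + rng.normal(0.0, noise_sigma, weights.shape[0])

rng = np.random.default_rng(42)
W = rng.uniform(-1, 1, (4, 64))   # a tiny weight matrix
x = rng.uniform(-1, 1, 64)        # an input activation vector

exact = W @ x
approx = analog_matvec(W, x, rng=rng)
print(np.max(np.abs(exact - approx)))  # small quantization + noise error
```

The point of the toy model is that the answer is approximate but cheap: every multiply-accumulate happens where the weight already lives, so no weight data crosses a memory bus — which is where the claimed energy savings come from.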
EnCharge also had to create software that lets customers adapt their AI systems to the custom hardware. Verma says that the software, once finished, will allow EnCharge’s hardware to run different types of neural networks while remaining scalable.
“EnCharge products provide orders-of-magnitude gains in energy efficiency and performance,” Verma said. “This is enabled by a highly robust and scalable next-generation technology, which has been demonstrated in generations of test chips, scaled to advanced nodes and scaled-up in architectures. EnCharge is differentiated from both digital technologies that suffer from existing memory- and compute-efficiency bottlenecks and beyond-digital technologies that face fundamental technological barriers and limited validation across the compute stack.”
Those are lofty claims, and it’s worth noting that EnCharge hasn’t begun to mass produce its hardware yet — and doesn’t have customers lined up. (Verma says that the company is pre-revenue.) In another challenge, EnCharge is going up against well-financed competition in the already saturated AI accelerator hardware market. Axelera and GigaSpaces are both developing in-memory hardware to accelerate AI workloads. NeuroBlade last October raised $83 million for its in-memory inference chip for data centers and edge devices. Syntiant, not to be outdone, is supplying in-memory, speech-processing AI edge chips.
But the funding it has managed to secure so far suggests that investors, at least, have faith in EnCharge’s roadmap.
“As Edge AI continues to drive business automation, there is huge demand for sustainable technologies that can provide dramatic improvements in end-to-end AI inference capability along with cost and power efficiency,” Anzu Partners’ Jimmy Kan said in a press release. “EnCharge’s technology addresses these challenges and has been validated successfully in silicon, fully compatible with volume production.”
EnCharge has roughly 25 employees and is based in Santa Clara.