Amazon Web Services today announced Amazon Elastic Inference, a new service that lets customers attach GPU-powered inference acceleration to any Amazon EC2 instance and reduces deep learning costs by up to 75 percent.
“What we see typically is that the average utilization of these P3 instances’ GPUs is about 10 to 30 percent, which is pretty wasteful. With Elastic Inference, you don’t have to waste all that cost and all that GPU,” AWS chief executive Andy Jassy said onstage at the AWS re:Invent conference earlier today. “[Amazon Elastic Inference] is a pretty significant game changer in being able to run inference much more cost-effectively.”
Amazon Elastic Inference will also be available for Amazon SageMaker notebook instances and endpoints, “bringing acceleration to built-in algorithms and to deep learning environments,” the company wrote in a blog post. It will support machine learning frameworks TensorFlow, Apache MXNet and ONNX.
It’s available in three sizes:
- eia1.medium: 8 teraflops of mixed-precision performance.
- eia1.large: 16 teraflops of mixed-precision performance.
- eia1.xlarge: 32 teraflops of mixed-precision performance.
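As a rough illustration of how an accelerator size is attached at launch, the sketch below builds the request parameters for EC2’s RunInstances call, where accelerators are requested via the `ElasticInferenceAccelerators` parameter. This is a minimal sketch: the AMI ID and instance type are placeholder assumptions, not values from the announcement.

```python
# Sketch: requesting an Elastic Inference accelerator at EC2 launch time.
# The "ElasticInferenceAccelerators" field mirrors the EC2 RunInstances API;
# the AMI ID and instance type below are hypothetical placeholders.

def run_instances_params(image_id, instance_type, accelerator_size):
    """Build keyword arguments for an EC2 RunInstances request
    (e.g. boto3's ec2_client.run_instances(**params))."""
    return {
        "ImageId": image_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        # Attach GPU-powered inference acceleration to the instance.
        "ElasticInferenceAccelerators": [{"Type": accelerator_size}],
    }

params = run_instances_params("ami-0123456789abcdef0", "c5.large", "eia1.medium")
print(params["ElasticInferenceAccelerators"])
```

In practice these parameters would be passed to `boto3.client("ec2").run_instances(**params)`; the point is that the accelerator is a per-instance attachment rather than a separate GPU instance type.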
More details on the new service are available in AWS’s announcement blog post.