Thousands of companies, from Dropbox to Netflix, rely on Amazon Web Services to provide storage and computing in the cloud. Amazon’s cloud computing offerings range from storage to on-demand computing cycles. But Amazon wants companies to ask themselves: what could their engineers do if they had access to a supercomputer?
Today at the Web 2.0 Summit, Amazon highlighted a combination of existing services that allows companies to spin up the equivalent of a supercomputer to solve big data problems. Amazon uses these services itself to better handle the 50 million changes per week to its retail catalog of 1.5 billion items. Depending on the job, it could require a combination of Amazon S3 (its cloud storage service), EC2 (Elastic Compute Cloud), and Elastic MapReduce (hosted Hadoop clusters).
Yelp uses the approach to autocorrect spelling in its millions of reviews. Cycle Computing spun up a cluster of 30,000 computing cores that would have cost an estimated $18 million to build in-house; instead, it cost just $1,300 per hour of data crunching.
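To make the Yelp example concrete: an Elastic MapReduce job splits work into a map step that runs in parallel across a cluster and a reduce step that aggregates the results. Below is a minimal local sketch of that pattern for counting misspellings in reviews; the correction table, sample reviews, and function names are hypothetical illustration, not Yelp's actual pipeline.

```python
from collections import defaultdict

# Hypothetical misspelling -> correction table; a real job would derive
# corrections from the review corpus rather than a fixed dictionary.
CORRECTIONS = {"resturant": "restaurant", "delicous": "delicious"}

def map_review(review_id, text):
    """Map step: emit (misspelling, correction) pairs found in one review."""
    for word in text.lower().split():
        if word in CORRECTIONS:
            yield word, CORRECTIONS[word]

def reduce_counts(pairs):
    """Reduce step: tally how often each misspelling appears."""
    counts = defaultdict(int)
    for word, _correction in pairs:
        counts[word] += 1
    return dict(counts)

reviews = [
    (1, "Great resturant with delicous food"),
    (2, "The resturant was loud"),
]

pairs = [p for rid, text in reviews for p in map_review(rid, text)]
print(reduce_counts(pairs))  # prints {'resturant': 2, 'delicous': 1}
```

On Elastic MapReduce, the same map and reduce functions would run as Hadoop tasks spread across many EC2 instances, reading reviews from S3 instead of an in-memory list.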
We have now come full circle to renting time on massive computing resources. Shouldn’t IBM be doing this?
Update: In an earlier version of this post, I misidentified the service as Amazon Elastic MapReduce (EMR), which is what I heard while liveblogging. It is actually a combination of existing services that together provide supercomputing capabilities. No new services were announced today.