AWS Server Issues Take Down Instagram, Vine, Airbnb And IFTTT

Those of you looking to spend the rest of today watching people do other things on Instagram or Vine probably just had a rough time trying to do it. Both services went offline for over an hour, most likely because of issues with Amazon Web Services.

Instagram was the first of the two to publicly acknowledge its issues on Twitter, and Vine followed suit half an hour later.

The deluge of tweets accompanying the services’ initial hiccups started at around 4 p.m. Eastern time, and only grew in intensity as users found they couldn’t share pictures of their food or their meticulously crafted video snippets. Some further poking around on Twitter and beyond revealed that other services known to rely on AWS — Netflix, IFTTT, Heroku and Airbnb, to name a few — had been experiencing similar issues today. At this point Instagram and Vine are slowly coming back online for most users, and the trickle of tweets bemoaning a Netflix outage has slowly dried up, but IFTTT’s website is still out of commission.

A quick look at AWS’s health dashboard reveals that the company’s Northern Virginia data center is having issues that are likely at the heart of all this (a situation that Airbnb confirmed in a tweet earlier this afternoon). The company has been dutifully reporting problems with its EC2, relational database and load-balancing services for the past two hours, though recent updates indicate that it has tracked down the root cause and is currently cleaning up the remaining mess. Amazon figured out what happened with EC2 first:

2:21 PM PDT We have identified and fixed the root cause of the performance issue. EBS backed instance launches are now operating normally. Most previously impacted volumes are now operating normally and we will continue to work on instances and volumes that are still experiencing degraded performance.

And then moved on to the load-balancing issues:

2:45 PM PDT We have identified and fixed the root cause of the connectivity issue affecting load balancers in a single availability zone. The connectivity impact has been mitigated for load balancers with back-end instances in multiple availability zones. We continue to work on load balancers that are still seeing connectivity issues.
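Amazon's note points at a standard resilience pattern: a load balancer whose back-end instances sit in more than one availability zone can keep serving traffic when a single zone degrades. A minimal sketch of that idea, assuming a hypothetical list of instance records with an `az` field (this is illustrative, not an AWS API call):

```python
def spans_multiple_azs(instances):
    """Return True if the back-end instances behind a load balancer
    span more than one availability zone, i.e. the setup that Amazon
    says was insulated from this single-zone connectivity issue."""
    return len({inst["az"] for inst in instances}) > 1

# Hypothetical back-end fleet spread across two zones in us-east-1,
# the Northern Virginia region affected by today's outage.
backends = [
    {"id": "i-0a1", "az": "us-east-1a"},
    {"id": "i-0b2", "az": "us-east-1b"},
]
print(spans_multiple_azs(backends))  # True
```

A fleet confined to the one troubled zone would fail this check, which is roughly why single-zone load balancers were the ones still seeing connectivity problems.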

In any case, it appears that the worst is over, but we’ll continue to keep an eye on the situation for the time being. I hope your time spent away from those services was spent wisely (i.e., not just moaning about it on Twitter).