Cloud Latency Issues? Dedicated Network Connections Will Help

Editor’s note: Jelle Frank van der Zwet is Segment Marketing Manager, Cloud, for Interxion, and David Strom is a freelance writer. 

Do you ever encounter delays in loading web-based applications and get impatient? These delays are the result of network latency, the time it takes data to travel from one location to another. They are frustrating for end users, but far more costly to Internet businesses that depend on lightning-quick web experiences for their customers. With the proliferation of cloud computing placing added demands on Internet speed and connectivity, latency is becoming a more critical concern for everyone, from the end user to the enterprise.

Pre-Internet, latency was characterized by the number of router hops between you and your application, and the time it took packets to travel from source to destination. For the most part, your corporation owned all of the intervening routers, so network delays remained fairly consistent and predictable. As businesses migrate and deploy more and more applications to the cloud, the issue of latency is becoming increasingly complex. To leverage cloud computing as a true business enabler, it is critical that organizations learn how to manage and reduce latency.
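If you want to see that path for yourself, counting the hops to an endpoint is a quick start. The sketch below is a minimal illustration, assuming a Unix-like system with the traceroute utility installed; example.com is only a placeholder for your own application endpoint.

```python
# Minimal sketch: count the router hops between you and an application endpoint.
# Assumes a Unix-like system with the `traceroute` utility on the PATH; the host
# name is a placeholder.
import subprocess

def hop_count(host: str) -> int:
    """Run traceroute with numeric output and count the reported hops."""
    result = subprocess.run(
        ["traceroute", "-n", host],
        capture_output=True, text=True, timeout=120,
    )
    # The first line is a header ("traceroute to ..."); each following line is one hop.
    hop_lines = [line for line in result.stdout.splitlines()[1:] if line.strip()]
    return len(hop_lines)

if __name__ == "__main__":
    print(f"hops to example.com: {hop_count('example.com')}")
```

The more of those hops that sit outside your own network, the less predictable the end-to-end delay becomes.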

The first step in reducing latency is to identify its causes. To do this, we must examine latency not as it relates to the web, but as it relates to the inherent components of cloud computing. After some analysis, you’ll see that it isn’t just poor latency in the cloud that causes problems, but the unpredictable nature of the various network connections between on-premise and cloud applications. We’ve provided below five layers of complexity that are unpredictable in nature and must therefore be considered when migrating applications to the cloud.
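Before getting into those layers, a practical way to start identifying causes is to time the stages of a single request and see where the delay actually sits: in name resolution, in setting up the TCP connection, in the TLS handshake, or in waiting for the server's first byte. The sketch below is a rough illustration using only the Python standard library; the host is a placeholder for your own on-premise or cloud endpoint.

```python
# Rough sketch: break one HTTPS request into the stages where latency can hide
# (DNS lookup, TCP connect, TLS handshake, time to first byte). The host below is
# a placeholder; point it at your own on-premise or cloud endpoint to compare paths.
import socket
import ssl
import time

def timed(fn):
    """Run fn() and return its result plus the elapsed time in milliseconds."""
    start = time.perf_counter()
    result = fn()
    return result, (time.perf_counter() - start) * 1000

def stage_timings(host: str, port: int = 443) -> dict:
    # DNS resolution
    addrinfo, dns_ms = timed(lambda: socket.getaddrinfo(host, port, type=socket.SOCK_STREAM))
    ip = addrinfo[0][4][0]

    # TCP connect
    sock, connect_ms = timed(lambda: socket.create_connection((ip, port), timeout=5))

    # TLS handshake
    context = ssl.create_default_context()
    tls_sock, tls_ms = timed(lambda: context.wrap_socket(sock, server_hostname=host))

    # Time to first byte of the HTTP response
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    tls_sock.sendall(request.encode())
    _, ttfb_ms = timed(lambda: tls_sock.recv(1))
    tls_sock.close()

    return {"dns": dns_ms, "tcp_connect": connect_ms, "tls": tls_ms, "ttfb": ttfb_ms}

if __name__ == "__main__":
    for stage, ms in stage_timings("example.com").items():
        print(f"{stage:12s} {ms:7.1f} ms")
```

Running it against both an on-premise endpoint and its cloud counterpart makes the difference between the two network paths visible at a glance.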

Despite its complexity, latency also presents a great opportunity for innovative cloud-based solutions, such as the Radar benchmarking tool from Cedexis, which provides insight into what goes on across various IaaS providers, as well as tools like Gomez, which are helpful for comparing providers. While tools are of course helpful for providing insight into trends, the overarching solution to mitigating cloud latency is more consistent network connections.
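As a simplified stand-in for what these benchmarking tools do, the sketch below probes a few endpoints repeatedly and compares their median connect latency. The endpoint names are purely illustrative placeholders, not real provider URLs.

```python
# Simplified stand-in for provider benchmarking: probe a few endpoints repeatedly
# and compare median TCP-connect latency. The endpoints are illustrative placeholders.
import socket
import statistics
import time

ENDPOINTS = {
    "provider-a-eu": "eu.example-a.com",
    "provider-b-eu": "eu.example-b.com",
    "provider-a-us": "us.example-a.com",
}

def connect_ms(host: str, port: int = 443) -> float:
    """Time a single TCP handshake to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

def benchmark(samples: int = 10) -> dict:
    results = {}
    for name, host in ENDPOINTS.items():
        try:
            results[name] = statistics.median(connect_ms(host) for _ in range(samples))
        except OSError:
            results[name] = float("inf")  # unreachable endpoint
    return results

if __name__ == "__main__":
    for name, median in sorted(benchmark().items(), key=lambda item: item[1]):
        print(f"{name:15s} {median:7.1f} ms (median)")
```

Repeating such probes over time, and from more than one location, is what reveals the variability that a single snapshot hides.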

The best available option is a dedicated connection to a public cloud platform. Amazon’s Direct Connect is the best offering we’ve seen for providing more predictable bandwidth and latency. [Disclosure: Interxion recently announced a direct connect to the AWS platform in each of its data centers.] Another viable option is Windows Azure. Both platforms are particularly useful for companies looking to build hybrid solutions, as they allow some data to be kept on-premise while other components of the solution are migrated to the cloud.
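For readers curious what provisioning such a dedicated link looks like in practice, the following is a hedged sketch using the boto3 Direct Connect client; the region, location code, bandwidth and connection name are placeholders, and a real deployment also requires a physical cross-connect at the chosen facility.

```python
# Hedged sketch: list AWS Direct Connect locations and request a dedicated connection
# with boto3. The region, location code ("EqDC2"), bandwidth and connection name are
# placeholders; ordering the connection still requires a physical cross-connect.
import boto3

dx = boto3.client("directconnect", region_name="eu-west-1")

# See which Direct Connect locations (typically colocation facilities) are available.
for location in dx.describe_locations()["locations"]:
    print(location["locationCode"], "-", location["locationName"])

# Request a 1 Gbps dedicated connection at a chosen location.
connection = dx.create_connection(
    location="EqDC2",
    bandwidth="1Gbps",
    connectionName="hybrid-app-dx",
)
print("connection state:", connection["connectionState"])
```

Once the link is up, traffic between the on-premise environment and the cloud platform rides the dedicated connection rather than the public Internet, which is where the more predictable latency comes from.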

Finally, by colocating within a third-party data center, companies can rest assured that their cloud applications are equipped to handle latency’s challenges, and they can reap extra benefits in monitoring, troubleshooting, support, and cost. Colocation facilities that offer dedicated Cloud Hubs can provide excellent connectivity and cross-connections with cloud providers, exchanges and carriers, which improve performance and reduce latency to end users. Furthermore, colocation data centers ensure that companies not only have the best coverage for their business, but also a premium network at their fingertips.

In this connected, always-on world, users increasingly expect websites and applications to respond immediately. For businesses looking to boost ROI and maintain customer satisfaction, every millisecond counts. The many dimensions and complicating factors of latency can cause real disruption for both users and providers of cloud services, but dedicated network connections help avoid these pitfalls and achieve optimal cloud performance.
