Cloud Latency Issues? Dedicated Network Connections Will Help

Editor’s note: Jelle Frank van der Zwet is Segment Marketing Manager, Cloud, for Interxion, and David Strom is a freelance writer. 

Do you ever get impatient waiting for a web-based application to load? These delays are the product of network latency, the time it takes data to travel from one location to another. They are frustrating for end users, but far more costly to Internet businesses that depend on lightning-quick web experiences for their customers. With the proliferation of cloud computing placing added demands on Internet speed and connectivity, latency is becoming a more critical concern for everyone, from the end user to the enterprise.

Pre-cloud, latency was characterized by the number of router hops between you and your application and by the delay packets incurred getting from source to destination. For the most part, your corporation owned all of the intervening routers, so network delays remained fairly consistent and predictable. As businesses migrate and deploy more and more applications to the cloud, latency becomes increasingly complex. To leverage cloud computing as a true business enabler, organizations must learn how to manage and reduce it.

The first step in reducing latency is to identify its causes. To do this, we must examine latency not as it relates to the web in general, but as it relates to the inherent components of cloud computing. After some analysis, you’ll see that it isn’t just poor latency in the cloud that causes problems, but the unpredictable nature of the various network connections between on-premises and cloud applications. Below are five layers of complexity, each unpredictable in nature, that must be considered when migrating applications to the cloud.

  • Distributed computing is one component adding to cloud latency’s complexity. With the self-contained enterprise data center fading, applications have gone from being contained within a local infrastructure to being distributed all over the world, and the proliferation of Big Data tools such as R and Hadoop is incentivizing distributed computing even more. The problem lies in the fact that these globally deployed components each see different latency on their Internet connections, and those latencies depend entirely on Internet traffic, which waxes and wanes as flows compete for the same bandwidth and infrastructure.
  • Virtualization adds another layer of complexity to latency in the cloud. Gone are the days of one application per rack-mounted server; enterprises are building virtualized environments for consolidation and cost efficiency, and today’s data centers are now a complex web of hypervisors running dozens of virtual machines each. Unfortunately, virtualized network infrastructure can introduce its own series of packet delays before the data even leaves the rack.
  • Another layer of complexity lies in the lack of measurement tools suited to modern applications. Ping and traceroute can be used to test an Internet connection, but they rely on ICMP, a protocol modern applications rarely use. Applications and networks instead speak protocols such as HTTP and FTP, so their performance needs to be measured at those layers (see the first sketch after this list).
  • Prioritizing traffic and Quality of Service (QoS) delve deeper into cloud latency’s complexity. Pre-cloud, Service Level Agreements (SLAs) and QoS rules were created to prioritize traffic and to make sure that latency-sensitive applications would have the network resources to run properly. Cloud and virtualized services make this a dated process, given that we now need to differentiate between failures in a server, a network card, a piece of the storage infrastructure, or a security exploit. Different cloud applications also have different tolerances for network latency, depending on their level of criticality; while an application controlling back-office reporting may tolerate lower uptime, not all corporate processes can allow for downtime without a significant impact on the business. This makes it increasingly important for SLAs to prioritize particular applications based on performance and availability (see the second sketch after this list).
  • Evasive cloud providers, who sometimes neglect to tell you where their cloud data centers are located, add a final layer of complexity. To really understand your latency, you should know the answers to questions such as:
    • Are your VMs stored on different SANs or running on different hypervisors?
    • Do you have any say in decisions that will impact your own latency?
    • How many router hops are there in your cloud provider’s internal network, and how much bandwidth does its own infrastructure use?
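
The measurement point above deserves a concrete illustration. Here is a minimal sketch in Python that times full HTTP round trips rather than ICMP echoes; the endpoint URLs are placeholders standing in for wherever your application’s components actually live:

```python
import time
import urllib.request

# Placeholder endpoints; substitute the regions/URLs your application uses.
ENDPOINTS = [
    "https://us.example.com/health",
    "https://eu.example.com/health",
    "https://ap.example.com/health",
]

def http_latency(url: str, samples: int = 5) -> float:
    """Average wall-clock time for a full HTTP round trip, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()  # include the transfer, not just time-to-first-byte
        total += time.perf_counter() - start
    return total / samples * 1000

for url in ENDPOINTS:
    print(f"{url}: {http_latency(url):.1f} ms")
```

Unlike ping, each sample here includes DNS resolution, TCP and TLS setup, and the transfer itself, which is what end users actually experience. Sampling endpoints in different regions also exposes the connection-to-connection variability described in the distributed-computing point.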

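The QoS point can be made concrete as well. Traffic prioritization typically works by marking packets, and the sketch below (Python again, with a placeholder host) tags a connection with the DSCP Expedited Forwarding class commonly used for latency-sensitive traffic:

```python
import socket

# DSCP "Expedited Forwarding" (decimal 46), shifted into the legacy IP TOS byte.
EF_TOS = 46 << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the OS to mark this connection's outgoing IP packets as latency-sensitive.
# (IP_TOS is a POSIX socket option; availability varies by platform.)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
sock.connect(("example.com", 443))  # placeholder host
```

The catch, and part of why SLA-style QoS has become dated in the cloud, is that such markings are generally honored only on networks you control; carriers on the public Internet typically ignore or rewrite them. A dedicated connection is one way to keep that control end to end.
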
For all its complexity, latency has created a great opportunity for innovative cloud-based tools, such as the Radar benchmarking tool from Cedexis, which provides insight into what goes on across various IaaS providers, and Gomez, which is helpful for comparing providers. But while such tools provide useful insight into trends, the overarching answer to measuring and mitigating cloud latency is a more consistent network connection.

The best available option is a dedicated connection to a public cloud platform. Amazon’s Direct Connect is the best we’ve seen at providing predictable bandwidth and latency. [Disclosure: Interxion recently announced a direct connection to the AWS platform in each of its data centers.] Windows Azure offers another viable option. Both products are particularly useful for companies looking to build hybrid solutions, as they allow some data to be stored on premises while other solution components are migrated to the cloud.

Finally, by colocating within a third-party data center, companies can rest assured that their cloud applications are equipped to handle latency’s challenges, and they reap extra benefits in monitoring, troubleshooting, support, and cost. Colocation facilities that offer dedicated Cloud Hubs can provide excellent connectivity and cross-connections with cloud providers, exchanges, and carriers, improving performance and reducing latency to end users. Such data centers ensure that companies have not only the best coverage for their business, but also a premium network at their fingertips.

In this connected, always-on world, users increasingly demand immediate results from websites and applications. For businesses looking to boost ROI and maintain customer satisfaction, every millisecond counts. Latency’s many dimensions and complicating factors can introduce disturbances for both users and providers of cloud services, but dedicated network connections help avoid these pitfalls and achieve optimal cloud performance.