What’s network latency and why does it matter?
We’ll be frank: Sluggish web pages are the scourge of the digital earth. In the we-want-it-yesterday demands of our modern lives, “slow” is unacceptable. We’ll often close our browser windows in a frustrated huff, because we don’t have the time or patience to—gasp!—wait for a page to load. (The horror!)
You may ask yourself, “Hey, I pay big bucks for high-speed Internet. What gives?”
Well, to put it simply, you’re not in control. In fact, many things beyond your control determine how quickly a page actually loads. Whether you’re running big data solutions, operating an online store, or supporting a global team that accesses files on your company’s network, nothing (least of all slow data transfer speeds) should keep you from making that sale or letting your employees be as productive as they can be.
Why do some pages load more slowly than others?
It could be as simple as bad code or massive images, for starters. But slow page loads can also be caused by network latency. Not to insult your intelligence or anything, but you ought to know that data isn’t just floating out there in some amorphous space. In reality, data is stored on physical hard drives—somewhere out there. Network connectivity provides a path for that data to travel to end users around the world. That connectivity can vary significantly—depending on how far it’s going, how many times the data has to hop between service providers, how much bandwidth is available along the way, the other data traveling across the same path, and a number of other variables.
The measure of that delay between two connected points is called network latency. Network latency is the amount of time it takes a packet of data to get from one place to another.
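To make that concrete, here is a minimal sketch (Python, standard library only) of one common way to sample latency: timing how long it takes to complete a TCP handshake with a remote host. The host name in the commented example is purely illustrative.

```python
import socket
import time

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate one network round trip by timing a TCP connection handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about the elapsed time
    return (time.perf_counter() - start) * 1000.0

# Example (hypothetical host). Sample several times and keep the best,
# since any single measurement also includes transient queuing noise:
# samples = [tcp_latency_ms("example.com") for _ in range(5)]
# print(f"best of 5: {min(samples):.1f} ms")
```

Tools like ping and traceroute do essentially the same thing at the ICMP level.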
What is network latency?
Much like Superman, data can travel at the speed of light across optical fiber network cables. In practice (and unlike Superman), data typically travels slower than that. If a network connection doesn’t have any available bandwidth capacity, data might temporarily queue up to wait for its turn to travel across the line. If a service provider’s network doesn’t route a network path optimally, data could be sent hundreds or thousands of miles away from the destination in the process of routing to the destination. These kinds of delays and detours lead to higher network latency—which leads to slower page loads and download speeds.
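The bandwidth effect is easy to put numbers on: a packet cannot start transmitting until the packets ahead of it have finished, so a saturated link adds queuing delay on top of the travel time. A back-of-the-envelope sketch, where the link speed, packet size, and queue depth are illustrative assumptions:

```python
def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Time to clock one packet onto the wire at a given link speed."""
    return packet_bytes * 8 / link_bps * 1000.0

# A 1,500-byte packet on a 10 Mbps link takes 1.2 ms just to serialize...
delay = serialization_delay_ms(1500, 10e6)

# ...so if 50 packets are already queued ahead of yours on that link,
# you wait roughly 60 ms before your packet even starts traveling.
queue_wait = 50 * delay

print(f"per packet: {delay:.1f} ms, queue wait: {queue_wait:.1f} ms")
```

That queuing delay is why a "fast" connection can still feel slow when a link along the path is full.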
Network latency is measured in milliseconds (that’s 1,000 milliseconds per second). While a few thousandths of a second may not mean much to us as we go about our business, those milliseconds can be the deciding factors for whether we stay on a webpage or end up screaming at our computer screens. As high-speed Internet consumers, we want what we want—when we want it. (Yes, we’re spoiled, but we already know that.) And the stakes of lag can be much higher than delayed gratification: in the financial sector, milliseconds can mean billions of dollars in gains or losses from trade transactions on a daily basis.
No matter why we want it when we do, everyone wants the lowest network latency to the greatest number of users.
How to minimize network latency
If our shared goal is to minimize latency for our data, the most common approaches to addressing network latency involve limiting the number of potential variables that impact the speed of data’s movement. While we don’t have complete control over how our data travels across the Internet, we can do a few things to keep our network latency in line:
Distribute data around the world. Users in different locations can pull data from a location that’s geographically close to them. Because the data is closer to the users, it has a shorter distance to travel and is handed off fewer times along the way, so inefficient routing is less likely to cause a significant performance impact.
Provision servers with high-capacity network ports. Huge volumes of data can travel to and from the server every second. If packets are delayed due to fully saturated ports, milliseconds of time pass, pages load more slowly, download speeds drop, and users get unhappy.
Understand how your providers route traffic. When you know how your data is transferred to users around the world, you’ll make better decisions about where your data is hosted.
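Putting the first recommendation into practice often looks like this: probe a handful of candidate locations and route the user to whichever answers fastest. A minimal client-side sketch with hypothetical endpoint names (real deployments usually rely on DNS-based or anycast routing rather than probing from the client):

```python
import socket
import time

# Hypothetical data center endpoints; real names would come from your provider.
CANDIDATES = {
    "dallas": ("dal.example.com", 443),
    "frankfurt": ("fra.example.com", 443),
    "singapore": ("sng.example.com", 443),
}

def probe_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Time one TCP handshake; unreachable endpoints lose automatically."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return float("inf")
    return (time.perf_counter() - start) * 1000.0

def pick_closest(candidates: dict) -> str:
    """Return the name of the lowest-latency endpoint."""
    return min(candidates, key=lambda name: probe_ms(*candidates[name]))
```

The same idea, done server-side, is what lets a geographically distributed deployment serve each user from the nearest copy of the data.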
How Bluemix minimizes network latency
To minimize latency, we took a unique approach to building our network. Our data centers are connected to network points of presence (PoPs), and those PoPs are connected to each other via our global backbone network. By maintaining our own global backbone network, our network operations team controls network paths and data handoffs with much more granularity than if we relied on other providers to move data between geographies.
Let’s put this into practical terms.
If a user in Berlin wants to watch a cat video hosted on a Bluemix server in Dallas (as you do), the data packets comprising that cat video will travel across our backbone network (which is exclusively used by Bluemix traffic) to Frankfurt, where the packets are handed off to one of our peering or transit public network partners to get to the user in Berlin.
Without a global backbone network, the packets would be handed off to a peering or transit public network provider in Dallas. That provider would route the packets across its network and/or hand the packets off to another provider at a network hop, and the packets would then bounce their way to Germany. Sure, it’s entirely possible that the packets could get from Dallas to Berlin with the same network latency with or without the global backbone network. But without the global backbone network, there are a lot more variables.
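The "more variables" point can be illustrated with a toy simulation: model each handoff as an independent delay, and both the average total latency and its spread grow with the number of hops. The per-hop numbers below are made up purely for illustration.

```python
import random
import statistics

def simulate_path_ms(hops: int, mean_hop_ms: float = 8.0, jitter_ms: float = 5.0) -> float:
    """Total one-way delay for a path: each hop contributes its own variable delay."""
    return sum(max(0.0, random.gauss(mean_hop_ms, jitter_ms)) for _ in range(hops))

random.seed(42)
backbone = [simulate_path_ms(hops=4) for _ in range(1000)]   # few, controlled handoffs
public = [simulate_path_ms(hops=12) for _ in range(1000)]    # many third-party handoffs

for name, samples in (("backbone", backbone), ("public", public)):
    print(f"{name}: mean {statistics.mean(samples):.0f} ms, "
          f"stdev {statistics.stdev(samples):.0f} ms")
```

Fewer handoffs don't guarantee a faster trip on any single run, but they shrink both the expected delay and the variance, which is exactly the point of controlling the path end to end.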
In addition to building a global backbone network, we also segment public, private, and management traffic onto different network ports so that different types of traffic can be transferred without interfering with each other.
But at the end of the day, all of that network planning and forethought means nothing if you can’t see the results for yourself. That’s why we put speed tests on our website so you can check out our network yourself.