November 30, 2016 By Fayza A. Elmostehi 4 min read

What’s network latency and why does it matter?

We’ll be frank: Sluggish web pages are the scourge of the digital earth. In the we-want-it-yesterday demands of our modern lives, “slow” is unacceptable. We’ll often close our browser windows in a frustrated huff, because we don’t have the time or patience to (gasp!) wait for a page to load. (The horror!)

You may ask yourself, “Hey, I pay big bucks for high-speed Internet. What gives?”

Well, to put it simply, you’re not in control. In fact, many things are beyond your control when it comes to what actually determines page load times. Whether you’re running big data solutions, operating an online store, or giving a global team access to files on your company’s network, nothing—especially slow data transfer speeds—should keep you from making that sale or from letting your employees be as productive as they can be.

Why do some pages load more slowly than others?

It could be as simple as bad code or massive images, for starters. But slow page loads can also be caused by network latency. Not to insult your intelligence or anything, but you ought to know that data isn’t just floating out there in some amorphous space. In reality, data is stored on physical hard drives—somewhere out there. Network connectivity provides a path for that data to travel to end users around the world. That connectivity can vary significantly—depending on how far it’s going, how many times the data has to hop between service providers, how much bandwidth is available along the way, the other data traveling across the same path, and a number of other variables.

The measurement of how long data takes to travel between two connected points is called network latency. Network latency is an expression of the amount of time it takes a packet of data to get from one place to another.
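To get a feel for the numbers involved, here is a minimal Python sketch that estimates latency to a host by timing a TCP handshake. The hostnames are placeholders for illustration; dedicated tools like ping (which uses ICMP) give more precise measurements.

```python
import socket
import time


def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time one TCP connection handshake to (host, port), in milliseconds.

    This approximates network round-trip latency, because the TCP
    three-way handshake needs one full round trip before connect() returns.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # close immediately; we only care about the connect time
    return (time.perf_counter() - start) * 1000.0


if __name__ == "__main__":
    # Placeholder hosts; substitute your own endpoints.
    for host in ("example.com", "example.org"):
        try:
            print(f"{host}: {tcp_rtt_ms(host):.1f} ms")
        except OSError as exc:
            print(f"{host}: unreachable ({exc})")
```

Run it against servers in different parts of the world and you will see the effect of distance and routing directly in the numbers.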

What is network latency?

Much like Superman, data can travel at the speed of light across optical fiber network cables. In practice (and unlike Superman), data typically travels slower than that. If a network connection doesn’t have any available bandwidth capacity, data might temporarily queue up to wait for its turn to travel across the line. If a service provider’s network doesn’t route a network path optimally, data could be sent hundreds or thousands of miles away from the destination in the process of routing to the destination. These kinds of delays and detours lead to higher network latency—which leads to slower page loads and download speeds.

Network latency is measured in milliseconds (that’s 1,000 milliseconds per second). While a few thousandths of a second may not mean much to us as we go about our business, those milliseconds can be the deciding factors for whether we stay on a webpage or end up screaming at our computer screens. As high-speed Internet consumers, we want what we want—when we want it. (Yes, we’re spoiled, but we already know that.) But the stakes of lag time can be far higher than delayed gratification. In the financial sector, milliseconds can mean billions of dollars in gains or losses from trade transactions on a daily basis.

Whatever the reason, everyone wants the lowest possible network latency for the greatest number of users.

How to minimize network latency

The most common approaches to minimizing network latency involve limiting the number of variables that affect how quickly data moves. While we don’t have complete control over how our data travels across the Internet, we can do a few things to keep our network latency in line:

  • Distribute data around the world. Users in different locations can pull data from a location that’s geographically close to them. Because the data is closer to the users, it is handed off fewer times and therefore has a shorter distance to travel. Inefficient routing is less likely to cause a significant performance impact.

  • Provision servers with high-capacity network ports. Huge volumes of data can travel to and from the server every second. If packets are delayed due to fully saturated ports, milliseconds of time pass, pages load slower, download speeds drop, and users get unhappy.

  • Understand how your providers route traffic. When you know how your data is transferred to users around the world, you’ll make better decisions about where your data is hosted.
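The first tactic above can be made concrete: given a set of candidate regions, probe the latency to each and route users to the closest one. The sketch below uses hypothetical region endpoints (the hostnames are placeholders, not real Bluemix addresses) and TCP connect time as a rough latency probe.

```python
import socket
import time

# Hypothetical region endpoints -- placeholder hostnames for illustration.
REGIONS = {
    "us-south": ("dal.example.com", 443),
    "eu-de": ("fra.example.com", 443),
    "ap-syd": ("syd.example.com", 443),
}


def measure_ms(host: str, port: int, samples: int = 3) -> float:
    """Median TCP connect time to (host, port), in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3.0):
            pass
        times.append((time.perf_counter() - start) * 1000.0)
    times.sort()
    return times[len(times) // 2]


def nearest_region(regions: dict) -> str:
    """Return the name of the region with the lowest measured latency."""
    best_name, best_ms = None, float("inf")
    for name, (host, port) in regions.items():
        try:
            ms = measure_ms(host, port)
        except OSError:
            continue  # unreachable region: skip it
        if ms < best_ms:
            best_name, best_ms = name, ms
    return best_name
```

Taking the median of several samples smooths out transient spikes. In production, clients typically rely on DNS-based geographic routing or anycast rather than probing every region at startup, but the principle is the same: serve each user from the nearest copy of the data.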

How Bluemix minimizes network latency

To minimize latency, we took a unique approach to building our network. Our data centers are connected to network points of presence (PoPs). Our network points of presence are connected to each other via our global backbone network. By maintaining our own global backbone network, our network operations team controls network paths and data handoffs with much more granularity than if we relied on other providers to move data between geographies.

Let’s put this into practical terms.

If a user in Berlin wants to watch a cat video hosted on a Bluemix server in Dallas (as you do), the data packets that make up that cat video will travel across our backbone network (which is used exclusively by Bluemix traffic) to Frankfurt, where the packets are handed off to one of our peering or transit public network partners to reach the user in Berlin.

Without a global backbone network, the packets would be handed off to a peering or transit public network provider in Dallas. That provider would route the packets across its network and/or hand the packets off to another provider at a network hop, and the packets would then bounce their way to Germany. Sure, it’s entirely possible that the packets could get from Dallas to Berlin with the same network latency with or without the global backbone network. But without the global backbone network, there are a lot more variables.

In addition to building a global backbone network, we also segment public, private, and management traffic onto different network ports so that different types of traffic can be transferred without interfering with each other.

But at the end of the day, all of that network planning and forethought means nothing if you can’t see the results for yourself. That’s why we put speed tests on our website, so you can check out our network firsthand.

Sign up for Bluemix. It’s free!
