Network and internet connection speeds have always been hot topics – especially since many ISPs fall short of the speeds they promise. While there are plenty of tools that will measure your connection speed, far fewer help you look into latency and how it’s impacting your network performance – and even if you identify latency as a problem, it can be difficult to know what to do about it.
To understand how to tackle network latency, it’s first important to understand what it is.
What does the term ‘latency’ mean?
In simple terms, latency is the delay between the instruction that tells an application to transfer data – and the actual delivery of that data.
This time period (usually measured in milliseconds – ms) is calculated by the sending application. A tiny data packet is sent to the receiving application, and the time it takes for that packet to be received and an acknowledgement returned is the measure of latency in the network. This round-trip time sets the speed at which the rest of the data will be sent. When that number is low, the network is considered to have only a small delay; when the number is high, the latency is also considered to be high.
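As an illustration, that round-trip measurement can be sketched in a few lines of Python. This toy version times a probe over a local socket pair purely to show the mechanics – a real measurement would target a remote host, where the figure reflects genuine network delay:

```python
import socket
import threading
import time

def measure_rtt_ms(sock: socket.socket, probe: bytes = b"ping") -> float:
    """Send a tiny probe and time how long the acknowledgement takes (ms)."""
    start = time.perf_counter()
    sock.sendall(probe)
    sock.recv(len(probe))                      # wait for the echo to come back
    return (time.perf_counter() - start) * 1000.0

# A local socket pair stands in for sender and receiver here; the "receiver"
# simply echoes the probe straight back, like an acknowledgement.
sender, receiver = socket.socketpair()
echo = threading.Thread(target=lambda: receiver.sendall(receiver.recv(16)))
echo.start()
rtt = measure_rtt_ms(sender)
echo.join()
print(f"round-trip latency: {rtt:.3f} ms")
```

Tools like `ping` do essentially the same thing at the network layer, averaging over several probes to smooth out noise.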
Are latency, bandwidth and throughput the same thing?
There are quite a number of different terms used around the speed at which data is transferred across the internet – so it’s easy to get mixed up about which applies to what. Although they’re often used in similar contexts, latency, bandwidth and throughput are very different – even though they all impact the speed of transfers.
Bandwidth is the sheer volume of information that can be carried over a connection. If you think about data in the same way you’d think about traffic using the road, the bandwidth is the road – a connecting part of your infrastructure that’s needed to get data from one place to another.
Throughput is the volume of data – or, to continue our road example, the volume of traffic on the road. Now, as any driver knows, busy roads are more likely to involve hold-ups – as traffic is slowed at junctions, intersections, and stop signs. In fact, data is subject to many more hold-ups – as it will likely be analysed by software or hardware designed to keep networks safe.
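To put rough numbers on the distinction, here’s a back-of-the-envelope sketch in Python – the overhead and congestion figures are illustrative assumptions, not measurements:

```python
# Bandwidth is the link's rated capacity (the road); throughput is what
# actually gets through once overhead and congestion take their share (the traffic).
link_bandwidth_mbps = 100    # rated capacity - an assumed example figure
protocol_overhead = 0.05     # headers, acknowledgements etc. (illustrative ~5%)
congestion_factor = 0.70     # share of capacity usable under load (assumed)

throughput_mbps = link_bandwidth_mbps * (1 - protocol_overhead) * congestion_factor
print(f"effective throughput: {throughput_mbps:.1f} Mbit/s")  # → 66.5 Mbit/s
```

The point is simply that throughput is always some fraction of bandwidth – the road’s capacity sets the ceiling, but the traffic conditions decide what you actually get.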
Is all latency to do with bandwidth and throughput?
It’s not fair to blame networks for all latency – as there’s always going to be some delay when you’re transmitting data. That said, that delay is almost always very small – especially since the spread of fibre optic connections means a lot of data is moving at around two-thirds of the speed of light. Whether travelling along cables or being beamed to communications satellites in orbit, your data is moving quickly – so it’s usually bandwidth and throughput issues that cause noticeable latency.
When features on the road – or in your network – cause traffic speed to drop, throughput is reduced and latency occurs. Quite simply, at any delay point, the speed that the initial transmission of data has set becomes unachievable – so data starts to back up.
Now, if this congestion is cleared quickly, there’s no problem – but if it continues to grow, applications and devices will start to reduce it by dropping data packets in an effort to get things moving again.
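The back-up-then-drop behaviour can be modelled with a toy bounded queue – a simplified sketch, not any particular router’s algorithm. Packets arrive faster than they can be serviced, the buffer fills, and the overflow gets dropped:

```python
from collections import deque

def simulate_congestion(arrival_rate, service_rate, queue_limit, ticks):
    """Toy model: packets arrive each tick; a limited queue drains at
    service_rate per tick; anything that won't fit in the queue is dropped."""
    queue = deque()
    delivered = dropped = 0
    for _ in range(ticks):
        for _ in range(arrival_rate):
            if len(queue) < queue_limit:
                queue.append(1)
            else:
                dropped += 1              # congestion: the buffer is full
        for _ in range(min(service_rate, len(queue))):
            queue.popleft()
            delivered += 1
    return delivered, dropped

# Arrivals (5/tick) outpace service (3/tick), so the queue fills and drops begin.
delivered, dropped = simulate_congestion(
    arrival_rate=5, service_rate=3, queue_limit=10, ticks=20)
print(f"delivered: {delivered}, dropped: {dropped}")
```

Notice that once the queue is full, the drop rate settles at exactly the gap between arrivals and service – congestion doesn’t clear on its own unless the incoming rate falls.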
What happens when data packets are dropped?
As you’re probably already aware, any transmission of data is broken down according to a transmission protocol – a shared language that lets devices and programs agree on how data is dissected and pieced back together again.
So, an email becomes numerous data packets – all pieced back together by the application that receives it. Drop a few data packets when you’re sending an email and you won’t have a problem – the missing pieces can be retransmitted, and the information still fits back together coherently. The trouble is, not all applications’ data is as simple as an email’s – and when data is time-sensitive, there’s no room for waiting or filling in the gaps.
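The dissect-and-reassemble idea looks roughly like this (a toy illustration in Python, not a real protocol implementation):

```python
import random

def split_into_packets(message: bytes, size: int):
    """Break a message into numbered packets, as a transmission protocol would."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Piece the packets back together by sequence number, whatever order they arrived in."""
    return b"".join(chunk for _, chunk in sorted(packets))

email = b"Meeting moved to 3pm - agenda attached."
packets = split_into_packets(email, size=8)
random.shuffle(packets)              # packets can arrive out of order...
assert reassemble(packets) == email  # ...but sequence numbers restore the message
```

For an email, a missing packet just means waiting for a retransmission. A live video stream has no such luxury – by the time the retransmission arrives, the moment it belonged to has already been shown.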
A prime example of this comes when you’re having a real-time video call. If data is dropped, the picture deteriorates quickly – as the application simply can’t hang around waiting for the missing packets to eventually drop into place. If latency becomes too significant, the call will become unintelligible to both the application and the user – and the connection will be lost.
When is latency a problem for businesses?
Latency becomes a real issue for businesses when it’s impacting mission-critical systems. Now, email and other robust systems are obviously important – but it’s the applications that run close to real-time that have the biggest problems – and cloud-based technology is seeing to it that plenty of these are in use at most companies.
If your users are accessing systems remotely, latency can be a real problem – as in many cases, your end-users won’t be able to access crucial systems – such as your payment gateways and CRM systems.
The prospect of slow or stalling mission critical systems is one that can’t be taken lightly. It’s estimated that even small businesses can lose tens of thousands of pounds/dollars for every hour that they cannot access their systems – and that figure spirals quickly upwards when the company size grows.
So, what do you do about latency?
There’s good news and bad news when it comes to tackling latency. The good news is, you can usually do something about it – the bad news is, the emphasis is on the ‘you’ part of that sentence.
Most latency occurs within business networks – rather than on the wider series of connections that make up the internet. Therefore, working with a specialist service provider who can lift the hood on your network infrastructure is usually the answer. This is generally going to be an expense that’s shouldered by your business – but it’s important to consider what’s at stake if you don’t act. Downtime is costly – and there’s often no recovering your reputation if you let customers down because your systems are ‘down’.
If you’re running systems that are very sensitive to latency, it pays to be protected – the alternative can be very costly indeed.