What is Latency and How Can You Reduce It?

Network latency is a measure of the time that it takes for a packet to move from the client to the server and back again. Latency can be created by various factors and negatively affects productivity and the user experience.
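Latency is typically measured as round-trip time (RTT): the elapsed time between sending a request and receiving the reply. A minimal sketch of that measurement in Python, using a local echo server purely for illustration (real measurements would target a remote host):

```python
import socket
import threading
import time

def run_echo_server(server_sock):
    """Accept one connection and echo back whatever it receives."""
    conn, _ = server_sock.accept()
    data = conn.recv(1024)
    conn.sendall(data)
    conn.close()

def measure_rtt(host, port, payload=b"ping"):
    """Time one request/response round trip and return milliseconds."""
    with socket.create_connection((host, port)) as s:
        start = time.perf_counter()
        s.sendall(payload)
        s.recv(1024)  # block until the echoed reply arrives
        return (time.perf_counter() - start) * 1000

# Illustration: start a local echo server, then time a round trip to it.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

rtt_ms = measure_rtt("127.0.0.1", port)
print(f"Round-trip time: {rtt_ms:.3f} ms")
```

Against a loopback address the RTT is a fraction of a millisecond; across a real network, the factors described below add up to much larger values.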


What Causes Network Latency?

A number of different factors can contribute to latency, including:

  • Physical Distance: Traffic can flow over a network via cables or through the air via wireless networks. Regardless of the medium used, there is a maximum speed that electrons, light, or radio waves can propagate through the cable or air. As a result, the farther apart that the client and server are, the longer it takes for a network packet to travel between them.
  • Number of Hops: While the number of devices or network “hops” between two locations is related to physical distance — further distances likely mean more hops — each device that the packet encounters en route adds to its latency. The reason for this is that each router or other device needs to read a packet’s header and determine where it should be sent next. This process takes time, so a packet traveling via a route with more hops will have greater latency than one that travels the same distance but with fewer devices en route.
  • Network Congestion: All network media — copper cables, fiber optics, wireless networks, etc. — have a maximum bandwidth, or amount of data that they can transmit in a certain period of time. If the demand for this bandwidth exceeds the available supply, then some packets will be queued or dropped to make room. Therefore, the more congested a network, the higher the probability that a packet will need to wait its turn or be dropped and retransmitted, both of which increase network latency.
  • Server Load: As mentioned earlier, network latency is the measure of how long a packet takes to make a full round trip from client to server and back again. Since the server is on this route, it can have an impact on network latency as well. An overloaded server with limited available computational power will take longer to process a request and have greater network latency than a server that can immediately generate and transmit a response.
  • Server-Side Architecture: The web server that receives and responds to a network packet may not be the only party involved in generating that response. For example, a request to a web application may trigger one or more database queries as the web app collects the information needed to generate a response. If these databases are far away — e.g. located on-prem while the web app is cloud-hosted — or overloaded, then network latency will increase.
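Taken together, these contributions form a latency budget for a request. The sketch below adds up illustrative figures for each component discussed above — the per-hop, queuing, and server delays are assumed values for the example, not measurements — and uses the common approximation that light in fiber travels at roughly two-thirds of its speed in a vacuum:

```python
SPEED_OF_LIGHT_KM_S = 299_792  # km/s in a vacuum
FIBER_FACTOR = 2 / 3           # light in fiber travels ~2/3 as fast

def propagation_ms(distance_km):
    """One-way propagation delay over fiber, in milliseconds."""
    return distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000

# Illustrative round trip: client and server ~5,500 km apart
# (roughly New York to London), with assumed per-hop and server costs.
one_way_ms = propagation_ms(5500)
per_hop_ms = 0.5   # assumed forwarding delay per router
hops = 15          # assumed hop count in each direction
queuing_ms = 2.0   # assumed congestion-related queuing, total
server_ms = 10.0   # assumed server processing time

rtt_estimate = 2 * (one_way_ms + hops * per_hop_ms) + queuing_ms + server_ms
print(f"Propagation delay (one way): {one_way_ms:.1f} ms")
print(f"Estimated round-trip latency: {rtt_estimate:.1f} ms")
```

Even with generous assumptions, physical distance alone accounts for tens of milliseconds on intercontinental paths — a floor that no amount of server tuning can remove.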

These are some common causes of network latency. For any given request, all of these — along with any other delays en route — contribute to the overall latency experienced by the client.

How Can You Reduce Latency?

Network latency can be caused by various factors, so organizations have a few different options for managing it. Some ways to reduce network latency include the following:

  • Quality of Service (QoS): QoS functions optimize the use of limited bandwidth by prioritizing certain types of traffic. This can reduce latency for business-critical or latency-sensitive applications, such as video conferencing traffic.
  • Infrastructure Upgrades: Latency may be caused by limited bandwidth or overworked servers and other IT assets. Upgrading infrastructure to increase capacity can help to reduce network latency.
  • Architectural Redesign: Applications may experience high network latency if they rely on geographically distant databases and other resources. Colocating resources that communicate frequently can reduce the latency of user requests.
  • Content Distribution Networks (CDNs): Users can experience high latency if they are geographically distant from the servers managing their requests. CDNs cache static content in multiple, geographically-distributed locations. This can reduce latency by decreasing the distance between a user and the nearest CDN node.
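The CDN approach works because routing a user to the geographically nearest node shortens the propagation path. A simplified sketch of that node selection, using great-circle distance as a stand-in for network distance (the node locations here are hypothetical; real CDNs typically steer users via DNS or anycast rather than explicit coordinates):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def nearest_node(user, nodes):
    """Return (name, distance_km) of the closest CDN node to the user."""
    name = min(nodes, key=lambda n: haversine_km(*user, *nodes[n]))
    return name, haversine_km(*user, *nodes[name])

# Hypothetical CDN node locations as (latitude, longitude).
cdn_nodes = {
    "Frankfurt": (50.11, 8.68),
    "Virginia": (38.95, -77.45),
    "Singapore": (1.35, 103.82),
}
user_location = (48.86, 2.35)  # Paris
node, dist_km = nearest_node(user_location, cdn_nodes)
print(f"Nearest node: {node} ({dist_km:.0f} km)")
```

Serving the Paris user from a nearby European node instead of a transatlantic or Asian one cuts thousands of kilometers — and the corresponding propagation delay — off every cached request.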

Low Latency with Quantum Lightspeed

Latency can be caused by a variety of factors, including the networking and security systems that process a network packet en route to its destination. Network security solutions — such as a next-generation firewall (NGFW) — can be a significant source of latency as they might delay a packet as they inspect it for potentially malicious content or policy violations. If network security solutions aren’t optimized or scaled appropriately, they can create queues of packets waiting for processing, increasing network latency.

Check Point’s Lightspeed NGFW offers enterprise-grade threat prevention and security while delivering the high throughput and low latency that corporate data centers need. To learn more about Check Point Lightspeed and how it can improve the performance of applications and other resources in your organization’s data center, sign up for a free demo today.
