The Ultimate Guide to Packet Loss

The Internet Protocol now dominates LAN management systems and messaging over the internet. The design of the protocol introduced new thinking into communications systems design.

Unlike proprietary systems, the Internet Protocol is designed to work in situations where there is no overall control of the network path. Issues like ‘packet loss’ take on much more significance because, as the sender, you have to rely on the competence of a chain of intermediaries to get your message through.

Packet loss generates extra traffic because, with TCP, a packet that fails to arrive is never acknowledged, so the sender has to transmit it again. These retransmissions cause high latency, which means a slow delivery time. Monitoring packet loss is therefore an important task for increasing delivery speeds and reducing network traffic loads.
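The cost of loss-driven retransmission can be roughed out with the well-known Mathis approximation, which bounds steady-state TCP throughput by segment size, round-trip time, and loss rate. A sketch in Python, with illustrative link numbers:

```python
import math

def tcp_throughput_limit(mss_bytes, rtt_seconds, loss_rate):
    """Mathis et al. approximation: an upper bound on steady-state
    TCP throughput (bytes per second) for a given packet loss rate."""
    return (mss_bytes / rtt_seconds) / math.sqrt(loss_rate)

# A 1460-byte MSS over a 50 ms round trip, at three loss rates:
for p in (0.0001, 0.001, 0.01):
    bps = tcp_throughput_limit(1460, 0.05, p) * 8
    print(f"loss {p:.2%}: ~{bps / 1_000_000:.1f} Mbit/s")
```

Even a 1% loss rate caps this example link at a few megabits per second, which is why loss monitoring pays off.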

Reasons for packet loss

The health of routers on the path of a packet is the main bellwether of packet loss. Router issues fall into three categories:

  • Defective routers
  • Overloaded routers
  • Too many hops

Defective routers

Anyone familiar with computers or electronic equipment knows that so many different operating factors are involved in computerized hardware that, eventually, something is bound to go wrong. It is unrealistic to expect every router in the world to work perfectly all the time, forever.

If a packet is sent to a troubled router, it won’t get any further on its journey. That problem could be hardware-related or caused by a bug in the software. The problem may be permanent, a short-term error, or just a blip. All the other routers connected to the defective device will soon notice the problem and stop sending packets to that faulty router. However, a few seconds’ delay will cause hundreds of packets already on their way to be lost. You need to get a diagnostic tool that can monitor the health of your network equipment, enabling you to head off device failure.

Overloaded routers

Network equipment has a throughput capacity. No rule specifies a minimum capacity for any router that operates on the internet. Some can handle a lot of traffic, some can’t. In all cases, however, routers run a buffering system. A sudden surge in demand can still be dealt with even if it exceeds that router’s processing speed.

If the amount of traffic received at a router exceeds its processing speed to the extent that the buffer fills up, any subsequent packet arriving at the router will not be processed and so will be lost. This situation only goes on for so long, because the upstream routers that send packets to the overloaded router also send querying packets, typically every 60 seconds. If one of those packets isn’t answered, the requesting router will stop sending packets to the overloaded router and route new traffic elsewhere. Once the busy router has room in its buffer again, it will send out an availability notice to its neighbors and the traffic will start flowing again.

Too many hops

The network software that sends data packets has only a small influence on the journey of a packet, and it isn’t a positive one. The main control a sender has over the path is the maximum number of hops the packet should take. This is the “Time to Live” (TTL) mechanism in the IP header of a packet.

Despite its name, TTL doesn’t specify a maximum travel time. Instead, it holds a number that represents the maximum number of routers the packet should pass through. Each router along the way reduces the TTL by one; the router that reduces it to zero drops the packet and normally reports the event back to the sender with an ICMP “Time Exceeded” message.
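The hop-count mechanism is simple enough to model in a few lines. This toy Python sketch, with invented router names, shows a packet dying when its TTL runs out; the Traceroute tool exploits exactly this behavior by sending probes with TTLs of 1, 2, 3, and so on:

```python
def forward(ttl, path):
    """Simulate a packet crossing routers; each hop decrements the TTL.
    Returns the name of the hop where the packet dies, or None if
    the packet makes it to the end of the path."""
    for router in path:
        ttl -= 1
        if ttl <= 0:
            return router   # this router drops the packet
    return None             # packet reached the destination network

path = ["r1", "r2", "r3", "r4"]
print(forward(64, path))    # delivered: None
print(forward(3, path))     # dropped at the third hop: r3
```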

Under normal circumstances, the TTL should never expire. However, if a packet is rerouted to get around a defective router, it may end up passing through an unusually high number of points and so expire.

Sometimes a TTL issue is fixed almost instantly, only losing one or two packets in a stream. If a problem on one router endures, however, the routers approaching it will calculate a more efficient workaround. A few packets drop because of TTL, but the rest get through on a newly organized and more efficient route.

Packet loss detection

Two commonly-used network programs can help you identify packet loss: Ping and Traceroute. Both of these use messaging procedures built into a standard TCP/IP protocol called the Internet Control Message Protocol, or ICMP. These command line utilities can be pretty difficult to read, although veteran network administrators can interpret the results.

Fortunately, you don’t have to put up with low-quality presentation anymore thanks to the many diagnostic tools with GUI-based front-ends that incorporate the standard Ping and Traceroute tools.
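Those GUI front-ends ultimately read the same text the command-line tools print. As a rough illustration of what the parsing involves, this Python snippet pulls the loss percentage out of a Linux-style ping summary line (other platforms word the summary differently, so the pattern is an assumption, not a universal rule):

```python
import re

def loss_percent(ping_output):
    """Extract the packet-loss percentage from a ping summary.
    Assumes the Linux iputils wording; Windows ping differs."""
    match = re.search(r"([\d.]+)% packet loss", ping_output)
    return float(match.group(1)) if match else None

sample = "4 packets transmitted, 3 received, 25% packet loss, time 3004ms"
print(loss_percent(sample))   # 25.0
```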

For a free Ping and Traceroute utility with a user-friendly interface, take a look at ManageEngine’s Ping/Traceroute/DNS Lookup utility. Two other honorable mentions are the SolarWinds Traceroute NG (free trial) and Visual Traceroute by IPSwitch.

For more robust network troubleshooting, you should consider using a network monitoring tool like the SolarWinds Network Performance Monitor.


The Network Performance Monitor is a comprehensive network device health checker that runs on Windows Server. It employs SNMP for live network monitoring, which helps you locate where packet loss is occurring in real time. It also has an autodiscovery function that populates a list of network devices and generates a network map.

Official Site: SolarWinds.com

OS: Windows Server

The downside of using Ping and Traceroute is that they only identify ongoing packet loss and route failure. They can’t analyze what happened after a completed transfer, and they won’t fix or prevent the defects that cause packet loss. To find a solution to packet loss, you must first understand a little bit about communication protocols.

Related post: How to Fix Packet Loss.

Other network performance issues

There are two other important network metrics that you need to look out for when troubleshooting system performance.

Latency

The term “latency” refers to the time it takes for a packet to travel across the network. The issue is particularly important on links that carry data long distances over the internet. The typical tool for measuring latency returns a value called “round-trip time,” or RTT. This measures the time it takes for a packet to reach its destination and for a reply to come back. High latency means the packet took a long time to arrive. This metric is usually detected with a Ping test.

Jitter

Jitter is the variation in the speed of packet delivery. This can result from network congestion or route variation. Jitter particularly causes a problem for interactive applications, such as video conferencing and VoIP. Jitter can cause video and sound to break up.
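Given a list of RTT samples from a Ping test, average latency and jitter can be estimated as the mean RTT and the mean variation between consecutive RTTs (a simplification of the approach RFC 3550 uses for RTP). A sketch with made-up sample values:

```python
def latency_and_jitter(rtts_ms):
    """Average latency and mean inter-packet delay variation,
    both in milliseconds, from a list of round-trip times."""
    avg = sum(rtts_ms) / len(rtts_ms)
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    jitter = sum(diffs) / len(diffs)
    return avg, jitter

samples = [20.1, 21.0, 19.8, 35.2, 20.4]   # one congested outlier
avg, jitter = latency_and_jitter(samples)
print(f"latency {avg:.1f} ms, jitter {jitter:.1f} ms")
```

Note how a single delayed packet barely moves the average but inflates the jitter figure, which is exactly why interactive applications track both.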

IP SLA

IP SLA is a Cisco standard that means “Internet Protocol service level agreement.” IP SLA tests measure packet loss and latency as well as jitter, so if you find a monitor that can collect IP SLA data you will be able to spot a range of network performance issues.

Connections and connectionless communications

The Internet Protocol is part of a suite of networking guidelines known as TCP/IP. The name of this stack comes from two standards documents: the Internet Protocol and the Transmission Control Protocol. However, many other protocols are also part of the group. One of the systems in this bundle that didn’t make it into the title is the User Datagram Protocol. TCP and UDP are the two options available for transport management.

TCP establishes settings for a connection before data transfer occurs. UDP doesn’t establish a session between the two communicating computers, and so is known as a “connectionless” system. The effects of packet loss on a transmission differ greatly depending on whether it is managed by TCP or UDP.

Transmission Control Protocol (TCP)

Data packets have two headers. The IP header resides in the outermost layer. Inside that, but still outside of the payload, sits the TCP header. In TCP terminology, a unit of data being processed is not referred to as a packet, but a “segment.”

It is the responsibility of TCP to break streams of data into chunks for transmission. Once a header has been added, the segment is processed into a packet by the Internet Protocol implementation. The TCP function in the recipient device receives the packet with the IP header stripped off. It reads the TCP header and behaves accordingly.

The main tasks of TCP are spelled out in its name: “transmission control.” Its responsibilities include segmenting and reassembling streams of data, which involves numbering each segment so the stream can be correctly reassembled. To put the stream back together again, the receiving program must ensure that every segment arrives, which is how TCP compensates for inevitable packet loss. TCP then assembles the segments in sequence order. This requires buffering, which has the added benefit of smoothing out the irregular arrival rate of packets.

A transmission governed by TCP doesn’t lose any data to packet loss. Each packet that arrives is acknowledged. If the sender doesn’t receive an acknowledgment for a packet, it sends the data again. The receiver holds all of a stream’s arriving packets in the buffer. If one segment is missing, the lack of acknowledgment causes the receiver to wait for the retransmission to arrive before forwarding the complete stream on to the destination application. The cost of packet loss under TCP, then, is delay rather than missing data.
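The acknowledge-and-retransmit cycle can be illustrated with a toy stop-and-wait model. The loss rate and segment names below are invented, and real TCP uses sliding windows and timers rather than this one-at-a-time loop, but the principle is the same: keep resending until an acknowledgment arrives.

```python
import random

def send_reliably(segments, loss_rate=0.3, seed=42):
    """Toy stop-and-wait model: resend each numbered segment until
    it is acknowledged, counting the transmissions required."""
    rng = random.Random(seed)
    transmissions = 0
    received = []
    for seq, data in enumerate(segments):
        while True:
            transmissions += 1
            if rng.random() > loss_rate:   # packet and its ACK survived
                received.append((seq, data))
                break                      # ACK arrived; next segment
            # no ACK: the sender times out and retransmits

    return received, transmissions

stream = ["seg0", "seg1", "seg2", "seg3"]
received, sent = send_reliably(stream)
print(f"{len(stream)} segments delivered using {sent} transmissions")
```

Every segment gets through in order, but a 30% loss rate makes the transfer cost noticeably more transmissions than segments, which is the latency penalty TCP pays for reliability.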

User Datagram Protocol (UDP)

UDP is the main alternative to TCP. It is a lightweight transport protocol with no session-establishment procedures and, therefore, no control procedures. For years after TCP/IP was defined, UDP saw little use: just about all internet-based programs employed TCP for its control and data-verification procedures. In recent decades, however, UDP has found its purpose and now serves many high-tech internet applications.

As with TCP, a UDP data unit sits inside the IP packet. In UDP terminology, the unit is referred to as a “datagram” before it is passed to the IP program. UDP provides no session-establishment procedures and no delivery guarantees. However, it is possible to specify a port number: the originating and destination ports are given in the UDP header.
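The connectionless model shows up directly in the socket API: a UDP sender simply names a destination address and port, with no handshake beforehand. A minimal loopback sketch in Python:

```python
import socket

# Receiver: bind to a local port. There is no listen/accept step
# for UDP, because there is no connection to accept.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))     # port 0 = let the OS pick one
port = receiver.getsockname()[1]

# Sender: no connection setup; each datagram is independent.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data)                         # b'hello'
sender.close()
receiver.close()
```

If the datagram had been lost, nothing in this code would ever know: there is no acknowledgment to wait for.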

UDP suddenly became popular with the advent of high-speed broadband because it doesn’t delay transmission like TCP. Interactive applications like VoIP, video conferencing, and video streaming were all developed to use UDP instead of TCP. Elements of TCP have been replaced by other procedures. For example, the Session Initiation Protocol provides session establishment and ending functions for VoIP applications. Buffering is an example of a TCP function that video systems replicate within the application.

The overall ethos of video and voice programs is to get arriving data up to the application as quickly as possible. That need for speed outweighs the need to check whether packets arrive in order, at a consistent speed, undamaged, or at all.


Given the operating procedures of TCP and UDP, the easiest solution to packet loss over the internet is to use TCP instead of UDP. Unfortunately, the transport procedures of almost all applications (except for specialist networking software) are embedded in the program, and the user rarely gets to choose which transport protocol to use.

If you have a program that uses TCP, your connections will encounter packet loss, but you don’t have to worry about it because the protocol will handle data recovery for you.

Programs that employ UDP sacrifice complete data integrity in exchange for speed. Quality-of-service impairments caused by packet loss are frequent occurrences in voice and video applications over the internet. In fact, they are so common that most people have become used to short gaps or robotic quirks in VoIP conversations, and to pauses, jumps, and pixelated frames in live video streams.

Packet loss over the internet

In terms of packet loss caused by the failure or congestion of internet routers, there is no simple remedy. Despite the lack of choice over whether a transfer uses TCP or UDP, however, you can use a trick to enforce transmission control on UDP communications. You can’t turn a UDP program into a TCP system, but you can wrap UDP packets in TCP procedures.

VPNs establish a secure link between two computers, one of which is the VPN server. The secure link is called a “tunnel,” and it uses TCP procedures. Once a tunnel has been created, all traffic between those two computers is sent down it, so both UDP and TCP transfers are protected by TCP procedures. Some VPNs allow you to switch the tunnel to run over UDP; even then, the maintenance of the tunnel by the VPN client and server emulates TCP’s protections despite the tunnel running to a UDP port.

Note that the path from the VPN server to the final recipient isn’t enclosed in the tunnel. However, two strategies can reduce or eliminate packet loss during that final leg of the journey.

Reduce UDP exposure to packet loss

The easiest way to get near-total TCP coverage for your UDP transfers is to choose a VPN server as close as possible to the remote computer that you are connected to. In some countries, such as the United States, Germany, or the UK, large VPN companies offer servers in several cities. Select a location close to the source of the call or video stream.
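If you can measure the RTT to each candidate server (with Ping, for instance), picking the closest one is a one-liner. The server names and timings below are made up for illustration:

```python
def pick_nearest(measured_rtts_ms):
    """Choose the VPN server with the lowest measured round-trip
    time. Keys are server names, values are RTTs in milliseconds."""
    return min(measured_rtts_ms, key=measured_rtts_ms.get)

rtts = {"nyc-1": 12.0, "chicago-2": 31.5, "dallas-1": 48.2}
print(pick_nearest(rtts))   # nyc-1
```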

Eliminate UDP exposure to packet loss

VPNs allocate a temporary IP address to each client at the point of connection. This new address represents the customer until the session ends. Most VPNs assign a new address each time you connect, but many VPNs offer a “static IP address service.” When using a static IP, the customer is represented by the same IP address every session.

A VPN with a static IP lets you use the VPN-allocated IP address rather than your real address. Whenever you connect to another device over the internet, it will first connect to the VPN server where your static IP address is registered. The route from the VPN server to the client (you) is always protected by the encrypted tunnel.

If you own several sites and you want TCP procedures to cover all of their communications, you can buy a static IP address from a VPN provider for each site.

When all sites are connected to the VPN service, all outgoing messages are protected as far as the VPN server. If those messages are addressed to the remote VPN-allocated address, the remainder of the journey from the VPN to the destination will also be covered by an encrypted tunnel. So, by this double VPN method, TCP procedures apply to the entire length of the connection and automatic packet loss avoidance services cover all of your UDP communications.

Packet loss on private networks

The risk of packet loss on private networks is significantly lower than on the internet. However, packet loss does occasionally occur, and a problem with your network equipment can raise the loss rate to a critical level. You have one big advantage when trying to eliminate packet loss on your LAN: you control all of the links in the network and all of the equipment that processes transfers. The surest way to avoid packet loss within your network is to keep tabs on the health of your network equipment.
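Monitors typically quantify per-device loss from SNMP interface counters such as ifInDiscards and ifInUcastPkts in the IF-MIB. Given two polls of those counters, the calculation might be sketched like this (the counter values are invented):

```python
def discard_rate(prev, curr):
    """Percentage of inbound packets a device discarded between two
    SNMP polls. `prev` and `curr` are dicts of counter readings;
    the field names mirror the IF-MIB, the numbers are made up."""
    discards = curr["ifInDiscards"] - prev["ifInDiscards"]
    packets = curr["ifInUcastPkts"] - prev["ifInUcastPkts"]
    total = packets + discards
    return 100.0 * discards / total if total else 0.0

prev = {"ifInDiscards": 10, "ifInUcastPkts": 50_000}
curr = {"ifInDiscards": 260, "ifInUcastPkts": 99_750}
print(f"{discard_rate(prev, curr):.2f}% of inbound packets discarded")
```

A sustained non-zero discard rate on one interface is a strong hint that the device, not the internet, is where your packets are dying.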


Packet loss can be due to faulty hardware or software anywhere along the data’s path. Network monitoring tools can help you determine the reasons why packet loss is occurring, either on your network or on the internet, and help you locate the cause. Once you determine where the packet loss originates, you can take steps to reduce or eliminate it, or if it is happening outside of your control, you can look at ways of rerouting your data to achieve a better result.

Related: How to Fix Packet Loss

Image: Photo of technology from PXHere. Public domain