The sole purpose of a network is to connect one device with another, no matter how far apart they may be. Ideally, it does so without altering or dropping a single packet. But when congestion occurs, there is always a chance that packets will be dropped. In other cases, packets travel so slowly that they are of little use by the time they reach their destinations.
In such instances, QoS can help minimize or even prevent packet loss and latency.
What is QoS?
Quality of Service (QoS) is the control and management of a network's data transmission capacity, achieved by giving priority to time-sensitive and mission-critical traffic (e.g., audio and video) over non-urgent traffic (e.g., SMTP, HTTP).
QoS is usually applied on networks that carry resource-intensive, time-sensitive traffic such as:
- Voice over IP (VoIP)
- Internet Protocol television (IPTV)
- Streamed media
- Video conferencing
- Online gaming
These kinds of data must be transmitted with minimal delay to be usable at the receiving end.
Real life scenario
To make things a bit clearer, let us take the example of a traffic jam on a highway at rush hour. All the drivers sitting in the middle of the jam have one plan – make it to their final destinations. And so, at a snail's pace, they keep moving along.
Then the sound of an ambulance’s siren alerts them to a vehicle that needs to get to its destination more urgently – and ahead of them. And so, the drivers move out of what now becomes the ambulance’s “priority queue,” and let it pass.
Similarly, when a network transports data, it too has a setup where some sort of data is treated preferably over all the others. The packets of important data need to reach their destinations much quicker than the rest of them because they are time-sensitive and will “expire” if they don’t make it on time.
Why does QoS Matter?
Once upon a time, a business's data network and its telephone network were separate entities. Phone calls and teleconferences were handled by an RJ11-connected telephone network, managed by a PABX system, which ran separately from the RJ45-connected IP network that connected laptops, desktops, and servers. The two networks rarely crossed paths unless, for example, a computer needed a telephone line to connect to the internet. An example of such a setup would look like:
The data that used the IP networks could come and go at its own leisure. No one would make a lot of fuss if an email was a couple of seconds late. If a file transfer took an extra minute or two, then so be it. QoS simply wasn’t required.
Today, on the other hand, that old telephony system has been replaced by IP-based audio and video communication systems. People now make business calls using video-conferencing applications like Skype, Zoom, and GoToMeeting, which send and receive audio and video over the IP network.
While this technology has been welcomed and adopted by individuals and businesses alike, it has one major requirement which, if not met, could result in a terrible communication experience: a fast network that will let its packets go through uninterrupted. Meeting this criterion requires the help of QoS.
But, before we move any further into the topic of QoS, we need to talk about RTP.
What is RTP?
The Real-Time Transport Protocol (RTP) is an internet standard that defines how applications transmit multimedia data in real time. The protocol covers both unicast (one-to-one) and multicast (one-to-many) communications.
RTP is more commonly used in internet telephony communications where it handles the real-time transmissions of audiovisual data.
While RTP doesn't in itself guarantee the delivery of data packets – it typically runs over UDP, which offers no delivery guarantees – it does give the receiving application sequence numbers and timestamps with which to reorder packets, detect loss, and reconstruct the original timing.
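Those sequence numbers and timestamps live in RTP's 12-byte fixed header (RFC 3550). As a sketch of what a receiving application works with, the header can be unpacked like this in Python (the sample packet below is synthetic, built just for illustration):

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header defined in RFC 3550."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # should always be 2
        "payload_type": b1 & 0x7F,   # e.g. 0 = PCMU audio
        "sequence": seq,             # lets the receiver detect loss/reordering
        "timestamp": ts,             # lets the receiver rebuild the timing
        "ssrc": ssrc,                # identifies the media stream
    }

# A minimal synthetic header: version 2, payload type 0, sequence 1.
hdr = struct.pack("!BBHII", 0x80, 0x00, 1, 3000, 0x1234)
print(parse_rtp_header(hdr))
```

The sequence number is what lets the receiver notice a dropped packet even though the network never reports the loss.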
Now, QoS is a hop-by-hop configuration applied to networking devices so that they identify and prioritize RTP packets. Every device between the sender and recipient(s) must be configured to recognize that a packet is a "VIP" and push it along in the priority lane. If even one device in the relay isn't configured correctly, QoS breaks down at that hop: the packets lose their priority and fall back to best-effort treatment.
What happens if we don’t use QoS?
Not having a correctly configured QoS could result in one (or all) of the following issues:
- Latency: When RTP packets haven't been assigned their required priority, they are forwarded with default, best-effort treatment. In a congested network, they must travel along with the rest of the non-urgent packets.
While latency itself won't degrade the quality of the delivered audio and video, it will affect the conversation between end users. At around 100 ms of latency, they start talking over one another as packets arrive out of sync, and at 300 ms the conversation stops being comprehensible.
- Jitter: Jitter is variation in the delay between packets. When packets arrive at irregular intervals, a VoIP conversation becomes choppy, and video streaming is interrupted by frequent pauses as the receiving application buffers the slow-arriving packets.
- Packet Loss: This is the worst-case scenario, in which packets (or parts of them) are lost because of congestion on the networking devices. When a switch or router's output queue fills up, a tail drop occurs: the device discards any new incoming packets until space becomes available again.
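The jitter figure that monitoring tools report is typically the interarrival jitter estimator from RFC 3550: a running average of how much the packet spacing varies. A minimal sketch in Python, using hypothetical per-packet transit times (arrival time minus RTP timestamp):

```python
def update_jitter(jitter: float, transit_prev: float, transit_curr: float) -> float:
    """RFC 3550 interarrival jitter: a smoothed average of how much
    the spacing between consecutive packets varies."""
    d = abs(transit_curr - transit_prev)
    return jitter + (d - jitter) / 16.0

# Hypothetical transit times for five packets, in timestamp units.
transits = [100, 102, 99, 105, 101]

jitter = 0.0
for prev, curr in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, curr)

print(round(jitter, 3))
```

The divide-by-16 smoothing means a single late packet nudges the estimate rather than spiking it, which is why jitter is a useful trend metric for dashboards.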
In all the cases we have just seen, QoS can help by sorting the data out, managing the queues, and preventing data loss.
See also: The Ultimate guide to packet loss
It doesn't take much imagination to see how communication and media streaming can suffer without QoS, especially on networks that carry RTP traffic. Even on a perfectly designed network, communication will first become difficult, then deteriorate as traffic grows, and finally become impossible.
The three faults – latency, jitter, and packet loss – are so critical in determining how well an implementation is working that network monitoring software vendors like SolarWinds use them as the metrics for measuring the quality of RTP-based traffic.
SolarWinds and network monitoring
This suite of network monitoring applications helps address issues that could be caused by:
- A slow network: A slow network can hold an entire business hostage as it continues to reduce the speed at which data flows. Unless the network’s bottlenecks are removed, the entire organization will experience terrible connectivity.
- Sluggish audiovisual communications: A business that can’t establish a clear communication channel within its network channel will be crippled. Even worse, not being able to communicate clearly with its clients will almost certainly bring it to its knees.
- Out-of-control networks: An administrator who can't keep control of a network won't know its current status or be able to plan for its future expansion. Being in the dark about their own network leads to confused, ill-informed decision-making, which makes things even worse.
Armed with the NetFlow Traffic Analyzer, a network administrator will be able to get rid of the problems we have just seen by:
- Helping with a QoS implementation and its optimization. Admins can make use of custom reports to manage the network’s data flow and tweak it to make it even more efficient.
- Taking stock of, and reporting on, the current QoS policy configuration and allowing admins to decide whether or not they do indeed have the best design in place.
- Monitoring bandwidth usage to see which applications and devices are hogging network sources so they can either be isolated, rescheduled, or shut down entirely. See also: 6 best free bandwidth monitoring tools
A typical NetFlow Traffic Analyzer dashboard contains the vital information an admin needs to see how things are going and makes it easy to reach quick decisions. An example:
These reports and analytics – and thus the decisions or actions taken afterwards – are made possible using the metrics that have been mentioned above: latency, jitter, and packet loss.
How do you configure your QoS?
Routers and switches that can prioritize protocols are usually accessed through router management software suites. For the most part, configuring your QoS preferences is a straightforward affair that involves:
- Logging into the application and connecting to the router or switch through it
- Navigating to the QoS configuration menu
- Setting packet priority preferences
And just like that, media packets will be able to traverse networks smoothly. Hardcore network engineers can do all of the tasks listed above via command-line configuration interfaces.
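On the end-host side, an application can also request priority treatment by setting the DSCP bits on its outgoing packets. A minimal sketch in Python, assuming a Unix-like OS (whether the network honors the marking depends entirely on the QoS policy at each hop):

```python
import socket

# DSCP "Expedited Forwarding" (EF, value 46) is the class commonly used
# for voice traffic. The IP_TOS byte carries the DSCP value in its top
# six bits, so the DSCP code must be shifted left by two.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Read the marking back; datagrams sent on this socket will now carry it.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(f"IP_TOS byte set to {tos} (DSCP {tos >> 2})")
sock.close()
```

This only marks the packets; the switches and routers along the path still have to be configured to act on that marking, per the hop-by-hop point made earlier.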
How are RTP packets prioritized?
QoS packet prioritization can be done using two main methods:
- Classification: This method identifies the packet types and assigns their priority by marking them. The identification can be done using ACLs (Access Control Lists), LAN implementations using CoS (Class of Service), or with the help of switches which use hardware-based QoS markings.
- Queuing: Queues are high-performance memory buffers in routers and switches where packets are held while they wait to be sent on their way. Packets like RTP that are assigned a higher priority are moved to a dedicated queue that pushes them along at a faster pace, reducing the chances of their being dropped. Lower-priority queues aren't afforded this luxury.
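The queuing behavior described above can be sketched as a strict-priority scheduler, where the high-priority queue is always drained before the best-effort queue. This is a simplified model; real devices use more elaborate schemes such as weighted fair queuing:

```python
from collections import deque
from typing import Optional

# Two output queues: one for marked RTP traffic, one for best-effort
# data. A strict-priority scheduler always drains the high queue first.
high: deque = deque()
low: deque = deque()

def enqueue(packet: str, is_rtp: bool) -> None:
    (high if is_rtp else low).append(packet)

def dequeue() -> Optional[str]:
    if high:
        return high.popleft()
    if low:
        return low.popleft()
    return None

enqueue("email-1", False)
enqueue("voice-1", True)
enqueue("email-2", False)
enqueue("voice-2", True)

order = [dequeue() for _ in range(4)]
print(order)  # the voice packets jump ahead of the earlier email packets
```

Note how arrival order no longer dictates departure order: both voice packets leave before either email packet, even though an email packet arrived first.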
An important thing to remember here is that a packet's priority markings are only valid within the network they were created in. Once the packet leaves that network, the owners of the recipient network determine its new priority.
Thoughts to consider when prioritizing packets
Some thoughts and tips that can help when deciding how to prioritize packets include:
- It is generally a good idea to have priority markings applied by the devices closest to the source of the data. This ensures the packets travel the length and breadth of the network with the correct priority.
- Switches should be the devices of choice for marking incoming packets. They can load-balance the traffic and share the burden with other switches, reducing the load on their CPUs.
- Incoming traffic is almost always greater than outgoing traffic. ISPs normally assign less bandwidth to their clients' outgoing traffic, and it is there (on the outgoing network path) that QoS primarily needs to be applied.
- Cisco has a recommendation on how packets should be marked, as shown in this diagram:
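For reference, a few of the commonly cited classes from Cisco's QoS Baseline, with their standard DSCP values (a simplified subset, not the full recommendation):

```python
# A simplified subset of Cisco's QoS Baseline marking recommendations,
# mapped as traffic class -> (DSCP value, per-hop behavior name).
CISCO_BASELINE = {
    "voice":             (46, "EF"),    # VoIP bearer traffic
    "interactive-video": (34, "AF41"),  # video conferencing
    "call-signaling":    (24, "CS3"),   # SIP, H.323 signaling
    "network-control":   (48, "CS6"),   # routing protocol traffic
    "best-effort":       (0,  "DF"),    # ordinary data
    "scavenger":         (8,  "CS1"),   # lower-than-best-effort bulk
}

for traffic_class, (dscp, phb) in CISCO_BASELINE.items():
    print(f"{traffic_class:18} DSCP {dscp:2}  ({phb})")
```

A table like this typically becomes the heart of the QoS policy mentioned below: classification rules match the traffic, and these DSCP values are what gets written into the packets.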
Finally, the success of a QoS implementation always depends on the quality of the policy that governs how packets are classified, marked, and queued. The policy must be carefully drafted for the QoS implementation to be a success.
What not to use QoS for
Now, after reading this much about QoS, it might appear to be a magic elixir that can cure every ailment that causes network congestion. To a certain extent, it can make most RTP communications smoother and appear to streamline a network's traffic. Unfortunately, it isn't an all-around solution for every network problem.
QoS should never be used for the following purposes:
Increasing bandwidth
Although QoS streamlines the priority of RTP packets in a way that can make it look as if the network's bandwidth has suddenly increased, it should never be construed as such. QoS should never be used as a tool to "increase bandwidth"; all it does is use the existing resources a little more efficiently (and in favor of the RTP packets).
Instead, consider caching files to decrease the amount of data that comes and goes. If that doesn't work, the bandwidth limits may simply have been reached. When a company hits its bandwidth limits, the only viable option is to go out and buy more, not to use QoS.
Unclogging the network
If rogue applications are left running and end up hogging a network's bandwidth, implementing QoS is not the solution. While Skype calls might finally start to go through, QoS will not have addressed the root problem. Eventually, the rogue applications will swallow whatever resources are available until no QoS plan will be of any use.
One solution that could work here is to hunt down the resource-hogging applications and either shut them down or reschedule them to run after hours.
Again, the whole purpose of configuring QoS on a network is to make sure video and audio calls don’t lag (or even get dropped) due to a congested network. It is not a tool that can actually increase bandwidth. It can’t tunnel through a clogged network, either.
A good QoS implementation improves the quality and speed of mission-critical data by optimizing the allocated bandwidth and by marking packets so they are identified and given their assigned priority. It makes better use of the available bandwidth; it doesn't expand it.
- Feature image by John Carlisle on Unsplash
- “Red and white car light trails on an urban highway at night in Röddingsmarkt” by CBX. on Unsplash
- Mixed network design – Wikimedia, public domain
- “Netflow Traffic Analyzer Summary” – screenshot taken on 28/05/2018
- “Cisco’s QoS Baseline Marking Recommendations” – Courtesy of Cisco Systems, Inc. Unauthorized use not permitted (Image captured on 28/05/2018)