The Quick Guide to LARTC

What is LARTC?

Linux Advanced Routing and Traffic Control (LARTC) is a policy-based routing (PBR) methodology. PBR involves instructing your routers and switches to use some method other than the default Border Gateway Protocol (BGP) to select the adjacent network onto which they should forward data. On private networks, you have the option of specifying exact routes to a destination rather than letting the router work out the best path. You can also restrict the throughput of certain types of traffic and prioritize others through policy-based routing.

LARTC is implemented on the Linux operating system. You don’t need to buy a package in order to get LARTC working for you; there are already operating system commands available that will assist you in setting it up. Since Linux 2.2, you have had greater routing capabilities at your command than could be delivered by the route, ifconfig, and arp commands. The iproute2 suite offers much more sophisticated methods to direct certain traffic over specific routes and to create dedicated paths that fast-track traffic from specific sources. This is the traffic control feature of LARTC, and it integrates GRE tunneling to dedicate links to nominated traffic.
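As a minimal sketch of what iproute2 makes possible, the commands below send traffic from one source subnet out through a secondary gateway. The device names, table number, and addresses here are placeholders, not values from any particular setup:

```shell
# Populate an extra routing table (100 is an arbitrary, unused table ID)
# with a default route through a second uplink.
ip route add default via 203.0.113.1 dev eth1 table 100

# Policy rule: any packet sourced from 192.168.1.0/24 consults table 100
# instead of the main routing table.
ip rule add from 192.168.1.0/24 table 100

# Flush the route cache so the new policy takes effect immediately.
ip route flush cache
```

These commands require root privileges, and the rules disappear on reboot unless you persist them in your distribution's network configuration.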

The best guide you will find online to LARTC is the Linux Advanced Routing &amp; Traffic Control HOWTO. If you find this strategy too complicated to implement, there are shortcuts that you can take to simplify traffic shaping.

Routing and Traffic Control Methods

We will look at other routing and traffic control methods, specifically:

  • Border Gateway Protocol
  • Class-based queuing
  • Quality of Service (QoS)
  • Multi-band priority queuing
  • Dynamic load sharing

Now let’s take a look at detailed descriptions of each of these options.

Border Gateway Protocol

The Border Gateway Protocol is the only routing methodology in operation between the networks of the internet. The internet is made up of independently owned and managed networks, and each of these networks presents itself to the rest of the internet through its router. Each router in a path makes its own decision over which of its connected, neighboring routers to forward a data packet to.

When you make a connection to a web server on the other side of the world, your router does not specify how that packet will get there. The only control it has is over which of its neighbors it will send each packet to as the first “hop”. The receiving router then makes its own decision on where the next hop will be. Your router doesn’t need to dictate the route because all routers are playing by the same rules, so each one will forward the packet toward its destination in the same predictable way.

BGP is implemented by routing tables maintained by all of the routers on the internet. If one router in the world goes offline or is overwhelmed with traffic, all of the routers that connect to it will quickly notice and alter their routing tables to take that router off the list. This information also gets passed on to all of the other routers that connect to the neighbors of the faulty router, propagating routing table updates all over the world. The BGP system is very efficient, but you don’t need to apply it on your own private network.

You might have traffic that you need to prioritize over other types, and you might have certain applications that generate more traffic than your network can cope with. LARTC addresses these needs, but BGP doesn’t. BGP is the default method used by network equipment. However, there are plenty of other routing strategies that you could adopt.

Class-based queuing

Traffic control on private networks doesn’t rely on routers as much as it does on switches. Routers only become involved when nodes on the network communicate with destinations beyond the router, or gateway. Once incoming traffic gets onto the network, its journey over the network is managed by switches. The danger of congestion on the network can be headed off by specifying the behavior of those switches.

Queuing algorithms enable you to manage the traffic passing through switches when links get congested, and class-based queuing is one of the most prevalent strategies. This system takes a little while to set up because you need to specify classes and allocate traffic sources to them. The attributes that usually get traffic allocated to classes are source IP address, protocol, and application (port number). However, you can choose to implement the methodology your own way, because the important element is that there are classes – the allocation factor is not imposed by a universal rule.
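One way to experiment with class-based queuing on Linux is through the tc utility. The sketch below uses the HTB qdisc (a modern successor to the original CBQ class-based qdisc) to split a link into rate-limited classes; the device name, rates, ports, and addresses are all assumptions you would replace with your own:

```shell
# Root HTB qdisc; unclassified traffic falls into class 1:30.
tc qdisc add dev eth0 root handle 1: htb default 30

# Parent class caps the whole link at 100 Mbit/s.
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit

# Three child classes; each may borrow up to the link rate when idle
# capacity is available.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 60mbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 30mbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 10mbit ceil 100mbit

# Allocate traffic to classes by destination port and by source IP.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 80 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
    match ip src 192.168.1.50/32 flowid 1:20
```

The filters here classify by port and source address, mirroring the allocation attributes mentioned above, but any attribute tc can match on will do.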

If class-based queuing interests you, take a look at this free class-based routing program that is written for Linux.

Quality of Service (QoS)

Quality of Service is a concept closely associated with Cisco, and it has become widely implemented around the world thanks to the high volume of sales that Cisco’s network equipment achieves. The Cisco IOS firmware includes route maps, which operate in a similar fashion to BGP routing tables. However, these route maps are influenced by “tagging”. QoS is really a form of class-based routing.

See also: What is QoS

You define types of traffic to prioritize and then that priority traffic gets marked with a QoS tag on its way out onto the network. The switches on your system will maintain different route maps for different tags. QoS is particularly useful for creating virtual networks. So, you can run your digital telephone service over your data network, keeping both types of traffic distinct. The tagging doesn’t necessarily need to focus on traffic shaping; it can also be useful for traffic monitoring. The presence of the tags means that you can filter the results of network monitoring and examine only your voice traffic or only your data traffic.
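On Linux, you can reproduce this kind of tagging by writing DSCP marks into outgoing packets so that downstream switches can prioritize them. The sketch below uses the mangle table of iptables; the port numbers are assumptions for a typical VoIP deployment:

```shell
# Tag SIP signaling (UDP 5060) with DSCP class Expedited Forwarding (EF),
# the conventional mark for low-latency voice traffic.
iptables -t mangle -A OUTPUT -p udp --dport 5060 -j DSCP --set-dscp-class EF

# Tag an assumed RTP media port range the same way.
iptables -t mangle -A OUTPUT -p udp --dport 10000:20000 -j DSCP --set-dscp-class EF
```

The marks only pay off if the switches and routers along the path are configured to honor them, which is exactly the route-map behavior described above.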

In most network traffic scenarios, you have the time and power to analyze traffic and expand or adapt resources accordingly. In some situations, particularly when serving the public, demand can be unpredictable. You can use QoS methods to limit demand on less important services to free up bandwidth for your company’s most important products. For example, QoS can help reserve bandwidth to make specific applications available all of the time at the cost of others in the event of a traffic surge. You can implement QoS in an Apache web server as well.

Multi-band priority queuing

Your Linux kernel has a configuration item called CONFIG_NET_SCH_PRIO. This implements multi-band priority queuing, which is also known as n-band priority queuing. If you search the internet, you will see the term applied to the operations of wifi routers. Don’t get sidetracked by this parallel technology, because the Linux implementation of the term is simply a scheduling concept and not a method of manipulating radio frequencies.

In Linux multi-band priority queuing, the “band” refers to a class. So, this technique is just another way to prioritize traffic using the class-based concept. The bands are identified by a number, starting with 0 and incrementing sequentially. When implemented, your router or switch will perform normally while it is under capacity, but once queuing starts, it will pull the next message from the lowest-numbered band that has traffic waiting. So, data units marked with band 0 will get passed through before any datagram in the queue that is marked with band 1. This implementation essentially creates a series of virtual queues, with the network device only serving band 1 when there is nothing in band 0. This methodology contributes to the LARTC technique.
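The prio qdisc that CONFIG_NET_SCH_PRIO provides can be set up directly with tc. In this sketch, interactive SSH traffic is steered into band 0 so it jumps the queue ahead of everything else; the device name and the choice of SSH as priority traffic are assumptions:

```shell
# Three-band priority scheduler; the priomap assigns each Type of
# Service value to a default band for packets no filter matches.
tc qdisc add dev eth0 root handle 1: prio bands 3 \
    priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

# Steer SSH (TCP port 22) into band 0 (class 1:1, served first).
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 22 0xffff flowid 1:1
```

With this in place, packets queued in bands 1 and 2 only leave the device when band 0 is empty, which is the virtual-queue behavior described above.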

Dynamic load sharing

LARTC is sometimes used for load balancing — getting routers to stand in for each other when one is approaching full capacity. This concept can be pushed down to Layer 2 (also possible with LARTC). At the switch level, you need to consider Shortest Path Bridging (SPB), which will divert traffic onto an alternative path if the obvious route is congested.

This technique does require a certain amount of physical network link duplication. It isn’t always possible to reach one endpoint by many different links, so SPB is more likely to be implemented on large, complex networks. However, if you know you need to increase the physical capacity of your network on one segment, it is sometimes more economical to just lay a second cable alongside, rather than throwing out your existing wiring and getting in more expensive cabling with greater bandwidth capabilities. In this scenario, the SPB strategy would allow traffic to flow along cable A until it approaches full capacity and then pass extra traffic down cable B rather than queuing it.
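At the router level, iproute2 offers a simple form of load sharing through multipath routes. The sketch below spreads outgoing traffic across two uplinks in proportion to their weights; the gateway addresses and device names are placeholders:

```shell
# Default route split across two next hops; weight 2 sends roughly
# twice as many flows through the first gateway as the second.
ip route add default scope global \
    nexthop via 203.0.113.1 dev eth0 weight 2 \
    nexthop via 198.51.100.1 dev eth1 weight 1
```

Note that the kernel balances per-flow rather than per-packet, so individual connections stick to one path while the aggregate load is shared.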

Not just LARTC

Although LARTC is very appealing and seems to be the answer to every problem, it isn’t your only solution. In fact, many of the techniques outlined in this guide contribute to LARTC. If you want a toolkit that you can turn to no matter what problem you are facing, then Linux Advanced Routing &amp; Traffic Control will keep you covered. However, you will probably never need all of the techniques of LARTC at the same time all across your networks. It is more likely that on some switches you need load sharing and on others you need queue manipulation. Consider these simpler solutions before you opt for the complex and composite methodologies that make up Linux Advanced Routing and Traffic Control.

See also:

25 Best Network Monitoring & Management Tools/Software of 2018
5 Best Bandwidth Optimization Tools to Increase Network Bandwidth
8 best packet sniffers and network analyzers for 2018
9 Best Network Troubleshooting Tools for Network Administrators
Ultimate Guide to TCP/IP Transmission Control Protocol

Image: Network device from PXHere. Public domain