In "IP QoS: Two generations of class-of-service tools," I described the tools available in modern routers (and other layer-3 devices) that you can use to deploy differentiated services in your network.
Then in "MPLS QoS: Implementing the best model for guaranteed service," I described how you can use these tools to implement a high-value VPN service.
With edge problems solved, it's time to focus on the network core. Unless you're fortunate enough to have very high-bandwidth core links, you are inevitably facing link congestion and discovering the unpleasant truths about today's router architecture. The bottom line: it's best to address these issues before congestion starts hurting your traffic.
Most modern routers and layer-3 switches perform forwarding decisions (where do I send the packets) independently from the QoS decisions (which packets do I prefer or drop). For example, when a core link becomes congested, a router continues forwarding packets onto the congested link even though there might be a longer or slower -- but less congested -- alternate path through the network.
The core MPLS QoS mechanisms (queuing and selective dropping) can try to cope with the congestion, but they are effectively a zero-sum effort. You can give some traffic preferential treatment only at the expense of less-important traffic. Obviously, we need something more than standard IP routing and QoS. Routers should be aware of the bigger picture and use the network resources more intelligently.
Reintroducing virtual circuits to the IP core
The limitations faced by today's routers arise from the basic assumptions of IP routing: Core routers treat IP traffic as connectionless datagrams, not as streams of data (similar to virtual circuits in ATM or Frame Relay). If you want to optimize the utilization of redundant links in the network core and influence the paths traffic is taking based on the actual network load, you need to reintroduce virtual circuits into the core IP network. The only mechanism available in today's purely IP-based networks that can accomplish that is MPLS Traffic Engineering (MPLS TE).
For a long time, MPLS TE had no connection to QoS. While you could provision alternate traffic-engineered (TE) label switch paths (LSPs) across the network and even specify how much bandwidth each path would need, the bandwidth limitations or preferential treatment of provisioned LSPs were not enforced automatically. You had to configure MPLS TE independently from IP QoS or MPLS QoS, so their interoperability depended entirely on good network design.
Learning from customer feedback, router vendors have introduced many features that make it easier to implement network-wide QoS in an MPLS TE environment. Some features require end-to-end interoperability and are thus standardized by the Internet Engineering Task Force (IETF). Most notably, vendors agreed on a method to implement Diffserv-aware MPLS TE, where the network devices use multiple bandwidth pools to separate high-priority traffic allocations from low-priority ones. Using Diffserv-aware MPLS TE, you can implement a network where voice traffic gets the bandwidth it needs while guaranteeing that lower-priority services (VPN and Internet traffic) will not starve.
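To make the bandwidth-pool idea concrete, here is an illustrative Cisco IOS-style sketch of Diffserv-aware MPLS TE (the sub-pool keyword comes from the pre-standard IOS DS-TE implementation; interface names, tunnel numbers and bandwidth values are assumptions, and exact syntax varies by platform and release):

```
! Core link: 100 Mbps reservable, of which at most
! 30 Mbps may be claimed by the high-priority sub-pool
interface GigabitEthernet0/0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 100000 sub-pool 30000
!
! Voice LSP draws from the restricted sub-pool ...
interface Tunnel1
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng bandwidth sub-pool 10000
!
! ... while a VPN/Internet LSP uses the global pool
interface Tunnel2
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng bandwidth 40000
```

Because voice LSPs can only reserve sub-pool bandwidth, they can never consume the whole link, which is what keeps the lower-priority services from starving.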
Automatic bandwidth adjustment simplifies MPLS TE provisioning
Most large service providers have experienced the pain of provisioning numerous MPLS TE LSPs across the core network (configured as MPLS TE tunnels on the edge routers). Ideally, you need a pair of LSPs (LSPs are unidirectional) between each pair of edge devices or between each pair of POPs. The number of MPLS TE tunnels thus grows with the square of the number of edge points in your network. The autotunnel mesh groups significantly simplify MPLS TE provisioning because the tunnels between members of the mesh group are established automatically.
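On Cisco IOS, for example, an autotunnel mesh group can be defined with a single template interface; the router then builds tunnels automatically to every other router whose TE router ID matches an access list (an illustrative sketch; the addresses and numbers are assumptions):

```
mpls traffic-eng auto-tunnel mesh
!
interface Auto-Template1
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination access-list 50
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng path-option 1 dynamic
!
! TE router IDs of the other mesh-group members
access-list 50 permit 192.168.0.0 0.0.0.255
```

Adding a new edge router to the mesh then requires no per-tunnel configuration on the existing routers, sidestepping the N-squared provisioning problem.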
Allocating correct bandwidth to each MPLS TE LSP provisioned across the network core became easier with the automatic bandwidth adjustment (autobandwidth) feature, which measures the actual long-term utilization of an LSP and adjusts its bandwidth allocation accordingly. With autobandwidth deployed throughout the network, you could almost have a core network running on autopilot, dynamically discovering changes in end-to-end load and adapting to them.
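In Cisco IOS terms, enabling autobandwidth could look like the following sketch (the timer and bandwidth values are assumptions you would tune to your own network):

```
! Sample tunnel load and re-signal bandwidth periodically
mpls traffic-eng auto-bw timers frequency 300
!
interface Tunnel1
 tunnel mode mpls traffic-eng
 ! Adjust the reservation to measured load, within bounds
 tunnel mpls traffic-eng auto-bw max-bw 50000 min-bw 1000
```

The min-bw and max-bw bounds keep a single misbehaving tunnel from reserving far too little or far too much of the core capacity.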
Last but not least, MPLS TE became fully QoS-aware with class-based tunnel selection. This feature allows you to establish a bundle of MPLS TE LSPs between a pair of endpoints. Each LSP in the bundle can have its own bandwidth requirements; it can also use different paths across the network, depending on overall bandwidth availability. Once the LSPs are established, the head-end router selects the outgoing LSP based on the QoS bits in the forwarded packet. This feature allows you to use different LSPs for voice, VPN and best-effort Internet services.
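With Cisco's class-based tunnel selection, for instance, member LSPs are tied to MPLS EXP values and grouped under a master tunnel, which the head-end router uses as a single routing entity (a hedged sketch; the tunnel numbers and EXP-to-service mappings are assumptions):

```
! Voice LSP carries EXP 5 traffic
interface Tunnel1
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng exp 5
!
! Best-effort LSP carries all remaining traffic
interface Tunnel2
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng exp default
!
! Master tunnel bundles the member LSPs
interface Tunnel10
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng exp-bundle master
 tunnel mpls traffic-eng exp-bundle member Tunnel1
 tunnel mpls traffic-eng exp-bundle member Tunnel2
```

The forwarding decision stays with the master tunnel; the EXP bits in each packet then pick the member LSP, so voice and best-effort traffic can follow different paths between the same pair of endpoints.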
Some service providers already use the power of autobandwidth in combination with autotunnel mesh groups to build networks that spread traffic across all available network paths based on actual traffic conditions. It's not hard to do the same. Unless your gear is decades old, the functionality you need is probably already available and just needs to be configured. But don't rush. As with any other major network change, deploying MPLS TE with QoS requires careful planning, good design and implementation, as well as associated training for your networking engineers.
About the author: Ivan Pepelnjak, CCIE No. 1354, is a 25-year veteran of the networking industry. He has more than 10 years of experience in designing, installing, troubleshooting and operating large service provider and enterprise WAN and LAN networks. He is currently chief technology adviser at NIL Data Communications, focusing on advanced IP-based networks and Web technologies. His books include MPLS and VPN Architectures and EIGRP Network Design. For more expert advice from Ivan, you can read his blog, Cisco IOS hints and tricks.