MPLS QoS: Technical advances and service guarantees

Understanding technical advances in MPLS Quality of Service (QoS) is essential for telecom carriers that want to offer guaranteed delivery for mission-critical data and time-sensitive content. This guide walks through MPLS QoS deployment challenges at the edge and the use of MPLS-TE in the core.

Telecom service providers that want to stay in the IP networking business need to be able to differentiate time-sensitive traffic (such as voice and video) from less sensitive traffic (such as email) and guarantee its delivery in order to carry mission-critical customer traffic. On the business side, charging more for high-priority traffic has always been a carrier goal, because not all traffic is created equal, and the revenue generated by different classes of service should reflect that.

Guaranteed Quality of Service (QoS) over MPLS has been discussed and advanced for more than a decade, yet it's still complicated to deploy a guaranteed MPLS service. This Telecom Insights guide compares the capabilities of two generations of class-of-service tools that help carriers deliver IP quality-of-service guarantees to their customers, then looks at how to use these tools to implement a high-value VPN service. After addressing the edge, it examines how MPLS Traffic Engineering (TE) can help alleviate core router and link congestion by reintroducing virtual circuits to optimize redundant links and use network resources intelligently.

Table of contents
  IP QoS: Two generations of class-of-service tools
  MPLS QoS: Implementing the best model for guaranteed service
  Using MPLS TE to avoid core network congestion


  IP QoS: Two generations of class-of-service tools  

Fifteen years ago, life was pure and simple: Service providers offered point-to-point links with specified quality of service (QoS) -- usually committed and excess bit rates. The Internet Protocol (IP) lacked any QoS mechanisms. Things got complicated when people started using IP in mission-critical networks, and as is usually the case, two competing architectures were developed to provide QoS on IP:

  • Integrated Services (IntServ; RFC 1633) allowed each individual data session (each application instance) to specify its own set of QoS parameters.
  • Differentiated Services (DiffServ; RFC 2475) grouped user data in coarse classes (for example, real-time, mission-critical and "other" traffic) and provided QoS guarantees to each class, but not to every single session within the class.

Integrated services architecture failed the scalability challenge owing to the same problem that had plagued X.25 and legacy IBM networking: You simply cannot provide individual QoS guarantees to millions of flows traversing the same high-speed link. So instead, all high-speed service provider designs use differentiated services (DiffServ) architecture.

Initial implementations of DiffServ architecture used the IP precedence field in IP packets to indicate the desired class of service. This field is three bits long; you can thus provide up to six different classes of service (values 6 and 7 are reserved for control traffic).

When it became evident that a wider range of values was needed, the type-of-service octet in the IP header was redefined as the Differentiated Services field (RFC 2474), carrying the Differentiated Services Code Point (DSCP). The DSCP gives you the full range of IP precedence values, four additional assured forwarding classes, each with three drop priorities (the drop priority is similar to the discard eligibility bit in Frame Relay or the cell loss priority bit in ATM), and the expedited forwarding class used for real-time traffic.
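
To make those code points concrete, the short Python sketch below computes the standard values: class selectors (the old IP precedence shifted into the six-bit DSCP), the assured forwarding classes AFxy from RFC 2597, and the expedited forwarding code point from RFC 3246. The helper names are mine; the sketch only illustrates the numbering scheme, not which classes a provider should deploy.

    # Minimal sketch of the standard DiffServ code points (values per RFC 2474/2597/3246).
    def class_selector(precedence):
        """Class selector code point: the old IP precedence shifted into the top DSCP bits."""
        return precedence << 3

    def assured_forwarding(af_class, drop_precedence):
        """AFxy code point: 8 * class + 2 * drop precedence (class 1-4, drop precedence 1-3)."""
        return 8 * af_class + 2 * drop_precedence

    EXPEDITED_FORWARDING = 46   # EF, used for real-time traffic such as voice

    if __name__ == "__main__":
        print([f"CS{p}={class_selector(p)}" for p in range(8)])
        print([f"AF{c}{d}={assured_forwarding(c, d)}" for c in range(1, 5) for d in range(1, 4)])
        print(f"EF={EXPEDITED_FORWARDING}")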

IP quality of service mechanisms

A typical high-speed QoS implementation in modern routers and Layer 3 switches might include the following mechanisms:

  • Metering (policing) and marking. The metering function should ensure that traffic sent by customers conforms to contractual limits. Excess traffic could be dropped, relabeled as less-important traffic, or marked with a different drop priority.

    Note: Drop priorities are better than traffic relabeling because relabeling can cause out-of-order packets, which can severely degrade the throughput of customers' applications.

  • Queuing based on DSCP or IP precedence values. Most devices support priority queuing, which should be used for real-time traffic (voice, for example), and class-based queuing, which allocates a percentage of the available bandwidth to each traffic class.
  • Dropping (including random early drop) based on drop priorities. When encountering output link congestion, the network devices should preferentially drop packets with high drop priority (assuming these packets are out-of-contract traffic marked at the network's ingress boundary).

Most software-based devices also include shaping functionality. Instead of dropping or relabeling out-of-contract traffic (as policing does), shaping delays out-of-contract packets. Shaping is preferred to policing, as it results in much better end-to-end application performance, but it is usually implemented in software and is thus unusable on high-speed links.
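
The difference between policing and shaping is easiest to see in a toy model. The Python sketch below is a minimal single-rate token bucket; the rates, burst size and packet sizes are made-up example values. The policer immediately marks out-of-contract packets with a high drop priority (it could just as well drop them), while the shaper delays them until the bucket has refilled.

    # Minimal token-bucket sketch contrasting policing (drop/re-mark) with shaping (delay).
    # Rates and sizes are illustrative assumptions, not recommended values.
    class TokenBucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0        # refill rate in bytes per second
            self.burst = burst_bytes          # bucket depth in bytes
            self.tokens = burst_bytes
            self.last = 0.0

        def conforms(self, size, now):
            """Refill the bucket and check whether 'size' bytes fit the contract."""
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= size:
                self.tokens -= size
                return True
            return False

    def police(packets, bucket):
        """Policer: out-of-contract packets are re-marked with high drop priority."""
        return [(t, size, "in-contract" if bucket.conforms(size, t) else "high-drop")
                for t, size in packets]

    def shape(packets, bucket):
        """Shaper: out-of-contract packets are delayed until tokens are available."""
        out = []
        for t, size in packets:
            depart = max(t, bucket.last)      # a packet cannot leave before the previous one
            while not bucket.conforms(size, depart):
                depart += 0.001               # wait 1 ms and try again (simplified)
            out.append((depart, size))
        return out

    if __name__ == "__main__":
        # Back-to-back 1500-byte packets against a 1 Mbps contract with a 3000-byte burst.
        pkts = [(i * 0.002, 1500) for i in range(10)]
        print(police(pkts, TokenBucket(1_000_000, 3000)))
        print(shape(pkts, TokenBucket(1_000_000, 3000)))

Running the sketch with back-to-back packets shows the policer re-marking most of the burst, while the shaper spaces the same packets out over time, which is exactly why shaping tends to give better end-to-end application performance.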

Note: Recent high-end router modules (the 4-port Gigabit Ethernet module for Cisco's 7600 router, for example) support hardware shaping queues, making PE-to-CE shaping a viable solution.

Ideally, the customer edge (CE) router should perform outbound shaping, and the provider edge (PE) router should use policing to monitor traffic contract compliance.

Summary

Service providers that don't want to compete solely on pricing should provide IP quality-of-service guarantees to their customers. To implement contractual obligations, the service provider network should use the following tools:

  • Policing and marking on ingress PE routers.
  • Differentiated queuing and dropping on core links.
  • Shaping (or policing, based on line speeds and hardware deployed in the network) and differentiated queuing on egress PE-CE links.


  MPLS QoS: Implementing the best model for guaranteed service  

The first implementation of a Multiprotocol Label Switching (MPLS) virtual private network (VPN) service with guaranteed Quality of Service (QoS) involves multiple layers of tasks for service provider engineers, but advance planning will start the project on the right track to a successful deployment. And when the engineers start talking about pipes and hoses, don't worry: those terms really are part of MPLS quality-of-service jargon.

What can you guarantee?

Before you start offering MPLS QoS to your customers, you should carefully evaluate what you can reasonably provide and how the guarantees will fit into your overall service portfolio. There are two basic models of QoS offered by service providers running MPLS-based networks.

  • Pipe model -- or site-to-site QoS: Similar to the guarantees offered on Frame Relay or ATM networks, this type of QoS is called the pipe model because you are providing the guarantee on a point-to-point virtual pipe linking two sites.
  • Hose model -- or per-site QoS: Alternatively, you can offer QoS guarantees on inbound and outbound traffic for each site. For example, you promise to deliver 10 Mbps of traffic sent by site X regardless of the destination of the traffic. This approach is called the hose model.

Obviously, the hose model is less precise than the pipe model. For example, if a high-speed site sends a 100 Mbps video stream to a low-speed site, most of the traffic will be lost before being delivered to the low-speed site, yet the network will not have violated its QoS guarantees. The hose model is also harder to engineer, as it is more difficult to reliably predict where the traffic will actually go.

With the pipe model, traffic engineering tools similar to those used in Frame Relay or ATM networks can be used to ensure optimum network performance. The hose model cannot be engineered so precisely. Therefore, the hose model should be used for only a relatively small percentage of the overall traffic mix.
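
To see the difference in what the two models actually check, here is a small hypothetical Python sketch; the site names, guarantees and traffic matrix are invented example numbers in Mbps. The pipe model validates every site pair against its own contracted rate, while the hose model only validates the per-site totals, with no constraint on where the traffic goes.

    # Hypothetical illustration of pipe vs. hose admission checks on a traffic matrix.
    # Site names, guarantees and loads are made-up example values in Mbps.
    traffic = {("X", "Y"): 3, ("X", "Z"): 6}                  # offered site-to-site traffic

    # Pipe model: one guarantee per ordered site pair (a point-to-point virtual pipe).
    pipe_guarantee = {("X", "Y"): 10, ("X", "Z"): 5}

    # Hose model: one outbound and one inbound guarantee per site, destination-independent.
    hose_out = {"X": 10, "Y": 5, "Z": 5}
    hose_in = {"X": 10, "Y": 10, "Z": 10}

    def pipe_ok(matrix, guarantee):
        """Every pipe must individually stay within its contracted rate."""
        return all(load <= guarantee.get(pair, 0) for pair, load in matrix.items())

    def hose_ok(matrix, egress, ingress):
        """Only the per-site totals matter; where the traffic goes is not constrained."""
        for site in egress:
            sent = sum(load for (src, _), load in matrix.items() if src == site)
            received = sum(load for (_, dst), load in matrix.items() if dst == site)
            if sent > egress[site] or received > ingress[site]:
                return False
        return True

    if __name__ == "__main__":
        print("pipe model admits traffic:", pipe_ok(traffic, pipe_guarantee))
        print("hose model admits traffic:", hose_ok(traffic, hose_out, hose_in))

In this made-up example the hose model happily admits a traffic pattern that exceeds what a dedicated pipe to site Z would allow, which is exactly the imprecision described above.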

The choice of the pipe or hose QoS model depends heavily on the type of VPN service offered. The pipe model is ideal for point-to-point services, including Any Transport over MPLS (AToM), point-to-point VPN services, or hub-and-spoke MPLS VPN services with no direct inter-spoke communication. The hose model is the only viable model for any-to-any service, including full mesh MPLS VPN service and Virtual Private LAN (VPLS) service.

Implementing MPLS QoS

All MPLS QoS implementations use the differentiated services (DiffServ) model. Routers use three bits, called experimental bits for historical reasons, in the MPLS header of each packet transported across the MPLS network to differentiate the traffic. This allows eight traffic classes to be implemented, though one is usually reserved for the default traffic class, leaving only seven actual classes. If you want to offer in-contract/out-of-contract QoS, similar to the DE bit in Frame Relay or the CLP bit in ATM, only four traffic classes remain, as one bit is needed for the out-of-contract indication. Four traffic classes should be enough to cover the needs of most service providers.
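
One way to picture that arithmetic: with three experimental bits you get eight code points, and dedicating one bit to the in-contract/out-of-contract indication leaves two bits, and therefore four classes. The Python sketch below uses a hypothetical encoding with made-up class names; it is not a standardized mapping.

    # Hypothetical EXP encoding: two bits select the traffic class, one bit marks out-of-contract.
    CLASSES = ["best-effort", "business", "video", "voice"]   # made-up class names

    def exp_value(class_index, out_of_contract):
        """Pack a 2-bit class and a 1-bit drop indication into the 3-bit EXP field."""
        return (class_index << 1) | int(out_of_contract)

    def decode(exp):
        return CLASSES[exp >> 1], bool(exp & 1)

    if __name__ == "__main__":
        for exp in range(8):
            name, ooc = decode(exp)
            print(f"EXP {exp}: {name}, {'out-of-contract' if ooc else 'in-contract'}")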

The traffic classification is usually performed by the service provider edge routers (PE routers). These routers should measure the compliance of the customer traffic, sort the traffic into MPLS traffic classes, optionally mark the out-of-contract traffic and drop excess traffic.

The easiest marking algorithm modifies the DiffServ Code Point (DSCP) value in the original IP packets. The packets are then transported across the MPLS network with MPLS experimental bits matching the value of the IP DSCP, which is why this approach is called uniform mode.

In most cases, customers that care about QoS want to retain their DSCP markings for end-to-end QoS control. To satisfy this request, PE routers have to measure the traffic and set the MPLS experimental bits directly. The MPLS markings are retained from the ingress PE router to the egress PE router, giving this method the name short pipe mode.

Advanced customers might want to retain flexibility and set the in-contract/out-of-contract bits themselves, as they used to do on Frame Relay or ATM networks. In these designs, the MPLS label switched path (LSP) has to be extended to the ingress customer edge (CE) router; this method is therefore called long pipe mode.
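
As a rough summary of the three marking approaches described above, the hypothetical Python function below shows where the EXP value comes from in each mode. The function name, its parameters (such as sp_class_exp for the provider-assigned class) and the example values are illustrative assumptions; real router behavior depends on platform and configuration.

    # Rough sketch of where the MPLS EXP value comes from in each marking mode.
    def ingress_exp(mode, ip_dscp, sp_class_exp=None, ce_marked_exp=None):
        if mode == "uniform":
            # MPLS marking follows the customer's IP marking (top three DSCP bits).
            return ip_dscp >> 3
        if mode == "short pipe":
            # Provider sets EXP from its own measurement; customer DSCP is left untouched.
            return sp_class_exp
        if mode == "long pipe":
            # The LSP is extended to the CE router, which sets the EXP bits itself.
            return ce_marked_exp
        raise ValueError(f"unknown mode: {mode}")

    if __name__ == "__main__":
        print(ingress_exp("uniform", ip_dscp=46))                         # EF traffic -> EXP 5
        print(ingress_exp("short pipe", ip_dscp=46, sp_class_exp=3))
        print(ingress_exp("long pipe", ip_dscp=46, ce_marked_exp=4))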

Regardless of the way the MPLS experimental bits were set on the network edge, they can be used to sort packets into output queues with different QoS parameters or to implement selective drop for out-of-contract packets on oversubscribed links.

Note: Don't forget that most routers operate in uniform mode unless configured otherwise, copying IP DSCP values into the MPLS experimental bits. As soon as you implement differentiated queuing on your core links, you should mark all inbound traffic on your network edges with explicit MPLS experimental bit values -- otherwise, non-paying customers will be able to hijack your high-priority queues.

Conclusion

Improving QoS guarantees will help service providers differentiate themselves from their competitors. The simple planning steps listed below will put service providers on the right track.

  • Figure out what customers actually need. Copying competitors' models will not provide any advantage.
  • Design the QoS offering based on the expected traffic flows of the services. For example:
    • If SPs offer point-to-point services, the pipe model (guaranteeing end-to-end bandwidth or delay) is best.
      Note: Networks can be engineered more precisely when using the pipe model, but this model cannot be applied to all VPN services.
    • If customers want full-mesh VPN connectivity, it is better to use the hose model, which guarantees inbound and outbound bandwidth at each site.

Once the QoS service definitions have been decided, implementation can be started:

  • Map the QoS service offerings into MPLS traffic classes. Remember, only three bits are available to mark traffic, and hardware implementation on networking gear might further limit the available choices.
  • Protect yourself. Before configuring QoS mechanisms in the network core, ensure that regular customers cannot insert high-priority traffic into your network.
  • Configure queues and selective drop of out-of-contract traffic. Determine in advance what traffic receives priority and what traffic gets dropped from your core links when they congest, and use the MPLS experimental bits to sort packets into output queues (see the sketch after this list).
  • Configure metering and marking. All inbound customer traffic should be metered and then marked or policed.
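
The selective-drop step referenced in the list above can be pictured with a minimal weighted-RED-style sketch; the thresholds and drop probabilities below are invented for illustration, not recommendations. Packets carrying the out-of-contract indication start being discarded at a lower average queue depth, and with a higher probability, than in-contract packets.

    import random

    # Minimal WRED-style selective drop sketch; thresholds and probabilities are
    # illustrative values, not recommendations for a production network.
    PROFILES = {
        # drop priority: (min queue depth, max queue depth, max drop probability)
        "in-contract":     (40, 64, 0.1),
        "out-of-contract": (20, 40, 0.5),
    }

    def drop_probability(avg_queue_depth, drop_priority):
        lo, hi, max_p = PROFILES[drop_priority]
        if avg_queue_depth < lo:
            return 0.0                       # below the minimum threshold: never drop
        if avg_queue_depth >= hi:
            return 1.0                       # above the maximum threshold: tail drop
        return max_p * (avg_queue_depth - lo) / (hi - lo)

    def admit(avg_queue_depth, drop_priority):
        """Randomly admit or drop one packet based on its drop profile."""
        return random.random() >= drop_probability(avg_queue_depth, drop_priority)

    if __name__ == "__main__":
        for depth in (10, 30, 50, 70):
            print(depth, {p: round(drop_probability(depth, p), 2) for p in PROFILES})
        print("admit out-of-contract packet at depth 30:", admit(30, "out-of-contract"))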


  Using MPLS TE to avoid core network congestion  

With edge problems solved, it's time to focus on the network core. Unless you're fortunate enough to have very high-bandwidth core links, you will inevitably face link congestion and discover some unpleasant truths about today's router architectures. The bottom line is that it's best to address these issues before they become critical.

Most modern routers and Layer 3 switches make forwarding decisions (where do I send the packet?) independently of QoS decisions (which packets do I prefer or drop?). For example, when a core link becomes congested, a router continues forwarding packets onto the congested link even though there might be a longer or slower -- but less congested -- alternate path through the network.

The core MPLS QoS mechanisms (queuing and selective dropping) can try to cope with the congestion, but they are effectively a zero-sum effort. You can give some traffic preferential treatment only at the expense of less-important traffic. Obviously, we need something more than standard IP routing and QoS. Routers should be aware of the bigger picture and use the network resources more intelligently.

Reintroducing virtual circuits to the IP core

The limitations faced by today's routers arise from the basic assumptions of IP routing: Core routers treat IP traffic as connectionless datagrams, not as streams of data (similar to virtual circuits in ATM or Frame Relay). If you want to optimize the utilization of redundant links in the network core and influence the paths traffic is taking based on the actual network load, you need to reintroduce virtual circuits into the core IP network. The only mechanism available in today's purely IP-based networks that can accomplish that is MPLS Traffic Engineering (MPLS TE).

For a long time, MPLS TE wasn't connected to QoS. While you could provision alternate traffic-engineered (TE) label switched paths (LSPs) across the network and even specify how much bandwidth each path would need, the bandwidth limitations or preferential treatment of provisioned LSPs were not enforced automatically. You had to configure MPLS TE independently of IP QoS or MPLS QoS, and their interoperability depended entirely on good network design.

Learning from customer feedback, router vendors have introduced many features that make it easier to implement network-wide QoS in an MPLS TE environment. Some features require end-to-end interoperability and are thus standardized by the Internet Engineering Task Force (IETF). Most notably, vendors agreed on a method to implement DiffServ-aware MPLS TE, in which the network devices use multiple bandwidth pools to separate high-priority traffic allocations from low-priority ones. Using DiffServ-aware MPLS TE, you can implement a network where voice traffic gets the bandwidth it needs while guaranteeing that lower-priority services (VPN and Internet traffic) will not starve.
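
The bandwidth-pool idea behind DiffServ-aware MPLS TE can be sketched in a few lines of Python. The class below models a link with a global reservable pool and a smaller sub-pool carved out for high-priority LSPs; the pool sizes, the admission rule and the class name are illustrative assumptions, not a description of any vendor's implementation.

    # Illustrative sketch of a link with a global reservable pool and a smaller
    # sub-pool reserved for high-priority (e.g. voice) LSPs. Numbers are made up.
    class DsTeLink:
        def __init__(self, global_pool_mbps, sub_pool_mbps):
            self.global_free = global_pool_mbps
            self.sub_free = sub_pool_mbps       # sub-pool is carved out of the global pool

        def admit(self, mbps, high_priority):
            """Admit an LSP only if the relevant pool(s) still have room."""
            if high_priority:
                if mbps <= self.sub_free and mbps <= self.global_free:
                    self.sub_free -= mbps
                    self.global_free -= mbps
                    return True
                return False
            if mbps <= self.global_free - self.sub_free:    # leave the sub-pool untouched
                self.global_free -= mbps
                return True
            return False

    if __name__ == "__main__":
        link = DsTeLink(global_pool_mbps=1000, sub_pool_mbps=200)
        print(link.admit(150, high_priority=True))    # voice LSP fits the sub-pool
        print(link.admit(700, high_priority=False))   # best-effort LSP fits what's left
        print(link.admit(200, high_priority=False))   # rejected: would eat into the sub-pool

The point of the example is simply that lower-priority reservations are only admitted out of the capacity not set aside for the sub-pool, which is how voice traffic keeps its bandwidth without starving the other services.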

Automatic bandwidth adjustment simplifies MPLS TE provisioning

Most large service providers have experienced the pain of provisioning numerous MPLS TE LSPs across the core network (configured as MPLS TE tunnels on the edge routers). Ideally, you need a pair of LSPs (LSPs are unidirectional) between each pair of edge devices or between each pair of POPs. The number of MPLS TE tunnels thus grows with the square of the number of edge points in your network. Autotunnel mesh groups significantly simplify MPLS TE provisioning because the tunnels between members of a mesh group are established automatically.
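
A quick back-of-the-envelope calculation shows how fast a full mesh grows; the sketch below simply counts the unidirectional LSPs needed between N edge devices.

    # Unidirectional LSPs for a full mesh of N edge devices: N * (N - 1).
    def full_mesh_lsps(edge_devices):
        return edge_devices * (edge_devices - 1)

    if __name__ == "__main__":
        for n in (10, 50, 100, 200):
            print(f"{n} edge devices -> {full_mesh_lsps(n)} unidirectional LSPs")

At 200 edge devices you are already looking at 39,800 LSPs, which is why automatic mesh provisioning matters.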

Allocating the correct bandwidth to each MPLS TE LSP provisioned across the network core became easier with the automatic bandwidth adjustment (autobandwidth) feature, which measures the actual long-term utilization of an LSP and adjusts its bandwidth allocation accordingly. With autobandwidth deployed throughout the network, you could almost have a core network running on autopilot, dynamically discovering changes in end-to-end load and adapting to them.
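
The autobandwidth idea boils down to a periodic measure-and-resize loop. The sketch below is a deliberately simplified model with made-up utilization samples and an assumed 10% safety margin; it is not a description of any vendor's algorithm.

    # Simplified autobandwidth model: every interval, resize the LSP reservation to the
    # highest utilization sample seen during that interval, plus a small safety margin.
    def adjust_bandwidth(samples_mbps, margin=1.1):
        """Return the new reservation based on the largest measured sample."""
        return max(samples_mbps) * margin

    if __name__ == "__main__":
        reservation = 100.0              # starting reservation in Mbps (made-up value)
        print(f"initial reservation: {reservation:.0f} Mbps")
        for samples in ([120, 140, 135], [60, 55, 70]):
            reservation = adjust_bandwidth(samples)
            print(f"samples {samples} -> new reservation {reservation:.0f} Mbps")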

Last but not least, MPLS TE became fully QoS-aware with class-based tunnel selection. This feature allows you to establish a bundle of MPLS TE LSPs between a pair of endpoints. Each LSP in the bundle can have its own bandwidth requirements; it can also use different paths across the network, depending on overall bandwidth availability. Once the LSPs are established, the head-end router selects the outgoing LSP based on the QoS bits in the forwarded packet. This feature allows you to use different LSPs for voice, VPN and best-effort Internet services.
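
Class-based tunnel selection is essentially a lookup from the packet's EXP value to one of the LSPs in the bundle. The mapping and tunnel names below are invented examples, not a standardized assignment.

    # Made-up example of class-based tunnel selection: the head-end router picks the
    # outgoing LSP in the bundle based on the EXP bits carried by the packet.
    TUNNEL_FOR_EXP = {
        5: "LSP-voice",        # real-time traffic on a delay-optimized path
        3: "LSP-vpn",          # business VPN traffic
        0: "LSP-best-effort",  # Internet traffic
    }

    def select_tunnel(exp):
        return TUNNEL_FOR_EXP.get(exp, "LSP-best-effort")   # default to best effort

    if __name__ == "__main__":
        for exp in (5, 3, 0, 1):
            print(f"EXP {exp} -> {select_tunnel(exp)}")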

Some service providers already use the power of autobandwidth in combination with autotunnel mesh groups to build networks that spread traffic across all available network paths based on actual traffic conditions. It's not hard to do the same. Unless your gear is decades old, the functionality you need is probably already available and just needs to be configured. But don't rush. As with any other major network change, deploying MPLS TE with QoS requires careful planning, good design and implementation, as well as associated training for your networking engineers.


About the author: Ivan Pepelnjak, CCIE No. 1354, is a 25-year veteran of the networking industry. He has more than 10 years of experience in designing, installing, troubleshooting and operating large service provider and enterprise WAN and LAN networks. He is currently chief technology adviser at NIL Data Communications, focusing on advanced IP-based networks and Web technologies. His books include MPLS and VPN Architectures and EIGRP Network Design. For more expert advice from Ivan, check out his blog, Cisco IOS hints and tricks.

This was first published in May 2009
