If you believed everything you read about the imminent arrival of IP video, whether that means video on demand, IPTV, or any other variety, you'd think it was a no-brainer. It's a lot like the promises in the early days of cable TV that went kind of like this: "Hundreds, no, thousands of channels will be yours at the push of a button." While that may be true now, it wasn't then, and so it goes for IP video.
The pressure is on telecom service providers to deploy video in many forms sooner, faster and better than their competitors. SearchTelecom.com's newest guide takes a look at what's driving IP video, how to turn the dream into revenue, and how best to ready your infrastructure.
Video networks: The fundamental structure
Implementing video on demand
Point-to-multipoint MPLS
Three key video strategies for revenue growth
|Video networks: The fundamental structure by Tom Nolle|
Video delivery, content network, IPTV, triple play -- these are just a few of the terms used today for video over broadband data networks. This concept of video networks is an increasing challenge, and like most network challenges, this one is influenced by technology, business and regulatory factors.
Video poses three major challenges to network planners and operators. First, video is bandwidth hungry as an application. A video file is huge in comparison with almost any other type of information carried over a network. Second, streaming video requires real-time consistent network performance. Most broadband applications are highly tolerant of variations in delay and packet loss, but video is often totally corrupted by even a small variation in delay, and any significant packet loss can destroy the viewing experience. Finally, video is perceived by the user to be a continuous long-term experience, and if any significant portion goes badly, complaints and demands for refunds will result. Unlike a lost call or a bad connection in voice, which users repair simply by redialing, a video connection is a commitment by both parties, and any failure is almost certain to create negative customer reaction.
These factors influence all video networks, but the degree of impact depends on the video model. There are three broadband video service models. Broadcast video models replicate the behavior of a cable system, offering a customer multi-channel viewing. Video on demand (VoD) models allow the customer to stream video in real time, but the video and the viewing time are selected by the customer. The "store for play" model (download model) allows the customer to load the video onto a local disk for viewing.
All video networks start with the same customer, so the first step in understanding video network design is to understand where the networks stop -- what the content source will be. General practice today is to cache content in each major metro area, and this is most likely to be done for the broadcast or streaming VoD models. Local caching eliminates the performance variability that is introduced by Internet transport or core peering relationships, and most video programming popular enough to be profitable will probably be consumed enough in a metro area to justify the cost of local storage. This means that most video networks will be metro networks.
From the content source(s), video networks must distribute video outward, but not so much to "customers" as to "customer gateways." All broadband networks, whoever deploys them, have a natural process of access aggregation that collects many customers to a single service point. In telephony, this would be called a "central" or "serving" office. In effect, a video network is a set of star connections between content sources and these customer gateway points. This means that all video networks can be viewed as two-zone structures: metro delivery to the gateway points and then delivery to the customers on the access network. These two topics must be considered separately.
Broadcast channels are distributed via satellite. In some cases, operators may partner with direct broadcast satellite (DBS) providers for broadcast material, in which case this material is not carried on the video network. Where the operator will actually carry broadcast, the network requirement depends on which approach is used for broadcast channel delivery to the customer. That will generally depend on the media used for the customer access network.
Where the customer is attached via very high-capacity media (CATV cable, PON-FTTH), the operator may elect to feed broadcast channels using what is often called "linear RF," meaning simply broadcasting the channel material in standard cable format. In this case, the broadcast material is not really carried by the video network at all. Where copper loop wiring is used (DSL), the capacity is not sufficient to carry even the minimum number of broadcast channels in a typical basic package, and a form of channel mapping must be used. With this approach, the customer's access connection is divided into "slots" into which channels can be mapped on demand. The capacity of the access connection will determine how many slots can be allocated (about 4 Mbps per standard-definition slot and about 8.5 Mbps for high-definition). The customer, when tuning, selects a channel that serving-office equipment then maps to a channel slot.
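The slot arithmetic above can be sketched in a few lines. This is a back-of-envelope calculator, not an operator tool; the per-slot rates are the approximate figures quoted in the text, and the function name is invented for illustration.

```python
# Hypothetical sketch: how many video "slots" a DSL access line can carry,
# using the per-slot rates from the text (~4 Mbps SD, ~8.5 Mbps HD).
SD_MBPS = 4.0
HD_MBPS = 8.5

def slot_capacity(access_mbps, hd_slots=0):
    """Return how many SD slots remain after reserving the given HD slots."""
    remaining = access_mbps - hd_slots * HD_MBPS
    if remaining < 0:
        raise ValueError("access line cannot carry that many HD slots")
    return int(remaining // SD_MBPS)

# A 25 Mbps VDSL line carrying one HD stream leaves room for 4 SD slots.
print(slot_capacity(25, hd_slots=1))  # -> 4
```

The calculation makes clear why copper-loop access forces channel mapping: a handful of slots, not hundreds of channels, is all the line can carry at once.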
Channel mapping creates a significant new set of requirements in the video network because the broadcast channels must be distributed to the gateway points in data format and then packaged onto the appropriate DSL connections. Most current systems use the IP multicast protocol and its associated registration protocol (IGMP) for this purpose. Special handling is needed to ensure that the customer's screen does not pixelate when channels are changed, a common problem if the switch occurs between compressed video "key frames."
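The serving-office behavior described here, mapping a tuned channel onto a free slot by joining its multicast group and leaving the old one, can be modeled abstractly. This is an illustrative simulation only, not a real IGMP implementation; the class name and the multicast group addresses are made up.

```python
# Illustrative model of serving-office channel mapping: a tune request
# joins the new channel's multicast group in the viewer's slot, after
# leaving whatever group occupied that slot. Not a real IGMP stack.
class ChannelMapper:
    def __init__(self, num_slots):
        self.slots = [None] * num_slots  # slot index -> multicast group or None

    def tune(self, slot, channel_group):
        """Map a channel's group onto a slot; return the group it displaced."""
        old = self.slots[slot]
        if old is not None:
            print(f"IGMP leave {old}")          # stop the old stream first
        self.slots[slot] = channel_group
        print(f"IGMP join {channel_group}")     # start forwarding the new one
        return old

mapper = ChannelMapper(num_slots=2)
mapper.tune(0, "239.1.1.5")   # viewer tunes to the channel on this group
mapper.tune(0, "239.1.1.9")   # channel change: leave old group, join new
```

The leave-before-join ordering matters in practice; the key-frame pixelation problem the text mentions arises in the gap between leaving one stream and decoding the next key frame of the other.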
Clearly a broadcast strategy that doesn't involve data-formatted video at all presents the least difficulty to network designers, but since most channels will probably be used by someone in a given gateway point, the video load to each gateway point can in fact be treated as constant by designers. Thus, broadcast video metro distribution is a fairly simple application -- simply size the pipes between the gateway points and content hosts correctly.
Video on demand has completely different metro requirements. A typical community of 8,000 users at a gateway point might have 300 broadcast channels, generating about a 1.2 Gbps load in standard definition if data-packet video is used. If that same community watches an hour of streaming VoD, the data load could be five to eight times that, depending on how likely it is that the streaming periods of users overlap. All of this variable load must be carried between the gateway points and the content servers. This means that video networks that expect to offer VoD should be designed for VoD loads. Video on demand is definitely a metro fiber and DWDM application for most operators.
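The sizing figures above can be worked through directly. The broadcast number follows the text (300 SD channels at 4 Mbps); the VoD peak-concurrency fraction is an assumption chosen for illustration, since real overlap varies by market.

```python
# Back-of-envelope metro sizing for a gateway point, using the figures in
# the text. The 25% peak concurrency for VoD is an assumption, not data.
SD_MBPS = 4.0

def broadcast_gbps(channels, mbps_per_channel=SD_MBPS):
    """Total load if every broadcast channel is carried as packet video."""
    return channels * mbps_per_channel / 1000.0

def vod_gbps(users, peak_concurrency, mbps_per_stream=SD_MBPS):
    """VoD load scales with users, since unicast streams cannot be shared."""
    return users * peak_concurrency * mbps_per_stream / 1000.0

print(broadcast_gbps(300))    # -> 1.2 (Gbps), matching the text
print(vod_gbps(8000, 0.25))   # -> 8.0 (Gbps) if 25% of users stream at once
```

The contrast is the point: broadcast load is fixed by the channel lineup, while VoD load grows with the subscriber base, which is why VoD pushes operators toward metro fiber and DWDM.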
Most users expect broadcast video to be real-time, but VoD can be buffered to ride out network variables like delay and delay jitter. Packet loss is difficult to tolerate in video networks, however, so where VoD buffering is available, it is common to trade delay against packet loss in the design. It is also common to oversupply network capacity to avoid congestion in the first place.
Video networks are distribution networks, not communications networks. Most are designed to prevent user traffic from riding on the video resources, and so peer connections are not only not needed but actively discouraged. This means that the topology of networks designed for video is more likely to be a star than a mesh, and that redundancy is provided by simple mechanisms like multi-homing.
|Implementing video on demand by Tom Nolle|
Video on demand, in at least some of its potential service models, is perhaps the most challenging of all network applications. Streaming real-time video poses major challenges in controlling packet loss and delay/jitter, and video on demand (VoD) creates potentially large swings in bandwidth requirements that can affect not only the performance of video customers but also the performance of metro network applications overall. The process should be undertaken in three steps:
- Consider the service model, meaning the nature of the content, the format of the video, and the appliance(s) to which the video will be delivered.
- Consider the delivery model, meaning the characteristics of the customer access network and the metro connection network.
- Consider the management and support model, meaning the way in which video will be ordered, supported and billed.
Video on demand service models can be characterized by the type of device on which the video will be viewed and, for some, also by the degree to which the video can be buffered on delivery prior to viewing. The options here are:
- Small-screen video, designed to be viewed on a mobile phone, game console, or other portable device. This form of video generally requires lower resolution even for standard-definition material and is not normally used for high-definition, owing to restrictions in viewing. It is therefore likely to consume less bandwidth. In addition, users are more likely to tolerate viewing glitches with this type of material. However, buffering is probably not available here.
- Computer video, delivered to a laptop or desktop computer or to a "media center" computer. This form of video requires higher resolution -- as high, in fact, as TV viewing -- but the computer system can normally buffer the material to accommodate variations in network performance. Users of computer video may even want to store the material locally for multiple viewing.
- TV viewing, where the video material is delivered to a "set-top box" and fed to a television screen for viewing. Users of this type of video have the least tolerance for video problems, and this form of video demands the highest possible quality. In addition, it is often impossible to buffer this material, and so any variations in network performance are likely to create a customer complaint.
The delivery network is divided into two parts, the actual customer broadband connection portion and the portion that delivers video to the customer broadband point of presence or gateway. The broadband connection to the customer sets the limit on bandwidth available for VoD. Not only must the bandwidth be sufficient to handle the video (4-5 Mbps for standard-definition, full-screen video and 8-10 Mbps for high-definition) but it must also be sufficient to ensure that access congestion does not create a problem with delay or packet loss. Most for-pay VoD services and probably all pay-per-view VoD services will need some form of access bandwidth management to ensure that multiple video users or other broadband applications do not have an impact on performance. There is no point managing network performance deeper in the network just to lose all the benefits at the access point.
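One form the access bandwidth management mentioned above could take is simple admission control: refuse a new stream that would push the line past a safety margin. This is a hedged sketch; the per-stream rates are midpoints of the ranges quoted in the text, and the margin is an arbitrary illustration.

```python
# Sketch of access-line admission control: admit a new VoD stream only if
# the line has headroom after a safety margin, so congestion at the access
# point cannot add delay or loss. Rates and margin are assumptions.
RATES_MBPS = {"sd": 4.5, "hd": 9.0}   # midpoints of the quoted ranges

def can_admit(line_mbps, active_streams, new_kind, margin=0.15):
    """Return True if new_kind fits within the line minus a safety margin."""
    used = sum(RATES_MBPS[kind] for kind in active_streams)
    budget = line_mbps * (1.0 - margin)
    return used + RATES_MBPS[new_kind] <= budget

# A 20 Mbps line already carrying one HD stream cannot admit a second HD
# stream, but can still admit an SD stream.
print(can_admit(20, ["hd"], "hd"))  # -> False
print(can_admit(20, ["hd"], "sd"))  # -> True
```

This is the "last mile" version of the point made above: without a check like this at the access point, QoS engineered deeper in the network is wasted.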
Video on demand consumes so much bandwidth and generates so much potential variation in network load that most operators will design VoD networks separate from other applications. In the metro network, for example, it is a best practice to use separate wavelengths or tunnels for VoD and to reserve capacity for these independent of other applications. Since some VoD use is influenced by mass market trends (release of a new film, a special TV episode, a concert, etc.), it may be advisable to support reconfigurability of bandwidth to accommodate sudden surges in use or synchronization in viewing. This is particularly true where VoD feeds support live events.
The management and support model for video on demand is very possibly the most important and most neglected of all the issues in VoD. Network operators have confronted steadily rising customer-care costs, and today a single customer-support incident can cost so much that it will take years of profitable relationship with the customer just to break even. In fact, where VoD is offered by a third party or by a separate organization (particularly if that organization is structurally separated for regulatory reasons), it is critical that the responsibility for customer care be established clearly, not only among the provider organizations involved but also with the customer base.
Customer care in VoD can be classified as follows:
- Network-preemptive, where network monitoring and remediation of problems work to ensure that failures or congestion do not affect service. This is a very important category because preventing customer complaints is normally far cheaper than responding to them.
- Service-preemptive, where explicit knowledge of a given customer's state of VoD service can be obtained from the customer premises equipment, the content server, or from the network. This type of care can be targeted, as is network-level care, at preventing problems by anticipating congestion and so on, or it can be targeted at opening a customer dialog through the viewing channel to interdict any customer call. For example, a network congestion event that is known to cause packet loss can be assumed to affect service and might result in a brief note that "We're sorry for the interruption, but the problem has been corrected." This kind of message is rarely appropriate for network-preemptive care because not all customers are likely to have been affected.
- Reactive, where the goal is to respond to a problem that is being reported by the customer.
Reactive care will have two primary goals: to induce the customer to seek online support rather than to involve a customer service representative, and to reduce the time to resolve the problem by providing customer support personnel with enough information to quickly diagnose the problem and offer remediation to the customer.
Video on demand is the most challenging of all network applications because of its bandwidth requirements and QoS requirements. Unless careful design and implementation practices are followed, the profit on these services can be quickly eroded, along with the credibility of the service provider.
|Point-to-multipoint MPLS by David Jacobs|
Real-time video distribution, with its high bandwidth requirement and low tolerance to jitter, has driven the development of point-to-multipoint Multiprotocol Label Switching (MPLS), but the technology can also benefit other types of data requiring highly scalable and reliable transport. Point-to-multipoint MPLS combines the efficiency of multipoint protocols such as PIM and DVMRP with the reliability and quality of service (QoS) capabilities of MPLS.
Video is typically distributed from a single source to a very large number of destinations. For example, the broadcast of a sporting event may require the same data stream to be sent simultaneously to the head-ends of every cable system in America. The data stream can require bandwidth of up to 300 Mbps and must be delivered without loss of data and without jitter. In the past, ATM or SONET has been used to meet these requirements. IP networks offer advantages of flexibility and relatively low cost compared with these older technologies but could not meet the requirements of video distribution prior to the development of point-to-multipoint MPLS.
MPLS improves the efficiency of traditional IP packet forwarding. With MPLS, each data stream is assigned a specific label switched path (LSP). A label identifies each packet making up the stream. Routers along the path use the label to identify the proper LSP and forward the packet along it. MPLS labels are short and can be used to index into a table of LSPs much more efficiently than using a full-destination IP address with a subnet mask to compute the next hop.
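The efficiency argument can be made concrete with a toy forwarding table. This is a simplified illustration, not router code; the labels, interfaces, and table contents are invented.

```python
# Simplified illustration of why label switching is cheap: the incoming
# label is a direct key into a small LSP table, versus longest-prefix
# matching over a full destination IP address. Table contents are invented.
lsp_table = {
    100: ("eth1", 200),   # in-label -> (outgoing interface, swapped label)
    101: ("eth2", 201),
}

def forward(in_label):
    """One label-switch hop: exact-match lookup, then label swap."""
    out_if, out_label = lsp_table[in_label]   # O(1) exact-match lookup
    return out_if, out_label

print(forward(100))  # -> ('eth1', 200)
```

A conventional IP lookup would instead compare the destination address against many prefixes of varying length; the label collapses that work into a single table index.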
MPLS traffic engineering enables a network manager to specify the QoS characteristics for an LSP. For example, an LSP for video may be created by including only links and routers that meet the requirements for available bandwidth and predictable delay. Routers along the path reserve the bandwidth when the LSP is created. A fall-back path can be created at the same time as the LSP so traffic can be rerouted quickly if a link is cut or a router fails.
MPLS was originally developed to support only LSPs extending from a single network entry point to a single destination. Using MPLS for video distribution would require creation of a separate LSP from the entry point to each destination. The source of the data would have to transmit each packet separately to each destination, greatly increasing the load on the source of the data and on the router at the network entry point.
Multipoint protocols over a traditional IP network eliminate the need for sending to each destination separately, but cannot provide the QoS guarantees of MPLS. The next hop is computed for each packet as it arrives at a router. It is not possible to guarantee that there will be a next-hop router available at that time with the available bandwidth.
The addition of point-to-multipoint MPLS retains the advantages of traffic engineering while reducing the load on the data source and router at the network entry point. Individual LSPs are created one-by-one from the network entry to a destination using the same traffic engineering techniques to guarantee QoS as in a point-to-point LSP. Then, after the LSP is created, it is combined with previously created LSPs to create a point-to-multipoint LSP.
The resulting point-to-multipoint LSP follows a common path up to the point where it is necessary to diverge to different destinations. For example, the initial LSP created runs from network entry point router A to router B to router C and then to router D at the network exit point connected to the first data destination. The second LSP runs from A to B to C to router E at the exit point connected to the second destination. The point-to-multipoint LSP will diverge at router C. Routers A and B will carry each packet only once. Router C will be the only router that needs to transmit it twice. A third LSP might diverge at router B. In this case, B will have to transmit it twice, but no one router is required to do all the retransmissions. Retransmissions are held to the minimum possible by following the common path until it is necessary to diverge.
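The merge just described can be modeled as combining hop-by-hop paths into a tree and counting how many times each router must transmit a packet. The router names follow the example above; the code is a sketch of the topology logic, not of RSVP-TE signaling.

```python
# Model of the point-to-multipoint merge: point-to-point LSP paths are
# combined into a tree, and each router forwards a packet once per
# distinct downstream branch. Router names follow the example in the text.
from collections import defaultdict

def build_tree(paths):
    """Merge hop-by-hop paths into a map of router -> set of next hops."""
    tree = defaultdict(set)
    for path in paths:
        for here, nxt in zip(path, path[1:]):
            tree[here].add(nxt)
    return tree

paths = [["A", "B", "C", "D"],   # LSP to the first destination
         ["A", "B", "C", "E"]]   # LSP to the second; diverges at C
tree = build_tree(paths)
transmissions = {router: len(next_hops) for router, next_hops in tree.items()}
print(transmissions)  # A and B each send once; C sends twice, to D and E
```

Adding a third path that diverges at B would simply add a branch to B's set, matching the text's point that no single router carries all the replication.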
Point-to-multipoint LSPs are not static. Additional destinations can be added at any time by adding another LSP. Similarly, destinations can be removed at any time.
While point-to-multipoint MPLS was developed with video in mind, it can support a variety of applications. MPLS traffic engineering does not specify a fixed set of QoS parameters. A 32-bit set of affinity bits is assigned to each link. The network manager configuring the network defines the meaning of each bit, which may specify a bandwidth quantity, a delay value, or a monetary cost. Each LSP is also configured with a 32-bit affinity bit field and is routed only over links with matching affinity bits. This gives the network manager a completely free-form way to force LSPs to conform to any set of criteria required.
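The affinity-bit matching works like a bitmask constraint, which a few lines can illustrate. The bit meanings here (high bandwidth, low delay) are invented for the sketch; in a real network the operator defines them.

```python
# Sketch of affinity-bit constraint routing: an LSP may be placed only on
# links whose affinity field includes every bit the LSP requires. The bit
# meanings are operator-defined; these two are invented for illustration.
HIGH_BW   = 1 << 0   # example meaning of bit 0
LOW_DELAY = 1 << 1   # example meaning of bit 1

links = {
    ("A", "B"): HIGH_BW | LOW_DELAY,
    ("A", "C"): HIGH_BW,             # high bandwidth, no delay guarantee
}

def eligible_links(required_bits):
    """Links whose affinity bits include everything the LSP requires."""
    return [link for link, bits in links.items()
            if bits & required_bits == required_bits]

print(eligible_links(HIGH_BW | LOW_DELAY))  # only the A-B link qualifies
```

A video LSP requiring both bits would thus be confined to the A-B link, while a best-effort LSP requiring only high bandwidth could use either.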
Combining the efficiency of multicast protocols with the traffic engineering facilities of MPLS promises to enable applications that previously could not be supported by IP networks. Point-to-multipoint MPLS standards are nearing completion by the MPLS Working Group within the IETF.
|Three key video strategies for revenue growth by Tom Nolle|
Most service providers realize that the largest component of network-related average revenue per user (ARPU) is video content, and so most also realize that offering video content in some form is a key to revenue and profit growth. The problem is that there seem to be many different business and technology strategies for video, and knowing what's best will require examining both the demographics of the area to be served and the nature of an operator's current business.
Video strategies can be classified in three dimensions:
1. The mobility dimension: Video can be targeted to mobile devices or fixed devices; and in some cases, it can be targeted to both.
2. The service dimension: Video can be offered as broadcast programming, as video on demand, or as a mixture of the two.
3. The technology dimension: Video can be deployed primarily using an IP infrastructure, an Ethernet/tunnel infrastructure, or a combination of the two.
The mobility dimension will be addressed for most service providers based on a few simple facts about the territory. Mobile video works best with a population that is young, network-literate, and likely to be using public transportation or sitting in public places rather than driving vehicles or walking around. Where mobile phones are rapidly replacing wireline, it is worth consideration. The mobile dimension of video is less likely to be valuable where there is already strong mobile Internet use, since it may be difficult to differentiate service provider content from over-the-top content that is incrementally free.
Fixed-device video can mean either computer video or traditional television. The former has repeatedly been shown to be most effective in the youth market, and there it is dominated by over-the-top offerings like YouTube. The latter is the classical IPTV market, and it is where the other dimensions of the issue come into play.
Television is a mixture of broadcast and video-on-demand (VoD), and exactly what that mixture can be expected to be in a given market is the key factor in planning a television/video strategy. Where broadband data rates are high, it is relatively easy to offer downloaded or streamed video, but again, it may be difficult to differentiate these offerings from over-the-top services. Most network operators have tentatively determined that they will have to offer some broadcast capability to be competitive.
There are two basic broadcast strategies available: what is sometimes called "linear RF," meaning the transmission of digital TV in pure TV form (as a cable system does), and video over broadband data connections. The former approach has been adopted by Verizon with FiOS; the latter, by many network operators in Europe and by AT&T in the U.S. with U-verse. The major determinant of the best strategy is the economic density of the market area. If there is sufficient concentration of households to make a passive optical network (PON) based fiber-to-the-home (FTTH) architecture economical, it is likely that a linear RF approach will offer the lowest overall cost and the highest level of customer satisfaction. If not, then a data-delivered TV strategy is the only option.
The challenges of an IP infrastructure
The largest area of technical debate in IPTV is just what IP infrastructure's role would be. There are two basic approaches. The first is an IP metro infrastructure and the second is a tunnel-based metro infrastructure. In the former, which is currently championed by Alcatel-Lucent, the customer's premises are actually on an IP network, assigned an address by a Dynamic Host Configuration Protocol (DHCP) process. This IP network supports multicasting, and broadcast channels are multicast using the standard IGMP join/leave process. In the second, which most other vendors seem to support, a broadband remote access server (BRAS) -- usually placed well forward, even into the central office -- performs the channel assignment process. In both cases, the customers have some predetermined number of "slots" into which broadcast or VoD programming can be inserted.
The challenge posed by any data-delivered broadcast architecture is traffic management, which means metro aggregation must be carefully planned. Any form of streaming video is highly sensitive to jitter and even more sensitive to packet loss, since loss of frames in MPEG coding can create noticeable pixelization. In rough terms, a hundred broadcast channels will require about 1 Gbps of transport. A key question is the size of a central office (CO) in terms of the number of households served. A typical CO of 20,000 households will likely consume about 30-50 active channels at a time, and the variability of the load will be relatively small. Larger COs are even more likely to have relatively predictable traffic patterns, but small COs may show heavy swings in load as viewing patterns change. Major live events will also influence viewing patterns and thus traffic.
Video compression is normally used for delivering video payloads over any data protocol, and there are trade-offs associated with compression technology. As noted above, data loss in compressed video has a significant impact on quality, and thus networks using compressed video must guard against packet loss, if possible through a combination of traffic engineering and hold-back buffering. It is also important to look closely at the insertion delay associated with the encode/decode process and to ensure that video compression doesn't cause problems with details like audio track synchronization. A careful analysis of compression standards and options is in order, but the industry is zeroing in on MPEG-4 compression for HD programming, and networks should be designed on that assumption. There will still be variability in the way that various MPEG-4 coders and decoders perform, so testing the combinations will be important before any major decisions are based on compression assumptions.
It's also important to plan for transition to a greater percentage of VoD traffic over time. Some research shows that customers are becoming more "on-demand" oriented as a result of their experience with the Internet in general and with Internet video in particular. This shift in viewing patterns can create a major problem in traffic management, since VoD streams are not likely to be suitable for multi-household distribution. A CO of 20,000 homes might require only about 450 Mbps of broadcast bandwidth but would consume a staggering 200 Gbps if only one VoD per household were watched at a time.
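The swing quoted above is worth working through, since the two-orders-of-magnitude gap is the whole planning problem. The per-stream rates below are assumptions chosen to reproduce the article's figures (a mixed channel lineup averaging ~10 Mbps, and ~10 Mbps per HD VoD stream).

```python
# Working through the broadcast-vs-VoD swing for a 20,000-home CO.
# Per-stream rates are assumptions chosen to match the article's figures.
homes = 20_000

active_channels = 45                       # within the 30-50 range quoted
broadcast_mbps = active_channels * 10      # shared: one copy per channel
vod_gbps = homes * 10 / 1000               # unicast: one copy per household

print(broadcast_mbps)  # -> 450 (Mbps), shared across all viewers
print(vod_gbps)        # -> 200.0 (Gbps) if every home streams at once
```

The asymmetry is structural: broadcast bandwidth is bounded by the channel count no matter how many homes watch, while VoD bandwidth scales linearly with concurrent viewers.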
Satellite and broadcast TV
Satellite TV and over-the-air broadcast are the last factors. Where both are strong (which is likely true in mid-latitude locations where population density is high), it will be harder to be profitable without a strong VoD offering to differentiate service provider video from available competitive strategies. Where there are problems with both, or where it is perceived that competitive strategies are higher in cost, there may be an opportunity to field a broadcast-based video offering and be competitive for some time before major shifts toward VoD consumption will change the traffic mix.
Service management issues
Service management is the final issue with video in any form, and it is perhaps the most problematic. Customers will simply redial a dropped call, in most cases, and will have relatively little concern unless the problem happens frequently. The loss of video during a prime-time show is another matter, one likely to prompt frenzied calls to customer support. There, a collision of complaints from a network-generated congestion event will create hold times that further increase customer frustration. It is important to have some form of proactive system for handling network problems that result in video loss, which can range from a "network status" channel or portal display to a canned message in a call center. Whichever system you choose, it should advise customers only of their local status, not of system-wide problems. Research is now showing that system-wide notices can erode customer confidence in the service overall.
Virtually every survey of communications ARPU shows video as the largest component of consumer spending, and in many markets the only component whose relative and absolute cost has been rising rather than falling. Because video is the most demanding network application, an incumbent in the video space can likely enter the voice and broadband services markets with a low barrier, while an incumbent in the voice-data space with no video plans is likely inviting a competitor to gradually take market share in non-video spaces, eventually marginalizing traditional services. Video, in the long run, is a necessity, not an option.
About the Authors:
Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is a member of the IEEE, ACM, Telemanagement Forum, and the IPsphere Forum, and the publisher of Netwatcher, a journal on advanced telecommunications strategy. Tom is actively involved in LAN, MAN and WAN issues for both enterprises and service providers and also provides technical consultation to equipment vendors on standards, markets and emerging technologies.
David B. Jacobs has more than twenty years of networking industry experience. He has managed leading-edge software development projects and consulted to Fortune 500 companies as well as software startups.
This was first published in February 2008