Mean opinion score (MOS) is a paltry metric, but the direction it points is important. It's the tip of a valuable iceberg. Application quality of experience (QoE), and the productivity benefits derived from effective applications, are the goal. In short, we are talking about the raison d'être for any network.
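To make the MOS discussion concrete: in practice, an estimated MOS for VoIP is often derived from the ITU-T G.107 E-model's transmission rating factor R. A minimal sketch of that standard R-to-MOS mapping, assuming R has already been computed from delay and equipment impairments:

```python
def r_to_mos(r: float) -> float:
    """Map an E-model transmission rating factor R (ITU-T G.107)
    to an estimated mean opinion score (MOS)."""
    if r < 0:
        return 1.0       # unusable quality floors out at MOS 1
    if r > 100:
        return 4.5       # MOS saturates at 4.5 for very high R
    # Standard G.107 cubic mapping from R to MOS
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

For example, the E-model's default rating of R = 93.2 (no impairments) maps to a MOS of roughly 4.41, which is why even a "perfect" VoIP call never scores a 5.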
Networks have a job to do -- supporting the applications that use them. A network unable to meet demand fails not only the applications it supports but also the markets and businesses relying on them. Thus, understanding what networks are intended to achieve, and how to measure it, is important.
The Internet2 QoS Working Group, chaired by Internet2 chief engineer Guy Almes, has done seminal work defining the terms of application quality of service. Their document Internet2 Network QoS Needs of Advanced Internet Applications: A Survey provides an effective starting point for the essential work of defining a coherent and integrated view on the relationship between network performance and application quality of service.
The Internet2 document, primarily authored by Ben Teitelbaum and Stanislav Shalunov, offers a partial background on methodologies as well as metrics for a range of application categories. While their categorization revolves more around the high-performance applications found in research environments, there is a solid set of references to network dependencies that are readily accessible. Within it, they identify ranges of behaviors such as elastic vs. real-time (tolerant or intolerant) applications, telepresence vs. teledata, background vs. foreground tasks, interactive vs. non-interactive, and machine-to-user vs. machine-to-machine vs. user-to-user.
Translating somewhat, a simplified set of categories can be identified:
- Real-time -- e.g. VoIP, IPTV, video conferencing, streaming applications
- Transactional -- e.g. remote sessions like Citrix, collaborative environments
- Data transfer -- e.g. data backup and replication, disaster recovery
- Best-effort -- e.g. e-mail, Web, etc.
Each category experiences the network uniquely. Consequently, the prospect of employing an effective QoS-assurance mechanism becomes challenging -- there are numerous overlapping, even conflicting, requirements.
In this light, the practical implications of triple play start to emerge, as do the undeniable obstacles to successfully deploying a fully converged, high-performance network. Translating once again, the natures of the three key application types can be identified as:
- VoIP -- real-time application composed of relatively undemanding two-way streams that are somewhat robust against jitter and loss, but very sensitive to extreme jitter, loss bursts, and high latencies; quality is subjectively defined, and dependent on psycho-acoustic factors
- Video -- real-time application composed of relatively bandwidth intensive, one-way streams; robust against mild degradation; quality is subjectively defined, somewhat dependent on psycho-visual factors
- Data -- data transfer application composed of one-way, highly bandwidth-intensive flows, variably robust against loss/jitter, highly sensitive to bandwidth bottlenecks and performance degradation; quality is objectively defined.
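The jitter sensitivity cited in the VoIP bullet above is typically quantified with the RTP interarrival jitter estimator from RFC 3550 -- a smoothed running estimate of how much packet transit times vary. A minimal sketch (field names and units are illustrative):

```python
def update_jitter(jitter: float, transit_delta: float) -> float:
    """RFC 3550 interarrival jitter: exponentially smoothed mean
    deviation of the difference in packet transit times, with a
    1/16 gain. Units follow whatever the deltas are measured in
    (RTP timestamp units in the RFC; milliseconds here)."""
    return jitter + (abs(transit_delta) - jitter) / 16.0
```

Fed a stream of per-packet transit-time differences, the estimate rises slowly under bursty delay variation and decays slowly when the path calms down -- which is exactly why a short loss or jitter burst can degrade a call long after the burst ends.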
Designing networks to support all three applications is a concern. The experience with VoIP, in particular, has led to a variety of strategies that include over-provisioning, dedicated network paths separating data from voice, and various forms of QoS intended to manage access to resources appropriately. Adding video is an awkward twist -- it has many of the sensitivities of voice but with the relatively high-load requirements of data. Continuing on the path of over-provisioning or creating yet another set of dedicated paths is not acceptable.
QoS isn't the answer
QoS seemed to be the solution with the greatest promise. However, the results of the Internet2 work throw that assumption into question. Asked about QoS today, Guy Almes remarks:
"The general consensus is that it's easier to fix a performance problem by host tuning and healthy provisioning rather than reserving. But it's understood that this may change over time. [...] For example, of the many performance problems being by users, very few are problems that would have been solved by QoS if we'd have had it."
For now, Internet2's efforts at improving application performance have shifted away from the QoS Working Group toward their End2End Initiative -- its associated projects address end-to-end performance with a focus on resolving specific sources of degradation through network measurement, end-user problem solving, resilient TCP stacks and cleaner networks.
So what is to become of the great triple play? Like the side-benefits derived from deploying VoIP, addressing the challenge of triple play will generate increasingly effective means of designing, implementing and maintaining networks. QoS may yet find its place and become an integral part of the solution.
However, noting Internet2's direction, it would appear that end-to-end performance and the drive toward optimally performing networks offer the key. That said, one of the essential starting points lies in the effort to define the application quality of experience appropriate to each type. Using VoIP again as the exemplar, it has the rudiments of a performance metric in MOS, with refinements being hammered out. But this approach needs to be extended to a range of other application types, including video and data.
Answers are coming
Current work in this area is being spearheaded by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Study Group 12 and the IETF IP Performance Metrics (IPPM) working group. For example, ITU-T Recommendation Y.1541 defines six quality of service (QoS) classes based on various IP network applications, such as VoIP, multimedia conferencing and interactive data transfer. End-to-end IP performance objectives for each set of network behaviors are defined for each QoS class.
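In spirit, Y.1541 turns "application quality" into a checklist: each class bounds mean one-way delay (IPTD), delay variation (IPDV) and loss ratio (IPLR), and a measured path either meets the class or it doesn't. A minimal sketch of that check -- the class-0-style numbers below are illustrative assumptions, not quoted from the recommendation:

```python
# Illustrative per-class objectives in the style of Y.1541 class 0
# (real-time, jitter-sensitive traffic). Assumed values for the sketch;
# consult the recommendation itself for the normative figures.
CLASS_0_OBJECTIVES = {
    "iptd_ms": 100.0,   # mean one-way delay (IPTD)
    "ipdv_ms": 50.0,    # delay variation (IPDV)
    "iplr": 1e-3,       # packet loss ratio (IPLR)
}

def meets_class(measured: dict, objectives: dict) -> bool:
    """True if every measured metric is within its objective bound."""
    return all(measured[k] <= bound for k, bound in objectives.items())
```

A path measuring 42 ms delay, 8 ms delay variation and a 2e-4 loss ratio would pass this check; push the delay to 150 ms and it fails, regardless of how good the other metrics are.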
There is still work to be done, but application performance metrics are on their way.
About the author:
Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. He is also Adjunct Professor of Mathematics at Simon Fraser University. As Chief Scientist for Apparent Networks Inc., Jorgenson leads network research in high performance and application performance, typically through collaboration with academic organizations and other thought leaders.