Lower network Opex needed to maximize new service revenue

Operators need to reduce network Opex to maximize new service revenue, and different approaches are emerging to address the challenge.

The terms network cost and capital expense used to be synonymous to operators planning network infrastructure. When network revolutions like software-defined networking and network functions virtualization came along, both initially targeted lower network capital costs. Now, network operators are realizing that Opex, not Capex, will fuel the network revolution, and that means network-building is a whole new game.

Two factors are shaping the Opex-driven future of next-generation networking. The first is the steady and often precipitous decline in revenue per bit. If network infrastructure is to remain profitable, it is critical to wring more cost out of the network. Since equipment costs are falling more slowly than revenue per bit, Opex is the logical place to look for future savings. The second factor is the growing complexity of the service layer of the network, because cloud computing, content delivery, software-defined networking (SDN) and network functions virtualization (NFV) all add logical components to services.

It is important to understand that network operations costs tend to rise as the square of complexity, so without new operations strategies, the risk is that revenue gained from higher-layer services or new network technologies will be offset by increased operations costs.

Two primary approaches are emerging that providers can use to pursue an Opex-driven networking vision.

  • The first is the popular notion of flattening the network: eliminating layers of devices, and by doing so, reducing the number of devices and the relationships that have to be managed.
  • The second is applying the concept of abstraction to divide networks and services into functional components that can be represented in a model. The model can then be used to automate operations processes, both to deploy services and to manage them once they are deployed (see the sketch after this list).
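To make the model-driven approach concrete, here is a minimal sketch, using purely illustrative names rather than any standard's or vendor's data model, of a service described as a tree of functional components that one generic automation routine can walk to plan a deployment.

```python
# A hypothetical sketch of model-driven operations: the service is a tree of
# functional components, and generic code walks the model to produce a
# deployment plan. Names and fields are illustrative, not a real data model.

from dataclasses import dataclass, field
from typing import List


@dataclass
class FunctionalComponent:
    name: str                 # e.g., "edge-access" or "firewall-vnf"
    resource_profile: str     # abstract capacity class, mapped to real gear later
    children: List["FunctionalComponent"] = field(default_factory=list)


def deploy(component: FunctionalComponent, depth: int = 0) -> None:
    """Walk the model and emit a deployment step for every component."""
    print("  " * depth + f"deploy {component.name} ({component.resource_profile})")
    for child in component.children:
        deploy(child, depth + 1)


if __name__ == "__main__":
    # A business VPN described as functional components rather than boxes.
    vpn = FunctionalComponent("business-vpn", "metro-standard", children=[
        FunctionalComponent("edge-access", "small"),
        FunctionalComponent("core-transport", "large"),
        FunctionalComponent("firewall-vnf", "medium"),
    ])
    deploy(vpn)
```

The payoff is that adding a new service means adding a new model rather than a new set of hand-built procedures; the same traversal handles deployment and, with different rules, ongoing management.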

Layered networking goes back nearly 40 years to the seven-layer OSI model. In modern networks, layered networking builds up from optical fiber through Layer 2 structures to Layer 3, which is the IP layer at which most services are delivered. Building a layered infrastructure was a necessity until recently because the higher, electrical layers of the model aggregated traffic to fill faster optical pipes and to steer traffic among users.

But with higher-speed broadband services, lower-cost optical transport and agile optical interconnects, it's possible to envision service-layer technology riding directly on optical networks, reducing the number of devices and, most important, reducing network management complexity.

Industry groups test solutions for Opex-driven networking

Since SDN came along, some proposals have favored using a single SDN controller to manage both optical and electrical connectivity, which, from a management perspective, creates a single-layer network controlled by a single software process. This evolution faces technical barriers, the most significant being that the OpenFlow protocol used to control packet forwarding at the electrical layer is not well suited to controlling optical paths, which are opaque to the packet-header matching OpenFlow relies on. But work is proceeding on modifications to SDN to permit electro-optical integration and network flattening, and a strategy to reduce Opex may take another step forward as this work develops. Even now, flattening networks by eliminating a routed core or multilayer metro infrastructure is the most prevalent approach to building an Opex-optimized network.
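As a toy illustration of the management payoff, the sketch below, which uses made-up names and is not OpenFlow or any real controller's API, keeps optical and electrical links in a single topology map, so one path computation, and by extension one set of operations processes, spans both layers.

```python
# A toy, single-controller view of a flattened network: optical and packet
# links live in one topology map, so one path computation covers both layers.
# Names and structures are illustrative only.

from collections import defaultdict

links = [
    ("A", "B", "optical"),
    ("B", "C", "optical"),
    ("A", "C", "packet"),
    ("C", "D", "packet"),
]

graph = defaultdict(list)
for src, dst, layer in links:
    graph[src].append((dst, layer))
    graph[dst].append((src, layer))


def find_path(start, goal, visited=None):
    """Depth-first search across both layers in a single pass."""
    visited = visited if visited is not None else set()
    if start == goal:
        return [start]
    visited.add(start)
    for nxt, _layer in graph[start]:
        if nxt not in visited:
            rest = find_path(nxt, goal, visited)
            if rest:
                return [start] + rest
    return None


print(find_path("A", "D"))  # one process, one topology, both layers
```

In a two-controller, two-layer world, the same request would require two computations and a coordination step between them, and it is that coordination that drives up operations cost.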

The second approach to the issue comes out of the growing complexity of network services themselves. As operators look to add software and servers to their repertoire of network elements, they find that software and servers are already related to each other through virtualization. Operators are now looking to apply virtualization's principle of abstraction to management at the functional level.

In this new management model, network elements would be divided into functional systems that can be represented as a single virtual element, like an IP core or a metro network. This single virtual element might then be further divided into smaller functional units -- IP Multimedia Subsystem (IMS) core, Evolved Packet Core, content delivery network (CDN) and so on -- until the lowest level of functional division is reached.

If someone wants to create a CDN, for example, a functional model of a CDN would be stamped out on infrastructure according to carrier policies. This is the basic principle that the European Telecommunications Standards Institute (ETSI) Network Functions Virtualization Industry Specification Group (NFV ISG) is applying to its work.
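As a hedged sketch of what "stamping out" a functional model could look like, the example below pairs a CDN template with a trivial placement policy. The template entries, site properties and matching rule are illustrative assumptions, not the ETSI NFV ISG's actual descriptors or algorithms.

```python
# A hypothetical CDN template "stamped out" onto infrastructure by a simple
# placement policy: each function's stated need is matched to a site that
# offers it. Purely illustrative; not an NFV ISG descriptor or algorithm.

CDN_TEMPLATE = [
    {"function": "request-router", "needs": "low-latency"},
    {"function": "origin-store",   "needs": "bulk-storage"},
    {"function": "edge-cache",     "needs": "low-latency"},
]

SITES = {
    "metro-pop-1": {"offers": "low-latency"},
    "regional-dc": {"offers": "bulk-storage"},
}


def place(template, sites):
    """Apply the placement policy: match each function's need to a site."""
    plan = {}
    for item in template:
        for site_name, props in sites.items():
            if props["offers"] == item["needs"]:
                plan[item["function"]] = site_name
                break
    return plan


print(place(CDN_TEMPLATE, SITES))
# {'request-router': 'metro-pop-1', 'origin-store': 'regional-dc',
#  'edge-cache': 'metro-pop-1'}
```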

The approach is promising, but it faces two challenges. The first is making functional abstraction work at the level of management variables. Nearly all network devices have a management information base (MIB), a collection of management variables that, when read or set, convey device status or change device behavior. To manage a functional system, you would need either to invent a completely new management model, changing how every management system works, or to make the functional system look like a virtual device. But questions remain: How would the MIB for such a virtual device be defined? How would the variables in the virtual MIB relate to the real management variables of the devices inside the functional system?
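One possible answer, sketched below with hypothetical variable names and aggregation rules (real MIBs are defined in SNMP SMI modules, not Python dictionaries), is to define each virtual-MIB variable as a function of the real variables read from the devices inside the functional system.

```python
# A sketch of a "virtual device" MIB: each virtual variable is derived from
# the real management variables of the devices in the functional system.
# Variable names and aggregation rules are hypothetical.

# Real per-device variables, as a management system might read them.
device_mibs = {
    "edge-router-1": {"ifInOctets": 10_000, "operStatus": "up"},
    "edge-router-2": {"ifInOctets": 25_000, "operStatus": "up"},
    "core-switch-1": {"ifInOctets": 40_000, "operStatus": "down"},
}

# Each virtual-MIB variable is a rule over the member devices.
virtual_mib_rules = {
    "totalInOctets": lambda devs: sum(d["ifInOctets"] for d in devs.values()),
    "systemStatus": lambda devs: "up" if all(d["operStatus"] == "up"
                                             for d in devs.values()) else "degraded",
}


def read_virtual_mib(devices, rules):
    """Present the functional system to management tools as one virtual device."""
    return {var: rule(devices) for var, rule in rules.items()}


print(read_virtual_mib(device_mibs, virtual_mib_rules))
# {'totalInOctets': 75000, 'systemStatus': 'degraded'}
```

Writes would run the other way, with a setting on the virtual device decomposed into settings on the member devices, and defining those decompositions is where much of the difficulty would lie.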

The second challenge is one of jurisdiction. People working on cloud computing, SDN, NFV, mobile services, content delivery and other service-layer issues all have their own visions of functionality and management. A dozen different bodies may be involved in one area of network infrastructure, yet cost-effective management depends on having only one management model. So, who should provide it?

In theory, industry groups concerned with service-layer issues, including the Open Networking Foundation, the ETSI NFV ISG and the OpenDaylight Project, could all develop local management models that could be generalized to the network level. The TM Forum is also working to evolve its own management model to align it with event-driven, virtual-network requirements. It is unlikely that all of these sources will arrive at the same strategy, but all of them are likely to develop a model-based, virtualization-friendly approach that can be adapted to work with the others, because software application programming interfaces are easier to transform and interconnect than hardware interfaces are.

As software gets deeper into networking and network management, the new functional-model notion of management, combined with layer flattening, could radically reduce network operations costs. At the same time, it could accelerate service creation and deployment. A new model may also encourage open source tools and force network equipment vendors to view management as a profit center, not as an afterthought.

This was first published in March 2014
