Editor's note: In this OpenFlow tutorial for cloud providers, networking expert Tom Nolle explains how OpenFlow
can be a competitive advantage for providers and outlines several steps for testing the protocol in their networks.
Software defined networking (SDN) and its most visible protocol, OpenFlow, are generating significant buzz. Nearly every major router and Ethernet switch vendor has announced "support" for OpenFlow, and it has inspired several conferences and startup companies. And as major cloud operators such as Google and Verizon test and deploy OpenFlow, other network operators and cloud providers are anxious to know whether there's any substance under all the excitement and, particularly, just how that substance might benefit the bottom line.
Let this OpenFlow tutorial put those uncertainties to rest: OpenFlow can be a competitive advantage in the cloud, and cloud providers need to know about it.
Why use OpenFlow?
At the technology level, OpenFlow and SDN are straightforward concepts. The idea behind both is to create a simple, centralized control plane for network behavior. This replaces the distributed, adaptive model used in both Ethernet and IP.
Instead of having each device in the network adapt its forwarding tables according to its knowledge of network topology and connectivity -- knowledge it receives from other network devices -- the SDN model dictates that all devices would receive specific forwarding rules from a central controller. This eases network traffic management, shortens failover periods and potentially even improves security -- in theory. The central controller model is what SDN is about; OpenFlow is an implementation of SDN principles using a switch-to-controller protocol.
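The central-control model described above can be illustrated with a short, self-contained Python sketch. This is a toy, not a real OpenFlow controller: the class names and rule fields (`ToyController`, `match_dst`, `forward_to`) are hypothetical, and real controllers speak the OpenFlow wire protocol to switches. What the sketch shows is the architectural shift: the controller holds the whole topology, computes paths centrally, and hands each switch a simple match-to-action rule instead of letting the switches converge on routes themselves.

```python
from collections import deque

class ToyController:
    """Toy central controller: computes forwarding rules for every
    switch from a global view of the topology (illustrative only)."""

    def __init__(self, links):
        # links: iterable of (switch_a, switch_b) adjacencies
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, set()).add(b)
            self.adj.setdefault(b, set()).add(a)

    def shortest_path(self, src, dst):
        # Breadth-first search; real controllers use weighted or
        # policy-constrained path computation instead.
        prev, seen, queue = {}, {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                path = [dst]
                while path[-1] != src:
                    path.append(prev[path[-1]])
                return path[::-1]
            for nxt in self.adj.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    prev[nxt] = node
                    queue.append(nxt)
        return None

    def rules_for_flow(self, src, dst):
        # One match->action rule per switch along the path, analogous
        # to the flow entries an OpenFlow controller installs.
        path = self.shortest_path(src, dst)
        if path is None:
            return {}
        return {sw: {"match_dst": dst, "forward_to": nxt}
                for sw, nxt in zip(path, path[1:])}

# A small diamond topology: s1 connects to s4 via s2 or s3.
ctrl = ToyController([("s1", "s2"), ("s1", "s3"), ("s2", "s4"), ("s3", "s4")])
print(ctrl.rules_for_flow("s1", "s4"))
```

The point of the sketch is that no switch ever computes a route: each one only applies the rules it was given, which is exactly the division of labor the SDN model proposes.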
The centralization of control is both OpenFlow's strength and weakness. In terms of benefits, central control means faster recovery from outages through pre-engineered failover routes, better security, more stable quality of service (QoS) and a quicker means of adapting the network to application needs. On the other hand, centralized control is clearly not scalable in networks that serve billions of users, like the Internet.
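The "pre-engineered failover routes" point is worth a brief sketch. Because the controller computes paths in advance, it can install a backup next hop alongside the primary one, so recovery is a local table swap rather than a network-wide re-convergence. The sketch below is a minimal, hypothetical illustration -- the names `pick_routes` and `ToySwitch` are invented for this example and do not come from any OpenFlow API.

```python
def pick_routes(next_hops):
    """Choose a primary and a pre-computed backup next hop
    (toy version of the controller's path engineering)."""
    primary = next_hops[0]
    backup = next_hops[1] if len(next_hops) > 1 else None
    return {"primary": primary, "backup": backup}

class ToySwitch:
    """Toy switch that fails over locally, without asking
    neighbors to re-converge on a new route."""

    def __init__(self, routes):
        self.routes = routes        # dest -> {"primary": ..., "backup": ...}
        self.failed_links = set()

    def forward(self, dest):
        entry = self.routes[dest]
        if entry["primary"] not in self.failed_links:
            return entry["primary"]
        return entry["backup"]      # instant local failover

sw = ToySwitch({"10.0.0.0/24": pick_routes(["s2", "s3"])})
print(sw.forward("10.0.0.0/24"))    # prints s2
sw.failed_links.add("s2")           # link to s2 goes down
print(sw.forward("10.0.0.0/24"))    # prints s3
```

Contrast this with adaptive routing, where the same failure triggers an exchange of topology updates and a convergence delay before traffic flows again.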
Is OpenFlow alone an SDN strategy?
Early experiments with OpenFlow showed that two variables are critical to maximizing the protocol's benefits and minimizing its limitations: the capabilities of the central controller software, and the choice of the best "boundary" point -- the place where OpenFlow meets traditional network devices.
Contrary to popular belief, OpenFlow isn't a complete SDN strategy on its own. An OpenFlow controller allows switches to be centrally controlled, but the protocol by itself doesn't offer any template or approach to guide the process. The basic software is intended to be vertically integrated -- via application programming interfaces (APIs) -- with other tools that assign routes and decide connection policies. Operators could adapt or develop these tools on their own, and some OpenFlow vendors offer software that addresses specific use cases, including cloud services.
OpenFlow-based applications can create virtual networks that represent cloud resources or customer clouds, extending the virtualization model beyond the data center. Traditional networking vendors, such as Cisco Systems, have also announced virtual network strategies that work with existing protocols. There are also emerging tools for cloud provisioning and integration, but again, these tools are independent of whether OpenFlow or another protocol controls the forwarding paths in the network.
OpenFlow tutorial: How to test OpenFlow
As part of this OpenFlow tutorial for cloud providers, we will also look at how operators can extract connectivity information from existing networks and use this information to drive forwarding changes. This is the model Google has deployed, and it also illustrates the aforementioned boundary-point issue with OpenFlow.
Whether OpenFlow is deployed in the core of a larger IP network or used inside a cloud to connect virtual networks or resource pools, it likely interacts with traditional protocols at some point to extend connectivity toward the user. Without this interaction between protocols, scalability issues would likely limit OpenFlow to small networks. By capturing topology information from the partner protocol -- Internet Protocol (IP) in Google's case -- OpenFlow can create a kind of compartment inside a traditional network that can flexibly connect virtual servers, increase utilization, improve traffic management and stabilize QoS.
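The boundary-point idea can also be sketched. Suppose the routes learned from the surrounding IP domain arrive as prefix-to-next-hop pairs; a translator inside the OpenFlow compartment can turn them into flow entries, using rule priorities to reproduce IP's longest-prefix-match behavior. The function and field names below are illustrative assumptions, not part of the OpenFlow specification.

```python
import ipaddress

def routes_to_flows(ip_routes, egress_ports):
    """Toy boundary translator: IP routes learned from the traditional
    network (prefix -> border next hop) become flow entries for the
    OpenFlow compartment. Names are illustrative only."""
    flows = []
    for prefix, next_hop in ip_routes.items():
        net = ipaddress.ip_network(prefix)
        flows.append({
            "match": {"ipv4_dst": str(net)},
            # Longer prefixes must win, as in IP longest-prefix match;
            # OpenFlow expresses this ordering with rule priorities.
            "priority": net.prefixlen,
            "action": {"output": egress_ports[next_hop]},
        })
    # Highest priority first, mirroring flow-table lookup order.
    return sorted(flows, key=lambda f: -f["priority"])

flows = routes_to_flows(
    {"10.1.0.0/16": "r1", "10.1.2.0/24": "r2"},
    {"r1": 1, "r2": 2},
)
print([f["match"]["ipv4_dst"] for f in flows])
# prints ['10.1.2.0/24', '10.1.0.0/16']
```

The more specific /24 sorts ahead of the /16, so traffic for that subnet exits port 2 while the rest of the prefix exits port 1 -- the compartment stays consistent with the IP domain around it.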
There seems little doubt that OpenFlow will become a major factor in the cloud; Google's decision to deploy it in its backbone is enough to assure that. Cloud providers need to test the OpenFlow waters, and that means following some basic steps:
- Find a mission in the cloud network for OpenFlow. Looking at work already done in universities and by operators such as Google will be helpful in picking a test or deployment that matches local needs.
- Assemble the components needed for the test. This will include switches and routers that accept OpenFlow commands, the basic switch-control software, and other software components needed to manage topology and application-connection requirements. Don't forget to include any software needed to handle control packets that will be generated at the boundary between OpenFlow and the rest of the network.
- Link the components. Connect the devices and software via APIs, or commission the work from an integrator. Be sure to find one with specific OpenFlow experience, though.
- Design the rules for linking the applications to the network. Do this at the OpenFlow-application and topology layers via the OpenFlow controller software. This is where "normal" traffic engineering and failure-mode layout is done.
- Check your results. Run a small-scale prototype of several nodes, then expand it once the application, topology and switch-control logic has been validated.
- Ensure that OpenFlow is the best choice. Review other options for network virtualization and application-based forwarding, including various virtual private network (VPN), virtual LAN (VLAN) and tunnel protocols, to ensure that these strategies can't be used to augment or even replace OpenFlow. Where an OpenFlow alternative exists, monitor the progress of the standard to ensure that enhancements don't tip the balance again.
No network technology should be deployed based on hype, and none should be rejected simply because it represents an emerging option rather than a traditional one. SDN is almost surely the way of the future for the cloud, and OpenFlow is a forwarding-control standard tightly linked to progress in SDN. Like anything else, its strengths and limitations will determine where it belongs in an operator network.
About the author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecom and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecom strategy issues.