Nearly every new revenue opportunity for service providers will involve a cloud, either to host service components or to serve as the foundation of the service itself. While network operators have run data centers for decades, those facilities have been dedicated to their own internal operations and business software rather than to customer services, making them more like traditional enterprise data centers.
But service providers are now deploying cloud infrastructure to serve dual purposes, hosting not only services and features but also, in many cases, operational support system and business support system (OSS/BSS) processes. Operators are asking whether their data center networks need something new at their core -- a cloud data center switching fabric rather than a hierarchy of switches. Enterprises face the same choice, but with cloud infrastructure, the scale of the data center and its traffic makes the question far more important -- and more complicated.
From simpler silos to the modern cloud data center: More layers, more problems
Early data centers had a siloed structure. Servers and storage dedicated to applications were linked to a router or controller that provided wide area network (WAN) access to users in all company locations. But that fixed application-to-server relationship began to break down when virtualization was introduced in order to more efficiently manage applications and resources. Compounding that was the adoption of service-oriented architecture (SOA), which enables software to be divided into reusable components that can be distributed across a range of servers and reassembled into various applications.
Further, "horizontal" (or intra-application) traffic within the data center -- for both SOA interprocess communications (IPC) and the movement of virtual machines -- creates a new dimension in network traffic. Finally, wherever application-to-server relationships are flexible, storage networking becomes mandatory, because a server-dedicated storage element would otherwise effectively lock an application into the server where its data resides.
All this adds up to a more complex data center network requiring greater connectivity. In the cases of IPC traffic and storage traffic, the performance demands of the network connection may also be much greater, creating a challenge for data center network designers.
A large data center normally contains many different storage, server and WAN connection elements, all of which must be able to connect with one another. Because the number of ports per switch is finite, this has led to the deployment of a hierarchy of switches, with the higher "layers" acting as aggregators to build a structure that provides any-to-any connectivity.
The problem is this: More layers means greater delay for traffic passing from port to port, and since the positions of the two connected switches in the hierarchy will vary, so too will the performance of the connections between them. Plus, increasing the number of layers, ports, connections and devices also increases management complexity and cost. For a growing number of operators, a cloud data center switching fabric is the solution.
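The scaling pressure behind this hierarchy can be made concrete with a rough calculation. The sketch below is a simplified model, not a description of any vendor's design: it assumes a plain aggregation tree in which every switch dedicates half its ports to downlinks, and it counts switch hops for the worst-placed pair of endpoints.

```python
def tree_layers(endpoints: int, ports_per_switch: int) -> int:
    """Layers of switching needed to connect `endpoints` devices in a
    simple aggregation tree where each switch uses half its ports as
    downlinks and half as uplinks toward the next layer up."""
    downlinks = ports_per_switch // 2
    layers = 1
    reach = downlinks  # endpoints reachable under one top-level switch
    while reach < endpoints:
        layers += 1
        reach *= downlinks
    return layers

def worst_case_hops(layers: int) -> int:
    """Switch hops between two endpoints whose paths meet only at the
    top of the hierarchy: up through every layer, then back down."""
    return 2 * layers - 1
```

Under these assumptions, 48-port switches already push roughly a thousand servers to three layers of switching, so traffic between a badly placed pair of servers crosses five switches while two servers on the same edge switch cross one -- exactly the performance variability described above.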
Making the business case for a cloud data center switching fabric
A switching fabric is a switch architecture in which all switch ports connect with the same effective performance and in a non-interfering or "non-blocking" way. In small switches, this can be achieved in a single device -- a true fabric architecture. In applications requiring a larger number of ports, however, the notion of a switching fabric must be virtual due to physical limitations in the common backplane capacity. Switching fabrics will offer improved connection performance and consistency in most enterprise and service provider data center applications, but not all use cases will necessarily justify making such a dramatic architectural shift. Several factors influence the business case.
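One common way to quantify how closely a multi-device fabric approximates non-blocking behavior is its oversubscription ratio: the server-facing bandwidth entering an edge switch versus the bandwidth of its uplinks into the fabric. The sketch below is a simplified illustration of that arithmetic (the port counts and speeds are hypothetical examples, and real designs must also weigh load balancing and failure scenarios):

```python
def oversubscription(server_ports: int, server_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of an edge switch's total server-facing bandwidth to its
    total fabric-facing uplink bandwidth. A ratio of 1.0 or less means
    the switch's own uplinks cannot become the bottleneck."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# Example: 48 x 10 GbE server ports fed by 4 x 40 GbE uplinks
# gives 480 Gbps in versus 160 Gbps up -- a 3:1 oversubscription.
ratio = oversubscription(48, 10, 4, 40)
```

A ratio of 1.0 is the condition a virtual switching fabric must approach for the "non-blocking" label to hold across devices; ratios well above 1.0 mean contention is possible whenever enough ports talk across the fabric at once.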
The first factor is the depth of the switch hierarchy needed to connect the data center devices using traditional Ethernet. The more "layers" of switching that are required to achieve full connectivity, the greater the performance variability, management cost and complexity. Operators are typically concerned when any hierarchy of switches goes more than two layers deep; they get very concerned when hierarchies reach five or more layers.
The second factor is the nature of the traffic. Client/server traffic is less sensitive to performance issues and variability than IPC or storage traffic is. Applications with little of the latter two may not gain much from a switching fabric, even when the alternative is three or more layers of Ethernet switches.
Factor number three is the rate of dynamic reconfiguration of application, storage and client relationships within the data center. The performance variability inherent in multilayer switching is most apparent when an application regularly changes servers, uses different storage or is WAN-connected in different ways. If these changes cause noticeable performance variations, they will get help desk phones ringing and raise user support costs. In service provider applications where the data center hosts services or content, that variability can cost customers as well.
Where a cloud data center's service mix is unusually dynamic, highly flexible reconfiguration with no performance penalty across the full range of connectivity is even more desirable. Virtualization increases dynamism, as does using SOA for application "componentization," but offering cloud computing and cloud services increases the reconfiguration rate within a data center most of all.
Where the data center is hosting a cloud, all of these factors are likely to favor cloud data center switching fabric deployment. Clouds demand large resource pools, so connecting them requires either a switching fabric or a large number of switch layers. Based on early operator reports, cloud data centers generate much more storage and IPC traffic than traditional data centers, suggesting that traffic in the cloud is delay-sensitive. Finally, cloud infrastructure is certain to require considerable dynamic balancing of resource assignments and thus changes in resource connectivity.
The many variables in play make establishing general rules for cloud data center networking difficult. But if there is one, it may well be that when it comes to the cloud, data center switching fabric deployment is a natural fit. Any cloud provider serious about preserving flexibility to accommodate the cloud's dynamic future will need to strongly consider implementing a cloud data center switching fabric.
About the author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecommunications strategy issues. Check out his blog, Uncommon Wisdom, for the latest in communications business and technology development.