Server virtualization -- and the multi-tenant cloud model it enables -- erases many of the chronic cost and management inefficiencies that stem from maintaining large farms of physical servers, which often reach dismal CPU utilization levels. But cramming scores of virtual machines (VMs) onto fewer servers has its drawbacks, too, particularly in terms of data center energy consumption.
It's natural to assume that with fewer hardware devices to support, a highly virtualized cloud will lead to lower heating, cooling and overall power costs. This is certainly the expectation of cloud providers and their customers, and it is the driver behind projects such as the Federal Data Center Consolidation Initiative, a move by the U.S. government to reduce data center energy costs by 90% and shrink its facility footprint from 2,094 to 1,132 data centers by 2015.
While better server-resource usage should, in theory, equal greater energy efficiency and lower power costs, reality is not so simple in a cloud provider environment. Simply packing multiple VMs onto fewer physical hardware devices isn't enough on its own to slash the power bill. In fact, data center energy consumption can increase if consolidation is mishandled.
Cutting the number of physical servers does not cut energy use and costs proportionally. Here's why: Consolidating the capacity requirements of what were two separate physical servers onto one virtualized platform can cause throughput degradation and other performance problems. When performance degrades, each task takes longer to complete, so the energy consumed per completed task can actually rise. Factor in the energy required to transfer workloads between cloud data centers, and the power-efficiency equation becomes even more uncertain.
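A back-of-the-envelope calculation makes the point concrete. Energy per task is power divided by throughput, so consolidation that raises utilization but degrades throughput can still increase the energy cost of each completed task. The wattage and throughput figures below are illustrative assumptions, not measurements from any real deployment.

```python
# Illustrative sketch only: all wattage and throughput figures are assumptions.
def energy_per_task(power_watts, tasks_per_second):
    """Joules consumed per completed task: power divided by throughput."""
    return power_watts / tasks_per_second

# Two lightly loaded physical servers, each drawing 200 W at 50 tasks/s.
separate = energy_per_task(200, 50)       # 4.0 J per task

# One consolidated server draws 300 W total, but resource contention
# cuts combined throughput from 100 tasks/s to 60 tasks/s.
consolidated = energy_per_task(300, 60)   # 5.0 J per task

# Fewer watts overall, yet each task now costs more energy to finish.
print(separate, consolidated)
```

The consolidated box draws less total power than the two separate servers, but because contention slows every task down, the energy bill per unit of useful work goes up, which is exactly the trap described above.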
Limitations of the U.S. power grid
These challenges are further complicated by the fragility of the aging U.S. electrical grid, which still relies heavily on mechanical circuit breakers and controls from the 1950s. The grid isn't sturdy enough to handle rapid shifts of high-volume workloads among multiple clouds, which raises serious questions about how to proceed. This instability may force cloud providers to spend more on redundant power supplies and alternative power routes, wiping out some of the cloud's projected operational savings.
There are broad concerns about the U.S. electrical grid's ability to support not just the elastic demand requirements of the cloud, but also its escalating consumption requirements. Transmission lines are barely equipped to handle the country's current, near-insatiable demand for power. Data centers accounted for roughly 2% of U.S. electricity consumption from 2005 to 2010, according to a report published last year by Stanford University professor Jonathan Koomey. There are projection scenarios in which massively scalable, highly distributed cloud services wind up consuming more power than traditional computing models.
For a variety of reasons, it will be difficult to expand the capacity of the transmission lines. This makes it even more important for U.S.-based cloud providers to be mindful of power issues -- specifically the limitations of the U.S. grid -- as they architect their cloud environments.
This isn't to paint a doomsday scenario for cloud services. Rather, it is to urge care in design and execution. There are a number of theories about how cloud providers can approach these challenges proactively, such as adopting demand-response models with the utility companies. In this model, the utility company sends a message to the provider signaling when utilization on the electrical grid is high, and the cloud provider then shifts application workloads to facilities in regions where utilization is low.
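The demand-response loop described above can be sketched in a few lines. This is a hypothetical illustration: the region names, the signal values and the `choose_region` function are assumptions for the sketch, not an API from any real utility or cloud platform.

```python
# Hypothetical demand-response sketch; signal values and region names
# are illustrative assumptions, not a real utility protocol.
GRID_HIGH = "HIGH"  # assumed signal the utility sends under grid stress

def choose_region(grid_signals, current_region):
    """Pick the region where workloads should run.

    grid_signals: dict mapping region name -> utility signal ("HIGH"/"LOW").
    Stays in the current region unless its grid is stressed, then moves
    to the first region whose grid is not.
    """
    if grid_signals.get(current_region) != GRID_HIGH:
        return current_region  # no grid pressure; stay put
    calm_regions = [r for r, s in grid_signals.items() if s != GRID_HIGH]
    # Fall back to the current region if every regional grid is stressed.
    return calm_regions[0] if calm_regions else current_region

signals = {"us-east": "HIGH", "us-west": "LOW", "us-central": "LOW"}
print(choose_region(signals, "us-east"))  # shifts the workload to us-west
```

A production version would also have to weigh the energy cost of moving the workload itself, which, as noted earlier, can erode the savings the shift is meant to capture.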
Managing data center energy consumption in the cloud
With so many issues still unresolved, how can providers get a handle on data center energy consumption in the cloud? A good first step is to identify all the challenges that come with delivering services via an on-demand environment. Server consolidation brings inherent resource-contention issues that can diminish energy efficiency, so providers must design their data center consolidation strategies with care. The process begins with understanding which applications are optimal for the cloud.
Not all workloads are created equal, particularly in terms of resource requirements. Depending on the service, the energy needed to sustain acceptable performance can vary greatly. For example, a multi-tiered application running in a cloud environment is typically supported by multiple VMs that may be running on multiple physical servers -- an authentication server to validate access credentials, a front-end request classification server, a back-end database server and so on.
The example above and similar scenarios require dynamic correlation and allocation of resources, including power. Providers need a well-architected approach to server consolidation that takes power dynamics into account when determining whether a particular kind of workload belongs in the cloud. This isn't to say multi-tiered and complex applications can't be hosted in a cloud environment, but to ensure application performance, cloud providers must settle on an architecture and placement logic that preserve performance while optimizing energy efficiency.
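A placement check of that kind might look like the sketch below. Every number here is an illustrative assumption: the per-tier CPU demands, the host capacity and the contention penalty are invented for the example, not drawn from any real workload.

```python
# Hypothetical consolidation check; all capacities and the contention
# penalty are illustrative assumptions.
def can_consolidate(tiers_cpu, host_capacity, contention_penalty=0.15):
    """Return True if all tiers fit on one host with headroom to spare.

    tiers_cpu: CPU demand of each tier, as fractions of one host.
    contention_penalty: assumed extra demand from VMs competing for
    shared caches, I/O channels and memory bandwidth.
    """
    effective_demand = sum(tiers_cpu) * (1 + contention_penalty)
    return effective_demand <= host_capacity

# Auth, front-end classifier and database tiers of a multi-tier app.
tiers = [0.15, 0.30, 0.40]
print(can_consolidate(tiers, host_capacity=1.0))   # 0.85 * 1.15 = 0.9775, fits
print(can_consolidate(tiers + [0.10], 1.0))        # 0.95 * 1.15 = 1.0925, does not
```

The point of padding the raw demand with a contention penalty is the lesson of the earlier arithmetic: a host that looks full on paper but is actually oversubscribed will degrade throughput and spend more energy per task, not less.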
All of these issues underscore the need for both flexibility in design and automation to support dynamic distribution of capacity based on varying resource situations. Cloud providers that are able to pull this together have a rare opportunity in the on-demand space. Reaching that place, however, will not be a simple task.
About the author: Amy Larsen DeCarlo is a principal analyst at Current Analysis, where her research focuses on assessing managed and cloud-based data center and security services.