Going green for the environment's sake might not be a priority for some cloud providers, but energy efficiency is a sound operational strategy that can benefit both providers and their customers.
"When I consult with any client considering a data center infrastructure investment, whether it is private or in the cloud, I preach about energy efficiency," said Bob Mobach, practice director of the Data Centers division at Logicalis, a global IT services company with U.S. headquarters in Farmington Hills, Mich.
"What I see is that people on the coasts [who have] higher electric rates than people in many places in the Midwest are more focused on this issue," Mobach said.
The Open Compute Project, the high-profile consortium that first bubbled up as a Facebook initiative, now includes dozens of member companies interested in cloud computing energy efficiency, including HP, Dell, VMware, Salesforce.com and Rackspace. Open Compute was formed with the explicit notion that cloud providers need a new approach to building and running their infrastructure to improve energy efficiency.
Some of Facebook's best practices are core to the group's published design practices, such as using a 480-volt electrical distribution to cut down on power loss and reusing hot-aisle air throughout the building to cut down on the amount of power needed for cooling and heating purposes. These practices are used in Facebook's Prineville, Ore., data center, which uses about 38% less energy to run the same applications as the company's other facilities.
"Open Compute's ideas will start opening people's eyes to what is possible," Mobach said.
Not everyone is ready to dive headfirst into Open Compute, but there are several steps cloud providers and managed service providers (MSPs) can still take to reduce energy consumption in their public or private cloud infrastructure. Here are four ideas offered by service providers and MSPs that have nailed cloud computing energy efficiency.
Consider cooling requirements in hardware selection. As cloud providers select their hardware -- including servers, storage and networking gear -- they must be more closely concerned with understanding the power needed to keep those devices cool, Mobach said.
It is not enough for cloud providers to look at benchmarks for the electricity needed to run the IT load; they must also study, much more closely, the power needed to keep that equipment cool. Another point to consider is whether the equipment allows for data center layout and design choices -- such as using hot and cold aisles -- that can reduce the cooling load.
"The biggest savings to be had are really found in the cooling of the data center," said Mobach. "You can do a lot of neat things."
Cooling costs can account for 30% to 40% of the power load of many data centers, said Phil Nail, chief technology officer of AISO.net, a hosting provider in Romoland, Calif., that uses about 200 solar panels to power its operation.
To manage its infrastructure and customers, AISO.net is using thin client hardware, which uses an average of 5 W of power. The company also has invested in specialized cooling units that use a combination of air and water to keep things cool. Each unit uses about 200 W of power to run, which is roughly equivalent to the power needed to light two light bulbs, Nail said.
The metric to watch, then, is performance per watt -- and on that measure, hardware custom-built for the cloud service provider's specific needs may beat off-the-shelf commercial gear.
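A performance-per-watt comparison only makes sense if it counts the cooling power each watt of IT load drags along with it. The sketch below illustrates the idea with invented numbers -- the request rates, wattages, and cooling overheads are hypothetical, not vendor benchmarks:

```python
# Compare two server options by performance per total watt drawn,
# where "total" includes the facility power needed to cool the IT load.
# All figures below are illustrative assumptions, not measured data.

def perf_per_watt(requests_per_sec, it_watts, cooling_overhead):
    """Effective performance per watt. cooling_overhead is the cooling
    watts added per IT watt (e.g. 0.40 if cooling adds 40% on top)."""
    total_watts = it_watts * (1 + cooling_overhead)
    return requests_per_sec / total_watts

# Hypothetical: a commodity box vs. a lower-power custom build that
# also permits a more efficient hot/cold-aisle layout.
commodity = perf_per_watt(requests_per_sec=10_000, it_watts=450, cooling_overhead=0.40)
custom = perf_per_watt(requests_per_sec=9_000, it_watts=300, cooling_overhead=0.25)

print(f"commodity: {commodity:.1f} req/s per watt")
print(f"custom:    {custom:.1f} req/s per watt")
```

Under these assumed figures the custom build wins despite serving fewer raw requests, because both its IT draw and its cooling burden are lower.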
Get more disciplined about storage architecture. The same storage-tiering strategy that helps companies create comprehensive data management processes can be equally effective as a way of reducing power consumption for cloud providers, said Jason Pollner, co-CEO of IT Authorities, an MSP in Tampa, Fla.
By placing archived data on slower, larger drives that use less power -- reserving faster devices for mission-critical information that needs to be accessed more quickly -- service providers not only can manage storage budgets more effectively, but they can also save some of the associated energy costs, he said.
"Be sure to separate your storage into tiers so that you can place low-I/O mass storage on slower, large disks instead of using higher-speed disks," Pollner said. "This will result in a much smaller footprint in the data center and significantly reduce power consumption and heat generation."
Virtualize, virtualize, virtualize. The same strategy being used by thousands of companies to make their on-premises data centers more efficient is, not surprisingly, a key operational strategy for pretty much every cloud service provider.
Sure, much of the virtualization discussion really comes down to the cost implications: The whole point of the public cloud is to let people share infrastructure.
More on cloud computing energy efficiency
Advancements in energy efficiency ignored, experts say
Can you be greener with cloud computing?
CloudCast Weekly: The advent of green clouds; SDN meets SLAs
But virtualization also can enable cloud providers to manage where certain instances of applications are run. This allows providers to keep certain servers at a higher capacity of utilization and avoid turning on others until they're needed to handle the load, thereby maximizing resource usage and in turn improving cloud computing energy efficiency.
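The consolidation idea above -- keep some hosts highly utilized so others can stay powered off -- can be sketched with a simple greedy first-fit placement. This is a toy illustration of the principle, not any provider's actual scheduler; the VM loads and host capacity are invented:

```python
# Toy first-fit consolidation: pack VM loads (as % of a host) onto as
# few hosts as possible, so unneeded hosts never have to be powered on.
# Loads and capacity are illustrative assumptions.

def consolidate(vm_loads, host_capacity):
    """Greedy first-fit: place each VM on the first host with room,
    opening a new host only when no existing host fits it."""
    hosts = []  # each entry is the summed load already on that host
    for load in vm_loads:
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] += load
                break
        else:
            hosts.append(load)  # no host fits; power on another
    return hosts

vms = [50, 30, 40, 20, 60, 10]  # six VMs, load as % of one host
print(consolidate(vms, host_capacity=100))  # six VMs fit on three hosts
```

Real placement engines weigh far more than a single load number (memory, affinity, failover), but the energy logic is the same: "the fuller a server is, the greener it will be."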
"If we were non-virtualized, we would need 15,000 square feet to run our current capacity," said AISO.net's Nail.
Of course, that takes a high degree of management and automation, which is something that AISO.net gets from one of its vendors, OnApp, whose platform enables cloud providers to federate with each other and more closely manage resource utilization.
"We really focus on optimizing utilization of existing capacity," said Ditlev Bredahl, CEO of London-based OnApp.
That strategy doesn't necessarily have to be confined to a cloud provider's own infrastructure, he added. OnApp's goal is to help cloud providers fill up shared servers on a geographic basis. For example, several cloud providers in London might work with OnApp to buy and sell each other's excess cloud capacity. That means moving virtual machines across the shared infrastructure via OnApp's platform.
"The fuller a server is, the greener it will be," Bredahl said.
Correlate infrastructure investments more closely to actual application requirements. Cloud providers need to study the applications that their customers plan to run, as well as manage where those applications run, said Steve Diamond, chairman of the Cloud Computing Initiative at the IEEE, the technology trade association and independent standards body.
That means, for example, organizing big data analytics applications to run on high-powered infrastructure that won't shortchange performance for the sake of energy efficiency, he added.
That takes a more heightened degree of automation and management attention than was needed in the past for single-purpose servers.
"You have to … measure the amount of computing you are using versus the power you use to generate that," Diamond said.
About the author: Heather Clancy is an award-winning business journalist and a regular contributor for several TechTarget publications.