Amazon's dominant position in the cloud computing market, backed by Wal-Mart-like low prices, has pressured other cloud providers to cut costs without compromising service delivery in order to remain competitive. Facebook's Open Compute Project, an open playbook for cutting data center energy consumption via custom-built hardware, could help cloud providers finally compete on price with Amazon.
"When you look at a provider like Amazon and what they charge for compute power, it's really only pennies above the cost of power [itself]," said Ted Ritter, principal research analyst at Nemertes Research. "The margins that service providers need to operate in to be competitive in the cloud are extremely tight, so anything you can do to reduce your power costs clearly has a direct impact on the bottom line."
Though not quite a standard, the Open Compute Project represents a collection of technical specifications and mechanical drawings Facebook developed to build an energy-efficient data center in Prineville, Ore., in April 2011. The Oregon facility uses 38.1% less energy per unit of computing power than Facebook's other data centers, according to the project's manifesto. In the spirit of open source software projects, Facebook released those specs to the public and invited other engineers to contribute to the project.
"An efficient system from the ground up and through the data center has unquestionable value for us," said Joel Wineland, development manager for the Open Compute Project at Rackspace, which has dedicated a team of engineers to testing the performance, reliability and stability of the project's specifications. "And rather than having a lot of debates around a particular vendor's platform ... we can offer [customers] assurance that by being Open Compute, they get an open standards-based [approach]."
The customized, stripped-down—or "vanity-free," as Facebook's engineers put it—hardware and higher-voltage electrical distribution systems are the hallmarks of the Facebook Open Compute Project. The social networking giant claims its new data center achieved a Power Usage Effectiveness (PUE) of 1.08 for the second quarter and 1.07 for the first quarter of this year, meaning 93% of the power in the data center makes it to the servers.
Most data centers achieve roughly half that efficiency, with only around 50% of power reaching IT equipment, which corresponds to a PUE near 2.0, according to Ritter. The rest radiates as heat, which then must be removed by cooling systems.
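The relationship between PUE and the share of power that actually reaches servers is simple to work out. A minimal sketch, using the figures reported above (Facebook's 1.07–1.08 and a typical data center's PUE of about 2.0; the labels are illustrative):

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The reciprocal gives the fraction of facility power that reaches IT gear.

def it_power_fraction(pue: float) -> float:
    """Fraction of total facility power delivered to IT equipment."""
    return 1.0 / pue

facilities = [
    ("Facebook Prineville, Q2", 1.08),
    ("Facebook Prineville, Q1", 1.07),
    ("Typical data center", 2.0),
]

for label, pue in facilities:
    print(f"{label}: PUE {pue:.2f} -> {it_power_fraction(pue):.0%} to servers")
```

At a PUE of 1.08, about 93% of power reaches the servers, matching the figure Facebook cites; at a PUE of 2.0, only half does, with the remainder lost to cooling, power distribution and other overhead.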
Researching the Facebook Open Compute guidelines is "a requirement for anyone building a cloud," according to Marshall Bartoszek, principal analyst at ACG Research.
"[Cloud providers] have to explore this. They have to. Then they can dismiss it, but they've got to learn about it," Bartoszek said. "You have to understand what's going on because your cloud's got to be based on similar principles or [else] you're never going to make it because your costs are going to be too high."
Facebook Open Compute Project not quite cloud provider-ready
Although cloud providers are paying attention to the Facebook Open Compute Project, they are not walking in lockstep with it. The application environments and users supported by cloud providers have dramatically different requirements from Facebook's consumer social networking service.
Verizon Communications Inc. has adopted some of the Facebook Open Compute electrical and hardware architecture guidelines in its labs, according to Jeff Deacon, managing director for cloud services at Verizon, which acquired cloud provider Terremark earlier this year.
But given the rigorous demands around performance, availability and reliability for Verizon's enterprise customers, Deacon doesn't anticipate putting those practices into his production environment any time soon. Verizon has focused more of its attention on the Open Data Center Alliance, of which Terremark is a founding member.
"Things like USB ports, monitor ports, video cards and sound cards [are unnecessary] in cloud computing, but they come standard on a lot of servers," Deacon said. "Pulling those components out gives you quite a bit of cost savings, but additionally, things like using a single power supply, a single network interface, using local disk instead of storage area networks—you get a lot of cost savings by doing that, but you have multiple single points of failure, [which] impacts the availability of the individual [components] that make up the software application stack."
Whereas Facebook essentially supports one application—and has created an environment that can fail over individual layers of that application—an enterprise customer may need support for hundreds or thousands of less resilient commercial applications, Deacon said. Software developers will eventually overcome these challenges, but Deacon doesn't believe the Facebook Open Compute architecture is resilient enough to support enterprise cloud customers today.
"A lot of [customers'] applications are running typical shrink-wrapped software, and much of those applications depend upon redundancy at the hardware layer—multiple Ethernet fibers into the network, Fibre Channel into an enterprise SAN, very high I/O requirements in terms of disk read-writes," he said. "Most of those types of applications can't handle a full hardware failure."
Rackspace shares some of those concerns, but its Open Compute team is working on modifying certain aspects of the project to meet its service-level agreement (SLA) demands, Wineland said.
"We're actively working on resolving [those challenges], but it's definitely a concern," he said. "We want to get that platform from being initially [purpose-]built for Facebook ... and broaden its applicability."
Rackspace plans to support the Facebook Open Compute architecture in its shared cloud and dedicated hosting environments eventually, but not without some modifications to make it more palatable to service providers, Wineland said.
However, providers walk a fine line with regard to customizing the project specs to meet their needs, he cautioned. Diverging too much from the original specs may compromise the project's standards-like approach and alienate other developers and engineers working on the project, Wineland said.
"Open source hardware is really hard. With open source software, you can jump into the [revision control system], check out the code and work on it at your leisure ... whereas hardware is relegated to the world of OEMs and ODMs," he said. "I believe with Open Compute we can broaden that somewhat ... [but] if we each have to go our own discrete, divergent direction, then some of the value [of the project] gets compromised."
Facebook Open Compute on commercial hardware?
Other challenges remain around scalability, especially given the fact that hardware customization underpins the project, said Verizon's Deacon.
"You have to buy in pretty significant volumes to have one of the major hardware vendors build to spec," he said. "The likes of Facebook have so many end users that they can order in significant capacities to make that cost-effective, and they're also placing them in one location. In general, our cloud computing nodes are all over the world, and often we have to source our [equipment] locally."
However, Deacon expects equipment vendors will integrate some of the Facebook Open Compute and other public projects' specs into their products.
"There are other companies like Google that publish specs on their gear, and some of the interesting things they're doing around using batteries actually on the server chassis itself, rather than using UPS systems in the data center, to reduce costs significantly," Deacon said. "I think over time major hardware vendors are going to embrace [those ideas], so I think it'll be commercially available."
Let us know what you think about the story; email: Jessica Scarpati, Site Editor.