Open Compute Project Gaining Momentum

The mood in San Jose vacillated between quiet and congratulatory as participants in the fifth Open Compute Summit shared both their plans and their successes in front of a crowd of 3,800 registered attendees, up 90 percent from the 2,000 who registered last year.

There’s a philosophy shift underway in hardware, and you could see it at the conference as large technology companies grapple with a new world defined by collaboration and shared designs instead of pure competition. Congratulations came from massive compute users such as Facebook, which claimed savings of $1.2 billion over three years from using open source hardware.

Love it or hate it, one thing was clear: companies can’t afford to ignore what’s happening as compute becomes simultaneously commoditized and community-driven. The traditional supplier model, in which a vendor collects components, packages them in a closed system, and ships them to customers while controlling the entire product pipeline, is coming to an end.

“Wouldn’t it be cool as a customer if you could say: I know a lot about my infrastructure, and I know what I want,” says Frank Frankovsky, vice president of hardware design at Facebook and chairman of the Open Compute Project (OCP). “Imagine if you could go to the technology maker directly and get what you want.” And at the high end, this is the dream of big infrastructure buyers who stand to save millions, or more, as they build ever-larger datacenters.

And the results that come out of this open communication between suppliers and customers are impressive. Facebook Vice President of Infrastructure Jay Parikh told the audience that his group was able to improve energy efficiency by 38 percent and cut costs by 24 percent. It was a concept that Mark Zuckerberg hammered home in a fireside chat with O’Reilly Media founder Tim O’Reilly. The Facebook chairman and CEO framed it in terms of environmental impact, saying that open source server designs, along with the company’s open datacenter designs, have helped the social media giant save the equivalent of 40,000 homes’ annual energy usage and avoid the equivalent of 50,000 cars’ worth of emissions.

This type of performance improvement is attracting other compute megaliths to the project. Microsoft made waves this week by announcing that it would join the Open Compute Project and contribute the design of the server pod architecture behind high-performance services such as Windows Azure, Bing and Office 365. Employing the design in some portion of its 1 million servers gave the Redmond giant a 40 percent cost savings, 15 percent power efficiency gains and a 50 percent improvement in deployment and service times, Microsoft Corporate Vice President of Cloud and Enterprise Bill Laing told the audience. Scaled across 1 million servers, these home-brewed systems would also save Microsoft 1,100 miles of cabling and 10,000 tons of copper, Laing says.

While excitement around open source compute equipment dominated the conversation, a growing project built on technology and collaboration is not without its challenges.

One of the biggest issues is scale. What can you do with these sorts of specifications? Unless you’re a big buyer with a big budget, it might not seem like much: the design process can be costly, and you might have a tough time finding a supplier willing to do a one-off build. But there are actually quite a few ways to leverage the work that’s been done if you’re willing to approach designing and building your infrastructure differently. The trick is to make use of the community, share the heavy lifting, and give your engineers the opportunity to adapt the architecture to your needs.

Some have also pointed to the OCP license as a limitation to wide-scale adoption. But during the summit the project announced additional licensing options geared toward alleviating the challenges of sharing hardware designs and the legal and cultural shifts necessary to foster true collaboration. As OCP Chairman Frankovsky explains: “The current license we use is a permissive, Apache-like license. There’s no requirement to give derivative work back to the project. But we heard from our suppliers that they might be able to give more if they could get a better return. So we’ve added another license that’s more restrictive—no, restrictive is a bad word—but makes it more GPL [General Public License]-like.”

Concerns aside, the majority of presenters and attendees at the Open Compute Summit agreed that the issues that seemed big a year ago are nothing more than road bumps. Attendees and the community see the benefits to be gained from closer collaboration between technology customers and hardware suppliers, and seem willing to invest their time and effort to make it happen. “We’re going to a world where hardware moves at the pace of software,” says Rackspace Director and Principal Engineer Andrew Sullivan. And the excitement at the Open Compute Summit suggests that world is getting closer each day.

About the Author

This is a post written and contributed by Alexander Haislip.

Alexander Haislip is a Rackspace Executive Communications Specialist and the author of Essentials of Venture Capital. Follow him on Twitter @ahaislip.

