Q&A: Rackspace Open Compute Engineer Aaron Sullivan

Filed in Cloud Industry Insights by Alexander Haislip | August 27, 2013 4:00 pm

With OpenStack, we already have the open source software to run your data center. Now, we’re building the open source hardware to run it on. The Open Compute Project[1] is a collaborative community of designers, consumers and innovators focused on building more efficient servers, storage and data center hardware designs for scalable computing at a lower cost.

Rackspace took a leadership position in the Open Compute Project in 2011 and placed Mark Roenigk in one of the five board seats, alongside executives from Facebook, Intel and Goldman Sachs. Aaron Sullivan, a Principal Engineer in Rackspace’s Supply Chain Operations group, is responsible for helping drive these open source hardware designs into our data centers. We caught up with Sullivan for an update on Open Compute.

Rackspace is known for its open source cloud software. The company has changed the cloud landscape with OpenStack. Is Open Compute a natural next step?
OpenStack was born from a community of users basically saying, “Cloud software is not meeting our needs—we can make it better, we can do it together, and we can be creative.” That community started small, but really took off. Open Compute takes that same spirit into hardware. The server industry has been making iterative improvements for the last 10 to 15 years, with a relatively small group of architects and engineers. That approach delivered great value for years, and then the returns started diminishing; the industry wasn’t rising to the demands of consumers. Like OpenStack, Open Compute started small, but has really taken off.

Sounds great, but are we actually doing this now?
Yes. We just put Open Compute-based servers into our data center a few months ago. We’re open enough that end users who bring intellectual and conceptual knowledge to the table can get a better system than they have today. Rackspace started active collaboration with some partners in Asia last year. We recently started field trials with what we developed. Some companies want to squeeze greater speed from their compute infrastructure. Others want better power efficiency. Still others need application-specific customizations such as super-fast memory or SSD storage.

Creating custom machines for customers, or groups of customers with similar needs, seems to be a big part of this program.
I view it as an extension of the service we provide to customers. Open Compute represents the opportunity for Rackspace to respond to the pull we’re starting to feel for custom, high-performance computing.

Can you give an example?
Hadoop clusters. It’s possible to build a high-performance server for Hadoop relatively inexpensively. Power consumption is a major issue for Hadoop at scale, and you don’t need remarkably fast CPUs. You need a ton of storage capacity and bandwidth and a lot of memory bandwidth. There’s also a bunch of stuff in many off-the-shelf servers that you don’t need. It’s all about tradeoffs, and these tradeoffs start to add up when you’re looking at racks upon racks of machines.
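As a rough illustration of how those per-server tradeoffs compound across racks, here is a back-of-the-envelope sketch in Python. Every figure in it (server density, rack count, wattages, electricity rate) is a hypothetical placeholder, not a number from the interview.

# Back-of-the-envelope sketch of how per-server power tradeoffs compound at rack scale.
# All figures below (wattages, counts, electricity price) are hypothetical illustrations.

SERVERS_PER_RACK = 20          # hypothetical density
RACK_COUNT = 50                # "racks upon racks of machines"
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10           # USD, hypothetical utility rate

def annual_power_cost(watts_per_server: float) -> float:
    """Yearly electricity cost for the whole deployment at a given per-server draw."""
    kwh = watts_per_server * SERVERS_PER_RACK * RACK_COUNT * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

# Hypothetical comparison: a general-purpose server carrying fast CPUs and unused
# components vs. a Hadoop-tuned design that trades CPU speed for storage and memory
# bandwidth and strips out the parts it doesn't need.
off_the_shelf_watts = 400
hadoop_tuned_watts = 300

savings = annual_power_cost(off_the_shelf_watts) - annual_power_cost(hadoop_tuned_watts)
print(f"Hypothetical annual power savings across the deployment: ${savings:,.0f}")

With these made-up numbers, a 100-watt difference per server works out to roughly $87,600 a year in electricity alone, before counting the capital cost of components the workload never uses.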

Sounds like a real opportunity; why aren’t the traditional hardware manufacturers attacking this?
They are. There are original design manufacturers (ODMs) that are excited to provide components off the shelf, and we have been working with many of them. You’re going to start seeing many of the big-name technology leaders get more deeply involved with Open Compute as they realize that they can capitalize on it.

Capitalize on it?
Sure. Manufacturers remain successful so long as they make hardware that people want to buy. Traditionally, it’s been very difficult to know if there was a real market out there for what you were building. You could get custom software, but forget about custom hardware. But if a community of consumers shows up and says, “we want this” … that adds a layer of certainty.

When I was working at AT&T on the project that ultimately became U-verse, we could never get quite what we wanted out of hardware. We’d sit down with our suppliers and say that it would make a huge difference if they could just do this or that. If we just got this feature on this chip, we could get huge boosts to efficiency or cool new features. For manufacturers, the question was always “who is going to buy this beyond AT&T?” and “is this the right thing to invest our limited engineering resources on?”

What if a consumer community had brought those requests forward? What if what they brought forward was already partially—or completely—designed, and the community attached an open license to it? I think those are changes in market dynamics that are good for manufacturers.

And new opportunities for technology users?
It’s transformative. We all want to make use of the tools and technology, but we’ve restricted ourselves by limiting the number of people who can access this technology and do things with it. If you can get to the point where the best ideas have a voice and a channel, and you can really create that and let it settle and mature, you can really change the world. There are all kinds of things you could start to envision. That’s why I want everyone I do business with to be open.

If you could tell the world one important thing about Open Compute today, what would it be?
It’s not only about energy efficiency, packaging and cost cutting. Those things make it disruptive and exciting today, but that’s just the beginning. I envision a time when computer engineering schools are using Open Compute designs as a kind of textbook, and for more than just mechanical design and power distribution. I’m talking chip designs, fabrics, protocols, firmware. Arm the graduates of tomorrow with the knowledge of a fifth-year professional, today.

Endnotes:
  1. Open Compute Project: http://www.opencompute.org/

Source URL: http://www.rackspace.com/blog/qa-rackspace-open-compute-engineer-aaron-sullivan/