Laying Cinder Block (Volumes) In OpenStack, Part 2: Solutions Design

EDITOR’S NOTE: Ken Hui will be joining John Griffith, OpenStack Program Technical Lead for the Cinder Project and Solutions Architect at SolidFire, for a webinar Tuesday, April 29 at 11:00 a.m. CDT to talk about OpenStack Block Storage Design Considerations, including an interactive panel discussion at the end. Please join John and me by registering at: https://t.co/CRSEOkM5sD.

In part 1 of this series, I provided a basic overview of the OpenStack block storage service project, called Cinder, and highlighted how it can be implemented with commodity hardware as well as with third-party storage solutions. In this post, I will review some reference architectures and design principles for building OpenStack Cinder solutions using both commodity off-the-shelf (COTS) hardware and third-party storage solutions. The content is based on experience gathered from various OpenStack-powered Rackspace Private Cloud (RPC) deployments.

To review, there are several use cases for Cinder:

  • If for some reason you need to terminate an instance and launch a new one, you can keep any “non-disposable” data on Cinder volumes and attach them to the new instance.
  • If an instance misbehaves or “crashes” unexpectedly, you can launch a new instance and attach Cinder volumes to that new instance with data intact.
  • If a compute node crashes, you can launch new instances on surviving compute nodes and attach Cinder volumes to those new instances with data intact (see the sketch after this list).
  • Using a third-party storage solution to host Cinder volumes so that more capacity can be provided than is available via the direct-attached storage in the compute nodes (note that shared NFS storage can also provide this additional capacity, but without data persistence).
  • Using a third-party storage solution to host Cinder volumes so that specific vendor features such as thin provisioning, tiering, QoS, etc. can be leveraged.
  • A Cinder volume can be used as the boot disk for a cloud instance; in that scenario, an ephemeral disk is not required.
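
To make the instance-failure use cases above concrete, here is a minimal sketch of creating a persistent volume and re-attaching it to a replacement instance. It assumes the python-cinderclient and python-novaclient libraries of that era; the credentials, endpoint, instance name and device path are placeholders for illustration, not values from an actual RPC deployment.

```
# A minimal sketch, assuming python-cinderclient and python-novaclient with
# placeholder credentials; adjust for your own Keystone environment.
from cinderclient import client as cinder_client
from novaclient import client as nova_client

AUTH = ('admin', 'secret', 'myproject', 'http://keystone.example.com:5000/v2.0')

cinder = cinder_client.Client('2', *AUTH)
nova = nova_client.Client('2', *AUTH)

# Keep non-disposable data on a persistent 50 GB Cinder volume.
volume = cinder.volumes.create(size=50, name='app-data')

# After launching a replacement instance, re-attach the surviving volume to it.
server = nova.servers.find(name='replacement-instance')
nova.volumes.create_server_volume(server.id, volume.id, '/dev/vdb')
```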

Although all of the above use cases can be seen across various RPC deployments, customers typically design their Cinder environment to support the “compute node crash” and “third-party storage solution” use cases. While COTS hardware or third-party storage solutions can support both use cases, there are design principles that dictate which solutions are a best fit for a particular deployment (more on that later).

Before walking through different design options, let’s talk about how to choose the underlying storage that backs Cinder, a discussion that is often overlooked by cloud administrators and users. While a scale-out architecture can mitigate some storage performance issues by dispersing I/O requests across multiple instances, there are many cases where that is not possible, either because of the workload type or because a particular RPC deployment cannot feasibly have enough compute nodes to satisfy the requests. In those cases, some attention must be paid to drive selection and configuration. For example, as an RPC architect, I work with customers to understand not only their CPU and RAM requirements, but also their storage workload, bandwidth and capacity requirements so I can recommend the best-fit solution.

Since I will be using the Rackspace Private Cloud as the canonical example for architecting Cinder in OpenStack, let me level-set by stating what is supported in RPC. Currently, we use Dell servers with internal and external drive enclosures as COTS Cinder volume nodes, aka virtual storage arrays; these enclosures can host 7,200 RPM, 10K and 15K SAS drives as well as SSDs, all of which can be configured in RAID 10 or RAID 5. RPC also supports using Cinder with third-party storage solutions from vendors such as EMC, NetApp and SolidFire, with each solution supporting its particular mix of drive types and RAID configurations. I talk more about application sizing and showcase a storage IOPS calculator on my personal blog.
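
To give a flavor of the drive-selection arithmetic this involves, here is a rough sketch of a back-of-the-envelope IOPS estimate. The per-drive IOPS figures and RAID write penalties below are generic rules of thumb I am assuming for illustration; they are not RPC-specific numbers.

```
# A rough sketch of a front-end IOPS estimate for a RAID set; the per-drive IOPS
# values and write penalties are common rules of thumb, not RPC-specific figures.

# Approximate random IOPS per spindle.
DRIVE_IOPS = {'7200_sas': 80, '10k_sas': 125, '15k_sas': 180, 'ssd': 5000}

# Typical write penalty per RAID level (each front-end write costs extra back-end I/Os).
RAID_WRITE_PENALTY = {'raid10': 2, 'raid5': 4}

def usable_iops(drive_type, drive_count, raid_level, read_pct):
    """Estimate the front-end IOPS a RAID set can sustain for a given read/write mix."""
    raw = DRIVE_IOPS[drive_type] * drive_count
    write_pct = 1.0 - read_pct
    penalty = RAID_WRITE_PENALTY[raid_level]
    # Reads pass through 1:1; each front-end write consumes `penalty` back-end I/Os.
    return raw / (read_pct + write_pct * penalty)

# Example: eight 15K SAS drives in RAID 10 with a 70/30 read/write workload.
print(round(usable_iops('15k_sas', 8, 'raid10', 0.70)))
```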

Cinder with COTS Servers

The majority of Cinder implementations in RPC, and likely in other OpenStack-based products, use COTS servers as the Cinder volume node/virtual storage array (VSA). You have the option of installing all Cinder services on a single node or dispersing them across multiple nodes. By default, RPC places the cinder-api and cinder-scheduler services on the controller nodes and the cinder-volume service on the nodes that will function as the virtual storage arrays, serving out Cinder volumes. Rackspace recommends the following configuration for a Cinder volume node/virtual storage array (see the sizing sketch after this list):

  • 1 compute core for every 3 TB under management
  • Minimum of 6 physical drives
  • Minimum of 2 GB RAM, plus an additional 250 MB of RAM for every 1 TB under management
  • Some type of RAID protection
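
The sketch below simply turns those guidelines into numbers for a given amount of storage under management; the rounding choices are mine and not part of the official recommendation.

```
# A minimal sketch of the RPC sizing guidelines above; rounding choices are my own.
import math

def volume_node_sizing(tb_under_management):
    cores = math.ceil(tb_under_management / 3)        # 1 compute core per 3 TB
    ram_gb = 2 + (tb_under_management * 250) / 1024   # 2 GB base + 250 MB per TB
    return cores, round(ram_gb, 1)

# Example: a volume node serving 12 TB of Cinder volumes.
print(volume_node_sizing(12))   # -> (4, 4.9)
```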

The diagram above is simplified to focus on just the Cinder components; the networking has been simplified, and the controller nodes do not show all the services that would normally be running on them in an RPC deployment. Some design considerations to think about when deploying this solution:

  • As noted above, RPC requires a dedicated iSCSI network for running Cinder. While it is technically possible to carry storage traffic on a shared network, Rackspace requires a dedicated network for production OpenStack deployments.
  • While deploying a COTS Cinder solution is likely the cheaper option when capacity requirements are low, it becomes less cost effective in large-capacity use cases. For example, a typical Cinder volume node in an RPC deployment would be a Dell R720 server with 8 drives that can be used for Cinder. Using 600 GB 15K SAS drives would yield ~2.3 TB with RAID 10 and ~5.4 TB with RAID 5, and less, obviously, with SSD. With 3 TB SAS drives, it would yield ~11.5 TB with RAID 10 and ~20 TB with RAID 5, at roughly 50 percent of the performance of the 15K drives. Recently, I had a customer that required 12 TB of Cinder storage on 600 GB SAS drives in a RAID 10 configuration; using just COTS servers meant I had to design a solution with 5 Cinder volume nodes (see the capacity sketch after this list).
  • One of the shortcomings of a COTS-based Cinder solution is the lack of redundancy. Note in the diagram below that there are 2 Cinder volume nodes. While both nodes can be managed by the same cloud controllers, the volume nodes are in fact independent VSAs that do not share data with each other. Effectively, this means that if a Cinder volume node fails, all volumes exported by that node, as well as the data on them, become unavailable.
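
The capacity sketch below shows the raw RAID yield arithmetic behind the figures quoted above; the actual usable numbers will differ somewhat once formatting overhead, hot spares and the exact drive count per RAID set are taken into account.

```
# A minimal sketch of raw RAID yield; real usable capacity will differ once
# formatting overhead, hot spares and the exact RAID set layout are factored in.
def usable_tb(drive_count, drive_size_tb, raid_level):
    if raid_level == 'raid10':
        return drive_count * drive_size_tb / 2       # mirrored pairs: 50% usable
    if raid_level == 'raid5':
        return (drive_count - 1) * drive_size_tb     # one drive's worth of parity
    raise ValueError('unsupported RAID level')

# Eight 600 GB 15K SAS drives vs. eight 3 TB SAS drives in the same chassis.
for size_tb in (0.6, 3.0):
    print(size_tb, usable_tb(8, size_tb, 'raid10'), usable_tb(8, size_tb, 'raid5'))
```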

In conversations with folks in the Cinder project community, including those on the Cinder technical team, several alternative solutions have been proposed:

  • At the cloud instance layer, mirror logical volumes across multiple Cinder volume nodes. For example, you could take Cinder volumes from 2 nodes and present them to a Linux instance, which would then set up a mirror across the 2 volumes (see the sketch after this list). While this does give you redundancy, you also incur a 50 percent capacity overhead, add complexity to the solution and incur a performance penalty as a result of using software RAID at the instance level.
  • Use a clustered file system with the Cinder volume nodes to create a solution that would be similar to an enterprise array with active-passive storage heads. While this should reduce some of the performance penalty, it again introduces an added level of complexity and the solution still has a 50 percent capacity overhead.
  • Add replication between Cinder volume nodes as a new feature within the Cinder project. The replication may be synchronous or asynchronous; it may leverage existing technologies such as rsync or use a new technology. The concerns with this option include which technologies to add to Cinder, how complex the feature would be, and whether it is the type of feature that belongs in the Cinder project.
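
As a rough illustration of the first option, here is a sketch of mirroring two attached Cinder volumes inside a Linux guest with software RAID. The device names and mount point are assumptions that depend on how the volumes were attached to the instance.

```
# A minimal sketch, run inside the guest with root privileges; /dev/vdb, /dev/vdc
# and /mnt/data are assumptions that depend on how the Cinder volumes were attached.
import subprocess

def mirror_cinder_volumes(dev_a='/dev/vdb', dev_b='/dev/vdc', md='/dev/md0'):
    # Build a RAID 1 (mirror) array across the two attached Cinder volumes.
    subprocess.check_call(['mdadm', '--create', md, '--level=1',
                           '--raid-devices=2', dev_a, dev_b])
    # Put a filesystem on the mirror and mount it for application data.
    subprocess.check_call(['mkfs.ext4', md])
    subprocess.check_call(['mkdir', '-p', '/mnt/data'])
    subprocess.check_call(['mount', md, '/mnt/data'])

if __name__ == '__main__':
    mirror_cinder_volumes()
```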

Cinder with Third-Party Storage Back-Ends

For customers who require a third-party storage solution, RPC currently supports solutions from EMC, NetApp and SolidFire, with more to be offered in the future. There are a number of reasons customers may choose these solutions over a COTS approach:

  • As mentioned earlier, third-party storage solutions can provide greater capacity than a COTS Cinder volume node solution without creating silos of independent nodes.
  • A third-party solution provides more redundancy than is available with a COTS server. For example, EMC, NetApp and SolidFire all have redundant storage controllers with mirrored write cache. In the case of SolidFire, metadata is distributed across multiple nodes as well.
  • Third-party storage solutions generally provide added-value features such as thin-provisioning, tiering and array-based snapshots.
  • Many customers have existing financial and personnel investments in their third-party storage solutions and want to leverage that investment when deploying OpenStack.

When integrating with third-party storage back-ends, RPC places the cinder-volume service on the controller nodes, along with the cinder-api and cinder-scheduler services. As described in part 1, instead of interfacing with LVM, the cinder-volume service interfaces with the volume driver for the specific third-party back-end to export an iSCSI volume to cloud instances.
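
One common way to steer volumes to a particular third-party back-end is through Cinder volume types. Below is a minimal sketch using python-cinderclient; the back-end name and credentials are assumptions, and the volume_backend_name key must match what is configured for that driver in cinder.conf.

```
# A minimal sketch using python-cinderclient; 'solidfire' and the credentials are
# placeholder assumptions and must match the back-end configured in cinder.conf.
from cinderclient import client as cinder_client

cinder = cinder_client.Client('2', 'admin', 'secret', 'myproject',
                              'http://keystone.example.com:5000/v2.0')

# Create a volume type and pin it to the third-party back-end.
vtype = cinder.volume_types.create('solidfire-gold')
vtype.set_keys({'volume_backend_name': 'solidfire'})

# Volumes created with this type are scheduled to that back-end's volume driver.
cinder.volumes.create(size=100, name='db-data', volume_type='solidfire-gold')
```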

Some design considerations to think about when deploying this solution:

  • Although Fibre Channel (FC) support is now available in OpenStack with EMC subsystems, it is not currently supported in RPC. That could change as the FC support matures and Rackspace deems it production ready for our customers. This means that like the COTS solution, a dedicated iSCSI network is required.
  • Typically, a storage specialist will have to be responsible for the design and management of the third-party storage back-end. In a hosted RPC deployment, the Rackspace storage team is engaged to help the cloud architect design a suitable solution.

A third category of Cinder storage back-ends is software-only solutions, such as Ceph and GlusterFS, that can utilize COTS hardware while providing more robust enterprise storage features. These options are not currently supported in RPC, but we are actively evaluating them and customers should see them offered as an option in the near future.

For readers who want to learn more about Cinder with OpenStack, please consult the Rackspace Knowledge Center article on “Configuring OpenStack Block Storage” and the “Block Storage” section of the OpenStack Configuration Reference Guide.

I recently joined SolidFire Cloud Solutions Architect Aaron Delp (@aarondelp) to talk with SolidFire’s John Griffith (@jdg_8) about the role of an OpenStack Program Technical Lead, the evolution of Storage from Icehouse to Juno, Cinder architecture and where block storage fits into OpenStack deployments. You can catch that replay on the Cloudcast at: http://t.co/LRwe1Fxo98.

About the Author

This is a post written and contributed by Kenneth Hui.

Kenneth Hui is a Technology Evangelist with Rackspace. Prior to Rackspace, Ken was a vArchitect at VCE, focused on virtualized converged infrastructure. His passion is to help IT deliver value to their customers through collaboration, automation, and cloud computing. Ken’s responsibilities at Rackspace include helping to drive the adoption of OpenStack and the hybrid cloud. He is an OpenStack Ambassador and a VMware vExpert. Ken blogs about cloud computing, OpenStack, and VMware at http://cloudarchitectmusings.com/ and at http://www.rackspace.com/blog/author/kenneth-hui. He also speaks regularly at conferences and user group meetups. You can follow Ken on Twitter at @hui_kenneth.

