
Laying Cinder Block (Volumes) In OpenStack, Part 1: The Basics

EDITOR’S NOTE: Ken Hui will be joining John Griffith, OpenStack Program Technical Lead for the Cinder Project and Solutions Architect at SolidFire, for a webinar Tuesday, April 29 at 11:00 a.m. CDT to talk about OpenStack Block Storage design considerations, including an interactive panel discussion at the end. Please join John and me by registering at: https://t.co/CRSEOkM5sD.

One OpenStack project that seems to get less attention than others, such as Nova and Neutron, is the Cinder block storage service. I thought it might be helpful to write a series of blog posts that dive into the Cinder project. I’ll start, in this post, by walking through the basics. But first, let’s put Cinder in proper context by taking a look at the available storage options in OpenStack.

Storage Options in OpenStack

There are currently three storage options available with OpenStack – ephemeral, block (Cinder) and object (Swift) storage. The comparison table in the OpenStack Operations Guide helpfully outlines the differences between the three options.

Cloud instances are typically created with at least one ephemeral disk, which is used to run the VM guest operating system and boot partition. An ephemeral disk is purged and deleted when an instance is terminated. Swift was included as one of the original OpenStack projects to provide durable, scale-out object storage. Initially, a block storage service was created as a component of Nova Compute, called Nova Volumes; it has since been broken out into its own project, called Cinder.

What Are The Benefits of Cinder?

So what are the characteristics of Cinder and what are some of its benefits? Cinder provides instances with block storage volumes that persist even when the instances they are attached to are terminated. A single block volume can only be attached to a single instance at any one time, but it has the flexibility to be detached from one instance and attached to another. The best analogy is a USB drive: it can be attached to one machine, then moved to a second machine, with any data on the drive staying intact across the move. Meanwhile, a single instance can attach multiple block volumes.
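To make the “USB drive” analogy concrete, here is a minimal sketch of moving a volume between instances using the nova CLI of that era; INSTANCE_A, INSTANCE_B and VOLUME_ID are placeholders, and /dev/vdb is just a suggested device name:

    # Detach the volume from the first instance ("unplug the USB drive")
    nova volume-detach INSTANCE_A VOLUME_ID

    # Attach the same volume, data intact, to a second instance
    nova volume-attach INSTANCE_B VOLUME_ID /dev/vdb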

Cinder Use Cases

In a cloud platform such as OpenStack, persistent block storage has several potential use cases:

  • If for some reason you need to terminate and relaunch an instance, you can keep any “non-disposable” data on Cinder volumes and re-attach them to the new instance.
  • If an instance misbehaves or crashes unexpectedly, you can launch a new instance and attach Cinder volumes to that new instance with data intact.
  • If a compute node crashes, you can launch new instances on surviving compute nodes and attach Cinder volumes to those new instances with data intact.
  • By using a dedicated storage node or storage subsystem to host Cinder volumes, you can provide more capacity than is available via the direct-attached storage in the compute nodes. (Note: the same is true when using shared storage over NFS, but without data persistence.)
  • A Cinder volume can be used as the boot disk for a cloud instance; in that scenario, an ephemeral disk is not required. (A minimal command sketch follows this list.)
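As a rough illustration of that last use case, the commands below sketch booting from a Cinder volume with Icehouse-era clients; IMAGE_ID and VOLUME_ID are placeholders, and the flavor and size are arbitrary:

    # Create a bootable 10 GB volume from a Glance image
    cinder create --image-id IMAGE_ID --display-name boot-vol 10

    # Once `cinder list` shows the volume as "available", boot an instance
    # whose root disk (vda) is that volume; the trailing 0 means "do not
    # delete the volume when the instance terminates"
    nova boot --flavor m1.small --block-device-mapping vda=VOLUME_ID:::0 bfv-instance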

The Cinder Architecture

As illustrated in an architecture diagram by Avishay Traeger of IBM, the Cinder block storage service is delivered using the following daemons:

  • cinder-api – A WSGI app that authenticates and routes requests throughout the block storage service. It supports the OpenStack APIs only, although there is a translation that can be done through Nova’s EC2 interface, which calls into the Cinder client.
  • cinder-scheduler – Schedules and routes requests to the appropriate volume service. Depending upon your configuration, this may be simple round-robin scheduling across the running volume services, or it can be more sophisticated through the use of the Filter Scheduler (a configuration sketch follows this list).
  • cinder-volume – Manages block storage devices, specifically the back-end devices themselves.
  • cinder-backup – Provides a means to back up a Cinder volume to various backup targets.
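For reference, pointing cinder-scheduler at the Filter Scheduler comes down to a couple of lines in cinder.conf. The excerpt below is illustrative only, using option names from the documentation of that era rather than a recommended production configuration:

    # /etc/cinder/cinder.conf (excerpt) -- illustrative only
    [DEFAULT]
    # Route volume requests through the Filter Scheduler
    scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
    # Filters used to winnow the candidate cinder-volume back ends
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter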

Cinder Workflow Using Commodity Storage

As originally conceived, all Cinder daemons can be hosted on a single Cinder node. Alternatively, all daemons except cinder-volume can be hosted on the Cloud Controller nodes, with the cinder-volume daemon installed on one or more cinder-volume nodes that function as virtual storage arrays (VSAs). These VSAs are typically Linux servers with commodity storage; volumes are created and presented to compute nodes using Logical Volume Manager (LVM).
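On such a cinder-volume node, the LVM driver expects a volume group (named cinder-volumes by default) to exist before any volumes can be created. A minimal setup sketch, assuming a spare disk at /dev/sdb (a placeholder device):

    # Mark the spare disk as an LVM physical volume
    pvcreate /dev/sdb

    # Create the volume group that Cinder's LVM driver carves volumes from;
    # "cinder-volumes" matches the default volume_group setting in cinder.conf
    vgcreate cinder-volumes /dev/sdb

    # Verify the volume group and its capacity
    vgs cinder-volumes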

The following high-level procedure, outlined in the OpenStack Cloud Administrator Guide and visualized in a diagram from IBM’s Traeger, demonstrates how a Cinder volume is attached to a cloud instance when invoked via the command line, the Horizon dashboard or the OpenStack API (a concrete command-line sketch follows these steps):

  1. A volume is created through the cinder create command. This command creates a logical volume (LV) in the volume group (VG) “cinder-volumes.”
  2. The volume is attached to an instance through the nova volume-attach command. This command creates a unique iSCSI IQN that is exposed to the compute node.
  3. The compute node, which runs the instance, now has an active iSCSI session and new local storage (usually a /dev/sdX disk).
  4. Libvirt uses that local storage as storage for the instance. The instance gets a new disk, usually a /dev/vdX disk.
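Mapped onto commands, the four steps look roughly like the following; INSTANCE_ID and VOLUME_ID are placeholders, and exact device names will vary:

    # Step 1: create a 10 GB volume (an LV is carved out of VG "cinder-volumes")
    cinder create --display-name data-vol 10

    # Step 2: attach it to a running instance; an iSCSI target is exposed to
    # the compute node hosting that instance
    nova volume-attach INSTANCE_ID VOLUME_ID /dev/vdb

    # Step 3: on the compute node, the active iSCSI session is visible
    iscsiadm -m session

    # Step 4: inside the guest, the new disk appears (usually /dev/vdb)
    lsblk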

Note that, beginning with the Grizzly release, Cinder also supports Fibre Channel, but for KVM only.

Cinder Workflow Using Enterprise Storage Solutions

Cinder also supports drivers that allow Cinder volumes to be created and presented using storage solutions from vendors such as EMC and NetApp, as well as open source solutions such as Ceph and GlusterFS. These solutions can be used in place of Linux servers with commodity storage and LVM.

Each vendor’s implementation is slightly different, so I would advise readers to look at the OpenStack Configuration Reference Guide to understand how a specific storage vendor implements Cinder with their solution. For example, James Ruddy from the Office of the CTO at EMC has a recent blog post walking through installing Cinder with ViPR. Note that the architecture and workflow are similar across all vendors; the difference is that the cinder-volume node communicates with the external storage solution to manage volumes, rather than functioning as a VSA that creates and presents Cinder volumes from storage attached to the cinder-volume node itself.
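Switching back ends is largely a matter of pointing cinder-volume at a different driver in cinder.conf. As an illustrative sketch only (driver paths follow the Configuration Reference of that era; the values are placeholders), a Ceph RBD back end might look like this:

    # /etc/cinder/cinder.conf (excerpt) -- illustrative only
    [DEFAULT]
    # Default LVM/iSCSI driver, replaced below:
    # volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    # Ceph pool that holds the Cinder volumes ("volumes" is a common choice)
    rbd_pool = volumes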

I am intentionally sticking to the basics in this post, leaving out more sophisticated Cinder features such as snapshots and boot from volume. For more information on other facets of Cinder, I encourage readers to check out the OpenStack Configuration Reference Guide and the OpenStack Cloud Administrator Guide. There, you can also get walk-throughs on how to set up Cinder using commodity hardware and enterprise storage solutions.

Rackspace Private Cloud (RPC), powered by OpenStack, fully supports Cinder as its block storage service. A typical RPC install leverages a server with commodity storage as a Cinder volume node, effectively using that node as a virtual storage array. For customers with existing storage solution investments, or that require functionality that is only available with an enterprise storage array, RPC also supports using solutions from EMC, NetApp and SolidFire. Rackspace is committed to supporting other storage solutions after they’ve gone through rigorous testing and certification.

I recently joined SolidFire Cloud Solutions Architect Aaron Delp (@aarondelp) to talk with SolidFire’s John Griffith (@jdg_8) about the role of an OpenStack Program Technical Lead, the evolution of storage from Icehouse to Juno, Cinder architecture and where block storage fits into OpenStack deployments. You can catch the replay on the Cloudcast at: http://t.co/LRwe1Fxo98.


About the Author

This is a post written and contributed by Rackspace.

