
OpenStack Object Storage Concepts and Rackspace Recommendations


WARNING: The Rackspace Private Cloud offering is being updated, and this documentation is obsolete.

Table of Contents

OpenStack Object Storage Zones
Rackspace Zone Recommendations
RAID Controller Configuration

OpenStack Object Storage (http://swift.openstack.org) is a highly available, distributed, eventually consistent object/blob store. The OpenStack Object Storage Administration Manual includes a sample architecture diagram, which you may find useful to refer to.

OpenStack Object Storage can be used standalone to provide services similar to Rackspace Cloud Files (http://www.rackspace.com/cloud/public/files/) for an organization. 

OpenStack Object Storage integrates well with the compute layer and can be used as a backend for glance, providing an inexpensive, horizontally scalable store for all image and snapshot data. OpenStack Object Storage is not a block storage solution; block storage is provided by the Cinder project. It does not provide volumes that can be mounted by instances, nor does it directly provide disk space for the virtual machine instances in the cluster.

This document describes the process of installing and configuring a swift cluster for use inside an existing Alamo cluster, including optional integration with glance.

OpenStack Object Storage Zones

Zones are a fundamental concept in OpenStack Object Storage. They are the primary means of isolating failures. OpenStack Object Storage creates three replicas of an object and distributes them across the cluster, attempting to keep each copy in a different zone. For the specifics of this algorithm, see http://docs.openstack.org/developer/swift/overview_ring.html.
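
The following Python sketch is purely illustrative and is not Swift's actual ring algorithm (which hashes objects into partitions); it only shows the idea of keeping three replicas of an object in three distinct zones. The zone names and object name are hypothetical.

import hashlib

ZONES = ["zone1", "zone2", "zone3", "zone4", "zone5"]
REPLICAS = 3

def place_replicas(object_name, zones=ZONES, replicas=REPLICAS):
    # Hash the object name to pick a starting zone, then walk the zone list
    # so that each replica lands in a different zone.
    digest = hashlib.md5(object_name.encode("utf-8")).hexdigest()
    start = int(digest, 16) % len(zones)
    return [zones[(start + i) % len(zones)] for i in range(replicas)]

print(place_replicas("AUTH_demo/photos/cat.jpg"))
# e.g. ['zone4', 'zone5', 'zone1'] -- three copies, three different zones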

Zones are arbitrarily assigned; it is up to the administrator of the swift cluster to choose an isolation level and to maintain it through appropriate zone assignment. The administrator ensures that each zone contains an equal number of nodes, and swift ensures that each object's replicas are spread across three zones. In the event of a single zone failure, at most one replica is lost, leaving two replicas available.

In small clusters (five nodes or fewer), each individual node counts as a single zone. Larger swift deployments may assign zone designations differently; for example, an entire cabinet of servers may be designated as a single zone so that replicas remain available if the cabinet becomes unavailable (due, for example, to failure of the top-of-rack switches or a dedicated circuit). In very large deployments, such as service provider deployments, each zone might have an entirely autonomous switching and power infrastructure, so that even the loss of a circuit or aggregator would result in the loss of at most a single replica.

The level of resilience is decided entirely by the administrator, who must choose zone placement with care. Choosing zones properly makes maintenance easier; you can perform work on an entire zone without significantly impacting the operations of swift. Choosing zones poorly can make life very difficult.

Rackspace Zone Recommendations

For ease of maintenance on OpenStack Object Storage, Rackspace recommends that you set up at least five nodes. Each node will be assigned its own zone (for a total of five zones), which will give you host level redundancy. This allows you to take down a single zone for maintenance and still guarantee object availability in the event that another zone fails during your maintenance.
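
The short Python check below works through the reasoning above under the stated assumptions (five zones, three replicas, each replica in a distinct zone; zone names hypothetical): whichever single zone you take down for maintenance, and whichever additional zone then fails, every object keeps at least one reachable replica.

from itertools import combinations

zones = ["zone1", "zone2", "zone3", "zone4", "zone5"]

# Every possible placement of three replicas across distinct zones, checked
# against every possible pair of unavailable zones (maintenance + one failure).
for replica_zones in combinations(zones, 3):
    for down in combinations(zones, 2):
        remaining = set(replica_zones) - set(down)
        assert remaining, "an object would be unavailable"

print("Every object keeps at least one reachable replica.")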

You could keep each server in its own cabinet to achieve cabinet-level isolation, but you may wish to wait until your swift service is better established before developing cabinet-level isolation. OpenStack Object Storage is flexible; if you later decide to change the isolation level, you can take down one zone at a time and move its nodes to their new homes.

RAID Controller Configuration

OpenStack Object Storage does not require RAID. In fact, most RAID configurations cause significant performance degradation. The main reason for using a RAID controller is its battery-backed cache. For data integrity, it is very important that when the operating system confirms a write has been committed, the write has actually reached a persistent location. Most disks lie about hardware commits by default, instead writing to a faster write cache for performance reasons. In most cases, that write cache exists only in non-persistent memory. If power is lost, this data may never actually be committed to disk, resulting in discrepancies that the underlying filesystem must handle.
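
As a rough illustration of what "reached a persistent location" means at the application level, the sketch below writes a file and fsyncs both the file and its containing directory. Note that fsync only pushes data out of the operating system's caches; a drive that acknowledges writes from a volatile on-disk cache can still lose them, which is exactly why drive caches should be disabled in favor of a battery-backed controller cache. The function name and path are ours, chosen for illustration.

import os

def durable_write(path, data):
    # Write the data and flush it through the OS caches to the device.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)
    # Also fsync the containing directory so the new entry itself is durable.
    dir_fd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)

durable_write("example.dat", b"payload")  # in practice, a path on a data disk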

OpenStack Object Storage works best on the XFS filesystem, and this document assumes that the underlying hardware is configured appropriately (with a battery-backed write cache) so that the filesystems can safely be mounted with the nobarrier option. For more information, refer to the XFS FAQ: http://xfs.org/index.php/XFS_FAQ
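
If you want to verify the mount options afterwards, a small sketch like the one below reads /proc/mounts (Linux-specific) and lists any XFS filesystems that are not mounted with nobarrier. The helper name is ours and is not part of any Swift tooling.

def xfs_mounts_without_nobarrier(mounts_file="/proc/mounts"):
    # Return the mount points of XFS filesystems missing the nobarrier option.
    missing = []
    with open(mounts_file) as f:
        for line in f:
            device, mount_point, fstype, options = line.split()[:4]
            if fstype == "xfs" and "nobarrier" not in options.split(","):
                missing.append(mount_point)
    return missing

for mount_point in xfs_mounts_without_nobarrier():
    print("XFS mount without nobarrier:", mount_point)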

To get the most out of your hardware, it is essential that every disk used in OpenStack Object Storage is configured as a standalone, individual RAID 0 volume; with six disks, you would have six single-disk RAID 0 volumes, or one JBOD. Some RAID controllers do not support JBOD, or do not support battery-backed cache with JBOD. To ensure the integrity of your data, you must ensure that the individual drive caches are disabled and that the battery-backed cache on your RAID card is configured and used. Failure to configure the controller properly puts data at risk in the event of a sudden loss of power.

You can also achieve a battery-backed cache configuration without a RAID controller by using hybrid drives or similar options.

© 2011-2013 Rackspace US, Inc.

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License
