
OpenStack Object Storage Configuration


WARNING: The Rackspace Private Cloud offering is being updated, and this documentation is obsolete.

This section describes the process for preparing the nodes and installing swift.

Install Ubuntu on the OpenStack Object Storage Nodes
Configure Chef On the OpenStack Object Storage Nodes
Add Chef Roles
Configure the OpenStack Object Storage Networks in Chef
Configure the OpenStack Object Storage Chef Environment
Configure the Storage Node Attributes
Using Chef to Put OpenStack Object Storage Together
Test the OpenStack Object Storage Cluster
Enable Monitoring

Install Ubuntu on the OpenStack Object Storage Nodes

Rackspace Private Cloud cookbooks have only been tested on Ubuntu Precise (12.04).

When the nodes are ready, begin the swift installation process with a minimal install of Ubuntu Precise on each server that is intended for use with the initial swift cluster. Ensure that the openssh-server and curl packages are installed; this will enable you to access the server remotely after installation.

For consistency, assign the nodes the same domain name that you assigned to the nodes in the existing Alamo cluster. 

Configure Chef On the OpenStack Object Storage Nodes

There are two methods for installing and configuring Chef client on the Swift nodes.

Configuring Chef With SSH

  1. While logged into the controller node, generate an SSH keypair with ssh-keygen.
  2. Copy the public key to the root account on each Swift node with ssh-copy-id.
    $ ssh-copy-id root@swift_node_IP
  3. Use knife bootstrap to configure chef-client on the first Swift node. Replace node_hostname with the FQDN of the node, your_controller_ip with the IP address of the controller node, and swift_node_ip with the IP address of the Swift node.
    $ knife bootstrap -N node_hostname -s http://your_controller_ip:4000 \
    -x root swift_node_ip
  4. Repeat knife bootstrap for each Swift node. (A loop sketch for bootstrapping multiple nodes follows.)
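
For clusters with several nodes, steps 2 through 4 can be combined into a single loop. This is a minimal sketch; the hostname:IP pairs (swift1:192.0.2.11 and so on) are placeholders for your environment, and your_controller_ip is the controller node's IP address as above.

    $ # placeholder hostname:IP pairs; replace with your own nodes
    $ for entry in swift1:192.0.2.11 swift2:192.0.2.12; do
        name=${entry%%:*}; ip=${entry##*:}
        ssh-copy-id root@${ip}
        knife bootstrap -N ${name}.example.com -s http://your_controller_ip:4000 \
          -x root ${ip}
      done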

Configuring Chef Without SSH

  1. Log on to the swift node and switch to root access with sudo -i.
  2. Install Chef client.
    $ curl -L http://opscode.com/chef/install.sh | bash 
    
  3. Configure Chef client. Replace your_controller_ip with the IP address of the controller node.
    $ mkdir -p /etc/chef
    $ export chef=your_controller_ip
    $ cat > /etc/chef/client.rb <<EOF
    chef_server_url "http://${chef}:4000"
    environment "rpcs"
    EOF
    $ wget -nv http://${chef}:4000/validation.pem -O /etc/chef/validation.pem
    
  4. Register the node with Chef server.
    $ rm -fr /etc/chef/client.pem ; chef-client
    
  5. Configure the client to start on boot.
    $ export CHEF_VER=$(chef-client -v | awk '{print $2}')
    $ cp /opt/chef/embedded/lib/ruby/gems/1.9.1/gems/chef-${CHEF_VER}/distro/debian/etc/init/chef-client.conf \
        /etc/init/chef.conf
    $ cp /opt/chef/embedded/lib/ruby/gems/1.9.1/gems/chef-${CHEF_VER}/distro/debian/etc/default/chef-client \
        /etc/default/chef-client
    $ cp /opt/chef/embedded/lib/ruby/gems/1.9.1/gems/chef-${CHEF_VER}/distro/debian/etc/init.d/chef-client \
        /etc/init.d/chef-client
    $ mkdir -p /var/log/chef
    $ /etc/init.d/chef-client start
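
Whichever method you use, you can confirm from the controller node that each swift node registered with the Chef server; every node's hostname should appear in the output.

    $ knife node list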
    

Add Chef Roles

This procedure adds the Chef roles to the servers. The RCB Chef recipes are grouped into roles for ease of use, with the following roles used to manage swift:

  • swift-account-server: Storage node for account data.
  • swift-container-server: Storage node for container data.
  • swift-object-server: Storage node for object data.
  • swift-proxy-server: Proxy for swift storage nodes.
  • swift-management-server: Proxy node with account management and ring builder/repository.
  • swift-all-in-one: Role shortcut for all object classes and proxy on one machine. Since Rackspace recommends that you configure OpenStack Object Storage on a minimum of five nodes, you will probably not use this role.

These roles can be assigned to any number of nodes within your swift cluster with the following exceptions and requirements:

  • The swift-management-server role can only be assigned to a SINGLE server. Assigning this role to more than one server will result in unintended behavior.
  • In a cluster with 3 replicas, each of the swift-account-server, swift-container-server and swift-object-server roles must be assigned to at least three nodes.

In larger environments, where it is cost effective to split the proxy and storage layer, storage nodes will carry swift-{account,container,object}-server roles, and there will be dedicated hosts with the swift-proxy-server role. In very large environments, it's possible that the storage node will be split into swift-{container,account}-server nodes and swift-object-server nodes.

Before you begin, be sure to note each server's fully qualified domain name (FQDN) and the roles that you will assign to each while adhering to the exceptions listed above. For example, a swift cluster with 7 nodes, 5 of which are storage nodes, might have the following role list:

Node      FQDN                 Role(s)
Node #1   swift1.example.com   swift-management-server
Node #2   swift2.example.com   swift-proxy-server
Node #3   swift3.example.com   swift-object-server, swift-container-server, swift-account-server
Node #4   swift4.example.com   swift-object-server, swift-container-server, swift-account-server
Node #5   swift5.example.com   swift-object-server, swift-container-server, swift-account-server
Node #6   swift6.example.com   swift-object-server, swift-container-server, swift-account-server
Node #7   swift7.example.com   swift-object-server, swift-container-server, swift-account-server

Note: In all of the following steps, replace domain_name with the fully qualified domain name (FQDN) of the server you wish to perform the action on.

  1. Log into the controller node and switch to root access with sudo -i.
  2. Run the swift::swift-setup recipe on the swift controller node by adding it to the node's run list. (Note: This recipe is used as a search tag.)
    $ knife node run_list add domain_name 'recipe[swift::swift-setup]'
  3. Identify the node on which you want to perform ring management functions and create an OpenStack Object Storage Management Server by assigning the swift-management-server role.
    $ knife node run_list add domain_name 'role[swift-management-server]'
    
  4. Identify the node that will provide access to your swift objects and create an OpenStack Object Storage Proxy Server by assigning the swift-proxy-server role.
    $ knife node run_list add domain_name 'role[swift-proxy-server]'
    
  5. Identify the node(s) on which you want to store swift objects and create an OpenStack Object Storage Object Server by assigning the swift-object-server role.
    $ knife node run_list add domain_name 'role[swift-object-server]'
    
  6. Identify the node(s) on which you want to store swift container objects and create an OpenStack Object Storage Container Server by assigning the swift-container-server role.
    $ knife node run_list add domain_name 'role[swift-container-server]'
    
  7. Identify the node(s) on which you want to store swift account objects and create an OpenStack Object Storage Account Server by assigning the swift-account-server role.
    $ knife node run_list add domain_name 'role[swift-account-server]'
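
As a convenience, the role assignments for the seven-node example cluster above can be expressed as a short script. This is a sketch only; adjust the hostnames and role lists to match your own cluster.

    $ knife node run_list add swift1.example.com 'role[swift-management-server]'
    $ knife node run_list add swift2.example.com 'role[swift-proxy-server]'
    $ # assign all three storage roles to each storage node
    $ for n in swift3 swift4 swift5 swift6 swift7; do
        for r in swift-object-server swift-container-server swift-account-server; do
          knife node run_list add ${n}.example.com "role[${r}]"
        done
      done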
    

Configure the OpenStack Object Storage Networks in Chef

By default, OpenStack Object Storage uses two networks for communication of data inside and outside of the cluster.

  • public: External network for proxy server access. Your proxy server must have an interface configured with an IP address in this network.
  • swift: Internal private network for swift replication services.

Additional networks can be defined to override the default values. Many options are available; the following are representative examples.

  • swift-public: Defines the proxy server external network. This functionality is included in public by default.
  • swift-lb: IP network of the swift proxy servers. This functionality is included in public by default.
  • swift-private: The IP network for internal replication services. This functionality is included in swift by default.

The examples below use the network 192.168.119.0/24 for both internal and external communication, and define the swift-lb, swift-private, and swift networks. Replace the IP network with the correct configuration for your installation.

  1. Log into the controller node and switch to root access with sudo -i.
  2. Define the swift-lb network.
    $ knife exec -E '@e=Chef::Environment.load("rpcs"); \
      a=@e.override_attributes;\
      a["osops_networks"]["swift-lb"]="192.168.119.0/24"; \
      @e.override_attributes(a); @e.save'
    
  3. Define the swift-private network.
    $ knife exec -E '@e=Chef::Environment.load("rpcs"); \
      a=@e.override_attributes; \
      a["osops_networks"]["swift-private"]="192.168.119.0/24"; \
      @e.override_attributes(a); @e.save'
    
  4. Define the swift network.
    $ knife exec -E '@e=Chef::Environment.load("rpcs"); \
      a=@e.override_attributes; \
      a["osops_networks"]["swift"]="192.168.119.0/24"; \
      @e.override_attributes(a); @e.save'
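
To verify that the networks were recorded in the environment, display the rpcs environment and check the osops_networks attribute:

    $ knife environment show rpcs -F json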
    

Configure the OpenStack Object Storage Chef Environment

This procedure creates the swift hash, configures OpenStack Object Storage to use Keystone, and enables the application of hot patches. The swift cookbooks contain patches for dispersion reports that are not available in the current Ubuntu packages (as of swift 1.4.8); enabling hot patches applies those fixes.

The swift hash is a seed used when computing the MD5 hashes for object storage. You must store the swift hash in a safe location. Do not modify the hash after the initial ring has been created and data has been stored in the swift cluster.

WARNING: Modification of this value after data has been stored can result in loss of data.

  1. Log into the controller node and switch to root access with sudo -i.
  2. Set the swift hash. Replace RANDOM_STRING with your own private hash for your swift cluster. (One way to generate a suitable random string is shown after this procedure.)
    $ knife exec -E '@e=Chef::Environment.load("rpcs"); \
      a=@e.override_attributes; \
      a.merge!({"swift" => {"swift_hash" => "RANDOM_STRING"}}); \
      @e.override_attributes(a); @e.save'
    
  3. Configure swift to use Keystone.
    $ knife exec -E '@e=Chef::Environment.load("rpcs"); \
      a=@e.override_attributes; \
      a["swift"].merge!({"authmode" => "keystone"}); \
      @e.override_attributes(a); @e.save'
    
  4. Enable hot patches.
    $ knife exec -E '@e=Chef::Environment.load("rpcs"); \
      a=@e.override_attributes; \
      a.merge!({"osops" => {"apply_patches" => true}}); \
      @e.override_attributes(a); @e.save'
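
One way to generate a suitably random value for RANDOM_STRING is with openssl; any sufficiently long random string will do.

    $ openssl rand -hex 32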
    

Configure the Storage Node Attributes

The Swift Chef cookbooks utilize attributes defined on each of the nodes to control the zone definition that is used during the ring building process. Refer to OpenStack Object Storage Zones for more information about zones.

Using the 7-node cluster example from Add Chef Roles, we will assign each of the storage nodes to a separate zone.

  1. Log into the controller node and switch to root access with sudo -i.
  2. Configure the zone assignment. (A loop equivalent of these commands appears after this procedure.)
    $ knife exec -E "nodes.find(:name => 'swift3.example.com') \
      {|n| n.set['swift']['zone'] = '1'; n.save }"
    $ knife exec -E "nodes.find(:name => 'swift4.example.com') \
      {|n| n.set['swift']['zone'] = '2'; n.save }"
    $ knife exec -E "nodes.find(:name => 'swift5.example.com') \
      {|n| n.set['swift']['zone'] = '3'; n.save }"
    $ knife exec -E "nodes.find(:name => 'swift6.example.com') \
      {|n| n.set['swift']['zone'] = '4'; n.save }"
    $ knife exec -E "nodes.find(:name => 'swift7.example.com') \
      {|n| n.set['swift']['zone'] = '5'; n.save }"
    
  3. Verify that the storage nodes are assigned a zone.
    • Account Servers
      $ knife exec -E 'search(:node,"role:swift-account-server") \
        { |n| z=n[:swift][:zone]||"not defined"; puts "#{n.name} has the role \
        [swift-account-server] and is in swift zone #{z}"; }'
                                             
      swift3.example.com has the role [swift-account-server] and is in swift zone 1
      swift4.example.com has the role [swift-account-server] and is in swift zone 2
      swift5.example.com has the role [swift-account-server] and is in swift zone 3
      swift6.example.com has the role [swift-account-server] and is in swift zone 4
      swift7.example.com has the role [swift-account-server] and is in swift zone 5
      
    • Container Servers
      $ knife exec -E 'search(:node,"role:swift-container-server") \
        { |n| z=n[:swift][:zone]||"not defined"; puts "#{n.name} has the role \
        [swift-container-server] and is in swift zone #{z}"; }'
                                             
      swift3.example.com has the role [swift-container-server] and is in swift zone 1
      swift4.example.com has the role [swift-container-server] and is in swift zone 2
      swift5.example.com has the role [swift-container-server] and is in swift zone 3
      swift6.example.com has the role [swift-container-server] and is in swift zone 4
      swift7.example.com has the role [swift-container-server] and is in swift zone 5
      
    • Object Servers
      $ knife exec -E 'search(:node,"role:swift-object-server") \
        { |n| z=n[:swift][:zone]||"not defined"; puts "#{n.name} has the role \
        [swift-object-server] and is in swift zone #{z}"; }'
                                             
      swift3.example.com has the role [swift-object-server] and is in swift zone 1
      swift4.example.com has the role [swift-object-server] and is in swift zone 2
      swift5.example.com has the role [swift-object-server] and is in swift zone 3
      swift6.example.com has the role [swift-object-server] and is in swift zone 4
      swift7.example.com has the role [swift-object-server] and is in swift zone 5
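
The five zone-assignment commands in step 2 can also be written as a loop. A sketch, assuming the same example hostnames:

    $ zone=1
    $ for node in swift3 swift4 swift5 swift6 swift7; do
        knife exec -E "nodes.find(:name => '${node}.example.com') \
          {|n| n.set['swift']['zone'] = '${zone}'; n.save }"
        zone=$((zone+1))
      done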
      

Using Chef to Put OpenStack Object Storage Together

The Chef recipes handle candidate device discovery, application installation, drive mounting, ring building, and ring distribution. To make these recipes resilient against failure, certain actions are taken on each run, and in most cases Chef must be run multiple times during the initial configuration of a node to bring the system into a consistent and operating state.

Storage Node Chef runs occur in the following phases:

  • Phase 1 - Application installation, initial configuration and candidate device discovery.
  • Phase 2 - Candidate drives are formatted and mounted, and /etc/fstab is updated so that they mount on boot.
  • Phase 3 - Ring updates are pulled from management server and services restart.
  • Phase 4 - Phases 2 and 3 repeat as the system state changes (for example, as new drives are added to the node or a new ring becomes available).

Management Server Chef runs occur in the following phases:

  • Phase 1 - Application installation and initial configuration.
  • Phase 2 - Ring proposal shell script is created.
  • Phase 3 - Phase 2 repeats as new nodes or drives become available within the cluster.

Note: OpenStack Object Storage is designed for human interaction when modifying the ring. Distributing a faulty ring to your swift cluster can result in loss of data, because the data may be stored in an incorrect location. Rackspace provides a shell script that contains proposed changes to the ring from the Chef recipes. You must review this script before executing it to verify that the changes made to the ring will meet your operational requirements.

  1. Perform the initial application install and configuration on the swift management server (the server that was assigned the swift-management-server role in Add Chef Roles).
    1. Log into the swift management server and switch to root access with sudo -i.
    2. Run chef-client for the first time.
      $ chef-client
      
  2. Perform the initial application install, configuration, and candidate device discovery on the storage nodes.
    1. Log into the storage node and switch to root access with sudo -i.
    2. Run chef-client for the first time.
      $ chef-client
      
    3. Log into the controller node and verify the discovery of all candidate drives.
      $ knife exec -E \
        'search(:node,"role:swift-object-server OR \
        role:swift-account-server \
        OR role:swift-container-server") \
         { |n| puts "#{n.name}"; \
         begin; n[:swift][:state][:devs].each do |d| \
         puts "\tdevice #{d[1]["device"]}"; \
         end; rescue; puts \
         "no candidate drives found"; end; }'
      
      --- [additional output omitted] ---
      swift4.example.com
      device sdg1
      device sdf1
      device sde1
      device sdb1
      device sdd1
      device sdc1
      --- [additional output omitted] ---
      
    4. Run chef-client again on the node to mount and format drives.
      $ chef-client
      
    5. On the node, verify that the candidate drives were mounted.
      $ df     

    Repeat these steps on each of the storage nodes.

  3. Build the proposed ring.
    1. Log into the swift management server and switch to root access with sudo -i.
    2. Run chef-client to generate the proposed ring.
      $ chef-client 
    3. Review the proposed ring to ensure that it meets your operational requirements. If necessary, make changes.
      $ cd /etc/swift/ring-workspace
      $ vi generate-rings.sh
      
    4. If the proposed ring is correct, comment out exit 0 in the script and run it with the following commands:
      $ sed -i 's/exit 0/# exit 0/g' generate-rings.sh
      $ ./generate-rings.sh
      
  4. Push the ring to git.
    $ cd /etc/swift/ring-workspace/rings
    $ git add account.builder container.builder object.builder
    $ git add account.ring.gz container.ring.gz object.ring.gz
    $ git commit -m "initial commit"
    $ git push
    
  5. Distribute the ring.
    1. Log into each proxy and storage node and switch to root access with sudo -i.
    2. Run chef-client.
      $ chef-client
      
    3. Repeat these steps on each proxy and storage node. This process pulls the ring updates from the management server onto each node.
    4. Log onto the swift management server and verify that the ring has been distributed.
      $ swift-recon --md5
      

You also have the option to enable swift image storage in OpenStack Image Service (Glance).

Note: Existing images already stored in the Image Service will not be migrated to Object Storage automatically. Migration is outside the scope of this document. Enabling swift image storage in Image Service will not affect your existing images as long as Image Service was not already using some form of swift storage, such as Rackspace Cloud Files or another Object Storage cluster.

  1. Log into the controller node and switch to root access with sudo -i.
  2. Enable swift storage in Image Service with the following command.
    $ knife exec -E '@e=Chef::Environment.load("rpcs"); \
      a=@e.override_attributes; \
      a["glance"].merge!({"api" => {"default_store" => "swift"}}); \
      @e.override_attributes(a); @e.save'
    
  3. Run chef-client on the controller node.
    $ chef-client
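
After the chef-client run completes, you can confirm that the change took effect by checking the Image Service API configuration. The path below assumes the default glance-api configuration location; the output should include default_store = swift.

    $ grep default_store /etc/glance/glance-api.conf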
    

Test the OpenStack Object Storage Cluster

When you originally installed your cluster, you were asked to create an OpenStack username and password, which also created a tenant of the same name. You can use these credentials to test connectivity to your newly deployed swift cluster. The following example uses a username and password of demo and password, and a tenant named demo.

For this procedure, you will install the swift command-line client. You can install this client on any server, including your own desktop.

  1. Install the swift client (this can be done on any server):
    $ apt-get -y install swift
  2. Set up the environment variables for the client.
    $ cat > ~/.swiftrc << EOF
    export OS_USERNAME=demo
    export OS_PASSWORD=password
    export OS_TENANT_NAME=demo
    export OS_AUTH_URL=http://controller_node_ip:5000/v2.0/
    EOF
    $ source ~/.swiftrc
    
  3. Use the following commands to test OpenStack Object Storage features.
    • Display account statistics:
      $ swift stat
      
      Account: AUTH_51f488117cc64756b32f3f6e1f771e8f
      Containers: 0
      Objects: 0
      Bytes: 0
      Accept-Ranges: bytes
      X-Trans-Id: txb07509e228f84dffbcfb8ade8ed3ec53
      
    • Create a container:
      $ swift post testcontainer
      
    • Upload a test object to the test container:
      $ echo 'test' > testobject
      $ swift upload testcontainer testobject
      
    • Verify that the test object is in the test container:
      $ swift list testcontainer
      testobject
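
To complete the round trip, you can download the object and compare it with the original; assuming your swift client supports the -o output option, no diff output means the object was stored and retrieved intact.

    $ swift download testcontainer testobject -o testobject.downloaded
    $ diff testobject testobject.downloaded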
                             

Enable Monitoring

Rackspace Private Cloud includes monitoring with the Graphite tool. Monitoring must be enabled on the controller and compute nodes; it is enabled by default on controller nodes installed with Rackspace Private Cloud tools. Follow this procedure to enable monitoring of OpenStack Object Storage nodes.

  1. Log on to the controller node and switch to root access with sudo -i.
  2. Execute the following knife command for each Object Storage node, substituting the appropriate node names. (A loop sketch follows this procedure.)
    $ knife node run_list add node_name 'role[collectd-client]'
    
  3. Log into each object storage node and switch to root access with sudo -i.
  4. Run chef-client on each storage node.
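
As with the other role assignments, step 2 can be scripted when there are many nodes. A sketch, reusing the example hostnames from Add Chef Roles:

    $ for n in swift1 swift2 swift3 swift4 swift5 swift6 swift7; do
        knife node run_list add ${n}.example.com 'role[collectd-client]'
      done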

Monitoring data for the storage nodes will now be available via collectd on your controller node, on port 8080, as in the following example:

http://172.16.137.10:8080
