
Installing OpenStack With Rackspace Private Cloud Tools


This document discusses the process for installing an OpenStack environment with the Rackspace Private Cloud cookbooks.

Before you begin, it is strongly recommended that you review Installation Prerequisites and Concepts to ensure that you have completed all necessary preparations for the installation process.

For information about OpenStack Networking (Neutron, formerly Quantum), see Configuring OpenStack Networking.

NOTE: For information about Rackspace Private Cloud v. 2.0 and v. 3.0, see Rackspace Private Cloud Software: Archived Versions.

Table of Contents

Prepare the Nodes
Install Chef Server, Cookbooks, and chef-client
Install Chef Server
Install the Rackspace Private Cloud Cookbooks
Install chef-client
Installing OpenStack
Overview of the Configuration
Create an Environment
Define Network Attributes
Set the Node Environments
Add a Controller Node
Controller Node High Availability
Add a Compute Node
Troubleshooting the Installation

This chapter discusses the process for installing an OpenStack environment with the Rackspace Private Cloud cookbooks. For networking, these instructions only apply to the default nova-network configuration. For information about OpenStack Networking (Neutron, formerly Quantum), see Configuring OpenStack Networking.

The installation process involves the following stages:

  • Preparing the nodes
  • Installing Chef server, the Rackspace Private Cloud cookbooks, and chef-client
  • Creating a Chef environment and defining attributes
  • Setting the node environments
  • Applying the Controller and Compute roles to the nodes
Note

Before you begin, Rackspace recommends that you review Installation Prerequisites and Concepts to ensure that you have completed all necessary preparations for the installation process.

Prepare the Nodes

Before you begin, ensure that the OS is up-to-date on the nodes. Log in to each node and run the appropriate update for the OS and the package manager.
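
For example, a typical update run looks like the following (run as root; the exact commands depend on your distribution):

On Ubuntu:

# apt-get update && apt-get -y upgrade

On CentOS:

# yum -y update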

You should also have an administrative user (such as admin) with the same user name configured across all nodes that will be part of your environment.
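
For example, a minimal sketch of creating such a user on each node (the user name admin is illustrative, and CentOS-based systems use the wheel group instead of sudo):

# useradd -m -s /bin/bash admin
# passwd admin
# usermod -aG sudo admin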

Install Chef Server, Cookbooks, and chef-client

Your environment must have a Chef server, the latest versions of the Rackspace Private Cloud cookbooks, and chef-client on each node within the environment. You must install the Chef server node first.

Installation is performed via a curl command that launches an installation script. The script downloads the packages from GitHub and uses them to install the components. You can review the scripts in the GitHub repository at https://github.com/rcbops/support-tools/tree/master/chef-install.

Before you begin, ensure that curl is available, or install it with apt-get install -y curl on Ubuntu or yum install curl on CentOS.

Install Chef Server

The Chef server should be a device that is accessible on ports 443 and 80 to the devices that will be configured as OpenStack cluster nodes. On distros running iptables, you may need to enable access on these ports.
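
For example, on a node where iptables blocks inbound traffic, rules similar to the following would open the required ports (how you persist the rules depends on the distribution):

# iptables -I INPUT -p tcp --dport 443 -j ACCEPT
# iptables -I INPUT -p tcp --dport 80 -j ACCEPT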

By default, the script installs Chef 11.0.8 with a set of randomly generated passwords, and also installs a Knife configuration that is set up for the root user.

The following variables are added to your environment:

  • CHEF_SERVER_VERSION: defaults to 11.0.8
  • CHEF_URL: defaults to https://<hostURL>:443
  • CHEF_UNIX_USER: the user for which the Knife configuration is set; defaults to root.
  • A set of randomly generated passwords:
    • CHEF_WEBUI_PASSWORD
    • CHEF_AMQP_PASSWORD
    • CHEF_POSTGRESQL_PASSWORD
    • CHEF_POSTGRESQL_RO_PASSWORD
  1. Log in to the device that will be the Chef server and download and run the install-chef-server.sh script.
    # curl -s -O https://raw.github.com/rcbops/support-tools/master/chef-install/install-chef-server.sh 
    # bash install-chef-server.sh
                        
  2. Source the environment file to enable the knife command.
    # source /root/.bash_profile
                                
  3. Run the following command to ensure that knife is working correctly.
    # knife client list
                            

If the command runs successfully, the installation is complete. If it does not run successfully, you may need to log out of the server and log in again to re-source the environment.
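
Because the install script records these values in the environment file that you sourced in step 2, you can review the generated settings and passwords later with a command like the following:

# grep CHEF_ /root/.bash_profile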

Install the Rackspace Private Cloud Cookbooks

The Rackspace Private Cloud cookbooks are set up as Git submodules and are hosted at http://github.com/rcbops/chef-cookbooks, with individual cookbook repositories at http://github.com/rcbops-cookbooks.

The following procedure describes the download process for the full suite, but you can also download individual cookbook repositories, such as the Nova repository at https://github.com/rcbops-cookbooks/nova.

  1. Log in to your Chef server or to a workstation that has knife access to the Chef server.
  2. Verify that the knife.rb configuration file contains the correct cookbook_path setting.
  3. Use git clone to download the cookbooks.
    # git clone https://github.com/rcbops/chef-cookbooks.git
                            
  4. Navigate to the chef-cookbooks directory.
    # cd chef-cookbooks
                                
  5. Check out the desired version of the cookbooks. The current versions are v4.2.2 and v4.1.3.
    # git checkout <version>
    # git submodule init
    # git submodule sync
    # git submodule update
                            
  6. Upload the cookbooks to the Chef server.
    # knife cookbook upload -a -o cookbooks 
                            
  7. Apply the updated roles.
    # knife role from file roles/*.rb
                            

Your Chef cookbooks are now up to date.
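
To confirm that the upload succeeded, you can list the cookbooks that the Chef server now knows about:

# knife cookbook list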

Install chef-client

All of the nodes in your OpenStack cluster need to have chef-client installed and configured to communicate with the Chef server. This can be most easily accomplished with the knife bootstrap command. The nodes on which the OpenStack cluster will be configured should be able to access the Chef server on ports 443 and 80. Note that this will not work if you are behind an HTTP proxy.

Each client node must have a resolvable hostname. If the hostname cannot resolve, the nodes will not be able to check in properly.
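
For example, you can verify resolution on each node before bootstrapping it:

# hostname -f
# getent hosts `hostname -f`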

  1. Log in to the Chef server as root.
  2. Generate an ssh key with the ssh-keygen command. Accept the defaults when prompted.
  3. Use the knife bootstrap command to bootstrap the nodes to the Chef server. This command installs chef-client on the target node and allows it to communicate with the server. You will specify the name of the environment, the user name that will be associated with the ssh key, and the IP address of the node.

    For example:

    # knife bootstrap -E <environmentName> -i .ssh/id_rsa_private \
      --sudo -x <sshUserName> <nodeIPAddress>
    
  4. After you have completed the bootstrap process on each node, you must add the IP address and host name of the Chef server to the /etc/hosts file on each node. Log into the first client node and open /etc/hosts with your preferred text editor.
  5. Add a line with the Chef server's IP address and host name in the following format:
    <chefServerIPAddress> <chefServerHostName>
                            
  6. Save the file.

Repeat steps 3-6 for each node in the environment.
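
As an illustration only, bootstrapping a node at 192.0.2.10 into an environment named private-cloud with an administrative user named admin would look like the following, after which knife node list should show the new node (all names and addresses here are example values):

# knife bootstrap -E private-cloud -i .ssh/id_rsa_private \
  --sudo -x admin 192.0.2.10
# knife node list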

Installing OpenStack

You have now created a configuration management system for your OpenStack cluster, based on Chef, and given Chef the ability to manage the nodes in the environment. You are now ready to use the Rackspace Private Cloud cookbooks to deploy OpenStack.

This section demonstrates a typical OpenStack installation, and includes additional information about customizing or modifying your installation.

Overview of the Configuration

A typical OpenStack installation configured with Rackspace Private Cloud cookbooks consists of the following components:

  • One or two infrastructure controller nodes that host central services, such as rabbitmq, MySQL, and the Horizon dashboard. These nodes will be referred to as Controller nodes in this document.
  • One or more servers that host virtual machines. These nodes will be referred to as Compute nodes.
  • If you are using OpenStack Networking, you may have a standalone network node. Networking roles can also be applied to the Controller node. This is explained in detail in Configuring OpenStack Networking.

The cookbooks are based on the following assumptions:

  • All OpenStack services, such as Nova and Glance, use MySQL as their database.
  • High availability is provided by VRRP.
  • Load balancing is provided by haproxy.
  • KVM is the hypervisor.
  • Networking is provided either by nova-network in flat HA mode or by OpenStack Networking (Neutron).

More information is available at the Rackspace Private Cloud Reference Architectures page.

Create an Environment

The first step is to create an environment on the Chef server. In this example, the knife environment create command is used to create an environment called private-cloud. The -d flag is used to add a description of the environment.

# knife environment create private-cloud -d "Rackspace Private Cloud OpenStack Environment"
                    

This creates a JSON environment file that can be directly edited to add attributes specific to your configuration. To edit the environment, run the knife environment edit command:

# knife environment edit private-cloud
                

This will open a text editor where the environment settings can be modified and override attributes added.
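
You can also confirm that the environment exists and review its current settings without opening an editor:

# knife environment show private-cloud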

Define Network Attributes

You must now add a set of override attributes to define the nova, public, and management networks in your environment. For details about the network information you need, refer to Network Requirements.

Note

This information is for configuring nova-network, which is what a Rackspace Private Cloud environment uses by default. If you want to use OpenStack Networking, see Configuring OpenStack Networking.

To define override attributes, you will need to run the knife environment edit command and add a networking section, substituting your network information.

The v4.2.2 and v4.1.3 cookbooks use hash syntax to define network attributes. The syntax is as follows:

"override_attributes": {
    "nova": {
        "network": {
            "public_interface": "<publicInterface>"
        },
        "networks": {
            "public": {
                "label": "public",
                "bridge_dev": "<VMNetworkInterface>",
                "dns2": "8.8.4.4",
                "ipv4_cidr": "<VMNetworkCIDR>",
                "bridge": "<networkBridge>",
                "dns1": "8.8.8.8"
            }
        }
    },
    "mysql": {
        "allow_remote_root": true,
        "root_network_acl": "%"
    },
    "osops_networks": {
        "nova": "<novaNetworkCIDR>",
        "public": "<publicNetworkCIDR>",
        "management": "<managementNetworkCIDR>"
    }
}
                

The following example shows an environment configuration in which all three networks are folded onto a single physical network. This network has an IP address in the 192.0.2.0/24 range. All internal services, API endpoints, and monitoring and management functions run over this network. VMs are brought up on a 198.51.100.0/24 network on eth1, connected to a bridge called br100.

"override_attributes": {
    "nova": {
        "network": {
            "public_interface": "br100"
        },
        "networks": {
            "public": {
                "label": "public",
                "bridge_dev": "eth1",
                "dns2": "8.8.4.4",
                "ipv4_cidr": "198.51.100.0/24",
                "bridge": "br100",
                "dns1": "8.8.8.8"
            }
        }
    },
    "mysql": {
        "allow_remote_root": true,
        "root_network_acl": "%"
    },
    "osops_networks": {
        "nova": "192.0.2.0/24",
        "public": "192.0.2.0/24",
        "management": "192.0.2.0/24"
    }
}
                

Set the Node Environments

To ensure that all changes are made correctly, you must now set the environments of the client nodes to match the environment created on the Chef server. While logged on to the Chef server, run the following command:

# knife exec -E 'nodes.transform("chef_environment:_default") \
  { |n| n.chef_environment("<environmentName>") }'
                

This command will update the environment on all nodes in the cluster. Be aware that if you have any non-OpenStack nodes in your cluster, their environments will be altered as well.
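
To confirm the change, you can search for the nodes that are now assigned to the new environment (substitute your environment name):

# knife search node "chef_environment:<environmentName>" -i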

Add a Controller Node

The Controller node (also known as an infrastructure node) must be installed before any Compute nodes are added. Until the Controller node chef-client run is complete, the endpoint information will not be pushed back to the Chef server, and the Compute nodes will be unable to locate or connect to infrastructure services.

A device with the ha-controller1 role assigned will include all core OpenStack services and should be used even in non-HA environments. For more information about HA, see Controller Node High Availability.

Note

Rackspace ONLY sells and supports a dual-controller architecture. Escalation and Core Support customers should always have dual-controller HA configurations. Users who install a single-controller cloud will not be supported by Rackspace Support. Contact your Rackspace Support representative for more information.

This procedure assumes that you have already installed chef-client on the device, as described in Install chef-client, and that you are logged in to the Chef server.

  1. Add the ha-controller1 role to the target node's run list.
    # knife node run_list add <deviceHostname> 'role[ha-controller1]'
                                
  2. Log in to the target node via ssh.
  3. Run chef-client on the node.

It will take chef-client several minutes to complete the installation tasks. chef-client will provide output to help you monitor the progress of the installation.
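
Once the run completes, you can confirm from the Chef server that the role has been applied to the node's run list:

# knife node show <deviceHostname> -r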

Controller Node High Availability

By applying the ha-controller* roles to two Controller nodes in the environment, you can create a pair of Controller nodes that provide HA with VRRP, monitored by keepalived. Each service has a VIP of its own, and failover occurs on a service-by-service basis. Refer to High Availability Concepts for more information about HA configuration.

Before you configure HA in your environment, you must allocate IP addresses for the MySQL, rabbitmq, and haproxy VIPs on an interface available to both Controller nodes. You will then add the VIPs to the override attributes.

Note

If you are upgrading your environment from an older configuration in which VIP vrid and networks were not defined, you may have to remove the keepalived configurations in /etc/keepalived/conf.d/* and run chef-client before adding the VIP vrid and network definitions to override_attributes.

Havana VIP attribute blocks

These attribute blocks define which VIPs are associated with which service, and they also define the virtual router ID (vrid) and network for each VIP. The neutron-api VIP only needs to be specified if you are deploying OpenStack Networking.

The following example shows the attributes for a v4.2.2 (Havana) VIP configuration where the RabbitMQ VIP is 192.0.2.51, the HAProxy VIP is 192.0.2.52, and the MySQL VIP is 192.0.2.53:

"override_attributes": {
    "vips": {
        "rabbitmq-queue": "192.0.2.51",

        "ceilometer-api": "192.0.2.52",
        "ceilometer-central-agent": "192.0.2.52",
        "cinder-api": "192.0.2.52",
        "glance-api": "192.0.2.52",
        "glance-registry": "192.0.2.52",
        "heat-api": "192.0.2.52",
        "heat-api-cfn": "192.0.2.52",
        "heat-api-cloudwatch": "192.0.2.52",
        "horizon-dash": "192.0.2.52",
        "horizon-dash_ssl": "192.0.2.52",
        "keystone-admin-api": "192.0.2.52",
        "keystone-internal-api": "192.0.2.52",
        "keystone-service-api": "192.0.2.52",
        "nova-api": "192.0.2.52",
"nova-api-metadata": "192.0.2.52", "nova-ec2-public": "192.0.2.52", "nova-novnc-proxy": "192.0.2.52", "nova-xvpvnc-proxy": "192.0.2.52", "swift-proxy": "192.0.2.52", "neutron-api": "192.0.2.52", "mysql-db": "192.0.2.53", "config": { "192.0.2.51": { "vrid": 1, "network": "public" }, "192.0.2.52": { "vrid": 2, "network": "public" }, "192.0.2.53": { "vrid": 3, "network": "public" } } } }

Grizzly VIP attribute blocks

These attribute blocks define which VIPs are associated with which service, and they also define the virtual router ID (vrid) and network for each VIP. The quantum-api VIP only needs to be specified if you are deploying OpenStack Networking.

The following example shows the attributes for a v4.1.n (Grizzly) VIP configuration where the RabbitMQ VIP is 192.0.2.51, the HAProxy VIP is 192.0.2.52, and the MySQL VIP is 192.0.2.53:

"override_attributes": {
    "vips": {
        "rabbitmq-queue": "192.0.2.51",

        "cinder-api": "192.0.2.52",
        "glance-api": "192.0.2.52",
        "glance-registry": "192.0.2.52",
        "horizon-dash": "192.0.2.52",
        "horizon-dash_ssl": "192.0.2.52",
        "keystone-admin-api": "192.0.2.52",
        "keystone-internal-api": "192.0.2.52",
        "keystone-service-api": "192.0.2.52",
        "nova-api": "192.0.2.52",
        "nova-ec2-public": "192.0.2.52",
        "nova-novnc-proxy": "192.0.2.52",
        "nova-xvpvnc-proxy": "192.0.2.52",
        "swift-proxy": "192.0.2.52",

        "neutron-api": "192.0.2.52",

        "mysql-db": "192.0.2.53",

        "config": {
            "192.0.2.51": {
                "vrid": 1,
                "network": "public"
            },
            "192.0.2.52": {
                "vrid": 2,
                "network": "public"
            },
            "192.0.2.53": {
                "vrid": 3,
                "network": "public"
            }
        }
    }
}
 

Installing a pair of High Availability Controller nodes

Follow this procedure to edit your environment file and apply the HA Controller roles to your Controller nodes. A condensed command sketch follows the steps.

  1. Open the environment file for editing.
    # knife environment edit <yourEnvironmentName>
                                
  2. Locate the override_attributes section.
  3. Add the VIP information to the override_attributes.
    • If you are deploying a v4.2.2 Havana environment, refer to Havana VIP attributes.
    • If you are deploying a v4.1.n Grizzly environment, refer to Grizzly VIP attributes.
  4. On the first Controller node, add the ha-controller1 role.
    # knife node run_list add <deviceHostname> 'role[ha-controller1]'
                                
  5. On the second Controller node, add the ha-controller2 role.
    # knife node run_list add <deviceHostname> 'role[ha-controller2]'
                                
  6. Run chef-client on the first Controller node.
  7. Run chef-client on the second Controller node.
  8. Run chef-client on the first Controller node again.
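
Condensed into commands run from the Chef server, the sequence might look like the following sketch (the host names and the admin user are illustrative; the chef-client runs can also be started from a login session on each node):

# knife node run_list add controller1.example.com 'role[ha-controller1]'
# knife node run_list add controller2.example.com 'role[ha-controller2]'
# ssh admin@controller1.example.com 'sudo chef-client'
# ssh admin@controller2.example.com 'sudo chef-client'
# ssh admin@controller1.example.com 'sudo chef-client'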

Add a Compute Node

The Compute nodes can be installed after the Controller node installation is complete.

  1. Add the single-compute role to the target node's run list.
    # knife node run_list add <deviceHostname> 'role[single-compute]'
                                
  2. Log in to the target node via ssh.
  3. Run chef-client on the node.

It will take chef-client several minutes to complete the installation tasks. chef-client will provide output to help you monitor the progress of the installation.

Repeat this process on each Compute node. You will also need to run chef-client on each existing Compute node when additional Compute nodes are added.
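
For example, after adding a new Compute node you could re-run chef-client across the existing Compute nodes from the Chef server with a loop like this (the host names and the admin user are illustrative):

# for host in compute1 compute2; do ssh admin@$host 'sudo chef-client'; done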

Troubleshooting the Installation

If the installation is unsuccessful, it may be due to one of the following issues.

  • The node does not have access to the Internet. The installation process requires Internet access to download installation files, so ensure that the network configuration of the nodes provides that access and that any proxy information you entered is correct. You should also ensure that the nodes have access to a DNS server.
  • Your network firewall is preventing Internet access. Ensure that the IP address that you assign to the Controller is reachable through the network firewall. The quick checks after this list can help narrow down both problems.
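
The following quick checks, run from an affected node, can help narrow down connectivity problems (the target hosts are examples only):

# ping -c 3 8.8.8.8
# nslookup github.com
# curl -I https://github.com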

For more troubleshooting information and user discussion, you can also inquire at the Rackspace Private Cloud Support Forum at the following URL:

https://community.rackspace.com/products/f/45






