
Rackspace Private Cloud Installation Prerequisites and Concepts


This chapter lists the prerequisites for installing a cluster with the Rackspace Private Cloud tools. Rackspace recommends that you review this chapter in detail before attempting the installation.

Table of Contents

Hardware Requirements
Chef Server Requirements
Cluster Node Requirements
Software Requirements
Internet Requirements
Network Requirements
Networking In Rackspace Private Cloud Cookbooks
Preparing For the Installation
Instance Access Considerations in nova-network
Proxy Considerations
Configuring Proxy Environment Settings On the Nodes
Testing Proxy Settings
High Availability
Availability Zones

Hardware Requirements

Rackspace has tested Rackspace Private Cloud deployment with a physical device for each of the following nodes:

  • A Chef server
  • An OpenStack Nova Controller node
  • Additional physical machines with OpenStack Nova Compute nodes as required

If you have different requirements for your environment, contact Rackspace.

Chef Server Requirements

Rackspace recommends that the Chef server hardware meet the following requirements:

  • 4 GB RAM
  • 50 GB disk space
  • Dual socket CPU with dual core

Cluster Node Requirements

Each node in the cluster will have chef-client installed on it. The hardware requirements vary depending on the purpose of the node. Each device should support VT-x. Refer to the following table for detailed requirements.

Node Type         Requirements
Nova Controller   16 GB RAM, 144 GB disk space, dual socket CPU with dual core or single socket quad core
Nova Compute      32 GB RAM, 144 GB disk space, dual socket CPU with dual core or single socket quad core

CPU overcommit is set at 16:1 VCPUs to cores, and memory overcommit is set to 1.5:1. Each physical core can support up to 16 virtual cores; for example, one dual-core processor can support up to 32 virtual cores. If you require more virtual cores, add additional physical nodes to your configuration.
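
As a quick sanity check on a candidate Compute node, you can confirm that the CPU exposes hardware virtualization and estimate the vCPU capacity implied by the 16:1 overcommit ratio. These are standard Linux commands shown only as a sketch; svm is the AMD equivalent of VT-x.

$ egrep -c '(vmx|svm)' /proc/cpuinfo                          # a non-zero count means the CPU advertises hardware virtualization
$ echo "Max vCPUs at 16:1 overcommit: $(( $(nproc) * 16 ))"   # physical cores multiplied by the overcommit ratio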

For a list of Private Cloud certified devices, refer to Private Cloud Certified Devices - Compute.

Software Requirements

The following table lists the operating systems on which Rackspace Private Cloud has been tested.

Operating System               Tested Rackspace Private Cloud Version
Ubuntu 12.04                   All versions
Ubuntu 12.10 and later         None
CentOS 6.3                     v4.0.0 and earlier
CentOS 6.4                     v4.1.2 and later
CentOS 6.5                     v4.2.2 and later
Red Hat Enterprise Linux 6.0   None

It is possible to install Rackspace Private Cloud on untested operating systems, but doing so may cause unexpected issues.

If you require OpenStack Networking, Rackspace recommends that you use Rackspace Private Cloud v4.2.2 with CentOS 6.4 or Ubuntu 12.04 for the Controller and Networking nodes.

The following kernels have been tested:

  • linux-image-3.8.0-38-generic
  • linux-image-3.2.0-58-generic 
  • kernel-2.6.32-358.123.2.openstack.el6.x86_64

Internet Requirements

Internet access is required to complete the installation. Ensure that the devices that you use can reach the internet to download the installation files.
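
For example, you can confirm basic outbound connectivity from each node before you begin; the URL below is simply a well-known public mirror and can be replaced with any external site reachable from your environment.

$ curl -sI http://archive.ubuntu.com    # expect HTTP response headers if outbound access works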

Network Requirements

A network deployed with the Rackspace Private Cloud cookbooks uses nova-network by default, but OpenStack Networking (Neutron, formerly Quantum) can be manually enabled. For proof-of-concept and demonstration purposes, nova-network will be adequate, but if you intend to build a production cluster and require software-defined networking for any reason, you should use OpenStack Networking, which is designed as a replacement for nova-network.

If you want to use OpenStack Networking for your private cloud, you will have to specify it in your Chef environment and configure the nodes appropriately. See Configuring OpenStack Networking for detailed information about OpenStack Networking concepts and instructions for configuring OpenStack Networking in your environment.

Networking In Rackspace Private Cloud Cookbooks

The cookbooks are built on the following assumptions:

  • The IP addresses of the infrastructure are fixed.
  • Binding endpoints for the services are best described by the networks to which they are connected.

The cookbooks contain definitions for three general networks and what services bind to them. These networks are defined by CIDR range, and any network interface with an address within the named CIDR range is assumed to be included in that network. You can specify the same CIDR for multiple networks; for example, the nova and management networks can share a CIDR.
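
As an illustration, these network definitions are supplied as CIDR strings in the Chef environment. The snippet below is only a sketch: the osops_networks attribute layout is an assumption that you should confirm against your cookbook version, and the addresses are placeholders showing all three networks sharing one CIDR.

"override_attributes": {
    "osops_networks": {
        "nova": "192.168.100.0/24",
        "public": "192.168.100.0/24",
        "management": "192.168.100.0/24"
    }
}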

The following list shows, for each of these general networks, the services that bind to an IP address within that network.

nova network:
  • keystone-admin-api
  • nova-xvpvnc-proxy
  • nova-novnc-proxy
  • nova-novnc-server
public network:
  • graphite-api
  • keystone-service-api
  • glance-api
  • glance-registry
  • nova-api
  • nova-ec2-admin
  • nova-ec2-public
  • nova-volume
  • neutron-api
  • cinder-api
  • ceilometer-api
  • horizon-dash
  • horizon-dash_ssl
management network:
  • graphite-statsd
  • graphite-carbon-line-receiver
  • graphite-carbon-pickle-receiver
  • graphite-carbon-cache-query
  • memcached
  • collectd
  • mysql
  • keystone-internal-api
  • glance-admin-api
  • glance-internal-api
  • nova-internal-api
  • nova-admin-api
  • cinder-internal-api
  • cinder-admin-api
  • cinder-volume
  • ceilometer-internal-api
  • ceilometer-admin-api
  • ceilometer-central

The configuration allows you to use either multiple physical interfaces or VLAN-separated sub-interfaces. You can create VLAN-tagged interfaces either in the nova-network configuration (with Linux bridges) or in Neutron (with OVS). Single-NIC deployments are also possible. Both multi-NIC and single-NIC configurations are described in Configuring OpenStack Networking.
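
For reference, a VLAN-tagged sub-interface can be created manually on a Linux host with standard iproute2 commands. The parent interface name and VLAN ID below are placeholders, and in a deployed cluster the cookbooks or the networking agents typically manage these interfaces for you.

$ ip link add link eth1 name eth1.100 type vlan id 100    # create a sub-interface for VLAN 100 on eth1
$ ip link set dev eth1.100 up                             # bring the sub-interface up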

Preparing For the Installation

You need the following information for the installation (an illustrative worksheet follows the list):

  • The nova network address in CIDR format
  • The nova network bridge, such as br100 or eth0. This will be used as the VM bridge on Compute nodes. The default value is usually acceptable. This bridge will be created by Nova as necessary and does not need to be manually configured.
  • The public network address in CIDR format. This is where public API services, such as the public Keystone endpoint and the dashboard, will run. An IP address from this network should be configured on all hosts in the Nova cluster.
  • The public network interface, such as eth0 or eth1. This is the network interface that is connected to the public (Internet/WAN) network on Compute nodes. In a nova-network configuration, instance traffic on Compute nodes is NATed out through this interface unless a floating IP address has been assigned to the instance.
  • The management network address in CIDR format. This network is used for communication among services such as monitoring and syslog. This may be the same as your Nova public network if you do not want to separate service traffic.
  • The VM network CIDR range. This is the range from which IP addresses will be automatically assigned to VMs. An address from this network will be visible from within your instance on eth0. This network should be dedicated to OpenStack and not shared with other services.
  • The network interface of the VM network for the Compute nodes (such as eth1).
  • The name of the Nova cluster. This should be unique and composed of alphanumeric characters, hyphens (-), or underscores (_).
  • The name of the default availability zone. Rackspace recommends using nova as the default.
  • For nova-network configurations, an optional NAT exclusion range for networks configured with a DMZ: a comma-separated list of CIDR network ranges that will be excluded from NAT rules. This enables direct communication to and from instances from other network ranges without the use of floating IPs.
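
The worksheet below is one illustrative way to record these values before you edit your Chef environment. It uses shell-style placeholder assignments purely for readability; every name and address is a placeholder, not a required setting.

NOVA_NETWORK_CIDR=192.168.100.0/24   # nova network
NOVA_NETWORK_BRIDGE=br100            # VM bridge on Compute nodes
PUBLIC_NETWORK_CIDR=10.1.1.0/24      # public API network
PUBLIC_INTERFACE=eth0                # interface on the public (Internet/WAN) network
MGMT_NETWORK_CIDR=10.1.1.0/24        # management network (may match the public network)
VM_NETWORK_CIDR=172.31.0.0/22        # range assigned automatically to instances
VM_INTERFACE=eth1                    # VM network interface on Compute nodes
CLUSTER_NAME=my-private-cloud        # alphanumeric characters, hyphens, or underscores
DEFAULT_AZ=nova                      # default availability zone
DMZ_CIDR=""                          # optional comma-separated NAT exclusion range(s)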

Instance Access Considerations in nova-network

In a nova-network configuration, by default, the instances that you create in the OpenStack cluster can be publicly accessed via NAT only by assigning floating IP addresses to them. Before you assign a floating IP address to an instance, you must have a pool of addresses to choose from. Your network security team must provision an address range and assign it to your environment. These addresses need to be publicly accessible. Floating IP addresses are not specified during the installation process; once the Controller node is operational, you can add them with the nova-manage floating create --ip_range command. Refer to "Managing Floating IP Addresses".
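
For example, after the Controller node is operational, a floating IP pool could be created from a publicly routable range supplied by your network team; the range below is a documentation placeholder.

$ nova-manage floating create --ip_range=203.0.113.0/28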

You can also make the instances accessible to other hosts in the network by default by configuring the cloud with a network DMZ. The network DMZ range cannot be the same as the nova network range. Specifying a DMZ enables NAT-free network traffic between the virtual machine instances and resources outside of the nova fixed network. For example, if the nova fixed network is 10.1.0.0/16 and you specify a DMZ of 172.16.0.0/12, any devices or hosts in that range will be able to communicate with the instances on the nova fixed network.

To use the DMZ, you must have at least two NICs on the deployment servers. One NIC must be dedicated to the VM instances.
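
Under nova-network, this exclusion is an assumption that ultimately maps to the dmz_cidr option in nova.conf on the Compute nodes; the cookbooks populate it from your environment, so the line below is shown for reference only, with a placeholder range.

dmz_cidr=172.16.0.0/12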


Proxy Considerations

In general, the Rackspace Private Cloud installation instructions assume that none of your nodes are behind a proxy. If they are behind a proxy, review this section before proceeding with the installation. Rackspace has not yet tested a hybrid environment where some nodes are behind the proxy and others are not.

Configuring Proxy Environment Settings On the Nodes

You must make your proxy settings available to the entire OS on each node by configuring /etc/environment as follows:

http_proxy=http://<yourProxyURL>:<port>
https_proxy=http://<yourProxyURL>:<port>
ftp_proxy=http://<yourProxyURL>:<port>
no_proxy=localhost,<node1>,<node2>

Replace node1 and node2 with the hostnames of your nodes.

In all cases, no_proxy is required and must contain a localhost entry. If localhost is missing, the Omnibus Chef Server installation will fail. Ubuntu requires http_proxy and no_proxy at a minimum. CentOS requires http_proxy, https_proxy, and no_proxy for yum packages and key updates. However, because installation methods might change over time, Rackspace recommends that you set as many variables as you can.

The nodes must also have sudo configured to retain the following environment variables:

Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"  
                    

In Ubuntu, this can be put in a file in /etc/sudoers.d. In CentOS, ensure that your version of sudo loads files from /etc/sudoers.d before adding this setting.
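
For example, on Ubuntu you could create the file as shown below; the file name proxy_env is arbitrary, and visudo -c validates the result.

$ echo 'Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"' | sudo tee /etc/sudoers.d/proxy_env
$ sudo chmod 0440 /etc/sudoers.d/proxy_env    # sudoers.d files are conventionally root-owned and mode 0440
$ sudo visudo -c                              # check the syntax of all sudoers files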

Testing Proxy Settings

You can verify that the proxy settings are correct by logging in and running env and sudo env. You should see the configured http_proxy, https_proxy, ftp_proxy, and no_proxy settings in the output, as in the following example.

$ env
TERM=screen-256color
SHELL=/bin/bash
SSH_CLIENT=<ssh-url>
SSH_TTY=/dev/pts/0
LC_ALL=en_US
http_proxy=<yourProxyURL>:<port>
ftp_proxy=<yourProxyURL>:<port>
USER=admin
PATH=/usr/local/sbin
PWD=/home/admin
LANG=en_US.UTF-8
https_proxy=<yourProxyURL>:<port>
SHLVL=1
HOME=/home/admin
no_proxy=localhost,chef-server,client1
LOGNAME=admin
SSH_CONNECTION=<sshConnectionInformation>
LC_CTYPE=en_US.UTF-8
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
                
$ sudo env
TERM=screen-256color
LC_ALL=en_US
http_proxy=<yourProxyURL>:<port>
ftp_proxy=<yourProxyURL>:<port>
PATH=/usr/local/sbin
LANG=en_US.UTF-8
https_proxy=<yourProxyURL>:<port>
HOME=/home/admin
no_proxy=localhost,chef-server,client1
LC_CTYPE=en_US.UTF-8
SHELL=/bin/bash
LOGNAME=root
USER=root
USERNAME=root
MAIL=/var/mail/root
SUDO_COMMAND=/usr/bin/env
SUDO_USER=admin
SUDO_UID=1000
SUDO_GID=1000
               

High Availability

Rackspace Private Cloud can implement high availability (HA) for all Nova service components and APIs, as well as for Cinder, Keystone, and Glance, and for the scheduler, RabbitMQ, and MySQL. HA functionality is powered by Keepalived and HAProxy.

Rackspace Private Cloud uses the following methods to implement HA in your cluster.

  • MySQL master-master replication and active-passive failover: MySQL is installed on both Controller nodes, and master-master replication is configured between the nodes. Keepalived manages connections to the two nodes, so that only one node receives read/write requests at any one time.
  • RabbitMQ active/passive failover: RabbitMQ is installed on both Controller nodes. Keepalived manages connections to the two nodes, so that only one node is active at any one time.
  • API load balancing: All services that are stateless and can be load balanced (essentially all the APIs and a few others) are installed on both Controller nodes. HAProxy is then installed on both nodes, and Keepalived manages connections to HAProxy, which makes HAProxy itself HA. Keystone endpoints and all API access go through Keepalived.

Availability Zones

Availability zones enable you to manage and isolate different nodes within the environment. For example, you might want to isolate different sets of Compute nodes to provide different resources to customers. If one availability zone experiences downtime, other zones in the cluster are not affected.

When you create a Nova cluster, it is created with a default availability zone, and all Compute nodes are assigned to that zone. You can create additional availability zones within the cluster as needed.
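
For example, with the nova client from this OpenStack release, an additional availability zone is typically defined through a host aggregate and then populated with Compute nodes; the aggregate, zone, and host names below are placeholders.

$ nova aggregate-create az2-aggregate az2          # create an aggregate that exposes availability zone az2
$ nova aggregate-add-host az2-aggregate compute03  # assign a Compute node to the new zone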






