This document discusses the prerequisites for installing a cluster with the Rackspace Private Cloud tools. It is strongly recommended that you review this chapter in detail before continuing with the installation.
Note: For information about Rackspace Private Cloud v. 2.0 and v. 3.0, see Rackspace Private Cloud Software: Archived Versions.
Rackspace has tested Rackspace Private Cloud deployment with a physical device for each node:
If you have different requirements for your environment, contact Rackspace.
Rackspace recommends that the Chef server hardware meet the following requirements:
Each node in the cluster will have chef-client installed on it. The hardware requirements vary depending on the purpose of the node. Each device should support VT-x. Refer to the following table for detailed requirements.
CPU overcommit is set at 16:1 VCPUs to cores, and memory overcommit is set to 1.5:1. Each physical core can support up to 16 virtual cores; for example, one dual-core processor can support up to 32 virtual cores. If you require more virtual cores, add additional physical nodes to your configuration.
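The capacity arithmetic above can be sketched in a few lines of Python. This is an illustrative calculation only, not part of the Rackspace tooling, and the host specifications in the example are made-up values:

```python
# Estimate schedulable capacity of a compute node under the default
# overcommit ratios: 16:1 vCPUs to physical cores, 1.5:1 memory.
CPU_OVERCOMMIT = 16
MEMORY_OVERCOMMIT = 1.5

def max_vcpus(physical_cores):
    """Maximum virtual cores a node can offer at 16:1 CPU overcommit."""
    return physical_cores * CPU_OVERCOMMIT

def max_instance_memory_mb(physical_memory_mb):
    """Maximum memory (MB) allocatable to instances at 1.5:1 overcommit."""
    return int(physical_memory_mb * MEMORY_OVERCOMMIT)

# Example: one dual-core processor supports up to 32 virtual cores.
print(max_vcpus(2))                  # 32
print(max_instance_memory_mb(16384)) # 24576
```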
The following table lists the operating systems on which Rackspace Private Cloud has been tested.
| Operating System | Tested Rackspace Private Cloud Version |
|------------------|----------------------------------------|
| Ubuntu 12.04 | All versions |
| Ubuntu 12.10 and later | None |
| CentOS 6.3 | v4.0.0 and earlier |
| CentOS 6.4 | v4.1.2 and later |
Internet access is required to complete the installation; ensure that each device can reach the internet to download the installation files.
A network deployed with the Rackspace Private Cloud cookbooks uses nova-network by default, but OpenStack Networking (Neutron, formerly Quantum) can be manually enabled. The cookbooks are built on the following assumptions:
The cookbooks contain definitions for three general networks and what services bind to them. These networks are defined by CIDR range, and any network interface with an address within the named CIDR range is assumed to be included in that network. You can specify the same CIDR for multiple networks; for example, the management networks can share a CIDR.
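The CIDR-membership rule described above can be sketched with Python's standard `ipaddress` module. This is a minimal illustration, not code from the cookbooks, and the network names and ranges are example values:

```python
import ipaddress

# Example networks defined by CIDR range; names and ranges are
# illustrative, not the cookbooks' actual defaults.
NETWORKS = {
    "management": ipaddress.ip_network("192.168.100.0/24"),
    "nova": ipaddress.ip_network("10.1.0.0/16"),
    "public": ipaddress.ip_network("172.16.0.0/12"),
}

def networks_for(interface_ip):
    """Return the named networks whose CIDR contains this interface address."""
    ip = ipaddress.ip_address(interface_ip)
    return [name for name, cidr in NETWORKS.items() if ip in cidr]

print(networks_for("192.168.100.15"))  # ['management']
print(networks_for("10.1.2.3"))        # ['nova']
```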
The following table lists the networks and the services that bind to the IP address within each of these general networks.
The configuration allows you to have either multiple interfaces or VLAN-separated sub-interfaces. You can create VLAN-tagged interfaces either in the nova-network configuration with Linux bridges or in Neutron with OVS. Single-NIC deployment is possible but requires more extensive interface configuration and some way to separate the osops networks from tenant networks. This documentation addresses only the simpler multi-NIC use case.
Depending on whether you are using nova-network or Neutron, you will need to prepare different sets of information before configuring your network.
You will need the following sets of information for the installation:
OpenStack Networking (Neutron) is designed as a replacement for nova-network.
If you want to use OpenStack Networking for your private cloud, you will have to specify it in your Chef environment and configure the nodes appropriately. See Configuring OpenStack Networking for detailed information about OpenStack Networking concepts and instructions for configuring OpenStack Networking in your environment.
In general, our installation instructions assume that none of your nodes are behind a proxy. If they are behind a proxy, review this section before proceeding with the installation. Rackspace has not yet tested a hybrid environment where some nodes are behind the proxy and others are not.
You must make your proxy settings available to the entire OS on each node by configuring /etc/environment as follows:
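A sketch of such a file, assuming a proxy reachable at proxy.example.com on port 3128 (both the proxy address and the port are placeholder values; adjust them to your environment):

```
http_proxy="http://proxy.example.com:3128/"
https_proxy="http://proxy.example.com:3128/"
ftp_proxy="http://proxy.example.com:3128/"
no_proxy="localhost,127.0.0.1,node1,node2"
```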
Replace node1 and node2 with the hostnames of your nodes.
In all cases, no_proxy is required and must contain a localhost entry. If localhost is missing, the Omnibus Chef Server installation will fail. Ubuntu requires no_proxy at a minimum. CentOS requires no_proxy for yum packages and key updates. However, it is best practice to set as many variables as you can, since installation methods may change over time.
The nodes must also have sudo configured to retain the following environment variables:

```
Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"
```
In Ubuntu, this can be put in a file in /etc/sudoers.d. In CentOS, ensure that your version loads files in /etc/sudoers.d before adding this variable.
You can verify that the proxy settings are correct by logging in and running env and sudo env. You should see the configured http, https, ftp, and no_proxy settings in the output, as in the following examples.

The output of env:

```
TERM=screen-256color
SHELL=/bin/bash
SSH_CLIENT=<ssh-url>
SSH_TTY=/dev/pts/0
LC_ALL=en_US
http_proxy=<port>
USER=admin
PATH=/usr/local/sbin
PWD=/home/admin
LANG=en_US.UTF-8
https_proxy=<port>
SHLVL=1
HOME=/home/admin
no_proxy=localhost,<sshConnectionInformation>
LC_CTYPE=en_US.UTF-8
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
```

The output of sudo env:

```
TERM=screen-256color
LC_ALL=en_US
http_proxy=<port>
PATH=/usr/local/sbin
LANG=en_US.UTF-8
https_proxy=<port>
no_proxy=localhost,client1
LC_CTYPE=en_US.UTF-8
SHELL=/bin/bash
LOGNAME=root
USER=root
USERNAME=root
MAIL=/var/mail/root
SUDO_COMMAND=/usr/bin/env
SUDO_USER=admin
SUDO_UID=1000
SUDO_GID=1000
```
In a nova-network configuration, by default, the instances that you create in the OpenStack cluster can only be publicly accessed via NAT by assigning floating IP addresses to them. Before you assign a floating IP address to an instance, you must have a pool of addresses to choose from. Your network security team must provision an address range and assign it to your environment. These addresses need to be publicly accessible. Floating IP addresses are not specified during the installation process; once the Controller node is operational, you can add them with the nova-manage floating create --ip_range command. Refer to "Managing Floating IP Addresses".
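For example, once the Controller node is operational, a floating IP pool can be created with a command of the following form. The address range shown is an illustrative placeholder; use the range provisioned by your network security team:

```
nova-manage floating create --ip_range=192.0.2.0/28
```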
You can also make the instances accessible to other hosts in the network by default by configuring the cloud with a network DMZ. The network DMZ range cannot be the same as the nova fixed network range. Specifying a DMZ enables NAT-free network traffic between the virtual machine instances and resources outside of the nova fixed network. For example, if the nova fixed network is 10.1.0.0/16 and you specify a DMZ of 172.16.0.1/12, any devices or hosts in that range will be able to communicate with the instances on the nova fixed network.
To use the DMZ, you must have at least two NICs on the deployment servers. One NIC must be dedicated to the VM instances.
Rackspace Private Cloud can implement high availability for all Nova service components and APIs, as well as for Cinder, Keystone, and Glance, the scheduler, RabbitMQ, and MySQL. HA functionality is powered by Keepalived and HAProxy.
There are three methods through which Rackspace Private Cloud cookbooks implement high availability in your cluster:
Availability zones enable you to manage and isolate different nodes within the environment. For example, you may want to isolate different sets of Compute nodes to provide different resources to customers. If one availability zone experiences downtime, other zones in the cluster will not be affected.
When you create a Nova cluster, it is created with a default availability zone, and all Compute nodes will be assigned to that zone. You can create additional availability zones within the cluster as needed.
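As a sketch, an additional availability zone can be created from the command line by defining a host aggregate with an associated zone and adding Compute nodes to it. The aggregate, zone, and host names below are hypothetical examples, not values from the cookbooks:

```
nova aggregate-create rack1-aggregate rack1-az
nova aggregate-add-host rack1-aggregate compute-node-01
```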
© 2011-2013 Rackspace US, Inc.
Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License