
Rackspace Private Cloud 3.0 Installation Prerequisites and Concepts


NOTE: This documentation is for the Rackspace Private Cloud Software v. 3.0 with OpenCenter. Other v. 3.0 documentation can be found on the Archived Versions page. Documentation for the newest version of Rackspace Private Cloud Software can be found in the Getting Started section.

 

 


This section discusses the hardware, software, and networking prerequisites for installing Rackspace Private Cloud Software. It also discusses concepts behind availability zones and high availability in Rackspace Private Cloud Software.

Hardware Requirements
OpenCenter and Chef Server Requirements
Cluster Node Requirements
Deploying OpenCenter in an All-VM Environment
Software Requirements
Network Requirements
Preparing For the Installation
Proxy Considerations
Node Access Considerations
Instance Access Considerations
High Availability Concepts
Availability Zone Concepts

Hardware Requirements

Rackspace has tested two hardware-based scenarios for Rackspace Private Cloud Software deployment, with different hardware requirements.

  • A physical device for each required node: an OpenCenter server, a Chef server, an OpenStack Nova Controller node, and additional physical machines for OpenStack Nova Compute nodes as required.
  • One physical device hosting VMs for the OpenCenter server and the Chef server, with the OpenStack Nova Controller node installed on the host, plus additional physical machines for the OpenStack Nova Compute nodes as required. For information about this configuration, refer to "Installing OpenCenter in a Virtual Machine Configuration".

OpenCenter and Chef Server Requirements

Rackspace recommends that the OpenCenter server meet the following minimum requirements:

  • 8 GB RAM
  • 144 GB disk space
  • Dual socket CPU with dual core. A dual socket CPU with a hex core (for a total of 6-12 cores) will provide better performance.

The Chef server hardware should meet the following requirements:

  • 16 GB RAM
  • 144 GB disk space
  • Dual socket CPU with dual core, or single socket quad core

Cluster Node Requirements

Each node in the cluster will have the OpenCenter agent installed on it. The hardware requirements vary depending on the purpose of the node. Each device should support VT-x. Refer to the following table for detailed requirements.

Node Type         Requirements
Nova Controller   • 16 GB RAM
                  • 144 GB disk space
                  • Dual socket CPU with dual core, or single socket quad core
Nova Compute      • 32 GB RAM
                  • 144 GB disk space
                  • Dual socket CPU with dual core, or single socket quad core
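
To confirm that a node's CPU advertises hardware virtualization support, you can check the processor flags; vmx indicates Intel VT-x and svm indicates AMD-V. This is a general Linux check offered here only as a convenience:

$ egrep -c '(vmx|svm)' /proc/cpuinfo

A result greater than zero means the extensions are present. They must also be enabled in the BIOS.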

CPU overcommit is set at 16:1 VCPUs to cores, and memory overcommit is set to 1.5:1. Each physical core can support up to 16 virtual cores; for example, one dual-core processor can support up to 32 virtual cores. If you require more virtual cores, adjust your sizing appropriately.
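
For example, the following shell arithmetic sketches how these ratios translate into capacity for a hypothetical dual socket, hex core Compute node with 32 GB of RAM; the figures are illustrative only:

$ physical_cores=12; vcpu_ratio=16
$ echo "$((physical_cores * vcpu_ratio)) virtual cores available"
192 virtual cores available
$ physical_ram_gb=32
$ echo "$((physical_ram_gb * 3 / 2)) GB of RAM available to instances"
48 GB of RAM available to instances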

Deploying OpenCenter in an All-VM Environment

For testing purposes, it is also possible to deploy Rackspace Private Cloud Software on a group of virtual machines, such as a group of Rackspace Cloud Servers. The virtual machines should meet specifications similar to the standard hardware specifications, though it is possible for the Chef server, Controller, and Compute nodes to be as small as 8 GB if you are only doing proof-of-concept tests.

Software Requirements

The device on which OpenCenter Server is installed and all OpenCenter-managed devices must be using one of the following operating systems:

  • Ubuntu 12.04
  • CentOS 6.3
  • RHEL 6.3 or 6.4

The GUI package can also be installed on OS X.

Network Requirements

Internet access is required to complete the installation; ensure that every device involved can reach the internet to download the installation files.

Rackspace Private Cloud Software creates a FlatDHCP network in multi_host mode, in which nova-network software is installed and configured on each server that is running nova-compute. Further conceptual information about Flat DHCP networking is available in the OpenStack Compute Administration Manual.
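
As a rough illustration, FlatDHCP networking in multi_host mode is typically expressed through nova.conf settings similar to the following; the actual settings are generated by the cookbooks from the answers you provide during installation, and the interface names and CIDR shown here are assumptions:

[DEFAULT]
network_manager=nova.network.manager.FlatDHCPManager
multi_host=True
public_interface=eth0
flat_interface=eth1
flat_network_bridge=br100
fixed_range=10.1.0.0/16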

Preparing For the Installation

Before you begin, have the following networking information prepared and available:

  • The Nova public network in CIDR format.
  • The Nova public network interface (such as eth0).
  • The Nova VM network bridge (such as br100).
  • Optional NAT exclusion CIDR range or ranges, for networks configured with a DMZ.
  • The name of the Nova cluster.
  • A password for an admin OpenStack user.
  • Nova management network address in CIDR format.
  • The interface of the VM network for the Compute nodes (such as eth1).
  • The name of the default availability zone.
  • The VM network CIDR range.
  • The Nova internal network CIDR range that you want to assign to each Controller and Compute node.
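
For example, a completed worksheet for a small cluster might resemble the following; every value is illustrative and must be replaced with addresses and names from your own environment:

Nova public network:        192.0.2.0/24 on eth0
Nova management network:    10.0.0.0/24
Nova VM (fixed) network:    10.1.0.0/16 on eth1, bridge br100
NAT exclusion (DMZ) range:  172.16.0.0/12 (optional)
Cluster name:               nova
Default availability zone:  nova
OpenStack admin password:   (choose a strong password)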

Proxy Considerations

In general, our installation instructions assume that none of your OpenCenter nodes are behind a proxy. If they are behind a proxy, review this section before proceeding with the installation. Rackspace has not yet tested a hybrid environment where some nodes are behind the proxy and others are not.

Configuring Proxy Environment Settings On the Nodes

You must make your proxy settings available to the entire OS on each node by configuring /etc/environment as follows:

# /etc/environment
http_proxy=http://your-proxy-URL:port
https_proxy=http://your-proxy-URL:port
ftp_proxy=http://your-proxy-URL:port
no_proxy=localhost,opencenter-node1,opencenter-node2

Replace opencenter-node1 and opencenter-node2 with the hostnames of your OpenCenter nodes.

In all cases, no_proxy is required and must contain a localhost entry. If localhost is missing, the Omnibus Chef Server installation will fail. Ubuntu requires http_proxy and no_proxy at a minimum. CentOS requires http_proxy, https_proxy, and no_proxy for yum packages and key updates. However, it is best practice to set as many variables as you can, since installation methods may change over time.

The nodes must also have sudo configured to retain the following environment variables:

Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"  

In Ubuntu, this can be put in a file in /etc/sudoers.d. In CentOS and RHEL, ensure that your version of sudo loads files in /etc/sudoers.d before adding this variable.
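
For example, on Ubuntu you might create the file as follows; the file name is arbitrary, and you should verify the result with visudo:

$ echo 'Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"' | sudo tee /etc/sudoers.d/env_keep_proxy
$ sudo chmod 0440 /etc/sudoers.d/env_keep_proxy
$ sudo visudo -c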

If the proxy is configured after OpenCenter has already been installed, restart the opencenter-agent service once the proxy configuration is complete.

  • On Ubuntu:

    $ service opencenter-agent restart
    
  • On CentOS or RHEL:

    $ initctl restart opencenter-agent
    

Testing Proxy Settings

You can verify that the proxy settings are correct by logging in and running env and sudo env. You should see the configured http, https, ftp, and no_proxy settings in the output, as in the following example.

$ env
TERM=screen-256color
SHELL=/bin/bash
SSH_CLIENT=ssh-url
SSH_TTY=/dev/pts/0
LC_ALL=en_US
http_proxy=your-proxy-URL:port
ftp_proxy=your-proxy-URL:port
USER=admin
PATH=/usr/local/sbin
PWD=/home/admin
LANG=en_US.UTF-8
https_proxy=your-proxy-URL:port
SHLVL=1
HOME=/home/admin
no_proxy=localhost,opencenter-server,opencenter-client1
LOGNAME=admin
SSH_CONNECTION=ssh-connection-information
LC_CTYPE=en_US.UTF-8
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
                
$ sudo env
TERM=screen-256color
LC_ALL=en_US
http_proxy=your-proxy-URL:port
ftp_proxy=your-proxy-URL:port
PATH=/usr/local/sbin
LANG=en_US.UTF-8
https_proxy=your-proxy-URL:port
HOME=/home/admin
no_proxy=localhost,opencenter-server,opencenter-client1
LC_CTYPE=en_US.UTF-8
SHELL=/bin/bash
LOGNAME=root
USER=root
USERNAME=root
MAIL=/var/mail/root
SUDO_COMMAND=/usr/bin/env
SUDO_USER=admin
SUDO_UID=1000
SUDO_GID=1000

Node Access Considerations

All nodes within the OpenCenter environment must be able to access one another. Provided that the nodes on which the agent is installed have outbound connectivity to the OpenCenter server, the nodes can be physically located anywhere. All communication between the server and agent runs from the agent to the server.

Currently, if you delete the agent from a node, you must also manually delete the node from the server with the opencentercli node delete command, as in the example below.
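
For example, for a node named compute-03 (the node name is hypothetical, and the exact argument your version of the CLI expects may differ):

$ opencentercli node delete compute-03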

Instance Access Considerations

By default, the instances that you create in the OpenStack cluster can only be publicly accessed via NAT by assigning floating IP addresses to them. Before you assign a floating IP address to an instance, you must have a pool of addresses to choose from. Your network security team must provision an address range and assign it to your environment. These addresses need to be publicly accessible. Floating IP addresses are not specified during the installation process; once the Controller node is operational, you can add them with the nova-manage floating create --ip_range command. Refer to "Managing Floating IP Addresses".
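
For example, to create a small pool from a range provisioned by your network team (the CIDR below is a documentation range used purely for illustration):

$ nova-manage floating create --ip_range=203.0.113.32/28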

You can also make the instances accessible to other hosts in the network by default by configuring the cloud with a network DMZ. The network DMZ range cannot be the same as the nova fixed network range. Specifying a DMZ enables NAT-free network traffic between the virtual machine instances and resources outside of the nova fixed network. For example, if the nova fixed network is 10.1.0.0/16 and you specify a DMZ of 172.16.0.1/12, any devices or hosts in that range will be able to communicate with the instances on the nova fixed network.

To use the DMZ, you must have at least two NICs on the deployment servers. One NIC must be dedicated to the VM instances.

 

 

High Availability Concepts

Rackspace Private Cloud Software can implement high availability for all Nova service components and APIs, as well as for Cinder, Keystone, Glance, the scheduler, RabbitMQ, and MySQL. HA functionality is powered by Keepalived and HAProxy.

There are three methods through which Rackspace Private Cloud Software cookbooks implement high availability in your cluster:

  • MySQL master-master replication with active-passive failover: MySQL is installed on both controller nodes, and master-master replication is configured between the nodes. Keepalived manages connections to the two nodes so that only one node receives reads and writes at any one time.
  • RabbitMQ active-passive failover: RabbitMQ is installed on both controller nodes. Keepalived manages connections to the two nodes so that only one node is active at any one time.
  • API load balancing: All services that are stateless and can be load balanced (essentially all of the APIs and a few others) are installed on both controller nodes. HAProxy is then installed on both nodes, and Keepalived manages connections to HAProxy, which makes HAProxy itself highly available. Keystone endpoints and all API access go through Keepalived.
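
As a rough sketch of this pattern (not the configuration that the cookbooks actually generate), Keepalived holds a virtual IP that floats between the two controller nodes, and HAProxy on whichever node holds the VIP distributes API traffic. The interface, router ID, and VIP below are assumptions:

vrrp_instance api_vip {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10
    }
}

The node that wins the VRRP election holds 192.0.2.10, so Keystone endpoints and other API addresses always point at a single address regardless of which controller is currently active.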

Availability Zone Concepts

Availability zones enable you to manage and isolate different nodes within the environment. For example, you may want to isolate different sets of Compute nodes to provide different resources to customers. If one availability zone experiences downtime, other zones in the cluster will not be affected.

When you create a Nova cluster with OpenCenter, it is created with a default availability zone, and all Compute nodes are assigned to that zone. You can create additional availability zones within the cluster as needed.






