This section describes the hardware, software, and networking prerequisites for installing Rackspace Private Cloud Software, along with the concepts behind availability zones and high availability in Rackspace Private Cloud Software.
Rackspace has tested two hardware-based scenarios for Rackspace Private Cloud Software deployment, with different hardware requirements.
Rackspace recommends that the OpenCenter server meets the following minimum requirements:
The Chef server hardware should meet the following requirements:
Each node in the cluster will have the OpenCenter agent installed on it. The hardware requirements vary depending on the purpose of the node. Each device should support VT-x. Refer to the following table for detailed requirements.
CPU overcommit is set at 16:1 VCPUs to cores, and memory overcommit is set to 1.5:1. Each physical core can support up to 16 virtual cores; for example, one dual-core processor can support up to 32 virtual cores. If you require more virtual cores, adjust your sizing appropriately.
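The sizing arithmetic above can be sketched as follows. The 16:1 and 1.5:1 ratios come from the text; the core and RAM figures in the usage lines are illustrative assumptions, not recommended hardware.

```python
# Capacity sketch for the overcommit ratios described above.
# The ratios (16:1 CPU, 1.5:1 memory) are from the text; the
# hardware figures below are illustrative assumptions.

CPU_OVERCOMMIT = 16    # VCPUs per physical core
MEM_OVERCOMMIT = 1.5   # virtual GB per physical GB of RAM

def max_vcpus(physical_cores: int) -> int:
    """Maximum virtual cores a node can offer at 16:1 overcommit."""
    return physical_cores * CPU_OVERCOMMIT

def max_virtual_ram_gb(physical_ram_gb: float) -> float:
    """Maximum virtual memory a node can offer at 1.5:1 overcommit."""
    return physical_ram_gb * MEM_OVERCOMMIT

# One dual-core processor supports up to 32 virtual cores:
print(max_vcpus(2))            # 32
# A hypothetical node with 16 GB RAM can be oversubscribed to 24 GB:
print(max_virtual_ram_gb(16))  # 24.0
```

If your instance flavors require more virtual cores or memory than these ceilings allow, size the physical nodes up accordingly.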
For testing purposes, it is also possible to deploy Rackspace Private Cloud Software on a group of virtual machines, such as a group of Rackspace Cloud Servers. The virtual machines should meet specifications similar to the standard hardware specifications, though it is possible for the Chef server, Controller, and Compute nodes to be as small as 8 GB if you are only doing proof-of-concept tests.
The device on which OpenCenter Server is installed and all OpenCenter-managed devices must be using one of the following operating systems:
The GUI package can also be installed on OS X.
Internet access is required to complete the installation; ensure that each device that you use can reach the internet to download the installation files.
Rackspace Private Cloud Software creates a FlatDHCP network in multi_host mode, in which nova-network is installed and configured on each server that is running nova-compute. Further conceptual information about FlatDHCP networking is available in the OpenStack Compute Administration Manual. Refer to the following topics:
Before you begin, have the following networking information prepared and available:
All nodes within the OpenCenter environment must be able to access one another. Provided that the nodes on which the agent is installed have outbound connectivity to the OpenCenter server, the nodes can be physically located anywhere. All communication between the server and agent runs from the agent to the server.
Currently, if you delete the agent from a node, you must manually delete the node from the server with the opencentercli node delete command in the command line interface.
By default, the instances that you create in the OpenStack cluster can be publicly accessed only via NAT, by assigning floating IP addresses to them. Before you assign a floating IP address to an instance, you must have a pool of addresses to choose from. Your network security team must provision a publicly accessible address range and assign it to your environment. Floating IP addresses are not specified during the installation process; once the Controller node is operational, you can add them with the nova-manage floating create --ip_range command. Refer to "Managing Floating IP Addresses".
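How nova stores the pool is internal to the service; purely as an illustration of how a CIDR range expands into individual floating addresses, the standard ipaddress module can be used. The 203.0.113.0/28 range below is a documentation-reserved example block, not a recommendation.

```python
import ipaddress

# Expand an example floating IP range into the individual addresses
# that would populate the pool. 203.0.113.0/28 is a documentation-
# reserved block, used here purely for illustration.
pool = ipaddress.ip_network("203.0.113.0/28")
floating_ips = [str(host) for host in pool.hosts()]

print(len(floating_ips))  # 14 usable host addresses in a /28
print(floating_ips[0])    # 203.0.113.1
```

In practice you would pass the range your network security team provisioned to the nova-manage command rather than compute it yourself; the sketch only shows how large a pool a given CIDR yields.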
You can also make the instances accessible to other hosts on the network by default by configuring the cloud with a network DMZ. The network DMZ range cannot be the same as the nova fixed network range. Specifying a DMZ enables NAT-free network traffic between the virtual machine instances and resources outside of the nova fixed network. For example, if the nova fixed network is 10.1.0.0/16 and you specify a DMZ of 172.16.0.0/12, any device or host in that range can communicate with the instances on the nova fixed network.
To use the DMZ, you must have at least two NICs on the deployment servers. One NIC must be dedicated to the VM instances.
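The constraint that the DMZ range must be distinct from the nova fixed network can be checked ahead of time with Python's standard ipaddress module; the two ranges below are the ones from the example above.

```python
import ipaddress

# Verify that the DMZ range does not collide with the nova fixed
# network range (values taken from the example above).
# strict=False normalizes a host-style address such as 172.16.0.1/12
# to its network address, 172.16.0.0/12.
fixed = ipaddress.ip_network("10.1.0.0/16")
dmz = ipaddress.ip_network("172.16.0.1/12", strict=False)

print(dmz)                  # 172.16.0.0/12
print(fixed.overlaps(dmz))  # False -> the two ranges are disjoint
```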
Rackspace Private Cloud Software can implement high availability for all Nova service components and APIs, as well as Cinder, Keystone, Glance, the scheduler, RabbitMQ, and MySQL. HA functionality is powered by Keepalived and HAProxy.
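The cookbooks generate the load-balancer configuration for you; purely as an illustration of the Keepalived/HAProxy pattern, a minimal HAProxy stanza balancing one API service across two controllers might look like the following. All names and addresses here are hypothetical, not values the cookbooks produce.

```
# Hypothetical HAProxy stanza: the Keystone public API balanced
# across two controller nodes behind a Keepalived-managed VIP.
frontend keystone-public
    bind 192.0.2.10:5000          # virtual IP held by Keepalived
    default_backend keystone-nodes

backend keystone-nodes
    balance roundrobin
    server controller1 10.1.0.11:5000 check
    server controller2 10.1.0.12:5000 check
```

Keepalived moves the virtual IP between load-balancer hosts on failure, while HAProxy health-checks the individual service endpoints and removes failed nodes from rotation.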
There are three methods through which Rackspace Private Cloud Software cookbooks implement high availability in your cluster:
Availability zones enable you to manage and isolate different nodes within the environment. For example, you may want to isolate different sets of Compute nodes to provide different resources to customers. If one availability zone experiences downtime, other zones in the cluster will not be affected.
When you create a Nova cluster with OpenCenter, it is created with a default availability zone, and all Compute nodes are assigned to that zone. You can create additional availability zones within the cluster as needed.
© 2011-2013 Rackspace US, Inc.
Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License