Rackspace Private Cloud v4.2.1 Release Notes


This article documents the changes in Rackspace Private Cloud v4.2.1 and v4.2.0 and identifies known issues.

For the release notes for previous versions, refer to the Release Notes articles for those versions in the Knowledge Center.

1. Overview

Rackspace has developed a fast, free, and easy way to install a Rackspace Private Cloud powered by OpenStack in any data center. This method is suitable for anyone who wants to install a stable, tested, and supportable OpenStack-powered private cloud, and can be used for all scenarios from initial evaluations to production deployments.

Rackspace Private Cloud v4.2.n supports the Havana release of OpenStack.

Intended audience

This document describes the updates in Rackspace Private Cloud v4.2.n and any known issues in the product. You should already be familiar with Rackspace Private Cloud software, have prior knowledge of OpenStack and cloud computing, and have basic Linux administration skills.

Document change history

This version of the Rackspace Private Cloud Release Notes replaces and obsoletes all previous versions. The most recent changes are described in the table below:

Revision Date       Summary of Changes
December 18, 2013   Rackspace Private Cloud v4.2.1 release.
November 13, 2013   Rackspace Private Cloud v4.2.0 release.

 

Additional resources

Contact Rackspace

For more information about sales and support, send an email to .

If you have feedback about the product and the documentation, send an email to . For the documentation, you can also leave a comment at the Knowledge Center.

For more troubleshooting information and user discussion, you can also inquire at the Rackspace Private Cloud Support Forum at https://community.rackspace.com/products/f/45

2. What's new in Rackspace Private Cloud v4.2.1

In addition to several stability and usability fixes, Rackspace Private Cloud v4.2.1 has the following changes.

Cookbook version

The latest cookbook version number is v4.2.1. This change is reflected in the instructions under Install the Rackspace Private Cloud Cookbooks.

Where Rackspace Private Cloud v4.2.0 was made available as an Early Access release, Rackspace Private Cloud v4.2.1 is a production-ready release based on the OpenStack Havana code base. For detailed information about the Havana release, refer to the OpenStack Havana Release Notes.

LBaaS in OpenStack Networking (Neutron)

Rackspace Private Cloud v4.2.1 adds a technology preview implementation of Load Balancing as a Service (LBaaS) support with the neutron-lbaas-agent recipe, which is part of the single-network-node role. The Rackspace Private Cloud cookbooks currently support only the HAProxy LBaaS service provider. LBaaS is not currently recommended for Rackspace Private Cloud production environments.

For more information, refer to Configuring Load Balancing as a Service.

Upgrading from v4.1.n to v4.2.1

Users who want to upgrade from an existing Rackspace Private Cloud v4.1.n Grizzly cluster to a Havana cluster should upgrade to v4.2.1. Because of the OpenStack version change and the renaming of attributes from quantum to neutron, this upgrade process is more complex and is best facilitated with the Rackspace Private Cloud mungerator tool. This upgrade process is documented in Upgrading from v4.1.n to v4.2.1.

The following matrix indicates which upgrade scenarios are supported.

Upgrade Scenario         Supported   Not Supported
v4.2.0 => v4.2.n                     X
v4.1.n => v4.2.1         X
v4.1.n => v4.2.0                     X
v4.1.n => v4.1.n         X
v4.0.0 => v4.2.n                     X
v4.0.0 => v4.1.n         X
v3 (OpenCenter) => Any               X
v2 (Alamo) => Any                    X

Users who currently have Rackspace Private Cloud v4.0.0 and want to upgrade to v4.2.1 must first upgrade to v4.1.3.

Note

For users who installed v4.2.0, which was made available only as an Early Access release, there is a known issue in upgrades from v4.2.0 to v4.2.1 relating to HAProxy. For more information, see Upgrading from v4.2.0 to v4.2.1.

Rackspace Private Cloud Sandbox

Rackspace Private Cloud Sandbox enables you to install an OpenStack-powered private cloud for testing or demonstration purposes in VMware or VirtualBox. Rackspace Private Cloud Sandbox v1.0 runs Rackspace Private Cloud v4.2.1 powered by the Havana release of OpenStack. For information about this product and instructions for using it, see Rackspace Private Cloud Sandbox.

3. What's new in Rackspace Private Cloud v4.2.0

In addition to several stability and usability fixes, Rackspace Private Cloud v4.2.0 has the following changes.

Note

Rackspace Private Cloud v4.2.0 was made available only as an Early Access release. Rackspace recommends using Rackspace Private Cloud v4.2.1 for a Havana-based private cloud and Rackspace Private Cloud v4.1.3 for a Grizzly-based private cloud.

OpenStack

The following changes have been made to OpenStack support in Rackspace Private Cloud:

High Availability

The following changes have been made to High Availability in Rackspace Private Cloud:

  • Increased reliability under a wider range of circumstances, such as manual failover for maintenance, power loss, and network isolation.
  • Issues with lost messages during RabbitMQ failover have been resolved.
  • OpenStack Networking availability in failure scenarios has been increased. The RPCDaemon tool improves OpenStack Networking scheduling of DHCP agents and L3 agents.

OpenStack Networking (Neutron)

The following changes have been made to OpenStack Networking in Rackspace Private Cloud:

  • The Rackspace Private Cloud v4.2.0 cookbooks have been updated to change packages and attributes from quantum to neutron, reflecting the OpenStack project name change.
  • L3 agent support is now available, which enables floating IPs and routers. For more information about L3 routing, refer to "L3 Routing and NAT" in the OpenStack Networking Administration Guide.

OpenStack Block Storage (Cinder)

The following changes have been made to OpenStack Block Storage in Rackspace Private Cloud:

  • The LVM provider is now used by default, which provides better performance and reliability.
  • The standard paste configuration is now used to provide the same configuration on CentOS/RHEL and Ubuntu.

Dashboard changes

You can access Rackspace Private Cloud documentation on the Knowledge Center by clicking the Help link on the dashboard.

4. Rackspace Private Cloud Object Storage

Rackspace has developed an Object Storage offering that is compatible with Rackspace Private Cloud.

If you are interested in a deployment of Rackspace Private Cloud Object Storage, contact your Rackspace support representative for more information. You can work with Rackspace to have an Object Storage cluster configured in your data center or in a Rackspace data center.

For an overview of the Rackspace Private Cloud Object Storage architecture, refer to "Distributed Object Storage" on our Private Cloud Architectures page.

5. Known Issues

The following issues have been identified in Rackspace Private Cloud v4.2.0 and v4.2.1.

Upgrading from v4.1.3 to v4.2.1

When upgrading from v4.1.3 to v4.2.1, chef-client may fail on the Compute nodes due to a conflict in python-swiftclient versions. If this failure occurs, run chef-client a second time.
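
The "run it a second time" workaround can be scripted. This is a minimal sketch, assuming you run chef-client directly on each affected Compute node; the retry_once helper is illustrative and not part of the product:

```shell
# retry_once: run a command; if it exits nonzero (as chef-client does when
# the python-swiftclient version conflict occurs), run it exactly once more.
retry_once() {
    "$@" || "$@"
}

# Example: retry_once chef-client
```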

Upgrading from v4.2.0 to v4.2.1

The implementation of HAProxy VIP routing has changed in v4.2.1. After an upgrade from v4.2.0 to v4.2.1, the HAProxy VIP must be removed from the loopback interface (lo). This issue applies only to upgrades from v4.2.0 to v4.2.1.

The following upgrade process ensures that the HAProxy VIP is configured correctly for v4.2.1, and should prevent you from having to manually remove the HAProxy VIP.

  1. Log into the Chef server as the root user.
  2. Run the knife environment edit command.
    # knife environment edit <environmentName>
                    
  3. Edit the environment by changing the do_package_upgrades and image_upload override attributes. This forces the package upgrades and also disables image uploading, which enables you to work around issues related to Glance.
        "override_attributes": {
          "osops": {
            "do_package_upgrades": true
          },
          [...]
          "glance": {
            "image_upload": false
          },
          [...]
          }
                            
  4. Check out the cookbooks and specify v4.2.1.
    /root# git clone https://github.com/rcbops/chef-cookbooks.git
    /root# cd chef-cookbooks
    /root/chef-cookbooks# git checkout v4.2.1
    /root/chef-cookbooks# git submodule init
    /root/chef-cookbooks# git submodule sync
    /root/chef-cookbooks# git submodule update
                            
  5. Upload the cookbooks and the roles to the Chef server.
    /root/chef-cookbooks# knife cookbook upload -a -o cookbooks
    
    /root/chef-cookbooks# knife role from file roles/*rb
                            
  6. Stop the following OpenStack services on the backup Controller node, beginning with monit:
    • monit
    • keystone
    • nova
    • glance
    • cinder
    • haproxy
    • keepalived
  7. Run chef-client on the backup node. You may have to run chef-client twice.
  8. After the last chef-client run that completes the upgrade, run the following command on the backup node:
    # ip addr show lo
                                

    It should generate output similar to the following:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
         inet 127.0.0.1/8 scope host lo
            valid_lft forever preferred_lft forever 
         inet 169.254.123.1/30 scope host lo
            valid_lft forever preferred_lft forever                                
                                

    In the correct configuration, shown here, the output includes 169.254.123.1/30, the configured APIPA address that a Rackspace Private Cloud environment uses for HAProxy binding and communication.

  9. Run the following command on the active node to manually fail over to the backup node.
    # service keepalived restart
                                
  10. On the original active node, perform Steps 1-6 of this procedure and run chef-client.
  11. After the last chef-client run that completes the upgrade, run the following command on the original active node:
    # ip addr show lo
                                

    As with the backup node in Step 8, the command should generate output similar to the following:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
         inet 127.0.0.1/8 scope host lo
            valid_lft forever preferred_lft forever 
         inet 169.254.123.1/30 scope host lo
            valid_lft forever preferred_lft forever                                
                                

If at any point you need to manually remove the VIP from lo, use the following command:

# ip addr del <HAProxyVIP> dev lo
                            

In an environment where the HAProxy VIP was 192.0.2.0/32, the command would be:

# ip addr del 192.0.2.0/32 dev lo
                        

Run ip addr show lo again to verify that the VIP has been removed and run chef-client on the node again to re-run the upgrade.
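
The loopback checks in Steps 8 and 11 can also be scripted. This is a minimal sketch that inspects captured ip addr show lo output for the APIPA VIP (169.254.123.1/30, as shown above; adjust the address if your environment uses a different VIP); the vip_on_loopback helper is illustrative:

```shell
# vip_on_loopback: succeed if the supplied `ip addr show lo` output
# contains the expected HAProxy APIPA VIP.
vip_on_loopback() {
    printf '%s\n' "$1" | grep -q 'inet 169\.254\.123\.1/30'
}

# Example: vip_on_loopback "$(ip addr show lo)" && echo "VIP present on lo"
```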

Restarting RabbitMQ

When a RabbitMQ cluster fails, it must be brought up in the reverse order in which it went down, or the restart will fail. For example, if RabbitMQ goes down on controller1 and then on controller2, the service should first be started on controller2, and then on controller1.
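
As a sketch, the start order can be derived by reversing the recorded failure order. The hostnames below are examples and the reverse_order helper is illustrative, not part of the product:

```shell
# reverse_order: given the nodes in the order they went down, print them
# in reverse, i.e. the order in which to start RabbitMQ back up.
reverse_order() {
    out=""
    for node in "$@"; do
        if [ -z "$out" ]; then out="$node"; else out="$node $out"; fi
    done
    printf '%s\n' "$out"
}

# Example: if controller1 went down first and controller2 second:
#   for n in $(reverse_order controller1 controller2); do
#       ssh "$n" 'service rabbitmq-server start'
#   done
# starts RabbitMQ on controller2 first, then controller1.
```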

Intermittent failover failure

Occasionally, failover between HA Controller nodes may fail due to multiple RabbitMQ processes existing on the secondary Controller node. Restarting RabbitMQ will bring the cluster back online:

# pkill -f rabbitmq; service rabbitmq-server restart
                

OpenStack Orchestration and instance names

OpenStack Orchestration builds instance names based on the stack name, which can result in long instance names. Instance names longer than 64 characters may cause the stack build to fail. Keep stack names short to avoid this issue.
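
A minimal guard sketch for this check; the 40-character budget is an assumption chosen to leave room for the suffix that Orchestration appends when building instance names, so tune it for your templates:

```shell
# check_stack_name: fail if the stack name exceeds the budget, which
# would risk generated instance names longer than 64 characters.
MAX_STACK_NAME=40
check_stack_name() {
    name="$1"
    if [ "${#name}" -gt "$MAX_STACK_NAME" ]; then
        echo "stack name '$name' is ${#name} characters; keep it under $MAX_STACK_NAME" >&2
        return 1
    fi
}

# Example: check_stack_name mystack && heat stack-create mystack ...
```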

CentOS and RHEL challenges with advanced OpenStack Networking

When using OpenStack Networking (Neutron), Controller nodes with Networking features and standalone Networking nodes require namespace kernel features that are not available in the default kernel shipped with CentOS 6.4, RHEL 6.4, and older versions of these operating systems. More information about Neutron limitations is available in the OpenStack documentation, and more information about Red Hat derivative kernel limitations is provided in the RDO FAQ.

Rackspace recommends that you upgrade to the latest RDO kernel if you are going to use CentOS or RHEL for networking nodes.

Snapshot deletion on CentOS

Setting volume_clear in cinder.conf to shred or zero will cause snapshot deletion to fail in OpenStack Grizzly installations on CentOS. Rackspace recommends you set volume_clear to none until this issue is resolved in OpenStack.
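
Until the issue is resolved, the workaround is a one-line setting in cinder.conf; the section header is shown only for context:

```
[DEFAULT]
# Avoid failed snapshot deletion on CentOS under Grizzly:
# do not shred or zero volume data on deletion.
volume_clear = none
```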

OpenStack Metering visualization

Rackspace Private Cloud is currently using the patch referenced in Ceilometer bug #1093625 to address issues with OpenStack Metering visualization. If the underlying packages are updated, you may experience patch issues. We will continue to monitor the status of this bug for future releases.

OpenStack service timeouts

OpenStack service timeouts cannot be configured, which makes it possible for an OpenStack client to remain connected to a failed service for a longer period than is desirable. If you cannot wait for the connection to close, you can restart the service that is holding the connection. For example, if the nova compute service has failed, restart it with service nova-compute restart on Ubuntu or service openstack-nova-compute restart on CentOS.
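
A sketch of that per-distribution restart, with the service-name lookup factored out; the nova_compute_service helper and the family detection are illustrative:

```shell
# nova_compute_service: print the Compute service name for a distro
# family: "redhat" for CentOS/RHEL, anything else for Ubuntu.
nova_compute_service() {
    case "$1" in
        redhat) echo "openstack-nova-compute" ;;
        *)      echo "nova-compute" ;;
    esac
}

# Example (detect the family from /etc/redhat-release):
#   if [ -f /etc/redhat-release ]; then family=redhat; else family=debian; fi
#   service "$(nova_compute_service "$family")" restart
```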

Ephemeral and swap disks on RHEL

In RHEL-based clusters, ephemeral and swap disks cannot be used to create a custom image. If you do so, you will be unable to boot an instance. This is related to Nova bug #1251152.

Ceilometer packages fail to install

In some cases, the Ceilometer packages may not be correctly downloaded or installed on the Compute nodes. On the primary Controller node, follow this procedure to clean up the Compute nodes and re-install the packages.

Note

The procedure as written is designed to be run on the primary Controller node. The dsh -g compute -c command enables you to run the quoted commands concurrently on all compute nodes. If desired, you can log into each individual Compute node and run the quoted commands individually, but this is not recommended in large clusters.

  1. Install the python-swiftclient, python-warlock, and babel packages.
    # dsh -g compute -c "apt-get -y install python-swiftclient python-warlock babel"
    
  2. Verify the installed packages. Review the output for ERROR. If there are no errors, the packages are correctly installed.
    # dsh -g compute -c "apt-get -f install"
                
  3. Run the following command to remove unneeded packages from the Compute nodes.
    # dsh -g compute -c "apt-get -y autoremove"
    
  4. Run chef-client again:
    # dsh -g compute -c "chef-client"
                

Horizon is not installed correctly

In some cases on Ubuntu, Horizon packages are not installed correctly. If the Hypervisors tab does not contain pie charts, or if the Stats page on the Resource Usage tab does not render graphs, Horizon has not been installed correctly. To resolve this issue, uninstall Horizon and Django and run chef-client again.

  1. On the primary Controller node, uninstall Horizon and django:
     # apt-get -y remove openstack-dashboard python-django-horizon && apt-get -y autoremove
                                 
  2. Run chef-client.
  3. Log out of Horizon and log back in again.

You should now see the correct Hypervisors and Stats content.

In rare cases you may need to manually run Horizon's collectstatic command to collect all of the Horizon static resources. Run the following command on the primary Controller node to perform this task.

# /usr/share/openstack-dashboard/manage.py collectstatic --noinput
                


© 2011-2013 Rackspace US, Inc.

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License


See license specifics and DISCLAIMER