
Upgrading Your Private Cloud


This document describes how to upgrade a Rackspace Private Cloud environment between versions. It covers the following upgrade procedures:

  • Upgrading from v4.1.n to v4.2.2.
  • Upgrading between v4.2.n versions.
  • Upgrading between v4.1.n versions.

Two versions of Rackspace Private Cloud are currently available: v4.1.n, based on OpenStack Grizzly, and v4.2.n, based on OpenStack Havana.

To use this document, you should already be familiar with Rackspace Private Cloud and have an existing Rackspace Private Cloud environment.

Upgrade overview and limitations
Upgrading from Grizzly to Havana
Preparing for the upgrade
Downloading the upgrade scripts
Updating the cookbooks
Using mungerator
Troubleshooting the upgrade
Upgrading within Havana releases
Upgrading within Grizzly releases
Upgrading from v4.0.0 to v4.1.n
Upgrading to v4.1.3 on CentOS

Upgrade overview and limitations

The upgrade process is subject to the following limitations:

  • The procedures can only be used in an environment that was created with the Rackspace Private Cloud cookbooks.

  • Issues with image uploading require you to disable Glance image uploading in the cookbooks.

Refer to the following matrix to see which upgrade scenarios are supported and which procedures to follow for each scenario. If an unsupported scenario applies to you, you will either need to create a new environment, or contact your Rackspace support representative to discuss your installation options.

Note

For Grizzly to Havana upgrade scenarios, Rackspace supports upgrades only between the latest v4.1.n and v4.2.n versions. For example, if you want to upgrade from v4.1.2 to v4.2.2, you should upgrade to v4.1.3 first, and then upgrade from v4.1.3 to v4.2.2.

Upgrade Scenario          Supported   Not Supported   Upgrade Procedure
v4.2.1 => v4.2.2          X                           Upgrading within Havana releases
v4.2.0 => v4.2.2                      X               Upgrading from v4.2.0 to v4.2.1 in the Release Notes
v4.2.0 => v4.2.1                      X               Upgrading from v4.2.0 to v4.2.1 in the Release Notes
v4.1.n => v4.2.2          X                           Upgrading from Grizzly to Havana
v4.1.n => v4.2.1          X                           Upgrading from Grizzly to Havana
v4.1.n => v4.2.0                      X               N/A. Users should upgrade to v4.2.2.
v4.1.n => v4.1.n          X                           Upgrading within Grizzly releases
v4.0.0 => v4.2.n                      X               N/A. Users should upgrade to v4.1.3 first, then to v4.2.2.
v4.0.0 => v4.1.n          X                           Upgrading within Grizzly releases
v3 (OpenCenter) => Any                X               N/A
v2 (Alamo) => Any                     X               N/A

Upgrading from Grizzly to Havana

Users who want to upgrade from an existing Rackspace Private Cloud v4.1.n Grizzly cluster to a Havana cluster should upgrade to v4.2.2. Because of the OpenStack version change and the renaming of attributes from quantum to neutron, this upgrade process is more complex and is best facilitated with the Rackspace Private Cloud mungerator tool.

Users who currently have Rackspace Private Cloud v4.0.0 or an earlier v4.1.n version and want to upgrade to v4.2.2 must first upgrade to the latest v4.1.n version.

Refer to the Rackspace Private Cloud v4.2 Release Notes for known issues in this upgrade scenario.

Preparing for the upgrade

Before you begin, you should purge the cached Chef cookbooks from the Chef-controlled nodes. This averts possible issues with leftover or unmodified cookbooks when chef-client compiles the cookbooks. Execute the following command on all Chef-controlled nodes.

# for i in /var/chef/cache/cookbooks/*; do rm -rf $i; done
                

Preparations on Ubuntu

You should also clean and upgrade the distribution packages on each node. To clean the packages, execute the following command:

# apt-get clean
                    

Rackspace recommends making the following package updates:

  • OpenStack Metering (Ceilometer): Install python-warlock, python-novaclient, and babel.

    # apt-get -y install python-warlock python-novaclient babel
                                    
  • OpenStack Dashboard (Horizon): Install openstack-dashboard and python-django-horizon.

    # apt-get -y install openstack-dashboard python-django-horizon
                                    
  • QEMU: If you are running qemu, you may have issues with older packages not upgrading due to dependency problems. This issue can be resolved with the following commands:

    # apt-get update
    # apt-get remove qemu-utils 
    # apt-get install qemu-utils
                                    

Preparations on CentOS/RHEL

You must add the RDO Havana repository on each node before attempting the upgrade. This removes the Grizzly RDO information stored in /etc/yum.repos.d/rdo-release.repo.

Rackspace supports ONLY the kernel-2.6.32-358.123.2.openstack.el6.x86_64 kernel. OpenStack Networking and the Rackspace Private Cloud HA solution both depend on this kernel. You will also need to install this kernel on each node in the environment during your system preparation. If mungerator finds any nodes without this kernel, you will receive an error message indicating that a node has been found without the correct kernel.
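
A quick way to confirm that a node is running the required kernel is to check uname; if the correct kernel is active, the output should match the version above.

# uname -r
2.6.32-358.123.2.openstack.el6.x86_64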


To add the RDO Havana repository

  1. Execute the following command to install the RDO Havana package.

    # yum install -y \
      http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm
                        
  2. Purge the old packages and headers from yum.

    # yum clean headers 
    # yum clean packages
                                
  3. Update the packages.

    # yum update -y
                                

Downloading the upgrade scripts

Rackspace has developed two scripts to avert possible issues with quantum database corruption in upgrades from Grizzly to Havana. These scripts are used to back up the database before the upgrade and restore it after the upgrade is complete.

  • database_backup.sh: This script is run at the beginning of the upgrade process. It backs up every database found in MySQL, saving each one as an individual file in a backup directory. You can customize the directory by setting the DB_BACKUP_DIR variable. If the backup directory does not exist, it will be created.

    You will also need a .my.cnf file in your home directory that provides access to the root MySQL user (see the example after this list). If the file does not exist, you will need to know the root MySQL password.

    Run the following curl command on the primary Controller node to download the script.

    # curl -s -O  https://raw.github.com/rcbops/support-tools/master/havana-tools/database_backup.sh
                        
  • quantum-upgrade.sh: This script is run at the end of the upgrade process. The script restores the quantum database, stamps it as Grizzly, and performs a Havana upgrade on the database, as described in the OpenStack Networking Known Issues in the Havana upgrade notes.

    Run the following curl command on the primary Controller node to download the script.

    # curl -s -O https://raw.github.com/rcbops/support-tools/master/havana-tools/quantum-upgrade.sh 
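
The following is a minimal sketch of such a .my.cnf file; the password value is a placeholder for your root MySQL password, and the file should be readable only by root:

    # cat > ~/.my.cnf <<'EOF'
    [client]
    user=root
    password=<mysqlRootPassword>
    EOF
    # chmod 600 ~/.my.cnf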
                        

To update the cookbooks

  1. Log into the Chef server as the root user.

  2. Run the knife environment edit command.

    # knife environment edit <environmentName>
                    
  3. Edit the environment by changing the do_package_upgrades and image_upload override attributes. This forces the package upgrades and also disables image uploading, which enables you to work around issues related to Glance.

        "override_attributes": {
          "osops": {
            "do_package_upgrades": true
          },
          [...]
          "glance": {
            "image_upload": false
          },
          [...]
        }
                            
  4. Check out the newest v4.2.n version of the cookbooks. The version is specified in the format V.n.n.

    /root# git clone https://github.com/rcbops/chef-cookbooks.git
    /root# cd chef-cookbooks
    /root/chef-cookbooks# git checkout <version>
    /root/chef-cookbooks# git submodule init
    /root/chef-cookbooks# git submodule sync
    /root/chef-cookbooks# git submodule update
                            
  5. Upload the cookbooks and the roles to the Chef server.

    /root/chef-cookbooks# knife cookbook upload -a -o cookbooks
    
    /root/chef-cookbooks# knife role from file roles/*rb
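
    If you want to confirm that the upload succeeded, you can list the cookbooks and roles now registered on the Chef server:

    /root/chef-cookbooks# knife cookbook list
    /root/chef-cookbooks# knife role list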
                            

Using mungerator

One of the functions of the mungerator tool is to upgrade attributes. It is especially suited to upgrading an existing v4.1.n environment that uses the quantum attribute to v4.2.n: it changes all instances of this attribute to neutron.

In this procedure, controller1 refers to the primary Controller node on which the ha-controller1 role is applied, and controller2 refers to the secondary Controller node on which ha-controller2 is applied. If you use your Controller nodes differently, adjust your process to match your environment.

To use mungerator to update the environment

  1. Run the database_backup.sh script on the controller1 node:

    # bash database_backup.sh
                        
  2. On the Chef server, ensure that you have the correct python tools installed:

    On Ubuntu:

    # apt-get -y install python-dev python-setuptools python-pip
                            

    On CentOS/RHEL:

    # yum install -y openssl-devel python-devel python-pip
                            
  3. On the Chef server, install mungerator.

    # pip install git+https://github.com/rcbops/mungerator
                             
  4. On the Chef server, run the mungerator command. The tool will look for and update all quantum attribute references to neutron. (A spot-check sketch follows this procedure.)

    # mungerator munger --client-key <chefClientKeyPath> \
      --auth-url <chefURL> all-nodes-in-env --name <environmentName> 
                                

    The following example shows the use of the mungerator command on a Chef server in an environment called rpcs, where the path to the client key is /etc/chef-server/admin.pem.

    # mungerator munger --client-key /etc/chef-server/admin.pem \
      --auth-url https://127.0.0.1:443 all-nodes-in-env --name rpcs
                            
  5. Remove the file /etc/haproxy/haproxy.d/vs_quantum-api.cfg. This averts any issues with the Grizzly quantum config file.

    To remove the file from the haproxy.d directory while retaining a copy, move it to another directory, such as the root home directory.

    # mv /etc/haproxy/haproxy.d/vs_quantum-api.cfg ~/
                                
  6. On controller1, restart the Keepalived service to move the VIPs to controller2.

    # service keepalived restart           
                                
  7. Run chef-client on controller1.

  8. On controller1, add the new Ceilometer VIPs and the nova-api-metadata VIP to the override attributes in the environment file.

    "override_attributes": {
        "vips": {
            "ceilometer-api": "<HAProxyVIP>",
            "ceilometer-central-agent": "<HAProxyVIP>",
            "nova-api-metadata": "<HAProxyVIP>",
            [...]
        }
    }  
                        
  9. If your Chef server exists on the same node as a Controller node, add the following information to /etc/chef-server/chef-server.rb:

    rabbitmq["vip"] = "<nodeIPAddress>"
                            
  10. Run chef-client on controller2.

  11. Run chef-client on the Compute nodes. This can be done either serially or in parallel.

  12. On Ubuntu, the OpenStack Networking supporting packages python-cmd2 and python-pyparsing will need to be upgraded after running chef-client.

    # apt-get update
    # apt-get -y install python-cmd2 python-pyparsing
                                    
  13. Run the quantum-upgrade.sh script on controller1:

    # bash quantum-upgrade.sh
                        
  14. Restart HAProxy and RPCDaemon.

    # service haproxy restart
    # monit restart rpcdaemon
                            

When mungerator has completed its task, the chef-client runs are complete, and the database has been restored and upgraded, all attributes will be updated for v4.2.n, and your environment will be ready to use.
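
If you want to spot-check the rename, knife can display a single attribute across nodes. The following is a quick sketch using the rpcs environment from the earlier example; nodes that previously carried quantum attributes should now report them under neutron.

# knife search node "chef_environment:rpcs" -a neutron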

Troubleshooting the upgrade

This section discusses common problems that you may encounter with the upgrade and the steps to resolve those problems.

Quantum database issues

The quantum database upgrade may experience issues in the following circumstances:

  • The database table was created but never "stamped" with the OpenStack release in use when OpenStack Networking was originally installed. If the database is not stamped, the Neutron migration application will attempt to migrate the database from the OpenStack Folsom release, which was the earliest release in which OpenStack Networking was available.

  • Out-of-scope database tables have been added to the quantum database through package updates or through normal system use. These tables are not used by OpenStack Networking and do not cause issues during normal use. However, they will cause migration failures when upgrading to Havana.

In the event that the quantum-upgrade.sh script fails, follow this procedure to manually repair the database.

Warning

Before you begin, ensure that the database is sufficiently backed up. You should save additional backups in locations outside of the database backup directory to ensure that they are available in the event that they are needed for recovery.
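
For example, the following sketch copies the backup directory to a second, date-stamped location; the destination path is illustrative, and you should adjust it for your environment.

# cp -a <DB_BACKUP_DIR> /root/db-backup-$(date +%F)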

To repair the database

  1. Navigate to the database backup directory specified in DB_BACKUP_DIR in database_backup.sh.

  2. Run the following MySQL commands to re-create the database:

    # mysql -e "drop database quantum"
    # mysql -e "create database quantum"
    # mysql -o quantum < quantum.sql
    # mysql -e "use quantum; CREATE TABLE servicedefinitions \
      (c CHAR(20) CHARACTER SET utf8 COLLATE utf8_bin);"
    # mysql -e "use quantum; CREATE TABLE servicetypes \
      (c CHAR(20) CHARACTER SET utf8 COLLATE utf8_bin);"                       
                            
  3. Run the following neutron-db-manage commands to upgrade the database:

    # neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
      stamp grizzly
    
    # neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
      upgrade havana  
                        

It is a good practice to back up the database again when the upgrade is complete. You can use the database_backup.sh script to perform the backup.
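
For example:

# bash database_backup.sh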

VIPs referring to the loopback device

A chef-client run may fail if a VIP refers to the loopback device after an upgrade. You can manually remove the VIP from lo with the following command:

# ip addr del 0.0.0.0/0 dev lo
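
If you are unsure whether a VIP is bound to the loopback device, you can list the addresses on lo first:

# ip addr show dev lo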
                            

Upgrading within Havana releases

The following procedure is used for upgrading from v4.2.1 to v4.2.2.

For the upgrade from v4.1.n to v4.2.2, see Upgrading from Grizzly to Havana.

Note

There is a known issue relating to HAProxy in upgrades from the Limited Availability v4.2.0 release to v4.2.n. For more information, see Upgrading from v4.2.0 to v4.2.1 in the Release Notes. This issue applies only to upgrades that start from v4.2.0.

Upgrading within Havana releases does not require the mungerator tool.

To upgrade from v4.2.1 to v4.2.2

  1. Log into the Chef server as the root user.

  2. Run the knife environment edit command.

    # knife environment edit <environmentName>
                    
  3. Edit the environment by changing the do_package_upgrades and image_upload override attributes. This forces the package upgrades and also disables image uploading, which enables you to work around issues related to Glance.

        "override_attributes": {
          "osops": {
            "do_package_upgrades": true
          },
          [...]
          "glance": {
            "image_upload": false
          },
          [...]
        }
                            
  4. Check out the desired version of the cookbooks. The version is specified in the format V.n.n.

    /root# git clone https://github.com/rcbops/chef-cookbooks.git
    /root# cd chef-cookbooks
    /root/chef-cookbooks# git checkout <version>
    /root/chef-cookbooks# git submodule init
    /root/chef-cookbooks# git submodule sync
    /root/chef-cookbooks# git submodule update
                            
  5. Upload the cookbooks and the roles to the Chef server.

    /root/chef-cookbooks# knife cookbook upload -a -o cookbooks
    
    /root/chef-cookbooks# knife role from file roles/*rb
                            
  6. Stop keepalived on controller1 and run chef-client. When chef-client is finished, restart keepalived. (A sketch of this sequence follows this procedure.)

  7. Stop keepalived on controller2 and run chef-client. When chef-client is finished, restart keepalived.

  8. Run chef-client on the Compute nodes. This can be done either serially or in parallel.

  9. When the updates are finished, log on to the Horizon dashboard and reboot any virtual machines in your cluster.
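
On each Controller node, the keepalived sequence in steps 6 and 7 looks like the following (a minimal sketch, assuming the same sysvinit service names used elsewhere in this document):

# service keepalived stop
# chef-client
# service keepalived start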

When this process is complete, your cluster should be fully upgraded to the latest Rackspace Private Cloud v4.2.n cookbooks.

Upgrading within Grizzly releases

The following procedure is used for the following upgrade scenarios:

  • v4.0.0 => v4.1.n
  • v4.1.n => v4.1.n

Depending on your upgrade scenario, you may be required to perform additional actions; see Upgrading from v4.0.0 to v4.1.n later in this document.

For the upgrade from v4.1.n to v4.2.n, see Upgrading from Grizzly to Havana.

To upgrade from v4.0.0 or v4.1.n to another v4.1.n

  1. Log into the Chef server as the root user.

  2. Run the knife environment edit command.

    # knife environment edit <environmentName>
                    
  3. Edit the environment by changing the do_package_upgrades and image_upload override attributes. This forces the package upgrades and also disables image uploading, which enables you to work around issues related to Glance.

        "override_attributes": {
          "osops": {
            "do_package_upgrades": true
          },
          [...]
          "glance": {
            "image_upload": false
          },
          [...]
        }
                            
  4. Check out the desired version of the cookbooks. The version is specified in the format V.n.n.

    /root# git clone https://github.com/rcbops/chef-cookbooks.git
    /root# cd chef-cookbooks
    /root/chef-cookbooks# git checkout <version>
    /root/chef-cookbooks# git submodule init
    /root/chef-cookbooks# git submodule sync
    /root/chef-cookbooks# git submodule update
                            
  5. Upload the cookbooks and the roles to the Chef server.

    /root/chef-cookbooks# knife cookbook upload -a -o cookbooks
    
    /root/chef-cookbooks# knife role from file roles/*rb
                            
  6. If your environment is not configured for HA, you can now run chef-client on the Controller node to perform the upgrade. When chef-client is finished, run chef-client on the Compute nodes. This can be done either serially or in parallel.

    If your environment is configured for HA, follow these steps:

    To upgrade an HA environment

    1. Stop all services except for MySQL on the secondary Controller node, beginning with monit. You will need to wait about 30 seconds for the services to stop. The following command stops the services.

      for i in `monit status | grep Process | awk '{print $2}' | grep -v mysql | sed "s/'//g"`;
        do monit stop $i;
      done
                                  
    2. Stop Keepalived on the secondary Controller node and wait approximately ten seconds for the VIPs to migrate to the primary Controller node.

    3. Run chef-client on the primary Controller node to perform the upgrade.

      On Ubuntu, you will run chef-client twice:

      # chef-client
      
      INFO: Processing mysql_database[create keystone database] action create (keystone::setup line 34)
      ================================================================================
      Error executing action `create` on resource 'mysql_database[create keystone database]'
      ================================================================================
      [sample output truncated]
      
      # chef-client
                                  

      On CentOS, you will have to run chef-client twice, restart Keepalived, and run chef-client again. This is related to a known issue.

      # chef-client
      
      INFO: Processing mysql_database[create keystone database] action create (keystone::setup line 34)
      ================================================================================
      Error executing action `create` on resource 'mysql_database[create keystone database]'
      ================================================================================
      [sample output truncated]
      
      # chef-client
      
      INFO: Processing keystone_register[Recreate keystone Endpoint] action recreate_endpoint (openstack-ha::default line 184)
      [sample output truncated]
      
      # service keepalived restart
      # chef-client
                                  
    4. Run chef-client on the secondary Controller, then on the primary Controller, and finally on the Compute nodes.

    5. Restart the services on the secondary Controller. The following command starts the services.

      for i in `monit status | grep Process | awk '{print $2}' | grep -v mysql | sed "s/'//g"`;
        do monit start $i;
      done
                                  
    6. Reboot the nodes to ensure that the Open vSwitch kernel module is active.

  7. When the updates are finished, log on to the Horizon dashboard and reboot any virtual machines in your cluster.

When this process is complete, your cluster should be fully upgraded to the target Grizzly release.

Upgrading from v4.0.0 to v4.1.n

The following information applies to upgrades from v4.0.0 to v4.1.n.

Upgrading nova-network

The nova-network override attributes changed between v4.0.0 and v4.1.2. The v4.1.2 and later cookbooks use hash syntax to define network attributes, and v4.0.0 uses array syntax.

The hash syntax for v4.1.2 and later is as follows:

"override_attributes": {
    "nova": {
        "network": {
            "public_interface": "<publicInterface>"
        },
        "networks": {
            "public": {
                "label": "public",
                "bridge_dev": "<VMNetworkInterface>",
                "dns2": "8.8.4.4",
                "ipv4_cidr": "<VMNetworkCIDR>",
                "bridge": "<networkBridge>",
                "dns1": "8.8.8.8"
            }
        }
    },
    "mysql": {
        "allow_remote_root": true,
        "root_network_acl": "%"
    },
    "osops_networks": {
        "nova": "<novaNetworkCIDR>",
        "public": "<publicNetworkCIDR>",
        "management": "<managementNetworkCIDR>"
    }
}
                

The array syntax for v4.0.0 is as follows:

"override_attributes": {
    "nova": {
      "networks": [
        {
          "label": "public",
          "bridge_dev": "<VMNetworkInterface>",
          "dns2": "8.8.4.4",
          "ipv4_cidr": "<VMNetworkCIDR>",
          "bridge": "<networkBridge>",
          "dns1": "8.8.8.8"
        }
      ]
    },
    "mysql": {
      "allow_remote_root": true,
      "root_network_acl": "%"
    },
    "osops_networks": {
      "nova": "<novaNetworkCIDR>",
      "public": "<publicNetworkCIDR>",
      "management": "<managementNetworkCIDR>"
    }
}
                

The following example shows a v4.1.2+ environment configuration in which all three networks are folded onto a single physical network. This network has an IP address in the 192.0.2.0/24 range. All internal services, API endpoints, and monitoring and management functions run over this network. VMs are brought up on a 198.51.100.0/24 network on eth1, connected to a bridge called br100.

"override_attributes": {
    "nova": {
        "network": {
            "public_interface": "br100"
        },
        "networks": {
            "public": {
                "label": "public",
                "bridge_dev": "eth1",
                "dns2": "8.8.4.4",
                "ipv4_cidr": "198.51.100.0/24",
                "bridge": "br100",
                "dns1": "8.8.8.8"
            }
        }
    },
    "mysql": {
        "allow_remote_root": true,
        "root_network_acl": "%"
    },
    "osops_networks": {
        "nova": "192.0.2.0/24",
        "public": "192.0.2.0/24",
        "management": "192.0.2.0/24"
    }
}
                

The following example shows a v4.0.0 environment configuration in which all three networks are folded onto a single physical network. This network has an IP address in the 192.0.2.0/24 range. All internal services, API endpoints, and monitoring and management functions run over this network. VMs are brought up on a 198.51.100.0/24 network on eth1, connected to a bridge called br100.

"override_attributes": {
    "nova": {
      "networks": [
        {
          "label": "public",
          "bridge_dev": "eth1",
          "dns2": "8.8.4.4",
          "ipv4_cidr": "198.51.100.0/24",
          "bridge": "br100",
          "dns1": "8.8.8.8"
        }
      ]
    },
    "mysql": {
      "allow_remote_root": true,
      "root_network_acl": "%"
    },
    "osops_networks": {
      "nova": "192.0.2.0/24",
      "public": "192.0.2.0/24",
      "management": "192.0.2.0/24"
    }
}
                

Upgrading OpenStack Networking in v4.1.2

The syntax for the provider_networks attributes block in OpenStack Networking also differs between v4.0.0 and v4.1.2. Ensure that the label and bridge settings match the name of the interface for the instance network, and that the vlans value is correct for your provider VLANs.

The syntax for v4.1.2 and later is as follows:

"quantum": {
    "ovs": {
        "provider_networks": [
            {
                "label": "ph-eth1",
                "bridge": "br-eth1",
                "vlans": "1:1000"
            }
        ]
    }
} 
                        

The v4.0.0 syntax is as follows:

"quantum": {
    "ovs": {
        "provider_networks": {
            "ph-eth1": {
                "bridge": "br-eth1",
                "vlans": "1:1000"
            }
        }
    }
}  
                        

Upgrading RabbitMQ

Rackspace Private Cloud v4.1.2 and later build out clustered RabbitMQ by default. Fresh installations with v4.1.n include this feature automatically, but installations using Rackspace Private Cloud v4.0.0 may need to be upgraded.

To upgrade RabbitMQ from v4.0.0

  1. Stop all OpenStack services on all nodes, including Controller and Compute nodes.

  2. On nodes that have rabbitmq-server installed, run the following command:

    # dpkg -P rabbitmq-server
    
  3. Update the cookbooks using the procedure in Upgrading within Grizzly releases, up through Step 6.

  4. Run chef-client on the first Controller, then the second Controller, and then the first Controller again.

  5. After you have run chef-client on the Controller nodes, verify that the cluster was created by running the following command:

    # rabbitmqctl cluster_status
    
  6. Run chef-client on all other nodes in the cluster as required.

  7. Verify that all the services have restarted.
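
    One way to verify the services is with monit, which manages them elsewhere in this document (a quick check, not the only method):

    # monit summary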

Upgrading to v4.1.3 on CentOS

Before upgrading to v4.1.3 from v4.0.0 or v4.1.2 on CentOS, you must upgrade the kernel on any Controller nodes in your cluster. Log into the Controller node and execute the following command:

# rpm -Uvh http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/kernel-2.6.32-358.123.2.openstack.el6.x86_64.rpm \
  http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/kernel-firmware-2.6.32-358.123.2.openstack.el6.noarch.rpm \
  http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/kernel-headers-2.6.32-358.123.2.openstack.el6.x86_64.rpm     
                

After the kernel upgrade is complete, reboot the Controller nodes and upload the v4.1.3 cookbooks. You can then run chef-client on the Controller nodes to complete the upgrade. You do not need to make any changes to the environment attributes.

The following changes have been made to Rackspace Private Cloud on CentOS and RHEL:

  • The chef-client native SELinux handling is used, instead of restoring SELinux contexts explicitly in cookbooks.

  • A patch has been added for amqplib to enable the SO_KEEPALIVE option on sockets, which allows faster failover detection for RabbitMQ.






