Configuring OpenStack Networking


A network deployed with the Rackspace Private Cloud cookbooks uses nova-network by default, but OpenStack Networking (Neutron) can be manually enabled. This section discusses the concepts behind the Rackspace deployment of Neutron and provides instructions for configuring it in your cluster.

Table of Contents

OpenStack Networking concepts
Network types
Namespaces
Metadata
OVS bridges
OpenStack Networking and high availability
OpenStack Networking prerequisites
Configuring OpenStack Networking
Networking infrastructure
Editing the override attributes for Networking
Apply the network role
Interface Configurations
Creating a network
Creating a subnet
Configuring L3 routers
Configuring Load Balancing as a Service (LBaaS)
Configuring Firewall as a Service (FWaaS)
Configuring VPN as a Service (VPNaaS)
Installing a new cluster With OpenStack Networking
Troubleshooting OpenStack Networking
RPCDaemon
RPCDaemon overview
RPCDaemon operation
RPCDaemon configuration
Command line options


Note

When using OpenStack Networking (Neutron), Controller nodes with Networking features and standalone Networking nodes require namespace kernel features that are not available in the default kernel shipped with RHEL 6.4, CentOS 6.4, and older versions of these operating systems. More information about Neutron limitations is available in the OpenStack documentation, and more information about Red Hat-derivative kernel limitations is provided in the RDO FAQ.

If you require OpenStack Networking with these features, Rackspace recommends that you use Rackspace Private Cloud v4.2.0 with CentOS 6.4 or Ubuntu 12.04 for the Controller and Networking nodes.

On CentOS 6.4, after applying the single-network-node role to the device, you must reboot it to use the appropriate version of the CentOS 6.4 kernel.

OpenStack Networking concepts

The Rackspace Private Cloud cookbooks deploy OpenStack Networking components on the Controller, Compute, and Network nodes in the following configuration:

  • Controller node: hosts the Neutron server service, which provides the networking API and communicates with and tracks the agents.
  • Network node:
    • DHCP agent: spawns and controls dnsmasq processes to provide leases to instances. This agent also spawns neutron-ns-metadata-proxy processes as part of the metadata system.
    • Metadata agent: provides a metadata proxy to the nova-api-metadata service. The neutron-ns-metadata-proxy processes direct the traffic that they receive in their namespaces to the proxy.
    • OVS plugin agent: controls OVS network bridges and routes between them via patch, tunnel, or tap interfaces without requiring an external OpenFlow controller.
    • L3 agent: performs L3 forwarding and NAT.
  • Compute node: runs an OVS plugin agent.
Note

You can use the single-network-node role alone or in combination with the ha-controller1 or single-compute roles.

Network types

The OpenStack Networking configuration provided by the Rackspace Private Cloud cookbooks allows you to choose between VLAN or GRE isolated networks, both provider- and tenant-specific. From the provider side, an administrator can also create a flat network.

The type of network used for private tenant networks is determined by the network_type attribute, which can be edited in the Chef override_attributes. This attribute sets both the default provider network type and the only type of network that tenants can create. Administrators can always create flat and VLAN networks. GRE networks, whether provider or tenant, require network_type to be set to gre.

Namespaces

For each network you create, the Network node (or Controller node, if combined) will have a unique network namespace (netns) created by the DHCP and Metadata agents. The netns hosts an interface and IP addresses for dnsmasq and the neutron-ns-metadata-proxy. You can view the namespaces with the ip netns [list] command, and can interact with the namespaces with the ip netns exec <namespace> <command> command.
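
For example, to list the namespaces on the Network node and inspect the interfaces inside one of them (the DHCP namespaces are typically named qdhcp-<networkUUID>), you can run:

# ip netns list
# ip netns exec qdhcp-<networkUUID> ip addr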

Metadata

Not all networks or VMs need metadata access. Rackspace recommends that you use metadata if you are using a single network. If you need metadata, you will need to enable metadata route injection when creating a subnet.

If you need to use a default route and provide instances with access to the metadata route, refer to Creating a Subnet for more information. Note that this approach will not provide metadata on cirros images. However, booting a cirros instance with nova boot --config-drive will bypass the metadata route requirement.
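
For example, you can boot a cirros instance with a config drive so that it does not depend on the metadata route (the image, flavor, and network identifiers below are placeholders):

# nova boot --config-drive=true --image <imageUUID> --flavor <flavorID> \
  --nic net-id=<networkUUID> <instanceName>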

OVS bridges

An OVS bridge for provider traffic is created and configured on the nodes where single-network-node and single-compute are applied. Bridges are created, but physical interfaces are not added. An OVS bridge is not created on a Controller-only node.

When creating networks, you can specify the type and properties, such as Flat vs. VLAN, Shared vs. Tenant, or Provider vs. Overlay. These properties identify and determine the behavior and resources of instances attached to the network. The cookbooks will create bridges for the configuration that you specify, although they do not add physical interfaces to provider bridges. For example, if you specify a network type of GRE, a br-tun tunnel bridge will be created to handle overlay traffic.
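
For example, you can confirm which bridges the cookbooks have created on a node with the standard Open vSwitch commands:

# ovs-vsctl list-br
# ovs-vsctl show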

OpenStack Networking and high availability

OpenStack Networking has been made highly available as of Rackspace Private Cloud v4.2.0, and has been tested on Ubuntu 12.04 and CentOS 6.4. For an HA configuration, you must configure the OpenStack Networking roles on the Controller node. Do not use a standalone Network node if you require OpenStack Networking to be HA. For more information about HA, refer to Controller Node High Availability.

OpenStack Networking prerequisites

If you are using OpenStack Networking, you will have to specify it in your Chef environment. You should also have the following information:

  • The Nova network CIDR
  • The public network CIDR
  • The management network CIDR
  • The name of the Nova cluster
  • The password for an OpenStack administrative user

The nova, public, and management networks must be pre-existing, working networks with addresses already configured on the hosts. They are defined by CIDR range, and any network interface with an address within the named CIDR range is assumed to be part of that network. The CIDRs must be provisioned either by you or by your hosting provider.

You can specify the same CIDR for multiple networks, or even for all three, but this is not recommended in production environments.

The following table lists the networks and the services that bind to an IP address within each of these networks.

Network Services
nova
  • keystone-admin-api
  • nova-xvpvnc-proxy
  • nova-novnc-proxy
  • nova-novnc-server
public
  • graphite-api
  • keystone-service-api
  • glance-api
  • glance-registry
  • nova-api
  • nova-ec2-admin
  • nova-ec2-public
  • nova-volume
  • neutron-api
  • cinder-api
  • ceilometer-api
  • horizon-dash
  • horizon-dash_ssl
management
  • graphite-statsd
  • graphite-carbon-line-receiver
  • graphite-carbon-pickle-receiver
  • graphite-carbon-cache-query
  • memcached
  • collectd
  • mysql
  • keystone-internal-api
  • glance-admin-api
  • glance-internal-api
  • nova-internal-api
  • nova-admin-api
  • cinder-internal-api
  • cinder-admin-api
  • cinder-volume
  • ceilometer-internal-api
  • ceilometer-admin-api
  • ceilometer-central

You should also have Controller and Compute nodes already configured. You may configure OpenStack Networking roles on the Controller node or on a standalone Network node. You should not use a standalone Network node if you need OpenStack Networking to be HA.

On Ubuntu, you may also need to install a linux-headers package on the nodes where the Networking roles will be applied and on the Compute nodes. This enables the openvswitch-datapath-dkms package to build the Open vSwitch kernel module required by the service. The package is installed with the following command:

$ sudo apt-get -y install linux-headers-`uname -r`
            

Configuring OpenStack Networking

By default, the Rackspace Private Cloud cookbooks create a cluster that uses nova-network. The configuration also supports OpenStack Networking (Neutron, formerly Quantum), which must be enabled by editing the override attributes in the Chef environment.

Networking infrastructure

The procedures documented here assume that you will be adding the single-network-node role to an existing cluster where a Controller node and at least one Compute node have already been configured. This role can be applied to a standalone network node, or to the already-configured Controller node. The assumed environment is as follows:

  • A minimum of one Controller node configured with the ha-controller1 role, with an out-of-band eth0 management interface. The single-network-node role can also be applied to this node, so that the node serves the dual purpose of hosting core OpenStack services as well as networking services.
  • A minimum of one Compute node configured with the single-compute role, with an out-of-band eth0 management interface and an eth1 physical provider interface.
  • If you do not require OpenStack Networking HA and are not using the Controller node for the single-network-node role, you will need one Network node that will be configured with the single-network-node role, with an out-of-band eth0 management interface and an eth1 physical provider interface.

If you are installing an entirely new cluster, refer to Installing a New Cluster With OpenStack Networking.

Editing the override attributes for Networking

Before you apply the single-network-node role, you must update the override_attributes section of your environment file with the OpenStack Networking attributes.


Procedure 1.1. To edit override attributes for networking

  1. On the Chef server, run the knife environment edit command and edit the nova network section of the override_attributes to specify OpenStack Networking, as in the following example:

    "override_attributes": {
        "nova": {
             "network": {
                  "provider": "neutron"
             }
        },
        "osops_networks": {
            "nova": "<novaNetworkCIDR>",
            "public": "<publicNetworkCIDR>",
            "management": "<managementNetworkCIDR>"
        }
    }
                    

    If you require a GRE network, you must also add a network_type attribute.

    "override_attributes": {
        "nova": {
             "network": {
                  "provider": "neutron"
             }
        },
        "neutron": {
             "ovs": {
                  "network_type": "gre"
             }
        }
    }
                    
  2. If you need to customize your provider_network settings, you will need to add a provider_networks block to the override_attributes. Ensure that the label and bridge settings match the name of the interface for the instance network, and that the vlans value is correct for your provider VLANs, as in the following example:

    "neutron": {
        "ovs": {
            "provider_networks": [
                {
                    "label": "ph-eth1",
                    "bridge": "br-eth1",
                    "vlans": "1:1000"
                }
            ]
        }
    } 
                            
  3. If you are working in an HA environment, add the neutron-api VIP to the override_attributes. These attribute blocks define which VIPs are associated with which service, and they also define the virtual router ID (vrid) and network for each VIP.

    "override_attributes": {
        "vips": {
            "rabbitmq-queue": "<rabbitmqVIP>",
            "horizon-dash": "<haproxyVIP>",
            "horizon-dash_ssl": "<haproxyVIP>"
            "keystone-service-api": "<haproxyVIP>",
            "keystone-admin-api": "<haproxyVIP>",
            "keystone-internal-api": "<haproxyVIP>",
            "nova-xvpvnc-proxy": "<haproxyVIP>",
            "nova-api": "<haproxyVIP>",
            "nova-ec2-public": "<haproxyVIP>",
            "nova-novnc-proxy": "<haproxyVIP>",
            "cinder-api": "<haproxyVIP>",
            "glance-api": "<haproxyVIP>",
            "glance-registry": "<haproxyVIP>",
            "swift-proxy": "<haproxyVIP>",
            "neutron-api": "<haproxyVIP>",
            "mysql-db": "<mysqlVIP>",
            "config": {
                "<rabbitmqVIP>": {
                    "vrid": <rabbitmqVirtualRouterID>,
                    "network": "<networkName>"
                },
                "<haproxyVIP>": {
                    "vrid": <haproxyVirtualRouterID>,
                    "network": "<networkName>"
                },
                "<mysqlVIP>": {
                    "vrid": <mysqlVirtualRouterID>,
                    "network": "<networkName>"
                }
            }
        }                      
    }
     

Apply the network role

The single-network-node role can be applied in your environment after the Controller node installation is complete and eth1 is configured. You can apply this role to a standalone Network node, or to the existing Controller node.

Note

On CentOS 6.4, after applying the single-network-node role to the device, you must reboot it to use the appropriate version of the CentOS 6.4 kernel.


Procedure 1.2. To apply the network role

  1. Add the single-network-node role to the target node's run list.

    # knife node run_list add <deviceHostname> 'role[single-network-node]'
                                
  2. Log in to the target node via ssh.

  3. Run chef-client on the node.

    It will take chef-client several minutes to complete the installation tasks. chef-client will provide output to help you monitor the progress of the installation.
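
    For example, on the target node:

    # chef-client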

Interface Configurations

Rackspace Private Cloud supports the following interface configurations:

  • Separated Plane: The control plane includes the management interface, management IP address, service IP bindings, VIPs, and VIP traffic. The control plane communicates through a different logical interface than the data plane, which includes all OpenStack instance traffic.
  • Combined Plane: Both control and data planes communicate through a single logical interface on a single device.

A Separated Plane configuration is recommended in situations where high traffic on the data plane will impede service traffic on the control plane. In this configuration, you need separate switches, firewalls, and routers for each plane. A Combined Plane configuration maximizes port density efficiency, but is more difficult to configure. If you need a Combined Plane configuration, contact Rackspace Support for assistance.

Separated Plane Configuration

This section describes how to manually create separated plane configurations on Ubuntu and CentOS.

The out-of-band eth0 management interface carries the primary IP address of the node and is not controlled by OpenStack Networking. The eth1 physical provider interface has no IP address and must be configured to be "up" on boot.


Procedure 1.3. To create a separated plane configuration on Ubuntu

  1. Add an eth1 entry in /etc/network/interfaces:

    auto eth1 
    iface eth1 inet manual 
      up ip link set $IFACE up 
      down ip link set $IFACE down
    
                            

    This ensures that eth1 will be up on boot.

  2. To bring up eth1 without rebooting the device, use the ifup command:

    # ifup eth1
                        
  3. Add the interface as a port in the bridge.

    # ovs-vsctl add-port br-eth1 eth1
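
    You can verify that the port was added with:

    # ovs-vsctl list-ports br-eth1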
       


Procedure 1.4. To create a separated plane configuration on CentOS

  1. Create the OVS bridge in the file /etc/sysconfig/network-scripts/ifcfg-br-eth1.

    DEVICE=br-eth1
    ONBOOT=yes
    BOOTPROTO=none
    STP=off
    NM_CONTROLLED=no
    HOTPLUG=no
    DEVICETYPE=ovs
    TYPE=OVSBridge
                            
  2. Configure the eth1 physical interface as an OVS port in the file /etc/sysconfig/network-scripts/ifcfg-eth1.

    DEVICE=eth1
    BOOTPROTO=none
    HWADDR=<hardwareAddress>
    NM_CONTROLLED=no
    ONBOOT=yes
    TYPE=OVSPort
    DEVICETYPE="ovs"
    OVS_BRIDGE=br-eth1
    UUID="UUID"
    IPV6INIT=no
    USERCTL=no
                        

Creating a network

The following procedure assumes that you have added the single-network-node role in your environment and configured the environment files and interfaces as described in the previous sections.

This documentation provides a high-level overview of basic OpenStack Networking tasks. For detailed documentation, refer to the Networking chapter of the OpenStack Cloud Administrator Guide.

Authentication details are required for the neutron net-create command. These details can be found in the openrc file created during installation.
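
For example, you can load these credentials into your shell before running the neutron commands below (this assumes the openrc file is in root's home directory; adjust the path to match your installation):

# source /root/openrc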

On the Controller node, create a provider network with the neutron net-create command.

For a flat network:

# neutron net-create --provider:physical_network=ph-eth1 \
  --provider:network_type=flat <networkName>
                                

For a VLAN network:

# neutron net-create --provider:physical_network=ph-eth1 \
  --provider:network_type=vlan --provider:segmentation_id=100 \
  <networkName>
                                

For a GRE network:

# neutron net-create --provider:network_type=gre \
  --provider:segmentation_id=100 <networkName>
                                

For an L3 router external network:

# neutron net-create --provider:network_type=local \
  --router:external=True public
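
You can confirm that a network was created and review its provider attributes with:

# neutron net-list
# neutron net-show <networkName>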
                                

Creating a subnet

Once you have created a network, you can create subnets with the neutron subnet-create command:

# neutron subnet-create --name range-one <networkName> <subnetCIDR>
                                

If you are using metadata, you will need to configure a default metadata route. To accomplish this, you will need to create a subnet with the following configuration:

  • No gateway IP is specified.
  • A static route is defined from 0.0.0.0/0 to your gateway IP address.
  • The DHCP allocation pool is defined to accommodate the gateway IP by starting the pool range beyond the gateway IP.
# neutron subnet-create --name <subnetName> <networkName> 10.0.0.0/24 \
  --no-gateway \
  --host-route destination=0.0.0.0/0,nexthop=10.0.0.1 \
  --allocation-pool start=10.0.0.2,end=10.0.0.251
                        

With this configuration, dnsmasq will pass both routes to instances. Metadata will be routed correctly without any changes on the external gateway.

If you have a non-routed network and are not using a gateway, you only need to create the subnet with --no-gateway. A metadata route will automatically be created.

# neutron subnet-create --name <subnetName> <networkName> <subnetCIDR> \
  --no-gateway
                        

You can also specify --dns-nameservers for the subnet. If a name server is not specified, the instances will try to make DNS queries through the default gateway IP.

# neutron subnet-create --name <subnetName> <networkName> <subnetCIDR> \
  --dns-nameservers list=true <dnsNameserverIP>
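
You can verify the resulting allocation pools, host routes, and DNS name servers with:

# neutron subnet-list
# neutron subnet-show <subnetName>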
                        

Configuring L3 routers

Rackspace Private Cloud supports L3 routing. L3 routers can connect multiple OpenStack Networking L2 networks and can also provide a gateway to connect one or more private L2 networks to a shared external network. An L3 router uses SNAT to manage all traffic by default.

When you create a network to use as the external network for an L3 router, use the following command.

# neutron net-create --provider:network_type=local \
  --router:external=True public
                                 

After you have created a network and subnets, you can use the neutron router-create command to create L3 routers.

# neutron router-create <routerName>
            

For an internal router, use neutron router-interface-add to add subnets to the router.

# neutron router-interface-add <routerName> <subnetUUID> 
            

To use a router to connect to an external network, which enables the router to act as a NAT gateway for external traffic, use neutron router-gateway-set.

# neutron router-gateway-set <routerName> <externalNetworkID> 
            

You can view all routers in the environment with the router-list command.

# neutron router-list 
            

Additional information about L3 routing can be found in the OpenStack Networking Administrator Guide.

Configuring Load Balancing as a Service (LBaaS)

Rackspace Private Cloud v4.2.2 includes a technology preview implementation of LBaaS with the neutron-lbaas-agent recipe, which is part of the single-network-node role. The Rackspace Private Cloud cookbooks currently support only the HAProxy LBaaS service provider.

LBaaS is offered as a technology preview feature for Rackspace Private Cloud and is not fully supported at this time. It is not currently recommended for Rackspace Private Cloud production environments.

To enable LBaaS, you must first have the v4.2.2 or later version of the cookbooks with the single-network-node role applied to a node in your environment. You must then set the following attribute to true:

node["neutron"]["lbaas"]["enabled"] = true
                    

To enable the LBaaS button in Horizon, you must set the following attribute in your environment:

default["horizon"]["neutron"]["enable_lb"] = "True" 
                

Additional information about using LBaaS in OpenStack can be found on the OpenStack wiki article on LBaaS.

Configuring Firewall as a Service (FWaaS)

Rackspace Private Cloud v4.2.2 includes a technology preview implementation of FWaaS with the neutron-fwaas-agent recipe, which is part of the single-network-node role.

FWaaS is offered as a technology preview feature for Rackspace Private Cloud and is not fully supported at this time. This feature can only be accessed via the API and CLI. It is not currently recommended for Rackspace Private Cloud production environments.

To enable FWaaS, you must first have the v4.2.2 version of the cookbooks with the single-network-node role applied to a node in your environment. You must then set the following attribute to true:

node["neutron"]["fwaas"]["enabled"] = true
                    

Additional information about using FWaaS in OpenStack can be found on the OpenStack wiki article on FWaaS.

Configuring VPN as a Service (VPNaaS)

Rackspace Private Cloud v4.2.2 includes a technology preview implementation of VPNaaS with the neutron-vpnaas-agent recipe, which is part of the single-network-node role.

VPNaaS is offered as a technology preview feature for Rackspace Private Cloud and is not fully supported at this time. This feature can only be accessed via the API and CLI. It is not currently recommended for Rackspace Private Cloud production environments.

To enable VPNaaS, you must first have the v4.2.2 version of the cookbooks with the single-network-node role applied to a node in your environment. You must then set the following attribute to true:

node["neutron"]["vpnaas"]["enabled"] = true
                    

Additional information about using VPNaaS in OpenStack can be found on the OpenStack wiki article on VPNaaS.

Note

Rackspace Private Cloud Support does not currently support technology preview features such as LBaaS, FWaaS, and VPNaaS. They are not recommended for Rackspace Private Cloud production environments, and are included for experienced OpenStack users only.

Installing a new cluster With OpenStack Networking

If you are installing a cluster and want to install OpenStack Networking from the start, you will follow the instructions documented in Installing OpenStack With Rackspace Private Cloud Tools. You will take the following additional steps:

  • Modify the override_attributes as documented in Editing the Override Attributes for Networking.
  • Add the single-network-node role to the run list when creating Controller nodes, as shown in the following examples.

    For a single controller:

    # knife node run_list add <deviceHostname> \
      'role[ha-controller1],role[single-network-node]' 
                            

    For HA controllers:

    # knife node run_list add <deviceHostname> \
      'role[ha-controller1],role[single-network-node]'
    
    # knife node run_list add <deviceHostname> \
      'role[ha-controller2],role[single-network-node]' 
                                

Troubleshooting OpenStack Networking

If you experience issues adding or deleting subnets, the problem may lie with dnsmasq, neutron-server, or neutron-dhcp-agent. The following steps will help you determine which component is at fault; an example command sequence is shown after the procedure.


Procedure 1.5. To identify OpenStack Networking issues

  1. Ensure that dnsmasq is running with pgrep -fl dnsmasq. If it is not, restart neutron-dhcp-agent.

  2. If dnsmasq is running, confirm that the corresponding namespace exists with ip netns list.

  3. Inspect the qdhcp-<networkUUID> namespace with ip netns exec qdhcp-<networkUUID> ip addr and ensure that the IP address on the interface is present and matches the one configured for dnsmasq.

    To verify what the expected IP address is, use neutron port-list and neutron port-show <portUUID>.

  4. Use cat /var/lib/neutron/dhcp/<networkUUID>/host to determine the leases that dnsmasq is configured with by OpenStack Networking.

    If the dnsmasq configuration is correct, but dnsmasq is not responding with leases and the bridge/interface is created and running, pkill dnsmasq and restart neutron-dhcp-agent.

    If dnsmasq does not include the correct leases, verify that neutron-server is running correctly and that it can communicate with neutron-dhcp-agent. If it is running correctly, and the bridge/interface is created and running, restart neutron-dhcp-agent.
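
For reference, a typical check sequence on the Network node might look like the following (the restart command assumes the standard neutron-dhcp-agent service name used by the Ubuntu and CentOS packages):

# pgrep -fl dnsmasq
# ip netns list
# ip netns exec qdhcp-<networkUUID> ip addr
# neutron port-list
# cat /var/lib/neutron/dhcp/<networkUUID>/host
# service neutron-dhcp-agent restart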

RPCDaemon

This section describes the Rackspace Private Cloud RPCDaemon, a utility that facilitates high availability of virtual routers and DHCP services in a Rackspace Private Cloud OpenStack cluster using OpenStack Networking (Neutron, formerly Quantum).

Generally, you will not need to interact directly with RPCDaemon. This information is provided to give you an understanding of how Rackspace has implemented HA in OpenStack Networking and to assist in the event of troubleshooting.

RPCDaemon overview

The RPCDaemon is a Python-based daemon that monitors network topology changes in an OpenStack cluster running OpenStack Networking and automatically adjusts the networking configuration to maintain availability of services, even in the event of an OpenStack Networking node failure. RPCDaemon supports both the Grizzly and Havana releases of OpenStack and is part of Rackspace Private Cloud v4.1.3 and v4.2.2.

Currently, OpenStack does not include built-in support for highly available virtual routers or DHCP services. These services are scheduled to a single network node and are not rescheduled when that node fails. Since these services are normally scheduled evenly, the failure of a single network node can cause failures in IP addressing and routing on a number of networks proportional to the number of network nodes in use.

To avert this risk, most production deployments of OpenStack use the nova-network driver in HA mode, or use OpenStack Networking with provider networks to externalize these services for high availability. However, these solutions reduce the utility of OpenStack's software-defined networking. Rackspace developed RPCDaemon to address these issues and improve the utility of OpenStack Networking to meet Rackspace Private Cloud production requirements.

RPCDaemon operation

RPCDaemon monitors the AMQP message bus for actionable events. It is automatically installed on a network node as part of the single-network-node role.

Three plugins are currently implemented:

  • DHCPAgent: Implements high availability in DHCP services.
  • L3Agent: Implements high availability in virtual routers.
  • Dump: Dumps message traffic. This is typically only used for development or troubleshooting purposes and is not discussed here.

DHCPAgent plugin

The DHCPAgent plugin performs the following tasks:

  • Periodically removes DHCP services from any OpenStack Networking DHCP agent that is no longer reporting itself as available.
  • Periodically provisions DHCP services on every Neutron DHCP agent node that does not already have them provisioned.
  • Ensures that DHCP services are deprovisioned on all Neutron DHCP agent nodes when a DHCP-enabled network is removed.

The operational effect of these actions is that when you create new DHCP-enabled networks, DHCP servers appear on every Neutron network node rather than on a single Neutron network node. While this slightly increases DHCP traffic, because each DHCP discovery request receives multiple offers, it does so safely: the OpenStack DHCP implementation uses DHCP reservations to ensure that virtual machines always boot with predictable IP addresses.

Because of this, DHCP requests can continue to be serviced by other available network nodes, even in the event of catastrophic failure of a single network node.

L3Agent plugin

The L3Agent plugin runs periodically and only monitors virtual routers that are currently assigned to L3 agents. If the L3Agent plugin observes an inactive L3 agent that OpenStack Networking shows as hosting a virtual router, then the L3Agent plugin deprovisions the virtual router from that node and reprovisions it on another active Neutron L3 agent node.

This reprovisioning does not occur immediately, and there will be some minimal network interruption while the virtual router is migrated. However, the corrective action happens without intervention, and any network outage is transient. This process allows higher availability of virtual routing, and the minimal interruption may be acceptable for some production workloads.
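
You can observe how the agents have scheduled these services with the standard Neutron commands, for example:

# neutron agent-list
# neutron dhcp-agent-list-hosting-net <networkUUID>
# neutron l3-agent-list-hosting-router <routerUUID>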

RPCDaemon configuration

Not all configuration options are currently exposed by the Rackspace Private Cloud cookbooks. This section describes the configuration values in the RPCDaemon configuration file, which is typically located at /etc/rpcdaemon.conf.

General RPCDaemon options

General daemon options are specified in the Daemon section of the configuration file. Available options include:

  • plugins: Space-separated list of plugins to load. Valid options include L3Agent, DHCPAgent, and Dump.
  • rpchost: Kombu connection URL for the OpenStack message server. In the case of RabbitMQ, an IP address is sufficient. See the Kombu documentation for more information on Kombu connection URLs.
  • pidfile: Location of the daemon pid file.
  • logfile: Location of the log file.
  • loglevel: Verbosity of logging. Valid options include DEBUG, INFO, WARNING, ERROR, and CRITICAL.
  • check_interval: The interval in seconds in which to run plugin checks.

DHCPAgent and L3Agent options

DHCPAgent plugin options are specified in the DHCPAgent section of the configuration file and L3Agent plugin options are specified in the L3Agent section of the configuration file.

Logs will also be sent to the logfile specified in the Daemon section, while the log level is independently configurable.

The following configuration options are available for the DHCPAgent and L3Agent plugins:

  • conffile: Path to the neutron configuration file.
  • loglevel: Verbosity of logging.
  • timeout: Maximum time for API calls to complete. This also affects failover speed.
  • queue_expire: Automatically terminate RabbitMQ queues if there is no activity within the specified time.
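
Putting these options together, a minimal /etc/rpcdaemon.conf might look like the following sketch (this assumes an INI-style layout with one section per plugin; the paths, host, and timing values are illustrative only):

    [Daemon]
    plugins = DHCPAgent L3Agent
    rpchost = <rabbitmqHostIP>
    pidfile = /var/run/rpcdaemon.pid
    logfile = /var/log/rpcdaemon.log
    loglevel = INFO
    check_interval = 60

    [DHCPAgent]
    conffile = /etc/neutron/neutron.conf
    loglevel = INFO
    timeout = 30

    [L3Agent]
    conffile = /etc/neutron/neutron.conf
    loglevel = INFO
    timeout = 30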

Dump

The Dump plugin options are specified in the Dump section of the configuration file. The Dump plugin is most useful when the loglevel option has been set to DEBUG. In this mode, dumped messages will be logged in the logfile specified in the Daemon section.

The Dump plugin is most useful when running in foreground mode. See Command Line Options for more information.

The following configuration options are available for the Dump plugin:

  • loglevel: Verbosity of logging. DEBUG will produce the most useful results.
  • queue: Queue to dump. Typically neutron, to view network-related messages.

Command line options

The command rpcdaemon currently uses two command-line options:

  • -d: Run in the foreground and do not detach. When running in the foreground, a pidfile is not written, the default log level is set to DEBUG, and the daemon logs to stderr rather than to the specified logfile. This is most useful for running the Dump plugin, but can be helpful during development as well.
  • -c: Specify the path to the configuration file. The default configuration file path is /usr/local/etc/rpcdaemon.conf, but init scripts in packaged versions of RPCDaemon pass -c /etc/rpcdaemon.conf.
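
For example, to run RPCDaemon in the foreground with the packaged configuration file:

# rpcdaemon -d -c /etc/rpcdaemon.conf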





