
Inside My Home Rackspace Private Cloud, OpenStack Lab, Part 4: Neutron

After following the first three posts, we now have a Rackspace Private Cloud powered by OpenStack running with two Controllers (HA) and three Computes. So now what? The first thing we need to do is get our hands dirty with the OpenStack Networking component, Neutron, and create a network that our instances can be spun up on. For the home lab I have dumb, unmanaged switches, and I take advantage of that by creating a Flat Network that gives my instances access out through my home LAN on the 192.168.1.0/24 subnet.

Logging on to the environment

We first need to get into the OpenStack lab environment, and there are a couple of routes. We can use the web dashboard, Horizon, which lives on the “API_VIP” IP I created when I set up my environment (see Step 9 in Part 3), i.e. https://192.168.1.253/ (accepting the SSL warning, as it uses a self-signed certificate). Or we can use the command line (CLI). The easiest way to use the CLI is to ssh to the first controller, openstack1 (192.168.1.101), change to the root user, then source the environment file (/root/openrc) that was created during installation; it sets the environment variables that allow you to communicate with OpenStack.

To use the CLI on the first controller, issue the following:

ssh openstack1
sudo -i
. openrc

The /root/openrc file contains the following:

# This file autogenerated by Chef
# Do not edit, changes will be overwritten
# COMMON OPENSTACK ENVS
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.253:5000/v2.0
export OS_AUTH_STRATEGY=keystone
export OS_NO_CACHE=1
# LEGACY NOVA ENVS
export NOVA_USERNAME=${OS_USERNAME}
export NOVA_PROJECT_ID=${OS_TENANT_NAME}
export NOVA_PASSWORD=${OS_PASSWORD}
export NOVA_API_KEY=${OS_PASSWORD}
export NOVA_URL=${OS_AUTH_URL}
export NOVA_VERSION=1.1
export NOVA_REGION_NAME=RegionOne
# EUCA2OOLs ENV VARIABLES
export EC2_ACCESS_KEY=b8bab4a938c340bbbf3e27fe9527b9a0
export EC2_SECRET_KEY=787de1a83efe4ff289f82f8ea1ccc9ee
export EC2_URL=http://192.168.1.253:8773/services/Cloud

These details match the admin user's credentials specified in the environment file that was created in Step 9 of Part 3.
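A quick sanity check that these credentials and the Keystone endpoint are working is to ask Keystone for a token and to list the users it knows about:

keystone token-get
keystone user-list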

With this loaded into our environment we can now use the command line clients to control our OpenStack cloud. These include:

  • nova for launching and controlling instances
  • neutron for managing networking
  • glance for manipulation of images that are used to create our instances
  • keystone for managing users and tenants (projects)

There are, of course, others such as cinder, heat, swift, etc., but they're not configured in the lab yet. To get an idea of what you can do with the CLI, head over to @eglute's blog for a very handy, quick “cheat sheet” of commands.
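As a quick check that each of these clients can reach the environment, the following read-only commands should all return without error:

nova service-list
neutron agent-list
glance image-list
keystone tenant-list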

Creating the home lab Flat Network

The five OpenStack servers, and everything else on my network, hang off a single subnet: 192.168.1.0/24. Every one of those devices gets an IP from that range and is configured to use a default gateway of 192.168.1.254, which is great because it means they all get Internet access.

I want my instances running under OpenStack to also have Internet access, and be accessible from the network (from my laptop, or through PAT on my firewall/router to expose some services such as a webserver running on one of my OpenStack instances). To do this I create a Flat Network, where I allocate a small DHCP range so as not to conflict with any other IPs or ranges currently in use.

For more information on Flat Networking, view @jimmdenton‘s blog post.

To create this Flat Network so that it co-exists with the home lab subnet of 192.168.1.0/24, I do the following:

1. First create the network (network_type=flat):

neutron net-create \
    --provider:physical_network=ph-eth1 \
    --provider:network_type=flat \
    --router:external=true \
    --shared flatNet

2. Next create the subnet:

neutron subnet-create \
    --name flatSubnet \
    --no-gateway \
    --host-route destination=0.0.0.0/0,nexthop=192.168.1.254 \
    --allocation-pool start=192.168.1.150,end=192.168.1.170 \
    --dns_nameserver 192.168.1.1 \
    flatNet 192.168.1.0/24

Now what’s going on in that subnet-create command is as follows:

--no-gateway specifies no default gateway, but…

--host-route destination=0.0.0.0/0,nexthop=192.168.1.254 looks suspiciously like a default route – it is.

The effect of --no-gateway is that something extra has to happen for an instance to reach the Metadata service (where it goes to get cloud-init details, SSH keys, etc.). As it can't rely on a gateway address (one doesn't exist) to reach the 169.254.0.0/16 network where the Metadata service lives, a route is injected into the instance's routing table instead.

That's Metadata sorted, but what about access to anything other than 192.168.1.0/24, i.e. everything else? This is taken care of by the host route above, which, with the values used (destination=0.0.0.0/0,nexthop=192.168.1.254), has the same effect as setting a default gateway. That nexthop address is the default gateway on my LAN, so the instance gets Internet access. With Neutron we can add a number of such routes and they are automatically created in the instance's routing table (there's an example of adding an extra route after these notes). Very handy.

--allocation-pool is the DHCP address pool. I already run DHCP on my network for everything else, but I deliberately keep the two ranges separate so they can never conflict. Here the pool runs from 192.168.1.150 to 192.168.1.170.

--dns_nameserver 192.168.1.1 sets the resolver to my NAS (192.168.1.1), which runs Dnsmasq and performs DNS resolution for all my LAN clients. As the instance gets an IP on this network, it can reach my DNS server.
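As an illustration of adding an extra route later, updating the subnet's host routes might look something like the following. The 10.0.0.0/8 destination is just a made-up example, and note that an update replaces the whole list of host routes, so the existing default route has to be repeated:

neutron subnet-update flatSubnet \
    --host-routes type=dict list=true \
    destination=0.0.0.0/0,nexthop=192.168.1.254 \
    destination=10.0.0.0/8,nexthop=192.168.1.254

You can check the result (host routes, allocation pool and DNS server) at any time with neutron subnet-show flatSubnet.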

Now that we have a network in place, I can spin up an instance – but before that, there are a couple of other housekeeping items that need to be performed first: creating/importing an SSH keypair and setting security group rules to allow me access (SSH) to the instance.

Creating/Importing keypairs

Keypairs in OpenStack are the same SSH public/private keypairs you would create when wanting passwordless (key-based) access to any Linux server. OpenStack keeps a copy of the public key of your keypair, and when you specify that key when booting an instance, it squirts it into the user's .ssh/authorized_keys file – meaning you can ssh to the instance using the private portion of your key.

To create a keypair for use with OpenStack, issue the following command:

nova keypair-add demo > demo.pem
chmod 0600 demo.pem

demo.pem is the private key, and the keypair is referred to by the name “demo” when booting an instance. Keep the private key safe and ensure it is only readable by you.

If creating a whole new keypair isn’t suitable (it’s an extra key to carry around), you can always use the one you’ve been using for years by importing it into OpenStack. To import a key, you’re effectively copying the public key into the database so that it can be used by OpenStack when you boot an instance. To do this, issue the following:

nova keypair-add --pub-key .ssh/id_rsa.pub myKey

This takes a copy of .ssh/id_rsa.pub and assigns the name myKey to it. You can now use your existing key to access new instances you spin up.
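Either way, you can list the keypairs (and their fingerprints) that Nova now knows about with:

nova keypair-list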

Creating default security group rules

Before we can spin up an instance (although technically this step could also be done after you have booted one up), we need to allow access to it: by default, no traffic can flow into it – not even a ping.

To allow pings (ICMP) and SSH for instances in the default security group (I tend not to allow anything more than that in this group), issue the following commands:

neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol tcp --direction ingress --port-range-min 22 --port-range-max 22 default
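To confirm the rules are now in place on the default group, list them:

neutron security-group-rule-list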

Adjusting m1.tiny so Ubuntu can boot with 512MB

My servers don't have much memory – the Computes (the hypervisors, where the instances actually run) only have 4GB RAM each – so I value the m1.tiny flavor when trying various things, especially when I need to spin up a lot of instances. The problem is that, by default in Havana, m1.tiny specifies a 1GB disk for an instance. An Ubuntu image requires more than 1GB, and OpenStack is unable to “shrink” an instance's disk smaller than the image it was created from. To fix this we amend m1.tiny so that the disk becomes “unlimited” again, just as it was in Grizzly and before. To do this we issue the following:

nova flavor-delete 1
nova flavor-create m1.tiny 1 512 0 1

Havana's nova command is unable to amend flavors, so we delete the flavor and recreate it to mimic this behaviour. (Horizon does the same thing behind the scenes when you edit a flavor.)
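For reference, the positional arguments to nova flavor-create are name, ID, RAM (in MB), disk (in GB) and vCPUs, so the command above recreates flavor ID 1 as m1.tiny with 512MB RAM, a root disk of 0 (meaning the disk is sized from the image) and one vCPU. You can confirm the new flavor with:

nova flavor-list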

We’re now ready to boot an instance, or are we?

Loading images

A Rackspace Private Cloud can automatically pull down and upload images into Glance by setting the following in the environment JSON file and running chef-client:

"glance": {
  "images": [
    "cirros",
    "precise"
  ],
  "image" : {
    "cirros": "https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img",
    "precise": "http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img"
  },
  "image_upload": true
 },

This is handy, but pulling images from the Internet at installation time can also add to RPC installation times. Tune this to suit. What I tend to do is download the images and store them on my Web server (NAS2, 192.168.1.2), which is connected to the same switch as my OpenStack environment, and then change the image URLs to point at the NAS2 Web server instead. If you do this, wget the URLs above and store the files in /share/Web; when you enable the default Webserver service on the QNAP they become available on the network. Change the above JSON snippet for Chef to the following:

"glance": {
  "images": [
    "cirros",
    "precise"
  ],
  "image" : {
    "cirros": "http://192.168.1.2/cirros-0.3.0-x86_64-disk.img",
    "precise": "http://192.168.1.2/precise-server-cloudimg-amd64-disk1.img"
  },
  "image_upload": true
 },
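For reference, pulling the images down on to NAS2 might look something like this (assuming, as above, that /share/Web is the directory the QNAP Webserver service serves files from):

cd /share/Web
wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img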

To load images using the glance client issue the following:

glance image-create \
    --name='precise-image' \
    --disk-format=qcow2 \
    --container-format=bare \
    --public < precise-server-cloudimg-amd64-disk1.img

This loads the precise-server-cloudimg-amd64-disk1.img image that you have downloaded manually into Glance. If you don't have the image downloaded, you can instead have Glance reference it on a web server, rather than storing it in Glance itself:

glance image-create \
    --name='precise-image' \
    --disk-format=qcow2 \
    --container-format=bare \
    --public \
    --location http://webserver/precise-server-cloudimg-amd64-disk1.img
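In either case, you can check that the image has registered and is active with:

glance image-show precise-image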

Booting an instance

Now that we have an OpenStack lab and a network, have set up our keypair and have opened up access to our instances, we can boot an instance. To do this we first list the available images:

nova image-list

And then we can use one of those images for our instance.

The next thing to do is list the Neutron networks available:

neutron net-list

Now that we have these pieces of information, along with the name of the keypair (nova keypair-list), we can boot our instance (I grab the UUID of “flatNet” and store it in a variable to automate this step):

flatNetId=$(neutron net-list | awk '/flatNet/ {print $2}')
nova boot myInstance \
    --image precise-image \
    --flavor m1.tiny \
    --key_name demo \
    --security_groups default \
    --nic net-id=$flatNetId

You can watch this boot up by viewing the console output with the following command:

nova console-log myInstance
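You can also keep an eye on the build status (and see any error that is reported) with:

nova show myInstance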

When this has booted up, I'll have an instance with an IP from the 192.168.1.150 to 192.168.1.170 allocation pool (in this case 192.168.1.150) that is accessible from my network – check this by viewing the nova list output:

nova list

This will show you the IP that the instance has been assigned. As this is on my home LAN, I can SSH to the instance as if it were a server connected to my switch:

root@openstack1:~# ssh -i demo.pem root@192.168.1.150
Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-57-virtual x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Fri Feb 14 16:05:59 UTC 2014
System load: 0.05 Processes: 62
 Usage of /: 39.1% of 1.97GB Users logged in: 0
 Memory usage: 8% IP address for eth0: 192.168.1.150
 Swap usage: 0%
Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:

http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@myInstance:~#

Now that the basics are out of the way, the next blog post will look at more advanced uses of OpenStack and our instances!

About the Author

This is a post written and contributed by Kevin Jackson.

Kevin Jackson, the author of OpenStack Cloud Computing Cookbook, is part of the Rackspace Private Cloud Team and focuses on assisting enterprises to deploy and manage their Private Cloud infrastructure. Kevin also spends his time conducting research and development with OpenStack, blogging and writing technical white papers.

