These Ubuntu Hardy Heron articles will take you from a 'barebones' Ubuntu Hardy Cloud Server to a secured and up-to-date Cloud Server ready for your server software (or whatever you use the Cloud Server for).
Securing your Cloud Server as soon as possible is a great way of starting your Cloud Server administration.
As soon as you have your IP address and password for your Cloud Server, log in via SSH:
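The login command itself isn't shown here; it was presumably of this form (123.45.67.89 is a placeholder for your Cloud Server's IP address):

```shell
# Log in to the Cloud Server as root (replace the placeholder IP with yours)
ssh root@123.45.67.89
```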
If you rebuilt your Cloud Server, you may get a message informing you that the "remote host identification has changed".
When you log into a Cloud Server via SSH, one of the security features is matching the remote host with known keys. When you rebuild a Cloud Server, the remote host key changes. As such, your computer thinks there is something dodgy going on.
All you need to do is remove the older entry for the Cloud Server IP:
On your LOCAL computer, edit the SSH known_hosts file and remove any entries that point to your Cloud Server IP address.
If you are not using Linux or a Mac on your LOCAL computer, the location of the known_hosts file will differ. Please refer to your own OS for details of where this file is kept.
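On Linux or a Mac you can either edit ~/.ssh/known_hosts by hand, or use ssh-keygen, which has a -R flag for removing a host's entry (again, the IP below is a placeholder):

```shell
# Remove the stale key for the Cloud Server from ~/.ssh/known_hosts
ssh-keygen -R 123.45.67.89
```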
Once logged in to the Cloud Server, immediately change your root password to one of your choosing.
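The command here is the standard one (you will be prompted for the new password twice):

```shell
# Change the current (root) user's password
passwd
```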
Add an admin user (I've used the name demo here but any name will do).
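On Ubuntu the usual command is adduser, which creates the home directory and prompts for a password and user details:

```shell
# Create the admin user 'demo' with a home directory (interactive prompts follow)
adduser demo
```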
As you know, we never log in as the root user (this initial setup is the only time you would need to log in as root). As such, the main administration user (demo) needs to have sudo (Super User) privileges so they can, with a password, complete administrative tasks.
To configure this, give the 'visudo' command:
At the end of the file add:
demo ALL=(ALL) ALL
One effective way of securing SSH access to your Cloud Server is to use a public/private key pair. This means that a 'public' key is placed on the server and the 'private' key stays on your local workstation. Once password logins are disabled, this makes it impossible for someone to log in using just a password - they must have the private key.
The first step is to create a folder to hold your keys. On your LOCAL workstation:
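Assuming the default ~/.ssh location, the folder is created like this (the -p flag is my addition: it avoids an error if the folder already exists):

```shell
# Create a directory for SSH keys in your home folder
mkdir -p ~/.ssh
```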
That's assuming you use Linux or a Mac and the folder does not exist. I will do a separate article for key generation using Putty for Windows users.
To create the ssh keys, on your LOCAL workstation enter:
ssh-keygen -t rsa
If you do not want a passphrase then just press enter when prompted.
That created two files in the .ssh directory: id_rsa and id_rsa.pub. The pub file holds the public key. This is the file that is placed on the Cloud Server.
The other file is your private key. Never show, give away or keep this file on a public computer.
Now we need to get the public key file onto the Cloud Server.
We'll use the 'scp' (secure copy) command for this as it is an easy and secure means of transferring files.
Still on your LOCAL workstation enter this command:
scp ~/.ssh/id_rsa.pub demo@123.45.67.89:/home/demo/
When prompted, enter the demo user password.
Change the IP address to your Cloud Server and the location to your admin user's home directory (remember the admin user in this example is called demo).
OK, so now we've created the public/private keys and we've copied the public key onto the Cloud Server.
Now we need to sort out a few permissions for the ssh key.
On your Cloud Server, create a directory called .ssh in the 'demo' user's home folder and move the pub key into it.
mkdir /home/demo/.ssh
mv /home/demo/id_rsa.pub /home/demo/.ssh/authorized_keys
Now we can set the correct permissions on the key:
chown -R demo:demo /home/demo/.ssh
chmod 700 /home/demo/.ssh
chmod 600 /home/demo/.ssh/authorized_keys
Again, change the 'demo' user and group to your admin user and group.
Done. It may seem a long set of steps but once you have done it once you can see the order of things: create the key on your local workstation, copy the public key to the Cloud Server and set the correct permissions for the key.
Next we'll change the default SSH configuration to make it more secure:
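The edit command isn't shown here; on Ubuntu the file in question is /etc/ssh/sshd_config:

```shell
# Open the SSH daemon configuration in an editor
sudo nano /etc/ssh/sshd_config
```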
The main things to change (or check) are:
Port 30000    <--- change to a port of your choosing
Protocol 2
PermitRootLogin no
PasswordAuthentication no
UseDNS no
AllowUsers demo
The settings are fairly self explanatory but the main thing is to move the server from the default port of 22 to one of your choosing, turn off root logins and define which users can log in.
PasswordAuthentication has been turned off as we set up the public/private key earlier. Do note that if you intend to access your Cloud Server from different computers you may want to leave PasswordAuthentication set to yes. Only use the private key if the local computer is secure (i.e. don't put the private key on a work computer).
Note that we haven't enabled the new settings - we will restart SSH in a moment but first we need to create a simple firewall using iptables.
As mentioned, the next step is to set up iptables so we have a more secure installation. To start with, we're going to have three ports open: ssh, http and https.
We're going to create two files, /etc/iptables.test.rules and /etc/iptables.up.rules. The first is a temporary (test) set of rules and the second the 'permanent' set of rules (this is the one iptables will use when starting up after a reboot for example).
Note that we are logged in as the root user. This is the only time we will log in as the root user. As such, if you are completing this step at a later date using the admin user, you will need to put a 'sudo' in front of the commands.
So, as the root user, save any existing rules to /etc/iptables.up.rules:
iptables-save > /etc/iptables.up.rules
Now let's see what's running at the moment:
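The command for listing the active rules is:

```shell
# List the current iptables rules (as root; prefix with sudo otherwise)
iptables -L
```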
You will see something similar to this:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
As you can see, we are accepting anything from anyone on any port and allowing anything to happen.
One theory is that if there are no services running then it doesn't matter. I disagree. If connections to unused (and popular) ports are blocked or dropped, then the vast majority of script kiddies will move on to another machine where ports are accepting connections. It takes two minutes to set up a firewall - is it really worth not doing?
Let's assume you've decided that you want a firewall. Create the file /etc/iptables.test.rules and add some rules to it.
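For example, as root:

```shell
# Create and edit the temporary rules file (prefix with sudo if not root)
nano /etc/iptables.test.rules
```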
Look at this example:
*filter

#  Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

#  Accepts all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

#  Allows all outbound traffic
#  You can modify this to only allow certain traffic
-A OUTPUT -j ACCEPT

#  Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT

#  Allows SSH connections
#
#  THE --dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
#
-A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT

#  Allow ping
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

#  Log iptables denied calls
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

#  Reject all other inbound - default deny unless explicitly allowed policy
-A INPUT -j REJECT
-A FORWARD -j REJECT

COMMIT
The rule set is very simple and is not designed to give you the ultimate firewall. It is a simple beginning.
Hopefully you will begin to see the pattern of the configuration file. It isn't complicated and is very flexible. You can change and add ports as you see fit.
Good. Defined your rules? Then let's apply those rules to our server:
iptables-restore < /etc/iptables.test.rules
Let's see if there is any difference:
Notice the change? (If there is no change in the output, you did something wrong. Try again from the start).
Have a look at the rules and see exactly what is being accepted, rejected and dropped. Once you are happy with the rules, it's time to save our rules permanently:
iptables-save > /etc/iptables.up.rules
Now we need to ensure that the iptables rules are applied when we reboot the server. At the moment, the changes will be lost and it will go back to allowing everything from everywhere.
Open the file /etc/network/interfaces
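For example:

```shell
# Edit the network interfaces file (prefix with sudo if not logged in as root)
nano /etc/network/interfaces
```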
Add a single line (shown below) just after 'iface lo inet loopback':
...
auto lo
iface lo inet loopback
pre-up iptables-restore < /etc/iptables.up.rules

# The primary network interface
...
As you can see, this line will restore the iptables rules from the /etc/iptables.up.rules file. Simple but effective.
Now we have our basic firewall humming along and we've set the ssh configuration. Now we need to test it. Reload ssh so it uses the new ports and configurations:
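On Hardy this was presumably done through the init script (prefix with sudo if you are not root):

```shell
# Reload the SSH daemon so the new sshd_config takes effect
/etc/init.d/ssh reload
```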
Don't logout yet...
As you have an already established connection you will not be locked out of your ssh session (look at the iptables config file: it accepts already established connections).
On your LOCAL computer, open a new terminal and log in using the administration user (in this case, demo) to the port number you configured in the sshd_config file:
ssh -p 30000 demo@123.45.67.89
The reason we use a new terminal is that if you can't log in you will still have the working connection to try and fix any errors.
Cloud Servers also has an excellent online console in the control panel, so if it all goes horribly wrong, you can log in to your Cloud Server from the management area.
If all goes well you should log in without a password to a plain terminal:
First thing is to confirm what OS we're using. We know we should be using Ubuntu Hardy but let's see:
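The usual way to check on Ubuntu is to read /etc/lsb-release:

```shell
# Display the distribution name and release
cat /etc/lsb-release
```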
You should get an output similar to this:
DISTRIB_ID=Ubuntu DISTRIB_RELEASE=8.04 DISTRIB_CODENAME=hardy DISTRIB_DESCRIPTION="Ubuntu 8.04"
Good. Memory usage should be very low at this point but let's check using 'free -m' (the -m switch displays the result in MB, which I find easier to read):
It's nice to know what is going on so let's look at that output:
                   total       used       free     shared    buffers     cached
Mem:                 254         43        211          0          3         74
-/+ buffers/cache:               26        228
Swap:                511          0        511
The line to take notice of is the second one, as the first line includes cached memory - in this demo Cloud Server I have 254MB of usable memory with 26MB actually used, 228MB free and no swap used. Nice.
Let's make the terminal a bit more attractive and a bit more informative by adding a few lines to our .bashrc file.
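Open the file on the Cloud Server:

```shell
# Open the .bashrc file in your home directory
nano ~/.bashrc
```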
Add the next few lines at the end of the existing text. The following line will make the terminal show the server name in colour and display the working directory (the directory we are in) in a different colour:
export PS1='\[\033[0;35m\]\h\[\033[0;33m\] \w\[\033[00m\]: '
If you look at the existing content of the .bashrc file, you may notice some 'PS1' content already - you can change the existing content if you prefer. This method simply changes the default output.
Now we can add aliases to the file. Aliases are short cuts to commands or sequences of commands. I've included a few below but you can have as many or as few as you want.
alias free="free -m"
alias update="sudo aptitude update"
alias install="sudo aptitude install"
alias upgrade="sudo aptitude safe-upgrade"
alias remove="sudo aptitude remove"
The examples above are pretty simple. Instead of typing 'free -m' every time I want to look at the memory usage, I just type 'free'. Typing 'sudo aptitude install' can get tedious, so I just type 'install'.
I still need to provide my password for the sudo command to work, but it is more productive/quicker/easier to have short cuts.
To activate the changes issue this command:
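The command elided here is presumably 'source' (or its shorthand, '.'):

```shell
# Re-read .bashrc so the new prompt and aliases apply to the current session
source ~/.bashrc
```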
You should now see the Cloud Server name in purple and the working directory in brown.
To change the colours to your choosing, adjust the 0;35m and the 0;33m values in the 'export PS1' line of your .bashrc. For example:
export PS1='\[\033[0;32m\]\h\[\033[0;36m\] \w\[\033[00m\]: '
would give you a green and blue output.
The Ubuntu Hardy Cloud Server comes with a basic set of repositories but let's have a check to see what sources we are using:
sudo nano /etc/apt/sources.list
You should see the default list as follows:
deb http://archive.ubuntu.com/ubuntu/ hardy main restricted universe
deb-src http://archive.ubuntu.com/ubuntu/ hardy main restricted universe
deb http://archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe
deb-src http://archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe
deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe
deb-src http://security.ubuntu.com/ubuntu hardy-security main restricted universe
You can, of course, add more repositories whenever you want to but I would just give a word of caution: Some of the available repositories are not officially supported and may not receive any security updates should a flaw be discovered.
Keep in mind it is a server we are building and security and stability are paramount.
Now we can update the sources so we have the latest list of software packages:
sudo aptitude update
NOTE: If you have used the .bashrc shown above you just need to enter 'update' as the alias will use the entire command. I've put the whole thing here so you know what is happening.
Remember the Hardy Cloud Server is a bare-bones install, so we need to set the system locale:
sudo locale-gen en_US.UTF-8
...
sudo /usr/sbin/update-locale LANG=en_US.UTF-8
Now we have updated the sources.list repositories and set the locale, let's see if there are any upgrade options:
sudo aptitude safe-upgrade
Followed by a:
sudo aptitude full-upgrade
Once any updates have been installed, we can move on to installing some essential packages.
Ubuntu Hardy has some handy meta-packages that include a set of predefined programs needed for a single purpose.
So instead of installing a dozen different package names, you can install just one meta-package. One such package is called 'build-essential'. Issue the command:
sudo aptitude install build-essential
Notice the programs that are to be installed include gcc, make, patch and so on. All these are needed for many other programs to install properly. A neat system indeed.
Enter 'Y' and install them.
Quite a lot happened here, but now we have a secured and updated Cloud Server.
The console is now informative and less drab, locales have been configured and the meta-package build-essential has been installed.
Once you have done this once or twice it doesn't take long at all, and we now have the ideal base to install the 'meat' of our server.
© 2011-2013 Rackspace US, Inc.
Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License