Debian Lenny - Setup

These articles will take you from a 'barebones' Debian Lenny Cloud Server to a secured and up-to-date Cloud Server ready for your server software (or whatever else you use the Cloud Server for).

This article follows the same pattern as the other distribution setup articles, so if you've read one of those, nothing here will seem out of the ordinary.

Not only that, you will have a better understanding of what is going on and, more importantly, why it's going on.


Log in

On your LOCAL computer, edit the SSH known_hosts file and remove any entries that point to your Cloud Server IP address. If this is a brand new Cloud Server then you will not need to do this, but a reinstall will result in a different signature.

nano ~/.ssh/known_hosts

If you are not using Linux on your LOCAL computer, the location of the known_hosts file will differ. Please refer to your own OS for details of where this file is kept.
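If you'd rather not hunt through the file by hand, you can strip the stale entry with a one-liner. This is a sketch: 203.0.113.10 is a placeholder for your Cloud Server's IP address, and `ssh-keygen -R <ip>` achieves the same thing where it is available:

```shell
# Remove any known_hosts lines matching the old server key
# (203.0.113.10 is a placeholder - substitute your Cloud Server's IP address)
if [ -f ~/.ssh/known_hosts ]; then
    sed -i '/^203\.0\.113\.10[ ,]/d' ~/.ssh/known_hosts
fi
```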

As soon as you have the IP address and root password for your VPS, log in via SSH:

ssh root@

User administration

Now that we're logged in to the VPS, immediately change the root password:

passwd

Add an admin user (I've used the name demo here but any name will do).

adduser demo

As you know, we never log in as the root user (this initial setup is the only time you need to log in as root). The main administration user (demo) therefore needs sudo (superuser) privileges so that, with a password, they can complete administrative tasks.

Give the 'visudo' command to safely edit the sudoers file:

visudo

At the end of the file add:

demo   ALL=(ALL) ALL

SSH preparation

One effective way of securing SSH access to your Cloud Server is to use a public/private key pair. This means that a 'public' key is placed on the server and the 'private' key stays on your local workstation. It then becomes impossible for someone to log in using just a password - they must also have the private key.

This is very simple with ssh-copy-id.

We already have our admin user created (demo), so on your local workstation enter the command:

ssh-copy-id -i ~/.ssh/ demo@

We use the -i option to specify which file (identity) to copy across to the Cloud Server. The user is then specified followed by the IP address of the Cloud Server.

So what happens when the command is entered? First, you are asked for the user's password so the command can connect to the Cloud Server. It then creates a 'hidden' directory called .ssh in the user's home directory and copies the public key into a file named 'authorized_keys'.

It then automatically changes the permissions so that only the owner (demo) can read or write to the file. Neat.
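For the curious, here is a sketch of what ssh-copy-id does behind the scenes, run on the Cloud Server as the demo user. The PUBKEY path is an assumption - it stands in for a public key you have already copied over (with scp, for example):

```shell
# Manual equivalent of ssh-copy-id, run on the Cloud Server as the demo user.
# PUBKEY is an assumed location for a key copied over beforehand (e.g. via scp).
PUBKEY=/tmp/id_rsa.pub
mkdir -p ~/.ssh && chmod 700 ~/.ssh            # hidden directory, owner-only access
if [ -f "$PUBKEY" ]; then
    cat "$PUBKEY" >> ~/.ssh/authorized_keys    # append the public key
    chmod 600 ~/.ssh/authorized_keys           # owner-only read/write
fi
```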

It's always a good idea to check the settings on something as important as this, so let's have a quick look at the permissions:

ls -al /home/demo/.ssh/authorized_keys
-rw------- 1 demo demo 394 Sep  6 09:23 /home/demo/.ssh/authorized_keys

Perfect. You can also open the authorized_keys file and make sure only your key was copied across and it is not full of unknown keys.

Remember that this is the only time you'll need to enter the SSH password: the file we just copied over authorizes the admin user 'demo' to SSH in without it - but only from a workstation that holds the matching private key. It won't work from just any workstation.

SSH config

Next we'll change the default SSH configuration to make it more secure:

nano /etc/ssh/sshd_config

You can use this SSH configuration as an example.

The main things to change (or check) are:

Port 30000                           <--- change to a port of your choosing
Protocol 2
PermitRootLogin no
PasswordAuthentication no
UseDNS no
AllowUsers demo

The settings are fairly self-explanatory, but the main things are to move SSH from the default port of 22 to one of your choosing, turn off root logins, and define which users can log in.

PasswordAuthentication has been turned off because we set up the public/private key earlier. Do note that if you intend to access your Cloud Server from different computers, you may want to leave PasswordAuthentication set to yes. Only rely on the private key alone if the local computer is secure.
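Before reloading SSH, it's worth a sanity check: `sudo /usr/sbin/sshd -t` reports any syntax errors in the file, and a quick grep (a sketch, assuming the settings shown above) displays the security-critical lines at a glance:

```shell
# Show the security-critical sshd settings at a glance
CONF=/etc/ssh/sshd_config
if [ -f "$CONF" ]; then
    grep -E '^(Port|Protocol|PermitRootLogin|PasswordAuthentication|UseDNS|AllowUsers)' "$CONF"
fi
```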


Right, now we have the basics of logging in and securing SSH done.

Next thing is to set up our iptables so we have a more secure installation. To start with, we're going to have three ports open: ssh, http and https.

We're going to create two files, /etc/iptables.test.rules and /etc/iptables.up.rules. The first is a temporary (test) set of rules and the second the 'permanent' set of rules (this is the one iptables will use when starting up after a reboot for example).

Note: We are still logged in as the root user. This is a new, or reinstalled Cloud Server, and it is the only time we will ever log in as root. If you are doing this later, when logged in as the admin user, you will need to enter the command:

sudo -i

The sudo -i command gives you a full root shell. You will need to do this when changing the iptables rules, as simply adding a standard 'sudo' in front of the commands won't work.
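The reason a plain 'sudo' prefix fails here is that the '>' redirection is performed by your own non-root shell before sudo even runs:

```shell
# This fails with 'Permission denied': the redirection into /etc/ is done
# by YOUR (non-root) shell, not by the elevated iptables-save command:
#   sudo iptables-save > /etc/iptables.up.rules
#
# If you don't want a full root shell (sudo -i), run the whole pipeline
# under a root shell instead:
#   sudo sh -c 'iptables-save > /etc/iptables.up.rules'
```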

So, as the root user, save any existing rules to /etc/iptables.up.rules:

iptables-save > /etc/iptables.up.rules

Now let's see what's running at the moment:

iptables -L

You will see something similar to this:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

As you can see, we are accepting anything from anyone on any port and allowing anything to happen.

One theory is that if there are no services running then it doesn't matter. I disagree. If connections to unused (and popular) ports are blocked or dropped, then the vast majority of script kiddies will move on to another machine where ports are accepting connections. It takes two minutes to set up a firewall - is it really worth not doing?

Let's assume you've decided that you want a firewall. Create the file /etc/iptables.test.rules and add some rules to it.

nano /etc/iptables.test.rules

Look at the following example iptables configuration:


*filter

#  Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

#  Accepts all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

#  Allows all outbound traffic
#  You can modify this to only allow certain traffic
-A OUTPUT -j ACCEPT

# Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT

#  Allows SSH connections - the port must match the one set in sshd_config
-A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT

# Allow ping
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

# log iptables denied calls
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

# Reject all other inbound - default deny unless explicitly allowed policy
-A INPUT -j REJECT
-A FORWARD -j REJECT

COMMIT


These rules are very simple and are not designed to give you the ultimate firewall. They are a simple beginning.

Hopefully you will begin to see the pattern of the configuration file. It isn't complicated and is very flexible. You can change and add ports as you see fit.

Good. Defined your rules? Then let's apply those rules to our server:

iptables-restore < /etc/iptables.test.rules

Let's see if there is any difference:

iptables -L

Notice the change? (If there is no change in the output, you did something wrong. Try again from the start).

Have a look at the rules and see exactly what is being accepted, rejected and dropped. Once you are happy with the rules, it's time to save our rules permanently:

iptables-save > /etc/iptables.up.rules

Now we need to ensure that the iptables rules are applied when we reboot the server. At the moment, the changes will be lost and it will go back to allowing everything from everywhere.

Open the file /etc/network/interfaces

nano /etc/network/interfaces

Add a single line (shown below) just after 'iface lo inet loopback':

auto lo
iface lo inet loopback
pre-up iptables-restore < /etc/iptables.up.rules

# The primary network interface

As you can see, this line will restore the iptables rules from the /etc/iptables.up.rules file. Simple but effective.
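If you'd rather keep /etc/network/interfaces untouched, Debian's ifupdown also runs any executable script placed in /etc/network/if-pre-up.d/ before bringing up an interface. A minimal sketch (the file name 'iptables' is our choice; remember to make it executable with chmod +x):

```shell
#!/bin/sh
# /etc/network/if-pre-up.d/iptables - restore the firewall rules before any
# interface comes up. Create as root and run: chmod +x /etc/network/if-pre-up.d/iptables
/sbin/iptables-restore < /etc/iptables.up.rules
```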


We now have our basic firewall humming along and the SSH configuration set. Next we need to test it. Reload SSH so it uses the new port and configuration:

/etc/init.d/ssh reload

Don't logout yet...

On your LOCAL computer, open a new terminal and log in using the administration user (in this case, demo) to the port number you configured in the sshd_config file:

ssh -p 30000 demo@

The reason we use a new terminal is that if you can't log in, you will still have the working connection open to fix any errors.

Most Cloud Server hosts also provide a web-based console, so if it all goes horribly wrong you can still log in to your Cloud Server from the host's management area.

If all goes well, you should log in without a password to a plain terminal.


OS check and Free

Let's confirm the Linux type and version:

cat /etc/issue

I hope you get something like this:

Debian GNU/Linux 5.0

If not, you're probably logged into the wrong VPS.....

We can start our server administration straight away by looking at the memory usage:

free -m

My test VPS has 256MB of memory and 'free -m' reports usage in MB as follows:

             total       used       free     shared    buffers     cached
Mem:           254         37        217          0          0          9
-/+ buffers/cache:         26        228
Swap:          511          0        511

Concentrate on the second line of the output and you'll see that I am actually using 26MB of RAM with 228MB free. The first line suggests I was already using 37MB, but some of that is memory the kernel holds in buffers and cache, which it releases when applications need it.


You'll notice the terminal is a little bland and is not very informative. Let's add some colour and let it tell us which Cloud Server we are logged into and what directory we are in. It's actually quite easy to forget simple things like this if you have more than one or two Cloud Servers!

We'll do this by editing .bash_profile to fit our needs:

nano ~/.bash_profile

To add some colour to the terminal and to show who we are and where we are, add the following line at the end of the file:

export PS1='\[\033[0;35m\]\h\[\033[0;33m\] \w\[\033[00m\]: '

That will show the VPS name in purple and the working directory in brown.
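For reference, here is the same line with each escape sequence annotated. The \033[…m sequences are standard ANSI colour codes, and the \[ \] pairs tell bash that the enclosed characters take up no space on screen (so line editing doesn't break):

```shell
# \[\033[0;35m\]  switch to purple          \h  hostname of the VPS
# \[\033[0;33m\]  switch to brown/yellow    \w  current working directory
# \[\033[00m\]    reset to the default colour
export PS1='\[\033[0;35m\]\h\[\033[0;33m\] \w\[\033[00m\]: '
```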

Now we can add some aliases. An alias is simply a shortcut to a command or sequence of commands.

Some examples are:

alias free="free -m"
alias aptitude="sudo aptitude"
alias update="sudo aptitude update"
alias upgrade="sudo aptitude upgrade"
alias install="sudo aptitude install"
alias remove="sudo aptitude remove"

The first example gives the memory usage in MB whenever I enter 'free' and the rest are aliases involved with the aptitude command.

Feel free to change the alias names to something else and add and remove more as you see fit. You'll get into your own habits pretty quickly.

Once happy, save the file and to activate the changes enter:

source ~/.bash_profile

The terminal is more usable now that the VPS name and current directory are shown.

Now we have some customization completed, we need to update the software that's installed.

Update Software

First thing is to update the sources.list. This is a file which contains a list of repositories from which to fetch software and updates.

Fire up your favorite text editor (I use nano) and open the sources.list file:

sudo nano /etc/apt/sources.list

The following list is installed by default:

deb lenny main
deb-src lenny main

deb lenny/updates main contrib
deb-src lenny/updates main contrib

Be wary of entering repositories which are not designed for servers, as some repositories, whilst 'official', do not receive security updates. The list above will suffice for most servers.


Now we need to update the list of packages available to us:

sudo aptitude update

Remember that if you have set your .bash_profile as suggested above, you will not need to enter the full command. You will only need to enter the alias 'update'.

Good. Now we have an updated list of packages we can install.

Locales

Before we install anything else we need to configure the locales package:

sudo aptitude install locales

and then configure them:

sudo dpkg-reconfigure locales

During the configuration, simply select the locale(s) you want available on your VPS. In my case I selected 'en_US.UTF-8 UTF-8'.


Now we can get down and install the latest security updates with a full upgrade command:

sudo aptitude upgrade
sudo aptitude dist-upgrade

Halfway through the first command it will pause and ask if you want to set your timezone. Set it here by following the terminal prompts; UTC is usually a good choice for a server timezone (enter '12' and then 'UTC').


You will have noticed that quite a few programmes were upgraded, so now we need to reboot the Cloud Server.

There's rarely any need to reboot a Linux-based server, but as we have changed quite a few base packages it's a good idea this time:

sudo shutdown -r now

The reboot should only take a few seconds (mine took around 10 seconds before I could log in again).

Now log back in to your VPS ready for the next stage of the Cloud Server setup:

ssh -p 30000 demo@


Let's get started by installing screen. This is a great application that allows 'virtual' terminals to be opened in one console. Switching between them is done with the press of a key.

The advantages are that you can be working on more than one shell at a time - say, one installing software and another monitoring network activity - all without having more than one physical shell open. If the SSH connection is cut for some reason, or you have to leave the room, you can simply close the terminal and the work will carry on in the background.

I highly recommend installing and getting used to using screen. This screen tutorial gives an excellent introduction.

sudo aptitude install screen

To start a screen session, simply enter the command:

screen

Press the space bar to remove the introduction page and to activate any custom bash_profile entries, enter:

source ~/.bash_profile
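Once screen is running, everything is driven by keystrokes prefixed with Ctrl-a. This cheat sheet covers the handful of commands you'll use constantly:

```shell
# Starting and reattaching (run from a normal shell):
#   screen          start a new session
#   screen -ls      list running sessions
#   screen -r       reattach to a detached session
# Inside a session (press Ctrl-a, then the key):
#   Ctrl-a c        create a new window
#   Ctrl-a n / p    switch to the next / previous window
#   Ctrl-a d        detach, leaving everything running
```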

Build essentials

Most software requires libraries and compilers to work properly. As you would imagine, they (nearly) all share the same libraries and compilers. If they didn't it would be a bit messy really.

Debian provides what are known as 'meta-packages': defined sets of programmes for particular situations. One such set includes the 'essential' programmes described above, such as shared libraries and compilers.

This is a nice time saver and ensures we don't forget anything. So go ahead and install the 'essentials':

sudo aptitude install build-essential

Have a look at what is going to be installed and, once you are happy with the build-essential package, press 'Y' to continue.

Quite a lot has happened here but now we have a secured Cloud Server.

The console is now informative and less drab and we can use the handy 'screen' tool.

The Debian sources have been updated, the Cloud Server upgraded and the meta-package build-essential has been installed.

If you do this more than once or twice it doesn't take long at all and we now have the ideal base to install the 'meat' of our server - whether the primary task is to serve web pages, act as a database server, be a file repository and so on.

© 2015 Rackspace US, Inc.

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License

See license specifics and DISCLAIMER