Installing and Configuring LVS-TUN

Filed in Product & Development by Brandon Woodward | September 22, 2010 2:27 pm

EDITOR’S NOTE: Due to a global shortage of IPv4 addresses, we are no longer allowed to give out additional IPs unless they are for SSL certificates. This is a Rackspace policy, and we cannot add an IP for any other reason.

The throughput of an individual server can be the bottleneck for almost any Cloud-based deployment.  With the restrictions placed on a shared environment, you may find that you can no longer grow your solution horizontally behind a proxy load balancer.  There are ways to work around this, the first and easiest being DNS round robin load balancing. DNS round robin isn't smart, though: it keeps rotating through its records even if a node goes down, which is a huge downside.

You've got other options, though; I want to introduce you to LVS-TUN, a tunneling load balancer.  The Linux Virtual Server project has been around for a while now and has gone through several iterations, but LVS-TUN seems to work best on our infrastructure.

http://www.linuxvirtualserver.org/[1]

LVS-TUN is a tunneling load balancer solution: all incoming requests pass through the load balancer, which forwards each packet to a web node. The web node then responds directly to the client, without proxying the reply back through the load balancer. This type of solution can allow for geo-load balancing, but more importantly it lets you use the combined outbound bandwidth of all the web nodes instead of relying on the limited throughput of a single load balancer.

PREREQUISITES

Networking:

A shared IP address must be common to all load balancers and web nodes in the cluster.  Also keep in mind that all of your servers will need to be in the same Huddle in Cloud Servers; the IP pools available in Cloud Servers are restricted by Huddle.

INSTALLATION

Load Balancers (aka Directors):

What to install on your Load Balancers:

The list is pretty short: installing the piranha package through yum will pull in all the necessary dependencies to get you working (pulse is installed as part of piranha).

Make sure pulse starts on boot with "chkconfig pulse on".
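A minimal install sketch on both directors (assuming a CentOS/RHEL-based Cloud Server image and the piranha/pulse packages described above):

yum install piranha     # provides pulse and the lvs.cf framework, and should pull in ipvsadm
chkconfig pulse on      # have pulse start at boot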

The LVS-TUN Load Balancer’s configuration will be identical on both servers.  First we need to make a couple of changes to the sysctl.conf file; these kernel parameters will allow for things like the shared IP to be bounced between Load Balancing Nodes.

sysctl.conf:

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
# allow pulse to bind the shared VIP even when it is not currently assigned to this node
net.ipv4.ip_nonlocal_bind = 1

Once you've made changes to this file, run 'sysctl -p' to apply them to the running system.

Moving on to the LVS-TUN service configuration file, found at /etc/sysconfig/ha/lvs.cf; this configures both the load balancing and the failover service.  The service that reads this file is pulse[2], which is installed as part of piranha.  Pulse is a heartbeat daemon that monitors the health of the nodes in your cluster.

You'll want to ifdown the shared IP, usually eth0:1, and then delete its configuration file in /etc/sysconfig/network-scripts/.  The pulse service will bring the IP up on the currently active load balancer, instead of the networking service used on most RPM-based systems.
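A quick sketch of those two steps, assuming the shared IP lives on the eth0:1 alias (adjust to match your setup):

ifdown eth0:1
rm /etc/sysconfig/network-scripts/ifcfg-eth0:1     # pulse will manage the VIP from here on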

/etc/sysconfig/ha/lvs.cf

# serial_no must be incremented every time a change
# is made to this configuration file
serial_no = 1
# IP of your primary (master) LB, used as an identifier
primary = 184.106.233.251
primary_private = 10.179.109.69
# IP of your backup (slave) LB, used as an identifier
backup = 184.106.233.252
backup_private = 10.179.119.61
# must be listed as active, or the slave will terminate pulse
backup_active = 1
# Failover Options
heartbeat = 1
keepalive = 2
deadtime = 15
# LVS Type
service = lvs
network = tunnel
debug_level = NONE

##  Simple HTTP configuration
virtual http-pool {
    active = 1
    # the VIP that you want to bind to; this
    # is only a forward and will not show up in a netstat
    address = 172.x.x.x eth0:1
    vip_nmask = 255.255.255.255
    port = 80
    persistent = 600
    pmask = 255.255.255.0
    send = "GET / HTTP/1.0\r\n\r\n"
    use_regex = 0
    load_monitor = none
    scheduler = wrr
    protocol = tcp
    timeout = 6
    reentry = 15
    quiesce_server = 0
    server web01 {
        address = 10.x.x.1
        active = 1
        weight = 10
    }
    server web02 {
        address = 10.x.x.2
        active = 0
        weight = 10
    }
}

## SSL Configuration
virtual ssl-pool {
    active = 1
    address = 172.x.x.x eth0:1
    vip_nmask = 255.255.255.255
    port = 443
    persistent = 600
    pmask = 255.255.255.0
    load_monitor = none
    scheduler = wrr
    protocol = tcp
    timeout = 6
    reentry = 15
    quiesce_server = 0
    server web01 {
        address = 10.x.x.1
        active = 1
        weight = 10
    }
    server web02 {
        address = 10.x.x.2
        active = 0
        weight = 10
    }
}
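Per the comment at the top of the file, bump serial_no whenever you edit lvs.cf, then reload pulse so the running service picks up the change; for example:

vi /etc/sysconfig/ha/lvs.cf     # make the change and increment serial_no
/etc/init.d/pulse reload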

Web Nodes configuration (aka Real Servers):

Install your web head as you normally would, whether that's Apache, lighttpd, or nginx; it shouldn't make a difference which HTTP service you run.  The main changes we care about are the sysctl parameters and a tunneled IP address.  Just like on the load balancers, start by running ifdown on the VIP, eth0:1, and then delete its configuration file; we'll reconfigure that IP as a tunnel on the web nodes a little later.  First, let's make a few changes to sysctl.conf.  These changes are needed because the default settings that ship with CentOS would otherwise block this type of traffic.

sysctl.conf:

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
# keep the web node from answering or announcing ARP for the VIP
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.tunl0.arp_ignore = 1
net.ipv4.conf.tunl0.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.ip_local_port_range = 1024 65536
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_max_syn_backlog = 2048
# disable reverse-path filtering so packets arriving over the tunnel
# with the VIP as their destination are not dropped
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 0
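As on the load balancers, apply these settings on each web node with 'sysctl -p'.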

Tunnel device:

Ok, here is a very important step: this tunneled IP allows the web node to respond directly to the client using the virtual IP address shared amongst the cluster as its source.  Because the reply comes from that shared IP, the client thinks it is still talking to the load balancer, so the connection is never broken.

/etc/sysconfig/network-scripts/ifcfg-tunl0:

DEVICE=tunl0
TYPE=ipip
# this is the shared IP from the Load Balancer
IPADDR=172.x.x.x
NETMASK=255.255.255.255
ONBOOT=yes

Once you've saved this file, you can bring up the tunnel with 'ifup tunl0'.  It should show up in ifconfig like any other interface, listed with the 'ipip' tunnel type.
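A quick way to verify the tunnel (a sketch; output formats vary slightly between tools and distributions):

ifconfig tunl0            # should show the shared 172.x.x.x VIP
ip -d link show tunl0     # should report an ipip tunnel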

START LOAD BALANCING:

starting:
/etc/init.d/pulse start

reloading:
/etc/init.d/pulse reload

stopping:
/etc/init.d/pulse stop

Once your load balancer is up and running, run 'ipvsadm'; this will tell you which load balancer is currently active and which web nodes are active behind it.  The active balancer will give output similar to this:

[root@lvs01 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  174-x-x-x.static.cloud- wrr
  -> 10.x.x.x:http                Tunnel  10     0          0
  -> 10.x.x.x:http                Tunnel  10     0          0

Once everything is up and running, your load balancer will take each request and forward the packet to a web node, and the web node will respond directly to the client.  With this traffic pattern, the load balancer essentially handles only incoming traffic, which significantly increases the amount of traffic it can handle.  The outgoing traffic back to the clients is shared amongst your web nodes.  You've just allowed your cluster to handle a much larger volume of traffic, since the load is now spread across your cluster.
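As a quick end-to-end test (a sketch; the VIP below is the placeholder from the configuration above):

# from a machine outside the cluster
curl -I http://172.x.x.x/

# on the active load balancer, confirm the connection was scheduled to a web node
ipvsadm -L -n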

Helpful Notes

Endnotes:
  1. http://www.linuxvirtualserver.org/: http://www.linuxvirtualserver.org/
  2. pulse: http://linux.die.net/man/8/pulse

Source URL: http://www.rackspace.com/blog/installing-and-configuring-lvs-tun/