
Introduction To Cloud Load Balancers

This is one of a collection of posts I’ve written recently to provide a high-level introduction to all of the products and services available within a Rackspace Cloud account. I like to think of them as the building blocks of the Internet. Each post gives a description of the product, how to use it, what it costs and answers to the most frequently asked questions about it.

I’ve also included several videos that show how to configure, deploy and use these products.

As always, feedback is welcome and appreciated!

What is it?

The real strength of the cloud lies in the ability to distribute your workload (and your code) across multiple nodes and technologies to create a high-performance, highly scalable, highly available platform at a highly affordable price. Cloud Load Balancers makes all of this possible by programmatically routing traffic to and between your Cloud Servers and other infrastructure resources, allowing cloud architects to achieve a degree of scale and redundancy that would be difficult and costly to build within a dedicated environment.

In addition to acting as the traffic cop of the open cloud, Cloud Load Balancers offers several other features, discussed below, that can increase the performance, stability and security of your website.

How do I use it?

Before we get into the architectural benefits of load balancing, let’s cover how to configure and deploy your first instance of Cloud Load Balancers. You’ll need to have at least one active Cloud Server on your account so your load balancer has somewhere to send traffic.

You create a new Cloud Load Balancer by clicking on “Load Balancers” and then clicking on “Create Load Balancer.”

Give the new load balancer a descriptive name and select the same region as your Cloud Servers. While you can load balance outside of your load balancer’s region, it will result in unpredictable performance and excess bandwidth charges.

You can configure your new load balancer without a public interface to make it available only within the Rackspace private network. This option is good to select if you’ll only be routing traffic within your infrastructure, between web servers and databases, for example.

Otherwise, you should make it accessible on the public Internet.

You will also need to specify the type of traffic by selecting a protocol. If you’re not sure which protocol to choose, reference this article in the Rackspace Knowledge Center: Choosing the right protocol.

The functionality of a load balancer comes from its ability to programmatically distribute traffic. A basic understanding of the available algorithms will help you configure your load balancer to meet your needs. Here is a brief description of how each algorithm distributes traffic to the connected nodes.

If after reading through the descriptions you’re not sure which algorithm is the most appropriate for your needs, this article provides recommendations: Which Algorithm Should I Choose?

Explanation of Algorithms

  • Round Robin: All nodes are treated equally and assigned requests in the order they are listed.
  • Weighted Round Robin: Just like Round Robin, but with the ability to favor individual nodes and assign them more connections based on a “weight.”
  • Random: All nodes are treated equally and assigned requests with no regard to the order in which they are listed.
  • Least Connections: Favors the nodes with the fewest current connections.
  • Weighted Least Connections: Like Least Connections, but with the ability to favor individual nodes and assign them more connections based on a “weight.”
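
If it helps to see the idea in code, here’s a small, self-contained sketch of how a weighted round-robin selector distributes requests. It’s only an illustration of the concept, not Rackspace’s implementation, and the node names and weights are made up:

    import itertools

    # Hypothetical nodes with weights: web1 should receive twice as many
    # requests as web2 because its weight is twice as large.
    nodes = {"web1": 2, "web2": 1}

    # Expand each node according to its weight, then cycle through the
    # expanded list forever. Plain Round Robin is the same thing with all
    # weights equal to 1.
    rotation = itertools.cycle(
        [name for name, weight in nodes.items() for _ in range(weight)]
    )

    # Simulate six incoming requests.
    for request_id in range(6):
        print(f"request {request_id} -> {next(rotation)}")
        # -> web1, web1, web2, web1, web1, web2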


Lastly, you must select the Cloud Servers this load balancer will send traffic to.

You can also add external nodes. If you do, be mindful of excess bandwidth costs and variability in response times, since that traffic must traverse the public Internet.

Types of external nodes you can add:

  • Cloud Servers in a different region
  • Rackspace dedicated servers
  • Servers outside of a Rackspace data center

Now click on “Create Load Balancer” to begin deploying your new load balancer.
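
If you’d rather script this step than click through the control panel, here’s a minimal sketch in Python using the requests library against the Cloud Load Balancers API. The region URL, account number, auth token and node addresses are placeholders, and the endpoint path and JSON field names are taken from the API reference, so verify them against the current documentation before relying on this:

    import requests

    # Placeholders -- substitute your own values.
    TOKEN = "your-auth-token"
    ACCOUNT = "123456"
    BASE = f"https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/{ACCOUNT}"

    payload = {
        "loadBalancer": {
            "name": "web-lb",
            "port": 80,
            "protocol": "HTTP",
            "algorithm": "ROUND_ROBIN",
            # Use {"type": "SERVICENET"} instead to keep the load balancer
            # off the public Internet.
            "virtualIps": [{"type": "PUBLIC"}],
            "nodes": [
                {"address": "10.180.1.10", "port": 80, "condition": "ENABLED"},
                {"address": "10.180.1.11", "port": 80, "condition": "ENABLED"},
            ],
        }
    }

    resp = requests.post(
        f"{BASE}/loadbalancers",
        json=payload,
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()
    print(resp.json()["loadBalancer"]["id"])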

Watch me do it

Here’s a video walking you through how to configure and deploy a Cloud Load Balancer using the Round Robin algorithm to send traffic to two web servers:

Configuration After Deployment

Some features of a Cloud Load Balancer are configured after the load balancer is created.

Health Monitoring

Health monitoring uses HTTP response codes to monitor and remove an unresponsive node (Cloud Server) from the rotation. This ensures traffic is not sent to a node experiencing some kind of failure.
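
As a rough sketch of what enabling this looks like through the API (the token, account number and load balancer ID are placeholders, and the healthmonitor endpoint and field names should be checked against the API reference):

    import requests

    TOKEN = "your-auth-token"
    BASE = "https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/123456"
    LB_ID = 71  # placeholder load balancer ID

    # Check "/" on every node every 10 seconds; pull a node out of rotation
    # after 3 consecutive failures or non-2xx/3xx responses.
    monitor = {
        "healthMonitor": {
            "type": "HTTP",
            "delay": 10,
            "timeout": 10,
            "attemptsBeforeDeactivation": 3,
            "path": "/",
            "statusRegex": "^[23][0-9][0-9]$",
            "bodyRegex": ".*",  # match any response body
        }
    }

    resp = requests.put(
        f"{BASE}/loadbalancers/{LB_ID}/healthmonitor",
        json=monitor,
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()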

Connection Throttling

Cloud Load Balancers can impose user-defined limits on the number of connections per IP address, which can be used to mitigate malicious or abusive traffic to your application or website.
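
Here’s a brief sketch of setting a throttle through the API; the limits shown are illustrative, the IDs and token are placeholders, and the connectionthrottle endpoint and field names should be confirmed against the API reference:

    import requests

    TOKEN = "your-auth-token"
    BASE = "https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/123456"
    LB_ID = 71  # placeholder

    # Cap any single client IP at 40 concurrent connections and at most
    # 40 new connections per 60-second interval (illustrative values).
    throttle = {
        "connectionThrottle": {
            "maxConnections": 40,
            "minConnections": 10,
            "maxConnectionRate": 40,
            "rateInterval": 60,
        }
    }

    requests.put(
        f"{BASE}/loadbalancers/{LB_ID}/connectionthrottle",
        json=throttle,
        headers={"X-Auth-Token": TOKEN},
    ).raise_for_status()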

Session Persistence

Distributing traffic across multiple nodes can make it difficult for users to complete certain transactions on a web application, because consecutive requests may land on different servers. Session persistence is a feature that ensures subsequent requests from the same user are directed to the same Cloud Server. Session persistence can be enabled for all protocols.
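
Enabling persistence through the API might look like the following sketch (placeholder IDs and token; the sessionpersistence endpoint and persistence types are as described in the API reference and worth double-checking):

    import requests

    TOKEN = "your-auth-token"
    BASE = "https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/123456"
    LB_ID = 71  # placeholder

    # HTTP_COOKIE pins a browser to a node via a cookie set by the load
    # balancer; SOURCE_IP is the usual choice for non-HTTP protocols.
    persistence = {"sessionPersistence": {"persistenceType": "HTTP_COOKIE"}}

    requests.put(
        f"{BASE}/loadbalancers/{LB_ID}/sessionpersistence",
        json=persistence,
        headers={"X-Auth-Token": TOKEN},
    ).raise_for_status()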

Content Caching

Content caching can improve the load time of a website while reducing server load by caching recently requested content.

Learn more about content caching from this article: Content Caching for Load Balancers.
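
Turning content caching on is a one-field toggle. A minimal API sketch, again with placeholder values and an endpoint name that should be verified against the API reference:

    import requests

    TOKEN = "your-auth-token"
    BASE = "https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/123456"
    LB_ID = 71  # placeholder

    # Content caching is a simple on/off toggle on the load balancer.
    requests.put(
        f"{BASE}/loadbalancers/{LB_ID}/contentcaching",
        json={"contentCaching": {"enabled": True}},
        headers={"X-Auth-Token": TOKEN},
    ).raise_for_status()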

SSL Traffic

SSL termination allows you to terminate encrypted traffic at the load balancer, providing centralized certificate management, SSL acceleration for improved throughput, reduced CPU load on the application servers for better performance and HTTP/HTTPS session persistence.
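
A sketch of configuring SSL termination through the API follows; the certificate and key files, port and placeholder IDs are examples, and the ssltermination endpoint and field names should be confirmed in the API reference:

    import requests

    TOKEN = "your-auth-token"
    BASE = "https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/123456"
    LB_ID = 71  # placeholder

    # Terminate HTTPS on port 443 at the load balancer while still serving
    # plain HTTP; the certificate and key paths below are placeholders.
    ssl_config = {
        "sslTermination": {
            "enabled": True,
            "securePort": 443,
            "secureTrafficOnly": False,
            "certificate": open("example.crt").read(),
            "privatekey": open("example.key").read(),
        }
    }

    requests.put(
        f"{BASE}/loadbalancers/{LB_ID}/ssltermination",
        json=ssl_config,
        headers={"X-Auth-Token": TOKEN},
    ).raise_for_status()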

Connection Logging

To simplify log management, the connection logging feature delivers Apache-style access logs (for HTTP-based protocol traffic) or connection and transfer logs (for all other traffic) to your Cloud Files account. If you need raw data in one place for performance tuning or web analytics, logs are sorted, aggregated and delivered to Cloud Files.
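
Enabling log delivery is another simple toggle. Here’s a minimal sketch with placeholder values (confirm the connectionlogging endpoint against the API reference):

    import requests

    TOKEN = "your-auth-token"
    BASE = "https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/123456"
    LB_ID = 71  # placeholder

    # Once enabled, logs are periodically pushed to your Cloud Files account
    # (standard Cloud Files storage charges apply).
    requests.put(
        f"{BASE}/loadbalancers/{LB_ID}/connectionlogging",
        json={"connectionLogging": {"enabled": True}},
        headers={"X-Auth-Token": TOKEN},
    ).raise_for_status()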

Here is an example of a log file pulled from a test load balancer:

Error Page

This feature displays an error page if all of the nodes connected to your public-facing Cloud Load Balancer become unavailable. You can either use the default error page or create a custom one.
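
Uploading a custom error page through the API might look like this sketch; the HTML, IDs and token are placeholders and the errorpage endpoint should be checked against the API reference:

    import requests

    TOKEN = "your-auth-token"
    BASE = "https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/123456"
    LB_ID = 71  # placeholder

    # The custom page is supplied as an HTML string; skip this call to keep
    # the default error page.
    error_html = "<html><body><h1>We'll be right back.</h1></body></html>"

    requests.put(
        f"{BASE}/loadbalancers/{LB_ID}/errorpage",
        json={"errorpage": {"content": error_html}},
        headers={"X-Auth-Token": TOKEN},
    ).raise_for_status()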

Here’s an example of the default error page:

Access Control

For increased security, this feature allows you to easily manage who can and can’t access the services that are exposed via the load balancer.
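
A sketch of adding access-control rules through the API; the addresses are examples, the IDs and token are placeholders, and the accesslist endpoint and rule format should be verified against the API reference:

    import requests

    TOKEN = "your-auth-token"
    BASE = "https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/123456"
    LB_ID = 71  # placeholder

    # Deny a single address and a whole network; anything not matched by a
    # DENY rule is still allowed (addresses below are examples only).
    access_list = {
        "accessList": [
            {"address": "203.0.113.25", "type": "DENY"},
            {"address": "198.51.100.0/24", "type": "DENY"},
        ]
    }

    requests.post(
        f"{BASE}/loadbalancers/{LB_ID}/accesslist",
        json=access_list,
        headers={"X-Auth-Token": TOKEN},
    ).raise_for_status()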

Frequently Asked Questions

Q: What is the cost for Cloud Load Balancers? 

A: Please see the pricing page for details. Standard charges for Cloud Files will apply if you have elected to enable log delivery to your Cloud Files account. Standard charges will apply for additional (unique) virtual IP addresses per load balancer.

Q: I’ve heard a recommendation to use a load balancer if I’m only using a single Cloud Server. Why would I want to do that? 

A: Building behind a load balancer, even if you’re only going to be using a single Cloud Server, is like buying an insurance policy in case you ever need to scale in the future with little or no notice. How? A load balancer gives you the benefit of a static IP address, which means you can rearrange your application’s infrastructure without waiting for DNS to propagate, a process that can take in excess of 24 hours. Changes you make behind the load balancer take effect immediately.

Check out the previous posts in this series: Introduction to Cloud Files, Introduction to Cloud Servers and Introduction to Cloud Databases.

About the Author

This is a post written and contributed by Joseph Palumbo.

As a founding member of Rackspace's Managed Cloud Support Team, Joseph spends half of his time teaching customers about the Cloud and the other half learning about the Cloud from them. When he's not in meetings, he can be found presiding over cultural happenings around the Rackspace office, discussing support innovation with other 'Fanatics' and wishing paper documents had a built-in search function.

