NOTE: This article is written for our Classic Cloud Control Panel. A version of this article is also available for our New Cloud Control Panel.
What is Load Balancing?
- Mission-critical web-based applications and workloads require a high-availability (HA) solution. Load balancing distributes workloads across two or more servers, network links, or other resources to maximize throughput, minimize response time, and avoid overload. Rackspace Cloud Load Balancers let customers quickly load balance multiple Cloud Servers or external servers for optimal resource utilization. Read on to see how easy this is to do!
Add a Load Balancer
- You can add a load balancer to your account via API using the Create Load Balancer API operation, as described in the Rackspace Cloud Load Balancers Developer Guide. Or, you can simply add a load balancer to your account through the Classic Cloud Control Panel by clicking Create a Load Balancer.
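If you use the API route, the request carries a JSON body describing the new load balancer. The sketch below builds such a body; the field names (`loadBalancer`, `virtualIps`, `nodes`) follow the Create Load Balancer operation in the Developer Guide, while the load balancer name and node addresses are placeholders you would replace with your own:

```python
import json

def create_lb_request(name, protocol, port, vip_type, nodes):
    """Build the JSON body for a Create Load Balancer API call.

    `nodes` is a list of (address, port) tuples for the backend servers.
    """
    return {
        "loadBalancer": {
            "name": name,
            "protocol": protocol,
            "port": port,
            "virtualIps": [{"type": vip_type}],
            "nodes": [
                # Each backend node starts in the ENABLED condition.
                {"address": addr, "port": node_port, "condition": "ENABLED"}
                for addr, node_port in nodes
            ],
        }
    }

# Placeholder name and ServiceNet-style addresses for illustration:
body = create_lb_request("web-lb", "HTTP", 80, "PUBLIC",
                         [("10.1.1.1", 80), ("10.1.1.2", 80)])
print(json.dumps(body, indent=2))
```

You would POST this body to the load balancers endpoint for your region, authenticated with your account credentials, as described in the Developer Guide.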
Configure your Load Balancer
In order to complete the configuration of your Load Balancer, you will need to define the following variables:
Load Balancer Name
- You can name your Load Balancer whatever you choose; there are no restrictions on the text in this field.
Protocol and Port Number
- The available Protocols and their pre-set port numbers are as follows:
- Our Load Balancers have the most common protocols and their standard port numbers pre-configured (for example, HTTP on port 80 and HTTPS on port 443). If you are running a service on a non-standard port and need it load balanced, you can create a Protocol/Custom Port Number combination:
Virtual IP Type
This variable can be set to: Public, Shared Virtual IP, or Service Net.
- Public: Setting your VIP type to Public allows servers with public IP addresses to be load balanced. These can be nodes outside of the Rackspace network, but be aware that standard bandwidth rates will apply.
- Shared Virtual IP: Use this option if you wish to load balance multiple services on different ports while using the same virtual IP address.
- Service Net: This is the best option for load balancing two Cloud Servers, because it carries the load-balancing traffic over the Rackspace Cloud internal network, or ServiceNet. This option has two distinct advantages: the rate limit is double that of the public interface, and ServiceNet traffic between Cloud Servers incurs no bandwidth charges.
Load Balancing Algorithm
- This is an important attribute to set, especially as your Load Balancer implementation grows more complex. Each algorithm includes an explanation of how it assigns traffic. In most cases, the Random, Round Robin, or Least Connections algorithms are sufficient when load balancing two identical servers for increased web traffic. If your servers are unequal in size or resources, consider a weighted algorithm to favor the servers with more resources.
Geographic Region where Load Balancer will be located
- Your load balancer can be provisioned in Dallas, TX (DFW1), or Chicago, IL (ORD1). When selecting a region, consider the location of the backend nodes you want to load balance and provision your load balancer in a region that is geographically as close to your backend nodes as possible.
- Nodes for Load Balancer Operation: In this section you can set your Load Balancer to operate on one or more of your Cloud Servers, like the one named Test in the illustration below, or on one or more External Nodes. To add an External Node, enter its IP address and the port of the service that you want load balanced (usually port 80 for HTTP traffic). You can then Enable or Disable the load-balancing service on your external node directly through the Classic Cloud Control Panel.
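The same add-node step is available through the API as a small JSON body; the sketch below uses field names from the Add Node operation in the Developer Guide, with a placeholder address, and shows how the Enable/Disable toggle maps to the node's `condition`:

```python
def add_node_request(address, port, condition="ENABLED"):
    """Build the JSON body for adding a node to an existing load balancer."""
    return {"nodes": [{"address": address, "port": port,
                       "condition": condition}]}

# An external node load balanced on port 80 (typical for HTTP traffic).
# The address is a documentation placeholder, not a real server:
body = add_node_request("203.0.113.10", 80)

# Disabling the node later is the same structure with a different condition:
disabled = add_node_request("203.0.113.10", 80, condition="DISABLED")
```

A disabled node stays in the pool configuration but receives no new traffic, which is what the Enable/Disable control in the panel does.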
- Lastly, create your Load Balancer by clicking the Create Balancer button:
Manage Your Load Balancer
- From the Overview screen, you can see the Load Balancer usage statistics, as well as tabs for additional configuration options.
Additional configuration options include:
- Active Health Monitor - In addition to the default passive health monitor check, active health monitoring uses synthetic transaction monitoring to inspect an HTTP response code and body content to determine if the application or site is healthy.
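An active HTTP health monitor is configured with a check path, a status-code pattern, and an optional body pattern. The sketch below shows such a configuration; the field names follow the health monitor operation in the Developer Guide, while the path, regexes, and timing values are illustrative:

```python
# Illustrative active health monitor: probe /healthcheck every 10 seconds,
# fail a check after 5 seconds, and pull a node out of rotation after
# 3 consecutive failures.
health_monitor = {
    "healthMonitor": {
        "type": "HTTP",
        "delay": 10,                        # seconds between checks
        "timeout": 5,                       # seconds before a check fails
        "attemptsBeforeDeactivation": 3,    # failures before node removal
        "path": "/healthcheck",             # placeholder check URL path
        "statusRegex": "^[23][0-9][0-9]$",  # accept 2xx/3xx response codes
        "bodyRegex": "OK",                  # response body must contain OK
    }
}
```

This is the "synthetic transaction" the article describes: the monitor inspects both the response code and the body content before declaring a node healthy.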
- Advanced Access Control - Easily manage who can and can’t access the services that are exposed via the load balancer.
- Session Persistence - If you are load balancing HTTP traffic, the session persistence feature utilizes an HTTP cookie to ensure subsequent requests are directed to the same node in your load balancer pool.
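The cookie mechanism behind session persistence can be sketched in miniature: the first response sets a cookie naming the chosen node, and any later request carrying that cookie is routed back to the same node. The node names and cookie key below are illustrative, not the actual cookie the load balancer sets:

```python
import random

nodes = ["node-a", "node-b"]

def route(request_cookies):
    """Pick a node; honor an existing persistence cookie if present."""
    node = request_cookies.get("lb-node")
    if node is None:                  # first request: choose a node
        node = random.choice(nodes)
    return node, {"lb-node": node}    # response (re-)sets the cookie

first_node, cookies = route({})       # first visit arrives with no cookie
second_node, _ = route(cookies)       # follow-up request sends it back
assert first_node == second_node      # both requests hit the same node
```

Because the state lives in the client's cookie, the load balancer itself stays stateless across requests.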
- Connection Logging - To simplify log management, the Connection Logging feature delivers Apache-style access logs (for HTTP-based protocol traffic) or connection and transfer logs (for all other traffic) to your Cloud Files™ account. If you need raw data in one place for performance tuning or web analytics, logs are sorted, aggregated, and delivered to Cloud Files.
- Once you have enabled logging for this load balancer, you will see the following:
- Connection Throttling - As an additional feature, our cloud load balancer offers connection throttling, which imposes user-defined limits on the number of connections per IP address and can help mitigate malicious or abusive traffic to your application or website.
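A connection-throttle configuration is a small set of per-IP limits. The sketch below uses field names from the connection throttle operation in the Developer Guide; the numeric values are illustrative, not recommended settings:

```python
# Illustrative per-IP throttle: allow up to 100 simultaneous connections,
# start throttling above 10, and cap new connections at 50 per 60-second
# interval.
connection_throttle = {
    "connectionThrottle": {
        "maxConnections": 100,     # max simultaneous connections per IP
        "minConnections": 10,      # connections allowed before throttling
        "maxConnectionRate": 50,   # new connections per rateInterval
        "rateInterval": 60,        # length of the rate window, in seconds
    }
}
```

Tightening `maxConnectionRate` and `rateInterval` is the usual lever against abusive clients, since it limits how fast a single IP can open new connections regardless of how many it already holds.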
The cost for each Cloud Load Balancer (instance) is based on an hourly rate plus charges for concurrent connections and bandwidth. You can view pricing details on the product pages for Cloud Load Balancers:
Now you know how to configure a Cloud Load Balancer to distribute your web traffic over multiple nodes! Next, learn how to use Cloud Files and the Content Delivery Network for website acceleration and mass object storage. You can start by watching a short Cloud Files Overview Video, or dive right into the more detailed article.
© 2011-2013 Rackspace US, Inc.
Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License
See license specifics and DISCLAIMER