
E-Commerce Tip: Load Testing: A Critical Step


The busy holiday shopping season is only a month away and the National Retail Federation forecasts 12 percent growth in online sales over last year.  Sounds like good news, doesn’t it?  It is, if you can keep your virtual doors open and sales flowing around the clock. Recently, Rackspace offered up five tips to prep your e-commerce site for the holiday crunch. We invited some of our e-commerce partners to provide a guest blog post looking at those tips and offering further advice.  

Submodal is a web design studio and a Rackspace e-commerce partner that works with companies of all sizes – from startups to established consumer brands. Here, Submodal Director of Engineering Jason Gordon discusses why load testing is a critical step for your e-commerce site.

We know how crucial load testing is: it’s the reason such a detailed approach has been developed to address this aspect of web application deployment. Here’s what you need to know about load testing and all the issues that come with it.

WHAT IS LOAD TESTING?

When someone uses the term “load testing,” they are usually talking about a larger set of tasks involved in delivering a responsive web application.

For us, load testing is the final step in the testing process, during which all performance tests are run simultaneously for a couple of days to see how the system holds up. It’s similar to how automotive engineers run an engine by itself for a few months straight to test its safety and capabilities.

“Load” is the number of concurrent connections to a system. Since we specialize in Magento, load will be discussed here in the context of a Magento e-commerce application. We also run another CMS in tandem with Magento, which helps determine what types of users the system will have: non-authenticated shoppers, authenticated shoppers and content administrators. A performance plan should include tests designed for each of these three user types, because each type of user creates a different load on the system.

LEAD UP TO TESTING

Before the performance-testing phase, it’s important to follow a few development best practices. First, minimize HTTP connections as much as possible by:

  1. Merging and compressing JavaScript files. Ideally, this process is part of an automated build; but on small projects it is often performed manually by combining JS files into one large file that is then run through Yahoo’s compressor, or a similar tool (a minimal sketch of this step follows the list).
  2. Merging and compressing CSS files, using the same technique as the JavaScript.
  3. We use image sprites as much as possible to ensure all images load quickly and at the same time.
  4. We also use a CDN, and generally recommend a cloud solution like Rackspace Cloud Files. A system that supports origin pull eliminates the need for software that manages assets on the CDN (which is a difficult task).
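
As a rough illustration of that manual merge-and-compress step for JavaScript, here is a minimal sketch in Python. It assumes Java and a local copy of yuicompressor.jar are available, and the file names are hypothetical:

    # merge_js.py - minimal sketch of the manual merge-and-compress step.
    # Assumes Java and a local yuicompressor.jar; file names are hypothetical.
    import subprocess
    from pathlib import Path

    JS_FILES = ["jquery.plugins.js", "navigation.js", "checkout.js"]  # hypothetical source files
    MERGED = Path("all.js")
    MINIFIED = Path("all.min.js")

    # Combine the individual JS files into one large file.
    MERGED.write_text("\n".join(Path(f).read_text() for f in JS_FILES))

    # Run the merged file through the YUI Compressor.
    subprocess.run(["java", "-jar", "yuicompressor.jar", str(MERGED), "-o", str(MINIFIED)], check=True)
    print("wrote", MINIFIED, MINIFIED.stat().st_size, "bytes")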

The next step is to determine which parts of each page don’t change and are cacheable. In our applications, most parts fall into this category. For the sections that don’t fit this bill, we often employ Ajax to fill in the “holes,” along with Varnish’s ESI (“edge side includes”) technique.
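
To make the hole-punching idea concrete, here is a minimal sketch using Python and Flask, purely for illustration (our actual stack is Magento/PHP behind Varnish). The full page is marked cacheable, while a small Ajax endpoint returns the per-user cart summary and is marked uncacheable; routes and field names are hypothetical:

    # Hole-punching sketch using Flask (illustration only; routes are hypothetical).
    # The full page is cacheable at the edge; the cart fragment is fetched via Ajax
    # and is never cached, filling in the "hole" in the otherwise static page.
    from flask import Flask, jsonify, make_response

    app = Flask(__name__)

    @app.route("/category/shoes")
    def category_page():
        resp = make_response("<html>...full page with an empty #cart-summary div...</html>")
        resp.headers["Cache-Control"] = "public, max-age=300"  # safe for the reverse proxy to cache
        return resp

    @app.route("/ajax/cart-summary")
    def cart_summary():
        resp = jsonify(items=3, subtotal="49.00")        # per-user data (made-up values)
        resp.headers["Cache-Control"] = "no-store"       # never cached by the proxy
        return resp

    if __name__ == "__main__":
        app.run(port=8080)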

By implementing a very lightweight code review process, we identify obvious performance issues in new code. Running our code tests with a tool such as JMeter during development makes performance and load testing much easier later on, and identifies poorly performing code earlier in the process.

WHAT’S ACTUALLY BEING TESTED?

The process essentially tests from left to right: as a request comes into the system, it is handled by each layer in turn, and a response comes back to the browser. The goal is to determine how many of these types of requests a system can handle. Here’s a simplified view of the request flow (a toy model in code follows the list):

  1. Load balancer decides which app server to hit
  2. App server checks static cache (varnish). We use nginx here because each request consumes very few resources, allowing many requests to be offloaded from apache
  3. On a miss, the request is routed to the second web server (apache)
  4. The request is then routed to either Magento or the CMS where the software stack boots up
  5. The software will make requests to a database cache (memcached)
  6. On a miss, the database will be queried (mostly reads)
  7. After all cache hits/misses and database action, the response is returned back up the chain and through the load balancer
  8. The response will be HTML filled with many requests to static assets that will be served by the CDN

Here’s what is being tested:

  1. How many concurrent requests can a system handle at maximum load?
  2. What are response times for all test paths, and are they acceptable?
  3. What points in the chain are consuming the most hardware resources?
  4. Are there any obvious failures caused by large data sets, many users, many products, many orders and other factors?
  5. Is there any obvious low-hanging fruit to optimize? This category includes:
    a. Totally unnecessary database queries
    b. Frequently used code paths that produce a consistent result (can be cached)
    c. Frequently repeating database queries (can be cached)
  6. Is there anything else that can be obviously cached?

THE LOAD TEST ENVIRONMENT

In practice, the load test environment ends up being:

  • Software tools for sending lots of requests to our system
  • Scripts to simulate user activity on the site
  • Scripts to generate mass amounts of data in the system
  • Hardware to run tools
  • Spreadsheets to track and analyze results.

The first step is to define the set of requests that forms the basis of how users will interact with the site, such as:

  • The front page (most important page)
  • Any “hole punching” URLs
  • Shopping cart page, full and empty
  • Checkout page
  • User account pages
  • Random types of URLs that generate 404s. It is crucial to minimize how many 404s the software stack has to deal with.

We use Apache Bench (ab) to quickly benchmark each request in a test list. This is when the reverse proxy cache system will be tuned to ensure it is getting as many hits as possible. Apache Bench will also give a basic sense of how many requests per second the system can handle for each URL.
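
A minimal sketch of driving ab over a list of test URLs and collecting the requests-per-second figure might look like this (the ab binary is assumed to be installed; URLs and request counts are made up):

    # run_ab.py - sketch of benchmarking each URL in a test list with ab.
    # Assumes the ab binary is installed; URLs and request counts are made up.
    import re
    import subprocess

    TEST_URLS = [
        "http://shop.example.com/",
        "http://shop.example.com/checkout/cart/",
        "http://shop.example.com/no/such/page",   # deliberate 404 path
    ]

    for url in TEST_URLS:
        out = subprocess.run(["ab", "-n", "500", "-c", "25", url],
                             capture_output=True, text=True).stdout
        match = re.search(r"Requests per second:\s+([\d.]+)", out)
        print(url, match.group(1) if match else "?", "req/s")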

Next, we create scripts to generate test data and simulate user interaction. Although we create some stand-alone scripts for data generation, the main goal is to make a large suite of scripts for JMeter. A script will be made for each test path, cookies and all. JMeter scripts are easy to make into templates, and can be copied and pasted to create new ones.

A list of necessary scripts (a small data-generation sketch follows the list):

  1. Randomly create products
  2. Randomly create orders (fill up a cart, then checkout)
  3. Randomly create customer accounts (can combine this with creating an order script)
  4. Make changes in the admin interface (creating categories, configuration changes).
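
As a rough sketch of the data-generation side, the script below writes a CSV of random products for bulk import. The column names are hypothetical and would need to match whatever import profile the store actually uses:

    # generate_products.py - sketch of random product data for bulk import.
    # Column names are hypothetical; match them to your actual import profile.
    import csv
    import random
    import string

    def random_sku(length=8):
        return "".join(random.choice(string.ascii_uppercase + string.digits) for _ in range(length))

    with open("products.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["sku", "name", "price", "qty", "status"])
        for i in range(10000):   # enough rows to make cache misses and slow queries visible
            writer.writerow([random_sku(),
                             "Test Product %d" % i,
                             round(random.uniform(5, 500), 2),
                             random.randint(0, 100),
                             1])   # 1 = enabled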

The idea is to run these scripts both individually and simultaneously to find performance limits. This will fill up a database with lots of data so that during load testing you can identify bottlenecks that occur only when a good amount of data is present.

Then comes the testing system setup. Laptops quickly run out of capacity for testing, so load testing tools usually get set up on cloud machines. During load testing, several machines will run for several days.

TESTS AND ANALYSIS

Now it’s time to actually run tests. Because running tests can be very time-consuming, it’s important to ensure we get the most information possible out of each test run and avoid wastefully repeating the same test. To do so, we create a spreadsheet with the following information:

  1. Name of the person running the test
  2. The date, time and duration of the run
  3. A clearly defined hypothesis
  4. A statement of what has changed
  5. A set of metrics being tracked during the test
  6. A post-run capture of the results of the metrics, best visualized with a graph of the metric over time.
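
For that post-run capture, a small script can summarize a JMeter results file saved in CSV (.jtl) format. This is only a sketch; it assumes the default CSV columns (label, elapsed in milliseconds) with field names enabled:

    # summarize_jtl.py - sketch of summarizing a JMeter CSV (.jtl) results file.
    # Assumes the default CSV columns with field names enabled: label, elapsed (ms), ...
    import csv
    from collections import defaultdict

    samples = defaultdict(list)
    with open("results.jtl", newline="") as f:
        for row in csv.DictReader(f):
            samples[row["label"]].append(int(row["elapsed"]))

    for label, times in sorted(samples.items()):
        times.sort()
        p95 = times[max(int(len(times) * 0.95) - 1, 0)]
        mean = sum(times) / float(len(times))
        print("%-30s n=%6d mean=%7.1fms p95=%6dms" % (label, len(times), mean, p95))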

Using a set of monitoring tools gives us reliable data about actual test performance in simple visual graphs: Cacti for capturing metrics, MONyog for monitoring the database and statsd for logging stats from within the code so we can monitor code performance in real time. By saving this information in spreadsheets, we achieve provable results and can identify the most effective tests.
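
To show the statsd idea, here is a brief sketch of instrumenting a code path. Our real instrumentation lives in the PHP application; this Python version, which assumes the statsd client package and a daemon listening on localhost:8125, only illustrates the pattern, and all the names are made up:

    # Sketch of statsd-style instrumentation; assumes the Python "statsd" client
    # package and a statsd daemon listening on localhost:8125. Names are made up.
    import time
    from statsd import StatsClient

    statsd = StatsClient("localhost", 8125)
    cache = {}   # stand-in for memcached

    def render_category(category_id):
        time.sleep(0.05)   # pretend this is expensive template and database work
        return "<html>category %s</html>" % category_id

    def load_category_page(category_id):
        with statsd.timer("catalog.category_page.render"):   # timing shows up in the graphs live
            return render_category(category_id)

    def get_from_cache(key):
        value = cache.get(key)
        statsd.incr("cache.hit" if value is not None else "cache.miss")
        return value

    print(load_category_page(42))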

Once we churn through the performance tests and apply all possible optimizations, we start running longer tests. A nice side effect of all this setup is that it puts in place a monitoring system that can be used throughout the life of the site, which is very helpful during new software releases.

About the Author

This is a post written and contributed by RJ Rowntree.

A Racker since 2007, RJ is part of the Rackspace Commerce Vertical partnering with solutions integrators and some of the top commerce platforms in the industry. His primary focus is business development and growing Rackspace's reach into commerce business lines. RJ is an ardent fan of live music and spending time with his dog Joplin and wife Meghan.



