• Sales: 1-800-961-2888
  • Support: 1-800-961-4454

Peak Season Prep Guide: Preparing your Ecommerce Site for the Next Big Rush


Most retailers are familiar with the holiday rush. In brick-and-mortar stores, it’s the time of year when more hiring happens to ensure smiling faces in the store to help eager customers and enough cashiers at the registers for fast, stress-free checkout. For an ecommerce retailer, beefing up for peak traffic is less about hiring more bodies and more about tuning up the systems and processes that attract customers, keep them on the site, and let them transact smoothly.

Lags in page load not only frustrate users, but Google’s search algorithms now also penalize sites for slow loads. Shop.org, The National Retail Federation’s digital division, expects online sales to grow between 9% and 12% in 2013, further increasing the need for a well-planned ecommerce strategy, both on the customer-facing side and in your underlying infrastructure. To add urgency, the 2013 holiday shopping season is almost a week shorter than 2012. Retailers will only have 26 days between Thanksgiving and Christmas, compared with 32 days in 2012, which makes driving sales during the shortened season even more pressing.

Getting ready for a peak period, whether it’s the holidays, summer tourism, or a far-reaching marketing campaign, involves evaluating both your infrastructure and code to optimize for best results. The cornerstone of peak traffic planning is rigorously load testing your system ahead of time to identify and correct breakpoints and bottlenecks.

1. Test Your Site Before a Traffic Spike

Load testing gives you a window into how your site will perform when peak traffic arrives. The following are the high-level steps involved in load testing.

  1. Determine testing goals: The initial test should be a complete end-to-end test. Follow-up tests may only cover post-optimization testing of specific pages or processes. Also, keep in mind that different types of users use your site differently. Computer buyers, for example, may be divided into segments like consumer, business, and government. A consumer user likely takes a different path through the site than a government user, and hence puts a different load on the system that needs optimization. By narrowly defining test objectives and user types, you get more defined results to guide optimization. Use your goals to create a list of questions your test needs to answer, for example:

    • How many concurrent requests can my system handle at maximum load?
    • Are response times for all test paths acceptable?
    • What points in the chain are consuming the most hardware resources?
    • Are there obvious failures caused by large data sets, multiple concurrent users, amount of products on the site, shopping cart functionality, or other factors?
    • Is there any obvious low-hanging fruit to optimize? Examples: unnecessary database queries, frequently used code paths that produce a consistent result, or frequently repeated database queries that can be cached.

  2. Start with a Benchmark: Review logs and analytics to see how you’ve performed during previous peak periods and determine what a typical busy load looks like. Use tools like Apache Bench (ab) or autobench to simulate multiple concurrent users for a benchmark of how many requests per second you’re capable of serving. Pay special attention to heavily trafficked pages like home and landing pages where optimization efforts have the biggest payoff. To compare your performance and set benchmarks aligned with other retailers, take a look at Compuware’s Retail Web and Mobile Site Performance Index.
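    The concurrency benchmark that tools like Apache Bench perform can be sketched in a few lines. The following is a minimal, illustrative sketch only; the `fetch` function is a stand-in that simulates a response rather than making a real HTTP request, and the URL and numbers are hypothetical:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    """Stand-in for an HTTP GET; a real benchmark would hit your site here."""
    time.sleep(0.01)  # simulate a 10 ms response
    return 200

def benchmark(url, total_requests, concurrency):
    """Fire total_requests at url using `concurrency` workers; report req/sec."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(fetch, [url] * total_requests))
    elapsed = time.time() - start
    return {
        "requests": total_requests,
        "errors": sum(1 for s in statuses if s != 200),
        "req_per_sec": total_requests / elapsed,
    }

result = benchmark("http://www.example.com/", total_requests=200, concurrency=20)
print(result)
```

    In practice, prefer a dedicated tool: it handles keep-alive, timeouts, and percentile reporting that a quick sketch like this does not.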

  3. Determine data collection methods: Organized data collection is essential to understanding results and learning from them. LoadRunner is a software tool that provides sophisticated formatting, flexibility and analysis. Microsoft Visual Studio has a SQL script for creating a database repository for results. Evaluate a prospective test data collection tool based on its ability to:

    • Thoroughly document the test conditions
    • Accurately document the results
    • Offer a straightforward analysis of results
    • Archive test data for future comparisons

    Using a set of monitoring tools allows you to capture accurate data about actual test performance in simple visual graphs:

    • Cacti for capturing metrics
    • MONyog for monitoring the database
    • statsd for embedding stats logging in code to monitor code performance in real time
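    To illustrate what “stats logging in code” looks like, here is a minimal sketch of the statsd wire format, which is a plain-text datagram sent over UDP. This builds the datagrams by hand for illustration; real applications would use a statsd client library, and the metric names and collector address below are hypothetical:

```python
import socket
import time

STATSD_ADDR = ("127.0.0.1", 8125)  # common statsd UDP port; point at your collector
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def format_metric(name, value, metric_type):
    """Build a statsd wire-format datagram, e.g. 'checkout.latency:320|ms'."""
    return f"{name}:{value}|{metric_type}"

def timing(name, ms):
    """Record how long an operation took, in milliseconds ('ms' type)."""
    sock.sendto(format_metric(name, ms, "ms").encode(), STATSD_ADDR)

def increment(name):
    """Count an event, e.g. one completed checkout ('c' counter type)."""
    sock.sendto(format_metric(name, 1, "c").encode(), STATSD_ADDR)

# Instrument a code path: time it, then report.
start = time.time()
# ... the code under test would run here ...
timing("checkout.latency", int((time.time() - start) * 1000))
increment("checkout.completed")
```

    Because the metrics travel over UDP, instrumentation is fire-and-forget: it adds negligible overhead to the code path being measured, even under load.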

  4. Create scripts: Scripts generate test data and simulate user interaction. The scripts flood the site with requests so that you can identify bottlenecks that occur only during heavy traffic periods. Create a script for each test path, cookies and all. JMeter is an open source load testing tool designed to load test functional behavior and measure performance. JMeter scripts are easy to turn into templates, and can be copied and pasted to create new ones. Common scripts include:

    • Randomly create products
    • Randomly create orders (fill up a cart, then checkout)
    • Randomly create customer accounts (can combine this with creating an order script)
    • Make changes in the admin interface (creating categories, configuration changes).

    These scripts will be run both individually and simultaneously to find performance limits.
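    The data-generation half of these scripts can be sketched as follows. This is illustrative only; in practice the records would be created through your store’s API or admin interface (or driven by JMeter), and all field names here are hypothetical:

```python
import random
import string

def random_sku(length=8):
    """Generate a random stock-keeping-unit code, e.g. 'X7K2Q9ZD'."""
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def make_product():
    """Generate one random product record (fields are illustrative)."""
    return {
        "sku": random_sku(),
        "name": f"Test Product {random.randint(1, 99999)}",
        "price": round(random.uniform(1.0, 500.0), 2),
    }

def make_order(catalog, max_items=5):
    """Fill a cart with random products from the catalog, then 'check out'."""
    cart = random.sample(catalog, k=random.randint(1, max_items))
    return {
        "items": [p["sku"] for p in cart],
        "total": round(sum(p["price"] for p in cart), 2),
    }

catalog = [make_product() for _ in range(1000)]     # mass test data
orders = [make_order(catalog) for _ in range(100)]  # simulated checkouts
```

    Randomizing the data matters: identical products and orders hit the same caches and indexes every time, which hides bottlenecks that only appear with realistic variety.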

  5. Define the Test Environment: In practice, the load test environment contains:

    • Software tools for sending lots of requests
    • Scripts to simulate user activity on the site
    • Scripts to generate mass amounts of data in the system
    • Hardware to run tools
    • Spreadsheets to track and analyze results

    Load testing is a common use case for cloud resources: it avoids investing in additional hardware just to execute tests, and avoids degrading other applications during tests, which can run for days at a time. Using the cloud for testing means you can spin up as much CPU as needed to run tests for as long as you need, then decommission those resources when tests are completed.

  6. Execute Test: Because running tests can be very time-consuming, get the most information possible out of each test run to avoid repeating the same test. To do so, create a spreadsheet with the following information:

    • Name of the person running the test
    • The date, time and duration of the run
    • A clearly defined hypothesis
    • A statement of what has changed
    • A set of metrics being tracked during the test
    • A post-run capture of the results of the metrics, best visualized with a graph of the metric over time.

    By saving this information in spreadsheets, you achieve provable results and can identify the most effective tests.
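    If you prefer keeping the run log machine-readable rather than in a spreadsheet, the same record can be appended to a CSV file. A minimal sketch, with illustrative column names and entirely hypothetical run data:

```python
import csv
import io

FIELDS = ["tester", "date", "duration_min", "hypothesis",
          "change", "metrics_tracked", "result_summary"]

def record_run(fh, run):
    """Append one load-test run to a CSV log (columns are illustrative)."""
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    if fh.tell() == 0:  # write the header once, on the first run
        writer.writeheader()
    writer.writerow(run)

log = io.StringIO()  # stands in for open("test_runs.csv", "a+", newline="")
record_run(log, {
    "tester": "jsmith",
    "date": "2013-10-01 14:00",
    "duration_min": 90,
    "hypothesis": "Caching category pages doubles req/sec on /category/*",
    "change": "Enabled full-page cache for category pages",
    "metrics_tracked": "req/sec; p95 response time; DB CPU",
    "result_summary": "hypothetical example data only",
})
```

    A flat file like this is easy to diff, graph, and archive alongside the raw metrics from each run.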

At the completion of the test, you should have a set of data that answers the questions set forth when you defined your goals at the beginning. The answers will then guide your next steps for optimization. After optimizing, it’s critical to retest to account for any anomalies the optimization may create; fixing one problem can sometimes create another, and you don’t want to wait until you’re in the middle of a spike to discover it.

2. Solutions for Common Issues that Testing Uncovers

If site traffic reports reveal an increase in the number of refused connections during load testing, it’s time to reassess your load balancing solution. Refused connections are the first sign that your serving capacity is too small for the amount of traffic that your site receives. The next sign will probably be visitors calling or emailing to complain that they can’t access or transact on your site.

The number of load balancers you deploy is determined by your traffic and performance goals; there is no magic formula. Using the data from the load test will help you determine the number and location of load balancers. Once in place, re-test to confirm that the load balancer can handle the expected load.

Implementing server-side compression reduces the size of your store pages, which shortens the time to return data from the server and frees a web server process sooner. Most modern browsers and web servers are capable of compressing data before sending it and decompressing it at the destination. This lowers bandwidth requirements but can increase CPU load.
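The size/CPU trade-off is easy to see with gzip, the compression scheme most web servers and browsers negotiate. A quick sketch with a deliberately repetitive (and therefore highly compressible) sample page:

```python
import gzip

# Repetitive HTML markup, as product listings tend to be, compresses very well.
html = (b"<html><body>"
        + b"<div class='product'>Sample product listing</div>" * 200
        + b"</body></html>")

compressed = gzip.compress(html)  # what the server does before sending
ratio = len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({ratio:.0%} of original)")
```

The CPU cost is the `gzip.compress` call on every response (and the matching decompress in the browser); the bandwidth saving is the difference in byte counts.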

Smaller files lead to faster page load times. The two most common options here are to minify your CSS and JavaScript by removing whitespace and comments (using a tool like the YUI Compressor) and to reduce the size of your images by stripping unnecessary data from them with a tool like smush.it.
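To make the idea concrete, here is a deliberately naive CSS minifier. It only strips comments and collapses whitespace; real tools like the YUI Compressor are far more thorough (and safer), so treat this purely as an illustration of why minified files are smaller:

```python
import re

def minify_css(css):
    """Naive CSS minifier: strip comments and collapse whitespace only."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # remove /* comments */
    css = re.sub(r"\s+", " ", css)                   # collapse runs of whitespace
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)     # trim around punctuation
    return css.strip()

css = """
/* product grid */
.product-grid {
    display: block;
    margin: 0 auto;
}
"""
print(minify_css(css))  # .product-grid{display:block;margin:0 auto;}
```

The output is byte-for-byte smaller but semantically identical, which is exactly the trade minification makes: readability for transfer size.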

Using a CDN can help speed up sluggish page loads. On a CDN, the first time content is served to a user, a copy of the content is stored on edge servers geographically closest to that user. Subsequent requests use the stored copy, resulting in faster load times. Hosting landing pages or other heavily trafficked static pages on the CDN helps to maintain a persistent, consistent web presence. As more people visit your site, your landing page will be cached worldwide and as a result load times will improve.

Publishing static content to a Content Delivery Network (CDN) rather than the web server can be easily accomplished with services like Rackspace Cloud Files, built with Akamai’s CDN technology. W3 Total Cache and PressFlow have built-in CDN technology as well.

There are many services associated with ecommerce and social that may provide code for you to place either sitewide or on particular pages. These code snippets, used for social sharing, analytics, widgets, and so on, are HTML/JavaScript. Many make external requests to the third party, which add latency and may not be cached on the third party’s end. Where possible, choose solutions for common page elements that avoid third-party code. Another common solution for troublesome JavaScript is to load the code on document ready (when the DOM is fully loaded), rather than including it in your markup directly.

Database-heavy sites may benefit from database optimizations, such as schema changes and indexing of commonly queried columns. Avoid using bloated frameworks and libraries.
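The effect of indexing a commonly queried column can be demonstrated with SQLite’s query planner (the principle is the same in MySQL and other databases, though the tooling differs). This sketch uses an in-memory database and illustrative table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, sku TEXT, price REAL)")
conn.executemany("INSERT INTO products (sku, price) VALUES (?, ?)",
                 [("SKU%05d" % i, i * 0.5) for i in range(10000)])

query = "EXPLAIN QUERY PLAN SELECT * FROM products WHERE sku = 'SKU00042'"

# Without an index, the lookup scans every row in the table.
plan_before = str(conn.execute(query).fetchall())

# Index the commonly queried column; the same lookup becomes a b-tree search.
conn.execute("CREATE INDEX idx_products_sku ON products (sku)")
plan_after = str(conn.execute(query).fetchall())

print(plan_before)  # plan mentions a full table SCAN
print(plan_after)   # plan mentions SEARCH ... USING INDEX idx_products_sku
```

The difference between a full scan and an index search is negligible at ten thousand rows and decisive at ten million, which is why indexing pays off most on database-heavy sites.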

Caching is an easy way to speed up your application or website, which can lower your bounce rate and save you from potential lost revenue. Determine what data you access frequently, and cache it in memory for repeated high-speed access. Whether your application is generating static content for web pages or storing sessions in caches, you have to decide how to store those caches. You can store the caches on your local file system or utilize distributed memory caches like memcached clusters.
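The usual pattern here is cache-aside: check the cache first, and only fall back to the database on a miss. A minimal sketch, using a plain dict to stand in for a memcached client (the function names and TTL are illustrative):

```python
import time

cache = {}        # stands in for a memcached client with get/set semantics
CACHE_TTL = 300   # seconds to keep an entry before re-querying

db_queries = 0    # counter so we can see how often the database is hit

def query_database(product_id):
    """Simulate an expensive database lookup."""
    global db_queries
    db_queries += 1
    return {"id": product_id, "name": f"Product {product_id}"}

def get_product(product_id):
    """Cache-aside: serve from memory when possible, fall back to the DB."""
    key = f"product:{product_id}"
    entry = cache.get(key)
    if entry and entry["expires"] > time.time():
        return entry["value"]                       # cache hit: no DB work
    value = query_database(product_id)              # cache miss: do the query...
    cache[key] = {"value": value, "expires": time.time() + CACHE_TTL}
    return value                                    # ...and remember the result

get_product(42)  # miss: hits the database
get_product(42)  # hit: served from memory
```

With memcached the dict lives in a shared cluster instead of one process, so every web server behind the load balancer benefits from the same warm cache.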

Despite your best planning efforts, your site can still experience spikes outside planned peak capacity. For example, a site may typically run on two servers and occasionally traffic spikes up to needing a third server. If that spike is predictable, your team can hop in ahead of time to provision the extra server and decommission it when it’s no longer needed.

If that spike is unpredictable, you may be forced to hold on to a third server to cover the spike when or if it happens. With an autoscaling tool like RightScale, a Rackspace Cloud Tools Marketplace partner, you don’t have to hold on to that third server. By setting thresholds on traffic, performance, or other variables, the third server is launched only when needed and automatically decommissioned when the launch parameters subside. For more autoscaling tools, visit the Rackspace Cloud Tools Marketplace.

Many still think that if they can’t go all cloud, they can’t use cloud at all. Not so. A hybrid cloud configuration lets you buy the base and rent the spike. You’re able to put the elements in place that you need to run your site now with the ability to burst into the cloud for traffic spikes or to expand functionality without re-architecting or changing platforms.

With Hybrid Cloud, an ecommerce store can tap cloud efficiencies for caching, image and video storage, or other resource-intensive, non-critical elements while keeping elements like payment processing and other security-sensitive site components on private cloud or on-premises gear to meet PCI compliance and other security requirements.

3. Plan for Peak Success with Rackspace

During the 2012 holiday season, Rackspace customer Karmaloop saw sales shoot up 60% from Black Friday to Cyber Monday and overall sales growth of 40%. Testing and tuning their site in advance was critical to their ability to stay up and running during the holiday season. “In our case, we tested up to a level that we felt was five times the volume that we would see during that Black Friday to Cyber Monday peak traffic period,” says Gary Rush, CTO of Karmaloop. “Once we were able to satisfy ourselves that we were able to handle that load, we felt more confident.”

The Rackspace ecommerce hosting environment is designed to support customers with resources that effortlessly scale from small to large volumes of traffic. We can help you with the technology needed to develop, test, and scale your site to manage and optimize a spike instead of losing customers.



© 2011-2013 Rackspace US, Inc.

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License

See license specifics and DISCLAIMER