memcached: More Cache = Less Cash!

By: Adrian Otto

Money matters. Performance and Scalability matter too. If you write your web application to be highly efficient and scalable, you will save money when you run it on the cloud. Your web site visitors will be delighted and so will your bank account.

In my series Coding in the Cloud I’ve been laying out some guidelines for how to successfully develop for the cloud. In both Rule 1 and Rule 2 I make reference to a memory-based cache and to a software program called memcached.

This post is all about memcached. We hope that you use it with Cloud Sites and Cloud Servers to get superior performance under high concurrent load compared to a typical LAMP application that does its reads from a MySQL database. Note that memcached can be used from all popular programming languages using one of the many client APIs.

History

The memcached server was originally written by Anatoly Vorobey and Brad Fitzpatrick at Danga Interactive in 2003. The memcached developer community really got its act together in 2007, and by 2008 the server was rock solid and ready for mainstream use. Many would argue that it was ready long before then. The software has been made popular thanks in part to numerous talks by Brian Aker from MySQL at developer conferences throughout the world. It’s used by many major web sites like Yahoo, Amazon, and Facebook. It’s true… memcached is ready for you… today.

How It Works

The concept for using memcached is simple: store your frequently accessed data in both your database and memcached, and check the cache first. If the data is not in the cache, fetch it from the database and update the cache for next time.

When you store your data into memcached, it’s stored in memory. The network protocol for fetching the data is extremely simple and lightweight. Because you access the data with a single hash key, the lookup time is very consistent between requests.
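
To make that concrete, here is a minimal sketch using the PHP Memcache client mentioned later in this post. The host, port, key, and value are illustrative, not prescribed:

    <?php
    // Connect to a memcached server (host and port are illustrative).
    $cache = new Memcache();
    $cache->connect('127.0.0.1', 11211);

    // Store a value under a key: arguments are key, value,
    // a flags field (0 = no compression), and a TTL in seconds.
    $cache->set('greeting', 'Hello, cloud!', 0, 300);

    // Fetch it back; get() returns false on a cache miss.
    $value = $cache->get('greeting');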

Why It Works

Databases store your data on disk. Most databases have a query cache so that if you re-query the same data repeatedly you can get a cached response from memory on the database server. This works nicely when the data rarely changes. It’s practically useless if your database changes all the time. Let’s consider a common scenario with a MySQL database, for example:

The application stores session keys in a browser cookie and uses a text string to index active sessions in an InnoDB table on the MySQL database. Each time the client browser connects to the application, it compares the text key supplied in the browser cookie against a value in the database to determine whether the session is authenticated. If the key is found, the request is processed. Otherwise a login page is sent as the response.

If you’ve written a web application before, chances are you’ve done something like what I’ve described above. The problem is that if you have a lot of users, your session data will change very frequently. When it does change, your MySQL query cache is emptied for that table. Guess what happens to your application under these circumstances:

1) Your application’s HTTP requests are processed much more slowly after a database update because of the query cache misses.

2) Your database performance suffers tremendously as a result of accessing the disk, which leads to slower average query response times and multiplies the number of concurrent connections to the database server, further exacerbating the problem.

3) If your HTTP request rate is high enough you could swamp your database server completely, and HTTP requests to your application could time out or return errors.

Why? Because of the difference between the MySQL Query Cache and memcached.

Query Cache vs. memcached: The Big Difference

When you use a MySQL database, each time you INSERT, UPDATE, or DELETE a row from the MySQL database, the entire query cache for that table is invalidated. That means that the next request for every single session will be a cache miss, and must access the data from disk on the database server.  This leads to the pile-up that I described above.

Using memcached is different. Let’s assume you have exactly the same setup in your database and your web application, but instead of using the MySQL database for all the authentication lookups, you consult memcached first; if no data is found, you consult the MySQL database and insert the result into memcached for next time. Most of your requests would not need to access the database at all. Memory access is hundreds of times faster than disk access.
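
As a sketch of that lookup order in PHP, the session check might look like the following. The table, column, and key names are hypothetical; $cache is assumed to be a connected Memcache instance and $db a mysqli handle:

    <?php
    function session_user($cache, $db, $session_key) {
        // Key the cache entry on an MD5 hash of the session token.
        $key = 'session:' . md5($session_key);

        // 1. Consult memcached first.
        $user_id = $cache->get($key);
        if ($user_id !== false) {
            return $user_id; // cache hit: no database access at all
        }

        // 2. Cache miss: fall back to the database.
        $stmt = $db->prepare('SELECT user_id FROM sessions WHERE session_key = ?');
        $stmt->bind_param('s', $session_key);
        $stmt->execute();
        $stmt->bind_result($user_id);
        $found = $stmt->fetch();
        $stmt->close();
        if (!$found) {
            return false; // no such session: send the login page
        }

        // 3. Update the cache for next time (expire after one hour).
        $cache->set($key, $user_id, 0, 3600);
        return $user_id;
    }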

Remember that memcached is a simple key/value store. It offers no SQL interface, just a simple protocol for setting and retrieving data by using a text key. It’s very common to use an MD5 hash of some data as a key.

When you change or remove data in memcached, only one key changes at a time. There is no “global” cache invalidation. That means you can do an operation like logging out a user (by removing the authentication key from MySQL and memcached) without affecting the key lookup performance for all the other users of your application. You completely avoid the pile-up I mentioned above.
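
A logout, for instance, might look like this sketch (same hypothetical names as above); note that only the one cache key is touched:

    <?php
    // Remove the session from both stores. Every other cached
    // session stays hot, so there is no pile-up.
    $key = 'session:' . md5($session_key);
    $cache->delete($key);
    $stmt = $db->prepare('DELETE FROM sessions WHERE session_key = ?');
    $stmt->bind_param('s', $session_key);
    $stmt->execute();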

Writing. Evil Writing.

One fact that most application developers miss or forget is that writing data to disk is expensive. It takes on average 10 times longer to write data to a typical disk drive than it does to read it. Writing to memory is the same speed as reading from memory. If you can decrease the frequency of disk writes by accumulating small fragments of data in memory until you have enough saved up to write one big chunk to disk, you will see a dramatic performance improvement.

When you use memcached, you can write new data into memory without this same penalty. When you do an SQL query to save data into a database it must save that data to disk. In most cases it actually writes the same data multiple times. First it writes to a transaction log, then to the actual table. In many cases it writes the data yet again to a replication log too. So, if you have an application that does lots of INSERT, UPDATE, or DELETE queries, consider using memcached for writing that data instead.
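
For instance, a page-view counter could be kept in memcached rather than updated with an SQL statement on every hit. This is only a sketch, and the counter key is hypothetical:

    <?php
    // Count page views in memory instead of issuing an SQL
    // UPDATE per request. add() creates the key only if it does
    // not already exist, so it is a safe no-op on later hits;
    // increment() then bumps the value atomically.
    $counter = 'pageviews:' . date('YmdH'); // one counter per hour
    $cache->add($counter, 0, 0, 7200);
    $cache->increment($counter, 1);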

Memory is Volatile

When memory is powered off, you lose its contents. When the application holding the memory crashes, you lose it too. Although memcached is very stable, it’s possible that it could crash. If it does, you’ll need to have all the same data elsewhere (usually in a MySQL database) so that you can rebuild the cache on the fly.

In some cases an application can’t run with an empty cache, so it must be pre-warmed first. This means that you load it with all the hot data in advance, before you expose it to your application so that you don’t get a rash of cache misses when you bring it back on-line. Be sure to test that your application can work under load with an empty cache. If not, pre-warm the cache by walking your database and pre-populating memcached.
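
A pre-warming pass can be as simple as the following sketch, again using the hypothetical sessions table from earlier:

    <?php
    // Walk the database and pre-populate memcached before the
    // application starts taking traffic.
    $result = $db->query('SELECT session_key, user_id FROM sessions');
    while ($row = $result->fetch_assoc()) {
        $cache->set('session:' . md5($row['session_key']),
                    $row['user_id'], 0, 3600);
    }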

Some data you can’t afford to lose. A financial transaction, for example. Fine, write that to the database. You need the guarantee that your transaction has been committed to permanent storage. Some data does not matter as much: an authentication token tracking a logged-in session, for example, or log data of some sort. If you are constantly writing that less important data, write and summarize it in memcached instead, and persist it to the database in batches on a sensible time interval, as sketched below.
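
As a companion to the page-view counter above, a periodic job (cron every few minutes, say) might flush the accumulated total to MySQL in one write. The stats table here is hypothetical:

    <?php
    // Move the counter from memcached into the database in a
    // single batched write, then subtract what we flushed so any
    // hits that arrived in the meantime are not lost.
    $counter = 'pageviews:' . date('YmdH');
    $hits = (int) $cache->get($counter);
    if ($hits > 0) {
        $stmt = $db->prepare('INSERT INTO pageview_stats (hour, hits) VALUES (?, ?) ' .
                             'ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits)');
        $hour = date('Y-m-d H:00:00');
        $stmt->bind_param('si', $hour, $hits);
        $stmt->execute();
        $cache->decrement($counter, $hits);
    }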

The memcached Protocol

Here is a brief overview of the memcached protocol.

Storage Commands

  • set: Store the given data
  • add: Store the given data if it does not already exist for the given key
  • replace: Store the given data if it already exists for the given key
  • append: Add the given data after the existing data for the given key
  • prepend: Add the given data before the existing data for the given key
  • cas: Check and Set: Store this data if no other connections have changed it since I last fetched it.

Retrieval Commands

  • get: Get the data for the given key.
  • gets: Get the data, and include a unique id for use with cas.

Increment/Decrement

  • incr: Increment the value for the given key
  • decr: Decrement the value for the given key

Deletion

  • delete: Delete the given key and its data

You won’t need to study the details of the protocol if you use one of the many client APIs for memcached. However, you will need to understand the concepts behind the various commands so that you can use them. There are also other commands that I did not mention, so check the documentation that comes with your client API for details.
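
For a feel of how simple the wire protocol is, here is an illustrative session typed by hand against a local server (the cas unique id, 42 here, is made up; the server assigns the real one):

    $ telnet 127.0.0.1 11211
    set greeting 0 300 5
    hello
    STORED
    get greeting
    VALUE greeting 0 5
    hello
    END
    gets greeting
    VALUE greeting 0 5 42
    hello
    END
    cas greeting 0 300 3 42
    bye
    STORED
    delete greeting
    DELETED

The arguments to set are the key, a flags value, an expiration time in seconds, and the byte count of the data on the next line. The trailing number returned by gets is the unique id that cas checks; if another connection had changed the data in the meantime, the cas reply would be EXISTS instead of STORED.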

Unlimited Memory?

Okay, so putting data into memcached is great, but you don’t have an infinite supply of memory on hand. Don’t worry. First consider my assertions:

1) A small cache is way better than no cache at all.

2) A huge cache is not always better than a small one.

The benefit of your cache is greatest with “hot” content, meaning the content that’s most frequently accessed by your application. Note that the volume of “hot” content is generally small, but the access rate for that data is high. A cache is perfect where you have a small amount of data that’s very frequently accessed.

The memcached server uses an LRU (Least Recently Used) cache reclamation algorithm. That means that when your cache fills up, it automatically makes room for new content by evicting the items you have not accessed recently. This naturally results in a cache that holds your hottest content.
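
The cache size is something you choose when you start the server. For example, assuming memcached is installed, this is a typical invocation with an explicit memory cap (these are standard memcached flags):

    # Run as a daemon with a 64 MB cap on port 11211. Once the
    # cap is reached, the LRU algorithm evicts the least recently
    # used items to make room for new ones.
    memcached -d -m 64 -p 11211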

If you have a use case where you have more hot content than you can fit in a single memcached server, or an access rate that’s so high you can’t satisfy the demand with one cache, then you can distribute your cache over multiple servers. Each server will hold a fraction of the total cached content.
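
With the PHP Memcache client, distributing the cache is a matter of adding every server to the pool; each key is then hashed to one of them. A sketch, with illustrative host names:

    <?php
    $cache = new Memcache();
    $cache->addServer('cache1.example.com', 11211);
    $cache->addServer('cache2.example.com', 11211);
    $cache->addServer('cache3.example.com', 11211);
    // get()/set() calls now route each key to one of the three
    // servers, so each holds roughly a third of the cache.

Setting memcache.hash_strategy = consistent in php.ini makes this client use consistent hashing, so adding or removing a server remaps only a small fraction of the keys instead of reshuffling the whole cache.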

Performance

A single instance of memcached on a single server can deliver up to 50,000 requests per second when using the UDP protocol. Facebook has reported that they are able to service 300,000 requests per second from a single memcached instance using a modified setup.

For fun, let’s do some simple math. Let’s assume that a “peak” load is ten times normal load. That means that an application that would max out a single memcached server would need to have an average request rate of about 5,000 requests per second. That’s 18 million requests per hour. Assuming you sustain that traffic for 12 hours each day on average, that’s 216 million memcached requests per day. Let’s assume you do 3 memcached lookups per page view. That means you would be serving up to 72 million page views per day. If you are lucky enough to have an application that runs at that scale, then you will definitely want to consider using multiple memcached servers, or potentially making each one more efficient like they have done over at Facebook. You will be interested in the way that memcached clients use consistent hashing to distribute your cache among multiple servers, as sketched in the previous section.

Concurrency

Looking up data in a database can be very fast. Looking up the same data in memcached may not be any faster. What memcached gets you in these cases is performance under high concurrent load. A database that performs well with a few concurrent connections is not likely to perform well with hundreds or thousands of concurrent connections. Memcached, on the other hand, works very well even under very high transaction rates and large numbers of concurrent network connections.

One thing I love about memcached is that it’s built using a server architecture that’s based on libevent. This means that it can support tens of thousands of concurrent network connections on a single server with very limited memory overhead per connection. This is way better than the more common server architectures that use a process per connection or a thread per connection. This is one of the key reasons why memcached is so amazingly fast.

Tutorials

Setting up memcached on Cloud Servers
Setting up memcached on Cloud Sites

About the Author

This is a post written and contributed by Adrian Otto.

Adrian serves as a Principal Architect for Rackspace, focusing on cloud services. He cares deeply about the future of cloud technology and important projects like OpenStack. He is also a key contributor to OASIS CAMP, a draft standard for application lifecycle management, and serves on its editing team. He comes from an engineering and software development background, and has made a successful career as a serial entrepreneur.


Comments
  • http://bradley-holt.blogspot.com/ Bradley Holt

    Is memcached available on Mosso/The Rackspace Cloud? I remember it being available for awhile in beta, then it went away. Is it back again? If so, how do I use it with my sites?

    • http://www.mosso.com Adrian Otto

      Cloud Sites supports the Memcache PHP client class, so you can access a memcached server running on Cloud Servers. The tutorials that will be released tomorrow will detail exactly how you use it.

      • Hubert Nguyen

        But the bandwidth charges between Cloud Sites and Cloud Servers are kind of killing it for me. Too bad, because it seemed like a really good idea.

  • http://thecanarycollective.com Ben Hirsch

    So excited about memcached. I am unclear, though, on whether it is available on Cloud Sites?

    • http://www.mosso.com Adrian Otto

      Yes, Cloud Sites supports the Memcache PHP client class, so you can access a memcached server running on Cloud Servers. The tutorials that will be released tomorrow will detail exactly how you use it.

  • http://www.schoonerinfotech.com/ Darpan Dinker

    Adrian: this is a great article.

    I would like to point out that there are Memcache products based on Flash memory that provide persistence to Memcache. Schooner Information Technology, for instance, provides 512 GB of Flash memory to cache/store Memcache objects in one server. It bridges the gap between a volatile in-memory cache service and a durable transactional store backed by a database and persistent storage.

    Using Flash memory with the Memcache service changes the landscape in a few interesting ways. I thought it best to supplement your sections and explain some of the benefits:

    “Writing. Evil writing”. Writing randomly to non-volatile Flash memory (e.g. a Flash SSD) has much better performance (throughput and latency) than writing randomly to hard disk drives. Your point is still valid: there is a big difference between reading from and writing to Flash memory.

    “Memory is Volatile”. Flash memory is non-volatile, so there is no loss of data or updates upon power failure, as there is with DRAM.

    “Unlimited Memory”. In some cases, we do have a choice for increasing the amount of cache used for Memcache.
    (1) It is often simple to reduce Memcache’s miss rate by adding DRAM incrementally. However, after a certain point (say a miss rate of 10%) one may need to keep doubling the cache size to reduce the miss rate by half. This means that to reduce the miss rate from 10% to 2.5%, one may have to quadruple the total available cache size. Using DRAM to achieve this may be an expensive solution for medium to large deployments.
    (2) There could be various ways to define “hot content”; I like to think of it in terms of working sets. Even for cached data, 1 GB of total cache may be inefficient for a workload with 500,000 requests/sec. However, 32–64 GB may be efficient, and adding main memory incrementally may also be efficient. What is the efficiency of adding main memory to a Memcache cluster? I like the metric “hits/GB/sec” – the greater this metric, the more efficient the use of memory. When we plot this metric over increasing memory size, we can find that at different data points the graph may rise, fall, or flatten. The working set and efficiency can clearly be seen from such a graph, and we may find more than one working set in most cases.
    DRAM is expensive and has high power consumption. Would you consider keeping an object in DRAM if it was accessed once per second, once per minute, once per hour or once per day? The decision is associated with many constraints, such as the application, the service, TCO for the infrastructure, etc.
    With the extra capacity and low power, Flash memory offers the potential of holding hot, semi-hot and cold data. This results in minimal load on the back-end data sources, and lower TCO.

    Darpan Dinker

    • http://www.mosso.com Adrian Otto

      I agree with all your points. In the future options that leverage SSD will emerge in the cloud for these specific benefits. It’s not offered today by any major cloud infrastructure provider that I’m aware of. There are some prerequisites that service providers have with offering this sort of a managed service that are absent from the current memcached protocol. Rackspace and others are working to address the gaps, which are beyond the scope of this blog.

      Step one should be to get everyone comfortable with the basics of memcached. For applications that are hosted within an enterprise environment, an SSD based memcached appliance would certainly be worth consideration when a basic memcached setup won’t address the needs for data durability or when dealing with particularly large working sets of data.
