By Adrian Otto | July 29, 2009 11:49 am
Money matters. Performance and scalability matter too. If you write your web application to be highly efficient and scalable, you will save money when you run it in the cloud. Your web site visitors will be delighted, and so will your bank account.
In my series Coding in the Cloud I’ve been laying out some guidelines for how to successfully develop for the cloud. In both Rule 1 and Rule 2 I make reference to a memory-based cache and to a software program called memcached.
This post is all about memcached. We hope that you use it with Cloud Sites and Cloud Servers to get superior performance under high concurrent load compared to a typical LAMP application that does its reads from a MySQL database. Note that memcached can be used from all popular programming languages through one of its many client APIs.
The memcached server was originally written by Anatoly Vorobey and Brad Fitzpatrick at Danga Interactive in 2003. The memcached developer community really got its act together in 2007, and by 2008 the server was rock solid and ready for mainstream use. Many could argue that it was ready long before then. The software has been made popular thanks in part to numerous talks by Brian Aker from MySQL at developer conferences throughout the world. It’s used by many major web sites like Yahoo, Amazon, and Facebook. It’s true… memcached is ready for you… today.
The concept for using memcached is simple: store your frequently accessed data in both your database and memcached, and check the cache first. If the data is not in the cache, fetch it from the database and update the cache for next time.
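That cache-aside pattern can be sketched in a few lines. Here a plain dict stands in for memcached and `fetch_from_db` for a real MySQL query; in production you would go through a memcached client library, and the key and function names below are just illustrative assumptions:

```python
cache = {}  # stand-in for memcached

def fetch_from_db(key):
    # Placeholder for a real database lookup.
    return "row-for-" + key

def get_cached(key):
    value = cache.get(key)          # 1. check the cache first
    if value is None:               # 2. on a miss...
        value = fetch_from_db(key)  #    ...fetch from the database
        cache[key] = value          # 3. update the cache for next time
    return value

get_cached("user:42")   # miss: reads the "database" and fills the cache
get_cached("user:42")   # hit: served straight from memory
```

Every read after the first is served from memory until the entry is changed or evicted.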
When you store your data into memcached, it’s stored in memory. The network protocol for fetching the data is extremely simple and lightweight. Because you access the data with a single hash key, the lookup time is very consistent between requests.
Databases store your data on disk. Most databases have a query cache, so if you re-query the same data repeatedly you can get a cached response from memory on the database server. This works nicely when the data rarely changes. It’s practically useless if your database changes all the time. Let’s consider a common scenario with a MySQL database as an example:
The application stores session keys in a browser cookie and uses a text string to index active sessions in an InnoDB table on the MySQL database. Each time the client browser connects to the application, it compares the text key supplied in the browser cookie against a value in the database to determine whether or not the session is authenticated. If the key is found, the request is processed. Otherwise a login page is sent as the response.
If you’ve written a web application before, chances are you’ve done something like what I’ve described above. The problem is that if you have a lot of users, your session data will change very frequently. When it does change, your MySQL query cache is emptied for that table. Guess what happens to your application under these circumstances:
1) Your application’s HTTP requests are processed much more slowly after a database update because of the query cache misses.
2) Your database performance suffers tremendously as a result of accessing the disk. This leads to slower average query response time and multiplies the number of concurrent connections to the database server, which exacerbates the problem.
3) If your HTTP request rate is high enough you could swamp your database server completely, and HTTP requests to your application could time out or return errors.
Why? Because of the difference between the MySQL Query Cache and memcached.
When you use a MySQL database, each time you INSERT, UPDATE, or DELETE a row from the MySQL database, the entire query cache for that table is invalidated. That means that the next request for every single session will be a cache miss, and must access the data from disk on the database server. This leads to the pile-up that I described above.
Using memcached is different. Let’s assume you have exactly the same setup in your database and your web application, but instead of using the MySQL database for all the authentication lookups, you consult memcached first; if no data is found there, you consult the MySQL database and insert the result into memcached for next time. Most of your requests would not need to access the database at all. Memory access is hundreds of times faster than disk access.
Remember that memcached is a simple key/value store. It offers no SQL interface, just a simple protocol for setting and retrieving data by using a text key. It’s very common to use an MD5 hash of some data as a key.
When you change or remove data in memcached, only one key changes at a time. There is no “global” cache invalidation. This means you can do an operation like logging out a user (by removing the authentication key from MySQL and memcached) and the key lookup performance for all other users of your application will not be affected. This means you completely avoid the pile-up I mentioned above.
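A sketch of that logout scenario, again with a dict standing in for memcached and an MD5 hash of the session cookie as the key (the `session:` key prefix is an assumption for illustration):

```python
import hashlib

cache = {}  # stand-in for memcached

def session_key(cookie):
    # A common convention: key on an MD5 hash of some identifying data.
    return "session:" + hashlib.md5(cookie.encode()).hexdigest()

def login(cookie):
    cache[session_key(cookie)] = "authenticated"

def logout(cookie):
    # Delete exactly one key; no other cached session is touched.
    cache.pop(session_key(cookie), None)

login("alice-cookie")
login("bob-cookie")
logout("alice-cookie")
# Bob's session is still cached: there was no global invalidation.
```

Contrast this with the MySQL query cache, where Alice’s logout would have invalidated the cached lookups for every session in the table.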
One fact that most application developers miss or forget is that writing data to disk is expensive. It takes on average 10 times longer to write data to a typical disk drive than it does to read it. Writing to memory is the same speed as reading from memory. If you can decrease the frequency that you write to disk by writing small fragments of data in memory until you have enough saved up to write a big chunk to the disk, then you will see a dramatic performance improvement.
When you use memcached, you can write new data into memory without this same penalty. When you do an SQL query to save data into a database it must save that data to disk. In most cases it actually writes the same data multiple times. First it writes to a transaction log, then to the actual table. In many cases it writes the data yet again to a replication log too. So, if you have an application that does lots of INSERT, UPDATE, or DELETE queries, consider using memcached for writing that data instead.
When memory is powered off, you lose its contents. When the process holding the memory crashes, you lose it too. Although memcached is very stable, it’s possible that it could crash. If it does, you’ll need to have all the same data elsewhere (usually in a MySQL database) so that you can rebuild the cache on the fly.
In some cases an application can’t run with an empty cache, so the cache must be pre-warmed first. That means loading it with all the hot data in advance, before you expose it to your application, so that you don’t get a rash of cache misses when you bring it back online. Be sure to test that your application can work under load with an empty cache. If not, pre-warm the cache by walking your database and pre-populating memcached.
Some data you can’t afford to lose: a financial transaction, for example. Fine, write that to the database; you need the guarantee that your transaction has been committed to permanent storage. Other data matters less: an authentication token tracking a logged-in session, for example, or log data. If you are constantly writing that less important data, write and summarize it in memcached, then persist it to the database in batches on a sensible time interval.
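That batching idea can be sketched with plain lists standing in for memcached and the database; the flush threshold is an arbitrary assumption, and a timer would work just as well as a count:

```python
db_rows = []        # stand-in for the MySQL table
buffered = []       # stand-in for data accumulated in memcached
FLUSH_EVERY = 3     # arbitrary batch size

def log_event(event):
    buffered.append(event)            # cheap in-memory write
    if len(buffered) >= FLUSH_EVERY:  # periodically persist a batch...
        db_rows.extend(buffered)      # ...as one multi-row INSERT
        buffered.clear()

for i in range(7):
    log_event({"n": i})
# Six rows reached the "database" in two batches; one event is still buffered.
```

The database sees two bulk writes instead of seven small ones, which is exactly the disk-write reduction described above.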
Here is a brief overview of the memcached protocol. The core commands are simple: set, add, and replace store a value under a key; get retrieves one or more values; delete removes a key; and incr and decr adjust counters.
You won’t need to study the details of the protocol if you use one of the many client APIs for memcached. However, you will need to understand the concepts behind the various commands in order to use it well. There are other commands as well, so check the documentation that comes with your client API for details.
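For a flavor of the wire format, here is roughly what storing and then fetching a session token looks like in the text protocol (client lines flush left, server replies as memcached sends them; the set arguments are key, flags, expiry in seconds, and value length in bytes — the key and value shown are made up):

```
set session:abc123 0 3600 13
authenticated
STORED

get session:abc123
VALUE session:abc123 0 13
authenticated
END
```

That lightweight request/response exchange is all there is to a lookup, which is a big part of why response times are so consistent.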
Okay, so putting data into memcached is great, but you don’t have an infinite supply of memory on hand. Don’t worry. First consider my assertions:
1) A small cache is way better than no cache at all.
2) A huge cache is not always better than a small one.
The benefit of your cache is greatest with “hot” content, meaning the content that’s most frequently accessed by your application. In most cases the volume of “hot” content is small, but the access rate for that data is high. A cache is perfect when you have a small amount of data that’s very frequently accessed.
The memcached server uses an LRU (Least Recently Used) cache reclamation algorithm. When your cache fills up, it automatically makes room for new content by evicting the entries you have accessed least recently. This naturally results in a cache that holds your hottest content.
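The eviction behavior can be modeled in a few lines with an OrderedDict. This is a toy model only; real memcached manages memory in slabs and is far more sophisticated:

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache illustrating memcached-style eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None              # cache miss
        self.data.move_to_end(key)   # mark as recently used
        return self.data[key]

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")       # touch "a" so it is the hotter entry
cache.set("c", 3)    # cache is full: "b", the coldest entry, is evicted
```

After the final set, "a" and "c" remain cached while "b", the least recently used entry, has been pushed out.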
If you have a use case where you have more hot content than you can fit in a single memcached server, or an access rate so high that you can’t satisfy the demand with one cache, then you can distribute your cache over multiple servers. Each server will hold a fraction of the total cached content.
A single instance of memcached on a single server can deliver up to 50,000 requests per second when using the UDP protocol. Facebook has reported that they are able to service 300,000 requests per second from a single memcached instance using a modified setup.
For fun, let’s do some simple math. Let’s assume that a “peak” load is ten times normal load. That means that an application that would max out a single memcached server would need to have an average request rate of about 5,000 requests per second. That’s 18 million requests per hour. Assuming you sustain that traffic for 12 hours each day on average, that’s 216 million memcached requests. Let’s assume you do 3 memcached lookups per page view. That means you would be serving up to 72 million page views per day. If you are lucky enough to have an application that runs at that scale, then you will definitely want to consider using multiple memcached servers, or potentially making each one more efficient like they have done over at Facebook. You will be interested in the way that memcached uses consistent hashing to distribute your cache among multiple servers.
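A toy version of that consistent-hashing idea is sketched below. Real clients use a production-grade scheme (ketama-style, with careful point placement), and the server names here are made up; the key property to notice is that each key always maps to the same server, and removing a server only remaps the keys that lived on it:

```python
import bisect
import hashlib

def ring_hash(s):
    # Place servers and keys on the same numeric ring via MD5.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    """Toy consistent-hash ring mapping keys to memcached servers."""

    def __init__(self, servers, points_per_server=100):
        # Many virtual points per server smooth out the distribution.
        self.ring = sorted(
            (ring_hash("%s#%d" % (s, i)), s)
            for s in servers
            for i in range(points_per_server)
        )
        self.hashes = [h for h, _ in self.ring]

    def server_for(self, key):
        # Walk clockwise to the first virtual point at or after the key.
        i = bisect.bisect(self.hashes, ring_hash(key)) % len(self.ring)
        return self.ring[i][1]

servers = ["cache1:11211", "cache2:11211", "cache3:11211"]
ring = HashRing(servers)
target = ring.server_for("session:abc")  # stable for this key
```

If cache3 were removed, only the keys that hashed to cache3’s points would move; everything on cache1 and cache2 would stay put, so most of the cache survives the topology change.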
Looking up data in a database can be very fast, and looking up the same data in memcached may not be any faster. What memcached gets you in these cases is performance under high concurrent load. A database that performs well with a few concurrent connections is not likely to perform well with hundreds or thousands of them. Memcached, on the other hand, works very well even under very high transaction rates and large numbers of concurrent network connections.
One thing I love about memcached is that it’s built using a server architecture that’s based on libevent. This means that it can support tens of thousands of concurrent network connections on a single server with very limited memory overhead per connection. This is way better than the more common server architectures that use a process per connection or a thread per connection. This is one of the key reasons why memcached is so amazingly fast.
Setting up memcached on Cloud Servers
Setting up memcached on Cloud Sites
Source URL: http://www.rackspace.com/blog/memcached-more-cache-less-cash/
Copyright ©2015 The Official Rackspace Blog unless otherwise noted.