
Coding in the Cloud – Rule 2 – Don't write to the database in real time


Coding in the Cloud
By Adrian Otto

This continues my series on Rules for Coding in the Cloud, rules I’ve developed after watching applications encounter problems at scale when deployed on Cloud Sites.

People think of the cloud as an unlimited resource, but there are limits, and you will reach them if you try something like writing multiple rows to the database for every single hit to your web site. For example, if your site gets a million hits in an hour and you write four rows for each hit, you’d write four million rows of new data into your database in that hour. That usage pattern causes lots of blocking and wastes money. In some cases it can quickly exceed the database server’s write capacity, because a write takes on average ten times longer than an equally sized read.

So if you can avoid it, don’t write to the database. When you must write to the database, do it infrequently. Don’t write based on the access pattern of your web site, or your entire application will fail under high load. Instead, find a way to queue that data in a scalable fashion, summarize it, and then add it to the database at an infrequent interval.

Don’t use the database as a web log. If you do, and you have an application that’s running on hundreds of thousands of nodes in parallel, you’ll be unpleasantly surprised by the outcome. It will fail.

So what kinds of applications tend to break this rule?

Ad networks, for one. Ad networks are designed to track where ads come from in real time so they can get up-to-date intelligence about the performance of ads. The critical error in logic that some ad network developers have made is believing that to get real-time data you need real-time logging. You don’t. All you need is real-time summary data. You don’t need the detail level in real time; you need the detail level for archival, but not to get real-time intelligence. So what you really want when serving an ad network is a set of summary counters for the performance of all the various objects you’re serving, and you want those counters updated in memory, not in the database. Every few minutes, read that information out of the memory counters and write it into the database for permanent storage. This gives you a real-time view of the ad network without writing multiple rows into a database for every single access to every single ad. We’ve seen multiple ad networks make this mistake, run into scalability constraints, and have to redesign the way their systems work. The principle here applies to all sites, though, not just ad networks.

Because of the way Cloud Sites works, storing a summary value in memory may seem rather tricky. The best way to do this is to use a memcached instance running on Cloud Servers. From a PHP application you can use the Memcached class for this. It supports the increment method, which allows you to safely increment the value of a given key from numerous servers simultaneously.
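
For example, counting one ad impression per page hit could look something like the sketch below. The server address and key scheme are made up for illustration; only the Memcached class and its methods are real.

<?php
// A minimal sketch: count one ad impression in memcached instead of
// writing a row to the database. The address 10.0.0.5:11211 stands in
// for a memcached instance running on a Cloud Server.
$mc = new Memcached();
$mc->addServer('10.0.0.5', 11211);

$key = 'ad_hits_' . (int) $_GET['ad_id'];

// add() does nothing if the key already exists, so this is safe even
// when many web servers execute it at the same time.
$mc->add($key, 0);
$mc->increment($key);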

Bottom line: don’t try to write to the database in real time. Writing to the database in real time is a recipe for failure at scale. If you’re going to write to the database, do it asynchronously: take the data in batched form and save it to the database at regular intervals.
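
As a sketch of what that asynchronous step could look like in PHP, run from cron every few minutes (the table, key names, and credentials below are assumptions for illustration):

<?php
// Sketch of a periodic flush job: copy the in-memory counters into
// the database as summary rows. All names here are illustrative.
$mc = new Memcached();
$mc->addServer('10.0.0.5', 11211);

$db = new PDO('mysql:host=db.example.com;dbname=stats', 'user', 'pass');
$insert = $db->prepare(
    'INSERT INTO ad_hits_summary (ad_id, hits, recorded_at) VALUES (?, ?, NOW())'
);

$activeAdIds = array(101, 102, 103); // in practice, the list of ads you serve

foreach ($activeAdIds as $adId) {
    $hits = $mc->get('ad_hits_' . $adId);
    if ($hits === false || $hits == 0) {
        continue; // no traffic for this ad since the last flush
    }
    $insert->execute(array($adId, $hits));
    // Decrement instead of deleting the key, so hits arriving while
    // the flush runs are not lost.
    $mc->decrement('ad_hits_' . $adId, (int) $hits);
}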



Comments

“Every few minutes, read that information out of the memory counters and write it into the database for permanent storage.”

In my experience, memory can’t be relied on the way db storage can. If you run into a problem with your memory storage, you could lose any stats for good if they aren’t put into the database directly.

Is that a valid issue?

Oliver Nassar on July 10, 2009

For Rails developers using ActiveRecord, check out the ar-extensions gem. In addition to giving you all kinds of neat new finders, it gives ActiveRecord the ability to do multiple-row inserts in one SQL statement.

You still pay the database penalty for these inserts, but you can insert 1000 rows without doing 1000 individual database calls. It works like this:

Suppose you have a table of books with columns title, author, and publisher, and you need to insert 3 (or 3000) records. The old way would be to call Book.create(:title => …) 3 (or 3000) times. With ar-extensions, it’s much more efficient.

# Define your columns
cols = [:title, :author, :publisher]
# Define your data (one array entry per row)
vals = [
  ['The Shining', 'Stephen King', 'Random House'],
  ['The Lovely Bones', 'Alice Sebold', 'Pendant'],
  ['Advanced Rails Recipes', 'Mike Clark', 'Pragmatic Programmer']
]
# Stuff it in the DB in one swell foop.
Book.import(cols, vals)

Beau on July 10, 2009

“In my experience, memory can’t be relied on the way db storage can. If you run into a problem with your memory storage, you could lose any stats for good if they aren’t put into the database directly.”

With memcached I don’t think you’ll have to worry about your memory issues unless someone accidentally trips over the plug that runs the entire datacenter ;)

Denny Ferrassoli on July 10, 2009

> Oliver Nassar Says:
>
> July 10th, 2009 at 11:54 am
> “Every few minutes, read that information out of the memory
> counters and write it into the database for permanent storage.”
>
> In my experience, memory can’t be relied on the way db
> storage can. If you run into a problem with your memory
> storage, you could lose any stats for good if they aren’t put
> into the database directly.
>
> Is that a valid issue?

Yes, it’s true that memory is not durable like disk storage. If the equipment holding the non-committed data in memory loses power, you will lose the data. If the application holding the data in memory crashes, the data will be lost.

Developing an application for both high performance and scalability requires some compromise. One key compromise in this example is data durability.

If you have data that you can’t afford to lose, then you’ll need to speed up the writes to disk storage in order to scale. As Beau suggested, one technique when using an SQL database is to batch the writes to reduce the number of SQL queries required. This helps, but it does not scale horizontally. To scale horizontally, you want a solution that adds capacity with each additional server you add to the cluster.

Another technique is to write data to numerous servers (each with independent disk storage) in parallel, and use a data summary “roll-up” scheme.

The simple solution here is to use a RAID volume for the disk system where the database runs in order to speed up the writes. But to get the equivalent performance of the memory solution I described using disk storage, you would need a rack full of hundreds of fast hard drives. In most cases this is cost prohibitive, both for the purchase of the equipment and for the electric power to operate it.

When I’ve seen requirements for data that can’t be left without a transactional write to permanent storage, the solution for horizontal scalability has always been a cluster of systems where writes are distributed over a large number of drives in multiple servers (the widest part of the cluster) and later rolled up into summary data. When properly architected and implemented, this approach can also work. It’s usually not possible to do this in real time, but it is possible if you have a method to query the master data source and all of the distributed queues in a single aggregate query. This approach is a lot more complex than using a memory-based cache, so only consider it if your data reliability requirements justify it.
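
As a rough sketch of what such an aggregate query could look like (hosts, tables, and columns here are invented for illustration):

<?php
// A near-real-time total is the rolled-up summary on the master plus
// whatever detail rows are still queued on each distributed server.
$master = new PDO('mysql:host=master.example.com;dbname=stats', 'user', 'pass');
$queues = array(
    new PDO('mysql:host=queue1.example.com;dbname=stats', 'user', 'pass'),
    new PDO('mysql:host=queue2.example.com;dbname=stats', 'user', 'pass'),
);

$adId = 101;

// Rolled-up summary on the master...
$stmt = $master->prepare('SELECT COALESCE(SUM(hits), 0) FROM ad_hits_summary WHERE ad_id = ?');
$stmt->execute(array($adId));
$total = (int) $stmt->fetchColumn();

// ...plus the detail rows that have not been rolled up yet.
foreach ($queues as $queue) {
    $stmt = $queue->prepare('SELECT COUNT(*) FROM ad_hits_queue WHERE ad_id = ?');
    $stmt->execute(array($adId));
    $total += (int) $stmt->fetchColumn();
}

echo "Ad $adId: $total hits\n";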

Adrian Otto [Racker] on July 11, 2009

Thanks for the in-depth post, Adrian. I really appreciate it, and will keep it in mind for db storage/memory caching.

All the best.

Oliver Nassar on July 11, 2009

Is it possible to use memcached with Cloud Sites?

Richard on July 11, 2009

Should be. As long as you have root access it shouldn’t be a problem.

Oliver Nassar on July 11, 2009

Richard,

Cloud Sites has the libmemcached client library classes on the PHP servers. You can use those to access a memcached instance on a Cloud Server. I will be publishing another blog post on that very subject in the near future, so keep an eye out for it.

Adrian

Adrian Otto [Racker] on July 11, 2009

Good advice here, but isn’t this simply restating “be nice to the database at the expense of your application integrity,” without having anything more to say about this unresolved scalability catch-22 than BigTable/Dynamo have stated by their inherent design?

Steve Davis on July 14, 2009


Steve Davis,

An astute observation indeed! Yes, I’m essentially saying that you should be nice to the database, and that you should be willing to compromise data durability to do so. You can use a widely distributed storage system like the ones you mentioned to help solve this, but the reality is that the systems in widespread use today rely heavily on SQL databases. Most application developers are using one of the popular frameworks that have an SQL database central to their design. For those developers, using a next-generation data storage system is out of the question.

Keep in mind that my posts, although many of them work as general guidance for any cloud, are written from the perspective of lessons learned from the Cloud Sites platform from the Rackspace Cloud. Because that is a LAMP and Windows stack system, the SQL guidance applies.

Although the truth may be that the only way to get a truly scalable system is not to use LAMP at all and to do something completely next-gen, the fact remains that we have applications here and now, built on LAMP and .NET, that need better scalability, and that’s what my 10 rules are trying to help with.

If you are one of the elite who wants to use a distributed data store, and you have the ability to make your application work with it instead of an SQL database, then you most certainly should!

Regards,

Adrian Otto

Adrian Otto [Racker] on July 21, 2009

Adrian and Steve,
Suppose I am interested in developing a scalable cloud app that has social networking and data collection aspects and can start with a clean slate. What stack would you recommend? If the only non-variable was using PHP, what stack would you recommend? I have researched key-value data stores, column-based distributed servers, etc., and not found a solution that really seems plausible from a rapid-development perspective, due to either poor documentation or a lack of confidence in a relatively unproven system. To phrase the question differently: if I want to develop an app with a data store similar to GAE or AWS SimpleDB on my own servers without tying myself to a proprietary system (preferably OSS), what is the best option?

Thanks in advance,
Colin

Colin on July 24, 2009

[…] been laying out some guidelines for how to successfully develop for the cloud. In both Rule 1 and Rule 2 I make reference to a memory-based cache and to a software program called […]

Pingback: Rackspace Cloud Computing & Hosting | memcached: More Cache = Less Cash! on July 29, 2009

[…] But consider also this. Mosso Sites are an example of “fabric”, albeit fabric that tries to solve the very problem posed above by, well, simply providing the familiar stack (and doing some MySQL clustering magic behind the scene, I assume). And it certainly wins the followers this way — the effort needed to port a LAMP application to a Mosso Site is minimal. But yet the same RDBMS limitations surface and Mosso feels the need to implore the users: […]

Pingback: DEBEDb holds forth » All your ACID are belong to us on August 4, 2009

