Todd Hoff over on the high scalability blog has an interesting post in the vein of my favorite mantra of Disk is the new tape titled Are Cloud Based Memory Architectures the Next Big Thing? where he writes

Why might cloud based memory architectures be the next big thing? For now we'll just address the memory based architecture part of the question, the cloud component is covered a little later.

Behold the power of keeping data in memory:


Google query results are now served in under an astonishingly fast 200ms, down from 1000ms in the olden days. The vast majority of this great performance improvement is due to holding indexes completely in memory. Thousands of machines process each query in order to make search results appear nearly instantaneously.

This text was adapted from notes on Google Fellow Jeff Dean's keynote speech at WSDM 2009.

Google isn't the only one getting a performance bang from moving data into memory. Both LinkedIn and Digg keep the graph of their social network in memory. Facebook has northwards of 800 memcached servers creating a reservoir of 28 terabytes of memory enabling a 99% cache hit rate. Even little guys can handle 100s of millions of events per day by using memory instead of disk.

The entire post is sort of confusing since it mixes ideas that should be two or three different blog posts into a single entry. Of the many ideas thrown around in the post, the one I find most interesting is the trend of treating in-memory storage as a core part of how a system functions, not just as an optimization that keeps you from having to go to disk.

The LinkedIn architecture is a great example of this trend. They have servers, which they call The Cloud, whose job is to cache the site's entire social graph in memory, and they have created multiple instances of this cached social graph. Going to disk to satisfy social graph related queries, which can require touching data for hundreds to thousands of users, is simply never an option. This is different from how you would traditionally treat a caching layer such as ASP.NET caching or typical usage of memcached.
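To see why disk is never an option here, consider what a social graph query actually does: a friends-of-friends lookup fans out across hundreds of users' adjacency lists, which is a handful of hash lookups in RAM but hundreds of random reads on disk. Below is a toy sketch of the idea (not LinkedIn's actual design; all names are illustrative):

```python
from collections import defaultdict, deque

class SocialGraph:
    """A toy in-memory social graph. Real systems keep many replicated
    instances of a structure like this and serve all graph queries from it."""

    def __init__(self):
        self.friends = defaultdict(set)  # user -> set of direct connections

    def connect(self, a, b):
        self.friends[a].add(b)
        self.friends[b].add(a)

    def friends_of_friends(self, user):
        # Touches every friend's friend list -- hundreds of random reads
        # if this lived on disk, but just set unions in memory.
        result = set()
        for friend in self.friends[user]:
            result |= self.friends[friend]
        result -= self.friends[user]
        result.discard(user)
        return result

    def degrees_of_separation(self, a, b):
        # Breadth-first search over the in-memory graph.
        seen = {a}
        queue = deque([(a, 0)])
        while queue:
            node, dist = queue.popleft()
            if node == b:
                return dist
            for friend in self.friends[node]:
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, dist + 1))
        return None
```

The point isn't the data structure, which is trivial, but that the whole graph lives in RAM as the primary serving copy rather than as a cache in front of a database.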

To build such a memory based architecture there are a number of features you need to consider that don't come out of the box in a caching product like memcached. The first is data redundancy, which memcached does not support. There are forty instances of LinkedIn's in-memory social graph which have to be kept mostly in sync without putting too much pressure on the underlying databases. Another feature common to such memory based architectures that you won't find in memcached is transparent support for failover. When your data is spread out across multiple servers, losing a server should not mean that an entire server's worth of data is no longer being served out of the cache. This is of particular concern when you have a decent sized server cloud, because servers should be expected to come in and out of rotation all the time given hardware failures. Memcached users can solve this problem by using libraries that support consistent hashing (my preference) or by keeping a server available as a hot spare with the same IP address as the downed server. The key problem with the lack of native failover support is that there is no way to automatically rebalance the workload across the pool as servers are added to and removed from the cloud.

For large scale Web applications, if you're going to disk to serve data in your primary scenarios then you're probably doing something wrong. The path most scaling stories take is that people start with a database, partition it and then move to a heavily cache-based approach when they start hitting the limits of a disk based system. This is now common knowledge in the pantheon of Web development. What doesn't yet seem to be part of the communal knowledge is the leap from being cache-based with the option of falling back to a database to simply being memory-based. The shift is slight but fundamental.
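The slight-but-fundamental shift shows up clearly in the read path. A hedged sketch of the two styles (function and variable names are mine, for illustration only):

```python
def read_cache_aside(key, cache, db):
    """Traditional cache-aside: the database is the source of truth and
    the cache is an optimization, so a miss falls through to disk
    right in the request path."""
    value = cache.get(key)
    if value is None:
        value = db.get(key)   # disk I/O on the hot path
        if value is not None:
            cache[key] = value
    return value

def read_memory_based(key, store):
    """Memory-based: the in-memory store *is* the serving tier. The
    request path never touches disk; the database exists only to
    rebuild the store after a restart or to log writes."""
    return store.get(key)
```

In the first style a cache miss is a normal, expected event. In the second, a miss means the data doesn't exist, and keeping the store complete and redundant becomes a first-class design problem rather than a tuning knob.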

FURTHER READING

  • Improving Running Components at Twitter: Evan Weaver's presentation on how Twitter improved their performance by two orders of magnitude by increasing their usage of caches and improving their caching policies among other changes. Favorite quote, "Everything runs from memory in Web 2.0".

  • Defining a data grid: A description of some of the features that an in-memory architecture needs beyond simply having data cached in memory.

Note Now Playing: Ludacris - Last Of A Dying Breed [Ft. Lil Wayne] Note