When you build web applications that have to scale up to millions of users, you sometimes end up questioning almost every aspect of your design as you hit scalability problems. One thing I hadn't expected was how many people in our shoes have begun to point out the limitations of SQL databases when it comes to building modern Web applications. Below is a sampling of such comments, gathered here mainly so I have easy access to them the next time I want an example of what I mean by the limitations of SQL databases for large scale Web applications. After the quotes I've added a few rough code sketches of what some of these ideas might look like in practice.

  1. From Google's Adam Bosworth we have the post Where Have all the Good databases Gone

    The products that the database vendors were building had less and less to do with what the customers wanted...Google itself (and I'd bet a lot Yahoo too) have similar needs to the ones Federal Express or Morgan Stanley or Ford or others described, quite eloquently to me. So, what is this growing disconnect?

    It is this. Users of databases tend to ask for three very simple things:

    1) Dynamic schema so that as the business model/description of goods or services changes and evolves, this evolution can be handled seamlessly in a system running 24 by 7, 365 days a year. This means that Amazon can track new things about new goods without changing the running system. It means that Federal Express can add Federal Express Ground seamlessly to their running tracking system and so on. In short, the database should handle unlimited change.

    2) Dynamic partitioning of data across large dynamic numbers of machines. A lot of people track a lot of data these days. It is common to talk to customers tracking 100,000,000 items a day and having to maintain the information online for at least 180 days with 4K or more a pop and that adds (or multiplies) up to a 100 TB or so. Customers tell me that this is best served up to the 1MM users who may want it at any time by partitioning the data because, in general, most of this data is highly partitionable by customer or product or something. The only issue is that it needs to be dynamic so that as items are added or get "busy" the system dynamically load balances their data across the machines. In short, the database should handle unlimited scale with very low latency. It can do this because the vast majority of queries will be local to a product or a customer or something over which you can partition...

    3) Modern indexing. Google has spoiled the world. Everyone has learned that just typing in a few words should show the relevant results in a couple of hundred milliseconds. Everyone (whether an Amazon user or a customer looking up a check they wrote a month ago or a customer service rep looking up the history for someone calling in to complain) expects this. This indexing, of course, often has to include indexing through the "blobs" stored in the items such as PDF's and Spreadsheets and Powerpoints. This is actually hard to do across all data, but much of the need is within a partitioned data set (e.g. I want to and should only see my checks, not yours or my airbill status not yours) and then it should be trivial.
    ...
    Users of databases don't believe that they are getting any of these three. Salesforce, for example, has a lot of clever technology just to hack around the dynamic schema problem so that 13,000 customers can have 13,000 different views of what a prospect is.

    If the database vendors ARE solving these problems, then they aren't doing a good job of telling the rest of us.

  2. Joshua Schachter of del.icio.us is quoted as saying the following in a recent talk

    Scaling: avoid early optimization. SQL doesn't map well to these problems - think about how to split up data over multiple machines. Understand indexing strategies, profile every SQL statement. Nagios or similar for monitoring.

    Tags don't map well to SQL. Sometimes you can prune based on usage - only index the first few pages for example. This keeps indexes small and fast.

  3. Mark Fletcher of Bloglines wrote the following in his post Behind the Scenes of the Bloglines Datacenter Move (Part 2)

    The Bloglines back-end consists of a number of logical databases. There's a database for user information, including what each user is subscribed to, what their password is, etc. There's also a database for feed information, containing things like the name of each feed, the description for each feed, etc. There are also several databases which track link and guid information. And finally, there's the system that stores all the blog articles and related data. We have almost a trillion blog articles in the system, dating back to when we first went on-line in June, 2003. Even compressed, the blog articles consist of the largest chunk of data in the Bloglines system, by a large margin. By our calculations, if we could transfer the blog article data ahead of time, the other databases could be copied over in a reasonable amount of time, limiting our downtime to just a few hours.

    We don't use a traditional database to store blog articles. Instead we use a custom replication system based on flat files and smaller databases. It works well and scales using cheap hardware.
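
To make Bosworth's first two asks a little more concrete, here's a rough Python sketch of my own (a toy, not anything Google or the database vendors actually ship): items are stored as free-form dictionaries so tracking a new attribute never requires a schema migration, and each item is routed to a partition by hashing the customer key so queries scoped to one customer only touch one machine. All the names and numbers below are made up.

    import hashlib

    NUM_PARTITIONS = 16                     # imagine one partition per machine
    partitions = [dict() for _ in range(NUM_PARTITIONS)]   # stand-ins for remote stores

    def partition_for(customer_id):
        # Hash the key that most queries filter on, then map it to a partition.
        digest = hashlib.md5(customer_id.encode("utf-8")).hexdigest()
        return int(digest, 16) % NUM_PARTITIONS

    def put_item(customer_id, item_id, attributes):
        # 'attributes' is an open-ended dict, so tracking a brand new field
        # (say "ground_service") never requires altering a schema.
        store = partitions[partition_for(customer_id)]
        store.setdefault(customer_id, {})[item_id] = dict(attributes)

    def get_items(customer_id):
        # Queries scoped to one customer touch exactly one partition.
        store = partitions[partition_for(customer_id)]
        return store.get(customer_id, {})

    # Usage: tracking records whose attributes differ from item to item.
    put_item("cust-42", "pkg-1", {"status": "in transit", "weight_kg": 2.5})
    put_item("cust-42", "pkg-2", {"status": "delivered", "ground_service": True})
    print(get_items("cust-42"))

A real system would use consistent hashing or a directory service instead of a simple modulo so that adding machines doesn't force most of the data to move, which is the "dynamic" part of Bosworth's second ask.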
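
Bosworth's third ask, fast search scoped to a partition, boils down to keeping an inverted index per partition: a map from term to the set of item ids that contain it. Here's a toy version (again my own sketch with made-up names); a real system would layer ranking, stemming and text extraction from blobs like PDFs and spreadsheets on top of the same idea.

    from collections import defaultdict

    class PartitionIndex:
        def __init__(self):
            self.postings = defaultdict(set)        # term -> {item_id, ...}

        def index_item(self, item_id, text):
            for term in text.lower().split():
                self.postings[term].add(item_id)

        def search(self, query):
            # Return items containing every query term (simple AND semantics).
            sets = [self.postings.get(t, set()) for t in query.lower().split()]
            return set.intersection(*sets) if sets else set()

    # Only my partition's index is consulted, so "show me *my* checks" is cheap.
    idx = PartitionIndex()
    idx.index_item("check-101", "rent payment march landlord")
    idx.index_item("check-102", "electric bill march utility")
    print(idx.search("march bill"))                 # -> {'check-102'}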
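
Joshua Schachter's pruning trick is also easy to sketch: if users only ever look at the first few pages for a tag, you only need to index enough bookmarks to fill those pages. The numbers and names below are made up for illustration; the point is that the per-tag index stays small no matter how many bookmarks pile up.

    from collections import defaultdict

    PAGE_SIZE = 10
    PAGES_TO_INDEX = 3
    CAP = PAGE_SIZE * PAGES_TO_INDEX        # index at most 30 bookmarks per tag

    tag_index = defaultdict(list)           # tag -> newest-first list of item ids

    def add_bookmark(tag, item_id):
        entries = tag_index[tag]
        entries.insert(0, item_id)          # newest first
        del entries[CAP:]                   # prune everything past page 3

    def page(tag, page_number):
        start = (page_number - 1) * PAGE_SIZE
        return tag_index[tag][start:start + PAGE_SIZE]

    for i in range(100):
        add_bookmark("python", "bookmark-%d" % i)
    print(len(tag_index["python"]))         # 30, not 100: the index stays small
    print(page("python", 1))                # the ten most recently added bookmarks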
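
Finally, here's a guess at the shape of a flat-file article store like the one Mark Fletcher describes. This is my own reconstruction, not Bloglines' actual code: compressed article bodies are appended to big flat files, while a much smaller index (a dict here, a small database in practice) remembers where each article lives.

    import zlib

    class ArticleStore:
        def __init__(self, path):
            self.path = path
            self.index = {}                 # guid -> (offset, length); the "smaller database"

        def append(self, guid, body):
            blob = zlib.compress(body.encode("utf-8"))
            with open(self.path, "ab") as f:
                offset = f.tell()           # append mode positions us at end of file
                f.write(blob)
            self.index[guid] = (offset, len(blob))

        def read(self, guid):
            offset, length = self.index[guid]
            with open(self.path, "rb") as f:
                f.seek(offset)
                return zlib.decompress(f.read(length)).decode("utf-8")

    store = ArticleStore("articles.dat")
    store.append("guid-1", "First post! " * 50)
    store.append("guid-2", "Another article about feeds.")
    print(store.read("guid-2"))

Replicating a store like this is presumably mostly a matter of copying the flat files and shipping the small index around, which would explain why it scales well on cheap hardware.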

Interesting things happen when you question everything. 


 

Categories: Web Development
Tracked by:
"Two links on highly scalable databases" (ed costello: comments & links) [Trackback]
http://twwilliams.com/blog/2006/02/21/its-hard-to-have-scalable-reliable-databas... [Pingback]
http://byzhang.wordpress.com/2006/02/24/links-for-2006-02-24/ [Pingback]
http://rohitaggarwal.wordpress.com/2006/02/25/problems-with-sql-databases-for-in... [Pingback]
"Storage is Key" (Dare Obasanjo aka Carnage4Life) [Trackback]

Wednesday, February 22, 2006 2:44:11 PM (GMT Standard Time, UTC+00:00)
A trillion articles in 2.5 years sounds like a typo to me.
Wednesday, February 22, 2006 5:12:56 PM (GMT Standard Time, UTC+00:00)
Great post, Dare. Amazon.com had these problems as well. I have a post recounting some of that history at

http://glinden.blogspot.com/2006/02/early-amazon-splitting-website.html

In terms of Adam Bosworth's post, there are some steps toward a highly available, fast, replicated, distributed database system. One is MySQL's cluster product

http://www.mysql.com/products/database/cluster/

which looks quite nice, though I haven't tried it myself.

Another is the set of tools only available inside of Google, such as the Google File System and Google's BigTable:

http://labs.google.com/papers/gfs.html
http://glinden.blogspot.com/2005/09/googles-bigtable.html
Wednesday, February 22, 2006 9:42:21 PM (GMT Standard Time, UTC+00:00)
More support for the strategy of not spending too much energy on scalability in the beginning. You will most likely revisit your architecture no matter how much design you put in up front.
pwb
Friday, February 24, 2006 2:52:07 AM (GMT Standard Time, UTC+00:00)
Relational databases exist to support ACID transactions. If you don't want/need ACID then you don't need an RDBMS either. MySQL was very fast just because it was/is (depending on which version you choose) merely a SQL syntax parser on top of the underlying file system. The notion that the scientists at Oracle/IBM/M$ don't know how to do indexing is silly. They know. They've read Knuth. They know the math (indexing is a branch of applied math; it's not something you make up as you go along).

Google and Yahoo! searching don't qualify.
robert