Via Joe Gregorio I found a post entitled Transactionless by Martin Fowler. Martin Fowler writes

A couple of years ago I was talking to a couple of friends of mine who were doing some work at eBay. It's always interesting to hear about the techniques people use on high volume sites, but perhaps one of the most interesting tidbits was that eBay does not use database transactions.
The rationale for not using transactions was that they harm performance at the sort of scale that eBay deals with. This effect is exacerbated by the fact that eBay heavily partitions its data into many, many physical databases. As a result using transactions would mean using distributed transactions, which is a common thing to be wary of.

This heavy partitioning, and the database's central role in performance issues, means that eBay doesn't use many other database facilities. Referential integrity and sorting are done in application code. There's hardly any triggers or stored procedures.

My immediate follow-up to the news of transactionless was to ask what the consequences were for the application programmer, in particular the overall feeling about transactionlessness. The reply was that it was odd at first, but ended up not being a big deal - much less of a problem than you might think. You have to pay attention to the order of your commits, getting the more important ones in first. At each commit you have to check that it succeeded and decide what to do if it fails.
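The commit-ordering discipline Fowler describes, committing the most important write first and checking each commit before moving on, might look something like the following. This is a toy sketch with invented names and in-memory dicts standing in for partitioned stores, not eBay's actual code:

```python
# Each write is committed independently (no transaction spanning stores).
# The business-critical record goes first; a later failure degrades
# gracefully instead of losing the important data.

def commit(store, key, value, fail=False):
    """Attempt a single commit to one store; returns True on success."""
    if fail:  # simulate a failed write to a partition
        return False
    store[key] = value
    return True

def place_bid(orders_db, audit_db, bid):
    # 1. Commit the important record first.
    if not commit(orders_db, bid["id"], bid):
        # Nothing has been written yet, so there is nothing to undo.
        return "failed"
    # 2. Less important writes follow; check this commit too and decide
    #    what a failure means (here: keep the bid, note the missing audit).
    if not commit(audit_db, bid["id"], {"event": "bid_placed"}):
        return "placed_without_audit"
    return "placed"

orders, audit = {}, {}
status = place_bid(orders, audit, {"id": 7, "amount": 125})
# status is "placed"; orders holds the bid and audit holds the log entry
```

The point of the ordering is that every prefix of successful commits leaves the system in a state you are willing to live with.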

I suspect this is one of those ideas, like replacing the operations team with the application developers, that CxOs and architects think is great but that the actual developers hate. We follow similar practices in some aspects of the Windows Live platform, and I've heard developers complain that the error recovery you get for free with transactions is left in the hands of application developers. The biggest gripes are always around rolling back complex batch operations. I'm definitely interested in learning more about how eBay makes transactionless development as easy as they claim. I wonder if Dan Pritchett's talk is somewhere online?
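The batch-rollback gripe comes down to doing by hand what a transaction's rollback does for you: recording a compensating (undo) action after each successful step and replaying them in reverse on failure. A minimal sketch of that pattern, with names invented for illustration:

```python
# Hand-rolled rollback for a batch of non-transactional writes.
# Each step is a (do, undo) pair; on failure the undo log is replayed
# in reverse so earlier steps are compensated.

def run_batch(steps):
    """steps: list of (do, undo) callables. Returns True if all succeed."""
    undo_log = []
    for do, undo in steps:
        if not do():
            # Roll back the completed steps in reverse order by hand.
            for compensate in reversed(undo_log):
                compensate()
            return False
        undo_log.append(undo)
    return True

store = {}

def write(key, value):
    """Build a (do, undo) pair for a single keyed write."""
    def do():
        store[key] = value
        return True
    def undo():
        del store[key]
    return do, undo

def failing_step():
    return (lambda: False, lambda: None)

ok = run_batch([write("a", 1), write("b", 2), failing_step()])
# ok is False and the two earlier writes have been undone
```

The catch, and the source of the complaints, is that unlike a database rollback the compensating actions must themselves be written, ordered, and kept correct by the application developer.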

The QCon conference wasn't even on my radar, but if Dan Pritchett's talk is indicative of the kind of content that was presented, then it looks like I missed out. Looking at the list of speakers, it's a conference I wouldn't have minded attending and submitting a paper to. I wonder if there'll be a U.S. version of the conference in the future?


Monday, March 19, 2007 6:51:58 PM (GMT Standard Time, UTC+00:00)
I got to the bit where he said referential integrity was done in application code and thought "this is actually just a long-winded excuse for the fact that their database was designed by cowboys".
Monday, March 19, 2007 7:44:55 PM (GMT Standard Time, UTC+00:00)
Not sure if this is the talk you wanted, but here is a post with a description of and a link to a talk by Dan Pritchett on eBay's architecture from SD Forum:
Monday, March 19, 2007 8:28:56 PM (GMT Standard Time, UTC+00:00)
Stefan Tilkov of InnoQ (one of QCon's sponsors) says "I also managed to interview all of the speakers for InfoQ (except for the one with Paul Downey, which I plan to do tomorrow), and all of the presentations have been videotaped, too — a lot of excellent content to look forward to."


Tuesday, March 20, 2007 4:28:11 AM (GMT Standard Time, UTC+00:00)
Not just eBay; apparently Microsoft doesn't believe in complex transactions either?

And Pat Helland just moved to Amazon.

Tuesday, March 20, 2007 8:12:49 PM (GMT Standard Time, UTC+00:00)
I'm one of the conference organizers for QCon. Dare - aren't you reading it? You would have heard about the event there. :)

Yes, we plan to come to the US eventually, and yes, we filmed all the sessions. They will be posted over the course of the next year, in addition to a video interview specifically with Dan.

Also, the SDForum link provided earlier is an earlier version of Dan Pritchett's talk.
Wednesday, March 21, 2007 5:26:42 AM (GMT Standard Time, UTC+00:00)
When it comes to Microsoft, it all comes down to the team you're on. I've been on two teams now that have largish OLTP databases (1.5-3TB); one used transactions very sparingly, and the other (my current team) uses them quite a bit. It all just depends :)
Thursday, March 22, 2007 4:34:53 AM (GMT Standard Time, UTC+00:00)
When you deal with large volumes of data, either you design very expensive systems (not just in terms of money, mind you) or you design the system to be lean and mean. Heck, some of the largest databases I have had to deal with weren't even regular relational databases.

The only way our current system can meet its latency and throughput requirements is to not use transactions. I also see disbelief, shock, and you-must-surely-be-joking responses from people who have never dealt with very large data volumes when they see our system. It is not that bad when the small portion of your application that needs to perform this way, and its failure and recovery model, are well understood.

Praki Prakash