Recently I wrote a blog post entitled Google GData: A Uniform Web API for All Google Services where I pointed out that Google has standardized on GData (i.e. Google's implementation of the Atom 1.0 syndication format and the Atom Publishing Protocol with some extensions) as the data access protocol for Google's services going forward. In a comment to that post Gregor Rothfuss wondered whether I couldn't influence people at Microsoft to also standardize on GData. The fact is that I've actually tried to do this with different teams on multiple occasions, and each time certain limitations in the Atom Publishing Protocol became quite obvious once we got outside the blog editing scenarios for which the protocol was originally designed. For this reason, we will likely standardize on a different RESTful protocol, which I'll discuss in a later post. However I thought it would be useful to describe the limitations we saw in the Atom Publishing Protocol which made it unsuitable as the data access protocol for a large class of online services.

Overview of the Atom Data Model

The Atom data model consists of collections, entry resources and media resources. Entry resources and media resources are member resources of a collection. There is a handy drawing in section 4.2 of the latest APP draft specification that shows the hierarchy in this data model; it is reproduced below.

                    
                  Member Resources
                         |
         -----------------------------
         |                           |
   Entry Resources            Media Resources
         |
  Media Link Entry

A media resource can have representations in any media type. An entry resource corresponds to an atom:entry element, which means it must have an id, a title, an updated date, one or more authors, and textual content. Below is a minimal atom:entry element taken from the Atom 1.0 specification.


<entry>
    <title>Atom-Powered Robots Run Amok</title>
    <link href="http://example.org/2003/12/13/atom03"/>
    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
    <updated>2003-12-13T18:30:02Z</updated>
    <summary>Some text.</summary>
</entry>

The process of creating and editing resources is covered in section 9 of the current APP draft specification. To add members to a Collection, clients send POST requests to the URI of the Collection. To delete a Member Resource, clients send a DELETE request to its Member URI. To edit a Member Resource, clients send PUT requests to its Member URI.
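
As a rough illustration of those three operations, here is a minimal sketch in Python using the requests library. The collection URI is hypothetical and error handling is omitted, so treat this as an outline of the protocol rather than a working client for any particular server.

import requests

COLLECTION_URI = "http://example.org/myblog/entries"  # hypothetical collection

ATOM_ENTRY = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Atom-Powered Robots Run Amok</title>
  <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
  <updated>2003-12-13T18:30:02Z</updated>
  <summary>Some text.</summary>
</entry>"""

# Create: POST the entry to the Collection URI. A successful response is
# 201 Created with a Location header giving the new Member URI.
created = requests.post(COLLECTION_URI, data=ATOM_ENTRY,
                        headers={"Content-Type": "application/atom+xml"})
member_uri = created.headers["Location"]

# Edit: PUT a complete replacement representation to the Member URI.
requests.put(member_uri, data=ATOM_ENTRY.replace("Amok", "Amok, Again"),
             headers={"Content-Type": "application/atom+xml"})

# Delete: DELETE the Member URI.
requests.delete(member_uri)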

Since using PUT to replace a resource wholesale is obviously fraught with peril, the specification notes two concerns that developers have to pay attention to when updating resources.

To avoid unintentional loss of data when editing Member Entries or Media Link Entries, Atom Protocol clients SHOULD preserve all metadata that has not been intentionally modified, including unknown foreign markup.
...
Implementers are advised to pay attention to cache controls, and to make use of the mechanisms available in HTTP when editing Resources, in particular entity-tags as outlined in [NOTE-detect-lost-update]. Clients are not assured to receive the most recent representations of Collection Members using GET if the server is authorizing intermediaries to cache them.

The [NOTE-detect-lost-update] reference points to Editing the Web: Detecting the Lost Update Problem Using Unreserved Checkout, which not only talks about ETags but also covers conflict resolution strategies for dealing with multiple edits to a Web document. This information is quite relevant to anyone considering implementing the Atom Publishing Protocol or a similar data manipulation protocol.
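
In code, the mechanism that note describes boils down to capturing the entity tag when you GET a resource and echoing it back in an If-Match header when you PUT your changes. Below is a minimal sketch in Python with the requests library; the Member URI is hypothetical, the merge step is a stub, and it assumes a server that actually emits ETags and honors If-Match.

import requests

member_uri = "http://example.org/myblog/entries/1"  # hypothetical Member URI

# Retrieve the current representation and remember its entity tag.
current = requests.get(member_uri)
etag = current.headers["ETag"]
edited_entry = current.text.replace("Amok", "Amok!!!")  # the local edit

# Conditional update: the PUT succeeds only if the resource still has the
# entity tag we saw, i.e. nobody else has modified it in the meantime.
update = requests.put(member_uri, data=edited_entry,
                      headers={"Content-Type": "application/atom+xml",
                               "If-Match": etag})

if update.status_code == 412:  # Precondition Failed: we lost the race
    # Re-fetch the latest representation and merge our edit into it before
    # retrying; the merge logic is application specific.
    latest = requests.get(member_uri)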

With this foundation, we can now talk about the various problems one faces when trying to use the Atom Publishing Protocol with certain types of Web data stores.

Limitations Caused by the Constraints within the Atom Data Model

The following is a list of problems one faces when trying to utilize the Atom Publishing Protocol in areas outside of content publishing for which it was originally designed.

  1. Mismatch with data models that aren't microcontent: The Atom data model fits very well when representing authored content or microcontent on the Web such as blog posts, lists of links, podcasts, online photo albums and calendar events. In each of these cases the requirement that each Atom entry has an id, a title, an updated date, one or more authors and textual content can be met and actually makes a lot of sense. On the other hand, there are other kinds of online data that don't really fit this model.

    Below is an example of the results one could get from invoking the users.getInfo method in the Facebook REST API.

    
    <user>
        <uid>8055</uid>
        <about_me>This field perpetuates the glorification of the ego.  Also, it has a character limit.</about_me>
        <activities>Here: facebook, etc. There: Glee Club, a capella, teaching.</activities>
        <birthday>November 3</birthday>
        <books>The Brothers K, GEB, Ken Wilber, Zen and the Art, Fitzgerald, The Emporer's New Mind, The Wonderful Story of Henry Sugar</books>
        <current_location>
            <city>Palo Alto</city>
            <state>CA</state>
            <country>United States</country>
            <zip>94303</zip>
        </current_location>
        <first_name>Dave</first_name>
        <interests>coffee, computers, the funny, architecture, code breaking, snowboarding, philosophy, soccer, talking to strangers</interests>
        <last_name>Fetterman</last_name>
        <movies>Tommy Boy, Billy Madison, Fight Club, Dirty Work, Meet the Parents, My Blue Heaven, Office Space</movies>
        <music>New Found Glory, Daft Punk, Weezer, The Crystal Method, Rage, the KLF, Green Day, Live, Coldplay, Panic at the Disco, Family Force 5</music>
        <name>Dave Fetterman</name>
        <profile_update_time>1170414620</profile_update_time>
        <relationship_status>In a Relationship</relationship_status>
        <religion/>
        <sex>male</sex>
        <significant_other_id xsi:nil="true"/>
        <status>
            <message>Pirates of the Carribean was an awful movie!!!</message>
        </status>
    </user>

    How exactly would one map this to an Atom entry? For one, most of the elements that constitute an Atom entry don't make much sense when representing a Facebook user. For another, one would have to create a large number of proprietary extension elements to annotate the atom:entry element just to hold all the Facebook-specific fields for the user. It's like trying to fit a square peg into a round hole. If you force it hard enough you can make it fit, but it will look damned ugly.
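
    To make the mismatch concrete, here is a sketch of what such a mapping would involve, using Python's ElementTree and a made-up extension namespace. Note how the mandatory Atom elements end up holding essentially meaningless values while the real data hides in foreign markup.

    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"
    FB = "{http://example.org/ns/facebook}"  # made-up extension namespace

    entry = ET.Element(ATOM + "entry")

    # The mandatory Atom elements must be filled in, even though none of them
    # mean much of anything for a user profile.
    ET.SubElement(entry, ATOM + "id").text = "urn:facebook:user:8055"
    ET.SubElement(entry, ATOM + "title").text = "Dave Fetterman"  # a user has no "title"
    ET.SubElement(entry, ATOM + "updated").text = "2007-02-02T10:30:20Z"
    author = ET.SubElement(entry, ATOM + "author")
    ET.SubElement(author, ATOM + "name").text = "Dave Fetterman"  # his own author?

    # Every actual profile field becomes proprietary foreign markup that a
    # generic Atom client will ignore at best and drop at worst.
    for field, value in [("first_name", "Dave"),
                         ("last_name", "Fetterman"),
                         ("birthday", "November 3"),
                         ("relationship_status", "In a Relationship")]:
        ET.SubElement(entry, FB + field).text = value

    print(ET.tostring(entry, encoding="unicode"))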

    Even after doing that, it is extremely unlikely that an unmodified Atom feed reader or editing client would be able to do anything useful with this Frankenstein atom:entry element. And if you are going to roll your own libraries and clients to deal with this Frankenstein element, then it raises the question of what benefit you are getting from misusing a standardized protocol in this manner.

    I guess we could keep the existing XML format used by the Facebook REST API and treat the user documents as media resources. But in that case we aren't really using the Atom Publishing Protocol; instead we've reinvented WebDAV. Poorly.

  2. Lack of support for granular updates to fields of an item: As mentioned in the previous section, editing an entry requires replacing the old entry with a new one. The expected client interaction with the server is described in section 5.4 of the current APP draft and is excerpted below.

    Retrieving a Resource

    Client                                     Server
      |                                           |
      |  1.) GET to Member URI                    |
      |------------------------------------------>|
      |                                           |
      |  2.) 200 Ok                               |
      |      Member Representation                |
      |<------------------------------------------|
      |                                           |
    1. The client sends a GET request to the URI of a Member Resource to retrieve its representation.
    2. The server responds with the representation of the Member Resource.

    Editing a Resource

    Client                                     Server
      |                                           |
      |  1.) PUT to Member URI                    |
      |      Member Representation                |
      |------------------------------------------>|
      |                                           |
      |  2.) 200 OK                               |
      |<------------------------------------------|
    1. The client sends a PUT request to store a representation of a Member Resource.
    2. If the request is successful, the server responds with a status code of 200.

    Can anyone spot what's wrong with this interaction? The first problem is minor, though it may prove troublesome in certain cases. It is pointed out in a note in the documentation on Updating posts on Google Blogger via GData, which states

    IMPORTANT! To ensure forward compatibility, be sure that when you POST an updated entry you preserve all the XML that was present when you retrieved the entry from Blogger. Otherwise, when we implement new stuff and include <new-awesome-feature> elements in the feed, your client won't return them and your users will miss out! The Google data API client libraries all handle this correctly, so if you're using one of the libraries you're all set.

    Thus each client is responsible for ensuring that it doesn't lose any XML that was in the original atom:entry element it downloaded. The second problem is more serious and should be of concern to anyone who's read Editing the Web: Detecting the Lost Update Problem Using Unreserved Checkout. The problem is that data is lost if the entry changes between the time the client downloads it and the time it PUTs its changes back.

    Even if the client does a HEAD request and compares ETags just before PUTting its changes, there's always the possibility of a race condition in which an update occurs after the HEAD request. After a certain point, it is probably reasonable to just go with "most recent update wins", which is the simplest conflict resolution algorithm in existence. Unfortunately, even this approach fails, because the Atom Publishing Protocol makes client applications responsible for all the content within the atom:entry even if they are only interested in one field.

    Let's go back to the Facebook example above. Having an API makes it quite likely that users will have multiple applications editing their data at once, and sometimes these applications will change the data without direct user intervention. For example, imagine Dave Fetterman has just moved to New York City and is updating his data across various services. He updates his status message in his favorite IM client to "I've moved" and then goes to Facebook to update his current location. However, he has installed a plugin that synchronizes his IM status message with his Facebook status message. So the IM plugin downloads the atom:entry that represents Dave Fetterman, Dave then updates his address on Facebook, and right afterwards the IM plugin uploads his profile information with the old location and the new status message. The IM plugin is now responsible for data loss in a field it doesn't even operate on directly.
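
    That failure mode is easy to reproduce. The sketch below simulates the server's copy of the entry as a plain Python dictionary and models a whole-entry PUT as replacing it outright; the field names are illustrative.

    # The server's copy of the user entry, reduced to two fields.
    server_entry = {"status": "At work", "current_location": "Palo Alto, CA"}

    def get_entry():
        return dict(server_entry)  # GET returns a snapshot

    def put_entry(representation):
        server_entry.clear()  # PUT replaces the whole entry
        server_entry.update(representation)

    # 1. The IM plugin downloads the entry, intending only to sync the status.
    im_plugin_copy = get_entry()

    # 2. Dave updates his location on the Facebook site itself.
    site_copy = get_entry()
    site_copy["current_location"] = "New York, NY"
    put_entry(site_copy)

    # 3. The IM plugin uploads its now-stale copy with the new status message.
    im_plugin_copy["status"] = "I've moved"
    put_entry(im_plugin_copy)

    print(server_entry)
    # {'status': "I've moved", 'current_location': 'Palo Alto, CA'}
    # The location change is silently reverted by a client that never meant to
    # touch that field.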

  3. Poor support for hierarchy: The Atom data model doesn't directly support nesting or hierarchies. You can have a collection of media resources or entry resources, but the entry resources cannot themselves contain entry resources. This means that if you want to represent an item that has children, they must be referenced via a link instead of included inline. This makes sense when you consider the blog syndication and blog editing background of Atom, since it isn't a good idea to include all the comments on a post directly as children of the item in a feed or when editing the post. On the other hand, when you have a direct parent<->child hierarchical relationship, where the child is an addressable resource in its own right, it is cumbersome for clients to always have to make two or more calls to get all the data they need, as the sketch below shows.
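
    Here is the shape of that problem in client code, again in Python with the requests library; the parent URI and the "child" link relation are hypothetical. Because an entry can't contain entries, fetching a parent and its children costs one round trip per child.

    import xml.etree.ElementTree as ET
    import requests

    ATOM = "{http://www.w3.org/2005/Atom}"

    # Fetch the parent entry; its children are only available as links.
    parent = ET.fromstring(requests.get("http://example.org/items/42").text)

    # One extra round trip per child, since entries can't nest inline.
    children = []
    for link in parent.findall(ATOM + "link"):
        if link.get("rel") == "child":  # hypothetical link relation
            children.append(requests.get(link.get("href")).text)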

UPDATE: Bill de hÓra responds to these issues in his post APP on the Web has failed: miserably, utterly, and completely and points out two more problems that developers may encounter while implementing GData/APP.


 

Saturday, 09 June 2007 09:01:31 (GMT Daylight Time, UTC+01:00)
> The second problem is more serious and should be of concern to anyone who's read Editing the Web: Detecting the Lost Update Problem Using Unreserved Checkout. The problem is that there is data loss if the entry has changed between the time the client downloaded it and when it tries to PUT its changes.

I don't understand. An HTTP client is supposed to use 'If-Match' with the ETag it loaded. In case there's been a concurrent modification the server will respond with '412 Precondition Failed' and the client has to fetch the new representation and merge it with its edit. Basic Optimistic Concurrency Control.



> How exactly would one map this to an Atom entry?

I would define an XML media type for a facebook user, put that into the 'type' attribute of the atom:content element and make facebook:user a child of the atom:entry. But I've noticed Google apps tend to put extension elements as siblings of content instead.
Saturday, 09 June 2007 09:03:26 (GMT Daylight Time, UTC+01:00)
> and make facebook:user a child of the atom:entry.

Arg. Killed my point. I meant 'make it a child of atom:content' of course!
Saturday, 09 June 2007 13:58:36 (GMT Daylight Time, UTC+01:00)
Matthias,
You're right that the interaction doesn't require a HEAD before the update. The client can send a PUT request with an If-Match header, then only do the diff/merge if that fails. Of course, I haven't found any Atom servers or other RESTful Web service implementations that support this in the wild, but that doesn't mean it isn't possible.

As for smuggling a Facebook user element in as the XML payload of atom:content: re-read my comment about reinventing WebDAV badly.
Saturday, 09 June 2007 15:38:10 (GMT Daylight Time, UTC+01:00)
Some drive-by commentary...

I think you should broaden the critique beyond GData to all implementations of the Atom publishing protocol. The issues you raise boil down to three of the hard problems that we face in software engineering:
- extensibility
- caching
- concurrency

http://koranteng.blogspot.com/2006/03/rest-elevator-pitch.html#hard

It is very difficult to encode the resolution of these in a general purpose publishing protocol, and I have been looking over the shoulder of the atompub working group to see just how much these things would be addressed.

Extensibility is almost a social problem and I think it is being addressed in the usual web/REST/HTTP/HTML way, namely the combination of "must ignore" style extensibility for unknown elements in transferred media and, as the GData documentation suggests, "must preserve" semantics for foreign markup. These, I'd argue, are best practices, and the frameworks that will be most successful will give help and guidance to application developers. I don't know how much more they can do.

Caching and concurrency, it seems to me, are being addressed in the typical HTTP way, namely you can't get away from dealing with cache control, ETags and the like, and indeed you should explicitly have to deal with them. There is no free lunch. Here also, the frameworks (whether it is GData, Abdera, AOL's or whatever Microsoft comes up with, to name a few) will do their best, and I like the innovation in this space; GData's optimistic concurrency stuff is interesting. Standard REST thinking is that you need visibility to intermediaries, and HTTP provides some but not all of the primitives we need. I think this is great; let the next few years of real world deployment at internet scale proceed and we'll see what shakes out and can be standardized.

The other business of course is synchronization. The most complete example we have at scale is Lotus Notes/Domino and even that hasn't been entirely successful.

Notes externalized the synchronization problem that is intrinsic in the distributed authoring of content, but it provided base infrastructure for dealing with it. Looking over the evolution of replication in Notes over the past 20 years, you'll see incremental improvements in the strategy. It started out by recognizing that there will be replication conflicts and that you'll need some human judgement at some point. It took 10 years before Notes introduced field level replication, which greatly improved granularity and performance, but you, as a designer modeling your Next Great Hypermedia, need to explicitly think about how you structure your design. Again, this is an abstraction you have to deal with in distributed computing.

When Ozzie started looking at Atom/RSS, the first thing that came out of it was the Simple Sharing Extensions. That is a step in the right direction and time will tell whether it is sufficient.

I've seen other attempts that are interesting; Andy Roberts's Delta Web, to take an obvious example, is well worth looking at:

http://andyroberts007.blogspot.com/2006/11/delta-specification.html

The scheduling and calendaring folks have to deal with it all the time, so they'll innovate. Every few months I get asked to review sync proposals of some sort.

With the offline conundrum now in the news again, pace WHATWG, Google Gears and Mozilla's Firefox 3.0 looming, we'll have a large number of developers looking at this problem space. I hope something will get standardized but for now it's a greenfield.

Hmmm, looks like I've written a lot and should probably blog on my turf, but I'm embargoing the blog business for a while.

Anyway, all of the above was just to say that fail is an awfully strong word. We are all wading in the waters.

Cheers.
Sunday, 10 June 2007 19:07:00 (GMT Daylight Time, UTC+01:00)
You're an idiot. Freshen up on conditional requests using If-Match:, and learn how to delegate race-checking to the server: PUT-with-If-Match: and fire your re-GET and merge logic when you get back an HTTP "Precondition Failed".

This is basic lock-free algorithms class, dude. Not a wait-free algorithm, though. Not that I particularly care.
Luis Bruno
Sunday, 10 June 2007 21:23:04 (GMT Daylight Time, UTC+01:00)
It is now very obvious that the best minds in IT are in Google, and definitely not in MS.
ak
Monday, 11 June 2007 00:56:09 (GMT Daylight Time, UTC+01:00)
Were the best minds in IT ever at Microsoft? Business acumen and technical vision rarely mix.
Monday, 11 June 2007 01:20:24 (GMT Daylight Time, UTC+01:00)
ak,
I find it interesting that the recommendations made by the various Atom experts [including Joe Gregorio and Bill de hÓra who are the editors of the Atom Publishing Protocol spec] actually run counter to the design decisions made by Google while designing GData.

I guess "the best minds in IT" are so smart they know how to use the protocol even better than the people who designed it. ;)
Tuesday, 12 June 2007 07:51:14 (GMT Daylight Time, UTC+01:00)
Ahh, the Internet -- where anyone can post a thought-provoking and highly relevant essay and then be attacked ad hominem for any mistakes made in it (or for their employer, or for no particular reason at all).

I miss the good ole days, before the Web, when people really knew how to flame. You young'uns could learn a thing or two from USENET. The attempts in this comment thread and other blogs were pathetic.

It's ok Dare, anytime you want to be properly flamed just let me know and I'll do it right.

You should consider replacing your captcha with a short literacy test. You can't tell from the reactions this post received, but many software engineers can read at the second or even third level of literacy, and consequently appreciate and comprehend your frequent and insightful posts on these subjects. As you well know, the issues in this post are frequent topics of conversation among the people at major corporations who are implementing and designing Web services today.