Early this week, Roy Fielding wrote a post entitled REST APIs must be hypertext-driven in which he criticized the SocialSite REST API (a derivative of the OpenSocial REST API) for violating some of the constraints of the Representational State Transfer architectural style (aka REST). Roy's key criticisms were:

API designers, please note the following rules before calling your creation a REST API:

  • A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
  • A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of-band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC's functional coupling].
  • ..
  • A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

In reading some of the responses to Roy's post on programming.reddit, it seems a number of folks found it hard to glean practical advice from it. I thought it would be useful to go over his post in more depth and with some examples.

The key thing to remember is that REST is about building software that scales to usage on the World Wide Web by being a good participant in the Web ecosystem. Ideally a RESTful API should be designed to be implementable by thousands of websites and consumed by hundreds of applications running on dozens of platforms with zero coupling between the client applications and the Web services. A great example of this is RSS/Atom feeds, which happen to be one of the world's most successful RESTful API stories.

This notion of building software that scales to Web-wide usage is critical to understanding Roy's points above. His first point is that a RESTful API should primarily be concerned with data payloads, not with defining how URI end points handle various HTTP methods. For one, sticking to defining data payloads, which can then be registered as standard MIME types, gives the technology maximum reusability. The specifications for RSS 2.0 (application/rss+xml) and the Atom syndication format (application/atom+xml) primarily focus on defining the data format and how applications should process feeds independent of how they were retrieved. In addition, both formats are aimed at being standard formats that can be utilized by any Web site as opposed to being tied to a particular vendor or Web site, which has aided their adoption.

Unfortunately, few have learned from these lessons, and we have people building RESTful APIs with proprietary data formats that aren't meant to be shared. My current favorite example of this is social graph/contacts APIs, which seem to be getting reinvented every six months. Google has the Contacts Data API, Yahoo! has their Address Book API, Microsoft has the Windows Live Contacts API, Facebook has their friends REST APIs, and so on. Each of these APIs claims to be RESTful in its own way, yet they are helping to fragment the Web instead of helping to grow it. There have been some moves to address this with the OpenSocial-influenced Portable Contacts API, but it too shies away from standard MIME types and instead creates dependencies on URL structures to dictate how the data payloads should be retrieved/processed.
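
To make the first point concrete, here is a minimal sketch of a media-type-driven client in Python (standard library only). Nothing in it depends on how the feed's URL is structured; the client dispatches purely on the Content-Type header and the documented formats:

import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def feed_titles(url):
    # Ask for the standard feed media types; treat the URL itself as opaque
    req = urllib.request.Request(
        url, headers={"Accept": "application/atom+xml, application/rss+xml"})
    with urllib.request.urlopen(req) as resp:
        media_type = resp.headers.get_content_type()
        doc = ET.fromstring(resp.read())
    if media_type == "application/atom+xml":
        # Atom entries live in the Atom namespace
        return [e.findtext(ATOM_NS + "title") for e in doc.iter(ATOM_NS + "entry")]
    if media_type in ("application/rss+xml", "application/xml", "text/xml"):
        # RSS 2.0 items carry no namespace; many servers still label RSS as */xml
        return [i.findtext("title") for i in doc.iter("item")]
    raise ValueError("unsupported media type: " + media_type)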

One bad practice Roy calls out, which is embraced by the Portable Contacts and SocialSite APIs, is requiring a specific URL structure for services that implement the API. Section 6.2 of the current Portable Contacts API spec states the following:

A request using the Base URL alone MUST yield a result, assuming that adequate authorization credentials are provided. In addition, Consumers MAY append additional path information to the Base URL to request more specific information. Service Providers MUST recognize the following additional path information when appended to the Base URL, and MUST return the corresponding data:

  • /@me/@all -- Return all contact info (equivalent to providing no additional path info)
  • /@me/@all/{id} -- Only return contact info for the contact whose id value is equal to the provided {id}, if such a contact exists. In this case, the response format is the same as when requesting all contacts, but any contacts not matching the requested ID MUST be filtered out of the result list by the Service Provider
  • /@me/@self -- Return contact info for the owner of this information, i.e. the user on whose behalf this request is being made. In this case, the response format is the same as when requesting all contacts, but any contacts not matching the requested ID MUST be filtered out of the result list by the Service Provider.

The problem with this approach is that it assumes every implementer will have complete control of their URI space and that clients should have URL structures baked into them. Why this practice is a bad idea is well documented in Joe Gregorio's post No Fishing - or - Why 'robots.txt' and 'favicon.ico' are bad ideas and shouldn't be emulated, where he lists several reasons why hardcoded URLs are a bad idea, including lack of extensibility and poor support for people in hosted environments who may not fully control their URI space. The interesting thing to note is that both the robots.txt and favicon.ico scenarios eventually developed mechanisms for using hyperlinks on the source page (i.e., the noindex <meta> tag and rel="shortcut icon" links) instead of hardcoded URIs, since the hardcoded approach doesn't scale to Web-wide usage.
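
The favicon lesson translates directly into code. Here is a minimal sketch in Python's standard library of a client that discovers a site's icon by following the rel="icon" (or rel="shortcut icon") link the page itself advertises, rather than blindly requesting a hardcoded /favicon.ico:

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class IconLinkFinder(HTMLParser):
    # Records the href of the first <link> whose rel list contains "icon"
    def __init__(self):
        super().__init__()
        self.icon_href = None
    def handle_starttag(self, tag, attrs):
        if tag != "link" or self.icon_href is not None:
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").lower().split()
        if "icon" in rels and attrs.get("href"):
            self.icon_href = attrs["href"]

def discover_icon(page_url):
    with urllib.request.urlopen(page_url) as resp:
        finder = IconLinkFinder()
        finder.feed(resp.read().decode("utf-8", errors="replace"))
    if finder.icon_href is None:
        return None  # the page advertises no icon; there is nothing to guess
    # Resolve relative hrefs against the page, so the server controls its namespace
    return urljoin(page_url, finder.icon_href)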

The latest drafts of the OpenSocial specification have a great example of how a service can use existing technologies such as URI templates to make even complicated URL structures flexible and discoverable, without forcing every client and service to hardcode a specific URL structure. Below is an excerpt from the discovery section of the current OpenSocial REST API spec:

A container declares what collections and features it supports, and provides templates for discovering them, via a simple discovery document. A client starts the discovery process at the container's identifier URI (e.g., example.org). The full flow is available at http://xrds-simple.net/core/1.0/; in a nutshell:

  1. Client GETs {container-url} with Accept: application/xrds+xml
  2. Container responds with either an X-XRDS-Location: header pointing to the discovery document, or the document itself.
  3. If the client received an X-XRDS-Location: header, follow it to get the discovery document.
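
Those three steps map directly onto an HTTP client. A minimal sketch in Python (standard library only), where container_url is whatever bookmark the client starts with:

import urllib.request

def fetch_discovery_document(container_url):
    # Step 1: GET the container URI, asking for the discovery media type
    req = urllib.request.Request(
        container_url, headers={"Accept": "application/xrds+xml"})
    with urllib.request.urlopen(req) as resp:
        xrds_location = resp.headers.get("X-XRDS-Location")
        if xrds_location is None:
            # Step 2: the container returned the discovery document itself
            return resp.read()
    # Step 3: the container pointed elsewhere; follow the header
    with urllib.request.urlopen(xrds_location) as resp:
        return resp.read()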

The discovery document is an XML file in the same format used for OpenID and OAuth discovery, defined at http://xrds-simple.net/core/1.0/:

<XRDS xmlns="xri://$xrds">
  <XRD xmlns:simple="http://xrds-simple.net/core/1.0" xmlns="xri://$XRD*($v*2.0)"
       xmlns:os="http://ns.opensocial.org/2008/opensocial" version="2.0">
    <Type>xri://$xrds*simple</Type>
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/people</Type>
      <os:URI-Template>http://api.example.org/people/{guid}/{selector}{-prefix|/|pid}</os:URI-Template>
    </Service>
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/groups</Type>
      <os:URI-Template>http://api.example.org/groups/{guid}</os:URI-Template>
    </Service>
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/activities</Type>
      <os:URI-Template>http://api.example.org/activities/{guid}/{appid}/{selector}</os:URI-Template>
    </Service>
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/appdata</Type>
      <os:URI-Template>http://api.example.org/appdata/{guid}/{appid}/{selector}</os:URI-Template>
    </Service>
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/messages</Type>
      <os:URI-Template>http://api.example.org/messages/{guid}/{selector}</os:URI-Template>
    </Service>
  </XRD>
</XRDS>
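
With the discovery document in hand, a client looks up the service type it cares about and expands the corresponding template at runtime. The sketch below parses the document above using Python's standard library; expand() is a deliberately naive stand-in for a real URI Templates implementation, and simply drops draft operators such as {-prefix|/|pid} that it does not handle:

import re
import xml.etree.ElementTree as ET

XRD_NS = "{xri://$XRD*($v*2.0)}"
OS_NS = "{http://ns.opensocial.org/2008/opensocial}"

def service_templates(xrds_bytes):
    # Map each <Service> type URI to its URI-Template
    root = ET.fromstring(xrds_bytes)
    templates = {}
    for service in root.iter(XRD_NS + "Service"):
        stype = service.findtext(XRD_NS + "Type")
        template = service.findtext(OS_NS + "URI-Template")
        if stype and template:
            templates[stype] = template
    return templates

def expand(template, **params):
    # Drop operator expressions this sketch does not implement, e.g. {-prefix|/|pid}
    template = re.sub(r"\{-[^}]*\}", "", template)
    # Fill in simple {name} variables from the supplied parameters
    return re.sub(r"\{(\w+)\}", lambda m: params.get(m.group(1), ""), template)

Against the document above, expand(templates["http://ns.opensocial.org/2008/opensocial/groups"], guid="jsmith") would yield http://api.example.org/groups/jsmith (the guid is hypothetical), with the client never having guessed at the server's URL layout.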

This approach makes it possible for a service to expose the OpenSocial end points however it sees fit, without clients having to expect a specific URL structure.

Similarly, links should be used to describe relationships between resources in the various payloads instead of expecting hardcoded URL structures. Again, I'm drawn to the example of RSS & Atom feeds, where link relations are used to define the permalink to a feed item, the link to related media files (e.g., podcasts), links to comments, and so on. An application that wants a feed's podcasts simply examines the <enclosure> element instead of expecting every Web site that supports enclosures to expose a /@rss/{id}/@podcasts URL.
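
In code the difference is trivial but telling: the client reads the URL the feed hands it instead of constructing one. A minimal sketch, assuming an RSS 2.0 document already retrieved as bytes:

import xml.etree.ElementTree as ET

def enclosure_urls(rss_bytes):
    # Collect media URLs by reading each item's <enclosure> element,
    # rather than guessing at a site-specific /@rss/{id}/@podcasts URL
    root = ET.fromstring(rss_bytes)
    return [item.find("enclosure").get("url")
            for item in root.iter("item")
            if item.find("enclosure") is not None]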

Thus it is plain to see that hyperlinks are important both for discovery of service end points and for describing relationships between resources in a loosely coupled way.

Now Playing: Prince - When Doves Cry