Joshua Porter has an excellent post titled Are you building an everyday app? (the LinkedIn problem) where he writes:

In a recent interview, LinkedIn CEO Reid Hoffman describes moving away from day to day to a more strategic role in the company he founded:

“I want to be able to sink my mind around a couple of problems and work through them. For example, many professionals still don’t understand how LinkedIn can be valuable on a daily or weekly basis.”

Another way you could phrase this is: “people don’t use LinkedIn every day…we need to figure out how to change that”.

The fact is that LinkedIn, in its current incarnation, is not an everyday app. An everyday app is one that is used every day (or most days) by its users.
In general, most people think they’re building an everyday app, but they’re not. When the actual use patterns are discovered, most apps will be used every few days or less. Designers have to ask themselves a very hard question: “How often are people really going to use our web application?”. The answer is important…it will even help drive design decisions. Whether or not you have an everyday app affects the entire design of what you’re building, including the screens, notifications, and frequency of the service. For example, only everyday apps really need to use real-time technology to update streams. If you find out that you’re not building an everyday app, you probably don’t need to invest in making it real-time. But…you might invest in a notifications system that can alert users when something very interesting happens.

You don’t have to be an everyday app to be successful. Netflix, for example, is not an everyday app. It’s an every-few-days app. Most people go back every few days to update their queue. There is really no need to go back more often.

Many developers of social software applications on the Web believe they’ve built an everyday app, but they actually haven’t. One thing I’ve learned in almost five years of working on social software applications at Microsoft is that simply having the features of an everyday app doesn’t translate to people using the application every day. The best way to think about this is that no application starts off as an everyday app. Very rarely does an application show up that is so amazing that people start using it every day right off the bat. Instead there is a transition: either users move from occasional to frequent use while the application stays static, or the application itself evolves to cater to a more engaged user base than it had in previous years.

An example of the former is Twitter; the site hasn’t really changed much since I started using it about a year and a half ago. However, I didn’t become an everyday user until the right set of factors came together: following enough people, adding the Twitter app to Facebook & Windows Live to update my friends on those services, and getting Twitter applications for my desktop & phone. Twitter’s main problem is that not every user eventually hits this sweet spot, which is why you read articles like Twitter Quitters Post Roadblock to Long-Term Growth, which points out that the retention rate for new users over a one-month period hovers between 30% and 40%. This need to make the application instantly useful to new users is what prompts features like the Suggested Users List, whose purpose is to give them content worth coming back to every day instead of the “Trying out Twitter for the first time” style posts they probably see from most of their friends who are also kicking the tires on the service they heard about on Oprah.

Having features that are useful every day, like a constantly updating activity stream, doesn’t mean people will use the site every day. For the users who cross the hump, it does. The challenge is how to get users to cross that hump.

A site that has done a good job of motivating its users to check it out on a daily basis, and of adjusting its features as its user base has become more engaged, is Facebook. One of Facebook’s most annoying features for a long time was that notifications on new messages in the service didn’t actually contain the message. I suspect the purpose of doing this was to drive users back to the site so that they would then catch up on all they’d missed, such as invitations and content in the news feed, after they were done answering the message. Although this feature is annoying, it was decidedly effective, judging from anecdotal feedback from various people I talked to at the time. After a certain point, Facebook’s user engagement grew to the point where sending messages without the content to drive users back to the site wasn’t worth it relative to the decreased customer satisfaction.

Another great example from Facebook is the transition from the old news feed to the new stream. In March of 2009, the news feed was transformed into a real-time stream, and almost two months later the stream began updating live without the user having to refresh the page. The previous functionality of the news feed was relegated to an alternate highlights stream as shown below 

From reading the blog posts about the changes, the switch from a highlights-centric news feed to a real-time stream addresses the fact that the highlights-based feed is stale and doesn’t provide enough value for users who’ve gone beyond being everyday users to being every-hour users. Not only do over 100 million of its users log in every day, but with 90 million users generating 90 billion page views in the month of March 2009, the average Facebook user is viewing more than one page per hour.

And when you think about it, the introduction of the original news feed in 2006 was a successful attempt to go from being an occasionally updated rolodex for a large chunk of their users to being a social utility for keeping up with what’s going on in the lives of family, friends and coworkers. The switch to a real-time stream is how Facebook is addressing the fact that it has slowly become an every-hour app instead of just an everyday app for its users.

A number of services I use online could learn from how Facebook has evolved its experience over time to increase the engagement of its user base, making sometimes small and sometimes huge changes to the user experience to encourage people to make the service a regular part of their lives.


Categories: Social Software

In my previous post I talked about adding support for reading and commenting on your Facebook news feed in RSS Bandit. This functionality is made possible by the recently announced Facebook Open Stream API. As I worked through the code in my few free moments, I was alternately impressed and frustrated by the Open Stream API. On the one hand, the functionality the API provides is greatly enabling and has already unleashed a bounty of innovation, as evidenced by the growing number of applications for interacting with your Facebook news feed on the desktop (e.g. Seesmic, Tweetdeck, bdule, etc). On the other hand, there are a few quirks of the API that make the web developer in me cringe and the desktop developer in me want to beg for mercy.

Below are my opinions on the Open Stream API. I’m sharing them so that other developers who plan to use the API can avoid some of the pitfalls I hit, and to give feedback to my peers at Facebook and elsewhere on best practices for providing activity stream APIs. 

Good: Lots of options for getting at the data 

Most API calls at Facebook have two entry points. You can either call a “REST-like” URL endpoint via HTTP GET or perform a SQL-like query via FQL on a generic endpoint. For interacting with the Facebook news feed, you have stream.get and stream.getComments as the REST-like methods for accessing the news feed and the comment thread for a particular feed item respectively.  With FQL, you can perform queries against the stream and comment FQL tables to get the same results.

Both mechanisms give you the option of getting back the data as XML or JSON. Below are examples of what HTTP requests to retrieve your news feed look like using both approaches 

stream.get REQUEST:{0}&session_key={1}&api_key={2}&call_id={3}&sig={4}

FQL REQUEST:{0}&session_key={1}&api_key={2}&sig={3}&query={4}
select post_id, source_id, created_time, actor_id, target_id, app_id, message, attachment, comments, likes, permalink, attribution, type from stream where filter_key in (select filter_key from stream_filter where uid = {0} and type = 'newsfeed') and created_time >= {1} order by created_time desc limit 50
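The {0}-style placeholders in the request templates above are String.Format-style slots that get filled in at request time. Here's a minimal sketch in Python of filling the FQL template; the field list is trimmed and the values are illustrative, not what RSS Bandit actually passes:

```python
import time

# Trimmed version of the FQL template shown above; {0} is the user id
# and {1} is the oldest post timestamp we care about.
FQL_TEMPLATE = (
    "select post_id, source_id, created_time, actor_id, message, "
    "attachment, comments, likes, permalink from stream "
    "where filter_key in (select filter_key from stream_filter "
    "where uid = {0} and type = 'newsfeed') "
    "and created_time >= {1} order by created_time desc limit 50"
)

def build_fql_query(uid, since_epoch):
    """Fill the placeholders the same way String.Format would."""
    return FQL_TEMPLATE.format(uid, int(since_epoch))

# e.g. everything from the last 24 hours for a hypothetical user id
query = build_fql_query(12345, time.time() - 86400)
```

The resulting string is what goes in the query= parameter of the FQL request.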

I ended up using stream.get over FQL because the only benefit I see to using FQL is being able to filter some of the result fields or combine data from multiple tables, neither of which I needed in this instance. I chose XML as the output format over JSON because I transform the results of the Facebook API request into an Atom feed and then hand it down to the regular Atom feed parsing code in RSS Bandit. It was much easier for me to handle this transformation using XSLT over XML results as opposed to procedural code over a JSON result set.

There’s also a third mechanism for getting news feed data out of Facebook that would have been perfect for my needs. You can access the news feed using the nascent activity streams standard, which returns the data as an Atom feed with certain extensions specific to social network activity streams. I couldn’t get this to work, probably due to user error on my part (more on this later), but if I had, the structure of the GET request would have been along the lines of

Activity Streams REQUEST:{0}&app_id={1}&session_key={2}&sig={3}&v=0.7&read&updated_time={4}

If not for the initial problems I had figuring out which parameters to pass to APIs and the concern about building on an API that isn’t yet at version 1.0, the Activity Stream API would have been perfect for my needs.

Kudos to the Facebook team for providing such a rich and varied set of options for getting the news feed out of their system. There’s something for every temperament.

Bad: Misuse of HTTP status codes

The Facebook API documentation describes the platform as “REST-like” and not RESTful. They sure weren’t kidding. For the most part, when I’ve discussed the problems with APIs that aren’t completely RESTful the negatives to such an approach have seemed aesthetic in nature as opposed to being practical problems. Below are some of the practical problems I faced because the Facebook APIs do not use HTTP status codes in a manner consistent with the rest of the Web.

In HTTP, there are existing error codes which clients can interpret in a consistent manner and provide consistent feedback to users when they occur. Now consider this excerpt from the documentation on using Facebook’s activity stream API

Response Codes

Like most HTTP responses, Facebook Activity Streams responses include a response header, which always includes a traditional response code and a short response message. The supported response codes include:

  • 200 Code provided whenever the Facebook servers were able to accommodate the request and provide a response.
  • 304 Code provided whenever the request header included If-Modified-Since and no new posts have been generated since the specified time.
    Note: Code 304 will never be returned if If-Modified-Since isn't included in the request header.
  • 401 Code provided whenever the URL omits one or more of the required parameters.
  • 403 Code provided whenever the URL is syntactically valid, but the user hasn't granted the required extended permission.
  • 404 Code provided whenever the URL is syntactically valid, but the signature is incorrect, or the session key is invalid.

You might notice that an HTTP 401 is used to indicate that the request is improperly formed. However, let’s see what RFC 2616 has to say about the 401 status code, as well as how to communicate badly formed arguments:

400 Bad Request

The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.

401 Unauthorized

The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity might include relevant diagnostic information. HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication"

According to the HTTP specification excerpted above, the status code to return on missing parameters should be 400, not 401. The practical problem with returning HTTP 401 in this case is that applications like RSS Bandit may have code paths that prompt the user to check or re-enter their credentials because an authentication error has occurred. We now have to special-case getting a 401 from Facebook’s servers versus any other server on the Web. Thankfully, this error should be limited to development time unless Facebook changes their API in a backwards incompatible manner by requiring new parameters to their methods.
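The special-casing this forces on a client can be sketched as follows; the host check and handler names are illustrative assumptions, not RSS Bandit's actual code:

```python
def classify_error(status, host):
    """Map an HTTP status code to an error-handling action.

    A 401 normally means "re-prompt for credentials" per RFC 2616,
    but Facebook uses 401 for missing request parameters, which is a
    developer bug, so re-prompting the user would be pointless.
    """
    if status == 401:
        if "facebook.com" in host:
            return "bad_request"      # Facebook quirk: malformed request
        return "reauthenticate"       # normal web semantics
    if status == 400:
        return "bad_request"
    if status == 403:
        return "permission_denied"
    return "other"
```

Every new non-RESTful endpoint a client supports risks adding another branch like this to what should be uniform dispatch code.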

A bigger problem is that Facebook returns HTTP 200, which traditionally means success, in regularly occurring error conditions. Specifically, when a user’s session key expires, a successful response is sent containing an error document. In RSS Bandit, we already have a processing pipeline that hands off successful responses containing an XML document to the RSS/Atom parsing layer. With Facebook’s behavior I had two choices:

  1. Modify the RSS/Atom parsing layer to understand Facebook error documents and then kick an error back up to the user interface asking the user to re-enter their credentials. This was particularly hacky because that layer doesn’t really have a connection to the main UI thread, nor should it.

  2. Pre-process the XML document before handing it to the RSS/Atom parsing layer. This implies an intermediate step in between dispatching on the response status and actually processing the document.

Option #2 proved more palatable since there was already an intermediate step needed to transform the results of stream.get to an Atom feed. If I’d used the Activity Streams API as was my initial plan then the decision may have been harder to make.

Good: Well thought out model for authorizing access to user data

A number of application platforms work by giving an application all-or-nothing access to the user’s data. My favorite example of this bad practice is Twitter, which for a long time made the situation even worse by requiring applications to collect the person’s username and password. The controversy around Twply earlier this year shows exactly why giving applications more access to a user’s data than they need is bad. Twply only needed access to a user’s @replies, but given that there was no way to grant it access to just that aspect of a user’s Twitter data, it also got the ability to post tweets on their behalf and did so in a spammy manner.

The Facebook API has a notion of extended permissions which are access rights that require special opt-in from the user given that the data or functionality is sensitive and can be abused. The current set of extended permissions in the Facebook API is provided below

  • publish_stream: Lets your application or site post content, comments, and likes to a user's profile and in the streams of the user's friends without prompting the user. This permission is a superset of the status_update, photo_upload, video_upload, create_note, and share_item extended permissions, so if you haven't prompted users for those permissions yet, you need only prompt them for publish_stream.
    Note: At this time, while the Open Stream API is in beta, the only Facebook users that can grant your application the publish_stream permission are the developers of your application.
  • read_stream: Lets your application or site access a user's stream and display it. This includes all of the posts in a user's stream. You need an active session with the user to get this data.
  • email: This permission allows an application to send email to its user. This permission can be obtained only through the fb:prompt-permission tag or the promptpermission attribute. When the user accepts, you can send him/her an email via notifications.sendEmail or directly to the proxied_email FQL field.
  • offline_access: This permission grants an application access to user data when the user is offline or doesn't have an active session. This permission can be obtained only through the fb:prompt-permission tag or the promptpermission attribute.
  • create_event: This permission allows an app to create and modify events for a user via the events.create, events.edit and events.cancel methods.
  • rsvp_event: This permission allows an app to RSVP to an event on behalf of a user via the method.
  • sms: This permission allows a mobile application to send messages to the user and respond to messages from the user via text message.
  • status_update: This permission grants your application the ability to update a user's or Facebook Page's status with the status.set or users.setStatus method.
    Note: You should prompt users for the publish_stream permission instead, since it includes the ability to update a user's status.
  • photo_upload: This permission relaxes requirements on the photos.upload and photos.addTag methods. If the user grants this permission, photos uploaded by the application will bypass the pending state and the user will not have to manually approve the photos each time.
    Note: You should prompt users for the publish_stream permission instead, since it includes the ability to upload a photo.
  • video_upload: This permission allows an application to provide the mechanism for a user to upload videos to their profile.
    Note: You should prompt users for the publish_stream permission instead, since it includes the ability to upload a video.
  • create_note: This permission allows an application to provide the mechanism for a user to write, edit, and delete notes on their profile.
    Note: You should prompt users for the publish_stream permission instead, since it includes the ability to let a user write notes.
  • share_item: This permission allows an application to provide the mechanism for a user to post links to their profile.
    Note: You should prompt users for the publish_stream permission instead, since it includes the ability to let a user share links.



Bad: Too many prompts for desktop applications

Although I’ve praised the extended permissions model, it currently leads to a cumbersome experience for desktop applications. Installing Facebook desktop applications like bdule or Facebook for Adobe Air requires running through three separate permissions screens. The user has to log in, then grant the read_stream extended permission, followed by the publish_stream extended permission. Granted, the latter two only need to be done once, but they still negatively affect the out-of-the-box experience in my opinion.

In RSS Bandit, I’ve attempted to reduce this by delaying the prompt for the publish_stream permission until the first time a user tries to comment on a news feed item from within the application. Streamlining this experience will be a boon for application developers who want the entire experience to be smooth and painless. The documentation on the Open Stream API page states that there are options for streamlining these requests, but they only apply to Web applications, not desktop apps. Sad.

Bad: Plethora of application identifiers and authentication requirements is a stumbling block for beginners

The official documentation on the Facebook API doesn’t do a good job of connecting all the dots when it comes to understanding how to make calls to the service. For example, when you read the documentation for stream.get it is not obvious from that page what endpoint to make requests to OR that every Facebook API call has a set of required parameters beyond the ones listed on that page. In fact, I was stumped for about a week trying to figure out the magical incantations to get the right set of parameters for API calls and the right data to put in them until I stumbled on a Facebook forum post about creating the 'sig' parameter, which solved my problems. I believe I once saw this information in the Facebook API documents but after ten minutes of searching just now I can’t seem to find it, so perhaps it was in my imagination.
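For the record, the scheme that forum post described: the sig is computed over every other parameter in the request. A sketch of my understanding of it, with made-up key and secret values; treat the details as an assumption and check them against Facebook's current documentation:

```python
import hashlib

def facebook_sig(params, secret):
    """Compute the request signature for the old-style REST API.

    Sort the parameters by name, concatenate them as key=value pairs
    with no separators, append the application secret, and take the
    MD5 hex digest of the whole string.
    """
    pieces = ["%s=%s" % (k, params[k]) for k in sorted(params)]
    return hashlib.md5(("".join(pieces) + secret).encode("utf-8")).hexdigest()

# Illustrative call; "KEY" and "SECRET" stand in for real credentials.
sig = facebook_sig(
    {"method": "stream.get", "api_key": "KEY", "call_id": "1", "v": "1.0"},
    "SECRET")
```

Getting the sorting or concatenation even slightly wrong yields the dreaded "incorrect signature" error with no further hints, which is exactly why this tripped me up for so long.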

Part of the problem is the varied number of identifiers that you have to keep straight as an application developer including your

  • application ID
  • API key
  • application secret
  • session key
  • client secret

The fact that various APIs take different combinations of the above led to more than one confusing moment for me. Eventually I figured it out, but I felt like I was being hazed as I was going through the process.


Categories: Platforms

So I’ve spent the weekend working my way through Facebook’s Open Stream API and have made a ton of progress in adding the option to read your Facebook news feed from RSS Bandit. In my previous post I showed the initial screenshots where you log in to Facebook via RSS Bandit to begin the authentication process to retrieve your news feed. Below are a few more screenshots showing how much progress has been made

Requesting extended permission to read the stream

Fig 1: This is the screen where you give RSS Bandit permission to access your news feed. Note that this is different from the screen where you login to Facebook via RSS Bandit. You should only see this screen once but will get the login screen quite often unless you select the option to stay signed into Facebook via RSS Bandit.

The Facebook news feed in RSS Bandit

Fig 2: This is what your Facebook news feed looks like in RSS Bandit with the default stylesheet for Facebook applied. As you can see from the screenshot, I’ve attempted to replicate the look of the news feed as best as I can in RSS Bandit. This includes options below each item to comment or see who has liked an item.

Viewing news feed comments in RSS Bandit

Fig 3: There is also support for viewing the comments on a Facebook item inline within the Feed Details list view. Although this is different from how Facebook does it, I felt this was needed to make it consistent with the other inline comment viewing experiences within RSS Bandit.

There’s still a bunch of cleanup to do, such as fixing up the look of comments when you click on them and providing a way to post comments on your friends’ Facebook items. This is probably another day or so of work, which I’ll do next weekend. After that it is back to fixing bugs and working with Torsten on our idea from over two years ago on how to add an Office 2007 style ribbon to the application.

Below are pictures of the initial prototypes which Torsten has dusted off and will start working on again

I’ll follow up this post with some of my thoughts on writing a desktop application that targets Facebook’s Open Stream API. The process was definitely more painful than I expected but the results are worth it which says a lot about the functionality provided by the API.


Categories: RSS Bandit

Farhad Manjoo has an article on Slate entitled Kill your RSS reader which captures a growing sentiment I’ve had for a while and ranted about during a recent panel at SXSW. Below are a few key excerpts from Farhad’s article that resonate strongly with me

In theory, the RSS reader is a great idea. Many years ago, as blogs became an ever-larger part of my news diet, I got addicted to Bloglines, one of the first popular RSS programs. I used to read a dozen different news sites every day, going to each site every so often to check whether something fresh had been posted. With Bloglines, I just had to list the sites I loved and it would do the visiting for me. This was fantastic—instead of scouring the Web for interesting stories, everything came to me!
But RSS started to bring me down. You know that sinking feeling you get when you open your e-mail and discover hundreds of messages you need to respond to—that realization that e-mail has become another merciless chore in your day? That's how I began to feel about my reader. RSS readers encourage you to oversubscribe to news. Every time you encounter an interesting new blog post, you've got an incentive to sign up to all the posts from that blog—after all, you don't want to miss anything. Eventually you find yourself subscribed to hundreds of blogs, many of which, you later notice, are completely useless. It's like having an inbox stuffed with e-mail from overactive listservs you no longer care to read.

It's true that many RSS readers have great tools by which to organize your feeds, and folks more capable than I am have probably hit on ways to categorize their blogs in a way that makes it easy to get through them. But that was just my problem—I began to resent that I had to think about organizing my reader.

This mirrors my experience and that of many of my friends who used to be enthusiastic users of RSS readers. Today I primarily find out what’s going on in blogs using a combination of Twitter, Techmeme and Planet Intertwingly. The interesting thing is that I’m already subscribed to about half of the blogs that end up getting linked to in these sources on a regular basis, yet I tend to avoid firing up my RSS reader.

The problem is that the RSS readers I use regularly, Google Reader and RSS Bandit, take their inspiration from email clients, which is the wrong model for consuming casual content like blogs. Whenever I fire up an email application like Outlook or Hotmail it presents me with a list of tasks I must complete in the form of messages that need responses, work items, meeting invitations, spam that needs deleting, notifications related to commercial/financial transactions that I need to be aware of, and so on. Reading email is a chore where you are constantly taunted by the BOLD unread messages indicator silently nagging you about the stuff you haven’t done yet.

Given that a significant percentage of the time the stuff in my email inbox is messages that were sent directly to me and need some form of response or acknowledgment, this model is somewhat sound, although as many have pointed out there is a lot of room for improvement.

When it comes to blogs and other casual content, this model breaks down. I really don’t need a constant nagging reminder that I haven’t read the half dozen reposts of the same tech news stories about Google, Twitter and Facebook after I’ve seen the first one. Furthermore, if I haven’t fired up my reader in a while then I don’t care to be nagged about all the stuff I missed since they are just blogs so it is OK if I never read them. This opinion isn’t new, Dave Winer has been evangelizing “River of News” style aggregators for several years and given the success of this model for social networking sites like Facebook and microblogging sites like Twitter, it’s clear that Dave was onto something.

Looking back at the time I’ve spent working on RSS Bandit, I realize there are a couple of features I added to attempt to graft the river of news model on top of an email-based model for reading feeds. These features include:

  • the ability to mark all items as read after navigating away from a feed. This allows you to skim the interesting headlines then not have to deal with the “guilt” of not reading the rest of the items in the feed.
  • a reading pane inspired by Google Reader where unread items are presented in a single flow and marked as read as you scroll past each item.
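The second feature can be sketched as a toy model: items in a single flow are marked read once the viewport has scrolled past them. The data model below is an invented illustration, not RSS Bandit's actual code:

```python
class RiverPane:
    """Toy model of a scroll-to-mark-read reading pane."""

    def __init__(self, items):
        # items: list of (item_id, pixel_offset) in display order
        self.items = items
        self.read = set()

    def on_scroll(self, viewport_top):
        """Mark everything above the current viewport top as read."""
        for item_id, offset in self.items:
            if offset < viewport_top:
                self.read.add(item_id)

pane = RiverPane([("a", 0), ("b", 300), ("c", 600)])
pane.on_scroll(350)   # items "a" and "b" have scrolled past
```

The point of the design is that reading happens as a side effect of skimming; there is no unread count to feel guilty about.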

Looking back now, it seems to me that the way we think of RSS readers needs to fundamentally change. Presenting information as a news feed where the user isn’t pressured to read every item or feel like a failure is one way to move the needle on the user experience here. What I wonder is whether it isn’t already too late for this category of applications as services like Twitter & Facebook take over as how people keep up to date with what’s going on with the people and content they care about.


This blog isn’t the only place where I’ve been falling behind given my home and work responsibilities. I’ve also not been doing a great job on supporting RSS Bandit. Below is a repost of my first post in months on the official RSS Bandit website.

I’d like to apologize about the lack of updates on this site in recent months. Both Torsten and I have had the combination of work and new additions to the family at home take up a lot of our free time. I’m slowly beginning to get some time back as my son has started growing and things get less hectic at work. This weekend I started hacking RSS Bandit code again and feel it’s a good time to open up communication channels about the next release, codenamed Colossus, and its planned features.

Right now, I’m working on integrating support for reading your news feed from Facebook using their recently announced Open Stream API. You will be able to select Facebook as a choice from the “Synchronize Feeds” menu

Once that choice is made, you have to log in to Facebook.

For the best experience, you should check the option to stay logged in to RSS Bandit so that we don’t have to prompt you for your Facebook username each time you run the application.

In addition, we will also be looking at bug fixes for a number of the crashing bugs that have been reported which have affected the stability of the application. We know there is at least one problem caused by the inability to render favicons in the tree view correctly which causes the application to crash. If you suspect you are hitting that problem, a temporary workaround is to disable favicons in the tree view by going to the Tools->Options menu and unchecking the option to use favicons as feed icons from the Display tab.

If you have any issues, thoughts or requests related to future versions of RSS Bandit please leave a comment in response to this post.

PS: Thanks to a tip from Mike Vernal I’ve already gotten my first improvement on this feature. The Facebook login screen will look more streamlined when this is finally released, instead of looking like it barely fits in the dialog box as it does above.


Categories: RSS Bandit

While hacking on a side project today, I was trying to find a way to describe the difference between Facebook & Twitter, given how my wife uses both services, and came up with the following

TWITTER: Get up to the minute reports on what celebrities, who have no idea who you are, had for breakfast

FACEBOOK: Get up to the minute reports on what people you haven’t talked to in almost 20 years are having for lunch.


Categories: Social Software

I’ve been busy with work and spending time with my son so I haven’t been as diligent as I should be with the blog. Until my time management skills get better, here are some thoughts from a guest post I wrote recently for the Live Services blog.

Dare Obasanjo here, from the Live Services Program Management team. I'd like to talk a bit about the work we are doing to increase interoperability across the "Social Web."

The term The Social Web has been increasingly used to describe the rise of the Web as a way for people to interact, communicate and share with each other using the World Wide Web. Experiences that were once solitary, such as reading news or browsing one's photo albums, have now been made more social by sites such as Digg and Flickr. With so many Web sites offering social functionality, it has become increasingly important for people not only to be able to connect and share with their friends on a single Web site but also to take these relationships and activities with them wherever they go on the Web.

With the recent update to Windows Live, we are continuing with the vision of enabling our 500 million customers to share and connect with the people they care about regardless of what services they use. Our customers can now invite their contacts from MySpace (the largest U.S. social networking site) and Hi5 to join them on Windows Live in a safe manner without having to resort to using the password anti-pattern. These sites join Facebook and LinkedIn as social networks from which people can import their social graph or friend list into Windows Live.


In addition to interoperating with social networks to bridge relationships across the Web, we are always working on enabling customers to share the content they find interesting or activities they are participating in from all over the Web with their friends who use Windows Live services like Hotmail and Messenger. Customers of Windows Live can now add activities from over thirty different online services to their Windows Live profile including social networking sites like Facebook, photo sharing sites like Smugmug & Photobucket, social music sites like Pandora, social bookmarking sites like Digg & Stumbleupon and much more.

We are also happy to announce today that in the coming months, MySpace customers will be able to share activities and updates from MySpace with their Windows Live network.

Below is a screenshot of some of the updates you might find on my profile on Windows Live 


These recent announcements bring us one step closer to a Social Web where interoperability is the norm instead of the exception. One of the most exciting things about our recent release is how much of the behind-the-scenes integration is done using community-driven technologies such as the Atom syndication format, Atom Activity Extensions, OAuth, and Portable Contacts. These community-driven technologies are helping to ensure that the Social Web is a web of interconnected and interoperable web sites, not a set of competing walled gardens desperately clutching to customer data in an attempt to invent Lock-In 2.0.

As we look towards the future, I believe that the aforementioned standards around contact exchange, social activity streams and authorization are just the first steps. When we look at all the capabilities across the Web landscape it is clear that there are scenarios that are still completely broken due to lack of interoperability across various social websites. You can expect more from Windows Live when it comes to interoperability and the Social Web.

Just watch this space.

Now Playing: Eminem - We Made You


Categories: Social Software | Windows Live

Joe Gregorio, one of the editors of RFC 5023: The Atom Publishing Protocol, has declared it a failure in his blog post titled The Atom Publishing Protocol is a failure where he writes

The Atom Publishing Protocol is a failure. Now that I've met my blogging-hyperbole-quotient for the day let's talk about standards, protocols, and technology… AtomPub isn't a failure, but it hasn't seen the level of adoption I had hoped to see at this point in its life.

Thick clients, RIAs, were supposed to be a much larger component of your online life. The cliche at the time was, "you can't run Word in a browser". Well, we know how well that's held up. I expect a similar lifetime for today's equivalent cliche, "you can't do photo editing in a browser". The reality is that more and more functionality is moving into the browser and that takes away one of the driving forces for an editing protocol.

Another motivation was the "Editing on the airplane" scenario. The idea was that you wouldn't always be online and when you were offline you couldn't use your browser. The part of this cliche that wasn't put down by Virgin Atlantic and Edge cards was finished off by Gears and DVCS's.

The last motivation was for a common interchange format. The idea was that with a common format you could build up libraries and make it easy to move information around. The 'problem' in this case is that a better format came along in the interim: JSON. JSON, born of Javascript, born of the browser, is the perfect 'data' interchange format, and here I am distinguishing between 'data' interchange and 'document' interchange. If all you want to do is get data from point A to B then JSON is a much easier format to generate and consume as it maps directly into data structures, as opposed to a document oriented format like Atom, which has to be mapped manually into data structures and that mapping will be different from library to library.

The Atom effort rose up around the set of scenarios related to blogging based applications at the turn of the decade; RSS readers and blog editing tools. The Atom syndication format was supposed to be a boon for RSS readers while the Atom Publishing Protocol was intended to make things better for blog editing tools. There was also the expectation that the format and protocol were general enough that they would standardize all microcontent syndication and publishing scenarios.

The Atom syndication format has been as successful or perhaps even more successful than originally intended because its original scenarios are still fairly relevant on today's Web. Reading blogs using feed readers like Google Reader, Outlook and RSS Bandit is still just as relevant today as it was six or seven years ago. Second, interesting new ways to consume feeds have sprung up in the form of social aggregation via sites such as FriendFeed. Also, since the Atom format is actually a generic format for syndicating microcontent, it has proved useful as new classes of microcontent have shown up on the Web such as streams of status updates and social network news feeds. Thanks to the Atom syndication format's extensibility it is being applied to these new scenarios in effective ways via community efforts such as OpenSocial.

On the other hand, Joe is right that the Atom Publishing Protocol hasn't fared as well with the times. Today, editing a blog post via a Web-based blog editing tool like the edit box on a site like Blogger is a significantly richer experience than it was at the turn of the century. The addition of features such as automatically saving draft posts has also worked towards obviating some of the claimed benefits of desktop applications for editing blogs. For example, even though I use Windows Live Writer to edit my blog, I haven't been able to convince my wife to switch to using it for editing her blog because she finds the Web "edit box" experience to be good enough. The double whammy comes from the fact that although new forms of microcontent have shown up which do encourage the existence of desktop tools (e.g. there are almost a hundred desktop Twitter apps and the list of Facebook desktop applications is growing rapidly), the services which provide these content types have shunned AtomPub and embraced JSON as the way to expose APIs for rich clients to interact with their content. The primary reason for this is that JSON works well as a protocol for both browser-based client apps and desktop apps, since it is more compatible with object oriented programming models and the browser security model than an XML-based, document-centric data format.

In my opinion, the growth in popularity of object-centric JSON over document-centric XML as the way to expose APIs on the Web has been the real stake in the heart for the Atom Publishing Protocol.
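The asymmetry Joe describes is easy to see in code. Below is a minimal sketch contrasting the two styles using Python's standard library; the sample documents are illustrative, not taken from any real feed.

```python
import json
import xml.etree.ElementTree as ET

# The same entry expressed as JSON and as Atom-style XML.
json_doc = '{"title": "Hello", "author": {"name": "Dare"}}'
xml_doc = "<entry><title>Hello</title><author><name>Dare</name></author></entry>"

# JSON maps directly onto native data structures in one call.
entry = json.loads(json_doc)
title = entry["title"]
author = entry["author"]["name"]

# The XML document must be walked and mapped by hand, and that
# mapping logic differs from library to library.
root = ET.fromstring(xml_doc)
title_x = root.findtext("title")
author_x = root.findtext("author/name")
```

The JSON consumer gets ordinary dicts and strings for free, while the XML consumer has to decide for itself how elements become data structures, which is exactly the "data vs. document interchange" distinction in the quoted post.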


  • JSON vs. XML: Browser Programming Models – Dare Obasanjo on why JSON is more attractive than XML for Web-based mashups due to presenting a friendlier programming model for the browser. 

  • JSON vs. XML: Browser Security Model – Dare Obasanjo on why JSON is more attractive than XML for Web-based mashups due to circumventing some of the browser security constraints placed on XML.

Now Playing: Lady Gaga - Poker Face


April 13, 2009
@ 12:14 AM

Now Playing: Jay-Z - A Dream


Categories: Personal

Todd Hoff over on the high scalability blog has an interesting post along the vein of my favorite mantra of Disk is the new tape titled Are Cloud Based Memory Architectures the Next Big Thing? where he writes

Why might cloud based memory architectures be the next big thing? For now we'll just address the memory based architecture part of the question, the cloud component is covered a little later.

Behold the power of keeping data in memory:

Google query results are now served in under an astonishingly fast 200ms, down from 1000ms in the olden days. The vast majority of this great performance improvement is due to holding indexes completely in memory. Thousands of machines process each query in order to make search results appear nearly instantaneously.

This text was adapted from notes on Google Fellow Jeff Dean keynote speech at WSDM 2009.

Google isn't the only one getting a performance bang from moving data into memory. Both LinkedIn and Digg keep the graph of their social network in memory. Facebook has northwards of 800 memcached servers creating a reservoir of 28 terabytes of memory enabling a 99% cache hit rate. Even little guys can handle 100s of millions of events per day by using memory instead of disk.

The entire post is sort of confusing since it seems to mix ideas that should be two or three different blog posts into a single entry. Of the many ideas thrown around in the post, the one I find most interesting is highlighting the trend of treating in-memory storage as a core part of how a system functions not just as an optimization that keeps you from having to go to disk.

The LinkedIn architecture is a great example of this trend. They have servers which they call The Cloud whose job is to cache the site's entire social graph in memory, and they have created multiple instances of this cached social graph. Going to disk to satisfy social graph queries, which can require touching the data of hundreds to thousands of users, is simply never an option. This is different from how you would traditionally treat a caching layer such as ASP.NET caching or typical usage of memcached.

To build such a memory based architecture there are a number of features you need to consider that don't come out of the box in a caching product like memcached. The first is data redundancy, which is unsupported in memcached. There are forty instances of LinkedIn's in-memory social graph which have to be kept mostly in sync without putting too much pressure on underlying databases. Another feature common to such memory based architectures that you won't find in memcached is transparent support for failover. When your data is spread out across multiple servers, losing a server should not mean that an entire server's worth of data is no longer being served out of the cache. This is especially of concern when you have a decent-sized server cloud because it should be expected that servers come and go out of rotation all the time given hardware failures. Memcached users can solve this problem by using libraries that support consistent hashing (my preference) or by keeping a server available as a hot spare with the same IP address as the downed server. The key problem with the lack of native failover support is that there is no way to automatically rebalance the workload on the pool as servers are added and removed from the cloud.
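To make the consistent hashing approach concrete, here is a minimal hash ring sketch in Python. The class name and server labels are hypothetical; real client libraries add virtual-node weighting, connection handling and more, but the core property is the same: removing a server only remaps the keys that lived on it.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: keys map to the first server
    point at or after their own hash position on the ring."""

    def __init__(self, servers, replicas=100):
        self.replicas = replicas   # virtual nodes per server smooth out the load
        self.ring = {}             # ring position -> server name
        self.sorted_points = []    # sorted ring positions for binary search
        for server in servers:
            self.add(server)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, server):
        for i in range(self.replicas):
            point = self._hash(f"{server}#{i}")
            self.ring[point] = server
            bisect.insort(self.sorted_points, point)

    def remove(self, server):
        for i in range(self.replicas):
            point = self._hash(f"{server}#{i}")
            del self.ring[point]
            self.sorted_points.remove(point)

    def get_server(self, key):
        # Walk clockwise to the next server point, wrapping around the ring.
        idx = bisect.bisect(self.sorted_points, self._hash(key))
        idx %= len(self.sorted_points)
        return self.ring[self.sorted_points[idx]]
```

When a server drops out of rotation, only the keys whose ring positions fell on that server's points move to a neighbor; every other key keeps hitting the same cache server, which is what makes this far gentler on the underlying databases than naive `hash(key) % num_servers` placement.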

For large scale Web applications, if you're going to disk to serve data in your primary scenarios then you're probably doing something wrong. The path most scaling stories take is that people start with a database, partition it and then move to a heavily cache-based approach when they start hitting the limits of a disk based system. This is now common knowledge in the pantheon of Web development. What doesn't yet seem to be part of the communal knowledge is the leap from being cache-based with the option to fall back to a DB to simply being memory-based. The shift is slight but fundamental.
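The shift between the two designs can be sketched in a few lines. This is an illustrative sketch only; the names `db_lookup`, `cache` and `social_graph` are hypothetical stand-ins, not any real API.

```python
# Traditional cache-aside: the cache is an optimization and the
# database remains the source of truth, so a miss falls back to disk.
cache = {}

def get_profile_cache_aside(user_id, db_lookup):
    if user_id not in cache:
        cache[user_id] = db_lookup(user_id)  # cache miss -> hit the DB
    return cache[user_id]

# Memory-based: the in-memory copy IS the serving store. It is loaded
# in full (e.g. at startup or via replication), and going to disk
# during a request is never an option.
social_graph = {}

def get_connections_memory_based(user_id):
    return social_graph[user_id]  # a miss here is a bug, not a fallback
```

In the first function the DB quietly backstops every request path; in the second, correctness depends on the memory tier being complete and replicated, which is exactly why features like redundancy and failover stop being optional.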


  • Improving Running Components at Twitter: Evan Weaver's presentation on how Twitter improved their performance by two orders of magnitude by increasing their usage of caches and improving their caching policies among other changes. Favorite quote, "Everything runs from memory in Web 2.0".

  • Defining a data grid: A description of some of the features that an in-memory architecture needs beyond simply having data cached in memory.

Now Playing: Ludacris - Last Of A Dying Breed [Ft. Lil Wayne]


Categories: Web Development