May 22, 2009
@ 02:54 PM

In the past week or so, two of the biggest perception problems preventing the proliferation of OpenID as the de facto standard for decentralized identity on the Web have been addressed. The first is usability. I remember attending the Social Graph Foo Camp last year and chatting with a Yahoo! employee about why they hadn't become an OpenID relying party (i.e. enable people to log in to a Yahoo! account with OpenIDs). The response was that they had concerns about the usability of OpenID reducing the number of successful log-ins, given that it takes the user off the Yahoo! sign-in page to an often confusing and poorly designed page created by a third party.

Last year's launch and eventual success of Facebook Connect showed developers that it is possible to build a delegated identity workflow that isn't as intimidating and counterproductive as the experience typically associated with delegated identity systems like OpenID. On May 14th, Google announced that a similar experience has now been successfully designed and implemented for OpenID in the Google Code blog post titled Google OpenID API - taking the next steps, which states

We are happy to announce today two new enhancements to our API - introducing a new popup style UI for our user facing approval page, and extending our Attribute Exchange support to include first and last name, country and preferred language.

The new popup style UI, which implements the OpenID User Interface Extension Specification, is designed to streamline the federated login experience for users. Specifically, it's designed to ensure that the context of the Relying Party website is always available and visible, even in the extreme case where a confused user closes the Google approval window. JanRain, a provider of OpenID solutions, is an early adopter of the new API, and already offers it as part of their RPX product. As demonstrated by UserVoice using JanRain's RPX, the initial step on the sign-in page of the Relying Party website is identical to that of the "full page" version, and does not require any changes in the Relying Party UI.

Once the user selects to sign in using his or her Google Account, the Google approval page is displayed. However, it does not replace the Relying Party's page in the main browser window. Instead it is displayed as a popup window on top of it. We have updated our Open Source project to include a complete Relying Party example, providing code for both the back-end (in Java) and front-end (javascript) components.

Once the user approves the request, the popup page closes, and the user is signed in to the Relying Party website.

The aforementioned OpenID User Interface Extension allows the relying party to request that the OpenID provider authenticate the user via a popup instead of navigating away to the provider's page and then redirecting the user back to the relying party's site. Thus the claim that OpenID usability harms the login experience has now been effectively addressed, and I expect to see more OpenID providers and relying parties adopt this new popup style experience as part of the authentication process.
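For those curious about the mechanics, the extension works by adding a couple of parameters to an otherwise standard OpenID authentication request, which the relying party then opens in a popup window instead of doing a full-page redirect. A minimal sketch based on the extension spec, with all the usual OpenID parameters elided:

OpenID UI EXTENSION PARAMETERS:
openid.ns.ui=http://specs.openid.net/extensions/ui/1.0
openid.ui.mode=popup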

The second biggest perception blocker is the question raised in articles like Is OpenID Being Exploited By The Big Internet Companies?, which points out that no large web company actually supports OpenID as a way to log in to its primary services. The implication is that these companies are interested in using OpenID as a way to spread their reach across the web, including becoming identity providers for other companies, but don't want others to do the same to them.

That was true until earlier this week when Luke Shepard announced Facebook Supports OpenID for Automatic Login. Specifically,

Now, users can register for Facebook using their Gmail accounts. This is a quicker, more streamlined way for new users to register for the site, find their friends, and start exploring.

Existing and new users can now link their Facebook accounts with their Gmail accounts or with accounts from those OpenID providers that support automatic login. Once a user links his or her account with a Gmail address or an OpenID URL, logs in to that account, then goes to Facebook, that user will already be logged in to Facebook.

In tests we've run, we've noticed that first-time users who register on the site with OpenID are more likely to become active Facebook users. They get up and running after registering even faster than before, find their friends easily, and quickly engage on the site.

This makes Facebook the first major web company to truly embrace OpenID as a way to enable users to sign up for and log in to the site using credentials from a third party (a competitor, even). The fact that they also state that, contrary to popular perception, this actually improves the engagement of those users is a big deal.

Given both of these events, I expect that we'll see a number of prominent sites adopting OpenID, as they now clearly have nothing to lose and a lot to gain by doing so. This will turn out to be a great thing for users of the web and will bring us closer to the nirvana of true interoperability across the social networking and social media sites on the web.


 

Categories: Web Development

After all the hype, I got around to taking Wolfram Alpha for a spin last night, being unable to sleep after a weird Doctor Manhattan themed nightmare. The experience of using the site is very impressive, and there is a great walkthrough of its power in the Wolfram Alpha screencast, which I encourage you to watch if you are interested in learning about a new breed of search engine.

There have been a ton of articles calling Wolfram Alpha a "Google Killer" but after using the site for a few hours, although I find it fascinating, I question how much of a threat it is to Google, either as a way to satisfy the typical questions people ask Web search engines or as a threat to Google's search advertising cash cow. You can get a sense for the kinds of queries that Wolfram Alpha handles amazingly well from the list below

As you can tell from the above list, Wolfram Alpha is like having a search engine over the kind of data you'd see in the CIA's World Factbook or the Time Almanac. There really isn't anything like it on the Web today. However it isn't really a competitor to traditional web search engines, which for the most part are still focused on finding web pages, despite the various advancements in answering a subset of queries with direct answers instead of links to web pages, such as Google's OneBox results and Live Search's instant answers feature.

From my perspective, the threat to search engines like Google isn't Wolfram Alpha but the trend it represents. That trend is the renaissance of the vertical search engine. Earlier this year, I was putting together a panel at the MIX '09 conference and needed to invite the panelists from a pool of people who I'd either heard about or knew of professionally but had never contacted directly. How did I find out how to contact these people? Even though all of them had blogs, there wasn't a consistent way to track down contact information. So I looked them up on Facebook and sent each of them a private message. Mission accomplished. Unbeknownst to me, Facebook had become my "people search engine".

Here’s another story. Last year I worked on the most satisfying software release of my career, Windows Live (wave 3). After the launch I wanted to find out what people were saying about the product so I did a Twitter search for Windows Live and posted the results. While I wasn’t paying attention, Twitter had become my “what are people saying about <insert brand here>” search engine.

This trend of search engines dedicated to specific scenarios and contexts that can’t be answered well by Web search is the trend that traditional search engines should watch carefully.

I can imagine Wolfram Alpha eventually growing to satisfy a lot of the sorts of queries I go to Wikipedia for today, and doing so in a more authoritative manner. In that case, it would become my "facts and trivia" search engine. However there are currently too many gaps in its knowledge of commercial products (e.g. searching for "ipod" results in a coming soon notice) and people (e.g. the Jim Carrey entry is amazingly brief yet still manages to contain a factual inaccuracy) to make it a true replacement for Wikipedia. That said, the service shows great promise and it will be interesting to watch as it evolves.


 

Categories: Startup Shoutout

Biz Stone, Twitter's co-founder, recently wrote in a blog post entitled The Replies Kerfuffle that

We removed a setting that 3% of all accounts had ever touched but for those folks it was beloved.

97% of all accounts were not affected at all by this change—the default setting is that you only see replies by people you follow to people you follow. For the 3% who wanted to see replies to people they don't follow, we cannot turn this setting back on in its original form for technical reasons and we won't rebuild it exactly the same for product design reasons.

Even though only 3% of all Twitter accounts ever changed this setting away from the default, it was causing a strain and impacting other parts of the system. Every time someone wrote a reply Twitter had to check and see what each of their followers' reply setting was and then manifest that tweet accordingly in their timeline—this was the most expensive work the database was doing and it was causing other features to degrade which lead to SMS delays, inconsistencies in following, fluctuations in direct message counts, and more. Ideally, we would redesign and rebuild this feature but there was no time, hence the sudden deploy.

As someone whose day job is working on a system for distributing a user’s updates and activities to their social network in real-time across Web and desktop applications, I’m always interested in reading about the implementation choices of others who have built similar systems. However when it comes to Twitter I tend to be more confused than enlightened whenever something is revealed about their architecture.

So let’s look at what we know. When Ashton Kutcher posts an update on Twitter such as

it has to be delivered to all 1.75 million of his followers. On the other hand, when Ashton Kutcher posts an update directed to one of his celebrity friends such as

then Twitter needs to decide how to deliver it based on the Replies settings of users.

One option would be to check the setting of each of aplusk's 1.75 million followers to decide whether they have @replies restricted to only people they follow. Since this will be true for 97% of his followers (i.e. 1.7 million people), there would need to be 1.7 million checks to see if the intended recipient is also someone they follow before delivering the message to each of them. On the other hand, it would be pretty straightforward to deliver the message to the 3% of users who want to see all replies. This seems to be what Biz Stone is describing as how Twitter works, but in that case the default setting should be more expensive than the feature that is only used by a minority of their user base.

In that case, I'd expect Twitter to argue that the feature they want to remove for engineering reasons is filtering out some of the tweets you see based on whether you follow the person the message is directed to, not the other way around.

What have I missed here? 

Update: A comment on Hacker News put me on track to what I probably missed in analyzing this problem. In the above example, if the default case was the only case they had to support, then all they have to do to determine who should receive Ashton Kutcher's reply to John Mayer is perform an intersection of both users' follower lists. Given that both lists need to be in memory for the system to be anywhere near responsive, performing the intersection isn't as expensive as it sounds.

However, the fact that 3% of users want to receive the update even though they don't follow John Mayer means Twitter needs to do a second pass over whoever was not found in both follower lists and check their @reply delivery settings. In the above example, even if every follower of John Mayer was also a follower of Ashton Kutcher, it would still require 750,000 settings checks. Given that it sounds like they keep this setting in their database instead of in some sort of cache, it is no surprise that this is a feature they'd like to eliminate.
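To make the two passes concrete, here is a toy sketch of the delivery logic as I understand it. The data structures and the settings lookup are hypothetical stand-ins, not Twitter's actual implementation:

def reply_recipients(sender_followers, recipient_followers, wants_all_replies):
    # Pass 1 (cheap): the 97% default case is a set intersection, i.e.
    # followers of the sender who also follow the reply's recipient.
    recipients = sender_followers & recipient_followers
    # Pass 2 (expensive): every remaining follower of the sender needs an
    # individual settings check, a database hit if the setting isn't cached.
    for user_id in sender_followers - recipient_followers:
        if wants_all_replies(user_id):
            recipients.add(user_id)
    return recipients

The first pass is pure in-memory set math; the second is the per-user settings scan Biz Stone describes as the most expensive work their database was doing.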


 

Categories: Web Development

Joshua Porter has an excellent post titled Are you building an everyday app? (the LinkedIn problem)  where he writes

In a recent interview, LinkedIn CEO Reid Hoffman describes moving away from day to day to a more strategic role in the company he founded:

I want to be able to sink my mind around a couple of problems and work through them. For example, many professionals still don't understand how LinkedIn can be valuable on a daily or weekly basis.

Another way you could phrase this is: “people don’t use LinkedIn everyday…we need to figure out how to change that”.

The fact is that LinkedIn, in its current incarnation, is not an everyday app. An everyday app is one that is used every day (or most days) by its users.
...
In general, most people think they’re building an everyday app, but they’re not. When the actual use patterns are discovered, most apps will be used every few days or less. Designers have to ask themselves a very hard question: “How often are people really going to use our web application?”. The answer is important…it will even help drive design decisions. Whether or not you have an everyday app affects the entire design of what you’re building, including the screens, notifications, and frequency of the service. For example, only everyday apps really need to use real-time technology to update streams. If you find out that you’re not building an everyday app, you probably don’t need to invest in making it real-time. But…you might invest in a notifications system that can alert users to when something very interesting happens.

You don’t have to be an everyday app to be successful. Netflix, for example, is not an everyday app. It’s an every-few-days app. Most people go back every few days to update their queue. There is really no need to go back more often.

Many developers of social software applications on the Web believe they've built an everyday app when they actually haven't. One thing I've learned in almost five years of working on social software applications at Microsoft is that simply having the features of an everyday app doesn't translate to people using the application every day. The best way to think about this is that no application starts off as an everyday app. Very rarely does an application show up that is so amazing that people start using it every day right off the bat. Instead there is a transition, where either users go from occasional to frequent use while the application stays static, or the application itself evolves to cater to a more engaged user base than it had in previous years.

An example of the former is Twitter; the site hasn't really changed much since I started using it about a year and a half ago. However it wasn't until the right set of factors came together, such as following enough people, adding the Twitter app to Facebook & Windows Live to update my friends on those services, and getting Twitter applications for my desktop & phone, that I became an everyday user. Twitter's main problem is that not every user eventually hits this sweet spot, which is why you read articles like Twitter Quitters Post Roadblock to Long-Term Growth, which points out that the retention rate over a one month period hovers between 30% and 40% for new users. This need to make the application instantly useful is what prompts features like the Suggested Users List, whose purpose is to give new users content worth coming back to every day instead of the "Trying out Twitter for the first time" style posts they probably see from most of their friends who are also kicking the tires on the service they heard about on Oprah.

Having features that are useful every day like a constantly updating activity stream doesn’t mean people will use the site every day. For the users that cross the hump, it does. The challenge is how to get users to cross that hump.

A site that has done a good job of motivating its users to check it out on a daily basis, and of adjusting its features as its user base has become more engaged, is Facebook. One of Facebook's most annoying features for a long time was that notifications about new messages on the service didn't actually contain the message. I suspect the purpose of this was to drive users back to the site, so that they would then catch up on all they'd missed, such as invitations and content in the news feed, after they were done answering the message. Although this feature was annoying, it was definitely effective given anecdotal feedback from various people I talked to at the time. After a certain point, Facebook's user engagement grew to the point where sending notifications without the message content to drive users back to the site wasn't worth the decreased customer satisfaction.

Another great example from Facebook is the transition from the old news feed to the new stream. In March of 2009, the news feed was transformed into a real-time stream and almost two months later the real-time stream now updates live without having to refresh the page. The previous functionality of the news feed was relegated to an alternate highlights stream as shown below 

From reading the blog posts about the changes, the problem the switch from a highlights-centric news feed to a real-time stream addresses is that the highlights-based feed is stale and doesn't provide enough value for users who haven't just become everyday users but are now every-hour users. Not only do over 100 million of its users log in every day, but with 90 million users generating 90 billion page views in the month of March 2009, the average user viewed about 1,000 pages that month, which works out to more than one page view per hour.

And when you think about it, the introduction of the original news feed in 2006 was a successful attempt to go from being an occasionally updated rolodex for a large chunk of their users into a social utility to keep up with what’s going on in the lives of family, friends and coworkers. The switch to a real-time stream is how Facebook is addressing the fact that they have slowly become an every hour app instead of just an everyday app for their users.

A number of services I use online could learn from how Facebook has evolved its experience over time to increase the engagement of its user base, making sometimes small and sometimes huge changes to the user experience to encourage people to make the service a regular part of their lives.


 

Categories: Social Software

In my previous post I talk about adding support for reading and commenting on your Facebook news feed in RSS Bandit. This functionality is made possible by the recently announced Facebook Open Stream API. As I worked through the code in my few free moments, I was alternately impressed and frustrated by the Open Stream API. On the one hand, the functionality the API provides is greatly enabling and has already unleashed a bounty of innovation, as evidenced by the growing number of applications for interacting with your Facebook news feed on the desktop (e.g. Seesmic, Tweetdeck, bdule, etc). On the other hand, there are a few quirks of the API that make the web developer in me cringe and the desktop developer in me want to beg for mercy.

Below are my opinions on the Open Stream API. I'm sharing them so that other developers who plan to use the API can avoid some of the pitfalls I ran into, and as feedback for my peers at Facebook and elsewhere on best practices in providing activity stream APIs.

Good: Lots of options for getting at the data 

Most API calls at Facebook have two entry points. You can either call a "REST-like" URL endpoint via HTTP GET or perform a SQL-like query via FQL against a generic endpoint. For interacting with the Facebook news feed, you have stream.get and stream.getComments as the REST-like methods for accessing the news feed and the comment thread of a particular feed item respectively. With FQL, you can perform queries against the stream and comment tables to get the same results.

Both mechanisms give you the option of getting back the data as XML or JSON. Below are examples of what HTTP requests to retrieve your news feed look like using both approaches

stream.get REQUEST:
http://api.facebook.com/restserver.php?v=1.0&method=stream.get&format=XML&viewer_id={0}&session_key={1}&api_key={2}&call_id={3}&sig={4}

FQL REQUEST:
http://api.facebook.com/restserver.php?v=1.0&method=fql.query&format=XML&call_id={0}&session_key={1}&api_key={2}&sig={3}&query={4}
FQL QUERY:
select post_id, source_id, created_time, actor_id, target_id, app_id, message, attachment, comments, likes, permalink, attribution, type from stream where filter_key in (select filter_key FROM stream_filter where uid = {0} and type = 'newsfeed') and created_time >= {1} order by created_time desc limit 50

I ended up using stream.get over FQL because the only benefit I see to using FQL is being able to filter some of the result fields or combine data from multiple tables, neither of which I needed in this instance. I chose XML as the output format over JSON because I transform the results of the Facebook API request into an Atom feed and then hand it down to the regular Atom feed parsing code in RSS Bandit. It was much easier for me to handle this transformation using XSLT over XML results as opposed to procedural code over a JSON result set.
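To make the XSLT choice concrete, here is a trimmed-down sketch of the kind of transform involved, in Python with lxml standing in for the C# code in RSS Bandit. The input element names (stream_get_response, stream_post, etc.) are my assumptions based on the stream fields in the FQL query above, not the exact response schema:

from lxml import etree

# A trimmed-down stylesheet mapping stream posts to Atom entries. The input
# element names are assumptions based on the stream fields above, not the
# exact stream.get response schema.
STYLESHEET = etree.XML("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/2005/Atom">
  <xsl:template match="/stream_get_response">
    <feed>
      <title>Facebook News Feed</title>
      <xsl:for-each select="posts/stream_post">
        <entry>
          <id><xsl:value-of select="post_id"/></id>
          <title><xsl:value-of select="message"/></title>
          <link rel="alternate" href="{permalink}"/>
        </entry>
      </xsl:for-each>
    </feed>
  </xsl:template>
</xsl:stylesheet>
""")

def stream_to_atom(stream_xml):
    """Transform a stream.get XML response into an Atom feed document."""
    transform = etree.XSLT(STYLESHEET)
    return bytes(transform(etree.fromstring(stream_xml)))

The equivalent logic as procedural code walking a JSON tree is considerably more tedious, which is what tipped the scales toward XML output in my case.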

There's also a third mechanism for getting news feed data out of Facebook that would have been perfect for my needs. You can access the news feed using the nascent Activity Streams standard, which returns the data as an Atom feed with certain extensions specific to social network activity streams. I couldn't get this to work, probably due to user error on my part (more on this later), but if I had, the structure of the GET request would have been along the lines of

Activity Streams REQUEST:
http://www.facebook.com/activitystreams/feed.php?source_id={0}&app_id={1}&session_key={2}&sig={3}&v=0.7&read&updated_time={4}

If not for the initial problems I had figuring out which parameters to pass to the APIs, and the concern about building on an API that isn't yet at version 1.0, the Activity Streams API would have been perfect for my needs.

Kudos to the Facebook team for providing such a rich and varied set of options for getting the news feed out of their system. There’s something for every temperament.

Bad: Misuse of HTTP status codes

The Facebook API documentation describes the platform as "REST-like" and not RESTful. They sure weren't kidding. For the most part, when I've discussed the problems with APIs that aren't completely RESTful, the negatives of such an approach have seemed aesthetic rather than practical. Below are some of the practical problems I faced because the Facebook APIs do not use HTTP status codes in a manner consistent with the rest of the Web.

In HTTP, there are existing error codes which clients can interpret in a consistent manner and use to provide consistent feedback to users when they occur. Now consider this excerpt from the documentation on using Facebook's Activity Streams API

Response Codes

Like most HTTP responses, Facebook Activity Streams responses include a response header, which always includes a traditional response code and a short response message. The supported response codes include:

  • 200 Code provided whenever the Facebook servers were able to accommodate the request and provide a response.
  • 304 Code provided whenever the request header included If-Modified-Since and no new posts have been generated since the specified time.
    Note: Code 304 will never be returned if If-Modified-Since isn't included in the request header.
  • 401 Code provided whenever the URL omits one or more of the required parameters.
  • 403 Code provided whenever the URL is syntactically valid, but the user hasn't granted the required extended permission.
  • 404 Code provided whenever the URL is syntactically valid, but the signature is incorrect, or the session key is invalid.

You might notice that an HTTP 401 is used to indicate that the request is improperly formed. However, let’s see what RFC 2616 has to say about the 401 status code as well as how to communicate badly formed arguments

400 Bad Request

The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.

401 Unauthorized

The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity might include relevant diagnostic information. HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication"

According to the HTTP specification excerpted above, the status code to return on missing parameters should be 400, not 401. The practical problem with returning HTTP 401 in this case is that applications like RSS Bandit may have code paths that prompt the user to check or re-enter their credentials because an authentication error has occurred. We now have to special-case getting a 401 from Facebook's servers versus any other server on the Web. Thankfully, this error should be limited to development time unless Facebook changes their API in a backwards incompatible manner by requiring new parameters for their methods.

A bigger problem is that Facebook returns HTTP 200, which traditionally means success, in regularly occurring error conditions. Specifically, when a user's session key expires, a "successful" response is sent containing an error document. In RSS Bandit, we already have a processing pipeline that hands off successful responses containing an XML document to the RSS/Atom parsing layer. With Facebook's behavior I had two choices

  1. Modify the RSS/Atom parsing layer to understand Facebook error documents and then kick an error back up to the user interface asking the user to re-enter their credentials. This was particularly hacky because that layer doesn't really have a connection to the main UI thread, nor should it.

  2. Pre-process the XML document before handing it to the RSS/Atom parsing layer. This implies an intermediate step in between dispatching on the response status and actually processing the document.

Option #2 proved more palatable since there was already an intermediate step needed to transform the results of stream.get into an Atom feed. If I'd used the Activity Streams API as I had initially planned, the decision may have been harder to make.
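For what it's worth, the pre-processing step itself is small. Below is a sketch in Python rather than RSS Bandit's C#, and it assumes the error document is an error_response element wrapping error_code and error_msg children, which is how I remember the REST API's errors looking:

import xml.etree.ElementTree as ET

class FacebookApiError(Exception):
    """Raised so the caller can e.g. prompt the user to sign in again."""

def _local_name(element):
    # Strip any XML namespace, leaving just the local tag name.
    return element.tag.rsplit("}", 1)[-1]

def preprocess(response_body):
    # Facebook reports errors like expired session keys inside an HTTP 200
    # response, so sniff for an error document before the feed parser sees it.
    root = ET.fromstring(response_body)
    if _local_name(root) == "error_response":
        code = msg = None
        for child in root:
            if _local_name(child) == "error_code":
                code = child.text
            elif _local_name(child) == "error_msg":
                msg = child.text
        raise FacebookApiError(f"Facebook error {code}: {msg}")
    return response_body  # looks like a real feed document; pass it along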

Good: Well thought out model for authorizing access to user data

A number of application platforms work by giving an application all-or-nothing access to the user's data. My favorite example of this bad practice is Twitter, which for a long time made the situation even worse by requiring applications to collect the person's username and password. The controversy around Twply earlier this year shows exactly why giving applications more access to a user's data than they need is bad. Twply only needed access to a user's @replies but, given that there was no way to grant it access to just that aspect of a user's Twitter data, it also got the ability to post tweets on their behalf and did so in a spammy manner.

The Facebook API has a notion of extended permissions which are access rights that require special opt-in from the user given that the data or functionality is sensitive and can be abused. The current set of extended permissions in the Facebook API is provided below

  • publish_stream: Lets your application or site post content, comments, and likes to a user's profile and in the streams of the user's friends without prompting the user. This permission is a superset of the status_update, photo_upload, video_upload, create_note, and share_item extended permissions, so if you haven't prompted users for those permissions yet, you need only prompt them for publish_stream. Note: At this time, while the Open Stream API is in beta, the only Facebook users that can grant your application the publish_stream permission are the developers of your application.

  • read_stream: Lets your application or site access a user's stream and display it. This includes all of the posts in a user's stream. You need an active session with the user to get this data.

  • email: Allows an application to send email to its user. This permission can be obtained only through the fb:prompt-permission tag or the promptpermission attribute. When the user accepts, you can send him/her an email via notifications.sendEmail or directly to the proxied_email FQL field.

  • offline_access: Grants an application access to user data when the user is offline or doesn't have an active session. This permission can be obtained only through the fb:prompt-permission tag or the promptpermission attribute.

  • create_event: Allows an app to create and modify events for a user via the events.create, events.edit and events.cancel methods.

  • rsvp_event: Allows an app to RSVP to an event on behalf of a user via the events.rsvp method.

  • sms: Allows a mobile application to send messages to the user and respond to messages from the user via text message.

  • status_update: Grants your application the ability to update a user's or Facebook Page's status with the status.set or users.setStatus method. Note: You should prompt users for the publish_stream permission instead, since it includes the ability to update a user's status.

  • photo_upload: Relaxes requirements on the photos.upload and photos.addTag methods. If the user grants this permission, photos uploaded by the application will bypass the pending state and the user will not have to manually approve the photos each time. Note: You should prompt users for the publish_stream permission instead, since it includes the ability to upload a photo.

  • video_upload: Allows an application to provide the mechanism for a user to upload videos to their profile. Note: You should prompt users for the publish_stream permission instead, since it includes the ability to upload a video.

  • create_note: Allows an application to provide the mechanism for a user to write, edit, and delete notes on their profile. Note: You should prompt users for the publish_stream permission instead, since it includes the ability to let a user write notes.

  • share_item: Allows an application to provide the mechanism for a user to post links to their profile. Note: You should prompt users for the publish_stream permission instead, since it includes the ability to let a user share links.

Bad: Too many prompts for desktop applications

Although I've praised the extended permissions model, it currently leads to a cumbersome experience for desktop applications. Installing Facebook desktop applications like bdule or Facebook for Adobe AIR requires running through three separate permissions screens. The user has to log in, then grant the read_stream extended permission, followed by the publish_stream extended permission. Granted, the latter two only need to be done once, but they still hurt the out-of-box experience fairly noticeably in my opinion.

In RSS Bandit, I've attempted to reduce this by delaying the prompt for the publish_stream permission until the first time a user tries to comment on a news feed item from within the application. Streamlining this experience would be a boon for application developers who want the entire experience to be smooth and painless. The documentation on the Open Stream API page states that there are options for streamlining these requests, but they only apply to Web applications, not desktop apps. Sad.
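For reference, each of those prompts boils down to sending the user to a permissions URL. As best I can reconstruct it from the docs (the placeholder is mine, in the same style as the earlier examples):

EXTENDED PERMISSION REQUEST:
http://www.facebook.com/authorize.php?api_key={0}&v=1.0&ext_perm=read_stream

The same request is then repeated with ext_perm=publish_stream, which is why the out of box experience ends up being three separate screens: one login plus one per permission.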

Bad: Plethora of application identifiers and authentication requirements is a stumbling block for beginners

The official documentation on the Facebook API doesn't do a good job of connecting all the dots when it comes to understanding how to make calls to the service. For example, when you read the documentation for stream.get it is not obvious from that page what endpoint to make requests to, or that every Facebook API call has a set of required parameters beyond the ones listed on that page. In fact, I was stumped for about a week trying to figure out the magical incantations to get the right set of parameters for API calls and the right data to put in them, until I stumbled on a Facebook forum post about creating the 'sig' parameter which solved my problems. I believe I once saw this information in the Facebook API documents but after ten minutes of searching just now I can't seem to find it, so perhaps it was in my imagination.

Part of the problem is the variety of identifiers that you have to keep straight as an application developer, including your

  • application ID
  • API key
  • application secret
  • session key
  • client secret

The fact that various APIs take different combinations of the above led to more than one confusing moment for me. Eventually I figured it out, but I felt like I was being hazed as I went through the process.
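If it saves anyone else a week, the scheme that eventually worked for me is, to the best of my recollection, an MD5 hash over the sorted request parameters with a secret appended. Here's a sketch in Python; the parameter values are placeholders, and as I understand it desktop apps substitute the session secret returned by auth.getSession for the application secret once a session exists:

import hashlib

def facebook_sig(params, secret):
    # The 'sig' parameter: md5 over the sorted name=value pairs concatenated
    # together, with the secret appended at the end.
    pieces = "".join(f"{name}={params[name]}" for name in sorted(params))
    return hashlib.md5((pieces + secret).encode("utf-8")).hexdigest()

# Placeholder values; the sig is computed over every parameter in the call,
# both the required ones below and any method-specific ones.
params = {
    "method": "stream.get",
    "api_key": "YOUR_API_KEY",
    "session_key": "YOUR_SESSION_KEY",
    "call_id": "1",
    "v": "1.0",
    "format": "XML",
}
params["sig"] = facebook_sig(params, "YOUR_SECRET")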


 

Categories: Platforms

So I've spent the weekend working my way through Facebook's Open Stream API and have made a ton of progress in adding the option to read your Facebook news feed from RSS Bandit. In my previous post I showed the initial screenshots where you can log in to Facebook via RSS Bandit to begin the authentication process to retrieve your news feed. Below are a few more screenshots showing how much progress has been made

Requesting extended permission to read the stream

Fig 1: This is the screen where you give RSS Bandit permission to access your news feed. Note that this is different from the screen where you login to Facebook via RSS Bandit. You should only see this screen once but will get the login screen quite often unless you select the option to stay signed into Facebook via RSS Bandit.

 The Facebook news feed in RSS Bandit

Fig 2: This is what your Facebook news feed looks like in RSS Bandit with the default stylesheet for Facebook applied. As you can see from the screen shot I’ve attempted to replicate the look of the news feed as best as I can in RSS Bandit. This includes options below each item to comment or see who has liked an item.

Viewing news feed comments in RSS Bandit

Fig 3: There is also support for viewing the comments on a Facebook item inline within the Feed Details list view. Although this is different from how Facebook does it, I felt this was needed to make it consistent with the other inline comment viewing experiences within RSS Bandit.


There's still a bunch of cleanup to do, such as fixing up the look of comments when you click on them and providing a way to post comments on your friends' Facebook items. This is probably another day or so of work which I'll do next weekend. After that it's back to fixing bugs and working with Torsten on our idea from over two years ago on how to add an Office 2007 style ribbon to the application.

Below are pictures of the initial prototypes which Torsten has dusted off and will start working on again

I’ll follow up this post with some of my thoughts on writing a desktop application that targets Facebook’s Open Stream API. The process was definitely more painful than I expected but the results are worth it which says a lot about the functionality provided by the API.


 

Categories: RSS Bandit

Farhad Manjoo has an article on Slate entitled Kill your RSS reader which captures a growing sentiment I’ve had for a while and ranted about during a recent panel at SXSW. Below are a few key excerpts from Farhad’s article that resonate strongly with me

In theory, the RSS reader is a great idea. Many years ago, as blogs became an ever-larger part of my news diet, I got addicted to Bloglines, one of the first popular RSS programs. I used to read a dozen different news sites every day, going to each site every so often to check whether something fresh had been posted. With Bloglines, I just had to list the sites I loved and it would do the visiting for me. This was fantastic—instead of scouring the Web for interesting stories, everything came to me!
...
But RSS started to bring me down. You know that sinking feeling you get when you open your e-mail and discover hundreds of messages you need to respond to—that realization that e-mail has become another merciless chore in your day? That's how I began to feel about my reader. RSS readers encourage you to oversubscribe to news. Every time you encounter an interesting new blog post, you've got an incentive to sign up to all the posts from that blog—after all, you don't want to miss anything. Eventually you find yourself subscribed to hundreds of blogs, many of which, you later notice, are completely useless. It's like having an inbox stuffed with e-mail from overactive listservs you no longer care to read.

It's true that many RSS readers have great tools by which to organize your feeds, and folks more capable than I am have probably hit on ways to categorize their blogs in a way that makes it easy to get through them. But that was just my problem—I began to resent that I had to think about organizing my reader.

This mirrors my experience and that of many of my friends who used to be enthusiastic users of RSS readers. Today I primarily find out what's going on in blogs using a combination of Twitter, Techmeme and Planet Intertwingly. The interesting thing is that I'm already subscribed to about half of the blogs that end up getting linked to in these sources on a regular basis, yet I tend to avoid firing up my RSS reader.

The problem is that the RSS readers I use regularly, Google Reader and RSS Bandit, take their inspiration from email clients, which is the wrong model for consuming casual content like blogs. Whenever I fire up an email application like Outlook or Hotmail, it presents me with a list of tasks I must complete in the form of messages that need responses, work items, meeting invitations, spam that needs deleting, notifications related to commercial/financial transactions that I need to be aware of, and so on. Reading email is a chore where you are constantly taunted by the BOLD unread messages indicator silently nagging you about the stuff you haven't done yet.

Given that a significant percentage of the time the stuff in my email inbox consists of messages sent directly to me that need some form of response or acknowledgment, this model is somewhat sound, although as many have pointed out there is a lot of room for improvement.

When it comes to blogs and other casual content, this model breaks down. I really don’t need a constant nagging reminder that I haven’t read the half dozen reposts of the same tech news stories about Google, Twitter and Facebook after I’ve seen the first one. Furthermore, if I haven’t fired up my reader in a while then I don’t care to be nagged about all the stuff I missed since they are just blogs so it is OK if I never read them. This opinion isn’t new, Dave Winer has been evangelizing “River of News” style aggregators for several years and given the success of this model for social networking sites like Facebook and microblogging sites like Twitter, it’s clear that Dave was onto something.

Looking back at the time I’ve spent working on RSS Bandit, I realize there are a couple of features I added to attempt to glom the river of news model on top of an email based model for reading feeds. These features include

  • the ability to mark all items as read after navigating away from a feed. This allows you to skim the interesting headlines then not have to deal with the “guilt” of not reading the rest of the items in the feed.
  • a reading pane inspired by Google Reader where unread items are presented in a single flow and marked as read as you scroll past each item

Looking back now, it seems to me that the way we think of RSS readers needs to fundamentally change. Presenting information as a news feed where the user isn't pressured to read every item, or made to feel like a failure for skipping some, is one way to move the needle on the user experience here. What I wonder is whether it isn't already too late for this category of applications as services like Twitter & Facebook take over as the way people keep up to date with the people and content they care about.


 

This blog isn’t the only place where I’ve been falling behind given my home and work responsibilities. I’ve also not been doing a great job on supporting RSS Bandit. Below is a repost of my first post in months on the official RSS Bandit website.

I'd like to apologize about the lack of updates on this site in recent months. Both Torsten and I have had the combination of work and new additions to the family at home take up a lot of our free time. I'm slowly beginning to get some time back as my son grows and things get less hectic at work. This weekend I started hacking RSS Bandit code again and feel it's a good time to open up communication channels about the next release, codenamed Colossus, and the planned features.

Right now, I'm working on integrating support for reading your news feed from Facebook using their recently announced Open Stream API. You will be able to select Facebook as a choice from the "Synchronize Feeds" menu

Once that choice is made, you have to log in to Facebook.

For the best experience, you should check the option to stay logged in so that we don't have to prompt you for your Facebook username each time you run the application.

In addition, we will also be looking at fixing a number of the crashing bugs that have been reported and have affected the stability of the application. We know there is at least one problem caused by the inability to render favicons in the tree view correctly, which causes the application to crash. If you suspect you are hitting that problem, a temporary workaround is to disable favicons in the tree view by going to the Tools->Options menu and unchecking the option to use favicons as feed icons on the Display tab.

If you have any issues, thoughts or requests related to future versions of RSS Bandit please leave a comment in response to this post.

PS: Thanks to a tip from Mike Vernal I've already gotten my first improvement on this feature. The Facebook login screen will look more streamlined when this is finally released, instead of looking like it barely fits in the dialog box as it does above.


 

Categories: RSS Bandit

While hacking on a side project today, I was trying to find a way to describe the difference between Facebook & Twitter given how my wife uses both services, and came up with the following

TWITTER: Get up to the minute reports on what celebrities, who have no idea who you are, had for breakfast

FACEBOOK: Get up to the minute reports on what people you haven’t talked to in almost 20 years are having for lunch.


 

Categories: Social Software

I’ve been busy with work and spending time with my son so I haven’t been as diligent as I should be with the blog. Until my time management skills get better, here are some thoughts from a guest post I wrote recently for the Live Services blog.

Dare Obasanjo here, from the Live Services Program Management team. I'd like to talk a bit about the work we are doing to increase interoperability across the "Social Web."

The term The Social Web has been increasingly used to describe the rise of the Web as a way for people to interact, communicate and share with each other using the World Wide Web. Experiences that were once solitary, such as reading news or browsing one's photo albums, have now been made more social by sites such as Digg and Flickr. With so many Web sites offering social functionality, it has become increasingly important for people not only to be able to connect and share with their friends on a single Web site but also to take these relationships and activities with them wherever they go on the Web.

With the recent update to Windows Live, we are continuing with the vision of enabling our 500 million customers to share and connect with the people they care about regardless of what services they use. Our customers can now invite their contacts from MySpace (the largest U.S. social networking site) and Hi5 to join them on Windows Live in a safe manner, without having to resort to the password anti-pattern. These sites join Facebook and LinkedIn as social networks from which people can import their social graph or friend list into Windows Live.


In addition to interoperating with social networks to bridge relationships across the Web, we are also always working on enabling customers to share the content they find interesting or the activities they are participating in from all over the Web with their friends who use Windows Live services like Hotmail and Messenger. Customers of Windows Live can now add activities from over thirty different online services to their Windows Live profile, including social networking sites like Facebook, photo sharing sites like Smugmug & Photobucket, social music sites like last.fm & Pandora, social bookmarking sites like Digg & Stumbleupon and much more.

We are also happy to announce today that in the coming months, MySpace customers will be able to share activities and updates from MySpace with their Windows Live network.

Below is a screenshot of some of the updates you might find on my profile on Windows Live 


These recent announcements bring us one step closer to a Social Web where interoperability is the norm instead of the exception. One of the most exciting things about our recent release is how much of the behind-the-scenes integration is done using community driven technologies such as the Atom syndication format, Atom Activity Extensions, OAuth, and Portable Contacts. These community driven technologies are helping to ensure that the Social Web is a web of interconnected and interoperable web sites, not a set of competing walled gardens desperately clutching customer data in an attempt to invent Lock-In 2.0.

As we look towards the future, I believe that the aforementioned standards around contact exchange, social activity streams and authorization are just the first steps. When we look at all the capabilities across the Web landscape it is clear that there are scenarios that are still completely broken due to lack of interoperability across various social websites. You can expect more from Windows Live when it comes to interoperability and the Social Web.

Just watch this space.

Now Playing: Eminem - We Made You


 

Categories: Social Software | Windows Live