So I’ve spent the weekend working my way through Facebook’s Open Stream API and have made a ton of progress in adding the option to read your Facebook news feed from RSS Bandit. In my previous post I showed the initial screenshots where you can log in to Facebook via RSS Bandit to begin the authentication process to retrieve your news feed. Below are a few more screenshots showing how much progress has been made.

Requesting extended permission to read the stream

Fig 1: This is the screen where you give RSS Bandit permission to access your news feed. Note that this is different from the screen where you log in to Facebook via RSS Bandit. You should only see this screen once, but you will get the login screen quite often unless you select the option to stay signed in to Facebook via RSS Bandit.

 The Facebook news feed in RSS Bandit

Fig 2: This is what your Facebook news feed looks like in RSS Bandit with the default stylesheet for Facebook applied. As you can see from the screenshot, I’ve attempted to replicate the look of the news feed as best as I can in RSS Bandit. This includes options below each item to comment or see who has liked an item.

Viewing news feed comments in RSS Bandit

Fig 3: There is also support for viewing the comments on a Facebook item inline within the Feed Details list view. Although this is different from how Facebook does it, I felt this was needed to make it consistent with the other inline comment viewing experiences within RSS Bandit.


There’s still a bunch of cleanup to do, such as fixing up the look of comments when you click on them and providing a way to post comments on your friends’ Facebook items. This is probably another day or so of work which I’ll do next weekend. After that it is back to fixing bugs and working with Torsten on our idea from over two years ago on how to add an Office 2007 style ribbon to the application.

Below are pictures of the initial prototypes which Torsten has dusted off and will start working on again.

I’ll follow up this post with some of my thoughts on writing a desktop application that targets Facebook’s Open Stream API. The process was definitely more painful than I expected but the results are worth it which says a lot about the functionality provided by the API.
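For anyone curious about what talking to the Open Stream API from a desktop app involves, here is a rough sketch of fetching the news feed with the stream.get method on Facebook's REST endpoint. It is a minimal sketch, assuming you already have an application key, application secret and a session key from the login flow shown in the screenshots above; treat the parameter handling as illustrative rather than as the exact code used in RSS Bandit.

```python
import hashlib
import json
import time
import urllib.parse
import urllib.request

# Illustrative values only -- a real application gets these from the
# Facebook developer site and the desktop login flow shown above.
API_KEY = "your_api_key"
APP_SECRET = "your_app_secret"
SESSION_KEY = "session_key_from_login"

REST_ENDPOINT = "https://api.facebook.com/restserver.php"


def call_rest_method(method, **params):
    """Call a method on Facebook's REST API and return the parsed JSON response."""
    args = {
        "method": method,
        "api_key": API_KEY,
        "session_key": SESSION_KEY,
        "v": "1.0",
        "format": "JSON",
        "call_id": str(int(time.time() * 1000)),
    }
    args.update({k: str(v) for k, v in params.items()})
    # The request signature is the MD5 hash of the sorted key=value pairs
    # concatenated together, followed by the application secret.
    payload = "".join(f"{k}={args[k]}" for k in sorted(args)) + APP_SECRET
    args["sig"] = hashlib.md5(payload.encode("utf-8")).hexdigest()

    data = urllib.parse.urlencode(args).encode("utf-8")
    with urllib.request.urlopen(REST_ENDPOINT, data) as response:
        return json.loads(response.read().decode("utf-8"))


# stream.get returns the posts, comments and profiles that make up the news
# feed; an aggregator then maps each post onto its own feed item model.
news_feed = call_rest_method("stream.get", limit=30)
for post in news_feed.get("posts", []):
    print(post.get("created_time"), post.get("message", ""))
```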


 

Categories: RSS Bandit

Farhad Manjoo has an article on Slate entitled Kill your RSS reader which captures a growing sentiment I’ve had for a while and ranted about during a recent panel at SXSW. Below are a few key excerpts from Farhad’s article that resonate strongly with me

In theory, the RSS reader is a great idea. Many years ago, as blogs became an ever-larger part of my news diet, I got addicted to Bloglines, one of the first popular RSS programs. I used to read a dozen different news sites every day, going to each site every so often to check whether something fresh had been posted. With Bloglines, I just had to list the sites I loved and it would do the visiting for me. This was fantastic—instead of scouring the Web for interesting stories, everything came to me!
...
But RSS started to bring me down. You know that sinking feeling you get when you open your e-mail and discover hundreds of messages you need to respond to—that realization that e-mail has become another merciless chore in your day? That's how I began to feel about my reader. RSS readers encourage you to oversubscribe to news. Every time you encounter an interesting new blog post, you've got an incentive to sign up to all the posts from that blog—after all, you don't want to miss anything. Eventually you find yourself subscribed to hundreds of blogs, many of which, you later notice, are completely useless. It's like having an inbox stuffed with e-mail from overactive listservs you no longer care to read.

It's true that many RSS readers have great tools by which to organize your feeds, and folks more capable than I am have probably hit on ways to categorize their blogs in a way that makes it easy to get through them. But that was just my problem—I began to resent that I had to think about organizing my reader.

This mirrors my experience and that of many of my friends who used to be enthusiastic users of RSS readers. Today I primarily find out what’s going on in blogs using a combination of Twitter, Techmeme and Planet Intertwingly. The interesting thing is that I’m already subscribed to about half of the blogs that end up getting linked to from these sources on a regular basis, yet I tend to avoid firing up my RSS reader.

The problem is that the RSS readers I use regularly, Google Reader and RSS Bandit, take their inspiration from email clients, which is the wrong model for consuming casual content like blogs. Whenever I fire up an email application like Outlook or Hotmail it presents me with a list of tasks I must complete in the form of messages that need responses, work items, meeting invitations, spam that needs deleting, notifications related to commercial/financial transactions that I need to be aware of, and so on. Reading email is a chore where you are constantly taunted by the BOLD unread messages indicator silently nagging you about the stuff you haven’t done yet.

Given that a significant percentage of the time the stuff in my email inbox is messages that were sent directly to me and need some form of response or acknowledgment, this model is somewhat sound, although as many have pointed out there is a lot of room for improvement.

When it comes to blogs and other casual content, this model breaks down. I really don’t need a constant nagging reminder that I haven’t read the half dozen reposts of the same tech news stories about Google, Twitter and Facebook after I’ve seen the first one. Furthermore, if I haven’t fired up my reader in a while then I don’t care to be nagged about all the stuff I missed since they are just blogs, so it is OK if I never read them. This opinion isn’t new; Dave Winer has been evangelizing “River of News” style aggregators for several years, and given the success of this model for social networking sites like Facebook and microblogging sites like Twitter, it’s clear that Dave was onto something.

Looking back at the time I’ve spent working on RSS Bandit, I realize there are a couple of features I added to attempt to glom the river of news model on top of an email-based model for reading feeds. These features include

  • the ability to mark all items as read after navigating away from a feed. This allows you to skim the interesting headlines then not have to deal with the “guilt” of not reading the rest of the items in the feed.
  • a reading pane inspired by Google Reader where unread items are presented in a single flow and marked as read as you scroll past each item (a rough sketch of both behaviors follows below)
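To make the contrast with the email model concrete, here is a minimal sketch of what those two behaviors boil down to. The class and method names are mine, not RSS Bandit’s actual object model; it only illustrates “mark everything read when you navigate away” and “mark an item read once it has scrolled past”.

```python
from dataclasses import dataclass, field


@dataclass
class FeedItem:
    title: str
    read: bool = False


@dataclass
class FeedView:
    """Hypothetical reading-pane state, not RSS Bandit's real object model."""
    items: list[FeedItem] = field(default_factory=list)

    def unread_count(self) -> int:
        return sum(1 for item in self.items if not item.read)

    def navigate_away(self) -> None:
        # Escape hatch from the email model: leaving the feed clears the
        # nagging unread counter, so skimming the headlines is "good enough".
        for item in self.items:
            item.read = True

    def scrolled_past(self, index: int) -> None:
        # River of news model: anything that has flowed past the viewport
        # counts as read, with no explicit action required from the user.
        for item in self.items[: index + 1]:
            item.read = True


view = FeedView([FeedItem("Post A"), FeedItem("Post B"), FeedItem("Post C")])
view.scrolled_past(1)        # the user scrolled past the first two items
print(view.unread_count())   # -> 1
view.navigate_away()         # the user clicks on another feed
print(view.unread_count())   # -> 0
```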

Looking back now, it seems to me that the way we think of RSS readers needs to fundamentally change. Presenting information as a news feed where the user isn’t pressured to read every item or made to feel like a failure for skipping some is one way to move the needle on the user experience here. What I wonder is whether it isn’t already too late for this category of applications as services like Twitter & Facebook take over as the way people keep up to date with what’s going on with the people and content they care about.


 

This blog isn’t the only place where I’ve been falling behind given my home and work responsibilities. I’ve also not been doing a great job on supporting RSS Bandit. Below is a repost of my first post in months on the official RSS Bandit website.

I’d like to apologize about the lack of updates on this site in recent months. Both Torsten and I have had the combination of work and new additions to the family at home take up a lot of our free time. I’m slowly beginning to get some time back as my son has started growing and things get less hectic at work. This weekend I started hacking RSS Bandit code again and feel it’s a good time to open up communication channels about the next release, codenamed Colossus, and the planned features.

Right now, I’m working on integrating support for reading your news feed from Facebook using their recently announced Open Stream API. You will be able to select Facebook as a choice from the “Synchronize Feeds” menu.

Once that choice is made, you have to log in to Facebook.

For the best experience, you should check the option to stay logged in to RSS Bandit so that we don’t have to prompt you for your Facebook username each time you run the application.

In addition, we will also be looking at bug fixes for a number of the crashing bugs that have been reported which have affected the stability of the application. We know there is at least one problem caused by the inability to render favicons in the tree view correctly which causes the application to crash. If you suspect you are hitting that problem, a temporary workaround is to disable favicons in the tree view by going to the Tools->Options menu and unchecking the option to use favicons as feed icons from the Display tab.

If you have any issues, thoughts or requests related to future versions of RSS Bandit please leave a comment in response to this post.

PS: Thanks to a tip from Mike Vernal I’ve already gotten my first improvement on this feature. The Facebook login screen will look more streamlined when this is finally released instead of looking like it barely fits in the dialog box as it does above.


 

Categories: RSS Bandit

While hacking on a side project today, I was trying to find a way to describe the difference between Facebook & Twitter, given how my wife uses both services, and came up with the following

TWITTER: Get up-to-the-minute reports on what celebrities, who have no idea who you are, had for breakfast.

FACEBOOK: Get up-to-the-minute reports on what people you haven’t talked to in almost 20 years are having for lunch.


 

Categories: Social Software

I’ve been busy with work and spending time with my son so I haven’t been as diligent as I should be with the blog. Until my time management skills get better, here are some thoughts from a guest post I wrote recently for the Live Services blog.

Dare Obasanjo here, from the Live Services Program Management team. I'd like to talk a bit about the work we are doing to increase interoperability across the "Social Web."

The term The Social Web has been increasingly used to describe the rise of the Web as a way for people to interact, communicate and share with each other using the World Wide Web. Experiences that were once solitary, such as reading news or browsing one's photo albums, have now been made more social by sites such as Digg and Flickr. With so many Web sites offering social functionality, it has become increasingly important for people not only to be able to connect and share with their friends on a single Web site but also to take these relationships and activities with them wherever they go on the Web.

With the recent update to Windows Live, we are continuing with the vision of enabling our 500 million customers to share and connect with the people they care about regardless of what services they use. Our customers can now invite their contacts from MySpace (the largest U.S. social networking site) and Hi5 to join them on Windows Live in a safe manner without having to resort to using the password anti-pattern. These sites join Facebook and LinkedIn as social networks from which people can import their social graph or friend list into Windows Live.


In addition to interoperating with social networks to bridge relationships across the Web, we are also always working on enabling customers to share the content they find interesting or the activities they are participating in from all over the Web with their friends who use Windows Live services like Hotmail and Messenger. Customers of Windows Live can now add activities from over thirty different online services to their Windows Live profile, including social networking sites like Facebook, photo sharing sites like Smugmug & Photobucket, social music sites like last.fm & Pandora, social bookmarking sites like Digg & Stumbleupon and much more.

We are also happy to announce today that in the coming months, MySpace customers will be able to share activities and updates from MySpace with their Windows Live network.

Below is a screenshot of some of the updates you might find on my profile on Windows Live 


These recent announcements bring us one step closer to a Social Web where interoperability is the norm instead of the exception. One of the most exciting things about our recent release is how much of the behind-the-scenes integration is done using community-driven technologies such as the Atom syndication format, Atom Activity Extensions, OAuth, and Portable Contacts. These community-driven technologies are helping to ensure that the Social Web is a web of interconnected and interoperable web sites, not a set of competing walled gardens desperately clutching to customer data in an attempt to invent Lock-In 2.0.

As we look towards the future, I believe that the aforementioned standards around contact exchange, social activity streams and authorization are just the first steps. When we look at all the capabilities across the Web landscape it is clear that there are scenarios that are still completely broken due to lack of interoperability across various social websites. You can expect more from Windows Live when it comes to interoperability and the Social Web.

Just watch this space.

Now Playing: Eminem - We Made You


 

Categories: Social Software | Windows Live

Joe Gregorio, one of the editors of RFC 5023: The Atom Publishing Protocol, has declared it a failure in his blog post titled The Atom Publishing Protocol is a failure where he writes

The Atom Publishing Protocol is a failure. Now that I've met my blogging-hyperbole-quotient for the day let's talk about standards, protocols, and technology… AtomPub isn't a failure, but it hasn't seen the level of adoption I had hoped to see at this point in its life.

Thick clients, RIAs, were supposed to be a much larger component of your online life. The cliche at the time was, "you can't run Word in a browser". Well, we know how well that's held up. I expect a similar lifetime for today's equivalent cliche, "you can't do photo editing in a browser". The reality is that more and more functionality is moving into the browser and that takes away one of the driving forces for an editing protocol.

Another motivation was the "Editing on the airplane" scenario. The idea was that you wouldn't always be online and when you were offline you couldn't use your browser. The part of this cliche that wasn't put down by Virgin Atlantic and Edge cards was finished off by Gears and DVCS's.

The last motivation was for a common interchange format. The idea was that with a common format you could build up libraries and make it easy to move information around. The 'problem' in this case is that a better format came along in the interim: JSON. JSON, born of Javascript, born of the browser, is the perfect 'data' interchange format, and here I am distinguishing between 'data' interchange and 'document' interchange. If all you want to do is get data from point A to B then JSON is a much easier format to generate and consume as it maps directly into data structures, as opposed to a document oriented format like Atom, which has to be mapped manually into data structures and that mapping will be different from library to library.

The Atom effort rose up around the set of scenarios related to blogging-based applications at the turn of the decade: RSS readers and blog editing tools. The Atom syndication format was supposed to be a boon for RSS readers while the Atom Publishing Protocol was intended to make things better for blog editing tools. There was also the expectation that the format and protocol were general enough that they would standardize all microcontent syndication and publishing scenarios.

The Atom syndication format has been as successful as, or perhaps even more successful than, originally intended because its original scenarios are still fairly relevant on today’s Web. Reading blogs using feed readers like Google Reader, Outlook and RSS Bandit is still just as relevant today as it was six or seven years ago. Secondly, interesting new ways to consume feeds have sprung up in the form of social aggregation via sites such as FriendFeed. Also, since the Atom format is actually a generic format for syndicating microcontent, it has proved useful as new classes of microcontent have shown up on the Web such as streams of status updates and social network news feeds. Thanks to the Atom syndication format’s extensibility it is being applied to these new scenarios in effective ways via community efforts such as ActivityStrea.ms and OpenSocial.

On the other hand, Joe is right that the Atom Publishing Protocol hasn't fared as well with the times. Today, editing a blog post via a Web-based blog editing tool like the edit box on a site like Blogger is a significantly richer experience than it was at the turn of the century. The addition of features such as automatically saving draft posts has also worked towards obviating some of the claimed benefits of desktop applications for editing blogs. For example, even though I use Windows Live Writer to edit my blog I haven't been able to convince my wife to switch to using it for editing her blog because she finds the Web "edit box" experience to be good enough. The double whammy comes from the fact that although new forms of microcontent have shown up which do encourage the existence of desktop tools (e.g. there are almost a hundred desktop Twitter apps and the list of Facebook desktop applications is growing rapidly), the services which provide these content types have shunned AtomPub and embraced JSON as the way to expose APIs for rich clients to interact with their content. The primary reason for this is that JSON works well as a protocol for both browser based client apps and desktop apps since it is more compatible with object oriented programming models and the browser security model versus an XML-based document-centric data format.

In my opinion, the growth in popularity of object-centric JSON over document-centric XML as the way to expose APIs on the Web has been the real stake in the heart for the Atom Publishing Protocol.
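To see why the object-centric format wins for rich clients, compare how the same status update looks coming off the wire. The payloads below are made up for illustration rather than taken from any specific service’s API; the point is that the JSON version deserializes straight into data structures you can use, while the Atom-style XML has to be walked and mapped by hand.

```python
import json
import xml.etree.ElementTree as ET

# A made-up status update, not any particular service's real payload.
json_payload = '{"id": "123", "author": "dare", "content": "Hello world"}'
atom_payload = """
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>123</id>
  <author><name>dare</name></author>
  <content type="text">Hello world</content>
</entry>
"""

# JSON: one call and you have plain dicts and lists your code can use directly.
update = json.loads(json_payload)
print(update["author"], update["content"])

# Atom/XML: you have to know the namespace, walk the document and then decide
# how to map its elements onto your own objects.
ns = {"atom": "http://www.w3.org/2005/Atom"}
entry = ET.fromstring(atom_payload)
author = entry.find("atom:author/atom:name", ns).text
content = entry.find("atom:content", ns).text
print(author, content)
```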

FURTHER READING

  • JSON vs. XML: Browser Programming Models – Dare Obasanjo on why JSON is more attractive than XML for Web-based mashups due to presenting a friendlier programming model for the browser. 

  • JSON vs. XML: Browser Security Model – Dare Obasanjo on why JSON is more attractive than XML for Web-based mashups due to circumventing some of the browser security constraints placed on XML.

Now Playing: Lady Gaga - Poker Face


 

April 13, 2009
@ 12:14 AM

Now Playing: Jay-Z - A Dream


 

Categories: Personal

Todd Hoff over on the High Scalability blog has an interesting post in the vein of my favorite mantra, disk is the new tape, titled Are Cloud Based Memory Architectures the Next Big Thing? where he writes

Why might cloud based memory architectures be the next big thing? For now we'll just address the memory based architecture part of the question, the cloud component is covered a little later.

Behold the power of keeping data in memory:


Google query results are now served in under an astonishingly fast 200ms, down from 1000ms in the olden days. The vast majority of this great performance improvement is due to holding indexes completely in memory. Thousands of machines process each query in order to make search results appear nearly instantaneously.

This text was adapted from notes on Google Fellow Jeff Dean's keynote speech at WSDM 2009.

Google isn't the only one getting a performance bang from moving data into memory. Both LinkedIn and Digg keep the graph of their social network in memory. Facebook has northwards of 800 memcached servers creating a reservoir of 28 terabytes of memory enabling a 99% cache hit rate. Even little guys can handle 100s of millions of events per day by using memory instead of disk.

The entire post is sort of confusing since it seems to mix ideas that should be two or three different blog posts into a single entry. Of the many ideas thrown around in the post, the one I find most interesting is the trend of treating in-memory storage as a core part of how a system functions, not just as an optimization that keeps you from having to go to disk.

The LinkedIn architecture is a great example of this trend. They have servers, which they call The Cloud, whose job is to cache the site's entire social graph in memory, and they have created multiple instances of this cached social graph. Going to disk to satisfy social graph related queries, which can require touching data for hundreds to thousands of users, is simply never an option. This is different from how you would traditionally treat a caching layer such as ASP.NET caching or typical usage of memcached.

To build such a memory-based architecture there are a number of features you need to consider that don't come out of the box in a caching product like memcached. The first is data redundancy, which is unsupported in memcached. There are forty instances of LinkedIn's in-memory social graph which have to be kept mostly in sync without putting too much pressure on the underlying databases. Another feature common to such memory-based architectures that you won't find in memcached is transparent support for failover. When your data is spread out across multiple servers, losing a server should not mean that an entire server's worth of data is no longer being served out of the cache. This is especially a concern when you have a decent sized server cloud because it should be expected that servers come and go out of rotation all the time given hardware failures. Memcached users can solve this problem by using libraries that support consistent hashing (my preference) or by keeping a server available as a hot spare with the same IP address as the downed server. The key problem with the lack of native failover support is that there is no way to automatically rebalance the workload on the pool as servers are added to and removed from the cloud.
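For what it's worth, here's a minimal sketch of the consistent hashing approach mentioned above, the one I prefer for handling failover in a memcached-style pool. It is illustrative only: real client libraries add virtual nodes, replication and health checks on top of this basic ring.

```python
import bisect
import hashlib


def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)


class ConsistentHashRing:
    """Toy hash ring: a key maps to the first server clockwise from its hash."""

    def __init__(self, servers):
        self._ring = sorted((_hash(s), s) for s in servers)

    def server_for(self, key: str) -> str:
        points = [point for point, _ in self._ring]
        index = bisect.bisect(points, _hash(key)) % len(self._ring)
        return self._ring[index][1]

    def remove(self, server: str) -> None:
        # When a server dies, only the keys it owned move to its neighbor;
        # the rest of the pool keeps serving its existing keys from memory.
        self._ring = [(p, s) for p, s in self._ring if s != server]


ring = ConsistentHashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
print(ring.server_for("user:1234"))
ring.remove("cache2:11211")          # simulate a hardware failure
print(ring.server_for("user:1234"))  # most keys keep their old assignment
```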

For large scale Web applications, if you're going to disk to serve data in your primary scenarios then you're probably doing something wrong. The path most scaling stories take is that people start with a database, partition it and then move to a heavily cache-based approach when they start hitting the limits of a disk-based system. This is now common knowledge in the pantheon of Web development. What doesn't yet seem to be part of the communal knowledge is the leap from being cache-based with the option to fall back to a database to simply being memory-based. The shift is slight but fundamental.

FURTHER READING

  • Improving Running Components at Twitter: Evan Weaver's presentation on how Twitter improved their performance by two orders of magnitude by increasing their usage of caches and improving their caching policies among other changes. Favorite quote, "Everything runs from memory in Web 2.0".

  • Defining a data grid: A description of some of the features that an in-memory architecture needs beyond simply having data cached in memory.

Now Playing: Ludacris - Last Of A Dying Breed [Ft. Lil Wayne]


 

Categories: Web Development

Last week, Digg announced the launch of the DiggBar which manages to combine two trends that Web geeks can't stand. It is both a URL shortener (whose problems are captured in the excellent post by Joshua Schachter on URL shorteners) and brings back the trend of one website putting another's content in a frame (which is criticized in detail in the Wikipedia article on framing on the World Wide Web).

The increasing popularity of URL shortening services has been fueled by the growth of Twitter. Twitter has a 140-character limit on posts, which means users sharing links on the site often have to find some way of shortening URLs to make their content fit within the limit. From my perspective, this is really a problem that Twitter should fix, given the amount of collateral damage the growth of these services may end up inflicting on the Web.

Some Web developers believe this problem can be solved by the judicious use of microformats. One such developer is Chris Shiflett who has written a post entitled Save the Internet with rev="canonical" which states the following

There's a new proposal ("URL shortening that doesn't hurt the Internet") floating around for using rev="canonical" to help put a stop to the URL-shortening madness. It sounds like a pretty good idea, and based on some discussions on IRC this morning, I think a more thorough explanation would be helpful. I'm going to try.

This is easiest to explain with an example. I have an article about CSRF located at the following URL:

http://shiflett.org/articles/cross-site-request-forgeries

I happen to think this URL is beautiful. :-) Unfortunately, it is sure to get mangled into some garbage URL if you try to talk about it on Twitter, because it's not very short. I really hate when that happens. What can I do?

If rev="canonical" gains momentum and support, I can offer my own short URL for people who need one. Perhaps I decide the following is an acceptable alternative:

http://shiflett.org/csrf

Here are some clear advantages this URL has over any TinyURL.com replacement:

  • The URL is mine. If it goes away, it's my fault. (Ma.gnolia reminds us of the potential for data loss when relying on third parties.)
  • The URL has meaning. Both the domain (shiflett.org) and the path (csrf) are meaningful.
  • Because the URL has meaning, visitors who click the link know where they're going.
  • I can search for links to my content; they're not hidden behind an indefinite number of short URLs.

There are other advantages, but these are the few I can think of quickly.

Let's try to walk through how this is expected to work. I type in a long URL like http://www.25hoursaday.com/weblog/2009/03/22/VideoStandardsForAggregatingActivityFeedsAndSocialAggregationServicesAtMIX09Conference.aspx into Twitter. Twitter allows me to post the URL and then crawls the site to see if it has a link tag with a rev="canonical" attribute. It finds one and then replaces the long URL with something like http://www.25hoursaday.com/weblog/MIX09talk, which is the alternate short URL I've created for my talk. What could go wrong? :)
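Here is roughly what that discovery step looks like in code, a sketch of what a service like Twitter would have to do for every link it sees. The parsing is deliberately simple; a real crawler would also need caching, timeouts and politeness rules, which is part of why this scheme puts so much work on the consuming service.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class RevCanonicalFinder(HTMLParser):
    """Collects the href of any <link rev="canonical"> tag on a page."""

    def __init__(self):
        super().__init__()
        self.short_url = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rev") == "canonical":
            self.short_url = attrs.get("href")


def shorten(long_url: str) -> str:
    """Return the publisher's own short URL if advertised, else the original."""
    with urlopen(long_url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    finder = RevCanonicalFinder()
    finder.feed(html)
    return finder.short_url or long_url


# This crawl has to happen for every URL in every tweet, against any site on
# the Web, before the tweet can be shortened this way.
print(shorten("http://shiflett.org/articles/cross-site-request-forgeries"))
```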

So for this to solve the problem, every site that could potentially be linked to from Twitter (i.e. every website in the world) needs to run their own URL shortening service. Then Twitter needs to make sure to crawl the website behind every URL in every tweet that flows through the system.  Oh yeah, and the fact is that the URLs still aren't as efficient as those created by sites like http://tr.im unless everyone buys a short domain name as well.

Sounds like a lot of stars have to align to make this useful to the general populace and not just a hack that is implemented by a couple dozen web geeks.  

Now Playing: Nas - Hate Me Now (feat. Puff Daddy)


 

Categories: Web Development

Last week, Douglas Bowman posted a screed against basing web design decisions strictly on usage data. In a post entitled Goodbye Google, he wrote

With every new design decision, critics cry foul. Without conviction, doubt creeps in. Instincts fail. “Is this the right move?” When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor? Ok, launch it. Data shows negative effects? Back to the drawing board. And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.

Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an environment like that. I’ve grown tired of debating such minuscule design decisions. There are more exciting design problems in this world to tackle.

I can’t fault Google for this reliance on data. And I can’t exactly point to financial failure or a shrinking number of users to prove it has done anything wrong. Billions of shareholder dollars are at stake. The company has millions of users around the world to please. That’s no easy task. Google has momentum, and its leadership found a path that works very well.

One thing I love about building web-based software is that you have the unique ability to try out different designs and test them in front of thousands or millions of users without incurring a massive cost. Experimentation practices such as A/B testing and multivariate testing enable web designers to measure the impact of their designs on the usability of a site with actual users, instead of having to resort to theoretical arguments about the quality of the design or waiting until after they've shipped to find out the new design is a mistake.
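At its core the machinery is simple. Below is a minimal sketch of how a site might deterministically bucket users into a control and a treatment and compare a conversion metric; real experimentation systems layer targeting, ramp-up and significance testing on top, but the basic shape is the same. The experiment name and conversion events here are made up for illustration.

```python
import hashlib
from collections import defaultdict


def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically assign a user to a variant so they always see the same design."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]


exposures = defaultdict(int)    # users who saw the page, per variant
conversions = defaultdict(int)  # users who completed the goal, per variant

def log_exposure(user_id: str) -> None:
    exposures[assign_variant(user_id, "checkout-recs")] += 1

def log_conversion(user_id: str) -> None:
    conversions[assign_variant(user_id, "checkout-recs")] += 1

# Simulated traffic standing in for real page views and checkout events.
for i in range(1000):
    uid = f"user{i}"
    log_exposure(uid)
    if i % 7 == 0:
        log_conversion(uid)

for variant in ("control", "treatment"):
    seen, converted = exposures[variant], conversions[variant]
    rate = converted / seen if seen else 0.0
    print(f"{variant}: {converted}/{seen} converted ({rate:.1%})")
```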

Experimentation is most useful when you have a clear goal or workflow the design is trying to achieve and you are worried that a design change may impact that goal. A great example of this is how shopping cart recommendations were shipped at Amazon, which is recalled in a great story told by Greg Linden in his post Early Amazon: Shopping cart recommendations, excerpted below

The idea of recommending items at checkout is nothing new. Grocery stores put candy and other impulse buys in the checkout lanes. Hardware stores put small tools and gadgets near the register. But here we had an opportunity to personalize impulse buys. It is as if the rack near the checkout lane peered into your grocery cart and magically rearranged the candy based on what you are buying. Health food in your cart? Let's bubble that organic dark chocolate bar to the top of the impulse buys. Steaks and soda? Get those snack-sized potato chip bags up there right away.

I hacked up a prototype. On a test site, I modified the Amazon.com shopping cart page to recommend other items you might enjoy adding to your cart. Looked pretty good to me. I started showing it around. While the reaction was positive, there was some concern. In particular, a marketing senior vice-president was dead set against it. His main objection was that it might distract people away from checking out -- it is true that it is much easier and more common to see customers abandon their cart at the register in online retail -- and he rallied others to his cause.

At this point, I was told I was forbidden to work on this any further. I was told Amazon was not ready to launch this feature. It should have stopped there. Instead, I prepared the feature for an online test. I believed in shopping cart recommendations. I wanted to measure the sales impact. I heard the SVP was angry when he discovered I was pushing out a test. But, even for top executives, it was hard to block a test. Measurement is good. The only good argument against testing would be that the negative impact might be so severe that Amazon couldn't afford it, a difficult claim to make. The test rolled out.

The results were clear. Not only did it win, but the feature won by such a wide margin that not having it live was costing Amazon a noticeable chunk of change. With new urgency, shopping cart recommendations launched.

This is a great example of using data to validate a design change instead of relying on gut feel. However, one thing that is often overlooked is that the changes still have to be well designed. The shopping cart recommendations feature on Amazon is designed in such a way that it doesn't break you out of the checkout flow. See below for a screenshot of the current shopping cart recommendation flow on Amazon.

On the above page, it is always very clear how to complete the checkout AND the process of adding an item to the cart is a one-click process that keeps you on the same page. Sadly, a lot of sites have tried to implement similar features but often end up causing users to abandon shopping carts because the design encourages users to break their flow as part of the checkout process.

One of the places experimentation falls down is when it is used to measure the impact of aesthetic changes to the site when these changes aren't part of a particular workflow (e.g. changing the company logo). Another problem with experimentation is that it may encourage focusing on metrics that are easily measurable to the detriment of other aspects of the user experience. For example, Google's famous holiday logos were a way to show off the fun, light-hearted aspect of their brand. Doing A/B testing on whether people do more searches with or without the holiday logos on the page would miss the point. Similarly, even if A/B testing does show that a design negatively impacts particular workflows, it can still be worth it if the message behind the design benefits the brand. For example, take this story from Valleywag, "I'm feeling lucky" button costs Google $110 million per year

Google cofounder Sergey Brin told public radio's Marketplace that around one percent of all Google searches go through the "I'm Feeling Lucky" button. Because the button takes users directly to the top search result, Google doesn't get to show search ads on one percent of all its searches. That costs the company around $110 million in annual revenue, according to Rapt's Tom Chavez. So why does Google keep such a costly button around?

"It's possible to become too dry, too corporate, too much about making money. I think what's delightful about 'I'm Feeling Lucky' is that it reminds you there are real people here," Google exec Marissa Mayer explained

~~~

Last night, I stumbled on a design change in Twitter that I suspect wouldn't have been rolled out if it had gone through experimentation first. On the Twitter blog, Biz Stone writes Replies Are Now Mentions 

We're updating the Replies feature and referring to it instead as Mentions. In your Twitter sidebar you'll now see your own @username tab. When you click that tab, you'll see a list of all tweets referencing your account with the @username convention anywhere in the tweet—instead of only at the beginning which is how it used to work. So for me it would be all mentions of @biz. For developers, this update will also be included in the APIs.

Compare the above sidebar with the previous one below. Which do you think will be more intuitive for new users to understand?

This would be a great candidate to test because the metric is straightforward: compare clicks on the replies tab by new users, using the old version as the control and the new version as the test. Then again, maybe they did A/B test it, which is why the "@username" text is used instead of "Mentions", which is even more unintuitive. :)
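For completeness, the comparison itself is a textbook two-proportion test. The numbers below are invented purely to show the arithmetic; the real experiment would use actual counts of new users who clicked the tab under each label.

```python
from math import sqrt


def two_proportion_z(clicks_a: int, users_a: int,
                     clicks_b: int, users_b: int) -> float:
    """z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / users_a, clicks_b / users_b
    pooled = (clicks_a + clicks_b) / (users_a + users_b)
    se = sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    return (p_b - p_a) / se


# Invented numbers: new users who clicked the tab under each label.
z = two_proportion_z(clicks_a=320, users_a=5000,   # old "Replies" tab (control)
                     clicks_b=275, users_b=5000)   # new "@username" tab (test)
print(f"z = {z:.2f}")  # |z| > 1.96 would indicate a significant difference at p < 0.05
```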

Now Playing: Jamie Foxx - Blame It (remix) (feat. T-Pain & Rosco)


 

Categories: Web Development