John McCrea of Plaxo has written a cleverly titled guest post on TechCrunchIT, Facebook Connect and OpenID Relationship Status: “It’s Complicated”, where he argues that Facebook Connect is a competing technology to OpenID but that the situation is complicated by Facebook developers engaging in discussions with the OpenID community. He writes:

You see, it’s been about a month since the first implementation of Facebook Connect was spotted in the wild over at CBS’s celebrity gossip site, TheInsider. Want to sign up for the site? Click a single button. A little Facebook window pops up to confirm that you want to connect via your Facebook account. One more click – and you’re done. You’ve got a new account, a mini profile with your Facebook photo, and access to that subset of your Facebook friends who have also connected their accounts to TheInsider. Oh, and you can have your activities on TheInsider flow into your Facebook news feed automatically. All that, without having to create and remember a new username/password pair for the site. Why, it’s just like the vision for OpenID and the Open Stack – except without a single open building block under the hood!
After the intros, Allen Tom of Yahoo, who organized the event, turned the first session over Max Engel of MySpace, who in turn suggested an alternative – why not let Facebook’s Julie Zhuo kick it off instead? And for the next hour, Julie took us through the details of Facebook Connect and the decisions they had to make along the way to get the user interface and user experience just right. It was not just a presentation; it was a very active and engaged discussion, with questions popping up from all over the room. Julie and the rest of the Facebook team were engaged and eager to share what they had learned.

What the heck is going on here? Is Facebook preparing to go the next step of open, switching from the FB stack to the Open Stack? Only time will tell. But one thing is clear: Facebook Connect is the best thing ever for OpenID (and the rest of the Open Stack). Why? Because Facebook has set a high bar with Facebook Connect that is inspiring everyone in the open movement to work harder and faster to bring up the quality of the UI/UX for OpenID and the Open Stack.

There are a number of points worth discussing from the above excerpt. The first is the implication that OpenID is an equivalent technology to Facebook Connect. This is clearly not the case. OpenID just allows you to delegate the act of authenticating a user to another website so the user doesn't need to create credentials (i.e. username + password) on your site. OpenID alone doesn't get you the user's profile data, nor does it allow you to pull in the authenticated user's social graph from the other site or publish activities to their activity feed. For that, you would need other "Open brand" technologies like OpenID Attribute Exchange, Portable Contacts and OpenSocial. So it is fairer to describe the contest as Facebook Connect vs. OpenID + OpenID Attribute Exchange + Portable Contacts + OpenSocial.
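To make concrete just how narrow OpenID's slice of the Facebook Connect feature set is, here is a minimal sketch of the one thing it does handle: redirecting a user to their identity provider for authentication. The parameter names come from the OpenID 2.0 specification; the endpoint and identifier URLs are hypothetical and the sketch uses only Python's standard library.

```python
from urllib.parse import urlencode

def build_openid_auth_url(op_endpoint, claimed_id, return_to, realm):
    """Build an OpenID 2.0 checkid_setup redirect URL.

    The relying party sends the user's browser here; the provider
    authenticates the user and redirects back to return_to. Note that
    this handles *only* authentication -- no profile data, no social
    graph, no activity stream.
    """
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": claimed_id,
        "openid.identity": claimed_id,
        "openid.return_to": return_to,
        "openid.realm": realm,
    }
    return op_endpoint + "?" + urlencode(params)

url = build_openid_auth_url(
    "https://op.example.com/auth",           # hypothetical provider endpoint
    "https://example.com/users/alice",       # hypothetical claimed identifier
    "https://rp.example.com/openid/return",  # where the provider sends the user back
    "https://rp.example.com/",
)
```

Everything beyond this redirect and the return-trip verification is out of scope for OpenID proper, which is exactly why the other Open Stack specs are needed to match Facebook Connect.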

The question then is who should we root for? At the end of the day, I don't think it makes a ton of sense for websites to have to target umpteen different APIs that do the same thing instead of targeting one standard implemented by multiple services. Specifically, it seems ridiculous to me that sites will have to code against Facebook Connect to integrate Facebook accounts, code against something else if they want to integrate MySpace accounts, and yet another API if they want to integrate LinkedIn accounts, and so on. This is an area that is crying out for standardization.

Unfortunately, the key company providing thought leadership in this area is Facebook, and for now they are building their solution with proprietary technologies instead of de jure or de facto ("Open brand") standards. This is unsurprising given that it takes three or four different specs, in varying states of completeness and created by different audiences, to deliver the scenarios they are currently interested in. What is encouraging is that Facebook developers are working with OpenID implementers by sharing their knowledge. However, OpenID isn't the only technology needed to satisfy this scenario and I wonder if Facebook will be similarly engaged with the folks working on Portable Contacts and OpenSocial.

Facebook Connect is a step in the right direction when it comes to bringing the vision of social network interoperability to fruition. The key question is whether we will see effective open standards emerge that target the same scenarios [which eventually even Facebook could adopt] or whether competitors will simply offer their own proprietary alternatives. So far it sounds like the latter is happening, which means unnecessary reinvention of the wheel for sites that want to support "connecting" with multiple social networking sites.

PS: If OpenID phishing is a concern now, when the user is redirected to the ID provider's site to log in, it seems Facebook Connect is even worse since all it provides is a pop-over. I wonder if this is because the Facebook folks think the phishing concerns are overblown.

Now Playing: 2Pac - Mind Made Up (feat. Daz, Method Man & Redman)


Early this week, Roy Fielding wrote a post entitled REST APIs must be hypertext-driven where he criticized the SocialSite REST API (a derivative of the OpenSocial REST API) for violating some constraints of the Representational State Transfer architectural style (aka REST). Roy's key criticisms were:

API designers, please note the following rules before calling your creation a REST API:

  • A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
  • A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC's functional coupling].
  • ..
  • A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

In reading some of the responses to Roy's post on programming.reddit, it seems there are a number of folks who found it hard to glean practical advice from Roy's post. I thought it would be useful to go over his post in more depth and with some examples.

The key thing to remember is that REST is about building software that scales to usage on the World Wide Web by being a good participant of the Web ecosystem. Ideally a RESTful API should be designed to be implementable by thousands of websites and consumed by hundreds of applications running on dozens of platforms with zero coupling between the client applications and the Web services. A great example of this is RSS/Atom feeds which happen to be one of the world's most successful RESTful API stories.

This notion of building software that scales to Web-wide usage is critical to understanding Roy's points above. His first point is that a RESTful API should primarily be concerned with data payloads and not with defining how URI end points handle various HTTP methods. For one, sticking to defining data payloads which are then made standard MIME types gives maximum reusability of the technology. The specifications for RSS 2.0 (application/rss+xml) and the Atom syndication format (application/atom+xml) primarily focus on defining the data format and how applications should process feeds independent of how they were retrieved. In addition, both formats are aimed at being standard formats that can be utilized by any Web site as opposed to being tied to a particular vendor or Web site, which has aided their adoption.

Unfortunately, few have learned from these lessons and we have people building RESTful APIs with proprietary data formats that aren't meant to be shared. My current favorite example of this is social graph/contacts APIs, which seem to be getting reinvented every six months. Google has the Contacts Data API, Yahoo! has their Address Book API, Microsoft has the Windows Live Contacts API, Facebook has their friends REST APIs and so on. Each of these APIs claims to be RESTful in its own way yet they are helping to fragment the Web instead of helping to grow it. There have been some moves to address this with the OpenSocial-influenced Portable Contacts API, but it too shies away from standard MIME types and instead creates dependencies on URL structures to dictate how the data payloads should be retrieved/processed.
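To see what "focus on the data format" buys you in practice, note that an Atom processor operates on the payload alone; nothing about it depends on which URL the document came from or which vendor produced it. A minimal sketch using only Python's standard library (the feed content is made up for illustration):

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def entry_titles(atom_xml):
    """Extract entry titles from an Atom document. Nothing here depends
    on how (or from where) the document was retrieved -- only on the
    standard media type's data format."""
    root = ET.fromstring(atom_xml)
    return [e.findtext(ATOM_NS + "title") for e in root.findall(ATOM_NS + "entry")]

feed = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <entry><title>First post</title></entry>
  <entry><title>Second post</title></entry>
</feed>"""

print(entry_titles(feed))  # → ['First post', 'Second post']
```

The same function works for a feed pulled over HTTP, read from disk, or handed over by an aggregator's cache, which is precisely the decoupling a proprietary contacts format forfeits.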

One bad practice Roy calls out, which is embraced by the Portable Contacts and SocialSite APIs, is requiring a specific URL structure for services that implement the API. Section 6.2 of the current Portable Contacts API spec states the following

A request using the Base URL alone MUST yield a result, assuming that adequate authorization credentials are provided. In addition, Consumers MAY append additional path information to the Base URL to request more specific information. Service Providers MUST recognize the following additional path information when appended to the Base URL, and MUST return the corresponding data:

  • /@me/@all -- Return all contact info (equivalent to providing no additional path info)
  • /@me/@all/{id} -- Only return contact info for the contact whose id value is equal to the provided {id}, if such a contact exists. In this case, the response format is the same as when requesting all contacts, but any contacts not matching the requested ID MUST be filtered out of the result list by the Service Provider
  • /@me/@self -- Return contact info for the owner of this information, i.e. the user on whose behalf this request is being made. In this case, the response format is the same as when requesting all contacts, but any contacts not matching the requested ID MUST be filtered out of the result list by the Service Provider.

The problem with this approach is that it assumes that every implementer will have complete control of their URI space and that clients should have URL structures baked into them. The reason this practice is a bad idea is well documented in Joe Gregorio's post No Fishing - or - Why 'robots.txt' and 'favicon.ico' are bad ideas and shouldn't be emulated, where he lists several reasons why hard coded URLs are a bad idea. The reasons against include lack of extensibility and poor support for people in hosted environments who may not fully control their URI space. The interesting thing to note is that both the robots.txt and favicon.ico scenarios eventually developed mechanisms to support using hyperlinks on the source page (i.e. noindex and rel="shortcut icon") instead of hard coded URIs, since that practice doesn't scale to Web-wide usage.
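The link-based alternative is straightforward to implement: instead of fetching a hardcoded path, a client parses the &lt;link&gt; elements out of a page it already has and follows whatever the server declared. A sketch using Python's built-in HTML parser (the page content and paths are invented for illustration):

```python
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collect <link rel=...> declarations from an HTML page, e.g. the
    rel="shortcut icon" and rel="alternate" feed links mentioned above."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            if "rel" in a and "href" in a:
                self.links[a["rel"]] = a["href"]

html = """<html><head>
  <link rel="shortcut icon" href="/images/site-icon.png">
  <link rel="alternate" type="application/atom+xml" href="/feeds/blog">
</head><body></body></html>"""

finder = LinkFinder()
finder.feed(html)
print(finder.links["shortcut icon"])  # the icon lives wherever the server says
```

The server is free to move its icon or feed anywhere in its URI space; clients discover the location instead of assuming it, which is exactly the property a fixed /@me/@all hierarchy gives up.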

The latest drafts of the OpenSocial specification have a great example of how a service can use existing technologies such as URI templates to make even complicated URL structures flexible and discoverable without forcing every client and service to hardcode a specific URL structure. Below is an excerpt from the discovery section of the current OpenSocial REST API spec

A container declares what collections and features it supports, and provides templates for discovering them, via a simple discovery document. A client starts the discovery process at the container's identifier URI; in a nutshell:

  1. Client GETs {container-url} with Accept: application/xrds+xml
  2. Container responds with either an X-XRDS-Location: header pointing to the discovery document, or the document itself.
  3. If the client received an X-XRDS-Location: header, follow it to get the discovery document.

The discovery document is an XML file in the same format (XRDS) used for OpenID and OAuth discovery:

<XRDS xmlns="xri://$xrds">
<XRD xmlns:simple="" xmlns="xri://$XRD*($v*2.0)" xmlns:os="" version="2.0">

This approach makes it possible for a service to expose the OpenSocial end points however it sees fit without clients having to expect a specific URL structure.
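The three-step discovery flow quoted above can be sketched as client code. To keep the example self-contained, the HTTP fetch is passed in as a function and stubbed out; the X-XRDS-Location header and application/xrds+xml type come from the excerpt, while the URLs are hypothetical:

```python
def discover_xrds(container_url, fetch):
    """Follow the OpenSocial discovery flow:
    1. GET the container URL asking for application/xrds+xml.
    2. If the response carries an X-XRDS-Location header, follow it.
    3. Otherwise the response body *is* the discovery document.

    `fetch(url, accept)` returns a (headers, body) pair, so a real HTTP
    client can be swapped in without changing this logic.
    """
    headers, body = fetch(container_url, accept="application/xrds+xml")
    location = headers.get("X-XRDS-Location")
    if location:
        headers, body = fetch(location, accept="application/xrds+xml")
    return body

# A stubbed fetch standing in for a real HTTP client:
def fake_fetch(url, accept):
    if url == "https://social.example.com/":
        return {"X-XRDS-Location": "https://social.example.com/xrds"}, ""
    return {}, "<XRDS xmlns='xri://$xrds'>...</XRDS>"

doc = discover_xrds("https://social.example.com/", fake_fetch)
```

Note that the only thing the client hardcodes is the entry-point URI and the media type; every other URL comes from the server's own responses.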

Similarly, links should be used for describing relationships between resources in the various payloads instead of expecting hard coded URL structures. Again, I'm drawn to the example of RSS & Atom feeds, where link relations are used for defining the permalink to a feed item, the link to related media files (i.e. podcasts), links to comments, etc. instead of applications expecting that every Web site that supports enclosures has a /@rss/{id}/@podcasts URL rather than just examining the <enclosure> element.
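For instance, a podcast client simply examines each item's &lt;enclosure&gt; element in whatever feed it is handed, with no URL convention involved. A small illustration (the feed content is made up):

```python
import xml.etree.ElementTree as ET

def enclosure_urls(rss_xml):
    """Pull media links out of an RSS 2.0 feed by examining each item's
    <enclosure> element rather than guessing at a URL structure."""
    root = ET.fromstring(rss_xml)
    return [enc.get("url")
            for enc in root.findall("./channel/item/enclosure")]

rss = """<rss version="2.0"><channel>
  <item>
    <title>Episode 1</title>
    <enclosure url="http://example.com/media/ep1.mp3"
               length="12345" type="audio/mpeg"/>
  </item>
</channel></rss>"""

print(enclosure_urls(rss))
```

The media file can live on a CDN, another domain, or anywhere else; the relationship is carried in the payload itself.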

Thus it is plain to see that hyperlinks are important both for discovery of service end points and for describing relationships between resources in a loosely coupled way.

Now Playing: Prince - When Doves Cry


October 19, 2008
@ 08:47 AM

Tim Bray has a thought provoking post on embracing cloud computing entitled Get In the Cloud where he brings up the problem of vendor lock-in. He writes

Tech Issue · But there are two problems. The small problem is that we haven’t quite figured out the architectural sweet spot for cloud platforms. Is it Amazon’s EC2/S3 “Naked virtual whitebox” model? Is it a Platform-as-a-service flavor like Google App Engine? We just don’t know yet; stay tuned.

Big Issue · I mean a really big issue: if cloud computing is going to take off, it absolutely, totally, must be lockin-free. What that means is that if I’m deploying my app on Vendor X’s platform, there have to be other vendors Y and Z such that I can pull my app and its data off X and it’ll all run with minimal tweaks on either Y or Z.

At the moment, I don’t think either the Amazon or Google offerings qualify.

Are we so deranged here in the twenty-first century that we’re going to re-enact, wide-eyed, the twin tragedies of the great desktop-suite lock-in and the great proprietary-SQL lock-in? You know, the ones where you give a platform vendor control over your IT budget? Gimme a break.

I’m simply not interested in any cloud offering at any level unless it offers zero barrier-to-exit.

Tim's post is about cloud platforms but I think it is useful to talk about avoiding lock-in when taking a bet on cloud based applications as well as when embracing cloud based platforms. This is especially true when you consider that moving from one application to another is a similar yet smaller scoped problem compared to moving from one Web development platform to another.

So let's say your organization wants to move from a cloud based office suite like Google Apps for Business to Zoho. The first question you have to ask yourself is whether it is possible to extract all of your organization's data from one service and import it without data loss into another. For business documents this should be straightforward thanks to standards like ODF and OOXML. However there are points to consider, such as whether there is an automated way to perform such bulk imports and exports or whether individuals have to manually export and/or import their online documents to these standard formats. Thus the second question is how expensive it is for your organization to move the data. The cost includes everything from the potential organizational downtime incurred by switching services to the actual IT department cost of moving all the data.

At this point, you then have to weigh the impact of all the links and references to your organization's data that will be broken by your switch. I don't just mean links to documents returning 404 because the documents are now hosted on a different domain, but more insidious problems like the broken experience of anyone who is using the calendar or document sharing feature of the service to give specific people access to their data. Also you have to ensure that email sent to your organization after the switch goes to the right place. Making this aspect of the transition smooth will likely be the most difficult part of the migration since it requires more control over application resources than application service providers typically give their customers.

Finally, you will have to evaluate which features you will lose by switching applications and ensure that none of them is mission critical to your business.

Despite all of these concerns, switching hosted application providers is mostly a tractable problem. Standard data formats make data migration feasible, although it might be unwieldy to extract the data from the service. In addition, Internet technologies like SMTP and HTTP have built-in ways to handle forwarding/redirecting references so that they aren't broken. However, although the technology makes it possible, the majority of hosted application providers fall far short of making it easy to fully migrate to or away from their service without significant effort.

When it comes to cloud computing platforms, you have all of the same problems described above and a few extra ones. The key wrinkle with cloud computing platforms is that there is no standardization of the APIs and platform technologies that underlie these services. The APIs provided by Amazon's cloud computing platform (EC2/S3/EBS/etc) are radically different from those provided by Google App Engine (Datastore API/Python runtime/Images API/etc). For zero lock-in to occur in this space, there need to be multiple providers of the same underlying APIs. Otherwise, migrating between cloud computing platforms will be more like switching your application from Ruby on Rails and MySQL to Django and PostgreSQL (i.e. a complete rewrite).
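One pragmatic mitigation, short of waiting for standardized cloud APIs, is to keep all vendor-specific calls behind a narrow interface the application owns, so that switching platforms means rewriting one adapter instead of the whole application. A deliberately simplified sketch (the interface, class, and method names are invented for illustration):

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The narrow storage interface the application codes against.
    Concrete adapters would wrap S3, App Engine's datastore, etc."""

    @abstractmethod
    def put(self, key, data): ...

    @abstractmethod
    def get(self, key): ...

class InMemoryStore(BlobStore):
    """A trivial adapter, useful for tests and as a template for
    vendor-specific implementations."""
    def __init__(self):
        self._data = {}

    def put(self, key, data):
        self._data[key] = data

    def get(self, key):
        return self._data[key]

def save_report(store: BlobStore, report_id, body):
    # Application code sees only BlobStore, never a vendor SDK.
    store.put("reports/" + report_id, body)

store = InMemoryStore()
save_report(store, "2008-10", b"quarterly numbers")
```

This doesn't eliminate lock-in (data gravity and operational tooling remain), but it shrinks the rewrite from "Rails to Django" scale down to one adapter module.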

In response to Tim Bray's post, Dewitt Clinton of Google left a comment which is excerpted below

That's why I asked -- you can already do that in both the case of Amazon's services and App Engine. Sure, in the case of EC2 and S3 you'll need to find a new place to host the image and a new backend for the data, but Amazon isn't trying to stop you from doing that. (Actually not sure about the AMI format licensing, but I assumed it was supposed to be open.)

In App Engine's case people can run the open source userland stack (which exposes the API you code to) on other providers any time they want, and there are plenty of open source bigtable implementations to choose from. Granted, bulk export of data is still a bit of a manual process, but it is doable even today and we're working to make it even easier.

Are you saying that lock-in is avoided only once the alternative hosts exist?

But how does Amazon or Google facilitate that, beyond getting licensing correct and open sourcing as much code as they can? Obviously we can't be the ones setting up the alternative instances. (Though we can cheer for them, like we did when we saw the App Engine API implemented on top of EC2 and S3.)

To Doug Cutting's very good point, the way Amazon and Google (and everyone else in this space) seem to be trying to compete is by offering the best value, in terms of reliability (uptime, replication) and performance (data locality, latency, etc) and monitoring and management tools. Which is as it should be.

Although Dewitt is correct that Google and Amazon are not explicitly trying to lock customers into their platforms, the fact is that today if a customer has heavily invested in either platform then there isn't a straightforward way to extricate themselves and switch to another vendor. In addition, there is not a competitive marketplace of vendors providing standard/interoperable platforms as there is with email hosting or Web hosting providers.

As long as these conditions remain the same, it may be that lock-in is too strong a word to describe the situation, but it is clear that the options facing adopters of cloud computing platforms aren't great when it comes to vendor choice.

Now Playing: Britney Spears - Womanizer


Categories: Platforms | Web Development

October 18, 2008
@ 02:18 AM

At 4:58 AM this morning, Nathan Omotoyinbo Obasanjo became the newest addition to our family. He was a healthy 9 pounds 6 ounces and was born at the Eastside Birth Center in Bellevue. His journey into our home has been fraught with delays and some drama which I'll recount here for posterity.

Tuesday – October 14th
At around midnight his mother let me know that she'd been having contractions for the past few hours and was ready to call the midwives. We drove to the birth center and arrived sometime before 2 AM. We were met by the midwife on call (Loren Riccio), who was later joined by two interns (not sure if that's what they're called). We also called my mother-in-law and she joined us there about 30 minutes later. After a few hours and some time spent getting in and out of the tub (we planned to have a water birth), it became clear that my wife was either going through pre-labor or false labor and we were sent home at around 6 AM. We got home and I slept for about three or four hours, then had to rush into work because I was scheduled to give a talk on the platform we built to power a key feature in Windows Live's Wave 3 release at Microsoft. I got there on time, the presentation was well received and I got to chat with my management folks about some blog drama that blew up the previous night. When I scheduled the presentation, I'd assumed Nathan would already have been born since the midwives gave us a due date of October 7th. Thankfully, my wife's mom stayed with her at home so I didn't have to choose between leaving my wife by herself at home and giving the talk. Later that day we went in for our previously scheduled midwife appointment with Christine Thain. She suggested that we come in the next day if labor didn't kick in that night.
Wednesday - October 15th
We actually went in to see Christine twice that day. We went in early in the morning and she gave us some herbs (Gossypium) that are supposed to encourage the onset of labor. So we had a delicious lunch at Qdoba and ran a bunch of errands before going back to see Christine later that evening. She checked Jenna's progress and it didn't look like the baby was ready to be born yet. At this point we started questioning whether we wanted to stay the course with a natural childbirth and wait possibly another week before the baby was born, or whether we wanted to go to a hospital and have an induction of labor. To take our minds off the waiting game, we decided to watch Don't Mess with the Zohan as a lighthearted diversion to keep our spirits up. It didn't help much, especially since the movie was awful. During the movie, Jenna started having contractions and after confirming that they seemed more productive than those from Monday night, we called the midwife and the mother-in-law then rushed to the birth center. Unfortunately, it turned out to be another false alarm. At this point we started feeling like the boy who cried wolf, especially since we'd had the two interns get up out of bed twice during the past two days.
Thursday - October 16th
Around 5AM Jenna woke me up and told me that she'd been having contractions for the past hour. Since we'd already scheduled an early morning checkup with Christine for 8AM we didn't feel the need to call the birth center's emergency number. When we got to the birth center there was already someone else going through labor in the birth suite. So we had to have an office visit where we learned that these contractions weren't productive either. At this point we made the call that we'd go in to the hospital on Friday to have the baby induced and had Christine make an appointment. Later that day we were grocery shopping and we got a call from Christine. We decided to go in one more time to the birth center to see if we could get one last checkup before going to the hospital on Friday. When we got there the lady from the morning was still in labor almost eleven hours later. So we had another office visit and discussed the pros & cons of going to the hospital versus trying to wait out a natural birth. At the end of the talk we stuck with the decision to go to the hospital although that was dependent on there being available beds.
Friday - October 17th
At about 1:30 AM I'm awoken by my wife who's going through very painful contractions. After timing the contractions for a while we decide to go into the birth center. This time I don't call my mother-in-law until after we've arrived at the birth center and the progress of the labor was confirmed. I also send a text message to my mom telling her yet again that we're having the baby and this time it's for real. Nathan was born just over 2 hours after we arrived at the birth center. Later on we found out that the lady who'd been in labor for the entirety of the previous day ended up being sent to the hospital and had her baby at around the same time Nathan was being born.

So far, our previous addition to the family has been getting along great with Nathan.

Now Playing: Jadakiss - The Champ Is Here


Categories: Personal

I just found out via an article by Dan Farber that Yahoo! has rolled out a new "universal profile" and decided to give it a quick look. As I was creating my profile, I saw an interesting option that I thought was worth commenting on, which is captured in the screenshot below

The circled text says

Allow my connections to share my information labeled “My Connections” with third-party applications they install and developers who made the applications. Learn More

The reason for this feature harkens back to the Robert Scoble vs. Facebook saga where Robert Scoble was temporarily banned from Facebook for extracting personal information about his Facebook friends from Facebook into Plaxo. As I outlined in my post on the topic, Breaking the Social Contract: My Data is not Your Data, the problem was that Scoble's friends on Facebook may have been OK with sharing their personal information with him on Facebook but not with him sharing that information with other services.

With this feature, Yahoo! has allowed users to opt in to allowing an application used by a member of their social network to access their personal information. This is a very user-centric approach and avoids the draconian practice of services like Facebook that get around this issue by not providing any personally identifiable user information via their APIs. Instead a user can specify which pieces of their personal information they don't mind being accessed by 3rd party applications being used by their friends using the option above. Nicely done.

The only problem I can see is that this is a pretty nuanced option which users may not understand clearly and may overlook since it is just another checkbox in the profile creation flow. Then again, it is hard to imagine how to introduce users to this concept after they have created their profile. 

Kudos to the Yahoo! folks on getting this feature and the rest of their universal profile out there.


Categories: Social Software

Allen Tom has an interesting post on the Yahoo! Developer blog entitled Yahoo! Releases OpenID Research where he shares the results of some usability studies the folks at Yahoo! have been doing around OpenID. The concluding paragraphs of his post are particularly interesting and are excerpted below

I'm happy to announce that Yahoo! is releasing the results of a usability study that we did for OpenID. Our test subjects were several experienced Yahoo! users (representative of our mainstream audience) who were observed as they tried to sign into a product review site using the Yahoo OpenID service.
On the Yahoo! side of things, we streamlined our OP (OpenID Provider) last week, and removed as much as we could. We removed the CAPTCHA and slimmed down the OP to just a single screen, and focused the UI to get the user back to the RP. We expect that RPs will enjoy a much higher success rate for users signing in with their Yahoo OpenID.

On the RP (Relying Party) side of things, our recommendation is that they emphasize to users that they can sign in with an existing account, specifically their YahooID. We believe that the YahooID, as well as IDs from other providers, have higher brand awareness than OpenID. We also believe that first time users signing in with an OpenID should be able to go directly to their intended destination after signing in, instead of having to complete additional registration. Hopefully, as SimpleReg/AttributeExchange are more widely supported (Yahoo does not currently support them), relying parties will no longer feel the need to force the user through an additional registration form after signing in with an OpenID.

It's nice to see how much of this dovetails with my post on Things to Keep in Mind when Implementing OpenID on Your Website. In that post, I pointed out that the key risk of using OpenID on your Web site (i.e. being a Relying Party) is that there is a high risk of losing users if the OpenID sign-in flow is more complicated than simply having the user sign-up for your site. The Yahoo! usability study points to the fact that this seems to be the common case in typical OpenID deployments.

Actually there are two problems. The first is that most people don't know what OpenID is, so simply stating that people can use OpenIDs to log in to your site, or using the logo, may work for geeks but doesn't work for the typical Web user. The risk here is that the work of deploying OpenID on your site ends up being wasted. The second problem is the risk of losing the user after they decide to use OpenID to sign in, either due to an unintuitive user experience on your site (e.g. having to enter an OpenID URL) or on the site of the OpenID provider (e.g. lots of jargon with no clear call to action).

I did find it interesting that Yahoo! is recommending that services prefer using the brand of the target services whose credentials you plan to accept [especially if you white list the OpenID providers you support] instead of using the OpenID brand, since it isn't recognizable to the typical Web user. I tend to agree with this: OpenID is a means to an end, not an end in itself, so it is weird to put it front and center in an end-user-facing experience. Talking explicitly about OpenID should probably happen at the developer-to-developer level. I feel the same way about RSS and other Web technologies for connecting services together.

The other interesting point is that a lot of services still require users to go through a sign-up flow after logging in with an OpenID, so the only thing they've saved the user is picking a username (which would probably have been their email address) and password. That saving doesn't seem worth the extra complexity of sending the user through an OpenID provider. I agree with Tom that if more OpenID providers supported OpenID Attribute Exchange then the need for post-login account creation would likely disappear, since the Relying Party would get the basic information it needs from the OpenID provider.
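To illustrate how small the protocol change is: OpenID Simple Registration just adds a couple of namespaced parameters to the authentication request so the provider can return basic profile fields along with its assertion. A sketch of appending them to an existing auth request URL (the field names come from the SReg 1.1 extension; the URLs are hypothetical):

```python
from urllib.parse import urlencode

def add_sreg(auth_url, required_fields):
    """Append OpenID Simple Registration parameters to an auth request
    so the provider can hand back basic profile data (e.g. nickname,
    email) and the relying party can skip a follow-up signup form."""
    params = {
        "openid.ns.sreg": "http://openid.net/extensions/sreg/1.1",
        "openid.sreg.required": ",".join(required_fields),
    }
    sep = "&" if "?" in auth_url else "?"
    return auth_url + sep + urlencode(params)

url = add_sreg("https://op.example.com/auth?openid.mode=checkid_setup",
               ["nickname", "email"])
```

If the provider honors the request, the fields come back in the positive assertion and the relying party can create a usable account with no extra registration screen.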

In conclusion, the typical way OpenID is being implemented on the Web today leads to more costs than benefits. Hopefully, services will take to heart the lessons from Yahoo's usability study and we'll start seeing smarter usage of OpenID that benefits both users and the services that are adopting the technology.

Now Playing: Leona Lewis - Better In Time


Categories: Web Development

As I read about the U.K. partially nationalizing major banks and the U.S. government's plan to do the same, it makes one wonder how the financial system could have become so broken that these steps are even necessary. The more I read, the more it seems clear that our so-called "free" markets had weaknesses built into them that didn't take human nature into account. Let's start with the following quote from the MSNBC article about the U.K. government investing in major banks

As a condition of the deal, the government has required the banks to pay no bonuses to board members at the banks this year.

British Treasury chief Alistair Darling, speaking with Brown Monday, said it would be "nonsense" for board members to be taking their bonuses. The government also insisted that the bulk of future bonuses be paid in shares to ensure that bonuses encourage management to take a more long-term approach to profit making.

The above statement makes it sound like the board members of the various banks were actually on track to make their bonuses, even though the media makes it sound like they are guilty of gross incompetence or some degree of negligence given the current state of the financial markets. If that is the case, how come the vast majority of the largest banks in the world seem to have the same problem of boards and CEOs reaping massive rewards while effectively running their companies into the ground? How come the "free" market didn't work efficiently to discover and penalize these companies before we got to this juncture?

One reason for this problem is outlined by Philip Greenspun in his post Time for corporate governance reform? where he writes

What would it take to restore investor confidence in the U.S. market?  How about governance reform?

Right now the shareholders of a public company are at the mercy of management.  Without an expensive proxy fight, the shareholders cannot nominate or vote for their own representatives on the Board of Directors.  The CEO nominates a slate of golfing buddies to serve on the Board, while he or she will in turn serve on their boards.  Lately it seems that the typical CEO’s golfing buddies have decided on very generous compensation for the CEO, often amount to a substantial share of the company’s profits.  The golfing buddies have also decided that the public shareholders should be diluted by stock options granted to top executives and that the price on those options should be reset every time the company’s stock takes a dive (probably there is a lot of option price resetting going on right now!  Wouldn’t want your CEO to lose incentive).

Corporations are supposed to operate for the benefit of shareholders.  The only way that this can happen is if a majority of Directors are nominated by and selected by shareholders.  It may have been the case that social mores in the 1950s constrained CEO-nominated Boards from paying their friend $50 million per year, but those mores are apparently gone and the present structure in which management regulates itself serves only to facilitate large-scale looting by management.

For one, the incentive system for corporate leadership is currently broken. As Phil states, companies (aka the market) have made it hard for shareholders to affect decision making at the top of major corporations without expensive proxy fights, and thus the main [counterproductive] recourse they have is selling their shares. And even then, the cronyism between boards and executive management is such that the folks at the top still figure out how to get paid big bucks even if the stock has been taking a beating due to shareholder disaffection.

Further problems are caused by contagion, where people see one group getting rewarded for particular behavior and then want to join in the fun. Below is an excerpt from a Harvard Business School posting summarizing an interview with Warren Buffett entitled Wisdom of Warren Buffett: On Innovators, Imitators, and Idiots

At one point, his interviewer asked the question that is on all our minds: "Should wise people have known better?" Of course, they should have, Buffett replied, but there's a "natural progression" to how good new ideas go wrong. He called this progression the "three Is." First come the innovators, who see opportunities that others don't. Then come the imitators, who copy what the innovators have done. And then come the idiots, whose avarice undoes the very innovations they are trying to use to get rich.

The problem, in other words, isn't with innovation--it's with the idiocy that follows. So how do we as individuals (not to mention as companies and societies) continue to embrace the value-creating upside of creativity while guarding against the value-destroying downsides of imitation? The answer, it seems to me, is about values--about always being able to distinguish between that which is smart and that which is expedient. And that takes discipline. Can you distinguish between a genuine innovation and a mindless imitation? Are you prepared to walk away from ideas that promise to make money, even if they make no sense?

It's not easy--which is why so many of us fall prey to so many bad ideas. "People don't get smarter about things that get as basic as greed," Warren Buffett told his interviewer. "You can't stand to see your neighbor getting rich. You know you're smarter than he is, but he's doing all these [crazy] things, and he's getting [rich, so] pretty soon you start doing it."

As Warren Buffett points out, our financial markets and corporate governance structures do not have safeguards that prevent greed from taking over and destroying the system. The underlying assumption in a capitalist system is that greed is good if it is properly harnessed. The problem we have today is that people have moved faster than the rules and regulations meant to keep greed in check, and in some cases have successfully argued against those rules only for the decisions to come back and bite us on the butt.

So what does this have to do with designing social applications and other software systems? Any system that requires human interaction has to ensure that it takes into account the variations in human behavior and not just focus on the ideal user. This doesn't just mean malicious users and spammers who will have negative intentions towards the system. It also includes regular users who will unintentionally misuse or outwit the system in ways that the designers may not have expected.

A great example of this is Greg Linden's post on the Netflix Prize at KDD 2008 where he writes

Gavin Potter, the famous guy in a garage, had a short paper in the workshop, "Putting the collaborator back into collaborative filtering" (PDF). This paper has a fascinating discussion of how not assuming rationality and consistency when people rate movies and instead looking for patterns in people's biases can yield remarkable gains in accuracy. Some excerpts:

When [rating movies] ... a user is being asked to perform two separate tasks.

First, they are being asked to estimate their preferences for a particular item. Second, they are being asked to translate that preference into a score.

There is a significant issue ... that the scoring system, therefore, only produces an indirect estimate of the true preference of the user .... Different users are translating their preferences into scores using different scoring functions.

[For example, people] use the rating system in different ways -- some reserving the highest score only for films that they regard as truly exceptional, others using the score for films they simply enjoy .... Some users [have] only small differences in preferences of the films they have rated, and others [have] large differences .... Incorporation of a scoring function calibrated for an individual user can lead to an improvement in results.

[Another] powerful [model] we found was to include the impact of the date of the rating. It seems intuitively plausible that a user would allocate different scores depending on the mood they were in on the date of the rating.

Gavin has done quite well in the Netflix Prize; at the time of writing, he was in eighth place with an impressive score of .8684.
Gavin's paper is a light and easy read. Definitely worthwhile. Gavin's work forces us to challenge our common assumption that people are objective when providing ratings, instead suggesting that it is quite important to detect biases and moods when people rate on a 1..5 scale.

There were two key insights in Gavin Potter's paper related to how people interact with a rating system on a 1 to 5 scale. The first is that some people have a coarser-grained scoring methodology than others (e.g. John only rates movies as 1 for "waste of time", 3 for "satisfactory" and 5 for "would watch again", while Jane's movie rating system ranges from 2.5 stars to 5 stars). The second insight is that you can detect and correct for when a user is having a crappy day versus a good day by seeing if they rated a lot of movies on a particular day and whether the average rating is at an extreme (e.g. Jane rates ten movies on Saturday and gives them an average score under 3 stars).

The fact that users will treat your 1 to 5 rating scale as a 2.5 to 5 rating scale or may rate a ton of movies poorly because they had a bad day at the office is a consideration that a recommendation system designer should keep in mind if they don't want to give consistently poor results to users in certain cases.  This is human nature in effect.
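As an illustration of both ideas, here's a minimal sketch of a per-user calibration pass. The z-score-plus-day-offset scheme and the function name `calibrate` are my own simplification for the sake of example, not Potter's actual model.

```python
from collections import defaultdict
from statistics import mean, pstdev

def calibrate(ratings):
    """Rescale raw 1-5 ratings into per-user z-scores so that a user who
    only rates between 2.5 and 5 stars is comparable to one who uses the
    full scale, then subtract a same-day offset to dampen "bad day" bias.

    `ratings` is a list of (user, day, score) tuples; returns a parallel
    list of calibrated scores.
    """
    by_user = defaultdict(list)
    by_user_day = defaultdict(list)
    for user, day, score in ratings:
        by_user[user].append(score)
        by_user_day[(user, day)].append(score)

    calibrated = []
    for user, day, score in ratings:
        mu = mean(by_user[user])
        sigma = pstdev(by_user[user]) or 1.0  # guard a zero-variance user
        z = (score - mu) / sigma
        # How far this day's average sits from the user's overall average,
        # in the same z-units; subtracting it cancels a day-level mood shift.
        day_offset = (mean(by_user_day[(user, day)]) - mu) / sigma
        calibrated.append(z - day_offset)
    return calibrated
```

The point of normalizing per user rather than globally is exactly the one in the paper: the raw number a user types in is an indirect, personally-scaled estimate of their preference, so a recommender that compares raw scores across users is comparing apples to oranges.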

Another great example of how human nature foils our expectations of how users should behave is the following excerpt from the Engineering Windows 7 blog post about User Account Control

One extra click to do normal things like open the device manager, install software, or turn off your firewall is sometimes confusing and frustrating for our users. Here is a representative sample of the feedback we’ve received from the Windows Feedback Panel:

  • “I do not like to be continuously asked if I want to do what I just told the computer to do.”
  • “I feel like I am asked by Vista to approve every little thing that I do on my PC and I find it very aggravating.”
  • “The constant asking for input to make any changes is annoying. But it is good that it makes kids ask me for password for stuff they are trying to change.”
  • “Please work on simplifying the User Account control.....highly perplexing and bothersome at times”

We understand adding an extra click can be annoying, especially for users who are highly knowledgeable about what is happening with their system (or for people just trying to get work done). However, for most users, the potential benefit is that UAC forces malware or poorly written software to show itself and get your approval before it can potentially harm the system.

Does this make the system more secure? If every user of Windows were an expert that understands the cause/effect of all operations, the UAC prompt would make perfect sense and nothing malicious would slip through. The reality is that some people don’t read the prompts, and thus gain no benefit from them (and are just annoyed). In Vista, some power users have chosen to disable UAC – a setting that is admittedly hard to find. We don’t recommend you do this, but we understand you find value in the ability to turn UAC off. For the rest of you who try to figure out what is going on by reading the UAC prompt, there is the potential for a definite security benefit if you take the time to analyze each prompt and decide if it’s something you want to happen. However, we haven’t made things easy on you - the dialogs in Vista aren’t easy to decipher and are often not memorable. In one lab study we conducted, only 13% of participants could provide specific details about why they were seeing a UAC dialog in Vista. Some didn’t remember they had seen a dialog at all when asked about it.

How do you design a dialog prompt to warn users about the potential risk of an action they are about to take if they are so intent on clicking OK and getting the job done that they forget that there was even a warning dialog afterwards?

There are a lot more examples out there but the fundamental message is the same: if you are designing a system that is going to be used by humans, then you should account for the various ways people will try to outwit the system simply because they can't help themselves.

Now Playing: Kanye West - Love Lockdown


October 12, 2008
@ 08:24 PM

Bloglines stopped polling my feed over a week ago, probably due to a temporary error in my feed. I've been trying to find a way to get them to re-enable my feed given that, for the 1,670 people subscribed to my feed on their service, my blog hasn't been updated since October 3rd. Unfortunately there doesn't seem to be a way to contact the product team.

I sent a mail via the contact form but didn't get a response and their support forum is overrun with spam which leads me to believe it has been abandoned. Any ideas on how I can get Bloglines to start polling my feed again?

Now Playing: Babyface - When Can I See You


Categories: Personal

Some of my readers who missed the dotcom boom and bust of the early 2000s may not be familiar with FuckedCompany, a Web site dedicated to gloating about layoffs and other misfortunes at Web companies as the tech bubble popped. Although fairly popular at the turn of the century, the Web site was nothing more than postings about which companies had recently had layoffs, rumors of companies about to have layoffs, and snarky comments about stock prices. You can read some of the old postings yourself in the WayBack Machine for FuckedCompany. I guess schadenfreude is a national pastime.

The current financial crisis has led to the worst week in the history of the Dow Jones and S&P 500 indexes, as well as worldwide turmoil in financial markets to the point where countries like Austria, Russia, Iceland, Romania and Ukraine had to suspend trading on their stock markets. This has clearly pointed to the need for another schadenfreude-filled website which gloats about the misfortunes of others. Thankfully TechCrunch has stepped up to the plate. Here are some of their opening morsels as they begin their transformation from tech bubble hypesters into its gloating eulogizers

For some reason, I was expecting more leadership from Arrington and his posse. Anyway, instead of reading and encouraging this sort of garbage from TechCrunch, it would be great if more people kept posts like Dave McClure's Fear is the Mind Killer of the Silicon Valley Entrepreneur (we must be Muad'Dib, not Clark Kent) in mind. The last thing we need is popular blogs AND the mass media spreading despair and schadenfreude at a time like this.

Now Playing: T.I. - Dead And Gone (Featuring Justin Timberlake)


Categories: Current Affairs

John Battelle has a blog post entitled When Doesn't It Pay To Pay Attention To Search Quality? which contains the following statement and screenshot

the top result is the best result - and it's a paid link.

[Screenshot: "Bad Results" - a search results page where the top result is a paid link]

In the past I've talked about Google's strategy tax: the conflict between increasing the relevance of their search results and increasing the relevance of their search ads. The more relevant Google's "organic" results are, the less likely users are to click on their ads, which means the less money the company makes. This effectively puts a cap on how good Google's search quality can get, especially given the company's obsessive measurement of every small change they make to get the most bang for the buck.

When I first wrote about this, the conclusion from some quarters was that this inherent conflict of interest would eventually be Google's undoing since there were search innovations that they would either be slow to adopt or put on the back burner so as not to harm their cash cow. However John Battelle's post puts another spin on the issue. As long as people find what they want it doesn't matter if the result is "organic" or an ad.

As Jeremy Zawodny noted in his post The Truth about Web Navigation 

Oh, here's a bonus tip: normal people can't tell the difference between AdSense style ads and all the other links on most web sites. And almost the same number don't know what "sponsored results" on the Search Results Page are either. It's just a page of links to them. They click the ones that look like they'll get them what they want. It's that simple.

Even more interesting is the comment by Marshall Kirkpatrick in response to Jeremy's post

The part of your post about AdWords reminds me of a survey I read awhile ago. Some tiny percentage of users were able to tell the difference between paid and natural search results, then once away from the computer almost all of them when asked said that the best ways to make it clear would be: putting paid links in a colored box, putting them in a different section of the page and putting the words "sponsored links" near them!! lol

What this means in practice is that the relevance of Google's ads for a search term will increase relative to the relevance of the organic search results for that term. John Battelle has shown one example of this in his blog post. Over time this trend will get more pronounced. The problem for Google's competitors is that this doesn't necessarily mean Google's search experience will get worse over time, since the ad relevance will likely make up for any deficiencies in the organic results (at least for commercial queries – where the money is). What competitors will have to learn to exploit is Google's tendency to push users to AdWords results by making its organic results satisfactory instead of great.

For example, consider the following search results page which my wife just got while looking for an acupuncturist in Bellevue, Washington

The interesting thing about the organic results is that they are relevant but very cluttered, thus leading to the paradox of choice. On the other hand, the sponsored links give you a name and a description of the person's qualifications in the first result. Which result do you think my wife clicked?

Now why do you think Google ended up going with this format for commercial/business-based search results?  The average Web user is a satisficer and will look for the clearest and simplest result which is in the ads. However the geeks and power users (i.e. people who don't click on ads) are often maximizers when it comes to search results and are thus served by the organic results.

The question for you is whether you'd consider this a weakness or a strength of Google Search?

Now Playing: Missy Elliott - Hit Em Wit Da Hee (remix) (feat. Lil Kim & Mocha)