Over the past few days I've been mulling over the recent news from Yahoo! that they are building a Facebook-like platform based on OpenSocial. I find this interesting given that a number of people have come to the conclusion that Facebook is slowly killing its widget platform in order to replace it with Facebook Connect.

The key reason developers believe Facebook is killing off its platform is captured in Nick O'Neill's post entitled Scott Rafer: The Facebook Platform is Dead which states

When speaking at the Facebook developer conference today in Berlin, Scott Rafer declared the Facebook platform dead. He posted statistics including one that I posted that suggests Facebook widgets are dead. Lookery’s own statistics from Quantcast suggest that their publisher traffic has been almost halved since the new site design was released. Ultimately, I think we may see an increase in traffic as users become educated on the new design but there is no doubt that developers were impacted significantly.

So what is Scott’s solution for developers looking to thrive following the shift to the new design? Leave the platform and jump on the Facebook Connect opportunity.

The bottom line is that by moving applications off of the profile page in their recent redesign, Facebook has reduced the visibility of these applications, thus reducing their page views and their ability to spread virally. Some may think that the impact of these changes was unforeseen; however, the Facebook team is obsessive about testing the impact of their UX changes, so it is extremely unlikely that they weren't aware the redesign would negatively impact Facebook applications.

The question to ask then is why Facebook would knowingly damage a platform which has been uniformly praised across the industry and has had established Web players like Google and Yahoo! scrambling to deploy copycat efforts. Alex Iskold over at ReadWriteWeb believes he has the answer in his post Why Platforms Are Letting Us Down - And What They Should Do About It which contains the following excerpt

When the Facebook platform was unveiled in 2007, it was called genius. Never before had a company in a single stroke enabled others to tap into millions of its users completely free. The platform was hailed as a game changer under the famous mantra "we built it and they will come". And they did come, hundreds of companies rushing to write Facebook applications. Companies and VC funds focused specifically on Facebook apps.

It really did look like a revolution, but it didn't last. The first reason was that Facebook apps quickly arranged themselves on a power law curve. A handful of apps (think Vampires, Byte Me and Sell My Friends) landed millions of users, but those in the pack had hardly any. The second problem was, ironically, the bloat. Users polluted their profiles with Facebook apps and no one could find anything in their profiles. Facebook used to be simple - pictures, wall, friends. Now each profile features a zoo of heterogeneous apps, each one trying to grab the user's attention to take advantage of the network effect. Users are confused.

Worst of all, the platform had no infrastructure to monetize the applications. When Sheryl Sandberg arrived on the scene and looked at the balance sheet, she spotted the hefty expense that was the Facebook platform. Trying to live up to a huge valuation isn't easy, and in the absence of big revenues people rush to cut costs. Since it was both an expense and users were confused less than a year after its glorious launch, Facebook decided to revamp its platform.

The latest release of Facebook, which was released in July, makes it nearly impossible for new applications to take advantage of the network effect. Now users must first install the application, then find it under the application menu or one of the tabs, then check a bunch of boxes to add it to their profile (old applications are grand-daddied in). Facebook has sent a clear message to developers - the platform is no longer a priority.

Alex's assertion is that after Facebook looked at the pros and cons of their widget platform, the company came to the conclusion that the platform was turning into a cost center instead of being a way to improve the value of Facebook to its users. There is evidence that applications built on Facebook's platform did cause negative reactions from its users. For example, there was the creation of the "This has got to stop…pointless applications are ruining Facebook" group which at its height had half a million Facebook users protesting the behavior of Facebook apps. In addition, the creation of Facebook's Great Apps program along with the guiding principles for building Facebook applications implies that the Facebook team realized that applications being built on their platform typically don't have their users' best interests at heart.

This brings up the interesting point that although there has been a lot of discussion on how Facebook apps make money, there haven't been similar conversations about how the application platform improves Facebook's bottom line. There is definitely a win-win equation when so-called "great apps" like iLike and Causes, which positively increase user engagement, are built on Facebook's platform. However, there is also a long tail of applications that try their best to spread virally at the cost of decreasing user satisfaction in the Facebook experience. These dissatisfied users likely end up reducing their usage of Facebook, thus actually costing Facebook users and page views. It is quite possible that the few "great apps" built on the Facebook platform do not outweigh the number of not-so-great apps built on the platform which have caused users to protest in the past. This would confirm Alex Iskold's suspicions about why Facebook has started sabotaging the popularity of applications built on its platform and has started emphasizing partnerships via Facebook Connect.


A similar situation has occurred with regards to the Java platform and Sun Microsystems. The sentiment is captured in a JavaWorld article by Josh Fruhlinger entitled Sun melting down, and where's Java? which contains the following excerpt

one of the most interesting things about the coverage of the company's problems is how Java figures into the conversation, which is exactly not at all. In most of the articles, the word only appears as Sun's stock ticker; the closest I could find to a mention is in this AP story, which notes that "Sun's strategy of developing free, 'open-source' software and giving it away to spur sales of its high-end computer servers and support services hasn't paid off as investors would like." Even longtime tech journalist Ashlee Vance, when gamely badgering Jon Schwartz for the New York Times about whether Sun would sell its hardware division and focus on software, only mentions Solaris and MySQL in discussing the latter.

Those in the Java community no doubt believe that Java is too big to fail, that Sun can't abandon it because it's too important, even if it can't tangibly be tied to anything profitable. But if Sun's investors eventually dismember the company to try to extract what value is left in it, I'm not sure where Java will fit into that plan.

It is interesting to note that after a decade of investment in the Java platform, it is hard to point to what concrete benefits Sun has gotten from being the originator and steward of the Java platform and programming language. Definitely another example of a platform that may have benefited applications built on it yet which didn't really benefit the platform vendor as expected.

The lesson here is that building a platform isn't just about making the developers who use the platform successful but also about making sure that the platform furthers the goals of the vendor who built it in the first place.

Now Playing: Kardinal Offishall - Dangerous (Feat. Akon)


 

Categories: Platforms

A couple of months ago, Russell Beattie wrote a post about the end of his startup entitled The end of Mowser which is excerpted below

The argument up to now has been simply that there are roughly 3 billion phones out there, and that when these phones get on the Internet, their vast numbers will outweigh PCs and tilt the market towards mobile as the primary web device. The problem is that these billions of users *haven't* gotten on the Internet, and they won't until the experience is better and access to the web is barrier-free - and that means better devices and "full browsers". Let's face it, you really aren't going to spend any real time or effort browsing the web on your mobile phone unless you're using Opera Mini, or have a smart phone with a decent browser - as any other option is a waste of time, effort and money. Users recognize this, and have made it very clear they won't be using the "Mobile Web" as a substitute for better browsers, rather they'll just stay away completely.

In fact, if you look at the number of page views of even the most popular mobile-only websites out there, they don't compare to the traffic of popular blogs, let alone major portals or social networks.

Let me say that again clearly, the mobile traffic just isn't there. It's not there now, and it won't be.

What's going to drive that traffic eventually? Better devices and full-browsers. M-Metrics recently spelled it out very clearly - in the US 85% of iPhone owners browsed the web vs. 58% of smartphone users, and only 13% of the overall mobile market. Those numbers *may* be higher in other parts of the world, but it's pretty clear where the trend line is now. (What a difference a year makes.) It would be easy to say that the iPhone "disrupted" the mobile web market, but in fact I think all it did is point out that there never was one to begin with.

I filed away Russell's post as interesting at the time but hadn't really experienced it first hand until recently. I recently switched to using Skyfire as my primary browser on my mobile phone and it has made a world of difference in how I use my phone. No longer am I restricted to crippled versions of popular sites nor do I have to lose features when I visit the regular versions of their pages. I can view the real version of my news feed on Facebook. Vote up links on reddit or Digg. And reading blogs is no longer an exercise in frustration due to CSS issues or problems rendering widgets. Unsurprisingly, my usage of the Web on my phone has pretty much doubled.

This definitely brings to the forefront how ridiculous an idea it was to think that we need a "mobile Web" complete with its own top level domain (.mobi). Which makes more sense: that every Web site in the world should create duplicate versions of their pages for mobile phones and regular browsers, or that software and hardware would eventually evolve to the point where I can run a full-fledged browser on the device in my pocket? Thanks to the iPhone, it is now clear to everyone that this idea of a second class Web for mobile phones was a stopgap solution at best whose time is now past.

One other thing I find interesting is treating the iPhone as a separate category from "smartphones" in the highlighted quote. This is similar to a statement made by Walt Mossberg when he reviewed Google's Android. That article began as follows

In the exciting new category of modern hand-held computers — devices that fit in your pocket but are used more like a laptop than a traditional phone — there has so far been only one serious option. But that will all change on Oct. 22, when T-Mobile and Google bring out the G1, the first hand-held computer that’s in the same class as Apple’s iPhone.

The key feature that the iPhone and Android have in common that separates them from regular "smartphones" is that they both include a full featured browser based on Webkit. The other features like downloadable 3rd party applications, wi-fi support, rich video support, GPS, and so on have been available on phones running Windows Mobile for years. This shows how important having a full Web experience was for mobile phones and just how irrelevant the notion of a "mobile Web" has truly become.

Now Playing: Kilo Ali - Lost Y'all Mind


 

Categories: Technology

The second most interesting announcement out of PDC this morning is that Windows Live ID is becoming an OpenID Provider. The information below explains how to try it out and give feedback to the team responsible.

Try It Now. Tell Us What You Think

We want you to try the Windows Live ID OpenID Provider CTP release, let us know your feedback, and tell us about any problems you find.

To prepare:

  1. Go to https://login.live-int.com and use the sign-up button to set up a Windows Live ID test account in the INT environment.
  2. Go to https://login.live-int.com/beta/ManageOpenID.srf to set up your OpenID test alias.

Then:

  • Users - At any Web site that supports OpenID 2.0, type openid.live-INT.com in the OpenID login box to sign in to that site by means of your Windows Live ID OpenID alias.
  • Library developers - Test your libraries against the Windows Live ID OP endpoint and let us know of any problems you find.
  • Web site owners - Test signing in to your site by using a Windows Live ID OpenID alias and let us know of any problems you find.
  • You can send us feedback by e-mail at openidfb@microsoft.com

This is awesome news. I've been interested in Windows Live supporting OpenID for a while and I'm glad to see that we've taken the plunge. Please try it out and send the team your feedback.

I've tried it out already and sent some initial feedback. In general, my feedback was on applying the lessons from the Yahoo! OpenID Usability Study since it looks like our implementation has some of the same usability issues that inspired Jeff Atwood's rants about Yahoo's OpenID implementation. Since it is still a Community Technology Preview, I'm sure the user experience will improve as feedback trickles in.

Kudos to Jorgen Thelin and the rest of the folks on the Identity Services team for getting this out. Great work, guys.

UPDATE: Angus Logan posted a comment with a link to the following screencast of the current user experience when using Windows Live ID as an OpenID provider


Now Playing: Christina Aguilera - Keeps Gettin' Better


 

Categories: Windows Live

October 27, 2008
@ 05:39 PM

Just because you aren't attending Microsoft's Professional Developer Conference doesn't mean you can't follow the announcements. The most exciting announcement so far [from my perspective] has been Windows Azure which is described as follows from the official site

The Azure™ Services Platform (Azure) is an internet-scale cloud services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. Azure’s flexible and interoperable platform can be used to build new applications to run from the cloud or enhance existing applications with cloud-based capabilities. Its open architecture gives developers the choice to build web applications, applications running on connected devices, PCs, servers, or hybrid solutions offering the best of online and on-premises.

Azure reduces the need for up-front technology purchases, and it enables developers to quickly and easily create applications running in the cloud by using their existing skills with the Microsoft Visual Studio development environment and the Microsoft .NET Framework. In addition to managed code languages supported by .NET, Azure will support more programming languages and development environments in the near future. Azure simplifies maintaining and operating applications by providing on-demand compute and storage to host, scale, and manage web and connected applications. Infrastructure management is automated with a platform that is designed for high availability and dynamic scaling to match usage needs with the option of a pay-as-you-go pricing model. Azure provides an open, standards-based and interoperable environment with support for multiple internet protocols, including HTTP, REST, SOAP, and XML.

It will be interesting to read what developers make of this announcement and what kind of apps start getting built on this platform. I'll also be on the lookout for any in-depth discussions of the platform; there is lots to chew on in this announcement.

For a quick overview of what Azure means to developers, take a look at Azure for Business and Azure for Web Developers.

Now Playing: Guns N' Roses - Welcome to the Jungle


 

Categories: Platforms | Windows Live

John McCrea of Plaxo has written a cleverly titled guest post on TechCrunchIT, Facebook Connect and OpenID Relationship Status: “It’s Complicated”, where he makes the argument that Facebook Connect is a competing technology to OpenID but the situation is complicated by Facebook developers engaging in discussions with the OpenID community. He writes

You see, it’s been about a month since the first implementation of Facebook Connect was spotted in the wild over at CBS’s celebrity gossip site, TheInsider.com. Want to sign up for the site? Click a single button. A little Facebook window pops up to confirm that you want to connect via your Facebook account. One more click – and you’re done. You’ve got a new account, a mini profile with your Facebook photo, and access to that subset of your Facebook friends who have also connected their accounts to TheInsider. Oh, and you can have your activities on TheInsider flow into your Facebook news feed automatically. All that, without having to create and remember a new username/password pair for the site. Why, it’s just like the vision for OpenID and the Open Stack – except without a single open building block under the hood!
...
After the intros, Allen Tom of Yahoo, who organized the event, turned the first session over to Max Engel of MySpace, who in turn suggested an alternative – why not let Facebook’s Julie Zhuo kick it off instead? And for the next hour, Julie took us through the details of Facebook Connect and the decisions they had to make along the way to get the user interface and user experience just right. It was not just a presentation; it was a very active and engaged discussion, with questions popping up from all over the room. Julie and the rest of the Facebook team were engaged and eager to share what they had learned.

What the heck is going on here? Is Facebook preparing to go the next step of open, switching from the FB stack to the Open Stack? Only time will tell. But one thing is clear: Facebook Connect is the best thing ever for OpenID (and the rest of the Open Stack). Why? Because Facebook has set a high bar with Facebook Connect that is inspiring everyone in the open movement to work harder and faster to bring up the quality of the UI/UX for OpenID and the Open Stack.

There are a number of points worth discussing from the above excerpt. The first is the implication that OpenID is an equivalent technology to Facebook Connect. This is clearly not the case. OpenID just allows you to delegate the act of authenticating a user to another website so the user doesn't need to create credentials (i.e. username + password) on your site. OpenID alone doesn't get you the user's profile data nor does it allow you to pull in the authenticated user's social graph from the other site or publish activities to their activity feed. For that, you would need other "Open brand" technologies like OpenID Attribute Exchange, Portable Contacts and OpenSocial. So it is fairer to describe the contest as Facebook Connect vs. OpenID + OpenID Attribute Exchange + Portable Contacts + OpenSocial.
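To make the comparison concrete, here is a minimal sketch (in Python, using hypothetical provider endpoint and relying party URLs) of what a relying party has to assemble just to get sign-in plus a couple of profile attributes via OpenID 2.0 and the Attribute Exchange extension. Even then, pulling in the user's social graph or publishing to their activity feed would require separate Portable Contacts or OpenSocial calls on top of this.

# A minimal sketch of an OpenID 2.0 authentication request with an Attribute
# Exchange fetch request piggybacked on it. The endpoint and return URLs are
# hypothetical placeholders; in practice the endpoint is found via discovery.
from urllib.parse import urlencode

OP_ENDPOINT = "https://openid.example-provider.com/auth"   # hypothetical provider endpoint
RETURN_TO = "https://www.example-rp.com/openid/return"     # hypothetical relying party URL

params = {
    # Core OpenID 2.0 authentication request
    "openid.ns": "http://specs.openid.net/auth/2.0",
    "openid.mode": "checkid_setup",
    "openid.claimed_id": "http://specs.openid.net/auth/2.0/identifier_select",
    "openid.identity": "http://specs.openid.net/auth/2.0/identifier_select",
    "openid.return_to": RETURN_TO,
    "openid.realm": "https://www.example-rp.com/",
    # Attribute Exchange: ask for an email address and display name, since
    # plain OpenID only proves who the user is and says nothing about them.
    "openid.ns.ax": "http://openid.net/srv/ax/1.0",
    "openid.ax.mode": "fetch_request",
    "openid.ax.type.email": "http://axschema.org/contact/email",
    "openid.ax.type.fullname": "http://axschema.org/namePerson",
    "openid.ax.required": "email,fullname",
}

# The relying party redirects the user's browser to this URL and reads the
# attributes back out of the response that comes to RETURN_TO.
redirect_url = OP_ENDPOINT + "?" + urlencode(params)
print(redirect_url)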

The question then is who should we root for? At the end of the day, I don't think it makes a ton of sense for websites to have to target umpteen different APIs that do the same thing instead of targeting one standard implemented by multiple services. Specifically, it seems ridiculous to me that TheInsider.com will have to code against Facebook Connect to integrate Facebook accounts into their site but code against something else if they want to integrate MySpace accounts and yet another API if they want to integrate LinkedIn accounts and so on. This is an area that is crying out for standardization.

Unfortunately, the key company providing thought leadership in this area is Facebook and for now they are building their solution with proprietary technologies instead of de jure or de facto ("Open brand") standards. This is unsurprising given that it takes three or four different specs in varying states of completeness, created by different audiences, to deliver the scenarios they are currently interested in. What is encouraging is that Facebook developers are working with OpenID implementers by sharing their knowledge. However, OpenID isn't the only technology needed to satisfy this scenario and I wonder if Facebook will be similarly engaged with the folks working on Portable Contacts and OpenSocial.

Facebook Connect is a step in the right direction when it comes to bringing the vision of social network interoperability to fruition. The key question is whether we will see effective open standards emerge that target the same scenarios [which eventually even Facebook could adopt] or whether competitors will offer their own proprietary alternatives. So far it sounds like the latter is happening, which means unnecessary reinvention of the wheel for sites that want to support "connecting" with multiple social networking sites.

PS: If OpenID phishing is a concern now when the user is redirected to the ID provider's site to log in, it seems Facebook Connect is even worse since all it does is provide a pop-over. I wonder if this is because the Facebook folks think the phishing concerns are overblown.

Now Playing: 2Pac - Mind Made Up (feat. Daz, Method Man & Redman)


 

Early this week, Roy Fielding wrote a post entitled REST APIs must be hypertext-driven where he criticized the SocialSite REST API (a derivative of the OpenSocial REST API) for violating some constraints of the Representational State Transfer architectural style (aka REST). Roy's key criticisms were

API designers, please note the following rules before calling your creation a REST API:

  • A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
  • A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC's functional coupling].
  • ..
  • A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

In reading some of the responses to Roy's post on programming.reddit, it seems there are a number of folks who found it hard to glean practical advice from Roy's post. I thought it would be useful to go over his post in more depth and with some examples.

The key thing to remember is that REST is about building software that scales to usage on the World Wide Web by being a good participant of the Web ecosystem. Ideally a RESTful API should be designed to be implementable by thousands of websites and consumed by hundreds of applications running on dozens of platforms with zero coupling between the client applications and the Web services. A great example of this is RSS/Atom feeds which happen to be one of the world's most successful RESTful API stories.

This notion of building software that scales to Web-wide usage is critical to understanding Roy's points above. The first point above is that a RESTful API should primarily be concerned about data payloads and not defining how URI end points should handle various HTTP methods. For one, sticking to defining data payloads which are then made standard MIME types gives maximum reusability of the technology. The specifications for RSS 2.0 (application/rss+xml) and the Atom syndication format (application/atom+xml) primarily focus on defining the data format and how applications should process feeds independent of how they were retrieved. In addition, both formats are aimed at being standard formats that can be utilized by any Web site as opposed to being tied to a particular vendor or Web site which has aided their adoption. Unfortunately, few have learned from these lessons and we have people building RESTful APIs with proprietary data formats that aren't meant to be shared. My current favorite example of this is social graph/contacts APIs which seem to be getting reinvented every six months. Google has the Contacts Data API, Yahoo! has their Address Book API, Microsoft has the Windows Live Contacts API, Facebook has their friends REST APIs and so on. Each of these APIs claims to be RESTful in its own way yet they are helping to fragment the Web instead of helping to grow it. There have been some moves to address this with the OpenSocial influenced Portable Contacts API but it too shies away from standard MIME types and instead creates dependencies on URL structures to dictate how the data payloads should be retrieved/processed.

One bad practice Roy calls out, which is embraced by the Portable Contacts and SocialSite APIs, is requiring a specific URL structure for services that implement the API. Section 6.2 of the current Portable Contacts API spec states the following

A request using the Base URL alone MUST yield a result, assuming that adequate authorization credentials are provided. In addition, Consumers MAY append additional path information to the Base URL to request more specific information. Service Providers MUST recognize the following additional path information when appended to the Base URL, and MUST return the corresponding data:

  • /@me/@all -- Return all contact info (equivalent to providing no additional path info)
  • /@me/@all/{id} -- Only return contact info for the contact whose id value is equal to the provided {id}, if such a contact exists. In this case, the response format is the same as when requesting all contacts, but any contacts not matching the requested ID MUST be filtered out of the result list by the Service Provider
  • /@me/@self -- Return contact info for the owner of this information, i.e. the user on whose behalf this request is being made. In this case, the response format is the same as when requesting all contacts, but any contacts not matching the requested ID MUST be filtered out of the result list by the Service Provider.

The problem with this approach is that it assumes that every implementer will have complete control of their URI space and that clients should have URL structures baked into them. The reason this practice is a bad idea is well documented in Joe Gregorio's post No Fishing - or - Why 'robots.txt' and 'favicon.ico' are bad ideas and shouldn't be emulated where he lists several reasons why hard coded URLs are a bad idea. The reasons against include lack of extensibility and poor support for people in hosted environments who may not fully control their URI space. The interesting thing to note is that both the robots.txt and favicon.ico scenarios eventually developed mechanisms to support using hyperlinks on the source page (i.e. noindex and rel="shortcut icon") instead of hard coded URIs since that practice doesn't scale to Web-wide usage.
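To illustrate why link-based discovery scales better than fixed URIs, here is a rough sketch (the page URL is purely a placeholder) of a favicon fetcher that honors whatever icon link the page itself advertises and only falls back to the hard coded /favicon.ico location as a last resort.

# A rough sketch of favicon discovery via link relations: ask the page where
# its icon lives instead of assuming a fixed URI structure on the server.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class IconLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.icon_href = None

    def handle_starttag(self, tag, attrs):
        if tag != "link" or self.icon_href:
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").lower().split()
        if "icon" in rels and attrs.get("href"):
            self.icon_href = attrs["href"]

def find_favicon(page_url):
    html = urlopen(page_url).read().decode("utf-8", "replace")
    finder = IconLinkFinder()
    finder.feed(html)
    if finder.icon_href:
        # The site told us where its icon lives, wherever that happens to be.
        return urljoin(page_url, finder.icon_href)
    # Fallback for sites that never adopted the link relation.
    return urljoin(page_url, "/favicon.ico")

print(find_favicon("http://www.example.org/blog/post.html"))   # hypothetical page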

The latest drafts of the OpenSocial specification have a great example of how a service can use existing technologies such as URI templates to make even complicated URL structures flexible and discoverable without having to force every client and service to hardcode a specific URL structure. Below is an excerpt from the discovery section of the current OpenSocial REST API spec

A container declares what collection and features it supports, and provides templates for discovering them, via a simple discovery document. A client starts the discovery process at the container's identifier URI (e.g., example.org). The full flow is available at http://xrds-simple.net/core/1.0/; in a nutshell:

  1. Client GETs {container-url} with Accept: application/xrds+xml
  2. Container responds with either an X-XRDS-Location: header pointing to the discovery document, or the document itself.
  3. If the client received an X-XRDS-Location: header, follow it to get the discovery document.

The discovery document is an XML file in the same format used for OpenID and OAuth discovery, defined at http://xrds-simple.net/core/1.0/:

<XRDS xmlns="xri://$xrds">
<XRD xmlns:simple="http://xrds-simple.net/core/1.0" xmlns="xri://$XRD*($v*2.0)" xmlns:os="http://ns.opensocial.org/2008/opensocial" version="2.0">
<Type>xri://$xrds*simple</Type>
<Service>
<Type>http://ns.opensocial.org/2008/opensocial/people</Type>
<os:URI-Template>http://api.example.org/people/{guid}/{selector}{-prefix|/|pid}</os:URI-Template>
</Service>
<Service>
<Type>http://ns.opensocial.org/2008/opensocial/groups</Type>
<os:URI-Template>http://api.example.org/groups/{guid}</os:URI-Template>
</Service>
<Service>
<Type>http://ns.opensocial.org/2008/opensocial/activities</Type>
<os:URI-Template>http://api.example.org/activities/{guid}/{appid}/{selector}</os:URI-Template>
</Service>
<Service>
<Type>http://ns.opensocial.org/2008/opensocial/appdata</Type>
<os:URI-Template>http://api.example.org/appdata/{guid}/{appid}/{selector}</os:URI-Template>
</Service>
<Service>
<Type>http://ns.opensocial.org/2008/opensocial/messages</Type>
<os:URI-Template>http://api.example.org/messages/{guid}/{selector}</os:URI-Template>
</Service>
</XRD>
</XRDS>

This approach makes it possible for a service to expose the OpenSocial end points however it sees fit without clients having to expect a specific URL structure.
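As a rough illustration of the client side of this, the sketch below (Python, with a placeholder file name standing in for the fetched discovery document) pulls the URI template for the people service out of the XRDS above and fills in its variables, so nothing about the service's URL structure is baked into the client. Full URI Template processing (e.g. the {-prefix|/|pid} operator) needs a proper template library; only simple {name} substitution is shown here.

# A sketch of consuming the XRDS discovery document: find the URI template
# advertised for a given service type and expand its variables.
import re
import xml.etree.ElementTree as ET

XRD_NS = "xri://$XRD*($v*2.0)"
OS_NS = "http://ns.opensocial.org/2008/opensocial"

def find_template(xrds_xml, service_type):
    root = ET.fromstring(xrds_xml)
    for service in root.iter("{%s}Service" % XRD_NS):
        type_el = service.find("{%s}Type" % XRD_NS)
        tmpl_el = service.find("{%s}URI-Template" % OS_NS)
        if type_el is not None and tmpl_el is not None and type_el.text == service_type:
            return tmpl_el.text
    return None

def expand(template, variables):
    # Substitute simple {name} variables; drop operator expressions
    # (those starting with '-') that we have no value for.
    def repl(match):
        name = match.group(1)
        if name.startswith("-"):
            return ""
        return variables.get(name, "")
    return re.sub(r"\{([^}]+)\}", repl, template)

xrds_xml = open("discovery.xrds").read()   # placeholder: the document shown above
template = find_template(xrds_xml, "http://ns.opensocial.org/2008/opensocial/people")
print(expand(template, {"guid": "12345", "selector": "@friends"}))
# -> http://api.example.org/people/12345/@friends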

Similarly, links should be used for describing relationships between resources in the various payloads instead of expecting hard coded URL structures. Again, I'm drawn to the example of RSS & Atom feeds where link relations are used for defining the permalink to a feed item, the link to related media files (i.e. podcasts), links to comments, and so on. Instead of expecting that every Web site that supports enclosures exposes a /@rss/{id}/@podcasts URL, applications simply examine the <enclosure> element.
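For instance, a podcast client working from nothing but the feed payload might look like the sketch below (the feed URL is hypothetical); the <enclosure> element in each item tells it where the media file lives, so no particular URL structure is ever assumed.

# A sketch of reading podcast enclosures straight out of an RSS 2.0 feed.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def podcast_links(feed_url):
    feed = ET.parse(urlopen(feed_url)).getroot()
    links = []
    for item in feed.iter("item"):
        enclosure = item.find("enclosure")
        if enclosure is not None:
            # The payload itself says where the media file lives.
            links.append((item.findtext("title"), enclosure.get("url")))
    return links

for title, url in podcast_links("http://example.org/podcast/feed.xml"):   # hypothetical feed
    print(title, "->", url)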

Thus it is plain to see that hyperlinks are important both for discovery of service end points and for describing relationships between resources in a loosely coupled way.

Now Playing: Prince - When Doves Cry


 

October 19, 2008
@ 08:47 AM

Tim Bray has a thought-provoking post on embracing cloud computing entitled Get In the Cloud where he brings up the problem of vendor lock-in. He writes

Tech Issue · But there are two problems. The small problem is that we haven’t quite figured out the architectural sweet spot for cloud platforms. Is it Amazon’s EC2/S3 “Naked virtual whitebox” model? Is it a Platform-as-a-service flavor like Google App Engine? We just don’t know yet; stay tuned.

Big Issue · I mean a really big issue: if cloud computing is going to take off, it absolutely, totally, must be lockin-free. What that means is that if I’m deploying my app on Vendor X’s platform, there have to be other vendors Y and Z such that I can pull my app and its data off X and it’ll all run with minimal tweaks on either Y or Z.

At the moment, I don’t think either the Amazon or Google offerings qualify.

Are we so deranged here in the twenty-first century that we’re going to re-enact, wide-eyed, the twin tragedies of the great desktop-suite lock-in and the great proprietary-SQL lock-in? You know, the ones where you give a platform vendor control over your IT budget? Gimme a break.

I’m simply not interested in any cloud offering at any level unless it offers zero barrier-to-exit.

Tim's post is about cloud platforms but I think it is useful to talk about avoiding lock-in when taking a bet on cloud based applications as well as when embracing cloud based platforms. This is especially true when you consider that moving from one application to another is a similar yet smaller scoped problem compared to moving from one Web development platform to another.

So let's say your organization wants to move from a cloud based office suite like Google Apps for Business to Zoho. The first question you have to ask yourself is whether it is possible to extract all of your organization's data from one service and import it without data loss into another. For business documents this should be straightforward thanks to standards like ODF and OOXML. However there are points to consider such as whether there is an automated way to perform such bulk imports and exports or whether individuals have to manually export and/or import their online documents to these standard formats. Thus the second question is how expensive it is for your organization to move the data. The cost includes everything from the potential organizational downtime to account for switching services to the actual IT department cost of moving all the data. At this point, you then have to weigh the impact of all the links and references to your organization's data that will be broken by your switch. I don't just mean links to documents returning 404 because you have switched from being hosted at google.com to zoho.com but more insidious problems like the broken experience of anyone who is using the calendar or document sharing feature of the service to give specific people access to their data. Also you have to ensure that email that is sent to your organization after the switch goes to the right place. Making this aspect of the transition smooth will likely be the most difficult part of the migration since it requires more control over application resources than application service providers typically give their customers. Finally, you will have to evaluate which features you will lose by switching applications and ensure that none of them is mission critical to your business.

Despite all of these concerns, switching hosted application providers is mostly a tractable problem. Standard data formats make data migration feasible although it might be unwieldy to extract the data from the service. In addition, Internet technologies like SMTP and HTTP all have built in ways to handle forwarding/redirecting references so that they aren't broken. However, although the technology makes it possible, the majority of hosted application providers fall far short of making it easy to fully migrate to or away from their service without significant effort.

When it comes to cloud computing platforms, you have all of the same problems described above and a few extra ones. The key wrinkle with cloud computing platforms is that there is no standardization of the APIs and platform technologies that underlie these services. The APIs provided by Amazon's cloud computing platform (EC2/S3/EBS/etc) are radically different from those provided by Google App Engine (Datastore API/Python runtime/Images API/etc). For zero lock-in to occur in this space, there need to be multiple providers of the same underlying APIs. Otherwise, migrating between cloud computing platforms will be more like switching your application from Ruby on Rails and MySQL to Django and PostgreSQL (i.e. a complete rewrite).

In response to Tim Bray's post, Dewitt Clinton of Google left a comment which is excerpted below

That's why I asked -- you can already do that in both the case of Amazon's services and App Engine. Sure, in the case of EC2 and S3 you'll need to find a new place to host the image and a new backend for the data, but Amazon isn't trying to stop you from doing that. (Actually not sure about the AMI format licensing, but I assumed it was supposed to be open.)

In App Engine's case people can run the open source userland stack (which exposes the API you code to) on other providers any time they want, and there are plenty of open source bigtable implementations to choose from. Granted, bulk export of data is still a bit of a manual process, but it is doable even today and we're working to make it even easier.

Are you saying that lock-in is avoided only once the alternative hosts exist?

But how does Amazon or Google facilitate that, beyond getting licensing correct and open sourcing as much code as they can? Obviously we can't be the ones setting up the alternative instances. (Though we can cheer for them, like we did when we saw the App Engine API implemented on top of EC2 and S3.)

To Doug Cutting's very good point, the way Amazon and Google (and everyone else in this space) seem to be trying to compete is by offering the best value, in terms of reliability (uptime, replication) and performance (data locality, latency, etc) and monitoring and management tools. Which is as it should be.

Although Dewitt is correct that Google and Amazon are not explicitly trying to lock customers in to their platforms, the fact is that today if a customer has heavily invested in either platform then there isn't a straightforward way for them to extricate themselves from the platform and switch to another vendor. In addition, there is not a competitive marketplace of vendors providing standard/interoperable platforms as there is with email hosting or Web hosting providers.

As long as these conditions remain the same, it may be that lock-in is too strong a word to describe the situation but it is clear that the options facing adopters of cloud computing platforms aren't great when it comes to vendor choice.

Now Playing: Britney Spears - Womanizer


 

Categories: Platforms | Web Development

October 18, 2008
@ 02:18 AM

At 4:58 AM this morning, Nathan Omotoyinbo Obasanjo became the newest addition to our family. He was a healthy 9 pounds 6 ounces and was born at the Eastside Birth Center in Bellevue. His journey into our home has been fraught with delays and some drama which I'll recount here for posterity.

Tuesday – October 14th
At around midnight his mother let me know that she'd been having contractions for the past few hours and was ready to call the midwives. We drove to the birth center and arrived sometime before 2 AM. We were met by the midwife on call (Loren Riccio) who was later joined by two interns (not sure if that's what they're called). We also called my mother-in-law and she joined us there about 30 minutes later. After a few hours and some time spent getting in and out of the tub (we planned to have a water birth) it became clear that my wife was either going through pre-labor or false labor and we were sent home at around 6 AM. We got home and I slept for about three or four hours then had to rush into work because I was scheduled to give a talk on the platform we built to power a key feature in Windows Live's Wave 3 release at Microsoft. I got there on time, the presentation was well received, and I got to chat with my management folks about some blog drama that blew up the previous night. When I scheduled the presentation, I'd assumed Nathan would already have been born since the midwives gave us a due date of October 7th. Thankfully, my wife's mom stayed with her at home so I didn't have to make the choice between leaving my wife by herself at home and giving the talk. Later that day we went in for our previously scheduled midwife appointment with Christine Thain. She suggested that we come in the next day if the labor didn't kick in that night.
Wednesday - October 15th
We actually went in to see Christine twice that day. We went in early in the morning and she gave us some herbs (Gossypium) that are supposed to encourage the onset of labor. So we had a delicious lunch at Qdoba and ran a bunch of errands before going back to see Christine later that evening. She checked Jenna's progress and it didn't look like the baby was ready to be born yet. At this point we started questioning whether we wanted to stay the course with a natural childbirth and wait possibly for another week before the baby was born or if we wanted to go to a hospital and have an induction of labor. To take our minds off the waiting game, we decided to watch Don't Mess with the Zohan as a lighthearted diversion to keep our spirits up. It didn't help much especially since the movie was awful. During the movie, Jenna started having contractions and after confirming that they seemed more productive than those from Monday night we called the midwife and the mother-in-law then rushed to the birth center. Unfortunately, it turned out to be another false alarm. At this point we started feeling like the boy who cried wolf especially since we'd had the two interns get up out of bed twice during the past two days.
Thursday - October 16th
Around 5AM Jenna woke me up and told me that she'd been having contractions for the past hour. Since we'd already scheduled an early morning checkup with Christine for 8AM we didn't feel the need to call the birth center's emergency number. When we got to the birth center there was already someone else going through labor in the birth suite. So we had to have an office visit where we learned that these contractions weren't productive either. At this point we made the call that we'd go in to the hospital on Friday to have the baby induced and had Christine make an appointment. Later that day we were grocery shopping and we got a call from Christine. We decided to go in one more time to the birth center to see if we could get one last checkup before going to the hospital on Friday. When we got there the lady from the morning was still in labor almost eleven hours later. So we had another office visit and discussed the pros & cons of going to the hospital versus trying to wait out a natural birth. At the end of the talk we stuck with the decision to go to the hospital although that was dependent on there being available beds.
Friday - October 17th
At about 1:30 AM I'm awoken by my wife who's going through very painful contractions. After timing the contractions for a while we decide to go into the birth center. This time I don't call my mother-in-law until after we've arrived at the birth center and the progress of the labor was confirmed. I also send a text message to my mom telling her yet again that we're having the baby and this time it's for real. Nathan was born just over 2 hours after we arrived at the birth center. Later on we found out that the lady who'd been in labor for the entirety of the previous day ended up being sent to the hospital and had her baby at around the same time Nathan was being born.

So far, our previous addition to the family has been getting along great with Nathan.

Now Playing: Jadakiss - The Champ Is Here


 

Categories: Personal

I just found out via an article by Dan Farber that Yahoo! has rolled out a new "universal profile" and decided to give it a quick look. As I was creating my profile at http://profiles.yahoo.com/kpako I saw an interesting option that I thought was worth commenting on which is captured in the screenshot below

The circled text says

Allow my connections to share my information labeled “My Connections” with third-party applications they install and developers who made the applications. Learn More

The reason for this feature harkens back to the Robert Scoble vs. Facebook saga where Robert Scoble was temporarily banned from Facebook for extracting personal information about his Facebook friends from Facebook into Plaxo. As I outlined in my post on the topic, Breaking the Social Contract: My Data is not Your Data, the problem was that Scoble's friends on Facebook may have been OK with sharing their personal information with him on Facebook but not with him sharing that information with other services.

With this feature, Yahoo! has allowed users to opt in to allowing an application used by a member of their social network to access their personal information. This is a very user-centric approach and avoids the draconian practice of services like Facebook that get around this issue by not providing any personally identifiable user information via their APIs. Instead a user can specify which pieces of their personal information they don't mind being accessed by 3rd party applications being used by their friends using the option above. Nicely done.

The only problem I can see is that this is a pretty nuanced option which users may not understand clearly and may overlook since it is just another checkbox in the profile creation flow. Then again, it is hard to imagine how to introduce users to this concept after they have created their profile. 

Kudos to the Yahoo! folks on getting this feature and the rest of their universal profile out there.


 

Categories: Social Software

Allen Tom has an interesting post on the Yahoo! Developer blog entitled Yahoo! Releases OpenID Research where he shares the results of some usability studies the folks at Yahoo! have been doing around OpenID. The concluding paragraphs of his post are particularly interesting and are excerpted below

I'm happy to announce that Yahoo! is releasing the results of a usability study that we did for OpenID. Our test subjects were several experienced Yahoo! users (representative of our mainstream audience) who were observed as they tried to sign into a product review site using the Yahoo OpenID service.
...
On the Yahoo! side of things, we streamlined our OP (OpenID Provider) last week, and removed as much as we could. We removed the CAPTCHA and slimmed down the OP to just a single screen, and focused the UI to get the user back to the RP. We expect that RPs will enjoy a much higher success rate for users signing in with their Yahoo OpenID.

On the RP (Relying Party) side of things, our recommendation is that they emphasize to users that they can sign in with an existing account, specifically their YahooID. We believe that the YahooID, as well as IDs from other providers, has higher brand awareness than OpenID. We also believe that first time users signing in with an OpenID should be able to go directly to their intended destination after signing in, instead of having to complete additional registration. Hopefully, as SimpleReg/AttributeExchange are more widely supported (Yahoo does not currently support them), relying parties will no longer feel the need to force the user through an additional registration form after signing in with an OpenID.

It's nice to see how much of this dovetails with my post on Things to Keep in Mind when Implementing OpenID on Your Website. In that post, I pointed out that the key risk of using OpenID on your Web site (i.e. being a Relying Party) is that there is a high risk of losing users if the OpenID sign-in flow is more complicated than simply having the user sign up for your site. The Yahoo! usability study points to the fact that this seems to be the common case in typical OpenID deployments.

Actually there are two problems. The first is that most people don't know what OpenID is, so simply stating that people can use OpenIDs to log in to your site or using the logo may work for geeks but doesn't work for the typical Web user. The risk here is that the work of deploying OpenID on your site ends up being wasted. The second problem is the risk of losing the user after they decide to use OpenID to sign in, either due to an unintuitive user experience on your site (e.g. having to enter an OpenID URL) or on the site of the OpenID provider (e.g. lots of jargon with no clear call to action).

I did find it interesting that Yahoo! is recommending that services should prefer using the brand of the target services whose credentials you plan to accept [especially if you white list the OpenID providers you support] instead of using the OpenID brand since it isn't recognizable to the typical Web user. I tend to agree with this; OpenID is a means to an end and not an end in itself, so it is weird to be putting it front and center in an end-user-facing experience. Talking explicitly about OpenID should probably happen at the developer-to-developer level. I feel the same way about RSS and other Web technologies for connecting services together.

The other interesting point is that a lot of services still require users to go through a sign-up flow after logging in with an OpenID thus the only thing they've saved is having the user pick a username (which would probably have been their email address) and password. That saving doesn't seem worth the extra complexity in the user experience of going through an OpenID provider. I agree with Tom that if more OpenID providers supported OpenID Attribute Exchange then the need for a post-login account creation would likely disappear since the Relying Party would get the basic information they need from the OpenID provider. 

In conclusion, the typical way OpenID is being implemented on the Web today leads to more costs than benefits. Hopefully, services will take to heart the lessons from Yahoo's usability study and we'll start seeing smarter usage of OpenID that benefits both users and the services that are adopting the technology.

Now Playing: Leona Lewis - Better In Time


 

Categories: Web Development

As I read about the U.K. partially nationalizing major banks and the U.S. government's plan to do the same, it makes one wonder how the financial system could have become so broken that these steps are even necessary. The more I read, the more it seems clear that our so-called "free" markets had some weaknesses built into them that didn't take human nature into account. Let's start with the following quote from the MSNBC article about the U.K. government investing in major banks

As a condition of the deal, the government has required the banks to pay no bonuses to board members at the banks this year.

British Treasury chief Alistair Darling, speaking with Brown Monday, said it would be "nonsense" for board members to be taking their bonuses. The government also insisted that the bulk of future bonuses be paid in shares to ensure that bonuses encourage management to take a more long-term approach to profit making.

The above statement makes it sound like the board members of the various banks were actually on track to make their bonuses even though the media makes it sound like they are guilty of gross incompetence or some degree of negligence given the current state of the financial markets. If this is the case, how come the vast majority of the largest banks in the world seem to all have the same problem of boards and CEOs that were reaping massive rewards while effectively running the companies into the ground? How come the "free" market didn't work efficiently to discover and penalize these companies before we got to this juncture?

One reason for this problem is outlined by Philip Greenspun in his post Time for corporate governance reform? where he writes

What would it take to restore investor confidence in the U.S. market?  How about governance reform?

Right now the shareholders of a public company are at the mercy of management.  Without an expensive proxy fight, the shareholders cannot nominate or vote for their own representatives on the Board of Directors.  The CEO nominates a slate of golfing buddies to serve on the Board, while he or she will in turn serve on their boards.  Lately it seems that the typical CEO’s golfing buddies have decided on very generous compensation for the CEO, often amount to a substantial share of the company’s profits.  The golfing buddies have also decided that the public shareholders should be diluted by stock options granted to top executives and that the price on those options should be reset every time the company’s stock takes a dive (probably there is a lot of option price resetting going on right now!  Wouldn’t want your CEO to lose incentive).

Corporations are supposed to operate for the benefit of shareholders.  The only way that this can happen is if a majority of Directors are nominated by and selected by shareholders.  It may have been the case that social mores in the 1950s constrained CEO-nominated Boards from paying their friend $50 million per year, but those mores are apparently gone and the present structure in which management regulates itself serves only to facilitate large-scale looting by management.

For one, the incentive system for corporate leadership is currently broken. As Phil states, companies (aka the market) have made it hard for shareholders to affect the decision making at the top of major corporations without expensive proxy fights, and thus the main [counterproductive] recourse they have is selling their shares. And even then the cronyism between boards and executive management is such that the folks at the top still figure out how to get paid big bucks even if the stock has been taking a beating due to shareholder disaffection.

Further problems are caused by contagion, where people see one group getting rewards for particular behavior and then they want to join in the fun. Below is an excerpt of a Harvard Business School posting summarizing an interview with Warren Buffett entitled Wisdom of Warren Buffett: On Innovators, Imitators, and Idiots 

At one point, his interviewer asked the question that is on all our minds: "Should wise people have known better?" Of course, they should have, Buffett replied, but there's a "natural progression" to how good new ideas go wrong. He called this progression the "three Is." First come the innovators, who see opportunities that others don't. Then come the imitators, who copy what the innovators have done. And then come the idiots, whose avarice undoes the very innovations they are trying to use to get rich.

The problem, in other words, isn't with innovation--it's with the idiocy that follows. So how do we as individuals (not to mention as companies and societies) continue to embrace the value-creating upside of creativity while guarding against the value-destroying downsides of imitation? The answer, it seems to me, is about values--about always being able to distinguish between that which is smart and that which is expedient. And that takes discipline. Can you distinguish between a genuine innovation and a mindless imitation? Are you prepared to walk away from ideas that promise to make money, even if they make no sense?

It's not easy--which is why so many of us fall prey to so many bad ideas. "People don't get smarter about things that get as basic as greed," Warren Buffett told his interviewer. "You can't stand to see your neighbor getting rich. You know you're smarter than he is, but he's doing all these [crazy] things, and he's getting rich...so pretty soon you start doing it."

As Warren Buffett points out, our financial markets and corporate governance structures do not have safeguards that prevent greed from taking over and destroying the system. The underlying assumption in a capitalist system is that greed is good if it is properly harnessed. The problem we have today is that people have moved faster than the rules and regulations meant to keep greed in check, and in some cases they have successfully argued against those rules only for the decisions to come back and bite us on the butt.


So what does this have to do with designing social applications and other software systems? Any system that requires human interaction has to ensure that it takes into account the variations in human behavior and not just focus on the ideal user. This doesn't just mean malicious users and spammers who will have negative intentions towards the system. It also includes regular users who will unintentionally misuse or outwit the system in ways that the designers may not have expected.

A great example of this is Greg Linden's post on the Netflix Prize at KDD 2008 where he writes

Gavin Potter, the famous guy in a garage, had a short paper in the workshop, "Putting the collaborator back into collaborative filtering" (PDF). This paper has a fascinating discussion of how not assuming rationality and consistency when people rate movies and instead looking for patterns in people's biases can yield remarkable gains in accuracy. Some excerpts:

When [rating movies] ... a user is being asked to perform two separate tasks.

First, they are being asked to estimate their preferences for a particular item. Second, they are being asked to translate that preference into a score.

There is a significant issue ... that the scoring system, therefore, only produces an indirect estimate of the true preference of the user .... Different users are translating their preferences into scores using different scoring functions.

[For example, people] use the rating system in different ways -- some reserving the highest score only for films that they regard as truly exceptional, others using the score for films they simply enjoy .... Some users [have] only small differences in preferences of the films they have rated, and others [have] large differences .... Incorporation of a scoring function calibrated for an individual user can lead to an improvement in results.

[Another] powerful [model] we found was to include the impact of the date of the rating. It seems intuitively plausible that a user would allocate different scores depending on the mood they were in on the date of the rating.

Gavin has done quite well in the Netflix Prize; at the time of writing, he was in eighth place with an impressive score of .8684.
Gavin's paper is a light and easy read. Definitely worthwhile. Gavin's work forces us to challenge our common assumption that people are objective when providing ratings, instead suggesting that it is quite important to detect biases and moods when people rate on a 1..5 scale.

There were two key insights in Gavin Potter's paper related to how people interact with a rating system on a 1 to 5 scale. The first is that some people have a coarser-grained scoring methodology than others (e.g. John only ranks movies as 1 for a waste of time, 3 for satisfactory and 5 for would watch again, while Jane's movie rating system ranges from 2.5 stars to 5 stars). The second insight is that you can detect and correct for when a user is having a crappy day versus a good day by seeing if they rated a lot of movies on a particular day and whether the average rating is at an extreme (e.g. Jane rates ten movies on Saturday and gives them an average score under 3 stars).

The fact that users will treat your 1 to 5 rating scale as a 2.5 to 5 rating scale, or may rate a ton of movies poorly because they had a bad day at the office, is a consideration that a recommendation system designer should keep in mind if they don't want to give consistently poor results to users in certain cases. This is human nature in effect.
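To make these two insights concrete, here is a minimal Python sketch with made-up ratings data. It rescales each user's ratings relative to how they personally use the 1 to 5 scale and flags (user, day) pairs where a batch of unusually harsh ratings suggests a bad day; this is a deliberately crude illustration, not Gavin Potter's actual model.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Toy ratings: (user, movie, score, date) -- entirely made-up data.
ratings = [
    ("john", "m1", 1, "2008-10-04"), ("john", "m2", 3, "2008-10-05"),
    ("john", "m3", 5, "2008-10-05"), ("jane", "m1", 3, "2008-10-04"),
    ("jane", "m2", 4, "2008-10-04"), ("jane", "m3", 5, "2008-10-06"),
    ("jane", "m4", 2, "2008-10-04"), ("jane", "m5", 2, "2008-10-04"),
]

def normalize_per_user(ratings):
    """Insight #1: users use different slices of the 1-5 scale.
    Convert each raw score into a z-score relative to that user's own
    mean and spread, so 'John's 3' and 'Jane's 4' become comparable."""
    by_user = defaultdict(list)
    for user, _, score, _ in ratings:
        by_user[user].append(score)
    stats = {u: (mean(s), pstdev(s) or 1.0) for u, s in by_user.items()}
    return [
        (user, movie, (score - stats[user][0]) / stats[user][1], date)
        for user, movie, score, date in ratings
    ]

def grumpy_days(ratings, min_count=3, threshold=3.0):
    """Insight #2: flag (user, day) pairs where a user rated several
    movies and the average score was unusually low -- a hint that the
    day, not the movies, explains the ratings."""
    by_day = defaultdict(list)
    for user, _, score, date in ratings:
        by_day[(user, date)].append(score)
    return {k for k, scores in by_day.items()
            if len(scores) >= min_count and mean(scores) < threshold}

print(normalize_per_user(ratings))
print(grumpy_days(ratings))   # {('jane', '2008-10-04')} with this toy data
```

A real recommender would fold these signals into its model rather than apply them as post-hoc adjustments, but even this toy version shows how much per-user calibration information is sitting in the raw ratings.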

Another great example of how human nature foils our expectations of how users should behave is the following excerpt from the Engineering Windows 7 blog post about User Account Control

One extra click to do normal things like open the device manager, install software, or turn off your firewall is sometimes confusing and frustrating for our users. Here is a representative sample of the feedback we’ve received from the Windows Feedback Panel:

  • “I do not like to be continuously asked if I want to do what I just told the computer to do.”
  • “I feel like I am asked by Vista to approve every little thing that I do on my PC and I find it very aggravating.”
  • “The constant asking for input to make any changes is annoying. But it is good that it makes kids ask me for password for stuff they are trying to change.”
  • “Please work on simplifying the User Account control.....highly perplexing and bothersome at times”

We understand adding an extra click can be annoying, especially for users who are highly knowledgeable about what is happening with their system (or for people just trying to get work done). However, for most users, the potential benefit is that UAC forces malware or poorly written software to show itself and get your approval before it can potentially harm the system.

Does this make the system more secure? If every user of Windows were an expert that understands the cause/effect of all operations, the UAC prompt would make perfect sense and nothing malicious would slip through. The reality is that some people don’t read the prompts, and thus gain no benefit from them (and are just annoyed). In Vista, some power users have chosen to disable UAC – a setting that is admittedly hard to find. We don’t recommend you do this, but we understand you find value in the ability to turn UAC off. For the rest of you who try to figure out what is going on by reading the UAC prompt , there is the potential for a definite security benefit if you take the time to analyze each prompt and decide if it’s something you want to happen. However, we haven’t made things easy on you - the dialogs in Vista aren’t easy to decipher and are often not memorable. In one lab study we conducted, only 13% of participants could provide specific details about why they were seeing a UAC dialog in Vista.  Some didn’t remember they had seen a dialog at all when asked about it.

How do you design a dialog prompt to warn users about the potential risk of an action they are about to take if they are so intent on clicking OK and getting the job done that they forget that there was even a warning dialog afterwards?

There are a lot more examples out there but the fundamental message is the same: if you are designing a system that is going to be used by humans then you should account for the various ways people will try to outwit the system simply because they can't help themselves.

Note Now Playing: Kanye West - Love Lockdown Note


 

October 12, 2008
@ 08:24 PM

Bloglines stopped polling my feed over a week ago, probably due to a temporary error in my feed. I've been trying to find a way to get them to re-enable my feed given that, as far as the 1,670 people subscribed to my feed on their service are concerned, my blog hasn't been updated since October 3rd. Unfortunately there doesn't seem to be a way to contact the product team.

I sent a mail via the contact form but didn't get a response and their support forum is overrun with spam which leads me to believe it has been abandoned. Any ideas on how I can get Bloglines to start polling my feed again?

Note Now Playing: Babyface - When Can I See You Note


 

Categories: Personal

Some of my readers who missed the dotcom boom and bust of the early 2000s may not be familiar with FuckedCompany, a Web site that was dedicated to gloating about layoffs and other misfortunes at Web companies as the tech bubble popped. Although fairly popular at the turn of the century, the Web site was nothing more than postings about which companies had recently had layoffs, rumors of companies about to have layoffs and snarky comments about stock prices. You can read some of the old postings yourself in the WayBack Machine for FuckedCompany. I guess schadenfreude is a national pastime.

The current financial crisis has led to the worst week in the history of the Dow Jones and S&P 500 indexes as well as worldwide turmoil in financial markets, to the point where countries like Austria, Russia, Iceland, Romania and Ukraine had to suspend trading on their stock markets. This has clearly pointed to the need for another schadenfreude-filled website which gloats about the misfortunes of others. Thankfully TechCrunch has stepped up to the plate. Here are some of their opening morsels as they begin their transformation from tech bubble hypesters into its gloating eulogizers

For some reason, I was expecting more leadership from Arrington and his posse. Anyway, instead of reading and encouraging this sort of garbage from TechCrunch, it would be great if more people kept posts like Dave McClure's Fear is the Mind Killer of the Silicon Valley Entrepreneur (we must be Muad'Dib, not Clark Kent) in mind. The last thing we need is popular blogs AND the mass media spreading despair and schadenfreude at a time like this.

Note Now Playing: T.I. - Dead And Gone (Featuring Justin Timberlake) Note


 

Categories: Current Affairs

John Battelle has a blog post entitled When Doesn't It Pay To Pay Attention To Search Quality? which contains the following statement and screenshot

the top result is the best result - and it's a paid link.

-Bad Results (screenshot)-

In the past I've talked about Google's strategy tax, which is the conflict between increasing the relevance of their search results and increasing the relevance of their search ads. The more relevant Google's "organic" results are, the less likely users are to click on their ads, which means the less money the company makes. This effectively puts a cap on how good Google's search quality can get, especially given the company's obsessive measurement of the impact of every small change they make to get the most bang for the buck.

When I first wrote about this, the conclusion from some quarters was that this inherent conflict of interest would eventually be Google's undoing since there were search innovations that they would either be slow to adopt or put on the back burner so as not to harm their cash cow. However John Battelle's post puts another spin on the issue. As long as people find what they want it doesn't matter if the result is "organic" or an ad.

As Jeremy Zawodny noted in his post The Truth about Web Navigation 

Oh, here's a bonus tip: normal people can't tell the difference between AdSense style ads and all the other links on most web sites. And almost the same number don't know what "sponsored results" on the Search Results Page are either. It's just a page of links to them. They click the ones that look like they'll get them what they want. It's that simple.

Even more interesting is the comment by Marshall Kirkpatrick in response to Jeremy's post

The part of your post about AdWords reminds me of a survey I read awhile ago. Some tiny percentage of users were able to tell the difference between paid and natural search results, then once away from the computer almost all of them when asked said that the best ways to make it clear would be: putting paid links in a colored box, putting them in a different section of the page and putting the words "sponsored links" near them!! lol

What this means in practice is that the relevance of Google's ads for a given search term will increase relative to the relevance of the organic results for that term. John Battelle has shown one example of this in his blog post, and over time this trend will get more pronounced. The problem for Google's competitors is that this doesn't necessarily mean Google's search experience will get worse over time, since ad relevance will likely make up for any deficiencies in the organic results (at least for commercial queries – where the money is). What competitors will have to learn to exploit is Google's tendency to push users to AdWords results by making their organic results satisfactory instead of great.  

For example, consider the following search results page which my wife just got while looking for an acupuncturist in Bellevue, Washington

The interesting thing about the organic results is that they are relevant but very cluttered, thus leading to the paradox of choice. On the other hand, the sponsored links give you a name and a description of the person's qualifications in the very first result. Which result do you think my wife clicked?

Now why do you think Google ended up going with this format for commercial/business-based search results?  The average Web user is a satisficer and will look for the clearest and simplest result which is in the ads. However the geeks and power users (i.e. people who don't click on ads) are often maximizers when it comes to search results and are thus served by the organic results.

The question for you is whether you'd consider this a weakness or a strength of Google Search?

Note Now Playing: Missy Elliott - Hit Em Wit Da Hee (remix) (feat. Lil Kim & Mocha) Note


 

Given the current spirit of frugality that fills the air due to the credit crisis, I'm reconsidering whether to replace my AT&T Tilt (aka HTC Kaiser) with an iPhone 3G. After test driving a couple of iPhones, I've realized that the really compelling reason for me to switch is to get a fully-featured Web browser instead of my current situation of having to choose between text-based "mobile" versions of popular sites or mangled Web pages.

I was discussing this with a coworker and he suggested that I try out alternative browsers for Windows Mobile before getting an iPhone. I'd previously tried DeepFish from the Microsoft Live Labs team but found it too buggy to be usable. I looked for it again recently but it seems it has been cancelled. This led me to try out SkyFire, which claims to give a complete PC-style Web browsing experience [including Flash Video, AJAX, Silverlight and Quicktime video] on a mobile phone.

After using SkyFire for a couple of days, I have to admit that it is a much improved Web browsing experience compared to what shipped by default on my phone. At first I marveled at how a small startup could build such a sophisticated browser in what seems like a relatively short time, until I learned about the clever hack at the center of the application. None of the actual rendering and processing of content is done on your phone. Instead, an instance of a Web browser (supposedly Firefox) running on the SkyFire servers acts as a proxy for your phone and sends you a compressed image of the fully rendered results. There is still some clever hackery involved, especially with regards to converting a streaming Flash video into a series of animated images and accompanying sound and then sending it down to your phone in real-time. However it is nowhere near as complex as shipping complete JavaScript, Flash, Quicktime and Silverlight implementations in a mobile phone browser. 
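To illustrate the general shape of this proxy-rendering approach (and not SkyFire's actual implementation, which is proprietary), here is a toy Python sketch. The rendering step is a stub that returns a blank placeholder image where a real server would drive a full browser engine and screenshot the page, and Pillow is assumed for the image encoding. The point it demonstrates is simply that the phone receives a small compressed image rather than the page's HTML, scripts and media.

```python
# Toy sketch of a proxy-rendering server, NOT SkyFire's implementation.
# Requires Pillow (pip install pillow).
from http.server import BaseHTTPRequestHandler, HTTPServer
from io import BytesIO
from urllib.parse import urlparse, parse_qs
from PIL import Image

def render_page(url, size=(320, 480)):
    """Placeholder for server-side rendering: a real implementation would
    load `url` in a browser engine and screenshot the result."""
    return Image.new("RGB", size, "lightgray")

class RenderProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        url = query.get("url", ["about:blank"])[0]
        image = render_page(url)
        buf = BytesIO()
        # The bandwidth win comes from sending a heavily compressed image
        # of the rendered page instead of the page's HTML, scripts and media.
        image.save(buf, format="JPEG", quality=40)
        body = buf.getvalue()
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The phone-side client would request e.g. /render?url=http://example.com
    HTTPServer(("localhost", 8080), RenderProxy).serve_forever()
```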

The one problem with SkyFire's approach is that all of your requests go through their servers. This means your passwords, emails, bank account records or whatever other Web sites you visit with your mobile browser will flow through SkyFire's servers. This may be a deal breaker for some while for others it will mean being careful about what sites they visit using the browser. 

If this sounds interesting, check out the video demo below

Note Now Playing: Michael Jackson - PYT (Pretty Young Thing) Note


 

Categories: Technology

Yesterday there was a news article on MSNBC that claimed 1 in 6 now owe more on their mortgage than their property is worth. The article states

The relentless slide in home prices has left nearly one in six U.S. homeowners owing more on a mortgage than the home is worth, raising the possibility of a rise in defaults — the very misfortune that touched off the credit crisis last year. The result of homeowners being "underwater" is more pressure on an economy that is already in a downturn. No longer having equity in their homes makes people feel less rich and thus less inclined to shop at the mall.

And having more homeowners underwater is likely to mean more eventual foreclosures, because it is hard for borrowers in financial trouble to refinance or sell their homes and pay off their mortgage if their debt exceeds the home's value. A foreclosed home, in turn, tends to lower the value of other homes in its neighborhood.

Among people who bought within the past five years, it's worse: 29 percent are underwater on their mortgages, according to an estimate by real-estate Web site Zillow.com.

According to Zillow, our home is one of those that is currently "underwater" because its estimated value has dropped $25,000 since we bought it, according to their algorithms. Given that we bought our home last year I don't doubt that we are underwater and in fact I expect our home value to only go down further. This is because the disparity between median house values and median incomes is still fairly stark even with the current depreciation in the neighborhood.

Here's what I mean: according to Zillow, the median household income in the area is about $46,000 while the median home price is around $345,000. This disparity is shocking when you apply some of the basic rules from the "old days" before we had the flood of easy credit that led up to the current crisis. For argument's sake, let's assume that everyone who moves to the area actually pays the traditional 20% down payment, even though the personal savings rate of the average American is negative. That means they need a mortgage of $276,000. Plugging that number into a simple mortgage calculator, assuming a 30 year loan at 5.75% interest, gives a monthly mortgage payment of over $1,600.

Using the traditional debt-to-income ratio of 0.28, a person with $46,000 in gross income shouldn't be taking on a monthly mortgage payment of much more than $1,100 because they would be hard pressed to afford it. Using another metric, the authors of the Complete Idiot's Guide to Buying and Selling a Home argue that you shouldn't get a mortgage over 2 1/2 times your household income, which puts the appropriate size of a mortgage for someone who lives in my neighborhood at around $115,000.
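To make the arithmetic concrete, here is a small Python sketch using the standard fixed-rate amortization formula. The income, price, interest rate and affordability rules of thumb are just the rough figures quoted in this post, not actual data, so treat the output as illustrative.

```python
def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-rate mortgage payment: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

median_income = 46_000            # rough median household income for the area
median_price  = 345_000           # rough median home price
down_payment  = 0.20 * median_price
loan          = median_price - down_payment           # $276,000

payment = monthly_payment(loan, 0.0575)               # ~ $1,611 / month
max_payment_by_dti = median_income * 0.28 / 12        # ~ $1,073 / month
max_loan_by_multiple = 2.5 * median_income            # ~ $115,000

print(f"Required payment:        ${payment:,.0f}/month")
print(f"28% debt-to-income cap:  ${max_payment_by_dti:,.0f}/month")
print(f"2.5x income loan cap:    ${max_loan_by_multiple:,.0f}")
```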

However you slice it even assuming a 20% down payment, the people in my neighborhood live in homes that they couldn't afford to get a legitimate mortgage on at today's prices. That is fundamentally broken. 

Things get particularly clear when you look at the chart below and realize that house prices rose over $100,000 in the past five years. 

A lot of people have started talking about "stabilizing home prices" and "bailing out home owners" because of underwater mortgages. In truth, easy credit caused houses to become overpriced especially when you consider that house prices were rising at a much faster rate than wages. Despite the current drop, house prices are still unrealistic and will need to come down further. Trying to prevent that from happening is like trying to have our cake and eat it too. You just can't.

I expect that more banks will end up having to create programs like Bank of America's Nationwide Homeownership Retention for CountryWide Customers which will modify mortgage principals and interest rates downwards in a move that will end up costing them over $8.6 billion but will make it more likely that their customers can afford to pay their mortgages. I'm surprised that it took a class action lawsuit to get this to happen instead of common sense. Then again it is 8.6 BILLION dollars. 

Note Now Playing: 50 Cent - When It Rains It Pours Note


 

Categories: Current Affairs | Personal

October 7, 2008
@ 03:37 PM

I logged in to my 401K account today and was greeted by the following message

Personal Rate of Return from 01/01/2008 to 10/06/2008 is -23.5%

Of course, it could have been worse,  I could have had it all in the stock market.

I've been chatting with co-workers who've only posted single digit percentage losses (i.e. their 401K is down less than 10% this year) and been surprised that every single person in that position had hedged their bets by keeping a large chunk of their 401K in cash. I remember Joshua advising me to do this a couple of months ago when things started looking bad, but I took it as paranoia; now I wish I had listened.

Of course, I'd still have the problem of having to trust the institution that was holding the cash like the guy from the MSNBC article excerpted below

Mani Behimehr, a home designer living in Tustin, Calif., isn't feeling reassured after what happened to WaMu and Wachovia. After he heard the news that WaMu had been seized and sold to JP Morgan, he rushed out to withdraw about $150,000 in savings and opened a new account at Wachovia only to learn about its sale to Citigroup two days later.

"I thought this is the strongest economy in the world; nothing like that happens in this country," said Behimehr, 46, who is originally from Iran.

At least I don't have to worry about living off of my 401(k) anytime soon.

Update: A commenter brought up that I should explain what a 401(k) account is for non-US readers. From wikipedia; in the United States of America, a 401(k) plan allows a worker to save for retirement while deferring income taxes on the saved money and earnings until withdrawal. The employee elects to have a portion of his or her wage paid directly, or "deferred," into his or her 401(k) account. In participant-directed plans (the most common option), the employee can select from a number of investment options, usually an assortment of mutual funds that emphasize stocks, bonds, money market investments, or some mix of the above.

Note Now Playing: Abba - Money, Money, Money Note


 

Categories: Current Affairs | Personal

A common practice among social networking sites is to ask users to import their contacts from one of the big email service providers (e.g. Yahoo! Mail, Hotmail or Gmail) as part of the sign up process. This is often seen as a way to bootstrap the user's social network on the site by telling the user who in their email address book is also a user of the site they have just joined. However, there is one problem with this practice which Jeremy Keith described as the password anti-pattern and I quote

The problems start when web apps start replicating bad design patterns as if they were viruses. I want to call attention to the most egregious of these anti-patterns.

Allowing users to import contact lists from other services is a useful feature. But the means have to justify the ends. Empowering the user to import data through an authentication layer like OAuth is the correct way to export data. On the other hand, asking users to input their email address and password from a third-party site like GMail or Yahoo Mail is completely unacceptable. Here’s why:

It teaches people how to be phished.

The reason these social networks request login credentials from new users is that they log in to the user's email account and then screen scrape the contents of their address books. For a long time, the argument for doing this has been that the big email services have not given them an alternative and are thus holding user data hostage.

This may have been true once but it isn't anymore. Google has the Contacts Data API, Yahoo! has their Address Book API and Microsoft has the Windows Live Contacts API. Each of these provides a user-centric authorization model where, instead of a user giving their email address and password to random sites, they log in at their email provider's site and then delegate authority to access their address book to the social network site. 

The only problem that remains is that each site that provides an address book or social graph API is reinventing the wheel both with regards to the delegated auth model they implement and the actual API for retrieving a user's contacts. This means that social networking sites that want to implement a contact import feature have to support a different API and delegated authorization model for each service they want to talk to even though each API and delegated auth model effectively does the same thing.

Just as OAuth has slowly been increasing in buzz as the standard that will sweep away the various proprietary delegated auth models we have on the Web today, there has been a parallel effort underway by a similarly dedicated set of individuals intent on standardizing contacts and social graph APIs on the Web. The primary output from this effort is the Portable Contacts API.

I've been reading the latest draft specification of the Portable Contacts API and below are some of the highlights as well as some thoughts on them

  • A service's Portable Contacts API endpoint needs to be auto-discoverable using XRDS-Simple (formerly YADIS).

  • The API supports both direct authorization, where a caller provides a username and password, and delegated authorization, where the caller passes a delegation token obtained by the application out-of-band. The former MUST support HTTP Basic Authentication while the latter MUST support OAuth. My initial thinking is that there must be some kind of mistake and the spec meant to say HTTP Digest Authentication, not HTTP Basic Authentication (which is described as insecure in the very RFC that defines it).

  • The API defines a specific URL structure that sites must expose. Specifically, /@me/@all which returns all of a user's contacts, /@me/@all/{id} which returns the contact with the given ID, and /@me/@self which returns contact info for the user must all be supported when appended to the base URI that is the Portable Contacts API end point (see the client sketch after this list). In general, being prescriptive about how services should structure their URI space is to be discouraged, as explained in Joe Gregorio's post No Fishing - or - Why 'robots.txt' and 'favicon.ico' are bad ideas and shouldn't be emulated, but since the service gets to control the base URI via XRDS-Simple this isn't as bad as the situation we have today with robots.txt and favicon.ico.

  • The API defines a set of query parameters for filtering (filterBy, filterOp & filterValue) and sorting (sortBy & sortOrder) of results. The API wisely acknowledges that it may be infeasible or expensive for services to support these operations and so supporting them is optional. However, services MUST indicate which parameters were ignored/not supported when returning responses to requests containing the aforementioned parameters. I definitely like the approach of having services indicate which parameter was ignored in a request because it wasn't supported. However, it would be nice to have an explicit mechanism for determining which features a provider supports up front, without having to resort to seeing which parameters to various API calls were ignored.

  • The API also defines query parameters for pagination (startIndex & count). These are pretty straightforward and are also optional to support.

  • There are query parameters to indicate which fields to return (the fields parameter, whose value is a comma delimited list) and what data format to return (the format parameter can be either JSON or XML). The definition of the XML format is full of errors but seems to be a simplistic mapping of JSON to XML. It also isn't terribly clear how type information is conveyed in the XML format. It clearly seems like having the API support XML is currently a half-baked afterthought within the spec.

  • Each result set begins with a series of numeric values – startIndex, itemsPerPage and totalResults, which are taken from OpenSearch – followed by a series of entries corresponding to each contact in the user's address book.

  • The API defines the schema of a contact as the union of fields from the vCard data format and those from OpenSocial. This means a contact can have basic fields like name, gender and birthday as well as more exotic fields like happiestWhen, profileSong and scaredOf. Interesting data models they are building over there in OpenSocial land. :)
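To make the shape of the API concrete, here is a minimal client sketch in Python. The base URI, access token and response field names are assumptions based on my reading of the draft (a real client would discover the endpoint via XRDS-Simple and obtain the token through a proper OAuth flow with a signed Authorization header), and the requests library is used for brevity, so treat this as illustrative rather than definitive.

```python
# Minimal client sketch for a hypothetical Portable Contacts provider.
import requests

BASE_URI = "https://contacts.example.com/poco"   # placeholder endpoint
ACCESS_TOKEN = "placeholder-oauth-token"         # placeholder token

def fetch_contacts(start_index=0, count=25):
    """GET /@me/@all -- a page of the signed-in user's contacts."""
    resp = requests.get(
        f"{BASE_URI}/@me/@all",
        params={
            "startIndex": start_index,     # pagination (optional to support)
            "count": count,
            "sortBy": "displayName",       # providers may ignore this
            "fields": "id,displayName,emails",
            "format": "json",
        },
        # Simplified: a real OAuth header carries signed request parameters.
        headers={"Authorization": f"OAuth {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Responses lead with the OpenSearch-style counters the spec borrows,
    # followed by the contact entries themselves.
    print(data.get("startIndex"), data.get("itemsPerPage"), data.get("totalResults"))
    return data.get("entry", [])           # "entry" per my reading of the draft

if __name__ == "__main__":
    for contact in fetch_contacts():
        print(contact.get("displayName"), contact.get("emails"))
```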

Except for the subpar work with regards to defining an XML serialization for the contacts schema this seems like a decent spec.

If anything, I'm concerned by the growing number of interdependent specs that seem poised to have a significant impact on the Web and yet are being defined outside of formal standards bodies, in closed processes funded by big companies. For example, about half of the references in the Portable Contacts API spec are to IETF RFCs while the other half are to specs primarily authored by Google and Yahoo! employees outside of any standards body (OpenSocial, OAuth, OpenSearch, XRDS-Simple, etc.). I've previously questioned the rise of semi-closed, vendor-driven consortiums in the area of Web specifications given that we have perfectly good and open standards bodies like the IETF for defining the Open Web, but this led to personal attacks on TechCrunch with no real reasons given for why Web standards need to go in this direction. I find that worrying. 

Note Now Playing: Madonna - Don't Cry For Me Argentina Note


 

Nick O'Neill of AllFacebook.com recently posted a blog entry entitled The Future of Widgets on Facebook: Dead where he wrote

As a joke I created the Bush Countdown Clock when the platform launched and amazingly I attracted close to 50,000 users. While the application was nothing more than a simple flash badge, it helped a lot of people express themselves. Expression is not Facebook’s purpose though, sharing is. Widgets or badges that help users express their personal beliefs, ideals, and personality are now harder to find with the new design.

Thanks to the redesign all the badges which were “cluttering” the profile have been moved to a “Boxes” tab which most people don’t visit apparently. When the new profile was first rolled out, the traffic to my application actually jumped a little but oddly enough on September 11th, things took a turn for the worse. I’m not sure what happened but my guess is that a lot of the profiles started to get shifted over.
...
It’s clear though that widgets have not survived the shift over and my guess is that within a matter of weeks we will see most top-performing widget applications practically disappear.

-Bush Countdown Clock Daily Traffic Graph-

This is one aspect of the Facebook redesign that I didn't consider in my original post on What You Can Learn from the Facebook Redesign. Although moving the various applications which are basically badges for self expression, like Bumper Sticker, does reduce page load times, relegating them to an infrequently visited tab guarantees they are less useful (people don't see them on my profile) and less likely to spread virally (people don't see them on my profile and say "I gotta have that"). On the other hand, applications that are primarily about users interacting with each other such as Scrabble and We're Related should still do fine.

Application developers have already started inventing workarounds to Facebook's changes which penalize their apps. For example, the Bumper Sticker application now focuses on adding items to your Mini-Feed instead of adding a badge/box to your profile. This gives it valuable placement on your profile (if only for a short time) and a small chance that it will show up in the News Feeds of your friends.

This aspect of the redesign has definitely attacked what many had started calling the MySpace-ization of Facebook, which resulted in the need for a Facebook Profile Clean Up Tool. It will be interesting to see whether this leads to new classes of applications becoming popular on the site or whether it is just another chapter in the cat & mouse game of spreading virally on the Facebook platform.

Note Now Playing: Game - We Don't Play No Games (feat. G-Unit) Note


 

Categories: Social Software

For a hot, pre-IPO startup, Facebook seems to be going through a surprising exodus of co-founders (how many do they have?) and well-respected key employees. Just this year alone we've seen the following exits

If you combine the list above with the New York Times article, The Facebooker Who Friended Obama which states

Mr. [Chris] Hughes, 24, was one of four founders of Facebook. In early 2007, he left the company to work in Chicago on Senator Obama’s new-media campaign. Leaving behind his company at such a critical time would appear to require some cognitive dissonance: political campaigns, after all, are built on handshakes and persuasion, not computer servers, and Mr. Hughes has watched, sometimes ruefully, as Facebook has marketed new products that he helped develop.

That makes three departures by people named as co-founders of Facebook. Of course, it isn't unusual for co-founders to leave a business they helped start even if the business is on the path to being super successful (see Paul Allen), but it is an interesting trend nonetheless.

Note Now Playing: T.I. - My Life, Your Entertainment (Ft. Usher) Note


 

October 1, 2008
@ 04:22 PM

Werner Vogels, CTO of Amazon, writes in his blog post Expanding the Cloud: Microsoft Windows Server on Amazon EC2 that

With today's announcement that Microsoft Windows Server is available on Amazon EC2 we can now run the majority of popular software systems in the cloud. Windows Server ranked very high on the list of requests by customers so we are happy that we will be able to provide this.

One particular area that customers have been asking for Amazon EC2 with Windows Server was for Windows Media transcoding and streaming. There is a range of excellent codecs available for Windows Media and there is a large amount of legacy content in those formats. In past weeks I met with a number of folks from the entertainment industry and often their first question was: when can we run on windows?

There are many different reasons why customers have requested Windows Server; for example many customers want to run ASP.NET websites using Internet Information Server and use Microsoft SQL Server as their database. Amazon EC2 running Windows Server enables this scenario for building scalable websites. In addition, several customers would like to maintain a global single Windows-based desktop environment using Microsoft Remote Desktop, and Amazon EC2 is a scalable and dependable platform on which to do so.

This is great news. I'm starting a month long vacation as a precursor to my paternity leave since the baby is due next week and was looking to do some long overdue hacking in-between burping the baby and changing diapers. My choices were

  • Facebook platform app
  • Google App Engine app
  • EC2/S3/EBS app

The problem with Amazon was the need to use Linux which I haven't seriously messed with since my college days running SuSe. If I could use Windows Server and ASP.NET while still learning the nuances of EC2/S3/EBS that would be quite sweet.

I wonder who I need to holler at to get into the Windows Server on EC2 beta? Maybe Derek can hook me up. Hmmmmm.

Note Now Playing: Lil Wayne - Best Rapper Alive [Explicit] Note


 

Categories: Platforms | Web Development

I'm sure y'all have seen this link floating around the Internets already but I wanted to have it here for posterity. Below is an excerpt from a 1999 article in the New York Times by Steven Holmes titled Fannie Mae Eases Credit To Aid Mortgage Lending 

In a move that could help increase home ownership rates among minorities and low-income consumers, the Fannie Mae Corporation is easing the credit requirements on loans that it will purchase from banks and other lenders.

The action, which will begin as a pilot program involving 24 banks in 15 markets -- including the New York metropolitan region -- will encourage those banks to extend home mortgages to individuals whose credit is generally not good enough to qualify for conventional loans. Fannie Mae officials say they hope to make it a nationwide program by next spring.

Fannie Mae, the nation's biggest underwriter of home mortgages, has been under increasing pressure from the Clinton Administration to expand mortgage loans among low and moderate income people and felt pressure from stock holders to maintain its phenomenal growth in profits.

In moving, even tentatively, into this new area of lending, Fannie Mae is taking on significantly more risk, which may not pose any difficulties during flush economic times. But the government-subsidized corporation may run into trouble in an economic downturn, prompting a government rescue similar to that of the savings and loan industry in the 1980's.

''From the perspective of many people, including me, this is another thrift industry growing up around us,'' said Peter Wallison a resident fellow at the American Enterprise Institute. ''If they fail, the government will have to step up and bail them out the way it stepped up and bailed out the thrift industry.''

Although the article is particularly prescient there is one thing it didn't predict, which is how much this risk of failure would be compounded while remaining hidden due to the rise of Mortgage Backed Securities. Still, it is definitely interesting reading to see that someone called the current clusterfuck as far back as 1999.

Way to go Clinton administration, as always the road to hell was paved with good intentions.

Note Now Playing: Young Jeezy - The Recession (Intro) Note


 

Categories: Current Affairs