Over the past few days I've been mulling over the recent news from Yahoo! that they are building a Facebook-like platform based on OpenSocial. I find this interesting given that a number of people have come to the conclusion that Facebook is slowly killing its widget platform in order to replace it with Facebook Connect.

The key reason developers believe Facebook is killing off its platform is captured in Nick O'Neill's post entitled Scott Rafer: The Facebook Platform is Dead, which states:

When speaking at the Facebook developer conference today in Berlin, Scott Rafer declared that Facebook platform dead. He posted statistics including one that I posted that suggests Facebook widgets are dead. Lookery’s own statistics from Quantcast suggest that their publisher traffic has been almost halved since the new site design was released. Ultimately, I think we may see an increase in traffic as users become educated on the new design but there is no doubt that developers were impacted significantly.

So what is Scott’s solution for developers looking to thrive following the shift to the new design? Leave the platform and jump on the Facebook Connect opportunity.

The bottom line is that by moving applications off of the profile page in their recent redesign, Facebook has reduced the visibility of these applications, thus reducing their page views and their ability to spread virally. Some may think that the impact of these changes was unforeseen; however, the Facebook team is obsessive about testing the impact of their UX changes, so it is extremely unlikely that they were unaware that the redesign would negatively impact Facebook applications.

The question to ask then is why Facebook would knowingly damage a platform that has been uniformly praised across the industry and has had established Web players like Google and Yahoo! scrambling to deploy copycat efforts. Alex Iskold over at ReadWriteWeb believes he has the answer in his post Why Platforms Are Letting Us Down - And What They Should Do About It, which contains the following excerpt:

When the Facebook platform was unveiled in 2007, it was called genius. Never before had a company in a single stroke enabled others to tap into millions of its users completely free. The platform was hailed as a game changer under the famous mantra "we built it and they will come". And they did come, hundreds of companies rushing to write Facebook applications. Companies and VC funds focused specifically on Facebook apps.

It really did look like a revolution, but it didn't last. The first reason was that Facebook apps quickly arranged themselves on a power law curve. A handful of apps (think Vampires, Byte Me and Sell My Friends) landed millions of users, but those in the pack had hardly any. The second problem was, ironically, the bloat. Users polluted their profiles with Facebook apps and no one could find anything in their profiles. Facebook used to be simple - pictures, wall, friends. Now each profile features a zoo of heterogenous apps, each one trying to grab the user's attention to take advantage of the network effect. Users are confused.

Worst of all, the platform had no infrastructure to monetize the applications. When Sheryl Sandberg arrived on the scene and looked at the balance sheet, she spotted the hefty expense that was the Facebook platform. Trying to live up to a huge valuation isn't easy, and in the absense of big revenues people rush to cut costs. Since it was both an expense and users were confused less than a year after its glorious launch, Facebook decided to revamp its platform.

The latest release of Facebook, which was released in July, makes it nearly impossible for new applications to take advantage of the network effect. Now users must first install the application, then find it under the application menu or one of the tabs, then check a bunch of boxes to add it to their profile (old applications are grand-daddied in). Facebook has sent a clear message to developers - the platform is no longer a priority.

Alex's assertion is that after Facebook looked at the pros and cons of its widget platform, the company came to the conclusion that the platform was turning into a cost center instead of a way to improve the value of Facebook to its users. There is evidence that applications built on Facebook's platform did cause negative reactions from its users. For example, there was the creation of the "This has got to stop…pointless applications are ruining Facebook" group, which at its height had half a million Facebook users protesting the behavior of Facebook apps. In addition, the creation of Facebook's Great Apps program along with the guiding principles for building Facebook applications implies that the Facebook team realized that applications being built on their platform typically don't have their users' best interests at heart.

This brings up the interesting point that although there has been a lot of discussion on how Facebook apps make money, there haven't been similar conversations on how the application platform improves Facebook's bottom line. There is definitely a win-win equation when so-called "great apps" like iLike and Causes, which positively increase user engagement, are built on Facebook's platform. However, there is also a long tail of applications that try their best to spread virally at the cost of decreasing user satisfaction with the Facebook experience. These dissatisfied users likely end up reducing their usage of Facebook, thus actually costing Facebook users and page views. It is quite possible that the few "great apps" built on the Facebook platform do not outweigh the number of not-so-great apps which have caused users to protest in the past. This would confirm Alex Iskold's suspicions about why Facebook has started sabotaging the popularity of applications built on its platform and has started emphasizing partnerships via Facebook Connect.


A similar situation has occurred with regards to the Java platform and Sun Microsystems. The sentiment is captured in a JavaWorld article by Josh Fruhlinger entitled Sun melting down, and where's Java?, which contains the following excerpt:

one of the most interesting things about the coverage of the company's problems is how Java figures into the conversation, which is exactly not at all. In most of the articles, the word only appears as Sun's stock ticker; the closest I could find to a mention is in this AP story, which notes that "Sun's strategy of developing free, 'open-source' software and giving it away to spur sales of its high-end computer servers and support services hasn't paid off as investors would like." Even longtime tech journalist Ashlee Vance, when gamely badgering Jon Schwartz for the New York Time about whether Sun would sell its hardware division and focus on software, only mentions Solaris and MySQL in discussing the latter.

Those in the Java community no doubt believe that Java is too big to fail, that Sun can't abandon it because it's too important, even if it can't tangibly be tied to anything profitable. But if Sun's investors eventually dismember the company to try to extract what value is left in it, I'm not sure where Java will fit into that plan.

It is interesting to note that after a decade of investment in the Java platform, it is hard to point to what concrete benefits Sun has gotten from being the originator and steward of the Java platform and programming language. It is definitely another example of a platform that may have benefited the applications built on it but didn't really benefit the platform vendor as expected.

The lesson here is that building a platform isn't just about making the developers who use the platform successful; it is also about making sure that the platform itself furthers the goals of the company that built it in the first place.

Now Playing: Kardinal Offishall - Dangerous (feat. Akon)


 

Categories: Platforms

A couple of months ago, Russell Beattie wrote a post about the end of his startup entitled The end of Mowser which is excerpted below

The argument up to now has been simply that there are roughly 3 billion phones out there, and that when these phones get on the Internet, their vast numbers will outweigh PCs and tilt the market towards mobile as the primary web device. The problem is that these billions of users *haven't* gotten on the Internet, and they won't until the experience is better and access to the web is barrier-free - and that means better devices and "full browsers". Let's face it, you really aren't going to spend any real time or effort browsing the web on your mobile phone unless you're using Opera Mini, or have a smart phone with a decent browser - as any other option is a waste of time, effort and money. Users recognize this, and have made it very clear they won't be using the "Mobile Web" as a substitute for better browsers, rather they'll just stay away completely.

In fact, if you look at the number of page views of even the most popular mobile-only websites out there, they don't compare to the traffic of popular blogs, let alone major portals or social networks.

Let me say that again clearly, the mobile traffic just isn't there. It's not there now, and it won't be.

What's going to drive that traffic eventually? Better devices and full-browsers. M-Metrics recently spelled it out very clearly - in the US 85% of iPhone owners browsed the web vs. 58% of smartphone users, and only 13% of the overall mobile market. Those numbers *may* be higher in other parts of the world, but it's pretty clear where the trend line is now. (What a difference a year makes.) It would be easy to say that the iPhone "disrupted" the mobile web market, but in fact I think all it did is point out that there never was one to begin with.

I filed away Russell's post as interesting at the time but hadn't really experienced it first hand until I recently switched to using Skyfire as my primary browser on my mobile phone. It has made a world of difference in how I use my phone. No longer am I restricted to crippled versions of popular sites, nor do I have to lose features when I visit the regular versions of the pages. I can view the real version of my news feed on Facebook. Vote up links on reddit or Digg. And reading blogs is no longer an exercise in frustration due to CSS issues or problems rendering widgets. Unsurprisingly, my usage of the Web on my phone has pretty much doubled.

This definitely brings to the forefront how ridiculous of an idea it was to think that we need a "mobile Web" complete with its own top level domain (.mobi). Which makes more sense, that every Web site in the world should create duplicate versions of their pages for mobile phones and regular browsers or that software + hardware would eventually evolve to the point where I can run a full fledged browser on the device in my pocket? Thanks to the iPhone, it is now clear to everyone that this idea of a second class Web for mobile phones was a stopgap solution at best whose time is now past. 

One other thing I find interesting is treating the iPhone as a separate category from "smartphones" in the highlighted quote. This is similar to a statement made by Walt Mossberg when he reviewed Google's Android. That article began as follows

In the exciting new category of modern hand-held computers — devices that fit in your pocket but are used more like a laptop than a traditional phone — there has so far been only one serious option. But that will all change on Oct. 22, when T-Mobile and Google bring out the G1, the first hand-held computer that’s in the same class as Apple’s iPhone.

The key feature that the iPhone and Android have in common that separates them from regular "smartphones" is that they both include a full featured browser based on Webkit. The other features like downloadable 3rd party applications, wi-fi support, rich video support, GPS, and so on have been available on phones running Windows Mobile for years. This shows how important having a full Web experience was for mobile phones and just how irrelevant the notion of a "mobile Web" has truly become.

Now Playing: Kilo Ali - Lost Y'all Mind


 

Categories: Technology

The second most interesting announcement out of PDC this morning is that Windows Live ID is becoming an OpenID Provider. The information below explains how to try it out and give feedback to the team responsible.

Try It Now. Tell Us What You Think

We want you to try the Windows Live ID OpenID Provider CTP release, let us know your feedback, and tell us about any problems you find.

To prepare:

  1. Go to https://login.live-int.com and use the sign-up button to set up a Windows Live ID test account in the INT environment.
  2. Go to https://login.live-int.com/beta/ManageOpenID.srf to set up your OpenID test alias.

Then:

  • Users - At any Web site that supports OpenID 2.0, type openid.live-INT.com in the OpenID login box to sign in to that site by means of your Windows Live ID OpenID alias.
  • Library developers - Test your libraries against the Windows Live ID OP endpoint and let us know of any problems you find.
  • Web site owners - Test signing in to your site by using a Windows Live ID OpenID alias and let us know of any problems you find.
  • You can send us feedback by e-mail at openidfb@microsoft.com

This is awesome news. I've been interested in Windows Live supporting OpenID for a while and I'm glad to see that we've taken the plunge. Please try it out and send the team your feedback.

I've tried it out already and sent some initial feedback. In general, my feedback was on applying the lessons from the Yahoo! OpenID Usability Study since it looks like our implementation has some of the same usability issues that inspired Jeff Atwood's rants about Yahoo's OpenID implementation. Since it is still a Community Technology Preview, I'm sure the user experience will improve as feedback trickles in.
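To make the relying-party side of this concrete, here is a minimal sketch of how a site could construct the OpenID 2.0 checkid_setup redirect once it has discovered an OP endpoint. The endpoint and return URLs below are made-up placeholders, not the actual Windows Live ID endpoint (a real client would obtain that by performing discovery on openid.live-INT.com):

```python
from urllib.parse import urlencode

def build_openid_request(op_endpoint, claimed_id, return_to, realm):
    """Build the OpenID 2.0 checkid_setup URL that a relying party
    redirects the user to. With the identifier_select value, the OP
    decides which identifier to assert for the signed-in user."""
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": claimed_id,
        "openid.identity": claimed_id,
        "openid.return_to": return_to,
        "openid.realm": realm,
    }
    return op_endpoint + "?" + urlencode(params)

# Placeholder endpoint; the real one comes from OpenID discovery.
url = build_openid_request(
    "https://op.example.com/auth",
    "http://specs.openid.net/auth/2.0/identifier_select",
    "https://rp.example.com/openid/return",
    "https://rp.example.com/",
)
print(url)
```

After the user authenticates at the OP, the browser comes back to openid.return_to with signed response parameters that the relying party must verify.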

Kudos to Jorgen Thelin and the rest of the folks on the Identity Services team for getting this out. Great work, guys.

UPDATE: Angus Logan posted a comment with a link to the following screencast of the current user experience when using Windows Live ID as an OpenID provider.


Now Playing: Christina Aguilera - Keeps Gettin' Better


 

Categories: Windows Live

October 27, 2008
@ 05:39 PM

Just because you aren't attending Microsoft's Professional Developer Conference doesn't mean you can't follow the announcements. The most exciting announcement so far [from my perspective] has been Windows Azure which is described as follows from the official site

The Azure™ Services Platform (Azure) is an internet-scale cloud services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. Azure’s flexible and interoperable platform can be used to build new applications to run from the cloud or enhance existing applications with cloud-based capabilities. Its open architecture gives developers the choice to build web applications, applications running on connected devices, PCs, servers, or hybrid solutions offering the best of online and on-premises.

Azure reduces the need for up-front technology purchases, and it enables developers to quickly and easily create applications running in the cloud by using their existing skills with the Microsoft Visual Studio development environment and the Microsoft .NET Framework. In addition to managed code languages supported by .NET, Azure will support more programming languages and development environments in the near future. Azure simplifies maintaining and operating applications by providing on-demand compute and storage to host, scale, and manage web and connected applications. Infrastructure management is automated with a platform that is designed for high availability and dynamic scaling to match usage needs with the option of a pay-as-you-go pricing model. Azure provides an open, standards-based and interoperable environment with support for multiple internet protocols, including HTTP, REST, SOAP, and XML.

It will be interesting to read what developers make of this announcement and what kind of apps start getting built on this platform. I'll also be on the lookout for any in-depth discussions of the platform; there is lots to chew on in this announcement.

For a quick overview of what Azure means to developers, take a look at Azure for Business and Azure for Web Developers.

Now Playing: Guns N' Roses - Welcome to the Jungle


 

Categories: Platforms | Windows Live

John McCrea of Plaxo has written a cleverly titled guest post on TechCrunchIT, Facebook Connect and OpenID Relationship Status: “It’s Complicated”, where he makes the argument that Facebook Connect is a competing technology to OpenID but the situation is complicated by Facebook developers engaging in discussions with the OpenID community. He writes:

You see, it’s been about a month since the first implementation of Facebook Connect was spotted in the wild over at CBS’s celebrity gossip site, TheInsider.com. Want to sign up for the site? Click a single button. A little Facebook window pops up to confirm that you want to connect via your Facebook account. One more click – and you’re done. You’ve got a new account, a mini profile with your Facebook photo, and access to that subset of your Facebook friends who have also connected their accounts to TheInsider. Oh, and you can have your activities on TheInsider flow into your Facebook news feed automatically. All that, without having to create and remember a new username/password pair for the site. Why, it’s just like the vision for OpenID and the Open Stack – except without a single open building block under the hood!
...
After the intros, Allen Tom of Yahoo, who organized the event, turned the first session over Max Engel of MySpace, who in turn suggested an alternative – why not let Facebook’s Julie Zhuo kick it off instead? And for the next hour, Julie took us through the details of Facebook Connect and the decisions they had to make along the way to get the user interface and user experience just right. It was not just a presentation; it was a very active and engaged discussion, with questions popping up from all over the room. Julie and the rest of the Facebook team were engaged and eager to share what they had learned.

What the heck is going on here? Is Facebook preparing to go the next step of open, switching from the FB stack to the Open Stack? Only time will tell. But one thing is clear: Facebook Connect is the best thing ever for OpenID (and the rest of the Open Stack). Why? Because Facebook has set a high bar with Facebook Connect that is inspiring everyone in the open movement to work harder and faster to bring up the quality of the UI/UX for OpenID and the Open Stack.

There are a number of points worth discussing from the above excerpt. The first is the implication that OpenID is an equivalent technology to Facebook Connect. This is clearly not the case. OpenID just allows you to delegate the act of authenticating a user to another website so the user doesn't need to create credentials (i.e. username + password) on your site. OpenID alone doesn't get you the user's profile data, nor does it allow you to pull in the authenticated user's social graph from the other site or publish activities to their activity feed. For that, you would need other "Open brand" technologies like OpenID Attribute Exchange, Portable Contacts and OpenSocial. So it is fairer to describe the contest as Facebook Connect vs. OpenID + OpenID Attribute Exchange + Portable Contacts + OpenSocial.

The question then is who should we root for? At the end of the day, I don't think it makes a ton of sense for websites to have to target umpteen different APIs that do the same thing instead of targeting one standard implemented by multiple services. Specifically, it seems ridiculous to me that TheInsider.com will have to code against Facebook Connect to integrate Facebook accounts into their site but code against something else if they want to integrate MySpace accounts and yet another API if they want to integrate LinkedIn accounts and so on. This is an area that is crying out for standardization.
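To illustrate the pain, consider the adapter layer a site like TheInsider.com would end up writing today. The sketch below is entirely hypothetical (the class names and login URLs are illustrative, not real API endpoints); the point is that each proprietary "connect" offering forces yet another adapter, whereas a single standard would need only one:

```python
class SignInProvider:
    """Interface the site codes against once; one adapter per vendor API."""
    def login_url(self, return_to):
        raise NotImplementedError

class FacebookConnectProvider(SignInProvider):
    # Illustrative endpoint, not the real Facebook Connect URL.
    def login_url(self, return_to):
        return "https://facebook.example/connect?next=" + return_to

class MySpaceProvider(SignInProvider):
    # Illustrative endpoint; MySpace integration needs its own adapter.
    def login_url(self, return_to):
        return "https://myspace.example/auth?return=" + return_to

def start_login(provider, return_to):
    # The rest of the site never sees which vendor is behind the interface.
    return provider.login_url(return_to)

print(start_login(FacebookConnectProvider(), "https://theinsider.example/done"))
```

Every new social network the site wants to support means another subclass, another credential store, and another set of quirks to maintain.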

Unfortunately, the key company providing thought leadership in this area is Facebook, and for now they are building their solution with proprietary technologies instead of de jure or de facto ("Open brand") standards. This is unsurprising given that it takes three or four different specs, in varying states of completeness and created by different audiences, to deliver the scenarios they are currently interested in. What is encouraging is that Facebook developers are working with OpenID implementers by sharing their knowledge. However, OpenID isn't the only technology needed to satisfy this scenario, and I wonder if Facebook will be similarly engaged with the folks working on Portable Contacts and OpenSocial.

Facebook Connect is a step in the right direction when it comes to bringing the vision of social network interoperability to fruition. The key question is whether we will see effective open standards emerge that target the same scenarios [which eventually even Facebook could adopt] or whether competitors will offer their own proprietary alternatives. So far it sounds like the latter is happening, which means unnecessary reinvention of the wheel for sites that want to support "connecting" with multiple social networking sites.

PS: If OpenID phishing is a concern now when the user is redirected to the ID provider's site to login, it seems Facebook Connect is even worse since all it does is provide a pop over. I wonder if this is because the Facebook folks think the phishing concerns are overblown.

Now Playing: 2Pac - Mind Made Up (feat. Daz, Method Man & Redman)


 

Early this week, Roy Fielding wrote a post entitled REST APIs must be hypertext-driven where he criticized the SocialSite REST API (a derivative of the OpenSocial REST API) for violating some constraints of the Representational State Transfer architectural style (aka REST). Roy's key criticisms were:

API designers, please note the following rules before calling your creation a REST API:

  • A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
  • A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC's functional coupling].
  • ..
  • A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]

In reading some of the responses to Roy's post on programming.reddit, it seems there are a number of folks who found it hard to glean practical advice from Roy's post. I thought it would be useful to go over his post in more depth and with some examples.

The key thing to remember is that REST is about building software that scales to usage on the World Wide Web by being a good participant of the Web ecosystem. Ideally a RESTful API should be designed to be implementable by thousands of websites and consumed by hundreds of applications running on dozens of platforms with zero coupling between the client applications and the Web services. A great example of this is RSS/Atom feeds which happen to be one of the world's most successful RESTful API stories.

This notion of building software that scales to Web-wide usage is critical to understanding Roy's points above. The first point above is that a RESTful API should primarily be concerned with data payloads and not with defining how URI end points should handle various HTTP methods. For one, sticking to defining data payloads which are then registered as standard MIME types gives maximum reusability of the technology. The specifications for RSS 2.0 (application/rss+xml) and the Atom syndication format (application/atom+xml) primarily focus on defining the data format and how applications should process feeds independent of how they were retrieved. In addition, both formats are aimed at being standard formats that can be utilized by any Web site as opposed to being tied to a particular vendor or Web site, which has aided their adoption. Unfortunately, few have learned from these lessons and we have people building RESTful APIs with proprietary data formats that aren't meant to be shared. My current favorite example of this is social graph/contacts APIs, which seem to be getting reinvented every six months. Google has the Contacts Data API, Yahoo! has their Address Book API, Microsoft has the Windows Live Contacts API, Facebook has their friends REST APIs and so on. Each of these APIs claims to be RESTful in its own way yet they are helping to fragment the Web instead of helping to grow it. There have been some moves to address this with the OpenSocial-influenced Portable Contacts API but it too shies away from standard MIME types and instead creates dependencies on URL structures to dictate how the data payloads should be retrieved/processed.
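As a rough sketch of what "the processing model lives in the media type" means in practice, a feed consumer can branch on the declared Content-Type rather than on which site or URL the document came from. The function and sample below are illustrative, not taken from any particular library:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def feed_titles(content_type, body):
    """Dispatch on the declared media type, not on the URL the feed
    came from: the processing rules belong to the format itself."""
    mime = content_type.split(";")[0].strip()
    root = ET.fromstring(body)
    if mime == "application/atom+xml":
        return [e.findtext(ATOM_NS + "title") for e in root.iter(ATOM_NS + "entry")]
    if mime == "application/rss+xml":
        return [item.findtext("title") for item in root.iter("item")]
    raise ValueError("unsupported media type: " + mime)

# Minimal made-up RSS 2.0 document to exercise the dispatcher.
rss = ("<rss version=\"2.0\"><channel>"
       "<item><title>First post</title></item>"
       "</channel></rss>")
print(feed_titles("application/rss+xml", rss))  # ['First post']
```

Because the dispatch key is the media type, the same client works against any of the thousands of sites that serve these formats, with zero knowledge of their URL structures.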

One bad practice Roy calls out, which is embraced by the Portable Contacts and SocialSite APIs, is requiring a specific URL structure for services that implement the API. Section 6.2 of the current Portable Contacts API spec states the following

A request using the Base URL alone MUST yield a result, assuming that adequate authorization credentials are provided. In addition, Consumers MAY append additional path information to the Base URL to request more specific information. Service Providers MUST recognize the following additional path information when appended to the Base URL, and MUST return the corresponding data:

  • /@me/@all -- Return all contact info (equivalent to providing no additional path info)
  • /@me/@all/{id} -- Only return contact info for the contact whose id value is equal to the provided {id}, if such a contact exists. In this case, the response format is the same as when requesting all contacts, but any contacts not matching the requested ID MUST be filtered out of the result list by the Service Provider
  • /@me/@self -- Return contact info for the owner of this information, i.e. the user on whose behalf this request is being made. In this case, the response format is the same as when requesting all contacts, but any contacts not matching the requested ID MUST be filtered out of the result list by the Service Provider.

The problem with this approach is that it assumes that every implementer will have complete control of their URI space and that clients should have URL structures baked into them. The reason this practice is a bad idea is well documented in Joe Gregorio's post No Fishing - or - Why 'robots.txt and 'favicon.ico' are bad ideas and shouldn't be emulated where he lists several reasons why hard coded URLs are a bad idea. The reasons against include lack of extensibility and poor support for people in hosted environments who may not fully control their URI space. The interesting thing to note is that both the robots.txt and favicon.ico scenarios eventually developed mechanisms to support using hyperlinks on the source page (i.e. noindex and rel="shortcut icon") instead of hard coded URIs since that practice doesn't scale to Web-wide usage.

The latest drafts of the OpenSocial specification have a great example of how a service can use existing technologies such as URI templates to make even complicated URL structures flexible and discoverable without having to force every client and service to hardcode a specific URL structure. Below is an excerpt from the discovery section of the current OpenSocial REST API spec:

A container declares what collection and features it supports, and provides templates for discovering them, via a simple discovery document. A client starts the discovery process at the container's identifier URI (e.g., example.org). The full flow is available at http://xrds-simple.net/core/1.0/; in a nutshell:

  1. Client GETs {container-url} with Accept: application/xrds+xml
  2. Container responds with either an X-XRDS-Location: header pointing to the discovery document, or the document itself.
  3. If the client received an X-XRDS-Location: header, follow it to get the discovery document.

The discovery document is an XML file in the same format used for OpenID and OAuth discovery, defined at http://xrds-simple.net/core/1.0/:

<XRDS xmlns="xri://$xrds">
  <XRD xmlns:simple="http://xrds-simple.net/core/1.0" xmlns="xri://$XRD*($v*2.0)" xmlns:os="http://ns.opensocial.org/2008/opensocial" version="2.0">
    <Type>xri://$xrds*simple</Type>
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/people</Type>
      <os:URI-Template>http://api.example.org/people/{guid}/{selector}{-prefix|/|pid}</os:URI-Template>
    </Service>
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/groups</Type>
      <os:URI-Template>http://api.example.org/groups/{guid}</os:URI-Template>
    </Service>
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/activities</Type>
      <os:URI-Template>http://api.example.org/activities/{guid}/{appid}/{selector}</os:URI-Template>
    </Service>
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/appdata</Type>
      <os:URI-Template>http://api.example.org/appdata/{guid}/{appid}/{selector}</os:URI-Template>
    </Service>
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/messages</Type>
      <os:URI-Template>http://api.example.org/messages/{guid}/{selector}</os:URI-Template>
    </Service>
  </XRD>
</XRDS>

This approach makes it possible for a service to expose the OpenSocial end points however it sees fit without clients having to expect a specific URL structure.
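To see how little a client needs to assume, here is a sketch of consuming such a discovery document in Python. It maps each Service Type to its URI-Template and expands only the simple {var} form (the draft URI Templates operators like {-prefix|/|pid} would need a fuller implementation); the sample document is trimmed from the excerpt above:

```python
import xml.etree.ElementTree as ET

XRD_NS = "{xri://$XRD*($v*2.0)}"
OS_NS = "{http://ns.opensocial.org/2008/opensocial}"

def service_templates(xrds_doc):
    """Map each declared service Type to its URI-Template, so the client
    learns the server's URL structure instead of hardcoding it."""
    root = ET.fromstring(xrds_doc)
    templates = {}
    for svc in root.iter(XRD_NS + "Service"):
        stype = svc.findtext(XRD_NS + "Type")
        tmpl = svc.findtext(OS_NS + "URI-Template")
        if stype and tmpl:
            templates[stype] = tmpl
    return templates

def expand(template, **values):
    """Expand only the simple {var} template form."""
    for name, value in values.items():
        template = template.replace("{" + name + "}", value)
    return template

sample = """<XRDS xmlns="xri://$xrds">
  <XRD xmlns="xri://$XRD*($v*2.0)" xmlns:os="http://ns.opensocial.org/2008/opensocial" version="2.0">
    <Service>
      <Type>http://ns.opensocial.org/2008/opensocial/groups</Type>
      <os:URI-Template>http://api.example.org/groups/{guid}</os:URI-Template>
    </Service>
  </XRD>
</XRDS>"""

templates = service_templates(sample)
url = expand(templates["http://ns.opensocial.org/2008/opensocial/groups"], guid="alice")
print(url)  # http://api.example.org/groups/alice
```

If the container later reorganizes its URI space, only the templates in the discovery document change; clients built this way keep working unmodified.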

Similarly, links should be used for describing relationships between resources in the various payloads instead of expecting hard-coded URL structures. Again, I'm drawn to the example of RSS & Atom feeds, where link relations are used for defining the permalink to a feed item, the link to related media files (i.e. podcasts), links to comments, etc. Applications can simply examine the <enclosure> element instead of expecting that every Web site that supports podcasts exposes a /@rss/{id}/@podcasts URL.
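As a sketch of the link-relation approach, here is how a client could pull the podcast enclosure out of an Atom entry by inspecting rel attributes in the payload itself (the entry is a made-up sample):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def links_by_rel(entry_xml, rel):
    """Follow typed links carried in the payload instead of assuming
    anything about the publishing site's URL layout."""
    entry = ET.fromstring(entry_xml)
    return [l.get("href") for l in entry.iter(ATOM + "link") if l.get("rel") == rel]

entry = """<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Episode 12</title>
  <link rel="alternate" href="http://example.org/2008/ep12"/>
  <link rel="enclosure" type="audio/mpeg" href="http://example.org/media/ep12.mp3"/>
</entry>"""
print(links_by_rel(entry, "enclosure"))  # ['http://example.org/media/ep12.mp3']
```

The same code works against any Atom feed on the Web, because the relationship is declared in the document rather than baked into a URL convention.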

Thus it is plain to see that hyperlinks are important both for discovery of service end points and for describing relationships between resources in a loosely coupled way.

Now Playing: Prince - When Doves Cry


 

October 19, 2008
@ 08:47 AM

Tim Bray has a thought provoking post on embracing cloud computing entitled Get In the Cloud where he brings up the problem of vendor lock-in. He writes

Tech Issue · But there are two problems. The small problem is that we haven’t quite figured out the architectural sweet spot for cloud platforms. Is it Amazon’s EC2/S3 “Naked virtual whitebox” model? Is it a Platform-as-a-service flavor like Google App Engine? We just don’t know yet; stay tuned.

Big Issue · I mean a really big issue: if cloud computing is going to take off, it absolutely, totally, must be lockin-free. What that means is that if I’m deploying my app on Vendor X’s platform, there have to be other vendors Y and Z such that I can pull my app and its data off X and it’ll all run with minimal tweaks on either Y or Z.

At the moment, I don’t think either the Amazon or Google offerings qualify.

Are we so deranged here in the twenty-first century that we’re going to re-enact, wide-eyed, the twin tragedies of the great desktop-suite lock-in and the great proprietary-SQL lock-in? You know, the ones where you give a platform vendor control over your IT budget? Gimme a break.

I’m simply not interested in any cloud offering at any level unless it offers zero barrier-to-exit.

Tim's post is about cloud platforms, but I think it is useful to talk about avoiding lock-in when taking a bet on cloud based applications as well as when embracing cloud based platforms. This is especially true when you consider that moving from one application to another is a similar, though smaller scoped, problem compared to moving from one Web development platform to another.

So let's say your organization wants to move from a cloud based office suite like Google Apps for Business to Zoho. The first question you have to ask yourself is whether it is possible to extract all of your organization's data from one service and import it without data loss into another. For business documents this should be straightforward thanks to standards like ODF and OOXML. However, there are points to consider, such as whether there is an automated way to perform such bulk imports and exports or whether individuals have to manually export and/or import their online documents to these standard formats.

Thus the second question is how expensive it is for your organization to move the data. The cost includes everything from the potential organizational downtime while switching services to the actual IT department cost of moving all the data.

At this point, you then have to weigh the impact of all the links and references to your organization's data that will be broken by your switch. I don't just mean links to documents returning 404 because you have switched from being hosted at google.com to zoho.com, but more insidious problems like the broken experience of anyone who is using the calendar or document sharing feature of the service to give specific people access to their data. You also have to ensure that email sent to your organization after the switch goes to the right place. Making this aspect of the transition smooth will likely be the most difficult part of the migration, since it requires more control over application resources than application service providers typically give their customers.

Finally, you will have to evaluate which features you will lose by switching applications and ensure that none of them is mission critical to your business.

Despite all of these concerns, switching hosted application providers is mostly a tractable problem. Standard data formats make data migration feasible, although it might be unwieldy to extract the data from the service. In addition, Internet technologies like SMTP and HTTP have built-in ways to handle forwarding and redirecting references so that they aren't broken. However, although the technology makes it possible, the majority of hosted application providers fall far short of making it easy to fully migrate to or away from their service without significant effort.
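For example, HTTP's built-in answer to broken references after a move is the 301 (Moved Permanently) redirect: the old host answers every request by pointing at the new location. Here's a rough sketch of building such a response; the hostnames and path are hypothetical:

```python
# Hypothetical old and new hosts for a migrated document store.
OLD_HOST = "docs.example-old.com"
NEW_HOST = "docs.example-new.com"

def redirect_response(path):
    """Build a minimal 301 status line and headers redirecting an
    old URL path to the same path on the new host."""
    location = "https://%s%s" % (NEW_HOST, path)
    return ("301 Moved Permanently", [("Location", location)])

status, headers = redirect_response("/reports/2008/q3.doc")
print(status)  # 301 Moved Permanently
print(dict(headers)["Location"])  # https://docs.example-new.com/reports/2008/q3.doc
```

Browsers and well-behaved HTTP clients follow the Location header automatically, so old bookmarks and inbound links keep working as long as the old host keeps serving redirects.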

When it comes to cloud computing platforms, you have all of the same problems described above and a few extra ones. The key wrinkle with cloud computing platforms is that there is no standardization of the APIs and platform technologies that underlie these services. The APIs provided by Amazon's cloud computing platform (EC2/S3/EBS/etc) are radically different from those provided by Google App Engine (Datastore API/Python runtime/Images API/etc). For zero lock-in to occur in this space, there need to be multiple providers of the same underlying APIs. Otherwise, migrating between cloud computing platforms will be more like switching your application from Ruby on Rails and MySQL to Django and PostgreSQL (i.e. a complete rewrite).

In response to Tim Bray's post, Dewitt Clinton of Google left a comment which is excerpted below

That's why I asked -- you can already do that in both the case of Amazon's services and App Engine. Sure, in the case of EC2 and S3 you'll need to find a new place to host the image and a new backend for the data, but Amazon isn't trying to stop you from doing that. (Actually not sure about the AMI format licensing, but I assumed it was supposed to be open.)

In App Engine's case people can run the open source userland stack (which exposes the API you code to) on other providers any time they want, and there are plenty of open source bigtable implementations to choose from. Granted, bulk export of data is still a bit of a manual process, but it is doable even today and we're working to make it even easier.

Are you saying that lock-in is avoided only once the alternative hosts exist?

But how does Amazon or Google facilitate that, beyond getting licensing correct and open sourcing as much code as they can? Obviously we can't be the ones setting up the alternative instances. (Though we can cheer for them, like we did when we saw the App Engine API implemented on top of EC2 and S3.)

To Doug Cutting's very good point, the way Amazon and Google (and everyone else in this space) seem to be trying to compete is by offering the best value, in terms of reliability (uptime, replication) and performance (data locality, latency, etc) and monitoring and management tools. Which is as it should be.

Although Dewitt is correct that Google and Amazon are not explicitly trying to lock customers in to their platforms, the fact remains that a customer who has heavily invested in either platform has no straightforward way to extricate themselves and switch to another vendor. In addition, there is not a competitive marketplace of vendors providing standard, interoperable platforms as there is with email hosting or Web hosting providers.

As long as these conditions hold, lock-in may be too strong a word to describe the situation, but it is clear that the options facing adopters of cloud computing platforms aren't great when it comes to vendor choice.

Now Playing: Britney Spears - Womanizer


 

Categories: Platforms | Web Development

October 18, 2008
@ 02:18 AM

At 4:58 AM this morning, Nathan Omotoyinbo Obasanjo became the newest addition to our family. He was a healthy 9 pounds 6 ounces and was born at the Eastside Birth Center in Bellevue. His journey into our home has been fraught with delays and some drama which I'll recount here for posterity.

Tuesday – October 14th
At around midnight his mother let me know that she'd been having contractions for the past few hours and was ready to call the midwives. We drove to the birth center and arrived sometime before 2 AM. We were met by the midwife on call (Loren Riccio), who was later joined by two interns (not sure if that's what they're called). We also called my mother-in-law and she joined us there about 30 minutes later. After a few hours and some time spent getting in and out of the tub (we planned to have a water birth), it became clear that my wife was either going through pre-labor or false labor, and we were sent home at around 6 AM. We got home and I slept for about three or four hours, then had to rush into work because I was scheduled to give a talk on the platform we built to power a key feature in Windows Live's Wave 3 release at Microsoft. I got there on time, the presentation was well received, and I got to chat with my management folks about some blog drama that blew up the previous night. When I scheduled the presentation, I'd assumed Nathan would already have been born since the midwives gave us a due date of October 7th. Thankfully, my wife's mom stayed with her at home so I didn't have to choose between leaving my wife by herself and giving the talk. Later that day we went in for our previously scheduled midwife appointment with Christine Thain. She suggested that we come in the next day if labor didn't kick in that night.
Wednesday - October 15th
We actually went in to see Christine twice that day. We went in early in the morning and she gave us some herbs (Gossypium) that are supposed to encourage the onset of labor. So we had a delicious lunch at Qdoba and ran a bunch of errands before going back to see Christine later that evening. She checked Jenna's progress and it didn't look like the baby was ready to be born yet. At this point we started questioning whether we wanted to stay the course with a natural childbirth and possibly wait another week before the baby was born, or whether we wanted to go to a hospital and have labor induced. To take our minds off the waiting game, we decided to watch Don't Mess with the Zohan as a lighthearted diversion to keep our spirits up. It didn't help much, especially since the movie was awful. During the movie, Jenna started having contractions, and after confirming that they seemed more productive than those from Monday night, we called the midwife and my mother-in-law, then rushed to the birth center. Unfortunately, it turned out to be another false alarm. At this point we started feeling like the boy who cried wolf, especially since we'd gotten the two interns out of bed twice during the past two days.
Thursday - October 16th
Around 5AM Jenna woke me up and told me that she'd been having contractions for the past hour. Since we'd already scheduled an early morning checkup with Christine for 8AM we didn't feel the need to call the birth center's emergency number. When we got to the birth center there was already someone else going through labor in the birth suite. So we had to have an office visit where we learned that these contractions weren't productive either. At this point we made the call that we'd go in to the hospital on Friday to have the baby induced and had Christine make an appointment. Later that day we were grocery shopping and we got a call from Christine. We decided to go in one more time to the birth center to see if we could get one last checkup before going to the hospital on Friday. When we got there the lady from the morning was still in labor almost eleven hours later. So we had another office visit and discussed the pros & cons of going to the hospital versus trying to wait out a natural birth. At the end of the talk we stuck with the decision to go to the hospital although that was dependent on there being available beds.
Friday - October 17th
At about 1:30 AM I was awoken by my wife, who was going through very painful contractions. After timing the contractions for a while, we decided to go in to the birth center. This time I didn't call my mother-in-law until after we'd arrived at the birth center and the progress of the labor had been confirmed. I also sent a text message to my mom telling her yet again that we were having the baby and this time it was for real. Nathan was born just over 2 hours after we arrived at the birth center. Later on we found out that the lady who'd been in labor for the entirety of the previous day had ended up being sent to the hospital and had her baby at around the same time Nathan was being born.

So far, our previous addition to the family has been getting along great with Nathan.

Now Playing: Jadakiss - The Champ Is Here


 

Categories: Personal

I just found out via an article by Dan Farber that Yahoo! has rolled out a new "universal profile" and decided to give it a quick look. As I was creating my profile at http://profiles.yahoo.com/kpako I saw an interesting option that I thought was worth commenting on which is captured in the screenshot below

The circled text says

Allow my connections to share my information labeled “My Connections” with third-party applications they install and developers who made the applications. Learn More

The reason for this feature harkens back to the Robert Scoble vs. Facebook saga where Robert Scoble was temporarily banned from Facebook for extracting personal information about his Facebook friends from Facebook into Plaxo. As I outlined in my post on the topic, Breaking the Social Contract: My Data is not Your Data, the problem was that Scoble's friends on Facebook may have been OK with sharing their personal information with him on Facebook but not with him sharing that information with other services.

With this feature, Yahoo! allows users to opt in to having an application used by a member of their social network access their personal information. This is a very user-centric approach and avoids the draconian practice of services like Facebook, which get around this issue by not providing any personally identifiable user information via their APIs. Instead, using the option above, a user can specify which pieces of their personal information they don't mind being accessed by third-party applications their friends are using. Nicely done.

The only problem I can see is that this is a pretty nuanced option which users may not understand clearly and may overlook since it is just another checkbox in the profile creation flow. Then again, it is hard to imagine how to introduce users to this concept after they have created their profile. 

Kudos to the Yahoo! folks on getting this feature and the rest of their universal profile out there.


 

Categories: Social Software

Allen Tom has an interesting post on the Yahoo! Developer blog entitled Yahoo! Releases OpenID Research where he shares the results of some usability studies the folks at Yahoo! have been doing around OpenID. The concluding paragraphs of his post are particularly interesting and are excerpted below

I'm happy to announce that Yahoo! is releasing the results of a usability study that we did for OpenID. Our test subjects were several experienced Yahoo! users (representative of our mainstream audience) who were observed as they tried to sign into a product review site using the Yahoo OpenID service.
...
On the Yahoo! side of things, we streamlined our OP (OpenID Provider) last week, and removed as much as we could. We removed the CAPTCHA and slimmed down the OP to just a single screen, and focused the UI to get the user back to the RP. We expect that RPs will enjoy a much higher success rate for users signing in with their Yahoo OpenID.

On the RP (Relying Party) side of things, our recommendation is that they emphasize to users that they can sign in with an existing account, specifically their YahooID. We believe that the YahooID, as well as IDs from other providers, have a higher brand awareness than OpenID. We also believe that first time users signing in with an OpenID should be able to go directly to their intended destination after signing in, instead of having to complete additional registration. Hopefully, as SimpleReg/AttributeExchange are more widely supported (Yahoo does not currently support them), relying parties will no longer feel the need to force the user through an additional registration form after signing in with an OpenID.

It's nice to see how much of this dovetails with my post on Things to Keep in Mind when Implementing OpenID on Your Website. In that post, I pointed out that the key risk of using OpenID on your Web site (i.e. being a Relying Party) is that there is a high risk of losing users if the OpenID sign-in flow is more complicated than simply having the user sign up for your site. The Yahoo! usability study points to the fact that this seems to be the common case in typical OpenID deployments.

Actually there are two problems. The first is that most people don't know what OpenID is, so simply stating that people can use OpenIDs to log in to your site, or displaying the OpenID logo, may work for geeks but doesn't work for the typical Web user. The risk here is that the work of deploying OpenID on your site ends up being wasted. The second problem is the risk of losing the user after they decide to use OpenID to sign in, either due to an unintuitive user experience on your site (e.g. having to enter an OpenID URL) or on the site of the OpenID provider (e.g. lots of jargon with no clear call to action).

I did find it interesting that Yahoo! is recommending that relying parties emphasize the brands of the identity providers whose credentials they accept [especially if you white-list the OpenID providers you support] instead of the OpenID brand, since the latter isn't recognizable to the typical Web user. I tend to agree with this; OpenID is a means to an end and not an end in itself, so it is weird to put it front and center in an end-user-facing experience. Talking explicitly about OpenID should probably happen at the developer-to-developer level. I feel the same way about RSS and other Web technologies for connecting services together.

The other interesting point is that a lot of services still require users to go through a sign-up flow after logging in with an OpenID; thus the only thing they've saved the user is picking a username (which would probably have been their email address) and password. That saving doesn't seem worth the extra complexity of going through an OpenID provider. I agree with Tom that if more OpenID providers supported OpenID Attribute Exchange, the need for post-login account creation would likely disappear, since the Relying Party would get the basic information it needs from the OpenID provider.
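To give a feel for how this works, here's a sketch of the extra query parameters a relying party can append to an OpenID authentication request to ask the provider for basic profile fields up front. The parameter names follow the Simple Registration (SReg) extension; the field choices are hypothetical examples:

```python
from urllib.parse import urlencode

# Extra parameters a relying party appends to the OpenID checkid_setup
# request so the provider returns profile fields with the assertion,
# making a follow-up registration form unnecessary. Parameter names
# follow the Simple Registration (SReg) extension; field choices here
# are illustrative.
sreg_params = {
    "openid.ns.sreg": "http://openid.net/extensions/sreg/1.1",
    "openid.sreg.required": "email",              # fields the RP must have
    "openid.sreg.optional": "nickname,country",   # nice-to-have fields
}

query = urlencode(sreg_params)
print(query)
```

If the provider supports the extension, it returns the requested fields (subject to the user's approval) alongside the signed authentication response, so the relying party can create the account silently.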

In conclusion, the typical way OpenID is being implemented on the Web today leads to more costs than benefits. Hopefully, services will take to heart the lessons from Yahoo's usability study and we'll start seeing smarter usage of OpenID that benefits both users and the services that are adopting the technology.

Now Playing: Leona Lewis - Better In Time


 

Categories: Web Development