Yesterday Twitter announced Fabric, a new mobile SDK for Android and iOS composed of four distinct pieces:

  • Crashlytics – an application analytics package that gives developers tools to measure how their apps are being used and to gauge app quality in the wild (i.e. crashes).
  • Twitter Kit – an SDK that makes it easy to add Twitter integration such as signing in with Twitter, embedding tweets or posting tweets from your app.
  • MoPub Kit – makes it easy to embed ads from Twitter’s ad network in your app so you can make money.
  • Digits – makes it easy for any app to build phone number based sign-in similar to what Skype Qik and WhatsApp have. This is quite frankly a game changer.

The response to this release I’ve seen online has swung between two extremes: fawning adoration from the tech press proclaiming that Twitter has moved beyond tweets into mobile services, and skepticism from developers who don’t trust Twitter, as represented in the tweet below

The root of this angst is Twitter’s tumultuous relationship with developers of Twitter clients which eventually led to their infamous quadrant of death post which effectively limited the growth of any app whose primary function was to be a replacement Twitter experience. This hurt many developers who had been working on Twitter reading experiences and in fact led to the CEO of Flipboard quitting Twitter’s board in disgust.

Thus it is a valid question whether developers can trust Twitter this time. The answer is yes, for a very simple reason: Twitter’s API moves in 2012 and yesterday’s announcements were born of the same motive, to grow its primary business of selling ads tied to its mobile experiences. In 2012, Twitter had to address the fact that the liberal exposure of its service via its API had created a situation where a huge slice of its user base was using the service through experiences Twitter could not effectively monetize.

At the height of the 3rd party Twitter app boom, almost half of Twitter’s users (42%) were using unofficial apps, although that percentage dwindled as Twitter stepped up its mobile app efforts and sent the message to app developers that it no longer wanted people to compete with it on providing mobile experiences.


Taking control of the primary user experience for Twitter was the smart business decision and is why they now generate over a billion dollars a year as a business.

This brings us to Fabric. All four components aid Twitter’s core business of selling ads for mobile experiences.

  • Twitter Kit increases engagement with Twitter by making it easy for users to consume and generate tweets from other apps without those apps being a threat to Twitter by becoming competing experiences.
  • Digits allows Twitter to build a profile of users based on their phone number the same way Facebook builds a profile of users based on the apps and websites they visit that use Facebook Connect.
  • Crashlytics + MoPub is the Trojan horse, with an approach similar to Flurry’s (which Yahoo acquired for $200 million). Crashlytics is an incredibly useful component that is valuable to all mobile apps since they all care about user behavior and crashes. Once you’re hooked on Crashlytics, it’s easier to upsell you to also using Twitter’s ad network and hence $$$.

All of these efforts help Twitter’s core business and it would be insanity for them to screw developers by abandoning them just as it would have been insanity for them to pursue an ad-based business model in a world where a huge chunk of their most active users were using 3rd party apps as their primary Twitter experience.

So go ahead, try out Fabric and judge it on its merits. I’m curious to hear what you think.

Note Now Playing: Chris Brown - Loyal (featuring Lil Wayne and Tyga) Note


Categories: Platforms

July 15, 2012
@ 11:04 PM

A few weeks ago Dalton Caldwell, founder of imeem and PicPlz, wrote a well-received blog post titled What Twitter could have been, where he laments the fact that Twitter hasn’t fulfilled some of the early promise developers saw in it as a platform. Specifically, he writes

Perhaps you think that Twitter today is a really cool and powerful company. Well, it is. But that doesn’t mean that it couldn’t have been much, much more. I believe an API-centric Twitter could have enabled an ecosystem far more powerful than what Facebook is today. Perhaps you think that the API-centric model would have never worked, and that if the ad guys wouldn’t have won, Twitter would not be alive today. Maybe. But is the service we think of as Twitter today really the Twitter from a few years ago living up to its full potential? Did all of the man-hours of brilliant engineers, product people and designers, and hundreds of millions of VC dollars really turn into, well, this?

His blog post struck a chord with developers, which led Dalton to follow it up with another blog post titled Announcing an audacious proposal as well as launching App.net. Dalton’s proposal is as follows

I believe so deeply in the importance of having a financially sustainable realtime feed API & service that I am going to refocus to become exactly that. I have the experience, vision, infrastructure and team to do it. Additionally, we already have much of this built: a polished native iOS app, a robust technical infrastructure currently capable of handling ~200MM API calls per day with no code changes, and a developer-facing API provisioning, documentation and analytics system. This isn’t vaporware.

To manifest this grand vision, we are officially launching a Kickstarter-esque campaign. We will only accept money for this financially sustainable, ad-free service if we hit what I believe is critical mass. I am defining minimum critical mass as $500,000, which is roughly equivalent to ~10,000 backers.

As can be expected from someone who’s worked on software both as an API client developer and as a platform provider of APIs, I have a few thoughts on this topic. So let’s start from the beginning.

The Promise of Twitter Annotations

About two years ago, Twitter CEO Dick Costolo wrote an expansive “state of the union” style blog post about the Twitter platform. Besides describing the state of the Twitter platform at the time, it also set forth the direction the platform intended to go in and made a set of promises about how Twitter saw its role relative to its developer ecosystem. Dick wrote

To foster this real-time open information platform, we provide a short-format publish/subscribe network and access points to that network such as twitter.com, and several Twitter-branded mobile clients for iPhone, BlackBerry, and Android devices. We also provide a complete API into the functions of the network so that others may create access points. We manage the integrity and relevance of the content in the network in the form of the timeline and we will continue to spend a great deal of time and money fostering user delight and satisfaction. Finally, we are responsible for the extensibility of the network to enable innovations that range from Annotations and Geo-Location to headers that can route support tickets for companies. There are over 100,000 applications leveraging the Twitter API, and we expect that to grow significantly with the expansion of the platform via Annotations in the coming months.

There was a lot of excitement in the industry about Twitter Annotations both from developers as well as the usual suspects in the tech press like TechCrunch and ReadWriteWeb. Blog posts such as How Twitter Annotations Could Bring the Real-Time and Semantic Web Together show how excited some developers were by the possibilities presented by the technology. For those who aren’t familiar with the concept, annotations were supposed to be the way to attach arbitrary metadata to a tweet beyond the well-known 140 characters. Below is a screenshot from a Twitter presentation showing this concept visually

Although it’s been two years since Twitter Annotations was supposed to begin testing, this feature has never been released, nor has it been talked about by the Twitter Platform team in recent months. Many believe that Twitter Annotations will never be released due to a changing of the guard within Twitter, which is hinted at in Dalton Caldwell’s What Twitter could have been post

As I understand, a hugely divisive internal debate occurred among Twitter employees around this time. One camp wanted to build the entire business around their realtime API. In this scenario, Twitter would have turned into something like a realtime cloud API company. The other camp looked at Google’s advertising model for inspiration, and decided that building their own version of AdWords would be the right way to go.

As you likely already know, the advertising group won that battle, and many of the open API people left the company.

It is this seeming change in direction that Dalton has seized on to create App.net.


Why Twitter Annotations was a Difficult Promise to Make

I have no idea what went on within Twitter but as someone who has built both platforms and API clients the challenges in delivering a feature such as Twitter Annotations seem quite obvious to me. A big problem when delivering a platform as part of an end user facing service is that there are often trade offs one has to make at the expense of the other. Doing things that make end users happy may end up ticking off people who’ve built businesses on your platform (e.g. layoffs caused by Google Panda update, companies losing customers overnight after launch of Facebook's timeline, etc). On the other hand, doing things that make developers happy can create suboptimal user experiences such as the “openness” of Android as a platform making it a haven for mobile malware.

The challenge with Twitter Annotations is that it threw user experience consistency out of the window. Imagine that the tweet shown in the image above was created by Dare’s Awesome Twitter App, which allows users to “attach” a TV episode from a source like Hulu or Netflix before they tweet. In Dare’s Awesome Twitter App, the user sees their tweet as an inline experience where they can consume the video, similar to what Twitter calls Expanded Tweets today. However the user’s friends who are using the Twitter website or other Twitter apps just see 140 characters. You can imagine that apps would then compete on how many annotations they supported and on creating new and interesting annotations. This is effectively what happened with RSS and a variety of RSS extensions, with no RSS reader supporting the full gamut of extensions. Heck, I supported more RSS extensions in RSS Bandit in 2009 than Google Reader does today.
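To make the consistency problem concrete, here is a rough sketch of how an annotated tweet might have looked and how unevenly clients would have rendered it. The payload shape and field names are my own invention for illustration, not Twitter's proposed format:

```python
# Hypothetical annotated tweet: arbitrary metadata rides along with the text.
tweet = {
    "id": 12345,
    "text": "Watching the season finale!",
    "annotations": [
        {"video": {"provider": "hulu",
                   "url": "https://example.com/episode",
                   "episode": "S01E10"}}
    ],
}

def supported_annotations(tweet, client_namespaces):
    """Return only the annotations a given client knows how to render;
    everything else silently degrades to the plain 140-character text."""
    return [a for a in tweet.get("annotations", [])
            if set(a) & client_namespaces]

# A client that understands the "video" namespace renders the inline player...
assert supported_annotations(tweet, {"video"}) == tweet["annotations"]
# ...while every other client falls back to bare text.
assert supported_annotations(tweet, {"geo"}) == []
```

The "same" tweet looks different in every client depending on which namespaces that client supports, which is exactly the RSS extensions problem all over again.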

This cacophony of Annotations would have meant that not only would there no longer be such a thing as a consistent Twitter experience but Twitter itself would be on an eternal treadmill of supporting various annotations as they were introduced into the ecosystem by various clients and content producers.

Secondly, the ability to put arbitrary machine readable content in tweets would have made Twitter attractive as a “free” publish-subscribe infrastructure for all manner of apps and devices. Instead of building a push notification system to communicate with client apps or devices, an enterprising developer could just create a special Twitter account and have all the end points connect to that. The notion of Twitter controlled botnets and the rise of home appliances connected to Twitter are all indicative that there is some demand for this capability. If this content not intended for humans ever became a large chunk of the data flowing through Twitter, monetizing it would be extremely challenging. Charging people for using Twitter in this way isn’t easy since it isn’t clear how you differentiate a 20,000 machine botnet from a moderately popular Internet celebrity with a lot of fans who mostly lurk on Twitter.
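As a thought experiment, here is a minimal sketch of that "free" publish-subscribe pattern: a dedicated account's timeline is treated as a command channel, and each device polls it and filters for messages addressed to it. The message format and names are entirely hypothetical, not any real Twitter client library:

```python
import json

def encode_command(device_id, action):
    """Pack a machine-readable command into what would be a tweet body."""
    return json.dumps({"to": device_id, "do": action})

def dispatch(timeline, my_device_id):
    """Pick out the commands addressed to this device from a polled timeline."""
    commands = []
    for tweet_text in timeline:
        try:
            msg = json.loads(tweet_text)
        except ValueError:
            continue  # an ordinary human tweet; ignore it
        if msg.get("to") == my_device_id:
            commands.append(msg["do"])
    return commands

timeline = ["hello world",
            encode_command("lamp-1", "on"),
            encode_command("lamp-2", "off")]
assert dispatch(timeline, "lamp-1") == ["on"]
```

From Twitter's side this traffic is indistinguishable from a busy account with many silent followers, which is precisely the monetization problem described above.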

I have no idea if Twitter ever plans to ship Annotations but if they do I’d be very interested to see how they would solve the two problems mentioned above.

Challenges Facing App.net as a Consumer Service

Now that we’ve talked about Twitter Annotations, let’s look at what App.net plans to deliver to address the demand that has yet to be fulfilled by the promise of Twitter Annotations. From the App.net proposal we learn that the product intended to be delivered is

OK, great, but what exactly is this product you will be delivering?

As a member, you'll have a new social graph and real-time feed that you access from a mobile application or website. At first, the user experience will be very similar to what Twitter was like before it turned into a media company. On a forward basis, we will focus on expanding our core experience by nurturing a powerful ecosystem based on 3rd-party developer built "apps". This is why we think the name "App.net" is appropriate for this service.

From a developer perspective, you will be able to read and write to a Twitter-like API. Developers behaving in good faith will have free rein to build alternate UIs, new business models of their own, and whatever they can dream up.

As this project progresses, we will be collaborating with you, our backers, while sharing and iterating screenshots of the app, API documentation, and more. There are quite a few technical improvements to existing APIs that we would like to see in the App.net API. For example, longer message character limits, RSS feeds, and rich annotations support.

In short, App.net intends to be a Twitter clone with a more expansive API policy than Twitter. The key problem facing App.net is that, as a social graph based service, it will suffer from the curse of network effects. Social apps need to cross a particular threshold of people on the service before they are useful, and once they cross that threshold it often leads to a “winner-take-all” dynamic where similar sites with smaller user bases bleed users to the dominant service since everyone is on the dominant service. This is how Facebook killed MySpace & Bebo and Twitter killed Pownce & Jaiku. App.net will need to differentiate itself more than just being an “open version of a popular social network”. Services like Diaspora have tried and failed to make a dent with that approach while fresh approaches to the same old social graph and news feed like Pinterest and Instagram have grown by leaps and bounds.

A bigger challenge is the implication that it wants to be a paid social network. As I mentioned earlier, social graph based services live and die by network effects. The more people that use them the more useful they get for their members. Charging money for an online service is an easy way to reduce your target audience by at least an order of magnitude (i.e. reduce your target audience by at least 90%). As an end user, I don’t see the value of joining a Twitter-clone populated by people who could afford $50 a year as opposed to joining a free service like Facebook, Twitter or even Google+.

Challenges Facing App.net as a Developer Platform

As a developer it isn’t clear to me what I’m expected to get out of App.net. The question of how they came up with their pricing tiers is answered thusly

Additionally, there are several comps of users being willing to pay roughly this amount for services that are deeply valuable, trustworthy and dependable. For instance, Dropbox charges $10 and up per month, Evernote charges $5 and up per month, Github charges $7 and up per month.

The developer price is inspired by the amount charged by the Apple Developer Program, $99. We think this demonstrates that developers are willing to pay for access to a high quality development platform.

I’ve already mentioned above that network effects are working against App.net as a popular consumer service. The thought of spending $50 a year to target fewer users than I can by building apps on Facebook or Twitter’s platforms doesn’t sound logical to me. That leaves using App.net as a low cost publish-subscribe mechanism for various apps or devices I deploy. This is potentially interesting but I’d need to get more data before I can tell just how interesting it could be. For example, there is a bunch of talk about claiming your Twitter handle and other things which makes it sound like developers only get a single account on the service. True developer friendliness for this type of service would include disclosure on how one could programmatically create and manage nodes in the social graph (aka accounts in the system).

At the end of the day, although there is a lot of persuasive writing on the App.net site, there’s just not enough to explain why it will be a platform that provides me enough value as a developer to part with my hard earned $50. The comparison to Apple’s developer program is rich though. Apple has given developers five billion reasons why paying them $99 a year is worth it; we haven’t gotten one good one from App.net.

That said, I wish Dalton luck on this project. Props to anyone who can pick himself up and continue to build new things after what he went through with imeem and PicPlz.

Note Now Playing: French Montana - Pop That (featuring Rick Ross, Lil Wayne and Drake) Note


Categories: Platforms

Earlier this morning, Jeff Kunins posted the announcement that Messenger Connect is out of beta and available worldwide. Key excerpts from his post include

Today, we are pleased to announce that Messenger Connect is out of beta and available worldwide. We’ve gotten a great response so far: leading sharing syndicators ShareThis, AddThis, Gigya, and AddToThis have already made the Windows Live sharing badge available on more than 1 million websites (check it out now on Bing).

Over 2500 developers gave us great feedback during the beta, helping us to refine and improve this release of Messenger Connect. Below is a quick summary, but for all the details check out this post from Angus on the Windows Live for Developers blog. Our focus with the release of Messenger Connect was to make it easier for partners to adopt, without compromising user privacy.

  • Easier to check out – We made it faster and simpler for partners to try out Messenger Connect and determine if it would be useful for their sites. For example: you can try out the real time chat control without needing to write any code.
    Learn at the Windows Live Developer Center
  • Easier to adopt and integrate – We reduced the effort needed for sites to implement Messenger Connect usefully and powerfully by providing new tools and sample code for template selectors and more.
    Sample code

A number of folks worked really hard behind the scenes to get us to this point and I’m glad to see what we’ve shipped today. I haven’t announced this on my blog yet but I recently took over as the Lead Program Manager responsible for our Messenger Connect and related platform efforts in Windows Live. If you’ve been a regular reader of my blog it shouldn’t be a surprise that I’ve decided to make working on building open web platforms my day job and not just a hobby I was interested in.

As Angus Logan says in his follow-up blog post on the Windows Live for Developers blog, this is just the beginning. We’d love to get feedback from the community of developers on what we’ve released, and the feedback we’ve gotten thus far has been immensely helpful. You can find us at

PS: Since someone asked on Twitter, you can find out more about Messenger Connect by reading What is Messenger Connect?

Note Now Playing: Waka Flocka Flame - No Hands (featuring Roscoe Dash and Wale) Note


Categories: Platforms | Windows Live

April 29, 2010
@ 01:45 PM

Earlier this morning, Ori Amiga posted Messenger across the Web on the Inside Windows Live blog. Key excerpts from his blog post include

Earlier today, John Richards and Angus Logan took the stage at The Next Web Conference in Amsterdam where they announced Messenger Connect – a new way for partners and developers to connect with Messenger. Messenger Connect allows web, Windows and mobile app developers to create compelling social experiences on their websites and apps by providing them with social promotion and distribution via Messenger.

Messenger Connect

Messenger Connect brings the individual APIs we’ve had for a long time (Windows Live ID, Contacts API, Messenger Web Toolkit, etc.) together in a single API that's based on industry standards and specifications (OAuth WRAP, Portable Contacts) and adds a number of new scenarios.

The new Messenger Connect provides our developer partners with three big things:

  • Instantly create a user profile and social graph: Messenger user profile and social graph information allows our shared customers to easily sign-in and access their friends list and profile information. This allows our partners to more rapidly personalize their experiences, provides a ready-made social graph for customers to interact with, and provides a channel to easily invite additional friends to join in.
  • Drive engagement directly through chat and indirectly through social distribution: By enabling both real-time instant messaging conversations (chat) and feed-based sharing options for customers on their site, developers can drive additional engagement and usage of their experiences by connecting to the over 320 million Messenger customers worldwide.
  • Designing for easy integration in your technical environment: We are delivering an API service that will expose a RESTful interface, and we’ll wrap those in a range of libraries (including JavaScript, .NET, and others). Websites and apps will be able to choose the right integration type for their specific scenario. Some websites prefer to keep everything at the presentation tier, and use JavaScript libraries when the user is present. Others may prefer to do server-side integration, so they can call the RESTful endpoints from back-end processes. We're aiming to provide the same set of capabilities across the API service and the libraries that we offer.
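
As a rough illustration of the server-side integration path described in that last bullet, a back-end process could call the RESTful endpoints directly rather than going through the JavaScript presentation-tier library. The URL, header format, and resource names below are placeholders for illustration, not the documented Messenger Connect endpoints:

```python
import urllib.request

# Stand-in for the real service root; the actual URLs live in the
# Windows Live Developer Center documentation.
API_BASE = "https://apis.live.example/v1"

def build_contacts_request(access_token):
    """Server-side integration: construct a direct call to a RESTful
    endpoint, authenticated with an OAuth WRAP-style bearer token."""
    return urllib.request.Request(
        API_BASE + "/contacts",
        headers={"Authorization": "WRAP access_token=" + access_token,
                 "Accept": "application/json"},
    )

req = build_contacts_request("abc123")
assert req.get_full_url() == "https://apis.live.example/v1/contacts"
```

A website that prefers to keep everything at the presentation tier would instead load the JavaScript wrapper and never touch these endpoints directly, which is the flexibility the bullet above is describing.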

I’m really proud of the work that’s gone into building Messenger Connect. Although I was in some of the early discussions around it, I ducked out early to focus on the platform behind the new social view in Messenger and didn’t have much insight into the day to day of building the product. However I’ve got to say I love the way the project has turned out. I suspect a lot of web developers will as well.

Kudos to Ori and the rest of the team.

Note Now Playing: Ludacris - My Chick Bad (featuring Nicki Minaj) Note


Categories: Platforms | Windows Live

In a move that was telegraphed by the post titled The Twitter Platform's Inflection Point from Fred Wilson (a Twitter investor), where he criticized Twitter platform developers for “filling holes” in Twitter’s user experience, the Twitter team has indicated it will start providing official Twitter clients for various mobile platforms. There have been announcements of an official BlackBerry client and the purchase of Tweetie so it can become the official iPhone client. The latter was announced in the blog post Twitter for iPhone, excerpted below

Careful analysis of the Twitter user experience in the iTunes AppStore revealed massive room for improvement. People are looking for an app from Twitter, and they're not finding one. So, they get confused and give up. It's important that we optimize for user benefit and create an awesome experience.

We're thrilled to announce that we've entered into an agreement with Atebits (aka Loren Brichter) to acquire Tweetie, a leading iPhone Twitter client.

This has led to some anger on the part of Twitter client developers with some of the more colorful reactions being the creation of the Twitter Destroyed My Market Segment T-shirt and a somewhat off-color image that is making the rounds as representative of what Twitter means by “filling holes”.

As an end user and someone who works on web platforms, none of this is really surprising. Geeks consider having to wade through half a dozen Twitter clients before finding one that works for them a feature, even though the paradox of choice means that most people are actually happier with fewer choices, not more. This is made worse by the fact that in the mobile world, this may mean paying for multiple apps until you find one that you’re happy with.

Any web company that cares about their customers will want to ensure that their experience is as simple and as pleasant as possible. Trusting your primary mobile experience to the generosity and talents of 3rd party developers means you are not responsible for the primary way many people will access your service. This loss of control isn’t great especially when the design direction you want to take your service in may not line up with what developers are doing in their apps. Then there’s the fact that forcing your users to make purchasing decisions before they can use your site conveniently on their phone isn’t a great first time experience either. 

I expect mobile clients are just the beginning. There are lots of flaws in the Twitter user experience that are due to Twitter’s reliance on “hole fillers”, and I expect they’ll start to fill them. The fact that I ever have to go to bit.ly as part of my Twitter workflow is a bug. URL shorteners really have no reason to exist in the majority of use cases except when Twitter is sending an SMS message. Sites that exist simply as image hosting services for Twitter, like Twitpic and YFrog, also seem extremely superfluous, especially when you consider that only power users know about them, so most Twitter users never figure out how to use the service for image sharing. I expect this will eventually become a native feature of Twitter as well. Once Twitter controls the primary mobile clients for accessing their service, it’ll actually be easier for them to make these changes since they don’t have to worry about whether 3rd party apps will support Twitter image hosting vs. Twitpic vs. rolling their own ghetto solution.

The situation is made particularly tough for 3rd party developers due to Twitter’s lack of a business model as Chris Dixon points out in his post Twitter and third-party Twitter developers

Normally, when third parties try to predict whether their products will be subsumed by a platform, the question boils down to whether their products will be strategic to the platform. When the platform has an established business model, this analysis is fairly straightforward (for example, here is my strategic analysis of Google’s platform).  If you make games for the iPhone, you are pretty certain Apple will take their 30% cut and leave you alone. Similarly, if you are a content website relying on SEO and Google Adsense you can be pretty confident Google will leave you alone. Until Twitter has a successful business model, they can’t have a consistent strategy and third parties should expect erratic behavior and even complete and sudden shifts in strategy.

So what might Twitter’s business model eventually be?  I expect that Twitter search will monetize poorly because most searches on Twitter don’t have purchasing intent.  Twitter’s move into mobile clients and hints about a more engaging website suggest they may be trying to mimic Facebook’s display ad model.

The hard question then is what opportunities will be left for developers on Twitter’s platform once the low hanging fruit has been picked by the company. Here I agree with frequent comments by Dave Winer and Robert Scoble, that there needs to be more metadata attached to tweets so that different data aggregation and search scenarios can be built which satisfy thousands of niches. I especially like what Dave Winer wrote in his post How Twitter can kill the Twitter-killers where he stated

Suppose Twitter wants to make their offering much more competitive and at the same time much more attractive to developers. Sure, as Fred Wilson telegraphed, some developers are going to get rolled over, esp those who camped out on the natural evolutionary path of the platform vendor. But there are lots of things Twitter Corp can do to create more opportunities for developers, ones that expand the scope of the platform and make it possible for a thousand flowers to bloom, a thousand valuable non-trivial flowers.

The largest single thing Twitter could do is open tweet-level metadata. If I want to write an app for dogs who tweet, let me add a "field" to a tweet called isDog, a boolean, that tells me that the author of the tweet is a dog. That way the dog food company who has a Twitter presence can learn that the tweet is from a dog, from the guy who's developing a special Twitter client just for dogs, even though Twitter itself has no knowledge of the special needs of dogs. We can also add a field for breed and age (in dog years of course). Coat type. Toy preference. A link to his or her owner. Are there children in the household?

I probably wouldn’t have used the tweeting dog example but the idea is sound. Location is an example of metadata that is added to tweets which can be used for interesting applications on top of the core news feed experience as shown by Twittervision and Bing's Twitter Maps. I think there’s an opportunity to build interesting things in this space especially if developers can invent new types of metadata without relying on Twitter to first bless new fields like they’ve had to do with location (although their current implementation is still inadequate in my opinion).
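A toy sketch of how such a niche aggregator could work if clients were free to attach arbitrary metadata fields to tweets. The field names follow Dave Winer's example, and the feed structure is invented for illustration:

```python
# Hypothetical tweets carrying client-invented metadata Twitter itself
# knows nothing about.
tweets = [
    {"text": "woof", "meta": {"isDog": True, "breed": "corgi"}},
    {"text": "new blog post", "meta": {}},
    {"text": "bark bark", "meta": {"isDog": True, "breed": "husky"}},
]

def niche_feed(tweets, **required):
    """Return tweets whose metadata matches every required field,
    the kind of filter a dogs-only client or aggregator would run."""
    return [t for t in tweets
            if all(t["meta"].get(k) == v for k, v in required.items())]

assert len(niche_feed(tweets, isDog=True)) == 2
assert niche_feed(tweets, breed="corgi")[0]["text"] == "woof"
```

The point is that the schema lives entirely in the ecosystem: the dog-food company's aggregator and the dog-client developer agree on `isDog` between themselves, with no blessing from the platform required.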

Over the next few months, Twitter will likely continue to encroach on territory which was once assumed to belong to 3rd party developers. The question is whether Twitter will replace the opportunities they’ve taken away with new ones, or whether they’ve simply used developers as a means to an end who are now no longer useful.

Note Now Playing: Notorious B.I.G. - One More Chance Note


June 4, 2009
@ 04:11 PM

I initially planned to write up some detailed thoughts on the Google Wave video and the Google Wave Federation protocol. However, between the fact that literally millions of people have watched the video [according to YouTube] and the private conversations I’ve had with others that have influenced my thinking, I’d rather not post something that makes it seem like I’m taking credit for the ideas of others. That said, I thought it would still be useful to share some of the most insightful commentary I’ve seen on Google Wave from various developer blogs.

Sam Ruby writes in his post Google Wave 

At one level, Google Wave is clearly a bold statement that “this is the type of application that every browser should be able to run natively without needing to resort to a plugin”.  And to give Google credit, they have been working relentlessly towards that vision, addressing everything from garbage collection issues, to enabling drag and drop of photos, to providing compelling content (e.g., Google Maps, GMail, and now Google Wave).

But stepping back a bit, the entire and much hyped HTML5 interface is just a facade.  That’s not a criticism, in fact that’s generally the way the web works.  What makes Google Wave particularly interesting is that there is an API which operates directly on the repository.  Furthermore, you can host your own server, and such servers federate using XMPP.

These servers are not merely passive, they can actively interact with processes called “robots” using HTTP (More specifically, JSON-RPC over POST).  Once invoked, these robots have access to a full range of operations (Java, Python).  The Python library implementation looks relatively straightforward, and would be relatively easy to port to, say Ruby.

This dichotomy pointed out by Sam is very interesting. On the one hand, there is the Google Wave web application, which pushes the boundaries of what it means to be a rich web application that simply uses Javascript and the HTML DOM. This is a companion step in Google’s transition to taking an active role in the future of building Web applications, where previous steps have included Google representatives drafting the HTML 5 specification, Google Gears and Google Chrome. However, where things get interesting is that the API makes it possible to build alternate client applications (e.g. a .NET Wave client written in C#) and even build services that interact with users regardless of which Wave client they are using.

Joe Gregorio has more on these APIs in his blog post Wave Protocol Thoughts where he writes

There are actually 3 protocols and 2 APIs that are used in Wave:

  • Federation (XMPP)
  • The robot protocol (JSONRPC)
  • The gadget API (OpenSocial)
  • The wave embed API (Javascript)
  • The client-server protocol (As defined by GWT)

The last one in that list is really nothing that needs to be, or will probably ever be documented, it is generated by GWT and when you build your own Wave client you will need to define how it talks to your Wave server. The rest of the protocols and APIs are based on existing technologies.

The robot protocol looks very easy to use, here is the code for an admittedly simple robot. Now some people have commented that Wave reminds them of Lotus Notes, and I'm sure with a little thought you could extend that to Exchange and Groove. The difference is that the extension model with Wave is events over HTTP, which makes it language agnostic, a feature you get when you define things in terms of protocols. That is, as long as you can stand up an HTTP server and parse JSON, you can create robots for Wave, which is a huge leap forward compared to the extension models for Notes, Exchange and Groove, which are all "object" based extension models. In the "object" based extension model the application exposes "objects" that are bound to locally that you manipulate to control the application, which means that your language choices are limited to those that have bindings into that object model.

As someone whose first paying job in the software industry was an internship writing Outlook automation scripts to trigger special behaviors when people sent or modified Outlook task requests, I can appreciate the novelty of the Wave robot protocol’s programming model: instead of building a plugin against an application’s object model, you build a Web service and the application notifies you when it is time to act. Now that I’ve been exposed to this idea, it seems doubly weird that Google also shipped Google Apps Script within weeks of this announcement.
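The contrast is easy to make concrete. Here is a rough sketch (in Python, since the official robot libraries are Java and Python) of what an event-driven robot endpoint boils down to; the event and operation names below are made up for illustration and are not the actual Wave wire format.

```python
import json

def handle_robot_event(request_body: str) -> str:
    """Process a Wave-style robot event: parse the JSON payload the
    server POSTs to us and return a JSON list of operations for the
    server to apply. (Event and operation names are illustrative.)"""
    event = json.loads(request_body)
    ops = []
    if event.get("type") == "BLIP_SUBMITTED":
        # React to a submitted blip by appending a reply to the wavelet.
        ops.append({
            "op": "wavelet.appendBlip",
            "waveletId": event["waveletId"],
            "content": "Thanks for your message!",
        })
    return json.dumps(ops)
```

The point is that this is just HTTP plus JSON: any language that can stand up a web server can play, with no binding to a host application's object model.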

Nick Gall writes in his post My 2¢ on Google Wave: WWW is a Unidirectional Web of Published Documents -- Wave is a bidirectional Web of Instant Messages that

Whether or not the Wave client succeeds, Wave is undoubtedly going to have a major impact on how application designers approach web applications. The analogy would be that even if Google Maps had "failed" to become the dominant map site/service, it still had major impact on web app design.

I suspect this as well. Specifically, I have doubts about the viability of the communications paradigm shift that Google Wave is trying to force taking hold. On the other hand, I’m sure there are thousands of Web developers out there right now asking themselves "would my app be better if users could see each other’s edits in real time?", "should we add a playback feature to our service as well?" [ed note - wikipedia could really use this] and "why don’t we support seamless drag and drop in our application?", all inspired by their exposure to Google Wave.

Finally, I've ruminated publicly that I see a number of parallels between Google Wave and the announcement of Live Mesh. The one interesting parallel worth calling out is that both products/visions/platforms are most powerful when there is a world of different providers each exposing their data types to one or more of these rich user applications (i.e. a Mesh client or Wave client). Thus far I think Google has done a better job than we did with Live Mesh in being very upfront about this realization and evangelizing to developers that they participate as providers. Of course, the proof will be in the pudding in a year or so when we see just how many services have what it takes to implement a truly interoperable federated provider model for Google Wave.

Note Now Playing: Eminem - Underground/Ken Kaniff Note

Categories: Platforms | Web Development

In my previous post I talk about adding support for reading and commenting on your Facebook news feed in RSS Bandit. This functionality is made possible by the recently announced Facebook Open Stream API. As I worked through the code in my few free moments I was alternately impressed and frustrated by the Open Stream API. On the one hand, the functionality the API provides is greatly enabling and has already unleashed a bounty of innovation, as evidenced by the growing number of applications for interacting with your Facebook news feed on the desktop (e.g. Seesmic, Tweetdeck, bdule, etc). On the other hand, there are a few quirks of the API that make the web developer in me cringe and the desktop developer in me want to beg for mercy.

Below are my opinions on the Open Stream API. I’m sharing them so that other developers who plan to use the API can avoid some of the pitfalls I ran into, and to share feedback with my peers at Facebook and elsewhere on best practices in providing activity stream APIs.

Good: Lots of options for getting at the data 

Most API calls at Facebook have two entry points. You can either call a “REST-like” URL endpoint via HTTP GET or perform a SQL-like query via FQL on a generic endpoint. For interacting with the Facebook news feed, you have stream.get and stream.getComments as the REST-like methods for accessing the news feed and the comment thread for a particular feed item respectively. With FQL, you can perform queries against the stream (FQL) and comment (FQL) tables to get the same results.

Both mechanisms give you the option of getting back the data as XML or JSON. Below are examples of what HTTP requests to retrieve your news feed look like using both approaches

stream.get REQUEST:{0}&session_key={1}&api_key={2}&call_id={3}&sig={4}

FQL REQUEST:{0}&session_key={1}&api_key={2}&sig={3}&query={4}
select post_id, source_id, created_time, actor_id, target_id, app_id, message, attachment, comments, likes, permalink, attribution, type from stream where filter_key in (select filter_key FROM stream_filter where uid = {0} and type = 'newsfeed') and created_time >= {1} order by created_time desc limit 50

I ended up using stream.get over FQL because the only benefit I see to using FQL is being able to filter some of the result fields or combine data from multiple tables, neither of which I needed in this instance. I chose XML as the output format over JSON because I transform the results of the Facebook API request into an Atom feed and then hand it down to the regular Atom feed parsing code in RSS Bandit. It was much easier for me to handle this transformation using XSLT over XML results as opposed to procedural code over a JSON result set.
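To make the shape of these calls concrete, here’s a quick sketch in Python of assembling a stream.get request from the parameters in the template above. The endpoint constant is a placeholder, not necessarily the real Facebook REST server URL.

```python
from urllib.parse import urlencode

# Placeholder endpoint; substitute the real Facebook REST server URL.
REST_ENDPOINT = ""

def build_stream_get_url(session_key, api_key, call_id, sig):
    # Parameter names mirror the stream.get request template above.
    params = {
        "method": "stream.get",
        "format": "XML",  # XML output feeds nicely into an XSLT transform
        "session_key": session_key,
        "api_key": api_key,
        "call_id": call_id,
        "sig": sig,
    }
    return REST_ENDPOINT + "?" + urlencode(params)
```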

There’s also a third mechanism for getting news feed data out of Facebook that would have been perfect for my needs. You can access the news feed using the nascent activity streams standard, which returns the data as an Atom feed with certain extensions specific to social network activity streams. I couldn’t get this to work, probably due to user error on my part (more on this later), but if I had, the structure of the GET request would have been along the lines of

Activity Streams REQUEST:{0}&app_id={1}&session_key={2}&sig={3}&v=0.7&read&updated_time={4}

If not for the initial problems I had figuring out which parameters to pass to APIs and the concern about building on an API that isn’t yet at version 1.0, the Activity Stream API would have been perfect for my needs.

Kudos to the Facebook team for providing such a rich and varied set of options for getting the news feed out of their system. There’s something for every temperament.

Bad: Misuse of HTTP status codes

The Facebook API documentation describes the platform as “REST-like” and not RESTful. They sure weren’t kidding. For the most part, when I’ve discussed the problems with APIs that aren’t completely RESTful the negatives to such an approach have seemed aesthetic in nature as opposed to being practical problems. Below are some of the practical problems I faced because the Facebook APIs do not use HTTP status codes in a manner consistent with the rest of the Web.

In HTTP, there are existing error codes which clients can interpret in a consistent manner and provide consistent feedback to users when they occur. Now consider this excerpt from the documentation on using Facebook’s activity stream API

Response Codes

Like most HTTP responses, Facebook Activity Streams responses include a response header, which always includes a traditional response code and a short response message. The supported response codes include:

  • 200 Code provided whenever the Facebook servers were able to accommodate the request and provide a response.
  • 304 Code provided whenever the request header included If-Modified-Since and no new posts have been generated since the specified time.
    Note: Code 304 will never be returned if If-Modified-Since isn't included in the request header.
  • 401 Code provided whenever the URL omits one or more of the required parameters.
  • 403 Code provided whenever the URL is syntactically valid, but the user hasn't granted the required extended permission.
  • 404 Code provided whenever the URL is syntactically valid, but the signature is incorrect, or the session key is invalid.

You might notice that an HTTP 401 is used to indicate that the request is improperly formed. However, let’s see what RFC 2616 has to say about the 401 status code as well as how to communicate badly formed arguments

400 Bad Request

The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.

401 Unauthorized

The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity might include relevant diagnostic information. HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication"

According to the HTTP specification excerpted above, the status code to return on missing parameters should be 400 not 401. The practical problem with returning HTTP 401 in this case is that applications like RSS Bandit may have code paths that prompt the user to check or re-enter their credentials because an authentication error has occurred. We now have to special case getting a 401 from Facebook’s servers versus any other server on the Web. Thankfully, this error should be limited to development time unless Facebook changes their API in a backwards incompatible manner by requiring new parameters to their methods.

A bigger problem is that Facebook returns HTTP 200, which traditionally means success, in regularly occurring error conditions. Specifically, when a user’s session key expires a successful response is sent containing an error document. In RSS Bandit, we already have a processing pipeline that hands off successful responses containing an XML document to the RSS/Atom parsing layer. With Facebook’s behavior I had two choices

  1. Modify the RSS/Atom parsing layer to understand Facebook error documents and then kick back an error up to the user interface asking the user to re-enter their credentials. This was particularly hacky because that layer doesn’t really have a connection to the main UI thread nor should it.

  2. Pre-process the XML document before handing it to the RSS/Atom parsing layer. This implies an intermediate step in between dispatching on the response status and actually processing the document.

Option #2 proved more palatable since there was already an intermediate step needed to transform the results of stream.get to an Atom feed. If I’d used the Activity Streams API as was my initial plan then the decision may have been harder to make.
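Sketched in Python, the pre-processing step amounts to peeking at the root element before the feed parser ever sees the document. The exception name is mine, and the error document’s element names here are from memory rather than the spec.

```python
import xml.etree.ElementTree as ET

class SessionExpiredError(Exception):
    """Raised when Facebook returns an error document with HTTP 200."""

def preprocess_response(xml_text: str) -> ET.Element:
    """Intermediate step between the HTTP layer and the RSS/Atom parser:
    a 200 response may still carry a Facebook error document, so inspect
    the root element before passing the XML along."""
    root = ET.fromstring(xml_text)
    # endswith() handles both plain and namespace-qualified tag names.
    if root.tag.endswith("error_response"):
        code = root.findtext("error_code") or "unknown"
        raise SessionExpiredError(f"Facebook error {code}")
    return root
```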

Good: Well thought out model for authorizing access to user data

A number of application platforms work by giving an application all-or-nothing access to the user’s data. My favorite example of this bad practice is Twitter, which for a long time made the situation even worse by requiring applications to collect the person’s username and password. The controversy around Twply earlier this year shows exactly why giving applications more access to a user’s data than they need is bad. Twply only needed access to a user’s @replies, but since there was no way to give it access to just that aspect of a user’s Twitter data, it also got the ability to post tweets on their behalf and did so in a spammy manner.

The Facebook API has a notion of extended permissions which are access rights that require special opt-in from the user given that the data or functionality is sensitive and can be abused. The current set of extended permissions in the Facebook API is provided below

  • publish_stream – Lets your application or site post content, comments, and likes to a user's profile and in the streams of the user's friends without prompting the user. This permission is a superset of the status_update, photo_upload, video_upload, create_note, and share_item extended permissions, so if you haven't prompted users for those permissions yet, you need only prompt them for publish_stream.
    Note: At this time, while the Open Stream API is in beta, the only Facebook users that can grant your application the publish_stream permission are the developers of your application.
  • read_stream – Lets your application or site access a user's stream and display it. This includes all of the posts in a user's stream. You need an active session with the user to get this data.
  • email – Allows an application to send email to its user. This permission can be obtained only through the fb:prompt-permission tag or the promptpermission attribute. When the user accepts, you can send him/her an email via notifications.sendEmail or directly to the proxied_email FQL field.
  • offline_access – Grants an application access to user data when the user is offline or doesn't have an active session. This permission can be obtained only through the fb:prompt-permission tag or the promptpermission attribute.
  • create_event – Allows an app to create and modify events for a user via the events.create, events.edit and events.cancel methods.
  • rsvp_event – Allows an app to RSVP to an event on behalf of a user via the method.
  • sms – Allows a mobile application to send messages to the user and respond to messages from the user via text message.
  • status_update – Grants your application the ability to update a user's or Facebook Page's status with the status.set or users.setStatus method.
    Note: You should prompt users for the publish_stream permission instead, since it includes the ability to update a user's status.
  • photo_upload – Relaxes requirements on the photos.upload and photos.addTag methods. If the user grants this permission, photos uploaded by the application will bypass the pending state and the user will not have to manually approve the photos each time.
    Note: You should prompt users for the publish_stream permission instead, since it includes the ability to upload a photo.
  • video_upload – Allows an application to provide the mechanism for a user to upload videos to their profile.
    Note: You should prompt users for the publish_stream permission instead, since it includes the ability to upload a video.
  • create_note – Allows an application to provide the mechanism for a user to write, edit, and delete notes on their profile.
    Note: You should prompt users for the publish_stream permission instead, since it includes the ability to let a user write notes.
  • share_item – Allows an application to provide the mechanism for a user to post links to their profile.
    Note: You should prompt users for the publish_stream permission instead, since it includes the ability to let a user share links.



Bad: Too many prompts for desktop applications

Although I’ve praised the extended permissions model, it currently leads to a cumbersome experience for desktop applications. Installing Facebook desktop applications like bdule or Facebook for Adobe Air requires running through three separate permissions screens. The user has to log in, then grant the read_stream extended permission, followed by the publish_stream extended permission. Granted, the latter two only need to be done once, but they still affect the out-of-box experience fairly negatively in my opinion.

In RSS Bandit, I’ve attempted to reduce this by delaying the prompt for the publish_stream permission until the first time a user tries to comment on a news feed item from within the application. Streamlining this experience would be a boon for application developers who want the entire experience to be smooth and painless. The documentation on the Open Stream API page states that there are options for streamlining these requests, but they only apply to Web applications, not desktop apps. Sad
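The deferred prompt boils down to something like this sketch (Python for brevity; prompt_for_permission stands in for whatever browser-based authorization UI the app pops up, and the method names are illustrative):

```python
class FacebookSession:
    """Sketch of deferring the publish_stream prompt until it is needed."""

    def __init__(self, prompt_for_permission):
        self._prompt = prompt_for_permission
        self._granted = {"read_stream"}   # requested at sign-in

    def ensure_permission(self, perm: str) -> None:
        # Only interrupt the user the first time a permission is needed.
        if perm not in self._granted:
            self._prompt(perm)
            self._granted.add(perm)

    def post_comment(self, post_id: str, text: str) -> None:
        # The first comment triggers the one-time publish_stream prompt.
        self.ensure_permission("publish_stream")
        # ... make the actual comment API call here ...
```

Users who never comment never see the second prompt at all, which is the whole point.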

Bad: Plethora of application identifiers and authentication requirements is a stumbling block for beginners

The official documentation on the Facebook API doesn’t do a good job of connecting all the dots when it comes to understanding how to make calls to the service. For example, when you read the documentation for stream.get it is not obvious from that page what endpoint to make requests to OR that every Facebook API call has a set of required parameters beyond the ones listed on that page. In fact, I was stumped for about a week or so trying to figure out the magical incantations to get the right set of parameters for API calls and the right data to put in those parameters until I stumbled on a Facebook forum post about creating the 'sig' parameter, which solved my problems. I believe I once saw this information in the Facebook API documentation but after ten minutes of searching just now I can’t seem to find it, so perhaps it was in my imagination.
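For anyone else stuck at the same point, the recipe that forum post describes amounts to sorting the request parameters by name, concatenating them as name=value pairs with no separators, appending your application secret, and taking an MD5 hash of the result. A sketch in Python:

```python
import hashlib

def facebook_sig(params: dict, secret: str) -> str:
    """Compute the 'sig' request parameter: sort parameters by name,
    join them as name=value with no separators, append the application
    secret, then MD5 the whole string."""
    base = "".join(f"{k}={params[k]}" for k in sorted(params))
    return hashlib.md5((base + secret).encode("utf-8")).hexdigest()
```

Note that 'sig' itself is excluded from the signed parameters, and sorting makes the result independent of the order you happened to build the parameter list in.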

Part of the problem is the varied number of identifiers that you have to keep straight as an application developer including your

  • application ID
  • API key
  • application secret
  • session key
  • client secret

The fact that various APIs take different combinations of the above led to more than one confusing moment for me. Eventually I figured it out, but I felt like I was being hazed as I went through the process.


Categories: Platforms

Over the past few days I've been mulling over the recent news from Yahoo! that they are building a Facebook-like platform based on OpenSocial. I find this interesting given that a number of people have come to the conclusion that Facebook is slowly killing its widget platform in order to replace it with Facebook Connect.

The key reason developers believe Facebook is killing off its platform is captured in Nick O'Neill's post entitled Scott Rafer: The Facebook Platform is Dead which states

When speaking at the Facebook developer conference today in Berlin, Scott Rafer declared the Facebook platform dead. He posted statistics including one that I posted that suggests Facebook widgets are dead. Lookery’s own statistics from Quantcast suggest that their publisher traffic has been almost halved since the new site design was released. Ultimately, I think we may see an increase in traffic as users become educated on the new design but there is no doubt that developers were impacted significantly.

So what is Scott’s solution for developers looking to thrive following the shift to the new design? Leave the platform and jump on the Facebook Connect opportunity.

The bottom line is that by moving applications off of the profile page in their recent redesign, Facebook has reduced the visibility of these applications, thus reducing their page views and their ability to spread virally. Some may think that the impact of these changes was unforeseen; however, the Facebook team is obsessive about testing the impact of their UX changes, so it is extremely unlikely that they weren't aware that the redesign would negatively impact Facebook applications.

The question to ask then is why Facebook would knowingly damage a platform which has been uniformly praised across the industry and has had established Web players like Google and Yahoo! scrambling to deploy copycat efforts. Alex Iskold over at ReadWriteWeb believes he has the answer in his post Why Platforms Are Letting Us Down - And What They Should Do About It which contains the following excerpt

When the Facebook platform was unveiled in 2007, it was called genius. Never before had a company in a single stroke enabled others to tap into millions of its users completely free. The platform was hailed as a game changer under the famous mantra "we built it and they will come". And they did come, hundreds of companies rushing to write Facebook applications. Companies and VC funds focused specifically on Facebook apps.

It really did look like a revolution, but it didn't last. The first reason was that Facebook apps quickly arranged themselves on a power law curve. A handful of apps (think Vampires, Byte Me and Sell My Friends) landed millions of users, but those in the pack had hardly any. The second problem was, ironically, the bloat. Users polluted their profiles with Facebook apps and no one could find anything in their profiles. Facebook used to be simple - pictures, wall, friends. Now each profile features a zoo of heterogeneous apps, each one trying to grab the user's attention to take advantage of the network effect. Users are confused.

Worst of all, the platform had no infrastructure to monetize the applications. When Sheryl Sandberg arrived on the scene and looked at the balance sheet, she spotted the hefty expense that was the Facebook platform. Trying to live up to a huge valuation isn't easy, and in the absence of big revenues people rush to cut costs. Since it was both an expense and users were confused less than a year after its glorious launch, Facebook decided to revamp its platform.

The latest release of Facebook, which was released in July, makes it nearly impossible for new applications to take advantage of the network effect. Now users must first install the application, then find it under the application menu or one of the tabs, then check a bunch of boxes to add it to their profile (old applications are grand-daddied in). Facebook has sent a clear message to developers - the platform is no longer a priority.

Alex's assertion is that after Facebook looked at the pros and cons of their widget platform, the company came to the conclusion that the platform was turning into a cost center instead of a way to improve the value of Facebook to its users. There is evidence that applications built on Facebook's platform did cause negative reactions from its users. For example, there was the creation of the "This has got to stop…pointless applications are ruining Facebook" group which at its height had half a million Facebook users protesting the behavior of Facebook apps. In addition, the creation of Facebook's Great Apps program along with the guiding principles for building Facebook applications implies that the Facebook team realized that applications being built on their platform typically don't have their users' best interests at heart.

This brings up the interesting point that although there has been a lot of discussion on how Facebook apps make money there haven't been similar conversations on how the application platform improves Facebook's bottom line. There is definitely a win-win equation when so-called "great apps" like iLike and Causes, which positively increase user engagement, are built on Facebook's platform. However there is also a long tail of applications that try their best to spread virally at the cost of decreasing user satisfaction in the Facebook experience. These dissatisfied users likely end up reducing their usage of Facebook thus actually costing Facebook users and page views. It is quite possible that the few "great apps" built on the Facebook platform do not outweigh the amount of not-so-great apps built on the platform which have caused users to protest in the past. This would confirm Alex Iskold's suspicions about why Facebook has started sabotaging the popularity of applications built on its platform and has started emphasizing partnerships via Facebook Connect.

A similar situation has occurred with regards to the Java platform and Sun Microsystems. The sentiment is captured in a JavaWorld article by Josh Fruhlinger entitled Sun melting down, and where's Java? which contains the following excerpt

one of the most interesting things about the coverage of the company's problems is how Java figures into the conversation, which is exactly not at all. In most of the articles, the word only appears as Sun's stock ticker; the closest I could find to a mention is in this AP story, which notes that "Sun's strategy of developing free, 'open-source' software and giving it away to spur sales of its high-end computer servers and support services hasn't paid off as investors would like." Even longtime tech journalist Ashlee Vance, when gamely badgering Jon Schwartz for the New York Times about whether Sun would sell its hardware division and focus on software, only mentions Solaris and MySQL in discussing the latter.

Those in the Java community no doubt believe that Java is too big to fail, that Sun can't abandon it because it's too important, even if it can't tangibly be tied to anything profitable. But if Sun's investors eventually dismember the company to try to extract what value is left in it, I'm not sure where Java will fit into that plan.

It is interesting to note that after a decade of investment in the Java platform, it is hard to point to what concrete benefits Sun has gotten from being the originator and steward of the Java platform and programming language. Definitely another example of a platform that may have benefited applications built on it yet which didn't really benefit the platform vendor as expected.

The lesson here is that building a platform isn't just about making the developers who use the platform successful, but also about making sure that the platform itself furthers the goals of the vendor that built it in the first place.

Note Now Playing: Kardinal Offishall - Dangerous (Feat. Akon) Note


Categories: Platforms

October 27, 2008
@ 05:39 PM

Just because you aren't attending Microsoft's Professional Developer Conference doesn't mean you can't follow the announcements. The most exciting announcement so far [from my perspective] has been Windows Azure, which is described as follows on the official site

The Azure™ Services Platform (Azure) is an internet-scale cloud services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. Azure’s flexible and interoperable platform can be used to build new applications to run from the cloud or enhance existing applications with cloud-based capabilities. Its open architecture gives developers the choice to build web applications, applications running on connected devices, PCs, servers, or hybrid solutions offering the best of online and on-premises.

Azure reduces the need for up-front technology purchases, and it enables developers to quickly and easily create applications running in the cloud by using their existing skills with the Microsoft Visual Studio development environment and the Microsoft .NET Framework. In addition to managed code languages supported by .NET, Azure will support more programming languages and development environments in the near future. Azure simplifies maintaining and operating applications by providing on-demand compute and storage to host, scale, and manage web and connected applications. Infrastructure management is automated with a platform that is designed for high availability and dynamic scaling to match usage needs with the option of a pay-as-you-go pricing model. Azure provides an open, standards-based and interoperable environment with support for multiple internet protocols, including HTTP, REST, SOAP, and XML.

It will be interesting to read what developers make of this announcement and what kind of apps start getting built on this platform. I'll also be on the look out for any in depth discussions on the platform, there is lots to chew on in this announcement.

For a quick overview of what Azure means to developers, take a look at Azure for Business and Azure for Web Developers

Note Now Playing: Guns N' Roses - Welcome to the Jungle Note


Categories: Platforms | Windows Live

October 19, 2008
@ 08:47 AM

Tim Bray has a thought provoking post on embracing cloud computing entitled Get In the Cloud where he brings up the problem of vendor lock-in. He writes

Tech Issue · But there are two problems. The small problem is that we haven’t quite figured out the architectural sweet spot for cloud platforms. Is it Amazon’s EC2/S3 “Naked virtual whitebox” model? Is it a Platform-as-a-service flavor like Google App Engine? We just don’t know yet; stay tuned.

Big Issue · I mean a really big issue: if cloud computing is going to take off, it absolutely, totally, must be lockin-free. What that means is that if I’m deploying my app on Vendor X’s platform, there have to be other vendors Y and Z such that I can pull my app and its data off X and it’ll all run with minimal tweaks on either Y or Z.

At the moment, I don’t think either the Amazon or Google offerings qualify.

Are we so deranged here in the twenty-first century that we’re going to re-enact, wide-eyed, the twin tragedies of the great desktop-suite lock-in and the great proprietary-SQL lock-in? You know, the ones where you give a platform vendor control over your IT budget? Gimme a break.

I’m simply not interested in any cloud offering at any level unless it offers zero barrier-to-exit.

Tim's post is about cloud platforms but I think it is useful to talk about avoiding lock-in when taking a bet on cloud based applications as well as when embracing cloud based platforms. This is especially true when you consider that moving from one application to another is a similar yet smaller scoped problem compared to moving from one Web development platform to another.

So let's say your organization wants to move from a cloud based office suite like Google Apps for Business to Zoho. The first question you have to ask yourself is whether it is possible to extract all of your organization's data from one service and import it without data loss into another. For business documents this should be straightforward thanks to standards like ODF and OOXML. However there are points to consider such as whether there is an automated way to perform such bulk imports and exports or whether individuals have to manually export and/or import their online documents to these standard formats.

Thus the second question is how expensive it is for your organization to move the data. The cost includes everything from the potential organizational downtime to account for switching services to the actual IT department cost of moving all the data.

At this point, you then have to weigh the impact of all the links and references to your organization's data that will be broken by your switch. I don't just mean links to documents returning 404 because you have switched from being hosted at to but more insidious problems like the broken experience of anyone who is using the calendar or document sharing feature of the service to give specific people access to their data.

Also you have to ensure that email that is sent to your organization after the switch goes to the right place. Making this aspect of the transition smooth will likely be the most difficult part of the migration since it requires more control over application resources than application service providers typically give their customers.

Finally, you will have to evaluate which features you will lose by switching applications and ensure that none of them is mission critical to your business.

Despite all of these concerns, switching hosted application providers is mostly a tractable problem. Standard data formats make data migration feasible although it might be unwieldy to extract the data from the service. In addition, Internet technologies like SMTP and HTTP all have built in ways to handle forwarding/redirecting references so that they aren't broken. However although the technology makes it possible, the majority of hosted application providers fall far short of making it easy to fully migrate to or away from their service without significant effort.

When it comes to cloud computing platforms, you have all of the same problems described above and a few extra ones. The key wrinkle with cloud computing platforms is that there is no standardization of the APIs and platform technologies that underlie these services. The APIs provided by Amazon's cloud computing platform (EC2/S3/EBS/etc) are radically different from those provided by Google App Engine (Datastore API/Python runtime/Images API/etc). For zero lock-in to occur in this space, there need to be multiple providers of the same underlying APIs. Otherwise, migrating between cloud computing platforms will be more like switching your application from Ruby on Rails and MySQL to Django and PostgreSQL (i.e. a complete rewrite).
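To see just how different the API shapes are, compare the same logical "save a user record" operation against toy stand-ins for each style of API. These classes are illustrative, not the real Amazon or App Engine libraries.

```python
class S3StyleStore:
    """Toy key/value blob interface: buckets of named byte objects."""
    def __init__(self):
        self.buckets = {}
    def put_object(self, bucket, key, data):
        self.buckets.setdefault(bucket, {})[key] = data
    def get_object(self, bucket, key):
        return self.buckets[bucket][key]

class DatastoreStyleStore:
    """Toy schematized entity interface: kinds, properties, queries."""
    def __init__(self):
        self.entities = []
    def put(self, kind, **props):
        self.entities.append({"kind": kind, **props})
    def query(self, kind, **filters):
        return [e for e in self.entities
                if e["kind"] == kind
                and all(e.get(k) == v for k, v in filters.items())]

# The same logical write, expressed against each API shape:
blob_store = S3StyleStore()
blob_store.put_object("users", "u42", b'{"name": "dare"}')

entity_store = DatastoreStyleStore()
entity_store.put("User", key="u42", name="dare")
```

A migration between the two isn't a matter of swapping endpoints; you'd have to rethink how your data is modeled and queried.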

In response to Tim Bray's post, Dewitt Clinton of Google left a comment which is excerpted below

That's why I asked -- you can already do that in both the case of Amazon's services and App Engine. Sure, in the case of EC2 and S3 you'll need to find a new place to host the image and a new backend for the data, but Amazon isn't trying to stop you from doing that. (Actually not sure about the AMI format licensing, but I assumed it was supposed to be open.)

In App Engine's case people can run the open source userland stack (which exposes the API you code to) on other providers any time they want, and there are plenty of open source bigtable implementations to choose from. Granted, bulk export of data is still a bit of a manual process, but it is doable even today and we're working to make it even easier.

Are you saying that lock-in is avoided only once the alternative hosts exist?

But how does Amazon or Google facilitate that, beyond getting licensing correct and open sourcing as much code as they can? Obviously we can't be the ones setting up the alternative instances. (Though we can cheer for them, like we did when we saw the App Engine API implemented on top of EC2 and S3.)

To Doug Cutting's very good point, the way Amazon and Google (and everyone else in this space) seem to be trying to compete is by offering the best value, in terms of reliability (uptime, replication) and performance (data locality, latency, etc) and monitoring and management tools. Which is as it should be.

Although Dewitt is correct that Google and Amazon are not explicitly trying to lock customers in to their platform, the fact is that today if a customer has heavily invested in either platform then there isn't a straightforward way for customers to extricate themselves from the platform and switch to another vendor. In addition there is not a competitive marketplace of vendors providing standard/interoperable platforms as there is with email hosting or Web hosting providers.

As long as these conditions remain the same, it may be that lock-in is too strong a word to describe the situation but it is clear that the options facing adopters of cloud computing platforms aren't great when it comes to vendor choice.

Now Playing: Britney Spears - Womanizer


Categories: Platforms | Web Development

October 1, 2008
@ 04:22 PM

Werner Vogels, CTO of Amazon, writes in his blog post Expanding the Cloud: Microsoft Windows Server on Amazon EC2 that

With today's announcement that Microsoft Windows Server is available on Amazon EC2 we can now run the majority of popular software systems in the cloud. Windows Server ranked very high on the list of requests by customers so we are happy that we will be able to provide this.

One particular area that customers have been asking for Amazon EC2 with Windows Server was for Windows Media transcoding and streaming. There is a range of excellent codecs available for Windows Media and there is a large amount of legacy content in those formats. In past weeks I met with a number of folks from the entertainment industry and often their first question was: when can we run on windows?

There are many different reasons why customers have requested Windows Server; for example many customers want to run ASP.NET websites using Internet Information Server and use Microsoft SQL Server as their database. Amazon EC2 running Windows Server enables this scenario for building scalable websites. In addition, several customers would like to maintain a global single Windows-based desktop environment using Microsoft Remote Desktop, and Amazon EC2 is a scalable and dependable platform on which to do so.

This is great news. I'm starting a month long vacation as a precursor to my paternity leave since the baby is due next week and was looking to do some long overdue hacking in-between burping the baby and changing diapers. My choices were

  • Facebook platform app
  • Google App Engine app
  • EC2/S3/EBS app

The problem with Amazon was the need to use Linux, which I haven't seriously messed with since my college days running SuSE. If I could use Windows Server and ASP.NET while still learning the nuances of EC2/S3/EBS that would be quite sweet.

I wonder who I need to holler at to get into the Windows Server on EC2 beta? Maybe Derek can hook me up. Hmmmmm.

Now Playing: Lil Wayne - Best Rapper Alive [Explicit]


Categories: Platforms | Web Development

September 28, 2008
@ 09:51 PM

I'm in the market for a new phone and I've been considering getting an iPhone 3G to replace my AT&T Tilt (aka HTC Kaiser). The Tilt is a great PDA (thanks to Windows Mobile 6) and I love the slide out QWERTY keyboard. My main problems with it are the relatively huge physical size, small amount of storage space and needing two hands if I want to send email or text messages.

Although I've recently seen a lot of hype around Google's Android operating system and T-Mobile G1 (aka the HTC Dream), I haven't had any interest in it since it has no support for integrating with Microsoft Exchange which is the only reason I want a smart phone in the first place. However I have found it interesting that a lot of recent blog posts about the iPhone are about how it is in a weak position against Android because Google's open approach will trump Apple's closed approach with regards to their developer platform.

A typical example of this trend in iPhone coverage is Antonio Cangiano's blog post entitled Don't Alienate Developers which is excerpted below

Apple, a company that is generally considered far from “sinister” or “evil”, on the other hand, is trying their best to alienate developers.

Their first idiotic move was to place an NDA on a finished product like the iPhone SDK (including the final version).

Apple then decided that it was a good idea to charge people for the privilege to develop for the iPhone: $99

These were two blatant mistakes, but, if you can believe it, Apple managed to alienate developers further still. A few thousand people put up with the NDA on the SDK, with the cost of the Standard Program, and with the lengthy and bureaucratic process it takes to access the only viable distribution channel, the iPhone App Store. Some of them spent months trying to create excellent, innovative applications for the iPhone, only to see their work rejected for no good reason other than that it competed with Apple’s own products (e.g. Podcaster) or was inconvenient for their business partner AT&T (e.g. NetShare).

I fail to see Apple’s usual business insight and only see blind greed, the kind that acts as a highly effective cautionary tale against developing for Apple’s platforms. This all comes at a time when Google is promoting a truly open platform, Android, which poses a few challenges due to the heterogeneous nature of the devices it will be deployed on, but is equally interesting from a technical standpoint. Google even went so far as to award ten million dollars in prize money through a contest that they held, to attract new developers and applications. Android is definitely welcoming new developers and it’s doing so free from glaring restrictions and limitations. I suspect that many will put up with Java, to get a cup of freedom.

This kind of thinking is particularly naive because it fails to consider why developers adopt platforms in the first place. Developers go where the users are. Users go where they can get the best user experience for the right price. Openness of the platform only helps if it improves the user experience, thus attracting more users and reinforcing the virtuous cycle.

Rory Blyth recently wrote a very insightful post which compared the "open" approach taken with the Windows Mobile developer ecosystem to the "closed" approach of Apple's iPhone ecosystem. In his post entitled iPhone vs. Windows Mobile - Apple vs. Microsoft - It's the Little Things Rory wrote

Here, from what I've learned, is how iPhone and Windows Mobile rate against these criteria.

---- Windows Mobile

  • My access to your money: If you have a WM device, you probably have money. Even with carrier "discounts" they're not cheap. If you know what to get, it's worth the moolah, and you'll take advantage of what your chosen device has to offer by downloading apps that make use of it. This is as easy as:
    1. Trying to figure out where the big app stores are. There are a few, and they don't all support Windows Mobile, nor do they all have apps that will run on whatever version of WM you've got. It can be frustrating because Microsoft's practice of renaming products and slapping weird version numbers on things that are meaningless without context can easily leave you wondering what version of Windows Mobile you have (and wondering if what you've got is  the same as/compatible with PocketPC, WinCE, PPC, etc.).
    2. If not big stores, then searching the small independents where devs post their stuff on sites that look like they were made with a beta release of the first version of FrontPage.
    3. When you find an app, you go through whatever arbitrary transaction process the store/dev is using. This might mean creating an account with a site you'll never return to, handing your credit card info over to an independent whose trustworthiness is unknown, or even going through PayPal and then having to wait for the dev to check his email and manually respond with a serial number (or whatever).
    4. Run the app on your desktop which will kick off ActiveSync's install bits that install stuff on your PC in addition to the device.
    5. After clicking "Yes" or "I think so" or "Sure" on a few dialog boxes that pop up on the desktop and on the device, a CAB file is opened on the device and a local installer runs. This can mean more dialog boxes, and it can also mean having to make choices about things you don't understand (many users aren't going to comprehend the impact/difference between installing to the device's memory or to an expansion card).
    6. Run the app! Easy as 1-2-3-4-5-6!

  • App distribution options and ease of install for the customer: As you may have figured out from my "Easy as 1-2-3-4-5-6!" list above, finding, buying, and installing apps on WM devices has always been a pain in the ass. Going back to my first PocketPC (the first iPaq (the 3630)), I wondered why I needed ActiveSync just to install some stupid little app. ActiveSync makes sense if, say, I'm syncing something with the desktop like mail or calendar data, but it doesn't make sense if I'm installing Super Solitaire 5000 Deluxe Color Edition. Where do you sell your app? How do you get the word out? I haven't looked into it for a while because it frustrated me so much in the past. I'm going to take a look again, and, because I plan to target one specific platform for my app, I also plan to develop for others. In the case of Windows Mobile, I'm hoping Microsoft copies Apple's model.

    ---- Apple's iPhone

  • App distribution options and ease of install for the customer: Apple users have been bitching about using iTunes to install iPhone software. If they had any idea what it's like with other platforms, they'd shut it. While iTunes as an app store feels wrong and stupid and lame and stupid to me, at least iTunes is an app everybody has nowadays (aight - not everybody, but many, and that's good enough). Not that it matters much - with the introduction of Apple's App Store, you can browse apps on your phone, pay for, and install them without having to do some stupid syncing thing. You could be out at a bar where Jolene Blalock is hitting on you, and without having to run home to your iMac, you can buy, install, and run a crossword game before you've even had the chance to realize you've just made the biggest mistake in your life by ignoring her. And when you do realize it, and you see Jolene running off with another man, at least you'll have your crossword puzzles.
    This is just one example of how a "closed" approach where a vendor supplies the entire end-to-end user experience provides a superior experience to an "open" approach where the vendor leaves it up to other developers to fill in the gaps. Apple's approach seems to be working well for developers some of whom are making hundreds of thousands of dollars a month thanks to how good of a job Apple has done in making it easy for users to find, purchase and install applications on their iPhones.

    The key thing Apple has brought to the table is building a user experience that its customers love to use instead of one that they merely tolerate. Getting this right is way more important than the "openness" of the ecosystem. Customers and developers can put up with a closed ecosystem that limits choice as long as it improves the quality of the user experience. Where Google Android has to shine is in building a better user experience for the same price or a comparable experience at a lower price. Everything else is just noise.

    Don't take my word for it, here's what John Wang of HTC [Chief Marketing Officer of the company that is shipping the first Google Android phone] has to say about the topic in an article from Digitimes

    Some believe the success of Android handsets will rely on their open source platform. However, this is not true since Linux-based handsets have already been on the market for a while, Wang argued.

    The key element is innovation, said Wang, noting that the T-Mobile G1 is being rolled out by combining Google's Internet services, HTC's proven capability in smartphone manufacturing, and T-Mobile's telecom network resources.

    Apple is definitely ticking off developers but until another vendor shows up with a phone whose hardware and software provides a better experience for customers, it will continue to get the lion's share of attention from top mobile developers.

    Now Playing: T.I. - Whatever You Like

    Categories: Platforms

    I've been thinking a lot about platform adoption recently. I guess it is the combination of the upcoming Microsoft PDC and watching the various moves in the area of social networking platforms like OpenSocial and fbOpen. One thing that is abundantly clear is that the dynamics that drive platform adoption are amazingly consistent regardless of whether you are talking about operating system platforms like Windows' Win32 and *nix's POSIX, cloud computing platforms like Amazon's EC2 + EBS + S3 and Google App Engine, or even data access APIs like the Flickr API and Google's GData.

    When a developer adopts a platform, there is a value exchange between the developer and the software vendor. The more value that is provided to developers by the platform vendor, the more developers are attracted to the platform. Although this seems self evident, where providers of platforms go astray is that they often don't understand the value developers actually want out of software platforms and instead operate from an if we build it they will come mentality. 

    There are three main benefits adoption of one platform over another can offer a developer. These benefits are captured in the following "laws" of platform adoption  

    1. Developers adopt a platform when it offers differentiation from competitors: In competitive software markets, building an application that stands out from the crowd is important. Platforms or technologies that enable developers to provide features that are unique to the application, even just temporarily, are thus valuable to developers in such markets. Typically, if these features are truly valuable to end users this leads to a "keeping up with the Joneses" effect where the majority of applications in that space eventually adopt this platform. Recent examples of this include Flash video, AJAX and the Flickr API.

    2. Developers adopt a platform when it reduces the cost of software development: Building software is a labor intensive, complicated and error prone process which often results in failure. This makes software development an expensive undertaking. Platforms which reduce the cost of development are thus very valuable to developers. Platforms like Java and the .NET Framework reduced the cost of software development compared to building applications using C and C++. In recent years, platforms based on dynamic languages such as Ruby On Rails and Django have become increasingly popular for building Web applications as they offer simpler development options compared to using "enterprise" platforms like Java.

    3. Developers adopt a platform when it provides reach and/or better distribution: Every platform choice places a limit on which users the developer can reach. Building a Facebook application limits you to building applications for Facebook users, building an iPhone application limits you to people running Apple's phone and building an AJAX application limits you to users of modern browsers who don't have JavaScript disabled. In all of the aforementioned cases, the reach of the application platform is in the millions of users. Additionally, in all of the aforementioned cases there are alternative platforms that developers could choose that do not have as many addressable end users and thus have less reach.

      Distribution is another value add that modern platforms have begun to offer which augments the reach of the platform. The Facebook platform offers several viral distribution mechanisms for applications which enables applications to get noticed by end users and spread organically among users without explicit action by the developer. The Apple iPhone has the App Store which is the single, well-integrated entry point for users to discover and purchase applications for the device. Thus even though the iPhone may have fewer users than other smart phone platforms, targeting the iPhone platform may still be more attractive than targeting other platforms due to its superior distribution channel for applications built on its platform.

    Sometimes developers adopt platforms due to external effects such as management mandate or peer pressure but even in these cases the underlying justification is usually one or more of the benefits above. These laws mean different things for the various participants in the software ecosystem.

    • For software vendors, they clarify that delivering a software platform isn't just about delivering a technology or a set of APIs. The value proposition to developers with regards to the three laws of adoption must be clearly shown to get developers to accept the platform. Smart platform vendors should pick one or more of these axes as the value proposition of their platform and hammer it home. Good examples of these approaches include how Google has been hammering home the 'reach' benefit of adopting OpenSocial or how Erlang evangelists have been pitching it as a solution to the multicore crisis given that building concurrent applications is still an expensive endeavor in today's popular programming languages.

    • For developers, they explain how to evaluate a platform and why some platforms get traction among their peers while others do not. So if you are a fan of Common Lisp or the Pownce API, these laws of platform adoption explain why developers have flocked to other platforms in their stead.

    Now Playing: Pink - So What


    Categories: Platforms | Programming

    I've been reading about the Ning vs. WidgetLaboratory drama on TechCrunch. The meat of the conflict seems to be that widgets from WidgetLaboratory were so degrading the user experience of Ning that they had to be cut off. The relevant excerpts from the most recent TechCrunch story on the war of words are below

    For those of you not closely following the drama between social network platform Ning and a popular widget provider called WidgetLaboratory, you can read the background here. On Friday Ning unceremoniously shut down their access to Ning, making all those widgets vanish.
    In an email to WL on August 2 (more than three weeks ago), CEO Gina Bianchini wrote “Our only goal is to have you build your products in such a way that doesn’t slow down the networks running your products or takedown the Ning Platform with what you’re doing. Both of those would result in us needing to shutdown WidgetLaboratory products and that has never been our first choice of options. Hopefully, you know this after 8 months of working with us.”

    Ignoring the he said, she said nature of the communication between both companies, there is a legitimate concern that 3rd party widgets included on the pages of a Web site can degrade the performance to the extent that the site becomes unusably slow. In fact, TechCrunch has had similar problems with 3rd party widgets as Mike Arrington has mentioned on his personal blog which led to him excluding the widgets from his site.

    Typically, widgets are embedded in a site by including references to Javascript hosted on a 3rd party site in the page's HTML. This means rendering the page is dependent on how quickly the script files can be downloaded from the 3rd party site AND how long it takes for the script to execute especially since it may also fetch data from one or more servers as well. Thus a slow server or a badly written script can make every page that embeds the widget unbearably slow to render. Given that the ability to embed widgets is a key feature of social networking sites, it is important for such sites to figure out how to isolate their user experience from badly written widgets or widgets hosted on slow Web servers.

    Below are some best practices that have emerged on how social networking sites can immunize themselves from the kinds of problems Ning has had with WidgetLaboratory

    1. Host the Scripts Yourself: If you have a popular site, it is quite likely that you have more resources to handle lots of page views than the typical widget developer. Thus it makes sense to take away the dependency on externally hosted scripts by hosting the widgets yourself. Microsoft encourages developers to submit their gadgets to Windows Live Gallery if they want to build gadgets for Windows Live services such as Windows Live Spaces. For its AJAX homepage service, Google does not require developers to submit gadgets to them for hosting but instead caches gadget data for hours at a time which means they are effectively hosting the gadgets themselves for the majority of the accesses by their users.

    2. Keep External Dependencies off of Pages that Need to Render Quickly: In many cases, it isn't feasible to host all of the data and content related to widgets that are being shown on your site. In that case, you should ensure that the key scenarios on your Web site are insulated from the problems caused by slow or broken 3rd party widgets. For example, on Facebook viewing someone's profile is a key part of the user experience that is important to make sure happens as quickly and as smoothly as possible. For this reason, Facebook caches all 3rd party content that shows up on a user's profile and requires applications to call Profile.SetFBML to add content to the profile instead of providing a way to directly embed widgets on a user's profile.

    3. Make It Clear Who Is to Blame if Things go Awry: One of the issues raised by Ning in their conflict with WidgetLaboratory is that user pages wouldn't render correctly or would show degraded performance due to WidgetLaboratory's widgets but Ning would get the support calls. This kind of user confusion is avoided if the user experience makes it clear when the failure of a page to render correctly is the fault of the external widget and when it is part of the hosting site. For example, Facebook Canvas Pages for applications make it clear that the user is using a 3rd party application and not part of the core Facebook experience. I've seen lots of users complain about the slowness of Scrabulous and Scrabble but never seen anyone who thought that Facebook was to blame rather than the application developers.
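The first two practices boil down to caching 3rd party content yourself so that a slow or dead widget host can't slow down your pages. Here's a minimal sketch of that idea, with fetch_fn standing in for a real HTTP GET:

```python
import time

class GadgetCache:
    """Serve our own cached copy of a 3rd party gadget script,
    only hitting the widget host when the copy goes stale."""
    def __init__(self, fetch_fn, ttl_seconds=2 * 60 * 60):
        self.fetch_fn = fetch_fn      # stand-in for an HTTP GET
        self.ttl = ttl_seconds        # e.g. cache for a few hours
        self.cache = {}               # url -> (fetched_at, content)

    def get_script(self, url):
        now = time.time()
        entry = self.cache.get(url)
        if entry and now - entry[0] < self.ttl:
            return entry[1]           # served from our copy, fast path
        content = self.fetch_fn(url)  # the rare slow path
        self.cache[url] = (now, content)
        return content
```

With something like this in place, page render time no longer depends on the widget developer's server staying up and fast.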

    Following some of these practices would have saved Ning and its users some of their current grief.

    Now Playing: Ice Cube - Get Money, Spend Money, No Money


    Categories: Platforms | Social Software

    About a week ago, the Facebook Data team quietly released the Cassandra Project on Google Code. The Cassandra project has been described as a cross between Google's BigTable and Amazon's Dynamo storage systems. An overview of the project is available in the SIGMOD presentation on Cassandra available at SlideShare. A summary of the salient aspects of the project follows.

    The problem Cassandra is aimed at solving is one that plagues social networking sites or any other service that has lots of relationships between users and their data. In such services, data often needs to be denormalized to prevent having to do lots of joins when performing queries. However this means the system needs to deal with the increased write traffic due to denormalization. At this point if you're using a relational database, you realize you're pretty much breaking every major rule of relational database design. Google tackled this problem by coming up with BigTable. Facebook has followed their lead by developing Cassandra which they admit is inspired by BigTable. 

    The Cassandra data model is fairly straightforward. The entire system is a giant table with lots of rows. Each row is identified by a unique key. Each row has a column family, which can be thought of as the schema for the row. A column family can contain thousands of columns which are a tuple of {name, value, timestamp} and/or super columns which are a tuple of {name, column+} where column+ means one or more columns. This is very similar to the data model behind Google's BigTable.
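As a toy illustration of this data model (ignoring super columns and everything to do with distribution), you can picture the whole system as nested dictionaries:

```python
import time

# row key -> {column family -> {column name -> (value, timestamp)}}
table = {}

def insert(row_key, column_family, column, value):
    """Write one {name, value, timestamp} column into a row's column family."""
    row = table.setdefault(row_key, {})
    family = row.setdefault(column_family, {})
    family[column] = (value, time.time())

def get(row_key, column_family, column):
    """Read a column's value back out of the giant table."""
    return table[row_key][column_family][column][0]

insert("user:dare", "profile", "name", "Dare Obasanjo")
insert("user:dare", "profile", "employer", "Microsoft")
```

This is obviously not how Cassandra stores anything internally, but it captures the shape of the model: one giant keyed table, with per-row column families instead of a global schema.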

    As I mentioned earlier, denormalized data means you have to be able to handle a lot more writes than you would if storing data in a normalized relational database. Cassandra has several optimizations to make writes cheaper. When a write operation occurs, it doesn't immediately cause a write to the disk. Instead the record is updated in memory and the write operation is added to the commit log. Periodically the list of pending writes is processed and write operations are flushed to disk. As part of the flushing process the set of pending writes is analyzed and redundant writes eliminated. Additionally, the writes are sorted so that the disk is written to sequentially thus significantly improving seek time on the hard drive and reducing the impact of random writes to the system. How important is improving seek time when accessing data on a hard drive? It can make the difference between taking hours versus days to flush a hundred gigabytes of writes to a disk. Disk is the new tape.
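Here's a simplified sketch of that write path; it illustrates the idea (append to a commit log, update memory, then flush deduplicated writes in sorted order) rather than Cassandra's actual implementation:

```python
class WriteBuffer:
    def __init__(self):
        self.commit_log = []   # append-only record of every write
        self.memtable = {}     # key -> latest value; redundant writes collapse here
        self.disk = []         # flushed (key, value) pairs, in write order

    def write(self, key, value):
        self.commit_log.append((key, value))  # durable record first
        self.memtable[key] = value            # cheap in-memory update
        return True                           # "always writable"

    def flush(self):
        # Sort keys so the flush touches disk sequentially instead of
        # seeking randomly; only the last write per key survives.
        for key in sorted(self.memtable):
            self.disk.append((key, self.memtable[key]))
        self.memtable.clear()
```

Note that three writes can turn into two disk writes: the redundant update to a key is eliminated before anything hits the (sorted, sequential) flush.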

    Cassandra is described as "always writable" which means that a write operation always returns success even if it fails internally to the system. This is similar to the model exposed by Amazon's Dynamo which has an eventual consistency model. From what I've read, it isn't clear how write operations that occur during an internal failure are reconciled and exposed to users of the system. I'm sure someone with more knowledge can chime in in the comments.

    At first glance, this is a very nice addition to the world of Open Source software by the Facebook team. Kudos.

    Found via James Hamilton.

    PS: Is it me or is this the second significant instance of Facebook Open Sourcing a key infrastructure component "inspired" by Google internals?

    Now Playing: Ray J - Gifts


    In the past year both Google and Facebook have released the remote procedure call (RPC) technologies that are used for communication between servers within their data centers as Open Source projects. 

    Facebook Thrift allows you to define data types and service interfaces in a simple definition file. Taking that file as input, the compiler generates code to be used to easily build RPC clients and servers that communicate seamlessly across programming languages. It supports the following programming languages; C++, Java, Python, PHP and Ruby.

    Google Protocol Buffers allows you to define data types and service interfaces in a simple definition file. Taking that file as input, the compiler generates code to be used to easily build RPC clients and servers that communicate seamlessly across programming languages. It supports the following programming languages; C++, Java and Python.
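For the curious, a definition file of the sort both compilers consume looks something like the following (this one uses Protocol Buffers' syntax; the message and service names are made up for illustration):

```proto
// A record type with numbered, typed fields.
message Person {
  required string name = 1;
  optional string email = 2;
}

message LookupRequest {
  required string name = 1;
}

// The compiler generates client stubs and server skeletons for this
// service interface in each supported language.
service Directory {
  rpc Lookup (LookupRequest) returns (Person);
}
```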

    That’s interesting. Didn’t Steve Vinoski recently claim that RPC and its descendants are "fundamentally flawed"? If so, why are Google and Facebook not only using RPC but proud enough of their usage of yet another distributed object RPC technology based on binary protocols that they are Open Sourcing them? Didn’t they get the memo that everyone is now on the REST + JSON/XML bandwagon (preferably AtomPub)?

    In truth, Google is on the REST + XML bandwagon. Google has the Google Data APIs (GData) which is a consistent set of RESTful APIs for accessing data from Google's services based on the Atom Publishing Protocol aka RFC 5023. And even Facebook has a set of plain old XML over HTTP APIs (POX/HTTP) which they incorrectly refer to as the Facebook REST API.

    So what is the story here?

    It is all about coupling and how much control you have over the distributed end points. On the Web where you have little to no control over who talks to your servers or what technology they use, you want to utilize flexible technologies that make no assumptions about either end of the communication. This is where RESTful XML-based Web services shine. However when you have tight control over the service end points (e.g. if they are all your servers running in your data center) then you can use more optimized communications technologies that add a layer of tight coupling to your system. An example of the kind of tight coupling you have to live with is that  Facebook Thrift requires specific versions of g++ and Java if you plan to talk to it using code written in either language and you can’t talk to it from a service written in C#.

    In general, the Web is about openness and loose coupling. Binary protocols that require specific programming languages and runtimes are the exact opposite of this. However inside your Web service where you control both ends of the pipe, you can optimize the interaction between your services and simplify development by going with a binary RPC based technology. More than likely different parts of your system are already doing this anyway (e.g. memcached uses a binary protocol to talk between cache instances, SQL Server uses TDS as the communications protocol between the database and its clients, etc).

    Always remember to use the right tool for the job. One size doesn’t fit all when it comes to technology decisions.


    • Exposing a WCF Service With Multiple Bindings and Endpoints – Keith Elder describes how Windows Communication Foundation (WCF) supports multiple bindings that enable developers to expose their services in a variety of ways.  A developer can create a service once and then expose it to support net.tcp:// or http:// and various versions of http:// (Soap1.1, Soap1.2, WS*, JSON, etc).  This can be useful if a service crosses boundaries between the intranet and the Internet.

    Now Playing: Pink - Family Portrait


    Categories: Platforms | Programming

    Gnip is a newly launched startup that pitches itself as a service that aims to “make data portability suck less”. Mike Arrington describes the service in his post Gnip Launches To Ease The Strain On Web Services which is excerpted below

    A close analogy is a blog ping server (see our overview here). Ping servers tell blog search engines like Technorati and Google Blog Search when a blog has been updated, so the search engines don’t have to constantly re-index sites just to see if new content has been posted. Instead, the blog tells the ping server when it updates, which tells the search engines to drop by and re-index. The creation of the first ping server,, by Dave Winer resulted in orders of magnitude better efficiency for blog search engines.

    The same thinking basically applies to Gnip. The idea is to gather simple information from social networks - just a username and the fact that they created new content (like writing a Twitter message, for example). Gnip then distributes that data to whoever wants it, and those downstream services can then access the core service’s API, with proper user authentication, and access the actual data (in our example, the actual Twitter message).

    From a user’s perspective, the result is faster data updates across services and less downtime for services since their APIs won’t be hit as hard.

    From my perspective, Gnip also shares some similarity to services like FeedBurner as well as blog ping servers. The original purpose of blog ping servers was to make it cheaper for services like Technorati and Feedster to index the blogosphere without having to invest in a Google-sized server farm and crawl the entire Web every couple of minutes. In addition, since blogs often have tiny readerships and are thus infrequently linked to, crawling alone was not enough to ensure that they find their way into the search index. It wasn’t about taking load off of the sites that were doing the pinging.

    On the other hand, FeedBurner hosts a site’s RSS feed as a way to take load off of their servers and then provides analytics data so the site doesn’t miss out from losing the direct connection to its subscribers. This is more in line with the expectation that Gnip will take load off of a service’s API servers. However unlike FeedBurner, Gnip doesn’t actually store the user data from the social networking site. It simply stores a record that indicates that “user X on site Y made an update of type Z at time T”.  The thinking is that web sites will publish a notification to Gnip whenever their users perform an update. Below is a sample interaction between Digg and Gnip where Digg notifies Gnip that the users amy and john.doe have dugg two stories.

      POST /publishers/digg/activity.xml
      Accept: application/xml
      Content-Type: application/xml
        <activity at="2008-06-08T10:12:42Z" uid="amy" type="dugg" guid=""/>
        <activity at="2008-06-09T09:14:07Z" uid="john.doe" type="dugg" guid=""/>
      200 OK
      Content-Type: application/xml
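    For illustration, here's a rough Python sketch of how a publisher might construct that payload using the standard library. The wrapping <activities> element is an assumption on my part, since the excerpt above shows only the bare <activity> lines; this is not an official Gnip client.

```python
import xml.etree.ElementTree as ET

def build_publish_payload(activities):
    """Serialize (timestamp, uid, type) tuples into the <activity>
    elements shown in the Digg example above."""
    # Wrapper element name is assumed; the excerpt omits it.
    root = ET.Element("activities")
    for at, uid, kind in activities:
        ET.SubElement(root, "activity", {"at": at, "uid": uid, "type": kind})
    return ET.tostring(root, encoding="unicode")

payload = build_publish_payload([
    ("2008-06-08T10:12:42Z", "amy", "dugg"),
    ("2008-06-09T09:14:07Z", "john.doe", "dugg"),
])
# This payload would then be POSTed to /publishers/digg/activity.xml
```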

    There are two modes in which "subscribers" can choose to interact with the data published to Gnip. The first is in a mode similar to how blog search engines interact with the changes.xml file on and other blog ping servers. For example, services like Summize or TweetScan can ask Gnip for the last hour of changes on Twitter instead of whatever mechanism they are using today to crawl the site. Below is what a sample interaction to retrieve the most recent updates on Twitter from Gnip would look like

    GET /publishers/twitter/activity/current.xml
    Accept: application/xml

    200 OK
    Content-Type: application/xml

    <activity at="2008-06-08T10:12:07Z" uid="john.doe" type="tweet" guid=""/>
    <activity at="2008-06-08T10:12:42Z" uid="amy" type="tweet" guid=""/>
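    A subscriber's side of this interaction might look something like the following Python sketch. Again, the wrapping <activities> element is assumed, since the response excerpt shows only the <activity> lines.

```python
import xml.etree.ElementTree as ET

# A response like the one above, wrapped in a root element so it parses
# as a single document (the wrapper is an assumption).
response_body = """<activities>
  <activity at="2008-06-08T10:12:07Z" uid="john.doe" type="tweet" guid=""/>
  <activity at="2008-06-08T10:12:42Z" uid="amy" type="tweet" guid=""/>
</activities>"""

def parse_activities(xml_text):
    """Return (uid, at) pairs for each activity in a notification feed."""
    root = ET.fromstring(xml_text)
    return [(a.get("uid"), a.get("at")) for a in root.iter("activity")]

updates = parse_activities(response_body)
```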

    The main problem with this approach is the same one that affects blog ping servers. If the rate of updates is more than the ping server can handle then it may begin to fall behind or lose updates completely. Services that don’t want to risk their content not being crawled are best off providing their own update stream that applications can poll periodically. That’s why the folks at Six Apart came up with the Six Apart Update Stream for LiveJournal, TypePad and Vox weblogs.

    The second mode is one that has gotten Twitter fans like Dave Winer raving about Gnip being the solution to Twitter’s scaling problems. In this mode, an application creates a collection of one or more usernames they are interested in. Below is what a collection document created by the Twadget application to indicate that it is interested in my Twitter updates might look like.

    <collection name="twadget-carnage4life">
         <uid name="carnage4life" publisher.name="twitter"/>
    </collection>

    Then instead of polling Twitter every 5 minutes for updates it polls Gnip every 5 minutes for updates and only talks to Twitter’s servers when Gnip indicates that I’ve made an update since the last time the application polled Gnip. The interaction between Twadget and Gnip would then be as follows

    GET /collections/twadget-carnage4life/activity/current.xml
    Accept: application/xml
    200 OK
    Content-Type: application/xml

    <activity at="2008-06-08T10:12:07Z" uid="carnage4life" type="tweet" guid=""/>
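    The conditional polling logic described above can be sketched as follows; fetch_from_twitter is a hypothetical stand-in for the actual (expensive) call to Twitter's API.

```python
def poll_for_updates(gnip_activities, last_poll, fetch_from_twitter):
    """Sketch of the client logic: only call Twitter's API if Gnip
    reports activity newer than the client's last poll.

    gnip_activities    -- (uid, timestamp) pairs parsed from current.xml
    last_poll          -- ISO 8601 timestamp of the previous poll
    fetch_from_twitter -- hypothetical stand-in for the Twitter API call
    """
    # ISO 8601 UTC timestamps compare correctly as plain strings
    if any(at > last_poll for _, at in gnip_activities):
        return fetch_from_twitter()
    return []  # nothing new reported: Twitter's servers are never touched
```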


    Of course, this makes me wonder why one would think that it is feasible for Gnip to build a system that can handle the API polling traffic of every microblogging and social networking site out there but it is infeasible for Twitter to figure out how to handle the polling traffic for their own service. Talk about lowered expectations. ;)

    So what do I think of Gnip? I think the ping server mode may be of some interest for services that think it is cheaper to have code that pings Gnip after every user update instead of building out an update stream service. However, since a lot of sites already have some equivalent of the public timeline, it isn’t clear that there is a huge need for a ping service. Crawlers can just hit the public timeline, which I assume is what services like Summize and TweetScan do to keep their indexes of tweets up to date.

    As for using Gnip as a mechanism for reducing the load API clients put on a microblogging or similar service? Gnip is totally useless for that in its current incarnation. API clients aren’t interested in updates made by a single user. They are interested in all the updates made by all the people the user is following. So for Twadget to use Gnip to lighten the load it causes on Twitter’s servers on my behalf, it has to build a collection in Gnip of all the people I am following and then keep that list of users in sync with whatever that list is on Twitter. But if it has to constantly poll Twitter for my friend list, isn’t it still putting the same amount of load on Twitter? I guess this could be fixed by having Twitter publish follower/following lists to Gnip, but that introduces all sorts of interesting technical and privacy issues. Even that doesn’t matter, since the folks at Gnip brag about only keeping 60 minutes’ worth of updates as the “secret sauce” to their scalability. This means that if my Twitter client hasn’t polled Gnip within a 60-minute window (maybe my laptop is closed), it has to fall back to polling Twitter anyway. I suspect someone didn’t finish doing their homework before rushing to “launch” Gnip.

    PS: One thing that is confusing to me is why all communication between applications and Gnip needs to be over SSL. The only thing I can see it adding is making it more expensive for Gnip to run their service. I can’t think of any reason why the interactions described above need to be over a secure channel.

    Now Playing: Lil Wayne - Hustler Musik


    Categories: Platforms

    Late last week, the folks on the Google Data APIs blog announced that Google will now be supporting OAuth as the delegated authentication mechanism for all Google Data APIs. This move is meant to encourage the various online services that provide APIs that access a user’s data in the “cloud” to stop reinventing the wheel when it comes to delegated authentication and standardize on a single approach.

    Every well-designed Web API that provides access to a customer’s data in the cloud utilizes a delegated authentication mechanism which allows users to grant 3rd party applications access to their data without having to give the application their username and password. There is a good analogy for this practice in the OAuth: Introduction page which is excerpted below

    What is it For?

    Many luxury cars today come with a valet key. It is a special key you give the parking attendant and unlike your regular key, will not allow the car to drive more than a mile or two. Some valet keys will not open the trunk, while others will block access to your onboard cell phone address book. Regardless of what restrictions the valet key imposes, the idea is very clever. You give someone limited access to your car with a special key, while using your regular key to unlock everything.

    Everyday new website offer services which tie together functionality from other sites. A photo lab printing your online photos, a social network using your address book to look for friends, and APIs to build your own desktop application version of a popular site. These are all great services – what is not so great about some of the implementations available today is their request for your username and password to the other site. When you agree to share your secret credentials, not only you expose your password to someone else (yes, that same password you also use for online banking), you also give them full access to do as they wish. They can do anything they wanted – even change your password and lock you out.

    This is what OAuth does, it allows the you the User to grant access to your private resources on one site (which is called the Service Provider), to another site (called Consumer, not to be confused with you, the User). While OpenID is all about using a single identity to sign into many sites, OAuth is about giving access to your stuff without sharing your identity at all (or its secret parts).

    So every service provider invented their own protocol to do this, all of which are different but have the same basic components. Today we have Google AuthSub, Yahoo! BBAuth, Windows Live DelAuth, AOL OpenAuth, the Flickr Authentication API, the Facebook Authentication API and others. All different, proprietary solutions to the same problem.

    This ends up being problematic for developers because if you want to build an application that talks to multiple services you not only have to deal with the different APIs provided by these services but also the different authorization/authentication models they utilize as well. In a world where “social aggregation” is becoming more commonplace with services like Plaxo Pulse & FriendFeed and more applications are trying to bridge the desktop/cloud divide like OutSync and RSS Bandit, it sucks that these applications have to rewrite the same type of code over and over again to deal with the basic task of getting permission to access a user’s data. Standardizing on OAuth is meant to fix that. A number of startups like Digg & Twitter as well as major players like Yahoo and Google have promised to support it, so this should make the lives of developers easier.
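    As a rough illustration of what OAuth standardizes, here is a simplified Python sketch of the HMAC-SHA1 signing step at the core of the protocol. It omits details a real implementation must handle, such as duplicate parameter names and the full set of oauth_* protocol parameters, so treat it as a sketch of the idea rather than a conforming library.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth_hmac_sha1_signature(method, url, params,
                              consumer_secret, token_secret=""):
    """Compute an OAuth 1.0 HMAC-SHA1 signature: build the signature base
    string (method & encoded URL & encoded sorted parameters), then sign
    it with the concatenated consumer and token secrets."""
    enc = lambda s: quote(str(s), safe="~")  # OAuth-style percent-encoding
    normalized = "&".join(
        "%s=%s" % (enc(k), enc(v)) for k, v in sorted(params.items()))
    base_string = "&".join([method.upper(), enc(url), enc(normalized)])
    key = ("%s&%s" % (enc(consumer_secret), enc(token_secret))).encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

signature = oauth_hmac_sha1_signature(
    "GET", "http://example.com/photos",
    {"oauth_nonce": "abc123", "size": "original"},
    "kd94hf93k423kf44")
```

    The win of standardization is that this one signing routine works against every service provider, instead of one bespoke scheme per AuthSub, BBAuth, DelAuth and friends.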

    Of course, we still have work to do as an industry when it comes to the constant wheel reinvention in the area of Web APIs. Chris Messina points to another place where every major service provider has invented a different proprietary protocol for doing the same task in his post Inventing contact schemas for fun and profit! (Ugh) where he writes

    And then there were three
    Today, Yahoo! announced the public availability of their own Address Book API.

    However, I have to lament yet more needless reinvention of contact schema. Why is this a problem? Well, as I pointed out about Facebook’s approach to developing their own platform methods and formats, having to write and debug against yet another contact schema makes the “tax” of adding support for contact syncing and export increasingly onerous for sites and web services that want to better serve their customers by letting them host and maintain their address book elsewhere.

    This isn’t just a problem that I have with Yahoo!. It’s something that I encountered last November with the SREG and proposed Attribute Exchange profile definition. And yet again when Google announced their Contacts API. And then again when Microsoft released theirs! Over and over again we’re seeing better ways of fighting the password anti-pattern flow of inviting friends to new social services, but having to implement support for countless contact schemas. What we need is one common contacts interchange format and I strongly suggest that it inherit from vcard with allowances or extension points for contemporary trends in social networking profile data.

    I’ve gone ahead and whipped up a comparison matrix between the primary contact schemas to demonstrate the mess we’re in.

    Kudos to the folks at Google for trying to force the issue when it comes to standardizing on a delegated authentication protocol for use on the Web. However there are still lots of places across the industry where we speak different protocols and thus incur a needless burden on developers when a single language might do. It would be nice to see some of this unnecessary redundancy eliminated in the future.

    Now Playing: G-Unit - I Like The Way She Do It


    Categories: Platforms | Web Development

    Matt Asay of C|Net has an article entitled Facebook adopts the CPAL poison pill where he writes

    Instead, by choosing CPAL, Facebook has ensured that it can be open source without anyone actually using its source. Was that the intent?

    As OStatic explains, CPAL requires display of an attribution notice on derivative works. This practice, which effectively requires downstream code to carry the original developer(s)' logo, came to be known as "badgeware." It was approved by the OSI but continues to be viewed with suspicion within the open-source community.

    I've written before about how most open-source licenses don't apply themselves well to the networked economy. Only the OSL, AGPL, and CPAL contemplate web-based services. It's not surprising that Facebook opted for one of these licenses, but I am surprised it chose the one least likely to lead to developers actually modifying the Facebook platform.

    If the point was to protect the Facebook platform from competition (i.e., derivative works), Facebook chose a good license. If it was to encourage development, it chose the wrong license.

    But if the purpose was to prevent modifications of the platform, why bother open sourcing it at all?

    I've seen more than one person repeat the sentiment in the above article which leaves me completely perplexed. With fbOpen Facebook has allowed anyone who is interested to run Facebook applications and participate in what is currently the most popular & vibrant social network widget ecosystem in the world.

    I can think of lots of good reasons for not wanting to adopt fbOpen. Maybe the code is in PHP and you are a Ruby On Rails shop. Or maybe it conflicts with your company's grand strategy of painting Facebook as the devil and you the heroes of openness (*cough* Google *cough*). However I can't see how requiring that you mention somewhere on your site that your social network's widget platform is powered by the Facebook developer platform is some sort of onerous POISON PILL which prevents you from using it. In the old days, companies used to charge you for the right to say your application is compatible with theirs, heck, Apple still does. So it seems pretty wacky for someone to call Facebook out for letting people use their code and encouraging them to use the Facebook brand in describing their product. Shoot!

    The premise of the entire article is pretty ridiculous; it's like calling the BSD License a poison pill license because of the advertising clause. This isn't to say there aren't real issues with an advertising clause, as pointed out in the GNU project's article The BSD License Problem. However, as far as I'm aware, adopters of fbOpen don't have to worry about being obligated to display dozens of "powered by X" messages because every bit of code they depend on requires similar advertising. So that argument is moot in this case.

    Crazy article but I've come to expect that from Matt Asay's writing.

    Now Playing: Eminem & D12 - Sh*t On You


    When Google Gears was first announced, it was praised as the final nail in the coffin for desktop applications since it made it possible to take Web applications offline. However, in the year that Gears has existed there hasn't been as much progress or enthusiasm around taking applications offline as was initially expected. There are various reasons for this, and since I've already given my thoughts on taking Web applications offline I won't repeat myself in this post. What I do find interesting is that many proponents of Google Gears, including technology evangelists at Google, have been gradually switching to pushing Gears as a way to route around browsers and add features to the Web, as opposed to just being an 'offline solution'. Below are some posts from the past couple of months showing this gradual switch in positioning.

    Alex Russell of the Dojo Toolkit wrote a blog post entitled Progress Is N+1 in March of this year that contained the following excerpt

    Every browser that we depend on either needs an open development process or it needs to have a public plan for N+1. The goal is to ensure that the market knows that there is momentum and a vehicle+timeline for progress. When that’s not possible or available, it becomes incumbent on us to support alternate schemes to rev the web faster. Google Gears is our best hope for this right now, and at the same time as we’re encouraging browser venders to do the right thing, we should also be championing the small, brave Open Source team that is bringing us a viable Plan B. Every webdev should be building Gear-specific features into their app today, not because Gears is the only way to get something in an app, but because in lieu of a real roadmap from Microsoft, Gears is our best chance at getting great things into the fabric of the web faster. If the IE team doesn’t produce a roadmap, we’ll be staring down a long flush-out cycle to replace it with other browsers. The genius of Gears is that it can augment existing browsers instead of replacing them wholesale. Gears targets the platform/product split and gives us a platform story even when we’re neglected by the browser vendors.

    Gears has an open product development process, an auto-upgrade plan, and a plan for N+1.

    At this point in the webs evolution, I’m glad to see browser vendors competing and I still feel like that’s our best long-term hope. But we’ve been left at the altar before, and the IE team isn’t giving us lots of reasons to trust them as platform vendors (not as product vendors). For once, we have an open, viable Plan B.

    Gears is real, bankable progress.

    This was followed up by a post by Dion Almaer who's a technical evangelist at Google who wrote the following in his post Gears as a bleeding-edge HTML 5 implementation

    I do not see HTML 5 as competition for Gears at all. I am sitting a few feet away from Hixie’s desk as I write this, and he and the Gears team have a good communication loop.

    There is a lot in common between Gears and HTML 5. Both are moving the Web forward, something that we really need to accelerate. Both have APIs to make the Web do new tricks. However HTML 5 is a specification, and Gears is an implementation.
    Standards bodies are not a place to innovate, else you end up with EJB and the like.
    Gears is a battle hardened Web update mechanism, that is open source and ready for anyone to join and take in interesting directions.

    And what do Web developers actually think about using Google's technology as a way to "upgrade the Web" instead of relying on Web browsers and standards bodies for the next generation of features for the Web? Here's one answer from Matt Mullenweg, founder of WordPress, taken from his post Infrastructure as Competitive Advantage

    When a website “pops” it probably has very little to do with their underlying server infrastructure and a lot to do with the perceived performance largely driven by how it’s coded at the HTML, CSS, and Javascript level. This, incidentally, is one of the reasons Google Gears is going to change the web as we know it today - LocalServer will obsolete CDNs as we know them. (Look for this in WordPress soonish.)

    That's a rather bold claim (pun intended) by Matt. If you're wondering what features Matt is adding to WordPress that will depend on Gears, they were recently discussed in Dion Almaer's post Speed Up! with Wordpress and Gears which is excerpted below

    WordPress 2.6 and Google Gears

    However, Gears is so much more than offline, and it is really exciting to see “Speed Up!” as a link instead of “Go Offline?”

    This is just the beginning. As the Gears community fills in the gaps in the Web development model and begins to bring you HTML5 functionality I expect to see less “Go Offline” and more “Speed Up!” and other such phrases. In fact, I will be most excited when I don’t see any such linkage, and the applications are just better.

    With an embedded database, local server storage, worker pool execution, desktop APIs, and other exciting modules such as notifications, resumable HTTP being talked about in the community…. I think we can all get excited.

    Remember all those rumors back in the day that Google was working on their own browser? Well, they've gone one better and are working on the next Flash. Adobe likes pointing out that Flash has more market share than any single browser, and we all know that Flash has gone above and beyond the [X]HTML standards bodies to extend the Web, thus powering popular, rich user experiences that weren't possible otherwise (e.g. YouTube). Google is on the road to doing the same thing with Gears. And just like social networks and content sharing sites were a big part of making Flash an integral part of the Web experience for a majority of Web users, Google is learning from history with Gears, as can be seen by the recent announcements from MySpace. I expect we'll soon see Google leverage the popularity of YouTube as another vector to spread Google Gears.

    So far none of the Web sites promoting Google Gears have required it, which will limit its uptake. Flash got ahead by being necessary for sites to even work. It will be interesting to see if or when sites move beyond using Gears for nice-to-have features and start requiring it to function. It sounds crazy, but five years ago I never would have expected to see sites that would be completely broken without Flash installed, and today that isn't surprising (e.g. YouTube).

    PS: For those who are not familiar with the technical details of Google Gears, it currently provides three main bits of functionality: thread pools for asynchronous operations, access to a SQL database running on the user's computer, and access to the user's file system for storing documents, images and other media. There are also beta APIs which provide more access to the user's computer from the browser, such as the Desktop API which allows applications to create shortcuts on the user's desktop.

    Now Playing: Nas - It Ain't Hard To Tell


    Categories: Platforms | Web Development

    A few months ago Michael Mace, former Chief Competitive Officer and VP of Product Planning at Palm, wrote an insightful and perceptive eulogy for mobile application platforms entitled Mobile applications, RIP where he wrote

    Back in 1999 when I joined Palm, it seemed we had the whole mobile ecosystem nailed. The market was literally exploding, with the installed base of devices doubling every year, and an incredible range of creative and useful software popping up all over. In a 22-month period, the number of registered Palm developers increased from 3,000 to over 130,000. The PalmSource conference was swamped, with people spilling out into the halls, and David Pogue took center stage at the close of the conference to tell us how brilliant we all were.

    It felt like we were at the leading edge of a revolution, but in hindsight it was more like the high water mark of a flash flood.

    Two problems have caused a decline in the mobile apps business over the last few years. First, the business has become tougher technologically. Second, marketing and sales have also become harder.

    From the technical perspective, there are a couple of big issues. One is the proliferation of operating systems. Back in the late 1990s there were two platforms we had to worry about, Pocket PC and Palm OS. Symbian was there too, but it was in Europe and few people here were paying attention. Now there are at least ten platforms. Microsoft alone has several -- two versions of Windows Mobile, Tablet PC, and so on. [Elia didn't mention it, but the fragmentation of Java makes this situation even worse.]

    I call it three million platforms with a hundred users each (link).  

    In the mobile world, what have we done? We created a series of elegant technology platforms optimized just for mobile computing. We figured out how to extend battery life, start up the system instantly, conserve precious wireless bandwidth, synchronize to computers all over the planet, and optimize the display of data on a tiny screen.

    But we never figured out how to help developers make money. In fact, we paired our elegant platforms with a developer business model so deeply broken that it would take many years, and enormous political battles throughout the industry, to fix it -- if it can ever be fixed at all.

    So what does this have to do with social networking sites? The first excerpt, which talks about 130,000 registered developers for the Palm OS, sounds a lot like the original hype around the Facebook platform, with headlines screaming Facebook Platform Attracts 1000 Developers a Day.

    The second excerpt talks about the point at which there were two big mobile platforms to target, analogous to the appearance of Google's OpenSocial on the scene as a competing platform used by a consortium of Facebook's competitors. This means that widget developers like Slide and RockYou have to target one set of APIs when building widgets for MySpace, LinkedIn, & Orkut and a completely different set of APIs when building widgets for Facebook & Bebo. Things will likely only get worse. One reason for this is that despite API standardization, all of these sites do not have the same features. Facebook has a Web-based IM, Bebo does not. Orkut has video uploads, LinkedIn does not. All of these differences eventually creep into such APIs as "vendor extensions". The fact that both Google and Facebook are also shipping Open Source implementations of their platforms (Shindig and fbOpen) makes it even more likely that the social networking sites will find ways to extend these platforms to suit their needs.

    Finally, there's the show-me-the-money problem. It still isn't clear how one makes money building on these social networking platforms. Although companies like Photobucket and Slide have gotten quarter-billion and half-billion dollar valuations, these have all been typical "Web 2.0" zero-profit valuations. This implies that platform developers don't really make money but instead are simply trying to gather a lot of eyeballs and then flip to some big company with lots of cash and no ideas. Basically, it's a VC-funded lottery system. That doesn't sound like the basis of a successful and long-lived platform such as what we've seen with Microsoft's Windows and Office platforms or Google's Search and AdWords ecosystem. In those platforms there are actually ways for companies to make money by adding value to the ecosystem, which seems more sustainable in the long run than what we have today in the various social networking widget platforms.

    It will be interesting to see if history repeats itself in this instance.

    Now Playing: Rick Ross - Luxury Tax (featuring Lil Wayne, Young Jeezy and Trick Daddy)


    Categories: Platforms | Social Software

    Pablo Castro has a blog post entitled AtomPub support in the ADO.NET Data Services Framework where he talks about the progress they've made in building a framework for using the Atom Publishing Protocol (RFC 5023) as a protocol for communicating with SQL Server and other relational databases. Pablo explains why they've chosen to build on AtomPub in his post which is excerpted below

    Why are we looking at AtomPub?

    Astoria data services can work with different payload formats and to some level different user-level details of the protocol on top of HTTP. For example, we support a JSON payload format that should make the life of folks writing AJAX applications a bit easier. While we have a couple of these kind of ad-hoc formats, we wanted to support a pre-established format and protocol as our primary interface.

    If you look at the underlying data model for Astoria, it boils down to two constructs: resources (addressable using URLs) and links between those resources. The resources are grouped into containers that are also addressable. The mapping to Atom entries, links and feeds is so straightforward that is hard to ignore. Of course, the devil is in the details and we'll get to that later on.

    The interaction model in Astoria is just plain HTTP, using the usual methods for creating, updating, deleting and retrieving resources. Furthermore, we use other HTTP constructs such as "ETags" for concurrency checks,  "location" to know where a POSTed resource lives, and so on. All of these also map naturally to AtomPub.

    From our (Microsoft) perspective, you could imagine a world where our own consumer and infrastructure services in Windows Live could speak AtomPub with the same idioms as Astoria services, and thus could both have a standards-based interface and also use the same development tools and runtime components that work with any Astoria-based server. This would mean less clients/development tools for us to create and more opportunity for our partners in the libraries and tools ecosystem out there.

    Although I'm not responsible for any public APIs at Microsoft these days, I've found myself drawn into the various internal discussions on RESTful protocols and AtomPub due to the fact that I'm a busybody. :)

    Early on in the Atom effort, I felt that the real value wasn't in defining yet another XML syndication format but instead in the editing protocol. Still I underestimated how much mind share and traction AtomPub would eventually end up getting in the industry. I'm glad to see Microsoft making a huge bet on standards based, RESTful protocols especially given our recent history where we foisted Snakes On A Plane on the industry.

    However since AtomPub is intended to be an extensible protocol, Astoria has added certain extensions to make the service work for their scenarios while staying within the letter and spirit of the spec. Pablo talks about some of their design decisions when he writes

    We are simply mapping whatever we can to regular AtomPub elements. Sometimes that is trivial, sometimes we need to use extensions and sometimes we leave AtomPub alone and build an application-level feature on top. Here is an initial list of aspects we are dealing with in one way or the other. We’ll also post elaborations of each one of these to the appropriate atom-syntax or atom-protocol mailing lists.
    c) Using AtomPub constructs and extensibility mechanisms to enable Astoria features:

    • Inline expansion of links (“GET a given entry and all the entries related through this named link”; how do we represent such a request and its answer in Atom?)
    • Properties for entries that are media link entries and thus cannot carry any more structured data in the <content> element
    • HTTP methods acting on bindings between resources (links) in addition to resources themselves
    • Optimistic concurrency over HTTP, use of ETags and in general guaranteeing consistency when required
    • Request batching (e.g. how does a client send a set of PUT/POST/DELETE operations to the server in a single go?)

    d) Astoria design patterns that are not AtomPub format/protocol concepts or extensions:

    • Astoria gives semantics to URLs and has a specific syntax to construct them
    • How metadata that describes the structure of a service's end points is exposed. This ranges from being able to find out entry points (e.g. collections in service documents) to having a way of discovering the structure of entries that contain structured data

    Pablo will be posting more about the Astoria design decisions on atom-syntax and atom-protocol in the coming weeks. It'll be interesting to see the feedback on the approaches they've taken with regards to following the protocol guidelines and extending it where necessary.

    It looks like I'll have to renew my subscription to both mailing lists.

    Now Playing: Lil Jon & The Eastside Boyz - Grand Finale (feat Nas, Jadakiss, T.I., Bun B & Ice Cube)


    Categories: Platforms | XML Web Services

    On Friday of last week, Brad Fitzpatrick posted an entry on the Google Code blog entitled URLs are People, Too where he wrote

    So you've just built a totally sweet new social app and you can't wait for people to start using it, but there's a problem: when people join they don't have any friends on your site. They're lonely, and the experience isn't good because they can't use the app with people they know. You could ask them to search for and add all their friends, but you know that every other app is asking them to do the same thing and they're getting sick of it. Or they tried address book import, but that didn't totally work, because they don't even have all their friends' email addresses (especially if they only know them from another social networking site!). What's a developer to do?

    One option is the new Social Graph API, which makes information about the public connections between people on the Web easily available and useful
    Here's how it works: we crawl the Web to find publicly declared relationships between people's accounts, just like Google crawls the Web for links between pages. But instead of returning links to HTML documents, the API returns JSON data structures representing the social relationships we discovered from all the XFN and FOAF. When a user signs up for your app, you can use the API to remind them who they've said they're friends with on other sites and ask them if they want to be friends on your new site.

    I talked to DeWitt Clinton, Kevin Marks and Brad Fitzpatrick about this API at the O'Reilly Social Graph FOO Camp and I think it is very interesting. Before talking about the API, I did want to comment on the fact that this is the second time I've seen a Google employee ship something that implies that any developer can just write custom code to do data analysis on top of their search index (i.e. Google's copy of the World Wide Web) and then share that information with the world. The first time was Ian Hickson's work with Web authoring statistics. That is cool.

    Now back to the Google Social Graph API. An illuminating aspect of my conversations at the Social Graph FOO Camp is that, for established social apps, the scenario Brad describes (bootstrapping a new user's experience by showing them friends who already use the service) is more important than the "invite my friends to join this new social networking site" scenario. This is interesting primarily because both goals are currently served by the same anti-pattern: requesting a user's username and password for their email service provider and screen scraping their address book. The Social Graph API attempts to eliminate the need for this ugly practice by crawling a user's publicly articulated relationships and then providing an API that social apps can use to find the user's identities on other services, as well as their relationships with other users on those services.

    The API uses URIs as the primary identifier for users instead of email addresses. Of course, since there is often an intuitive way to convert a username to a URI (e.g. 'carnage4life on Twitter' maps to a predictable profile URL), users simply need to provide a username instead of a URI.

    So how would this work in the real world? Let's say I signed up for Facebook for the first time today. At this point my experience on the site would be pretty lame: I've made no friends, so my news feed would be empty and I'm not connected to anyone I know on the site yet. Now, instead of Facebook collecting the username and password for my email address provider to screen scrape my address book (boo, hiss), it shows a list of social networking sites and asks for just my username on those sites. On obtaining my username on Twitter, it maps that to a URI and passes that to the Social Graph API. This returns a list of the people I'm following on Twitter, with various identifiers for them, which Facebook in turn looks up in its user database before prompting me to add any of them who are Facebook users as my friends on the site.
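    As a sketch of that flow, the snippet below builds a request for the Social Graph API's documented lookup endpoint and pulls followed-user URIs out of a canned response. The JSON shape shown is simplified from what the API actually returns, and the canned data is made up:

```python
# Sketch of the Social Graph API lookup flow described above.
import json
from urllib.parse import urlencode

def lookup_url(uri, edges_out=True):
    params = {"q": uri}
    if edges_out:
        params["edo"] = "1"   # ask for outbound edges ("people I follow")
    return "http://socialgraph.apis.google.com/lookup?" + urlencode(params)

# Canned response illustrating the general structure: nodes plus the
# nodes they reference (simplified; not the API's exact schema).
canned = json.loads('''{
  "nodes": {
    "http://twitter.com/carnage4life": {
      "nodes_referenced": {
        "http://twitter.com/scobleizer": {"types": ["contact"]}
      }
    }
  }
}''')

def followed_uris(response, uri):
    # The URIs this user publicly declares relationships with
    return list(response["nodes"][uri]["nodes_referenced"])

assert "q=" in lookup_url("http://twitter.com/carnage4life")
assert followed_uris(canned, "http://twitter.com/carnage4life") == \
       ["http://twitter.com/scobleizer"]
```

A site like Facebook would then match each returned URI against accounts in its own user database, with no email password ever changing hands.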

    This is a good idea that gets around the proliferation of applications that collect usernames and passwords from users to try to access their social graph on other sites. However there are lots of practical problems with relying on this as an alternative to screen scraping and other approaches intended to discover a user's social graph including

    • many social networking sites don't expose their friend lists as FOAF or XFN
    • many friend lists on social networking sites are actually hidden from the public Web (e.g. most friend lists on Facebook) which is by design
    • many friend lists in social apps aren't even on the Web (e.g. buddy lists from IM clients, address books in desktop clients)

    That said, this is a good contribution to this space. Ideally, the major social networking sites and address book providers would also expose APIs that social applications can use to obtain a user's social graph without resorting to screen scraping. We are definitely working on that at Windows Live with the Windows Live Contacts API. I'd love to see other social software vendors step up and provide similar APIs in the coming months. That way everybody wins: our users, our applications and the entire ecosystem.

    Now Playing: Playaz Circle - Duffle Bag Boy (feat. Lil Wayne)


    Categories: Platforms | Social Software

    A few days ago I got a Facebook message from David Recordon about Six Apart's release of the ActionStreams plugin. The meat of the announcement is excerpted below

    Today, we're shipping the next step in our vision of openness -- the Action Streams plugin -- an amazing new plugin for Movable Type 4.1 that lets you aggregate, control, and share your actions around the web. Now of course, there are some social networking services that have similar features, but if you're using one of today's hosted services to share your actions it's quite possible that you're giving up either control over your privacy, management of your identity or profile, or support for open standards. With the Action Streams plugin you keep control over the record of your actions on the web. And of course, you also have full control over showing and hiding each of your actions, which is the kind of privacy control that we demonstrated when we were the only partners to launch a strictly opt-in version of Facebook Beacon. Right now, no one has shipped a robust and decentralized complement to services like Facebook's News Feed, FriendFeed, or Plaxo Pulse. The Action Streams plugin, by default, also publishes your stream using Atom and the Microformat hAtom so that your actions aren't trapped in any one service. Open and decentralized implementations of these technologies are important to their evolution and adoption, based on our experiences being involved in creating TrackBack, Atom, OpenID, and OAuth. And we hope others join us as partners in making this a reality.

    This is a clever idea, although I wouldn't compare it to the Facebook News Feed (what my social network is doing); it is instead a self-hosted version of the Facebook Mini-Feed (what I've been doing). Although people have been doing this for a while by aggregating their various feeds and republishing to their blog (life streams?), I think this is the first time that a full-fledged framework for doing this has been shipped as an out of the box solution.

    Mark Paschal has a blog post entitled Building Action Streams which gives an overview of how the framework works. You define templates which contain patterns that should be matched in a feed (RSS/Atom) or in an HTML document, along with how to convert the matched elements into a blog post. Below is the template for extracting links from a site's RSS feed and republishing them.

            name: Links
            description: Your public links
            html_form: '[_1] saved the link <a href="[_2]">[_3]</a>'
                - url
                - title
            url: '{{ident}}'
            identifier: url
                foreach: //item
                    created_on: dc:date/child::text()
                    title: title/child::text()
                    url: link/child::text()

    It reminds me a little of XSLT. I almost wondered why they just didn't use that until I saw that it also supports pattern matching HTML docs using Web::Scraper [and that XSLT is overly verbose and difficult to grok at first glance].
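    The core idea of the template above, looping over each //item in a feed and filling a display template from the matched fields, can be sketched with Python's standard library (a stand-in for the plugin's Perl internals; the feed data below is made up):

```python
# Sketch of the ActionStreams matching idea: iterate over feed items,
# extract fields, and render each one through an html_form-style template.
import xml.etree.ElementTree as ET

feed = ET.fromstring("""
<rss><channel>
  <item><title>REST vs. SOAP</title><link>http://example.com/rest</link></item>
  <item><title>AtomPub notes</title><link>http://example.com/atompub</link></item>
</channel></rss>""")

html_form = '{who} saved the link <a href="{url}">{title}</a>'

actions = [
    html_form.format(who="Dare",
                     title=item.findtext("title"),
                     url=item.findtext("link"))
    for item in feed.iter("item")       # the template's "foreach: //item"
]
assert actions[0] == 'Dare saved the link <a href="http://example.com/rest">REST vs. SOAP</a>'
```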

    Although this is a pretty cool tool I don't find it interesting as a publishing tool. On the other hand, it's potential as a new kind of aggregator is very interesting. I'd love to see someone slap more UI on it and make it a decentralized version of the Facebook News feed. Specifically, if I could feed it a blogroll, have it use the Google Social Graph API to figure out the additional services that the people in my subscriptions have and then build a feed reader + news feed experience on top of it. That would be cool. 

    Come to think of it, this would be something interesting to experiment with in future versions of RSS Bandit.

    Now Playing: Birdman - Pop Bottles (remix) (feat. Jim Jones & Fabolous)


    Categories: Platforms | Social Software

    Two seemingly unrelated posts flew by my aggregator this morning. The first was Robert Scoble’s post The shy Mark Zuckerberg, founder of Facebook where he talks about meeting the Facebook founder. During their conversation, Zuckerberg admits they made mistakes with their implementation of Facebook Beacon and will be coming out with an improved version soon.

    The second post is from the Facebook developer blog and it is the announcement of the JavaScript Client Library for Facebook API which states

    This JavaScript client library allows you to make Facebook API calls from any web site and makes it easy to create Ajax Facebook applications. Since the library does not require any server-side code on your server, you can now create a Facebook application that can be hosted on any web site that serves static HTML.

    Although the pundits have been going ape shit over this on Techmeme this is an unsurprising announcement given the precedent of Facebook Beacon. With that announcement they provided a mechanism for a limited set of partners to integrate with their feed.publishTemplatizedAction API using a Javascript client library. Exposing the rest of their API using similar techniques was just a matter of time.

    What was surprising to me when reading the developer documentation for the Facebook Javascript client library is the following notice to developers

    Before calling any other Facebook API method, you must first create an FB.ApiClient object and invoke its requireLogin() method. Once requireLogin() completes, it invokes the callback function. At that time, you can start calling other API methods. There are two ways to call them: normal mode and batched mode.

    So unlike the original implementation of Beacon, the Facebook developers aren’t automatically associating your Facebook account with the 3rd party site then letting them party on your data. Instead, it looks like the user will be prompted to login before the Website can start interacting with their data on Facebook or giving Facebook data about the user.

    This is a much better approach than Beacon and remedies the primary complaint from my Facebook Beacon is Unfixable post from last month.

    Of course, I haven’t tested it yet to validate whether this works as advertised. If you get around to testing it before I do, let me know in the comments if it works the way the documentation implies.


    Ari Steinberg who works on the Facebook developer platform has a blog post entitled New Rules for News Feed which states

    As part of the user experience improvements we announced yesterday, we're changing the rules for how Feed stories can be published with the feed.publishTemplatizedAction API method. The new policy moving forward will be that this function should only be used to publish actions actively taken by the "actor" mentioned in the story. As an example, feed stories that say things like "John has received a present" are no longer acceptable. The product motivation behind this change is that Feed is a place to publish highly relevant stories about user activity, rather than passive stories promoting an application.

    To foster this intended behavior, we are changing the way the function works: the "actor_id" parameter will be ignored. Instead the session_key used to generate the feed story will be used as the actor.

    In order to ensure a high quality experience for users, starting 9am Pacific time Tuesday 22 January we may contact you, or in severe cases initiate an enforcement action, if your stories are not complying with the new policy, especially if the volume of non-complying stories is high.

    If you are not a developer using the Facebook platform, it may be unclear what exactly this announcement means to end users or applications that utilize Facebook’s APIs.

    To understand the impact of the Facebook announcement, it would be useful to first talk about the malicious behavior that Facebook is trying to curb. Today, an application can call feed.publishTemplatizedAction and publish a story to the user’s Mini-Feed (the list of all the user’s actions) which will also show up in the News Feeds of the user’s friends. Unfortunately some Facebook applications have been publishing stories that don’t really correspond to a user taking an action. For example, when a user installs the Flixster application, Flixster not only publishes a story to all of the user’s friends saying the user has installed the application but also publishes a story to the friends of each of the user’s friends that also have Flixster installed. This means my friends get updates about Flixster activity being sent to them when I wasn’t actually doing anything with the application. I don’t know about you, but this seems like a rather insidious way for an application to spread “virally”.

    Facebook’s attempt to curb such application spam is to require that an application have a session key that identifies the logged in user when publishing the story which implies that the user is actually using the application from within Facebook when the story is published. The problem with this remedy is that it totally breaks applications that publish stories to the Facebook News Feed when the user isn’t on the site. For example, since I have the Twitter application installed on Facebook, my Facebook friends get an update sent to their News Feeds whenever I post something new on Twitter.

    The problem for Facebook is that by limiting a valid usage of the API, they may have closed off a spam vector but have also closed off a valuable integration point for third party developers and for their users.

    PS: There might be an infinite session key loophole to the above restriction which I’m sure Facebook will close off if apps start abusing it as well.

    Now playing: Silkk the Shocker - It Ain't My Fault


    Paul Buchheit, creator of Gmail and now the founder of FriendFeed, has a blog post entitled Should Gmail, Yahoo, and Hotmail block Facebook? where he writes

    Apparently Facebook will ban you (or at least Robert Scoble) if you attempt to extract your friend's email addresses from the service. Automated access is a difficult issue for any web service, so I won't argue with their decision -- it's their service and they own you. However, when I signed up for Facebook I gave them my Gmail address and password, using their find friends feature:
    So the question is, should Gmail, Yahoo, and Hotmail block Facebook (or close the accounts of anyone who uses Facebook's "friend finder") for violating their Terms of Use?

    I don't want to single out Facebook here since pretty much every "Web 2.0" website with social features is very in-your-face about asking for your credentials from your email provider and then screen scraping your contact's email addresses. I just signed up for Twitter and the user interface makes it cumbersome to even start using the service after creating an account without giving up your email username and password.

    I think there are two questions here. The first is whether users should be able to extract their data [including social graph data] from one service and import it into another. I personally believe the answer is Yes, and this philosophy underlies what we've been working on at Windows Live, specifically on the team I'm on, which is responsible for the social graph contacts platform.

    The next question is whether screen scraping is the way to get this data. I think the answer is definitely not. The first problem with this approach is that when I give some random "Web 2.0" social network my email username and password, I’m not only giving them access to my address book but also access to my email itself. This seems like a lot of valuable data to trust to some fly-by-night "Web 2.0" service that can't seem to hire a full-time sysadmin or afford a full rack in a data center, let alone know how to properly safeguard my personal information.

    Another problem with this approach is that it encourages users to give up their usernames and passwords when prompted by any random Web site which increases incidences of phishing. Some have gone as far as calling this approach an anti-pattern that is kryptonite to the Open Web.

    Finally, there is no way to identify the application that is accessing data on the user's behalf if it turns out to be a malicious application. For example, if you read articles like Are you getting Quechup spammed you'll note that there's been more than one incident where a "Web 2.0" company turned out to either be spamming users via the email addresses they had harvested in this manner or straight up just resold the email addresses to spammers. Have you ever wondered how much spam you get because someone who has your email address blithely gave up your email credentials to some social network site who in turn used a Web service that is run by spammers to retrieve your contact details?

    So if I think that users should be able to get their data out, yet screen scraping isn't the way, what should we do? At Windows Live, we believe the right approach is to provide user-centric APIs which allow users to grant and revoke permission to third party applications to access their personal data. For the specific case of social graph data, we've provided an ALPHA Windows Live Contacts API which is intended to meet exactly this scenario. The approach taken by this API and similar patterns (e.g. using OAuth) solves all three concerns I've raised above.

    Now given what I've written above, do you think Hotmail should actively block or hinder screen scraping applications used to obtain the email addresses of a user's contacts?


    Categories: Platforms | Windows Live

    December 30, 2007
    @ 11:19 PM


    POST /reader/api/0/subscription/edit HTTP/1.1
    Content-Type: application/x-www-form-urlencoded
    Cookie: SID=DQAAAHoAAD4SjpLSFdgpOrhM8Ju-JL2V1q0aZxm0vIUYa-p3QcnA0wXMoT7dDr7c5FMrfHSZtxvDGcDPTQHFxGmRyPlvSvrgNe5xxQJwPlK_ApHWhzcgfOWJoIPu6YuLAFuGaHwgvFsMnJnlkKYtTAuDA1u7aY6ZbL1g65hCNWySxwwu__eQ
    Content-Length: 182
    Expect: 100-continue


    HTTP/1.1 200 OK
    Content-Type: text/html; charset=UTF-8
    Set-Cookie: GRLD=UNSET;Path=/reader/
    Transfer-Encoding: chunked
    Cache-control: private
    Date: Sun, 30 Dec 2007 23:08:51 GMT
    Server: GFE/1.3

    <html><head><title>500 Server Error</title>
    <style type="text/css">
          body {font-family: arial,sans-serif}
          div.nav {margin-top: 1ex}
          div.nav A {font-size: 10pt; font-family: arial,sans-serif}
          span.nav {font-size: 10pt; font-family: arial,sans-serif; font-weight: bold}
          div.nav A,span.big {font-size: 12pt; color: #0000cc}
          div.nav A {font-size: 10pt; color: black}
    A.l:link {color: #6f6f6f}
    </style>
    <body text="#000000" bgcolor="#ffffff"><table border="0" cellpadding="2" cellspacing="0" width="100%"></table>
    <table><tr><td rowspan="3" width="1%"><b><font face="times" color="#0039b6" size="10">G</font><font face="times" color="#c41200" size="10">o</font><font face="times" color="#f3c518" size="10">o</font><font face="times" color="#0039b6" size="10">g</font><font face="times" color="#30a72f" size="10">l</font><font face="times" color="#c41200" size="10">e</font>&nbsp;&nbsp;</b></td>
    <tr><td bgcolor="#3366cc"><font face="arial,sans-serif" color="#ffffff"><b>Error</b></font></td></tr>
    <blockquote><h1>Server Error</h1>
    The server encountered a temporary error and could not complete your request.<p></p> Please try again in 30 seconds.

    <table width="100%" cellpadding="0" cellspacing="0"><tr><td bgcolor="#3366cc"><img alt="" width="1" height="4"></td></tr></table></body></html>


    Categories: Platforms | Programming | XML Web Services

    Sometime last week, Amazon soft launched Amazon SimpleDB, a hosted service for storing and querying structured data. This release plugged a hole in their hosted Web services offerings, which include the Amazon Simple Storage Service (S3) and the Amazon Elastic Compute Cloud (EC2). Amazon’s goal of becoming the “Web OS” upon which the next generation of Web startups is built came off as hollow when all they gave you was BLOB storage and hosted computation but no structured storage. With SimpleDB, they’re almost at the point where all the tools you need for building the next Flickr can be provided by Amazon’s Web services. The last bit they need to provide is actual Web hosting, so that developers don’t need to resort to absurd dynamic DNS hacks when interacting with their Amazon applications from the Web.

    The Good: Commoditizing hosted services and getting people to think outside the relational database box

    The data model of SimpleDB is remarkably similar to Google’s BigTable in that instead of having multiple tables and relations between them, you get a single giant table which is accessed via the tuple of {row key, column key}. Although both SimpleDB and BigTable allow applications to store multiple values for a particular tuple, they do so in different ways. In BigTable, multiple values are additionally keyed by timestamp, so I can access data using tuples such as {“example.com”, “incoming_links”, “12–12–2007”}. In Amazon’s SimpleDB I’d simply be able to store multiple values for a particular key pair, so accessing {“Dare Obasanjo”, “weblogs”} would return every weblog URL stored for that pair.

    Another similarity that both systems share, is that there is no requirement that all “rows” in a table share the same schema nor is there an explicit notion of declaring a schema. In SimpleDB, tables are called domains, rows are called items and the columns are called attributes. 
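    The difference between the two data models can be sketched with plain dictionaries; the keys and URLs below are placeholders, not real data:

```python
# Sketch of the two data models described above.
# BigTable: values are additionally keyed by timestamp.
bigtable = {
    ("example.com", "incoming_links", "12-12-2007"): "42",
    ("example.com", "incoming_links", "12-13-2007"): "45",
}

# SimpleDB: an item's attribute simply holds multiple values directly.
simpledb = {
    ("Dare Obasanjo", "weblogs"): ["http://example.com/blog1",
                                   "http://example.com/blog2"],
}

# BigTable addresses one value per {row, column, timestamp} tuple...
assert bigtable[("example.com", "incoming_links", "12-12-2007")] == "42"
# ...while SimpleDB returns the whole multivalued set for {item, attribute}.
assert len(simpledb[("Dare Obasanjo", "weblogs")]) == 2
```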

    It is interesting to imagine how this system evolved. From experience, it is clear that everyone who has had to build a massive relational database learns that database joins kill performance. The longer you’ve dealt with massive data sets, the more you begin to fall in love with denormalizing your data so you can scale. Taken to its logical extreme, there’s nothing more denormalized than a single table. Even better, Amazon goes a step further by introducing multivalued columns, which means that SimpleDB isn’t even in First Normal Form, whereas we all learned in school that the minimum we should aspire to is Third Normal Form.

    I think it is great to see more mainstream examples that challenge the traditional thinking of how to store, manage and manipulate large amounts of data.

    I also think the pricing is very reasonable. If I was a startup founder, I’d strongly consider taking Amazon Web Services for a spin before going with a traditional LAMP or WISC approach.  

    The Bad: Eventual Consistency and Data Values are Weakly Typed

    The documentation for the PutAttributes method has the following note

    Because Amazon SimpleDB makes multiple copies of your data and uses an eventual consistency update model, an immediate GetAttributes or Query request (read) immediately after a DeleteAttributes or PutAttributes request (write) might not return the updated data.

    This may or may not be a problem depending on your application. It may be OK for a social bookmarking-style application if it took a few minutes before your tag updates were applied to a bookmark, but the same can’t be said for an application like Twitter. What would be useful for developers would be if Amazon gave some more information about the delayed propagation, such as average latency during peak and off-peak hours.
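    Absent such numbers from Amazon, applications typically cope with the read-your-writes problem by retrying the read until the update becomes visible. A sketch, where get_attributes stands in for a hypothetical SimpleDB client call (not Amazon's actual library):

```python
# One way an application might cope with the eventual-consistency window:
# poll the read until the write shows up or a deadline passes.
import time

def read_own_write(get_attributes, item, expected, timeout=5.0, interval=0.25):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = get_attributes(item)
        if value == expected:
            return value            # the replica has caught up
        time.sleep(interval)
    raise TimeoutError("update not yet visible")

# Simulated store that becomes consistent on the third read.
reads = {"n": 0}
def fake_get(item):
    reads["n"] += 1
    return "new" if reads["n"] >= 3 else "old"

assert read_own_write(fake_get, "Item123", "new", interval=0.01) == "new"
```

Of course, this only papers over the problem; the application still has to decide what "expected" even means, which is why latency numbers from Amazon would help.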

    There is another interesting note in the documentation of the Query method which states

     Lexicographical Comparison of Different Data Types

    Amazon SimpleDB treats all entities as UTF-8 strings. Keep this in mind when storing and querying different data types, such as numbers or dates. Design clients to convert their data into an appropriate string format, so that query expressions return expected results.

    The following are suggested methods for converting different data types into strings for proper lexicographical order enforcement:

    • Positive integers should be zero-padded to match the largest number of digits in your data set. For example, if the largest number you are planning to use in a range is 1,000,000, every number that you store in Amazon SimpleDB should be zero-padded to at least 7 digits. You would store 25 as 0000025, 4597 as 0004597, and so on.

    • Negative integers should be offset and turned into positive numbers and zero-padded. For example, if the smallest negative integer in your data set is -500, your application should add at least 500 to every number that you store. This ensures that every number is now positive and enables you to use the zero-padding technique.

    • To ensure proper lexicographical order, convert dates to the ISO 8601 format.

    [Note] Note

    Amazon SimpleDB provides utility functions within our sample libraries that help you perform these conversions in your application.
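    The zero-padding and offset rules above boil down to a pair of per-dataset constants you must fix up front, as this sketch shows (the constants come from the documentation's own examples):

```python
# The documentation's padding and offset rules as small helpers.
# WIDTH and OFFSET are choices you must commit to before storing any data.
WIDTH = 7          # enough digits for the largest expected value (1,000,000)
OFFSET = 500       # absolute value of the smallest expected negative integer

def encode_int(n: int) -> str:
    # Shift into the non-negative range, then zero-pad to fixed width
    return str(n + OFFSET).zfill(WIDTH)

assert encode_int(25) == "0000525"
assert encode_int(-500) == "0000000"

# Lexicographic order of the encodings now matches numeric order:
nums = [-500, -1, 0, 25, 4597, 1000000]
assert sorted(nums, key=encode_int) == nums
```

Note the catch: if a value ever falls outside the range you planned for, the encoding silently breaks your sort order.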

    This is ghetto beyond belief. I should know ahead of time what the lowest number will be in my data set and add/subtract offsets from data values when inserting and retrieving them from SimpleDB? I need to know the largest number in my data set and zero pad to that length? Seriously, WTF?

    It’s crazy just thinking about the kinds of bugs that could be introduced into applications because of these wacky semantics and the recommended hacks to get around them. Even if this is the underlying behavior of SimpleDB, Amazon should have fixed this up in an API layer above SimpleDB and exposed that, instead of providing ghetto helper functions in a handful of popular programming languages and crossing their fingers hoping that no one hits this problem.

    The Ugly: Web Interfaces that Claim to be RESTful but Aren’t

    I’ve talked about APIs that claim to be RESTful but aren’t in the past but Amazon’s takes the cake when it comes to egregious behavior. Again, from the documentation for the PutAttributes method we learn

    Sample Request

    The following example uses PutAttributes on Item123 which has attributes (Color=Blue), (Size=Med) and (Price=14.99) in MyDomain. If Item123 already had the Price attribute, this operation would replace the values for that attribute.
    https://sdb.amazonaws.com/?Action=PutAttributes
    &DomainName=MyDomain
    &ItemName=Item123
    &Attribute.0.Name=Color&Attribute.0.Value=Blue
    &Attribute.1.Name=Size&Attribute.1.Value=Med
    &Attribute.2.Name=Price&Attribute.2.Value=14.99&Attribute.2.Replace=true
    &AWSAccessKeyId=[valid access key id]

    Sample Response

    <PutAttributesResponse xmlns="">

    Wow. A GET request with a parameter called Action which modifies data? What is this, 2005? I thought we already went through the realization that GET requests that modify data are bad after the Google Web Accelerator scare of 2005?

    Of course, I'm not the only one that thinks this is ridonkulous. See similar comments from Stefan Tilkov, Joe Gregorio, and Steve Loughran. Methinks, someone at Amazon needs to go read some guidelines on building RESTful Web services.
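    For contrast, here is a sketch of how the same update might look expressed RESTfully, with the item as a resource and PUT (rather than a GET with an Action parameter) replacing its state. The URL scheme is hypothetical, not Amazon's actual API:

```python
# Sketch of a RESTful alternative: the item is an addressable resource
# and PUT replaces its attribute set. Hypothetical URL scheme.
def build_put(domain, item, attributes):
    body = "&".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return ("PUT", f"/domains/{domain}/items/{item}", body)

method, path, body = build_put("MyDomain", "Item123",
                               {"Color": "Blue", "Size": "Med", "Price": "14.99"})
assert method == "PUT"
assert path == "/domains/MyDomain/items/Item123"
assert body == "Color=Blue&Price=14.99&Size=Med"
```

The point is that the side-effecting verb lives in the HTTP method, where caches and intermediaries can see it, not buried in a query string parameter.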

    Bonus points to Subbu Allamaraju for refactoring the SimpleDB API into a true RESTful Web service

    Speaking of ridonkulous APIs trends, it seems the SimpleDB Query method follows the lead of the Google Base GData API in stuffing a SQL-like query language into the query string parameters of HTTP GET requests. I guess it is RESTful, but Damn is it ugly.

    Now playing: J. Holiday - Suffocate


    Categories: Platforms | XML Web Services

    Yesterday I read about the Opening up Facebook Platform Architecture. My initial thoughts are that Facebook has done what Google claimed to have done but didn't with OpenSocial. Facebook seems to have provided detailed specs on how to build an interoperable widget platform, unlike Google, who unleashed a bunch of half-baked REST API specs with no details about the "widget" aspect of the platform unless you are building an Orkut application.

    As I've thought about this over the past few weeks, building a widget platform that is competitive with Facebook's is hard work. Remember all those stories about OpenSocial apps being hacked in 45 minutes or less? The problem was that sites like Plaxo Pulse and Ning simply didn't think through all the ramifications of building a widget platform and bumped up against the kind of "security 101" issues that widget platforms like Netvibes, iGoogle and gadgets solved years ago.  I started to wonder exactly how many of these social networking sites will be able to keep up with the capabilities and features of platforms like Facebook's and Orkut's when such development is outside their core competency.

    In fact, let's take a quote from the TechCrunch story First OpenSocial Application Hacked Within 45 Minutes:

    theharmonyguy says he’s successfully hacked Facebook applications too, including the Superpoke app, but that it is more difficult:

    Facebook apps are not quite this easy. The main issue I’ve found with Facebook apps is being able to access people’s app-related history; for instance, until recently, I could access the SuperPoke action feed for any user. (I could also SuperPoke any user; not sure if they’ve fixed that one. Finally, I can access all the SuperPoke actions - they haven’t fixed that one, but it’s more just for fun.) There are other apps where, last I checked, that was still an issue ( e.g. viewing anyone’s Graffiti posts).

    But the way Facebook setup their platform, it’s tons harder to actually imitate a user and change profile info like this. I’m sure this kind of issue could be easily solved by some verification code on RockYou’s part, but it’s not inherent in the platform - unlike Facebook. I could do a lot more like this on FB if Facebook hadn’t set things up the way they did.

    At that point I ask myself, how useful is it to have the specs for the platform if you aren't l337 enough to implement it yourself? [Update: It looks like Google is well aware of this problem and has launched an Apache project called Shindig which is meant to be an Open Source widget platform that implements the Open Social APIs. This obviously indicates that Google realizes the specs are worthless and instead shipping a reusable widget platform is the way to go. It’s interesting to note that with this move Google is attempting to be a software vendor, advertising partner and competitor to the Web’s social networking sites. That must lead to some confusing internal meetings. Smile ]

    For now, Facebook has definitely outplayed Google here. The most interesting part of the Facebook announcement to me is

    Now we also want to share the benefits of our work by enabling other social sites to use our platform architecture as a model. In fact, we’ll even license the Facebook Platform methods and tags to other platforms. Of course, Facebook Platform will continue to evolve, but by enabling other social sites to use what we’ve learned, everyone wins -- users get a better experience around the web, developers get access to new audiences, and social sites get more applications.

    it looks like Facebook plans to assert their intellectual property rights on anyone who clones their platform. This is one of the reasons I've found OpenSocial to be a worrisome abuse of the term "open". Like Facebook, Google shipped specs for a proprietary platform whose copyrights, patents, etc. belong to them. Any company that implements OpenSocial, or even GData which it is built upon, is using Google's intellectual property.

    What's to stop Google from asserting these intellectual property rights the way Facebook is doing today? What exactly is "open" about it that makes it any less proprietary than what Facebook just announced?


    I often tell people at work that turning an application into a platform is a balancing act: not only do you have to please the developers on your platform, you also have to please the users of your application.

    I recently joined the This has got to stop group on Facebook. If you don't use Facebook, the front page of the group is shown in the screenshot below.


    I've seen a bunch of tech folks blog about being overwhelmed by Facebook app spam like Tim Bray in his post Facebook Rules and Doc Searls in Too much face(book) time. However I assumed that the average college or high school student who used the site didn't feel that way. Looks like I was wrong.

    The folks at Facebook could fix this problem easily, but it would eliminate a lot of the "viralness" that has been hyped about the platform. Personally, I think applications on the site have gotten to the point where the costs have begun to outweigh the benefits. The only way to tip the balance back is to rein them in; otherwise it won't be long until the "clean and minimal" vs. "cluttered and messy" aesthetic stops working in their favor in comparisons with MySpace. When that happens there will be an opportunity for someone else to do the same thing to them.

    On an unrelated note, the sponsored group about Facebook Beacon has 74,000 members, which is less than half the size of the This has got to stop group. This is despite the fact that it has had national media attention focused on it. I guess it goes to show that just because a story gets a lot of hype in blogs and the press doesn't mean it is the most important problem facing the people it actually affects.

    Now playing: Jay-Z - Ignorant Shit


    I've seen a couple of recent articles talking about how Facebook has turned on its platform developers with its most recent announcements. Fortune magazine has an article today entitled Fear Among Facebook Developers which states

    Zuckerberg wouldn’t deny it. On stage at the Web 2.0 conference in October in San Francisco, he acknowledged that his company reserves the right to build anything it wants and compete with any of its independent developers, but that the company intends to do this fairly. “We need to make sure we have the flexibility to do what we need as the platform grows—to be flexible enough able to add in the next big component [like the News Feed],” he said.

    Yesterday Erick Schonfeld wrote an article on TechCrunch entitled iLike vs. Facebook: The Battle For The Music Artist that contains the following excerpt

    Instead, Facebook is treating music artists just like any other brands, which can also set up their own Facebook pages, collect fans, and market to them directly. Yet, when it comes to music artists, one of Facebook’s most popular application developers, iLike, is doing the exact same thing.
    So if you are a music artist, you now have to make a decision: Do you go with the iLike page as your main Facebook page (and take advantage of the nearly 10 million members who use the iLike app), or do you go with your own advertiser page on Facebook? Case in point: the new Facebook page for 50 Cent (shown left) had only three fans when it first went up just after midnight, compared to 1.2 million fans on his iLike page on Facebook.

    This is a tale as old as the hills. Software platforms evolve, and often this means incorporating features that were once considered "features to be provided by others" as core parts of the platform. There are thousands of examples of application developers adding value to a platform with functionality that eventually became part of the platform due to popular demand. Whether it is adding a TCP/IP stack to an operating system, tabbed browsing to a Web browser or persistent searches to a Web mail application, it's all the same story. It is hard to argue that it isn't better for users when such functionality is a native part of the platform or underlying application; however, it often leaves the platform's developers in the lurch.

    If the application developer cannot find a new way to add value to the platform, then their usefulness to users comes to an end. That said, it isn't a slam dunk that once the platform vendor sees the value added by an application on its platform, things will eventually go sour for the application. There are many examples of vendors trying to compete with an application on their platform only to concede defeat and then try to acquire the company; PhotoBucket's acquisition by MySpace and Oracle's attempt to acquire BEA are two recent examples. [Editor's note - I suspect that iLike vs. Facebook will end up going the same route as well]. In other cases, entry into the application space by the platform vendor helps to validate the market and draws more visibility to it from users.

    At the end of the day, articles like the ones I've mentioned above serve to prove that Facebook has actually built a viable and successful platform given that it is following the patterns of other successful platforms from the past several decades of the software industry.


    November 6, 2007
    @ 02:37 PM

    Disclaimer: Although I work on the What’s New feed in Windows Live Spaces this should not be considered an announcement or precursor to an announcement of upcoming features of any Windows Live service.

    Yesterday, I got into a debate with Yaron about whether Facebook has done enough to allow applications built on top of the Facebook platform to feel to end users as if they are part of a unified whole instead of merely being bolted on. I argued that they had, while Yaron felt otherwise.

    The next time I logged into Facebook, I noticed the following which I hadn’t acknowledged up until that point. See if you can figure out the problem from the two screen shots. Mouse over for a hint.

    See the Flixster Movies entry?

    Where's Flixster?

    The problem with building a platform on top of an existing application is that any problems users have with these platform applications end up affecting their perception of your application in a negative way. How many people care that 70% of Windows crashes were caused by buggy device drivers that were for the most part written by hardware manufacturers and not part of Windows itself?

    Recently I’ve seen a rush by some Web sites to jump on the platform bandwagon without clearly understanding how much work and how different a thought process it actually takes to get there. It will be an unfortunate shock for companies when they realize that it isn’t simply about chasing after feature sets. Building a platform is a holistic experience which includes getting the small details right, like giving users consistent opt-out choices for the data they get in their news feed, not just providing a bunch of APIs. Facebook is one of the few online services that gets it like 90% right, and even they mess up on some things as I’ve pointed out above.

    Think about that the next time someone shows you a bunch of APIs and tells you they’ve turned their Web site into a Web platform.

    Now playing: G-Unit - I Wanna Get to Know You


    Categories: Platforms | Social Software

    A couple of people mentioned that my previous post on Google OpenSocial was too long and needed a five sentence elevator pitch style summary. Below is the five sentence summary of my analysis of Google OpenSocial that cuts through the hype and is just the facts.

    OpenSocial is billed as a standardized widget platform for the Web; it isn't. OpenSocial is a standard set of REST APIs which social networks can utilize to expose user profiles and relationship data. Everything else required by a widget platform, from authentication and authorization to user interface integration and an application directory, is unspecified. OpenSocial is to a standardized widget platform as an internal combustion engine is to an airplane. A step in the right direction but still very far from the end goal.

    Hope that helps. I have to go rake some leaves and run some puppy related errands so I might be slow in responding to comments over the rest of the day.


    Disclaimer: This post does not reflect the opinions, thoughts, strategies or future intentions of my employer. These are solely my personal opinions. If you are seeking official position statements from Microsoft, please go here.

    One of the Google folks working on OpenSocial sent me a message via Facebook asking what I thought about the technical details of the recent announcements. Since my day job is working on social networking platforms for Web properties at Microsoft and I'm deeply interested in RESTful protocols, this is something I definitely have some thoughts about. Below is what started off as a private message but ended up being long enough to be its own blog post.

    First Impressions

    In reading the OpenSocial API documentation it seems clear that it is intended to be the functional equivalent of the Facebook platform. Instead of the Facebook users and friends APIs, we get the OpenSocial People and Friends Data API. Instead of the Facebook feed API, we get the OpenSocial Activities API. Instead of the Facebook Data Store API, we get the OpenSocial Persistence Data API. Instead of FQL as a friendly alternative to the various REST APIs, we get a JavaScript object model.

    In general, I personally prefer the Facebook platform to OpenSocial. This is due to three reasons

    • There is no alternative to the deep integration into the Web site's user experience that is facilitated with FBML.  
    • I prefer idiomatic XML to tunnelling data through Atom feeds in ways that [in my opinion] add unnecessary cruft.
    • The Facebook APIs encourage developers to build social and item relationship graphs within their application while OpenSocial seems only concerned with developers stuffing data into key/value pairs.

    The Javascript API

    At first I assumed the OpenSocial JavaScript API would provide similar functionality to FBML, given the large number of sound bites quoting Google employees stating that instead of "proprietary markup" you could use "standard JavaScript" to build OpenSocial applications. However it seems the JavaScript API is simply a wrapper on top of the various REST APIs. One could reasonably ask: if REST APIs are so simple, why do developers feel the need to hide them behind object models?

    Given the varying features and user interface choices in social networking sites, it is unsurprising that there is no rich mechanism specified for adding an application's entry points into the container site's user interface. However it is surprising that no user interface hooks are specified at all, given that there are common metaphors in social networking sites (e.g. a profile page, a friends list, etc.) which can be interacted with in a standard way. It is also shocking that Google attacked Facebook's use of "proprietary markup" only to not even ship an equivalent feature.

    The People and Friends Data API 

    The People and Friends Data API is used to retrieve information about a user or the user's friends as an Atom feed. Each user is represented as an atom:entry which is a PersonKind (which should not be confused with an Atom person construct). It is expected that the URL structure for accessing people and friends feeds will be of the form  http://<domain>/feeds/people/<userid> and http://<domain>/feeds/people/<userid>/friends respectively.
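The URL templates above can be sketched in a few lines of code; the container domain and user id below are placeholders, not part of the spec.

```python
# Build People/Friends feed URLs following the templates described above.
# "example.org" and the user id are placeholder values.
def people_feed_url(domain, user_id, friends=False):
    url = f"http://{domain}/feeds/people/{user_id}"
    return url + "/friends" if friends else url

print(people_feed_url("example.org", "elizabeth.bennet"))
# -> http://example.org/feeds/people/elizabeth.bennet
print(people_feed_url("example.org", "elizabeth.bennet", friends=True))
# -> http://example.org/feeds/people/elizabeth.bennet/friends
```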

    Compare the following response to a request for a user's information using OpenSocial with the equivalent Facebook API call response.

    <entry xmlns='http://www.w3.org/2005/Atom' xmlns:georss='http://www.georss.org/georss' xmlns:gd='http://schemas.google.com/g/2005'>
    <title>Elizabeth Bennet</title>
    <link rel='thumbnail' type='image/*' href=''/>
    <link rel='alternate' type='text/html' href=''/>
    <link rel='self' type='application/atom+xml' href=''/>
    <georss:where>
    <gml:Point xmlns:gml='http://www.opengis.net/gml'>
    <gml:pos>51.668674 -0.066235</gml:pos>
    </gml:Point>
    </georss:where>
    <gd:extendedProperty name='lang' value='en-US'/>
    </entry>

    Below is what the above information would look like if returned by Facebook's users.getInfo method


    <users_getInfo_response xmlns="http://api.facebook.com/1.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="" list="true">
    <city>Palo Alto</city>
    <country>United States</country>
    </users_getInfo_response>

    I've already mentioned that I prefer idiomatic XML to tunnelling data through Atom feeds. Comparing the readability of both examples should explain why.

    The Activities Data API 

    A number of social networking sites now provide a feature which enables users to see the recent activities of members of their social network in an activity stream. The Facebook news feed, Orkut's updates from your friends, and the Windows Live Spaces what's new page are all examples of this feature. The OpenSocial Activities Data API provides a mechanism for OpenSocial applications to access and update this activity stream as an Atom feed. All of a user's activities, or all activities from a specific application, can be accessed using URIs of the form  http://<domain>/activities/feeds/activities/user/<userID> and http://<domain>/activities/feeds/activities/user/<userID>/source/<sourceID> respectively.
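Again the URI templates reduce to a tiny sketch; the domain and identifiers below are placeholders, not values from any real container.

```python
# Activity feed URIs per the templates described above; the domain,
# user id and source id are placeholder values.
def activities_feed_url(domain, user_id, source_id=None):
    base = f"http://{domain}/activities/feeds/activities/user/{user_id}"
    # With a source id, only that application's activities are returned.
    return f"{base}/source/{source_id}" if source_id else base

print(activities_feed_url("example.org", "12345"))
# -> http://example.org/activities/feeds/activities/user/12345
print(activities_feed_url("example.org", "12345", source_id="flixster"))
# -> http://example.org/activities/feeds/activities/user/12345/source/flixster
```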

    Currently there is no reference documentation on this API. My assumption is that since Orkut is the only OpenSocial site that supports this feature, it is difficult to produce a spec that will work for other services without it being a verbatim description of Orkut's implementation.

    There are some notes on how Orkut attempts to prevent applications from spamming a user's activity stream. For one, applications are only allowed to update the activity stream for their source directly instead of the activity stream for the user. I assume that Google applies some filter to the union of all the source-specific activity streams before generating the user's activity feed to eliminate spam. Secondly, applications are monitored to see if they post too many messages to the activity stream or if they post promotional messages instead of the user's activities to the stream. All of this makes it difficult to see how one could specify the behavior of this API and feature set reliably for a diverse set of social networking sites.

    The Persistence Data API 

    The OpenSocial Persistence API allows applications to store and retrieve key<->value pairs that are either user-specific or are global to the application. An example of the former is a user's stock portfolio while an example of the latter is a listing of company name and stock ticker pairs. The feed of global key<->value pairs for an application can be accessed at a URL of the form http://<domain>/feeds/apps/<appID>/persistence/global for the entire feed and http://<domain>/feeds/apps/<appID>/persistence/global/<key> if seeking a particular key<->value pair. User-specific key<->value pairs are available at the URL of the form http://<domain>/feeds/apps/<appID>/persistence/<userID>/instance/<instanceID>.
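The persistence URL shapes can likewise be sketched directly from the templates; the domain, app id and key below are placeholders (the key<->value payloads themselves travel as Atom feeds).

```python
# Persistence feed URLs per the shapes described above; all of the
# domain, app, user and key values are placeholders.
def global_persistence_url(domain, app_id, key=None):
    url = f"http://{domain}/feeds/apps/{app_id}/persistence/global"
    # With a key, a single key<->value pair is addressed.
    return f"{url}/{key}" if key else url

def user_persistence_url(domain, app_id, user_id, instance_id):
    return (f"http://{domain}/feeds/apps/{app_id}"
            f"/persistence/{user_id}/instance/{instance_id}")

print(global_persistence_url("example.org", "stockapp"))
print(global_persistence_url("example.org", "stockapp", key="MSFT"))
print(user_persistence_url("example.org", "stockapp", "12345", "1"))
```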

    This is probably the least interesting aspect of the API. A simple persistence API like this is useful for applications that need to store user preferences or other simple textual data. However you aren't going to use this as the data storage platform for applications like iLike, Flixster or Scrabulous.

    However I will add that an Atom feed seems like a horrible representation for a list of key<->value pairs. It's so bad that the documentation doesn't provide an example of such a feed.

    Hosting OpenSocial Applications

    The documentation on hosting OpenSocial applications implies that any site that can host Google gadgets can also host OpenSocial applications. In practice, this means that any site on which you can place a <script> element can point to a gadget and thus render it. Whether the application will actually work depends on whether the hosting service has implemented the OpenSocial Service Provider Interface (SPI).

    Unfortunately, the documentation on implementing the OpenSocial SPI is missing in action. From the Google site

    To host OpenSocial apps, your website must support the SPI side of the OpenSocial APIs. Usually your SPI will connect to your own social network, so that an OpenSocial app added to your website automatically uses your site's data. However, it is possible to use data from another social network as well, should you prefer. Soon, we will provide a development kit with documentation and code to better support OpenSocial websites, along with a sample sandbox which implements the OpenSocial SPI using in-memory storage. The SPI implements:

    • Adding and removing friends
    • Adding and removing apps
    • Storing activities
    • Retrieving activity streams for self and friends
    • Storing and retrieving per-app and per-app-per-user data

    The OpenSocial website development kit will include full SPI documentation. It will provide open source reference implementations for both client and server components.

    I assume that the meat of the OpenSocial SPI documentation is just more detailed rules about how to implement the REST APIs described above. The interesting bits will be the reference implementations of the API, which will likely become the de facto standard implementations and prevent dozens of buggy, incompatible versions of the OpenSocial API from blooming.


    In general I believe that any effort to standardize the widget/gadget APIs exposed by various social networking sites and AJAX homepages (e.g. iGoogle, Netvibes, etc.) is a good thing. Niall Kennedy has an excellent series of articles on Web Widget formats and Web Widget update technologies that shows just how diverse and disparate the technologies are that developers have to learn and utilize when they want to build widgets for various sites. Given that Web widgets are now a known quantity, the time is ripe for some standardization.

    That said, there are a number of things that give me cause to pause with regards to OpenSocial

    1. A common practice in the software industry today is to prefix "Open" to the name of your technology which automatically gives it an aura of goodness while attempting to paint competing technologies as being evil and "closed". Examples include OpenDocument, OpenID, OpenXML, OAuth, etc. In this case, OpenSocial is being positioned as an "open" alternative to the Facebook platform.  However as bloggers like Shelley Powers, Danny Ayers and Russell Beattie have pointed out, there isn't much "open" about OpenSocial. Russell Beattie asks in his post Where the hell is the Container API?

      Would people be jumping on this bandwagon so readily if it was Microsoft unilaterally coming up with an API, holding secret meetings geared towards undercutting the market leader, and then making sure that only those anointed partners get a head start on launch day by making sure a key part of the API isn't released - even in alpha. (It obviously exists already, all the partners have that spec and even sample code, I'm sure. The rest of us don't get access yet, until the GOOG says otherwise).

      Let's say we ignore that the process for creating the technology was not "open" nor have key aspects of the technology even been unveiled [which makes this more of a FUD announcement to take the wind out of Facebook's sails than an actual technology announcement], is the technology itself open? Shelley Powers points out in her post Terms that

      Perhaps the world will read the terms of use of the API, and realize this is not an open API; this is a free API, owned and controlled by one company only: Google. Hopefully, the world will remember another time when Google offered a free API and then pulled it. Maybe the world will also take a deeper look and realize that the functionality is dependent on Google hosted technology, which has its own terms of service (including adding ads at the discretion of Google), and that building an OpenSocial application ties Google into your application, and Google into every social networking site that buys into the Dream.

      Google has announced a technology platform that is every bit as proprietary as Facebook's. The only difference is that they've cut deals with some companies to utilize their proprietary platform while Facebook's platform is only for use on the Facebook site. If Zuckerberg announces next week that the Facebook platform is freely implementable by any 3rd party Web site, where does that leave OpenSocial? After all, the Facebook platform is actually a proven, working system with complete documentation instead of the incomplete rush job that OpenSocial clearly is right now.

      There are all sorts of forums for proposing and discussing open Web technologies including the IETF, W3C, OASIS and even ECMA. Until all of the underlying technologies in OpenSocial have been handed over to one or more of these standards bodies, this is a case of the proprietary pot calling the proprietary kettle black.

    2. One of the things that comes along with OpenSocial is that Google has now proposed GData as the standard protocol for interacting with social graphs on the Web. This is something I've been worried about for a while, and I've written a couple of blog posts to address this topic, because it is not clear that the Atom Publishing Protocol upon which GData is based works well outside its original purpose of editing blog posts and the like. I'm not the only one that feels this way.

      Danny Ayers wrote in his post Open? Social?

      However the People Data API is cruel and unusual. It first stretches Atom until it creaks with "each entry in the People or Friends feed is a PersonKind"; then gives a further tug  (a person's name is represented using atom:title) then extends it even more (a person's email is gd:email) and finally mops up all the blood, sweat and dribble:

      Key value parameters - gd:extendedProperty - "As different social networks and other sources of People data have many different named fields, this provides a way for them to be passed on generally. Agreeing on common naming conventions is to be decided in future."

      Got to admire the attempt, but (to mix the metaphorical namespaces) silk purses don't make very good sow's ears either.

      In addition, AtomPub geek extraordinaire Tim Bray wrote in his blog post entitled Web3S

      If you decide you totally can’t model your world as collections of entries populated with hyperlinks to express relationships, well then I guess APP’s not for you. And at the level of engineering intuition, I have to say that a monster online address book does feel different at a deep level from most online “publications” (I thought that was why we had LDAP... but I repeat myself).

      Now that AtomPub/GData is a de facto standard protocol for accessing various kinds of non-microcontent data on the Web, I'm done debating its suitability for the task since the horse has already left the barn. However I will continue to ask: when will GData be RFC 5023 compliant?

    3. At the end of the day, the most disappointing thing about OpenSocial is that it doesn't really further the conversation about actual interoperability across social networking sites. If I use Orkut, I still need a MySpace account to interact with my friends on that site. Some people have claimed that OpenSocial will enable routing around such lock-in via applications like iLike and Flixster, which have their own social networks and thus could build cross-site social networking services since they will be hosted on multiple social networking sites. However the tough part of this problem is: how does a hosted application know that carnage4life@windowslivespaces is the same user as DareObasanjo@Facebook? It seems OpenSocial completely punts on satisfying this scenario even though it wouldn't be hard to add it as a requirement of the system. I guess the various applications can create their own user account systems and then do the cross-site social network bridging that way, which sucks because it will be a lot of duplicative work and will require users to create even more accounts with various services.

      Given that the big widget vendors like iLike, Slide and RockYou already have their users creating accounts on their sites that can be tied back to which social networking site the user utilizes their widgets on, this might be a moot point. Wouldn't it be mad cool if the Top Friends Facebook application could also show your top friends from MySpace or Orkut? I suspect the valuation of various widget companies will be revised upwards in the coming months.

    4. There is no mention of a user-centric application authorization model. Specifically, there is no discussion of how users grant and revoke permission for various OpenSocial applications to access their personal data. Regular readers of my blog are familiar with my mantra of putting the user in control, which is why I've been so enthusiastic about OAuth. Although there is some mention of Google's Authentication for Web Applications in the documentation, this seems specific to Google's implementation of OpenSocial hosting, and it is unclear that we should expect the same model to be utilized by MySpace, Bebo, TypePad or any of the other social networking sites that have promised to implement OpenSocial. On the other hand, Facebook has a well thought out application permissions model, and it would have been quite easy to simply reverse engineer that and add it to the OpenSocial spec rather than punt on this problem.
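To make the duplicative account-bridging workaround from point 3 concrete, here is a hypothetical sketch of the bookkeeping a widget vendor would have to maintain on its own; the class, account names and identifiers are all illustrative, not any real API.

```python
# Hypothetical account-linking table a widget vendor might keep to
# bridge identities across container sites. All names are illustrative.
class AccountBridge:
    def __init__(self):
        # Maps (container site, container user id) -> vendor account.
        self._links = {}

    def link(self, vendor_account, container, container_user_id):
        self._links[(container, container_user_id)] = vendor_account

    def same_person(self, a, b):
        # Two (container, user id) pairs denote the same person iff
        # both resolve to the same vendor account.
        return a in self._links and self._links[a] == self._links.get(b)

bridge = AccountBridge()
bridge.link("dare", "windowslivespaces", "carnage4life")
bridge.link("dare", "facebook", "DareObasanjo")
print(bridge.same_person(("windowslivespaces", "carnage4life"),
                         ("facebook", "DareObasanjo")))  # True
```

Every vendor building its own copy of this table (and asking users to register yet again) is exactly the duplicative work the post complains about; a shared identity-mapping requirement in the spec would have eliminated it.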

    Despite these misgivings, I think this is a step in the right direction. Web widget and social graph APIs need to be standardized across the Web.

    PS: I've subscribed to the Google OpenSocial blog. So far there have only been posts by clueless marketing types but I'm sure interesting technical information that addresses some of the points above will be forthcoming.


    In a post entitled Checkmate? MySpace, Bebo and SixApart To Join Google OpenSocial (confirmed) Mike Arrington writes

    Google may have just come out of nowhere and checkmated Facebook in the social networking power struggle.

    Update (12:30 PST): On a press call with Google now. This was embargoed for 5:30 pm PST but they’ve moved the time up to 12:30 PST (now). Press release will go out later this evening. My notes:

    On the call, Google CEO Eric Schmidt said “we’ve been working with MySpace for more than a year in secret on this” (likely corresponding to their advertising deal announced a year ago).

    MySpace says their new platform efforts will be entirely focused on OpenSocial.

    The press release names Friendster, hi5, Hyves, imeem, LinkedIn, Ning, Oracle, orkut, Plaxo, Six Apart, Tianji, Viadeo, and XING as current OpenSocial partners.

    We’re seeing a Flixster application on MySpace now through the OpenSocial APIs. Flixster says it took them less than a day to create this. I’ll add screen shots below.

    Here’s the big question - Will Facebook now be forced to join OpenSocial? Google says they are talking to “everyone.” This is a major strategic decision for Facebook, and they may have little choice but to join this coalition.

    Bebo has also joined OpenSocial.

    I'm confused as to how Mike Arrington considers this a checkmate by Google. At the end of the day, this announcement simply means that folks like Slide and RockYou don't have to maintain multiple code bases for their widgets on various popular social networking sites. In addition, it brings the widget/gadget platforms on these sites to a level similar to the Facebook platform. Of course, it won’t be on the same level unless it meets all the criteria from my post on how developers should evaluate the MySpace platform, which is unlikely since, besides MySpace, none of those sites has the userbase or engagement of Facebook's users, nor does any of them have the same kind of viral properties for distributing applications that the Facebook platform has built in.

    At the end of the day, will we see widget developers like the folks at iLike, Slide or Scrabulous leave the Facebook platform because of these announcements? Unlikely.

    Will we see a mass migration from Facebook to MySpace or Orkut because you can now add Flixster or Scrabulous to your profile on these sites? Probably not.

    So how is this a checkmate again?

    OpenSocial simply keeps Facebook’s competitors in the game. It is more like a successful kingside castle than a checkmate.

    Now playing: Backstreet Boys - Incomplete


    There’s nothing like a successful company with a near monopoly to force the software industry to come up with standards. Or in this case, as in many others, force its competitors to band together and call what they are doing the standard because more than one vendor supports it.

    From TechCrunch’s article Details Revealed: Google OpenSocial (To Launch Thursday) we learn

    Google wants to create an easy way for developers to create an application that works on all social networks. And if they pull it off, they’ll be in the center, controlling the network.

    What They’re Launching

    OpenSocial is a set of three common APIs, defined by Google with input from partners, that allow developers to access core functions and information at social networks:

    • Profile Information (user data)
    • Friends Information (social graph)
    • Activities (things that happen, News Feed type stuff)

    Hosts agree to accept the API calls and return appropriate data. Google won’t try to provide universal API coverage for special use cases, instead focusing on the most common uses. Specialized functions/data can be accessed from the hosts directly via their own APIs.

    Unlike Facebook, OpenSocial does not have its own markup language (Facebook requires use of FBML for security reasons, but it also makes code unusable outside of Facebook). Instead, developers use normal javascript and html (and can embed Flash elements). The benefit of the Google approach is that developers can use much of their existing front end code and simply tailor it slightly for OpenSocial, so creating applications is even easier than on Facebook.

    Similar details are available from folks like Om Malik and Marc Andreessen.
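The three common APIs in the excerpt above are easy to picture as a contract the host sites agree to serve. Here is a minimal sketch, in Python for brevity, of a host exposing the three categories of calls; all class and method names are illustrative and nothing here is taken from the actual OpenSocial specification.

```python
# Hypothetical sketch of the three OpenSocial API categories as a host
# site might serve them. Names and shapes are illustrative only.

class SocialHost:
    """A social networking site that agrees to accept the common API calls."""

    def __init__(self):
        self.profiles = {}    # user id -> profile data
        self.friends = {}     # user id -> list of friend ids (social graph)
        self.activities = {}  # user id -> list of activity items (news feed)

    def get_profile(self, user_id):
        """Profile Information: core user data."""
        return self.profiles.get(user_id, {})

    def get_friends(self, user_id):
        """Friends Information: the user's social graph."""
        return self.friends.get(user_id, [])

    def get_activities(self, user_id):
        """Activities: News Feed style events."""
        return self.activities.get(user_id, [])

# A widget written against only these three calls runs on any host that
# implements them; anything host-specific goes through the host's own APIs.
host = SocialHost()
host.profiles["carol"] = {"name": "Carol"}
host.friends["carol"] = ["dave"]
print(host.get_profile("carol")["name"])  # Carol
```

The point of the design is visible in the sketch: the widget never needs to know which site it is running on, only that the host answers these three calls.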

    This is a brilliant move. I’ve blogged on multiple occasions that the disparate widget platforms in social networking sites are a burden for widget developers and will lead to a “winner takes all” situation because no one wants to support umpteen different platforms. If enough momentum builds around OpenSocial, then three things will happen

    • Widget developers will start to favor coding to OpenSocial because it supports multiple sites as well as targeting the Facebook platform  
    • Eventually Facebook platform developers will start asking Zuckerberg and company to support OpenSocial so they only need to worry about one code base (kinda, it won’t be that easy)  
    • Other companies with proprietary widget platforms or plans to create one will bow down to the tide and adopt OpenSocial

    Of course, this requires a popular social networking site with a wide audience (e.g. MySpace) to adopt the platform before we see this kind of traction.

    However this is the only thing Google could have done that makes any sense. Building a clone of the Facebook platform like some social networking sites planned would have been dumb because that would be the tail wagging the dog. Similarly building a competing proprietary platform would also have been dumb due to the winner takes all problem I mentioned earlier.

    This is the only move that has a chance of actually giving their anti-Facebook platform a chance of being successful.

    I wonder how my coworkers in Windows Live are going to take this news?

    Now playing: 50 Cent - All Of Me (Feat. Mary J Blige) (Prod by Jake One)


    Dave Winer has an interesting post where he asks Facebook app or Firefox plug-in? 

    Which is a more interesting platform -- Facebook or Firefox?

    This was a topic of conversation at the Web 2.0 Summit last week in SF, not on stage, but in a LobbyConversation between myself and venture capitalist Bijan Sabet.

    Bijan Sabet: "I like that Firefox developers don't have to live in a world where they lie awake at night worried that the platform company is going to make life hard for them."

    You can guess my answer to the question so I won’t bother answering. I will point out that I doubt that the developers of Flock would agree with the statement that Firefox doesn’t do anything to screw developers on their platform. 

    I also question anyone who claims that there are more opportunities to make money off of Firefox plugins than there are selling ads in your Facebook application.

    Now playing: 2Pac - Fight Music (feat. Xzibit)


    Categories: Platforms

    Today I saw a pretty useless product announced on TechCrunch called FriendCSV which extracts your Facebook friends into a CSV file sans their PII (i.e. no email address, no IM screen names, no telephone numbers, no street address). Amusingly enough, the authors of the application brag about using the Facebook platform as it was designed to be used as if they have figured out some major exploit. Smile

    However it did remind me of a pretty cool desktop application that I found via Bubba called OutSync. It synchronizes the photos of your contacts in Outlook with those of any matching people found on your friends list in Facebook. If you have a Windows Mobile smartphone that is synchronized with your Exchange server like I do, it means that you get all your friends photos on your phone which is pretty sweet especially when you get a call from that person. Screenshots below

    OutSync access request screen

    There’s that user-centric authentication model where the user grants the application access without giving the application their username/password. Again, I’m glad OAuth is standardizing this for the Web.
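That grant model is worth making concrete. Below is a minimal sketch of the user-centric pattern: the user approves the application on the service's own page and the service hands the application an opaque token, so the application never sees the username/password. All names are illustrative; this is the general OAuth-style shape, not any real API.

```python
# Sketch of delegated, token-based access: the app holds a token the
# user can revoke, never the user's credentials. Names are hypothetical.
import secrets

class Service:
    def __init__(self):
        self.tokens = {}  # opaque token -> user id

    def grant_access(self, user_id):
        """Called after the user clicks 'Allow' on the service's own page."""
        token = secrets.token_hex(8)
        self.tokens[token] = user_id
        return token

    def get_contact_photos(self, token):
        """The application presents the token instead of a password."""
        user_id = self.tokens.get(token)
        if user_id is None:
            raise PermissionError("invalid or revoked token")
        return ["photo-of-%s-friend" % user_id]

service = Service()
token = service.grant_access("dare")      # user consents on the service's site
print(service.get_contact_photos(token))  # app uses the token from then on
```

Revoking access is just deleting the token server-side, which is exactly what the password-sharing model can't offer.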

    When I think of the Facebook platform embracing the “Web as a Platform” I want to see more applications like this enabled. Instead of only utilizing my Facebook social graph in the Facebook Marketplace or the Facebook application, why can’t it be utilized on eBay or Craigslist? I want all the applications I use to be able to utilize my social graph to make themselves better without having to be widgets on a particular social networking site before they can take advantage of this knowledge. 

    Back in 2004, I wrote

    Basically, we've gotten rid of one of major complaints about online services; maintaining to many separate lists of people you know. One of the benefits of this is that you can utilize this master contact list across a number of scenarios outside of just one local application like an email application or an IM client. For example, in MSN Spaces we allow users to use their MSN Messenger allow list (people you've granted permission to contact you via IM) as a access control list for who can view your Space (blog, photo album, etc). There are a lot more interesting things you can do once the applications you use can tell "here are the people I know, these are the ones I trust, etc". We may not get there as an industry anytime soon but MSN users will be reaping the benefits of this integration in more and more ways as time progresses.

    It’s unfortunate that almost 3 years later we haven’t made much progress on this across the industry although it looks like the Facebook platform has gotten people finally thinking about unified social graphs. Better late than never, I guess. 

    The one mistake I’ve been making in my thinking has been narrowly defining the applications that should have access to your Windows Live social graph as Microsoft or Windows Live applications. My thinking has since evolved as has that of lots of folks in the B0rg cube. It will be an interesting couple of months with regards to social graph APIs especially with Google’s November 5th announcement coming up.

    PS: Anyone else noticed that the installer for OutSync is hosted on SkyDrive?

    PPS: I got a phone call from Rob Dolin earlier this evening and the pic from his Facebook profile showed up on my phone. Sweet!

    Now playing: Soulja Boy - Crank That (Soulja Boy)


    Categories: Platforms | Social Software

    I recently read two blog posts on Microsoft's VisitMix site which show how conflicted large Web players can be about embracing the fact that The Web is the Platform. The first is a blog post by Scott Barnes entitled Rich Interactive Applications which contains the following excerpt

    When you think of RIA what is it your mind casts an image to first?
    RIA isn't about attention/eyeballs, it's supposed to be focused on empowering end users of a defined type, to carry out mundane task through an enriching user experience. User Experience is the key, in that a true RIA solution has the power to abstract complexity through aggregation or 360 degree view(s) of content without altering context.
    That is Rich Interactive Application (RIA) shifting the paradigm. It had nothing to do with the Internet, suffice to say it's housed within an agent which is connected to the Internet - or - Intranet.

    This blog post was the first time I've seen the term RIA defined as Rich Interactive Application instead of Rich Internet Application. Digging a little, it becomes obvious that Scott Barnes's post is just one of many in an ongoing war of words between developer evangelists at Microsoft and developer evangelists at Adobe.

    Redefining a term in such a way that it becomes all-inclusive is a recipe for devaluing the term [which might be Scott's purpose]. This is the lesson from the all-inclusive definitions that started to swirl around industry terms like Service Oriented Architecture and Web 2.0. More importantly, the problem with using the term "Rich Interactive Application" to define what developers commonly describe as RIAs is that it completely misses the point. Developers and end users are not excited about the ability to build and use rich interactive applications, they are excited about being able to build and use rich interactive applications on the Web. They've had the former for as long as desktop computers have existed, the latter is what is currently jazzing people up (e.g. all the hype around AJAX, Flickr, YouTube, the Facebook platform, etc).

    Don't fight the Web. People don't get excited about "interactive" desktop applications. When was the last time your best friend, mom, daughter, sister, co-worker, etc told you about some cool desktop app they just found or use regularly? How does that compare to the number of times they've told you about cool Web sites they found or use regularly?

    Think about that for a second, Mr. Rich Interactive Application. Embrace the Web or you will be left behind.

    Onto Joshua Allen's post entitled Web is THE Platform? SRSLY? which states

    Erick Schonfeld at TechCrunch reports on Google's presentation today at Web 2.0 Conference.  Jeff Huber of Google, trying to slam Facebook and MySpace, said "A lot that you have heard here is about platforms and who is going to win. That is Paleolithic thinking. The Web has already won. The web is the Platform. So let’s go build the programmable Web."

    I was rather surprised, because I heard that same line just two days ago, from Dare Obasanjo.  Jeff apparently reads Dare's blog, and was in a hurry to prepare his speech.
    When I hear someone talk about the web as a platform, I have a pretty clear picture:

    • Utilizes open standards, preferably mature specifications and preferably from W3C
    • Utilizes web client runtime that has massive deployment; depends only on functionality that can be found in the majority of browsers
    • Runs the same no matter who is hosting the code

    This is non-negotiable!  When any normal person writes "for the web", this is what she means! 

    Joshua goes on to accuse Google of hypocrisy because its widget platform is every bit as proprietary as those of MySpace and Facebook, and Google doesn't use any of the ad-hoc standards for exposing social graph data in a shareable way (FOAF, XFN, etc).

    Although all the things Joshua lists are important, they aren't what I was really harping on when I wrote the post referenced by Joshua. The problem with the Facebook platform is that although you can use it to build Web applications, they are not on the Web. What do I mean by being on the Web? Here's a sampling of writings from across the Web that does a better job of explaining this than I ever could

    Tim Berners-Lee

    When I invented the Web, I didn't have to ask anyone's permission. Now, hundreds of millions of people are using it freely.

    Jason Kottke

    Faced with competition from this open web, AOL lost...running a closed service with custom content and interfaces was no match for the wild frontier of the web. Maybe if they'd done some things differently, they would have fared better, but they still would have lost. In competitive markets, open and messy trumps closed and controlled in the long run.

    Anil Dash

    It's not true to say that Facebook is the new AOL, and it's oversimplification to say that Facebook's API is the new Blackbird, or the new Rainman. But Facebook is part of the web. Think of the web, of the Internet itself, as water. Proprietary platforms based on the web are ice cubes. They can, for a time, suspend themselves above the web at large. But over time, they only ever melt into the water.


    Categories: Platforms

    I’ve been reading recently that a number of social networking sites are rushing to launch [or re-launch] a widgets platform given the success of the Facebook platform. There have been announcements about a MySpace platform which claim that

  • it will essentially be a set of APIs and a new markup language that will allow third party developers to create applications that run within MySpace. Developers will be able to include Flash applets, iFrame elements and Javascript snippets in their applications, and access most of the core MySpace resources (profile information, friend list, activity history, etc.). Unlike existing widgets on MySpace, developers will be able to access deep profile and other information about users and bake it into the applications.
  • Advertising can be included on the application pages (called control pages) and developers will keep 100% of the revenue. Ads may not be placed within widgets that appear on MySpace pages, however.
    There have been similar announcements from LinkedIn and Google. The problem is that every one of these widget platforms being proposed by the various social networking sites is incompatible with the others. This means that Web developers have to build a separate application for each of these sites using proprietary technologies (e.g. FBML) and proprietary APIs (e.g. FQL). Since very few Web developers or Web companies will be able to support significant applications on every one of these platforms, the questions then become “Which platform should you bet on?” and “How do you make the decision to bet on a platform?”

    Right now the gold standard in widget platforms for social networking sites is the Facebook platform. There are several reasons for this and competitors planning to build similar platforms need to meet the following criteria.

    1. Monetization: Facebook encourages developers to monetize their widgets by placing ads in their widgets. Although Facebook has not actively helped developers by providing an ad platform, there is now a healthy marketplace of Facebook ad networks that developers can choose from. It has even been rumored that Google will be getting in the Facebook ad provider game. +1 to Facebook.

    2. Distribution and Reach: A big problem you’ll face when you’ve built a great product is that it is a lot harder than you expect for people to actually find out about and try your product. This means any avenue that increases the potential reach and distribution of your product is bringing money in your pocket. Not only does Facebook have several million active and engaged users, the Facebook platform also provides several mechanisms that encourage the viral spread of applications which developers consistently rave about. No other social networking site’s widget platform even comes close. +1 to Facebook.

    3. Access to User Data: Social networking sites are all about connecting people to the people they care about and their interests. This means that applications built on these platforms should be able to determine a user’s friends and interests to be able to give an optimal experience. The Facebook platform is unprecedented in the arena of widget platforms when it comes to the amount of user information it exposes to applications with methods like friends.get, users.getInfo, photos.get and even marketplace.getListings. +1 to Facebook.

    4. Ability to Build an Integrated and Immersive Experience: One place where the Facebook platform really changed the game is that widgets weren’t just relegated to a tiny box on the user’s profile like they are on other social networking sites but instead developers could build full blown applications that integrated fully into the Facebook experience. It’s a lot easier to keep users engaged and build non-intrusive advertising into your application if your entire application doesn’t have to fit in some 4” X 4” box on the user’s profile. +1 to Facebook.

    5. Applications Shielded from the “Winner’s Curse” of Web Development: The more successful your application becomes on the Web, the more money you have to spend on server related resources. Everyone knows the story of iLike scrambling to borrow money and servers because their Facebook application was more successful than they anticipated. Since a lot of widget developers are not richly funded startups or big companies, they may not be able to bear the costs of actually building a successful Web application without help from the platform vendor. A number of platform vendors provide hosting for static files and data storage APIs although none go as far as full blown application hosting...yet.  +0.5 to Facebook.
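For a sense of what the user data access in criterion 3 looked like in practice, here is a hedged sketch of how a request to methods like friends.get or users.getInfo was constructed against Facebook's REST API of the era. The endpoint and signing scheme follow the platform documentation of the time (sorted parameters hashed with the application secret), but treat the details as illustrative rather than authoritative.

```python
# Hedged sketch of building a signed Facebook REST API request, circa 2007.
# Endpoint and signing scheme per the docs of the era; illustrative only.
import hashlib
import urllib.parse

API_URL = "http://api.facebook.com/restserver.php"

def build_request(method, params, api_key, secret):
    params = dict(params, method=method, api_key=api_key, v="1.0")
    # Signature: md5 of the sorted "key=value" pairs concatenated with the secret.
    raw = "".join("%s=%s" % (k, params[k]) for k in sorted(params)) + secret
    params["sig"] = hashlib.md5(raw.encode()).hexdigest()
    return API_URL + "?" + urllib.parse.urlencode(params)

# e.g. fetch the logged-in user's friend list (session_key is a placeholder)
url = build_request("facebook.friends.get", {"session_key": "..."},
                    api_key="MY_API_KEY", secret="MY_SECRET")
print(url.split("?")[0])  # http://api.facebook.com/restserver.php
```

The notable part is how much the API hands over: friend lists, profile fields, photos and even marketplace listings, all through the same signed-request mechanism.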

    From my perspective, if a social networking site’s widget platform doesn’t meet all criteria, then it can’t be considered a real competitor to Facebook’s platform. And as a developer if I had to choose between that platform and Facebook’s, there would be no choice to make.

    Now if you can afford multiple development efforts building widgets/applications for several disparate social networking site platforms, the list above is a good starting point for prioritizing which social networking sites to build widgets/applications for first.

    Now playing: T.I. - U Don't Know Me


    Categories: Platforms

    Mary Jo Foley has a delightful post entitled Are all ‘open’ Web platforms created equal? where she wonders why there is more hype around the Facebook platform, Google’s much-hyped attempt to counter it on November 5th and other efforts that Anil Dash has accurately described as the new Blackbird, as opposed to open API efforts from Microsoft. She posits two theories which are excerpted below

    Who isn’t mentioned in any of these conversations? Microsoft. Is it because Microsoft hasn’t opened up its various Windows Live APIs to other developers? Nope. Microsoft announced in late April its plans for opening up and providing licensing terms for several of its key Windows Live APIs, including Windows Live Contacts, Windows Live Spaces Photo Control and Windows Live Data Protocols.

    So why is Microsoft seemingly irrelevant to the conversation, when it comes to opening up its Web platform? There are a few different theories.

    “I think the excitement about the Facebook platform stems from the fact that it addresses the problem of building publicity and distribution for a new application. Any developer can create an application for Facebook, and the social network will help propagate that application, exposing it to new users,” said Matt Rosoff, an analyst with Directions on Microsoft.

    Microsoft, for its part, believes it is offering Web platform APIs the way that developers want, making them available under different business terms and permitting third parties to customize them inside their own sites, according to George Moore, General Manager of Windows Live. But Moore also acknowledges Microsoft has a different outlook in terms of which data it exposes via its APIs.

    “Facebook gives you access to your social-graph (social-networking) data. We don’t do that. We have a gallery that allows users to extend Live Spaces,” Moore said.

    Moore declined to comment on when or if Microsoft planned to allow developers to tap directly into user’s social-graph data like Facebook has done.

    I see GeorgeM all the time, so I doubt he’ll mind if I clarify his statement above since it gives the wrong impression of our efforts given the context in which it was placed. If we go back to the definition of a social graph, it’s clear that what is important is that it is a graph of user relationships, not one that is tied to a particular site or service. From that perspective, the Windows Live Contacts API is a social graph API: it provides a RESTful interface to the contents of a user’s Windows Live address book, complete with the list of tags/relationship types the user has applied to these contacts (e.g. “Family”, “Friends”, “Coworkers”, etc.) as well as which of these contacts are the user’s buddies in Windows Live Messenger.

    On the other hand, this API does not give you access to the user’s Spaces friends list.  My assumption is that Mary Jo’s questions were specific to social networking sites which is why George gave that misleading answer. In addition, Yaron is fond of pointing out to me that the API is in alpha so there is still a lot that can change from now until we stamp it as v1. Until then, I’ll also decline to comment on any future plans.

    As for the claim made by Matt Rosoff, I tend to agree with his assertion that the viral propagation of applications via the Facebook’s social graph is attractive to developers. However this attractiveness comes with the price of both the users and developers being locked in Facebook’s trunk.

    I personally believe that the Web is the platform and this philosophy shines through in the API efforts at Microsoft. It may be that this is not as attractive to developers today as it should be but eventually the Web will win. Everyone who has fought the Web has lost. Facebook will not be an exception.

    Now playing: Tony Yayo - I Know You Dont Love Me (feat. G-Unit)


    Categories: Platforms | Windows Live

    I recently got an email from a developer about my post Thoughts on Amazon's Internal Storage System (Dynamo) which claimed that I seem to be romanticizing distributed systems that favor availability over consistency. He pointed out that although this sounds nice on paper, it places a significant burden on application developers. He is 100% right. This has been my experience in Windows Live and I’ve heard enough second hand to assume it is the experience at Amazon as well when it comes to Dynamo.

    I thought an example of how this trade-off affects developers would be a useful exercise and may be of interest to my readers. The following example is hypothetical and should not be construed as describing the internal architectures of any production systems I am aware of.

    Scenario: Torsten Rendelmann, a Facebook user in Germany, accepts a friend request from Dare Obasanjo who is a Facebook user in the United States.

    The Distributed System: To improve the response times for users in Europe, imagine Facebook has a data center in London while American users are serviced from a data center in Palo Alto. To achieve this, the user database is broken up in a process commonly described as sharding. The question of if and how data is replicated across both data centers isn’t relevant to this example.
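The sharding step can be sketched in a couple of lines: each user's record is assigned to exactly one data center, chosen here by a stable hash of the user name. This is purely illustrative and not how any particular site routes its data.

```python
# Minimal sketch of shard routing: map each user to the data center that
# owns their row. Hash choice and names are illustrative assumptions.
import zlib

DATA_CENTERS = ["palo_alto", "london"]

def shard_for(user_id):
    """Deterministically map a user to the data center holding their record."""
    return DATA_CENTERS[zlib.crc32(user_id.encode()) % len(DATA_CENTERS)]

# Two users' rows may live in different data centers, which is exactly why
# confirming a friend request becomes a cross-data-center write.
print(shard_for("torsten"), shard_for("dare"))
```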

    The application developer who owns the confirm_friend_request() method will ideally want to write code that takes the following form

    public void confirm_friend_request(user1, user2){

      begin_transaction(); //distributed transaction spanning both data centers
      update_friend_list(user1, user2, status.confirmed); //palo alto
      update_friend_list(user2, user1, status.confirmed); //london
      end_transaction();
    }
    Yay, distributed transactions. You have to love a feature that every vendor advises you not to use if you care about performance. So obviously this doesn’t work for a large scale distributed system where performance and availability are important.  

    Things get particularly ugly when you realize that either data center or the specific server a user’s data is stored on could be unreachable for a variety of reasons (e.g. DoS attack, high seasonal load, drunk sys admin tripped over a power cord, hard drive failure due to cheap components, etc).

    There are a number of options one can consider when availability and high performance are considered to be more important than data consistency in the above example. Below are three potential implementations of the code above, each with its own set of trade-offs.

    OPTION I: Application developer performs manual rollback on error

    public void confirm_friend_request_A(user1, user2){
     try{
      update_friend_list(user1, user2, status.confirmed); //palo alto 
     }catch(exception e){ 
      return; //neither friend list was updated
     }
     try{
      update_friend_list(user2, user1, status.confirmed); //london
     }catch(exception e) {
      revert_friend_list(user1, user2); //undo the palo alto update
     }
    }

    The problem here is that we don’t handle the case where revert_friend_list() fails. This means that Dare (user1) may end up having Torsten (user2) on his friend list but Torsten won’t have Dare on his friend list. The database has lied.

    OPTION II: Failed events are placed in a message queue to be retried until they succeed.   

    public void confirm_friend_request_B(user1, user2){
     try{
      update_friend_list(user1, user2, status.confirmed); //palo alto 
     }catch(exception e){ 
      add_to_retry_queue(operation.updatefriendlist, user1, user2, current_time()); 
     }
     try{
      update_friend_list(user2, user1, status.confirmed); //london
     }catch(exception e) {
      add_to_retry_queue(operation.updatefriendlist, user2, user1, current_time());  
     }
    }

    Depending on how long the error exists and how long it takes an item to sit in the message queue, there will be times when the Dare (user1) may end up having Torsten (user2) on his friend list but Torsten won’t have Dare on his friend list. The database has lied, again.
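The other half of Option II is the background worker that drains the retry queue. A minimal sketch follows; all names are illustrative, and the window between enqueue and successful retry is exactly the inconsistency window described above.

```python
# Sketch of the retry-queue consumer implied by Option II: a background
# worker re-attempts failed updates until they stick (or it gives up).
# Names and the give-up policy are illustrative assumptions.
from collections import deque

retry_queue = deque()

def process_retry_queue(apply_update, max_attempts=3):
    """Drain the queue once; items that still fail go back on the end."""
    for _ in range(len(retry_queue)):
        item = retry_queue.popleft()
        try:
            apply_update(item)
        except Exception:
            item["attempts"] = item.get("attempts", 0) + 1
            if item["attempts"] < max_attempts:
                retry_queue.append(item)  # try again on the next sweep
            # after max_attempts, give up (or page an operator)

retry_queue.append({"op": "updatefriendlist", "users": ("user2", "user1")})
process_retry_queue(lambda item: None)  # the retry succeeds; queue drains
print(len(retry_queue))  # 0
```

Note the policy decision buried in `max_attempts`: at some point the system stops retrying, and then the two friend lists disagree permanently unless something else reconciles them.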

    OPTION III: System always accepts updates but application developers may have to resolve data conflicts later. (The Dynamo approach)

    /* update_friend_list always succeeds but may enqueue an item in message queue to try again later in the event of failure. This failure is not propagated to callers of the method.  */

    public void confirm_friend_request_C(user1, user2){
       update_friend_list(user1, user2, status.confirmed); //palo alto
       update_friend_list(user2, user1, status.confirmed); //london 
    }


    /* The get_friends() method has to reconcile the friend list it reads from storage because there may be data inconsistency due to a conflict: a change applied from the message queue may contradict a subsequent change by the user. In this case, status is a bitflag where all conflicts are merged and it is up to the app developer to figure out what to do. */ 

      public list get_friends(user1){ 
          list actual_friends = new list();
          list friends = read_friend_list_from_storage(user1); //raw rows, conflicts included

          foreach (friend in friends){     

            if(friend.status == friendstatus.confirmed){ //no conflict
              actual_friends.add(friend);

            }else if((friend.status & friendstatus.confirmed) 
                       and !(friend.status & friendstatus.deleted)){

              // assume friend is confirmed as long as it wasn’t also deleted
              friend.status = friendstatus.confirmed;              
              actual_friends.add(friend);
              update_friends_list(user1, friend, status.confirmed);

            }else{ //assume deleted if there is a conflict with a delete
              update_friends_list(user1, friend, status.deleted);
            }
          }

       return actual_friends;
      }
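The bitflag merge at the heart of that reconciliation can be seen in isolation: when the message queue applies conflicting changes, the statuses are OR'd together and the reader has to untangle them. A small Python demo of that rule (flag values are illustrative):

```python
# Demo of bitflag conflict merging: conflicting status updates are OR'd
# together in storage and resolved at read time. Flag values assumed.

CONFIRMED = 0x1
DELETED = 0x2

def resolve(status):
    """Apply the read-time reconciliation rule from the pseudocode above."""
    if status == CONFIRMED:
        return "confirmed"            # no conflict
    if (status & CONFIRMED) and not (status & DELETED):
        return "confirmed"            # confirmed, and never deleted
    return "deleted"                  # any conflict involving a delete wins

print(resolve(CONFIRMED))             # confirmed
print(resolve(CONFIRMED | DELETED))   # deleted -- the merged conflict case
```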

    These are just a few of the many approaches that can be implemented in such a distributed system to get around the performance and availability implications of using distributed transactions. The main problem with them is that in every single case, the application developer has an extra burden placed on his shoulders due to the inherent fragility of distributed systems. For a lot of developers, the shock of this realization is akin to the shock of programming in C++ after using C# or Ruby for a couple of years. Manual memory management? Actually having to perform bounds checking on arrays? Being unable to use decades old technology like database transactions?

    The challenge in building a distributed storage system like BigTable or Dynamo is in balancing the need for high availability and performance while not building a system that encourages all sorts of insidious bugs to exist in the system by design. Some might argue that Dynamo goes too far in the burden it places on developers while others would argue that it doesn’t go far enough.

    In what camp do you fall?

    Now playing: R. Kelly - Rock Star (feat. Ludacris & Kid Rock)


    Categories: Platforms | Programming | Web Development

    Yaron Goland has an entertaining post entitled Interoperability Wars - Episode 6 - Part 1 - Revenge of Babble about some of the philosophical discussions we’ve been having at work about the Atom Publishing Protocol (RFC 5023). The entire post is hilarious if you are an XML protocol geek and I recommend reading it. The following excerpt is a good starting point for another discussion about APP’s suitability as a general purpose editing protocol for the Web. Yaron writes

    Emperor Babble: Excellent, Weasdel's death will serve us well in lulling the forces of interoperability into thinking they are making progress. Welcome Restafarian, it is time you understood your true place in my plans.

    Luke Restafarian: Do what you want to me emperor, but the noble cause of interoperability will prevail!

    The Emperor turns to the center of the chamber where a form, almost blinding in its clarity, takes shape:

    GET /someuser/profile HTTP/1.1
    content-type: application/xml

    <profile xmlns="">







    Darth Sudsy is momentarily taken aback from the appearance of the pure interoperable data while Luke Restafarian seems strengthened by it. The Emperor turns back to Luke and then speaks.

    Emperor Babble: I see it strengthens you Restafarian, no matter. But look again at your blessed interoperability.

    When the Emperor turns back to the form the form begins to morph, growing darker and more sinister:

    GET /someuser/profileFeed HTTP/1.1
    content-type: application/ATOM+xml

    <feed xmlns="">
    <category scheme="" term="Profile"/>
    <content type="Application/XML" xmlns:e="">

    <content type="Application/XML" xmlns:e="">


    <content type="Application/XML" xmlns:e="">



    Luke, having recognized the syntax, clearly expects to get another wave of strength but instead he feels sickly. The emperor, looking slightly stronger, turns from the form to look at Luke.

    Luke Restafarian: What have you done? That structure is clearly taken from Princess Ape-Pea's system, she is a true follower of interoperability, I should be getting stronger but somehow it's making me ill.

    Emperor Babble: You begin to understand young Restafarian. Used properly Princess Ape-Pea's system does indeed honor all the qualities of rich interoperability. But look more carefully at this particular example of her system. Is it not beautiful? Its needless complexity, its useless elements, its bloated form, they herald true incompatibility. No developer will be able to manually deal with such an incomprehensible monstrosity. We have taken your pure interoperable data and hidden it in a mud of useless scaffolding. Princess Ape-Pea and your other minions will accept this poison as adhering to your precious principles of interoperability but in point of fact by turning Princess Ape-Pea's system into a generic structured data manipulation protocol we have forced the data to contort into unnatural forms that are so hard to read, so difficult to understand that no programmer will ever deal with it directly. We will see no repeat of those damned Hit-Tip Knights building their own stacks in a matter of hours in order to enable basic interoperability. In this new world even the most trivial data visualizations and manipulations become a nightmare. Instead of simple transversals of hierarchical structures programmers will be forced into a morass of categories and artificial entry containers. Behold!

    Yaron’s point is that since Atom is primarily designed for representing streams of microcontent, the only way to represent other types of data in Atom is to tunnel them as custom XML formats or proprietary extensions to Atom. At this point you’ve added an unnecessary layer of complexity.

    The same thing applies to the actual Atom Publishing Protocol. The current design requires clients to use optimistic concurrency to handle conflicts on updates which seems like unnecessary complexity to push to clients as opposed to a “last update wins” scheme. Unfortunately, APP’s interaction model doesn’t support granular updates which means such a scheme isn’t supported by the current design. A number of APP experts have realized this deficiency as you can see from James Snell of IBM’s post entitled Beyond APP - Partial updates and John Panzer of Google’s post entitled RESTful partial updates: PATCH+Ranges.
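    To make the client burden concrete, here is a minimal sketch in Python (not the APP wire format; all the names are invented for illustration) of what optimistic concurrency demands of clients compared to a "last update wins" scheme: every update must carry the version it was based on, and a stale update forces a re-fetch-and-retry loop.

```python
class Conflict(Exception):
    """Raised when an update is based on a stale version (analogous to HTTP 412)."""

class Store:
    """Toy resource store that rejects updates based on stale versions,
    the way APP's optimistic concurrency model does."""
    def __init__(self):
        self._docs = {}  # id -> (version, content)

    def get(self, doc_id):
        return self._docs[doc_id]  # (version, content)

    def put(self, doc_id, content, version=None):
        current = self._docs.get(doc_id)
        if current is not None and version != current[0]:
            raise Conflict("resource changed since it was fetched")
        new_version = 1 if current is None else current[0] + 1
        self._docs[doc_id] = (new_version, content)
        return new_version

def update_with_retry(store, doc_id, edit, attempts=3):
    """The retry loop every client must implement under optimistic concurrency."""
    for _ in range(attempts):
        version, content = store.get(doc_id)
        try:
            return store.put(doc_id, edit(content), version)
        except Conflict:
            continue  # another writer won; re-fetch and try again
    raise Conflict("gave up after %d attempts" % attempts)
```

    Under a "last update wins" scheme, `put` would ignore `version` and the retry loop would disappear entirely, which is exactly the complexity being pushed onto clients here.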

    A potential counter-argument, when pointing out these deficiencies of the Atom Publishing Protocol as used outside its core scenario of microcontent publishing, is that Google exposes all their services using GData which is effectively APP. This is true, but there are a couple of points to consider.

    1. Google had to embrace and extend the Atom format with several proprietary extensions to represent data that was not simply microcontent.
    2. APP experts do not recommend embracing and extending the Atom format the way Google has done since it obviously fragments interoperability. See Joe Gregorio’s post entitled In which we narrowly save Dare from inventing his own publishing protocol and James Snell’s post entitled Silly for more details. 
    3. Such practices lead to a world where we have applications that can only speak Google’s flavor of Atom. I remember when it used to be a bad thing when Microsoft did this, but for some reason Google gets a pass.
    4. Google has recognized the additional complexity they’ve created and now ship GData client libraries for several popular platforms which they recommend over talking directly to the protocol. I don’t know about you but I don’t need a vendor specific client library to process RSS feeds or access WebDAV resources, so why do I need one to talk to Google’s APP implementation?

    It seems that while we weren’t looking, Google moved us a step away from a world of simple, protocol-based interoperability on the Web to one based on running the right platform with the right libraries.

    Usually I wouldn’t care about whatever bad decisions the folks at Google are making with their API platform. However the problem is that it sends out the wrong message to other Web companies that are building Web APIs. The message that it’s all about embracing and extending Internet standards, with interoperability based on everyone running sanctioned client libraries instead of on simple, RESTful protocols, is harmful to the Internet. Unfortunately, this harkens back to the bad old days of Microsoft and I’d hate for us to begin a race to the bottom in this arena.

    On the other hand, arguing about XML data formats and RESTful protocols is just a variation of arguing about what color to paint the bike shed. At the end of the day, the important things are (i) building a compelling end user service and (ii) exposing compelling functionality via APIs to that service. The Facebook REST APIs are a clusterfuck of inconsistency while the Flickr APIs are impressive in how they push irrelevant details of the internals of the service into the developer API (NSIDs anyone?). However both of these APIs are massively popular.

    From that perspective, what Google has done with GData is smart in that by standardizing on it even though it isn’t the right tool for the job, they’ve skipped the sorts of ridiculous what-color-to-paint-the-bike-shed discussions that prompted Yaron to write his blog post in the first place. Wink

    With that out of the way they can focus on building compelling Web services and exposing interesting functionality via their APIs. By the way, am I the last person to find out about the YouTube GData API?

    Now playing: DJ Khaled - I'm So Hood (feat. T-Pain, Trick Daddy, Rick Ross & Plies)


    Categories: Platforms | XML Web Services

    This is another post I was planning to write a few weeks ago which got interrupted by my wedding and honeymoon.

    A few weeks ago, Joel Spolsky wrote a post entitled Strategy Letter VI which I initially dismissed as the ravings of a desktop developer who is trying to create an analogy when one doesn’t exist. The Web isn’t the desktop, or didn’t he read There is no Web Operating System (or WebOS)? By the second time I read it, I realized that if you ignore some of the desktop-centric thinking in Joel’s article, then not only is Joel’s article quite insightful but some of what he wrote is already coming to pass.

    The relevant excerpt from Joel’s article is

    Somebody is going to write a compelling SDK that you can use to make powerful Ajax applications with common user interface elements that work together. And whichever SDK wins the most developer mindshare will have the same kind of competitive stronghold as Microsoft had with their Windows API.

    If you’re a web app developer, and you don’t want to support the SDK everybody else is supporting, you’ll increasingly find that people won’t use your web app, because it doesn’t, you know, cut and paste and support address book synchronization and whatever weird new interop features we’ll want in 2010.

    Imagine, for example, that you’re Google with GMail, and you’re feeling rather smug. But then somebody you’ve never heard of, some bratty Y Combinator startup, maybe, is gaining ridiculous traction selling NewSDK,

    And while you’re not paying attention, everybody starts writing NewSDK apps, and they’re really good, and suddenly businesses ONLY want NewSDK apps, and all those old-school Plain Ajax apps look pathetic and won’t cut and paste and mash and sync and play drums nicely with one another. And Gmail becomes a legacy. The WordPerfect of Email. And you’ll tell your children how excited you were to get 2GB to store email, and they’ll laugh at you. Their nail polish has more than 2GB.

    Crazy story? Substitute “Google Gmail” with “Lotus 1-2-3”. The NewSDK will be the second coming of Microsoft Windows; this is exactly how Lotus lost control of the spreadsheet market. And it’s going to happen again on the web because all the same dynamics and forces are in place. The only thing we don’t know yet are the particulars, but it’ll happen

    A lot of stuff Joel asserts seems pretty clueless on the face of it. Doesn’t he realize that there are umpteen billion AJAX toolkits (e.g. Dojo, Google Web Toolkit, Yahoo! User Interface Library, etc.) and rich internet application platforms (e.g. Flash, Silverlight, XUL, etc.)? Doesn’t he realize that there isn’t a snowball’s chance in hell of the entire Web conforming to standard user interface guidelines, let alone everyone agreeing on using the same programming language and SDK to build Web apps?

    But wait…

    What happens if you re-read the above excerpt and substitute NewSDK with Facebook platform?

    I didn’t classify Facebook as a Social Operating System for no reason. GMail and other email services have become less interesting to me because I primarily communicate with friends and family on the Web via Facebook and its various platform applications. I’ve stopped playing casual games at Yahoo! Games and now use Scrabulous and Texas Hold ‘Em when I want to idle some time away on the weekend. All of these applications are part of a consistent user interface, are all accessible from my sidebar and each of them has access to my data within Facebook including my social graph. Kinda like how Windows or Mac OS X desktop applications on my machine have a consistent user interface, are all accessible from my applications menu and can all access the data on my hard drive.


    I suspect that Joel is right about NewSDK, he’s just wrong about which form it will take. “Social operating system” does have a nice ring to it, doesn’t it?

    Now playing: Kanye West - Two Words (feat. Mos Def, Freeway & The Harlem Boys Choir)


    Categories: Platforms | Web Development

    Recently I took a look at CouchDB because I saw it favorably mentioned by Sam Ruby and when Sam says some technology is interesting, he’s always right. You get the gist of CouchDB by reading the CouchDB Quick Overview and the CouchDB technical overview.  

    CouchDB is a distributed document-oriented database which means it is designed to be a massively scalable way to store, query and manage documents. Two things that are interesting right off the bat are that the primary interface to CouchDB is a RESTful JSON API and that queries are performed by creating the equivalent of stored procedures in Javascript which are then applied on each document in parallel. One thing that is not so interesting is that editing documents is lockless and utilizes optimistic concurrency, which means more work for clients.
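    As a rough illustration of that query model, here is a toy Python equivalent of a CouchDB view (real views are JavaScript functions executed by the server; the documents and function names here are invented): a map function is run against every document, emitting key/value rows, and the collected rows are sorted by key.

```python
def run_view(docs, map_fn):
    """Apply a CouchDB-style map function to each document and collect
    the emitted (key, value) rows, sorted by key as a view result would be."""
    rows = []
    def emit(key, value):
        rows.append((key, value))
    for doc in docs:
        map_fn(doc, emit)
    rows.sort(key=lambda r: r[0])
    return rows

# A map function, analogous to a stored Javascript view definition:
def posts_by_author(doc, emit):
    if doc.get("type") == "post":
        emit(doc["author"], doc["title"])

docs = [
    {"type": "post", "author": "sam", "title": "Long Bets"},
    {"type": "comment", "author": "anon", "text": "nice"},
    {"type": "post", "author": "dare", "title": "CouchDB notes"},
]
```

    Querying the view for a given key is then just a range lookup over the sorted rows, which is what makes the model amenable to running over many documents in parallel.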

    As someone who designed and implemented an XML Database query language back in the day, this all seems strangely familiar.

    So far, I like what I’ve seen but there seems to already be a bunch of incorrect hype about the project which may damage its chances of success if it isn’t checked. Specifically I’m talking about Assaf Arkin’s post CouchDB: Thinking beyond the RDBMS which seems chock full of incorrect assertions and misleading information.

    Assaf writes

    This day, it happens to be CouchDB. And CouchDB on first look seems like the future of database without the weight that is SQL and write consistency.

    CouchDB is a document oriented database which is nothing new [although focusing on JSON instead of XML makes it buzzword compliant] and is definitely not a replacement/evolution of relational databases. In fact, the CouchDB folks assert as much in their overview document.

    Document oriented databases work well for semi-structured data where each item is mostly independent and is often processed or retrieved in isolation. This describes a large category of Web applications which are primarily about documents which may link to each other but aren’t processed or requested often based on those links (e.g. blog posts, email inboxes, RSS feeds, etc). However there are also lots of Web applications that are about managing heavily structured, highly interrelated data (e.g. sites that heavily utilize tagging or social networking) where the document-centric model doesn’t quite fit.

    Here’s where it gets interesting. There are no indexes. So your first option is knowing the name of the document you want to retrieve. The second is referencing it from another document. And remember, it’s JSON in/JSON out, with REST access all around, so relative URLs and you’re fine.

    But that still doesn’t explain the lack of indexes. CouchDB has something better. It calls them views, but in fact those are computed tables. Computed using JavaScript. So you feed (reminder: JSON over REST) it a set of functions, and you get a set of queries for computed results coming out of these functions.

    Again, this is a claim that is refuted by the actual CouchDB documentation. There are indexes, otherwise the system would be ridiculously slow since you would have to run the function and evaluate every single document in the database each time you ran one of these views (i.e. the equivalent of a full table scan). Assaf probably meant to say that there aren’t any relational database style indexes, but…it isn’t a relational database so that isn’t a useful distinction to make.
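    To see why a view behaves like an index rather than a per-query table scan, consider this hypothetical sketch of a materialized view (the class and its behavior are my own simplification, not CouchDB internals): map results are cached per document, queries read only the cache, and the map function runs only when a document changes.

```python
class ViewIndex:
    """Toy materialized view: map output is cached per document, so queries
    never trigger a full re-evaluation of every document in the database."""
    def __init__(self, map_fn):
        self.map_fn = map_fn
        self.by_doc = {}      # doc_id -> list of (key, value) rows
        self.evaluations = 0  # how many times map_fn actually ran

    def update(self, doc_id, doc):
        """Re-map a single changed document, leaving all other cached rows alone."""
        rows = []
        self.map_fn(doc, lambda k, v: rows.append((k, v)))
        self.by_doc[doc_id] = rows
        self.evaluations += 1

    def query(self):
        """Read the cached rows in key order; no map functions run here."""
        return sorted(r for rows in self.by_doc.values() for r in rows)

def tag_view(doc, emit):
    for tag in doc.get("tags", []):
        emit(tag, doc["id"])
```

    The distinction Assaf blurred is visible in the `evaluations` counter: it only moves on updates, never on queries, which is what separates an index from a full scan.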

    I’m personally convinced that write consistency is the reason RDBMS are imploding under their own weight. Features like referential integrity, constraints and atomic updates are really important in the client-server world, but irrelevant in a world of services.

    You can do all of that in the service. And you can do better if you replace write consistency with read consistency, making allowances for asynchronous updates, and using functional programming in your code instead of delegating to SQL.

    I read these two paragraphs five or six times and they still seem like gibberish to me. Specifically, it seems silly to say that maintaining data consistency is important in the client-server world but irrelevant in the world of services. Secondly, “read consistency” and “write consistency” are not an either-or choice. They are both techniques used by database management systems, like Oracle, to present a deterministic and consistent experience when modifying, retrieving and manipulating large amounts of data.

    In the world of online services, people are very aware of the CAP conjecture and often choose availability over data consistency, but it is a conscious decision. For example, it is more important for Amazon that their system is always available to users than that they never get an occasional order wrong. See Pat Helland’s (ex-Amazon architect) example of how a business-centric approach to data consistency may shape one’s views from his post Memories, Guesses, and Apologies where he writes

    #1 - The application has only a single replica and makes a "decision" to ship the widget on Wednesday.  This "decision" is sent to the user.

    #2 - The forklift pummels the widget to smithereens.

    #3 - The application has no recourse but to apologize, informing the customer they can have another widget in one month (after the incoming shipment arrives).

    #4 - Consider an alternate example with two replicas working independently.  Replica-1 "decides" to ship the widget and sends that "decision" to User-1.

    #5 - Independently, Replica-2 makes a "decision" to ship the last remaining widget to User-2.

    #6 - Replica-2 informs Replica-1 of its "decision" to ship the last remaining widget to User-2.

    #7 - Replica-1 realizes that they are in trouble...  Bummer.

    #8 - Replica-1 tells User-1 that he guessed wrong.

    #9 - Note that the behavior experienced by the user in the first example is indistinguishable from the experience of user-1 in the second example.

    Eventual Consistency and Crappy Computers

    Business realities force apologies.  To cope with these difficult realities, we need code and, frequently, we need human beings to apologize.  It is essential that businesses have both code and people to manage these apologies.

    We try too hard as an industry.  Frequently, we build big and expensive datacenters and deploy big and expensive computers.   

    In many cases, comparable behavior can be achieved with a lot of crappy machines which cost less than the big expensive one.

    The problem described by Pat isn’t a failure of relational databases vs. document oriented ones as Assaf’s implication would have us believe. It is the business reality that availability is more important than data consistency for certain classes of applications. A lot of the culture and technologies of the relational database world are about preserving data consistency [which is a good thing because I don’t want money going missing from my bank account because someone thought the importance of write consistency is overstated] while the culture around Web applications is about reaching scale cheaply while maintaining high availability in situations where the occurrence of data loss is unfortunate but not catastrophic (e.g. lost blog comments, mistagged photos, undelivered friend requests, etc).

    Even then most large scale Web applications that don’t utilize the relational database features that are meant to enforce data consistency (triggers, foreign keys, transactions, etc) still end up rolling their own app-specific solutions to handle data consistency problems. However since these are tailored to their application they are more performant than generic features which may exist in a relational database. 

    For further reading, see an Overview of the Flickr Architecture.

    Now playing: Raekwon - Guillotine (Swordz) (feat. Ghostface Killah, Inspectah Deck & GZA/Genius)


    Categories: Platforms | Web Development

    Yesterday the Wall Street Journal had an article entitled Why So Many Want to Create Facebook Applications which gives an overview of the burst of activity surrounding the three month old Facebook platform. If a gold rush, complete with dedicated VC funds targeting widget developers, around building embedded applications in a social networking site sounds weirdly familiar to you, that’s because it is. This time last year people were saying the same thing about building MySpace widgets. The conventional wisdom at the time was that sites like YouTube (acquired for $1.65 billion) and PhotoBucket (acquired for $250 million) rose in popularity due to their MySpace widget strategy.

    So, why would developers who’ve witnessed the success of companies developing MySpace widgets rush to target a competing social networking site that has fewer users and requires more code to integrate with the site? The answer is that MySpace made the mistake of thinking that they were a distribution channel instead of a platform. If you are a distribution channel, you hold all the cards. Without you, they have no customers. On the other hand, if you are a platform vendor you realize that it is a symbiotic relationship and you have to make people building on your platform successful because of [not in spite of] your efforts.

    Here are the three classic mistakes the folks at MySpace made which made it possible for Facebook to steal their thunder and their widget developers.

    1. Actively Resent the Success of Developers on Your Platform: If you are a platform vendor, you want developers building on your platform to be successful. In contrast, MySpace’s executives publicly griped about the success of sites like YouTube and PhotoBucket that were “driven off the back of MySpace” and bragged about building competing services which would become as popular as them since “60%-70% of their traffic came from MySpace”. A sign that things had gotten out of hand was when MySpace blocked PhotoBucket widgets only to acquire the site a month later, indicating that the block was an aggressive negotiation tactic intended to scare off potential buyers.

    2. Limit the Revenue Opportunities of Developers on Your Platform: MySpace created all sorts of restrictions to make it difficult for widget developers to actually make money directly from the site. For one, they blocked any widget that contained advertising even though advertising is the primary way to make money on the Web. Secondly, they restricted the options widgets had in linking back to the widget developers website thus driving users to where they could actually show them ads. Instead of trying to create a win<->win situation for widget developers (MySpace gets free features thus more engagement from their users, widget developers get ad revenue and traffic) the company tipped the balance excessively in their favor with little upside for widget developers.  

    3. Do Not Invest in Your Platform: For a company that depends so much on developers building tiny applications that integrate into their site, it’s quite amazing that MySpace does not provide any APIs at all. Nor do they provide a structured way for their users to find and install widgets. It turns out that Fox Interactive Media (MySpace’s parent company) did build a widget platform and gallery but due to internal politics these services are not integrated. In fact, one could say that MySpace has done as little as possible to make developing widgets for their platform a pleasant experience for developers or their users. 

    This is pretty much the story of all successful technology platforms that fall out of favor. If you do not invest in your platform, it will become obsolete. If people are always scared that you will cut off their air supply out of jealousy, they’ll bolt the first chance they get. And if people can’t make money building on your platform, then there is no reason for them to be there in the first place. Don’t make the same mistakes.

    Now playing: 50 Cent - Many Men (Wish Death)


    My job at Microsoft is working on the contacts platform that is utilized by a number of Windows Live services. The contacts platform is a unified graph of the relationships our users have created across Windows Live. It includes a user's Windows Live Hotmail contacts, their Windows Live Spaces friends, their Windows Live Messenger buddies and anyone they've added to an access control list (e.g. people who can access their shared folders in Windows Live Skydrive or the events in their calendar). Basically, a while ago one of our execs thought it didn't make sense to build a bunch of social software applications each acting as a silo of user relationships and that instead we should have a unified graph of the user to user relationships within Windows Live. Fast forward a couple of years and we now have a clearer idea of the pros and cons of building a unified social graph.

    Given the above, it should be no surprise that I read Brad Fitzpatrick's Thoughts on the Social Graph with keen interest since it overlaps significantly with my day job. I was particularly interested in the outlined goals for the developers API which are included below

    For developers who don't want to do their own graph analysis from the raw data, the following high-level APIs should be provided: 

    1. Node Equivalence, given a single node, say "brad on LiveJournal", return all equivalent nodes: "brad" on LiveJournal, "bradfitz" on Vox, and 4caa1d6f6203d21705a00a7aca86203e82a9cf7a (my FOAF mbox_sha1sum). See the slides for more info.
    2. Edges out and in, by node. Find all outgoing edges (where edges are equivalence claims, equivalence truths, friends, recommendations, etc). Also find all incoming edges.
    3. Find all of a node's aggregate friends from all equivalent nodes, expand all those friends' equivalent nodes, and then filter on destination node type. This combines steps 1 and 2 in one call. For instance, Given 'brad' on LJ, return me all of Brad's friends, from all of his equivalent nodes, if those [friend] nodes are either 'mbox_sha1sum' or 'Twitter' nodes.
    4. Find missing friends of a node. Given a node, expand all equivalent nodes, find aggregate friends, expand them, and then report any missing edges. This is the "let the user sync their social networking sites" API. It lets them know if they were friends with somebody on Friendster and they didn't know they were both friends on MySpace, they might want to be.
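    The first and third goals above can be sketched as plain graph operations. This is a hypothetical Python model for illustration only; the accounts, site names and data structures are invented and are not Brad's actual design:

```python
from itertools import chain

# Equivalence claims: account -> set of accounts asserted to be the same person.
EQUIV = {
    ("lj", "brad"): {("vox", "bradfitz")},
    ("vox", "bradfitz"): {("lj", "brad")},
}

# Friend edges per account: account -> set of friend accounts.
FRIENDS = {
    ("lj", "brad"): {("lj", "jane")},
    ("vox", "bradfitz"): {("vox", "jane_v")},
}

def equivalent_nodes(node):
    """Goal 1: transitive closure of equivalence claims, including the node itself."""
    seen, stack = {node}, [node]
    while stack:
        for other in EQUIV.get(stack.pop(), ()):
            if other not in seen:
                seen.add(other)
                stack.append(other)
    return seen

def aggregate_friends(node):
    """Goal 3: union of friend edges across all of a node's equivalent accounts."""
    return set(chain.from_iterable(
        FRIENDS.get(n, ()) for n in equivalent_nodes(node)))
```

    Goal 4 ("find missing friends") would then fall out of comparing `aggregate_friends` results between a user's accounts on two different sites, which is exactly where the privacy problems discussed below start to bite.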

    Here are the top three problems Brad and the rest of the Google folks working on this project will have to factor in as they chase the utopia that is a unified social graph.
    1. Some Parts of the Graph are Private: Although social networking sites with publicly articulated social networks are quite popular (e.g. MySpace), there are a larger number of private or semi-private social networks that either can only be viewed by the owner of the list (e.g. IM buddy lists) or by some subset of the graph (e.g. private profiles on social networking sites like MySpace, Facebook, Windows Live Spaces, etc). The latter is especially tricky to deal with. In addition, people often have more non-public articulated social networks (i.e. friends lists) than public ones despite the popularity of social networking sites with public profiles.

    2. Inadvertent Information Disclosure caused by Linking Nodes Across Social Networks: The "find missing friends of a node" feature in Brad's list sounds nice in theory but it includes a number of issues that users often consider to be privacy violations or just plain creepy. Let's say I have Batman on my friends list in MySpace because I think the caped crusader is cool. Then I join LiveJournal, it calls the find_missing_friends() API to identify which of my friends from other sites are using LiveJournal, and it finds Bruce Wayne's LiveJournal. Oops, an API call just revealed Batman's secret identity. A less theoretical version of this problem occurred when we first integrated Windows Live Spaces with Windows Live Messenger, and some of our Japanese beta users were stunned to find that their supposedly anonymous blog postings were now a click away for their IM buddies to see. I described this situation briefly in my submission to the 2005 Social Computing Symposium.

    3. All "Friends" aren't Created Equal: Another problem is that most users don't want all their "friends" available in all their applications. One capability we were quite proud of at one time was that if you had Pocket MSN Messenger then we merged the contacts on your cell phone with your IM and email contacts. A lot of people were less than impressed by this behavior. Someone you have on your IM buddy list isn't necessarily someone you want in your cell phone address book. Over the years, I've seen more examples of this than I can count. Being "friends" in one application does not automatically mean that two users want to be "friends" in a completely different context.

    These are the kinds of problems we've had to deal with on my team while also trying to make this scale to being accessed by services utilized by hundreds of millions of users. I've seen what it takes to build a system like this first hand and Brad & company have their work cut out for them. This is without considering the fact that they may have to deal with ticked off users or ticked off social networking sites depending on how exactly they plan to build this giant database of user friend lists.

    PS: In case any of this sounds interesting to you, we're always hiring. :)


    Categories: Platforms | Social Software

    I learned about the Facebook Data Store API yesterday from a post by Marc Canter. The API is intended to meet the storage needs of developers building applications on the Facebook platform. Before we decide if the API meets the needs of developers, we need to list what these needs are in the first place. A developer building a widget or application for a social network’s widget platform, such as a gadget for Windows Live Spaces or an application for the Facebook platform, needs to store

    1. Static resources that will be consumed or executed on the client such as images, stylesheets and script files. Microsoft provides this kind of hosting for gadget developers via Windows Live Gallery. This is all the storage needed for a gadget such as GMT clock.
    2. User preferences and settings related to the gadget. In many cases, a gadget may provide a personalized view of data (e.g. my Netflix queue or the local weather) or may simply have configuration options specific to the user which need to be saved. Microsoft provides APIs for getting, setting and deleting preferences as part of its Web gadgets framework. My Flickr badge gadget is an example of the kind of gadget that requires this level of storage.
    3. The application’s server-side code and application specific databases. This is the equivalent of the LAMP or WISC hosting you get from a typical Web hosting provider. No social networking site provides this for widget/gadget developers today. The iLike Facebook application is an example of the kind of application that requires this level of “storage” or at this level it should probably be called app hosting.

    Now that we have an idea of the data storage needs of Web widget/gadget developers, we can now discuss how the Facebook Data Store API measures up. The API consists of three broad classes of methods; User Preferences, Persistent Objects and Associations. All methods can return results as XML, JSON or JSONP.

    It is currently unclear if the API is intended to be RESTful or not since there is scant documentation of the wire format of requests or responses. 

    User Preferences

    Preference methods
    * data.setUserPreference update one preference
    * data.setUserPreferences update multiple preferences in batch
    * data.getUserPreference get one preference of a user
    * data.getUserPreferences get all preferences of a user

    These methods are used to store key value pairs which may represent user preferences or settings for an application. There is a limit of 201 key<->value pairs which can be stored per user. The keys are numeric values from 0 – 200 and the maximum length of a preference value is 128 characters. 
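    Those limits are easy to trip over, so a client might want to enforce them locally before calling the service. The sketch below is a hypothetical local stand-in for data.setUserPreference / data.getUserPreference (the class and method names are invented); only the 0–200 key range and 128-character value cap come from the API as described above.

```python
MAX_KEY = 200        # keys are numeric values from 0 to 200 (201 slots total)
MAX_VALUE_LEN = 128  # maximum length of a preference value

class PreferenceStore:
    """Local stand-in for the preference calls that validates the
    documented limits before any request would go over the wire."""
    def __init__(self):
        self._prefs = {}

    def set(self, key, value):
        if not (0 <= key <= MAX_KEY):
            raise ValueError("preference keys must be integers from 0 to 200")
        if len(value) > MAX_VALUE_LEN:
            raise ValueError("preference values are capped at 128 characters")
        self._prefs[key] = value

    def get(self, key, default=""):
        return self._prefs.get(key, default)
```

    Validating locally like this turns what would otherwise be a failed (or silently truncated) remote call into an immediate, debuggable error in the application.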

    Persistent Objects

    Object Definition methods
    * data.createObjectType create a new object type
    * data.dropObjectType delete an object type and all objects of this type
    * data.renameObjectType rename an object type
    * data.defineObjectProperty add a new property
    * data.undefineObjectProperty remove a previously defined property
    * data.renameObjectProperty rename a previously defined property
    * data.getObjectTypes get a list of all defined object types
    * data.getObjectType get detailed definition of an object type

    Developers can create new types which are analogous to SQL tables, especially when you consider terminology like “drop” object, the ability to add new properties/columns to the type, and being able to retrieve the schema of the type, all of which are more common in the relational database world than in object oriented programming.


    Object Manipulation methods
    * data.createObject create a new object
    * data.updateObject update an object's properties
    * data.deleteObject delete an object by its id
    * data.deleteObjects delete multiple objects by ids
    * data.getObject get an object's properties by its id
    * data.getObjects get properties of a list of objects by ids
    * data.getObjectProperty get an object's one property
    * data.setObjectProperty set an object's one property
    * data.getHashValue get a property value by a hash key
    * data.setHashValue set a property value by a hash key
    * data.incHashValue increment/decrement a property value by a hash key
    * data.removeHashKey delete an object by its hash key
    * data.removeHashKeys delete multiple objects by their hash keys

    This aspect of the API is almost self explanatory: you create an object type (e.g. a movie) then manipulate instances of this object using the above APIs. Each object can be accessed via a numeric ID or a string hash value. The object’s numeric ID is obtained when you first create the object although it isn’t clear how you obtain an object’s hash key. It also seems like there is no generic query mechanism so you need to store the numeric IDs or hash keys of the objects you are interested in somewhere so you don’t have to enumerate all objects looking for them later. Perhaps with the preferences API?
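    That workaround can be made concrete: since there is no query mechanism, the application itself has to track the ids of the objects it creates, for example in a preference value. This is a hypothetical sketch; the object store is a local stand-in rather than the actual data.createObject / data.getObject calls, and the comma-separated id list is just one invented encoding.

```python
class ObjectStore:
    """Minimal stand-in for the persistent-object calls: objects get a
    numeric id on creation and can only be fetched back by that id."""
    def __init__(self):
        self._next_id = 1
        self._objects = {}

    def create_object(self, obj_type, properties):
        obj_id = self._next_id
        self._next_id += 1
        self._objects[obj_id] = {"type": obj_type, **properties}
        return obj_id

    def get_object(self, obj_id):
        return self._objects[obj_id]

def remember_id(prefs, key, obj_id):
    """Append a newly created object's id to a comma-separated preference value,
    since there is no way to query for it later."""
    existing = prefs.get(key, "")
    prefs[key] = str(obj_id) if not existing else existing + "," + str(obj_id)

def recall_ids(prefs, key):
    """Recover the tracked ids so the objects can be fetched without enumeration."""
    value = prefs.get(key, "")
    return [int(v) for v in value.split(",")] if value else []
```

    Note the 128-character cap on preference values would limit how many ids fit in one slot, which is another way the pieces of this API don't quite compose.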


    Association Definition methods
    * data.defineAssociation create a new object association
    * data.undefineAssociation remove a previously defined association and all its data
    * data.renameAssociation rename a previously defined association
    * data.getAssociationDefinition get definition of a previously defined association
    * data.getAssociationDefinitions get definitions of all previously defined associations

    An association is a named relationship between two objects. For example, "works_with" could be an association between two user objects. Associations don't have to be between the same types (e.g. a "works_at" could be an association between a user object and a company object). Associations take me back to WinFS and the son-of-WinFS Entity Data Model, which has a notion of a RelationshipType that is very similar to the above notion of an association. It is also similar to the notion of an RDF triple, but not quite.
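    The triple-like flavor is easy to see in a toy model. Here is a hypothetical sketch (a local stand-in, not the actual data.setAssociation / data.getAssociatedObjects calls) where each association is stored as a (subject, name, object) triple:

```python
class AssociationStore:
    """Toy version of the association calls: named, directed edges between
    object ids, each held as a (subject, name, object) triple."""
    def __init__(self):
        self._assocs = set()

    def set_association(self, name, id1, id2):
        self._assocs.add((id1, name, id2))

    def get_associated_objects(self, name, id1):
        """Ids of all objects id1 is linked to via the named association."""
        return sorted(o for (s, n, o) in self._assocs if s == id1 and n == name)

    def get_associations(self, id1, id2):
        """Names of all associations that exist between two objects."""
        return sorted(n for (s, n, o) in self._assocs if s == id1 and o == id2)
```

    The difference from a real RDF triple store is that the "name" here is an application-defined association rather than a URI-identified predicate, which is roughly why the comparison above is "similar, but not quite".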

    Association Manipulation methods
    * data.setAssociation create an association between two objects
    * data.setAssociations create a list of associations between pairs of objects
    * data.removeAssociation remove an association between two objects
    * data.removeAssociations remove associations between pairs of objects
    * data.removeAssociatedObjects remove all associations of an object
    * data.getAssociatedObjects get ids of an object's associated objects
    * data.getAssociatedObjectCount get count of an object's associated objects
    * data.getAssociatedObjectCounts get counts of associated objects of a list of objects.
    * data.getAssociations get all associations between two objects
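    As a sketch of the semantics behind these methods, here is a toy model of named associations between object IDs. The class and method names loosely mirror data.setAssociation and data.getAssociatedObjects but are my own illustration:

```python
from collections import defaultdict


class Associations:
    """Toy model of named, directed associations between object IDs."""

    def __init__(self):
        # (association name, source object ID) -> set of target object IDs
        self.assocs = defaultdict(set)

    def set_association(self, name, id1, id2):
        self.assocs[(name, id1)].add(id2)

    def get_associated_objects(self, name, obj_id):
        return sorted(self.assocs[(name, obj_id)])

    def get_associated_object_count(self, name, obj_id):
        return len(self.assocs[(name, obj_id)])


a = Associations()
a.set_association("works_with", 1, 2)   # user 1 works with user 2
a.set_association("works_with", 1, 3)
a.set_association("works_at", 1, 99)    # user 1 works at company 99
print(a.get_associated_objects("works_with", 1))    # [2, 3]
print(a.get_associated_object_count("works_at", 1)) # 1
```

    The key point is that the association name plus the two IDs is the whole story, much like an RDF triple of (subject, predicate, object).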

    All of these methods should be self-explanatory. Although I think this association stuff is pretty sweet, I’m unclear as to where all of this is expected to fall in the hierarchy of needs of a Facebook application. The preferences stuff is a no brainer. The persistent object and association APIs could be treated as a very rich preferences API by developers, but that seems to fall short of their potential. On the other hand, without providing something closer to an app hosting platform like Amazon has done with EC2 + S3, I’m not sure there is any other use for them by Web developers using the Facebook platform.

    Have I missed something here?

    Now playing: UGK - International Players Anthem (feat. Outkast)


    Categories: Platforms

    August 14, 2007
    @ 03:19 AM

    Recently I've seen a bunch of people I consider to be really smart sing the praises of Hadoop, such as Sam Ruby in his post Long Bets, Tim O’Reilly in his post Yahoo!’s Bet on Hadoop, and Bill de hÓra in his post Phat Data. I haven’t dug too deeply into Hadoop because the legal folks at work will chew out my butt if I did, but there are a number of little niggling doubts that make me wonder if this is the savior of the world that all these geeks claim it will be. Here are some random thoughts that have made me skeptical

    1. Code Quality: Hadoop was started by Doug Cutting who created Lucene and Nutch. I don’t know much about Nutch but I am quite familiar with Lucene because we adopted it for use in RSS Bandit. This is probably the worst decision we’ve made in the entire history of RSS Bandit. Not only are the APIs a usability nightmare because they were poorly hacked out then never refactored, the code is also notoriously flaky when it comes to dealing with concurrency so common advice is to never use multiple threads to do anything with Lucene.

    2. Incomplete Specifications: Hadoop’s MapReduce and HDFS are a re-implementation of Google’s MapReduce and Google File System (GFS) technologies. However it seems unwise to base a project on research papers that may not reveal all the details needed to implement the service for competitive reasons. For example, the Hadoop documentation is silent on how it plans to deal with the election of a primary/master server among peers, especially in the face of machine failure, which Google solves using the Chubby lock service. It just so happens that there is a research paper that describes Chubby, but how many other services within Google’s data centers do MapReduce and GFS depend on which are yet to have their own public research paper? Speaking of which, where are the Google research papers on their message queueing infrastructure? You know they have to have one, right? How about their caching layer? Where are the papers on Google’s version of memcached? Secondly, what is the likelihood that Google will be as forthcoming with these papers now that they know competitors like Yahoo! are knocking off their internal architecture?

    3. A Search Optimized Architecture isn’t for Everyone: One of the features of MapReduce is that one can move the computation close to the data because “Moving Computation is Cheaper than Moving Data”. This is especially important when you are doing lots of processing intensive operations, such as the kind of data analysis that goes into creating the Google search index. However, if your site’s main tasks are reading and writing lots of data (e.g. MySpace) or sending lots of transient messages back and forth while ensuring that they always arrive in the right order (e.g. Google Talk), then these optimizations and capabilities aren’t much use to you and a different set of tools would serve you better. 
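    For readers unfamiliar with the programming model being discussed, here is a toy single-process sketch of MapReduce (my own illustration; the real systems distribute each phase across a cluster and persist intermediate results):

```python
from collections import defaultdict


def map_reduce(inputs, mapper, reducer):
    """Minimal MapReduce: map emits key/value pairs, the shuffle groups
    them by key, and reduce aggregates each group."""
    groups = defaultdict(list)
    for item in inputs:
        for key, value in mapper(item):   # map phase
            groups[key].append(value)     # shuffle: group values by key
    return {key: reducer(key, values) for key, values in groups.items()}


# The classic word-count job, the kind of batch analysis MapReduce is built for
docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = map_reduce(docs,
                    mapper=lambda doc: [(word, 1) for word in doc.split()],
                    reducer=lambda word, ones: sum(ones))
print(counts["the"])  # 3
print(counts["fox"])  # 2
```

    Notice there is nothing here about low-latency reads, writes or ordered message delivery; the model is purely about batch aggregation, which is the point of the criticism above.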

    I believe there are a lot of lessons that can be learned from the distributed systems that power the services behind Google, Amazon and the like. However I think it is waaaay too early to be crowning some knock off of one particular vendor's internal infrastructure as the future of distributed computing as we know it.


    PS: Yes, I realize that Sam and Bill are primarily pointing out the increasing importance of parallel programming as it relates to two trends: (i) almost every major website that ends up dealing with lots of data and lots of traffic eventually eschews relational database features like joins, normalization, triggers and transactions because they are not cost effective and (ii) the increasingly large amounts of data that we generate and now have to process due to falling storage costs. Even though their mentions of Hadoop are incidental, it still seems to me that it’s almost become a meme, one which deserves more scrutiny before we jump on that particular band wagon. 

    Now playing: N.W.A. - Appetite For Destruction


    Categories: Platforms

    Disclaimer: This blog post does not reflect future product announcements, technical strategy or advice from my employer. Disregard this disclaimer at your own risk.

    In my previous post Some Thoughts on Open Social Networks, I gave my perspective on various definitions of "open social network" in response to the Wired article Slap in the Facebook: It's Time for Social Networks to Open Up. However there was one aspect of the article that I overlooked when I first read it. The first page of the article ends with the following exhortation.

    We would like to place an open call to the web-programming community to solve this problem. We need a new framework based on open standards. Think of it as a structure that links individual sites and makes explicit social relationships, a way of defining micro social networks within the larger network of the web.

    This is a problem that interests me personally. I have a Facebook profile while my fiancée has a MySpace profile. Since I’m now an active user of Facebook, I’d like her to be able to be part of my activities on the site such as being able to view my photos, read my wall posts and leave wall posts of her own. I could ask her to create a Facebook account, but I already asked her to create a profile on Windows Live Spaces so we could be friends on that service and quite frankly I don’t think she’ll find it reasonable if I keep asking her to jump from social network to social network because I happen to try out a lot of these services as part of my day job. So how can this problem be solved in the general case?

    OpenID to the Rescue

    This is exactly the kind of problem that OpenID was designed to solve.  The first thing to do is to make sure we all have the same general understanding of how OpenID works. It's basically the same model as Microsoft Passport Windows Live ID, Google Account Authentication for Web-Based Applications and Yahoo! Browser Based Authentication. A website redirects you to your identity provider, you authenticate yourself (i.e. login) on your identity providers site and then are redirected back to the referring site along with your authentication ticket. The ticket contains some information about you that can be used to uniquely identify you as well as some user data that may be of interest to the referring site (e.g. username).

    So how does this help us? Let’s say MySpace was an OpenID provider, which is a fancy way of saying that I can use my MySpace account to login to any site that accepts OpenIDs. And now let’s say Facebook was a site that accepted OpenIDs as an identification scheme. This means that I could add my fiancée to the access control list of people who could view and interact with my profile on Facebook by using the URL of her MySpace profile as the identifier for her. So when she tries to access my profile for the first time, she is directed to the Facebook login page where she has the option of logging in with her MySpace credentials. When she chooses this option she is directed to the MySpace login page. After logging into MySpace with proper credentials, she is redirected back to Facebook and gets a pseudo-account on the service which allows her to participate in the site without having to go through an account creation process.

    Now that the user has a pseudo-account on Facebook, wouldn’t it be nice if when someone clicked on them they got to see a Facebook profile? This is where OpenID Attribute Exchange can be put to use. You could define a set of required and optional attributes that are exchanged as part of social network interop using OpenID. So we can insert an extra step [which may be hidden from the user] after the user is redirected to Facebook after logging into MySpace where the user’s profile information is requested. For example, after a successful log-in attempt by a MySpace user, Facebook could make a request for attributes such as gender, location, looking_for and fav_music

    which could return results such as "WA, United States" for the location and "country,pop" for the favorite music.

    With the information returned by MySpace, one can now populate a place holder Facebook profile for the user.
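    To sketch what such an attribute request looks like on the wire, here is a toy builder for an OpenID Attribute Exchange fetch request. The `openid.ax.*` parameter names come from the Attribute Exchange extension; the attribute schema URIs under example.com are placeholders I made up, not real type URIs:

```python
from urllib.parse import urlencode


def build_ax_fetch_request(attributes):
    """Build the query-string parameters for an OpenID AX fetch request."""
    params = {
        "openid.ns.ax": "http://openid.net/srv/ax/1.0",
        "openid.ax.mode": "fetch_request",
        "openid.ax.required": ",".join(attributes),
    }
    for alias in attributes:
        # Hypothetical schema URI; real deployments agree on shared type URIs
        params["openid.ax.type." + alias] = "http://example.com/schema/" + alias
    return urlencode(sorted(params.items()))


query = build_ax_fetch_request(["gender", "location", "fav_music"])
print("openid.ax.mode=fetch_request" in query)  # True
```

    The relying party (Facebook in the scenario above) appends these parameters to the checkid request it sends to the provider (MySpace), and the provider returns the attribute values alongside the authentication response.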

    Why This Will Never Happen

    The question at the tip of your tongue is probably “If we can do this with OpenID today, how come I haven’t heard of anyone doing this yet?”. As usual when it comes to interoperability, the primary reasons for lack of interoperability are business related and not technical. When you look at the long list of OpenID providers, you may notice that there is no similarly long list of sites that accept OpenID credentials. In fact, there is no such list of sites readily available because the number of them is an embarrassing fraction of the number of sites that act as OpenID providers. Why this discrepancy?

    If you look around, you’ll notice that the major online services such as Yahoo! via BBAuth, Microsoft via Passport Windows Live ID, and AOL via OpenID all provide ways for third party sites to accept user credentials from their sites. This increases the value of having an account on these services because it means now that I have a Microsoft Passport Windows Live ID I not only can log-in to various Microsoft properties across MSN and Windows Live but also non-Microsoft sites like Expedia. This increases the likelihood that I’ll get an account with the service which makes it more likely that I’ll be a regular user of the service which means $$$. On the other hand, accepting OpenIDs does the exact opposite. It actually reduces the incentive to create an account on the site which reduces the likelihood I’ll be a regular user of the site and less $$$. Why do you think there is no OpenID link on the AOL sign-in page even though the company is quick to brag about creating 63 million OpenIDs?

    Why would Facebook implement a feature that reduced their user growth via network effects? Why would MySpace make it easy for sites to extract user profile information from their service? Because openness is great? Yeah…right.

    Openness isn’t why Facebook is currently being valued at $6 billion nor is it why MySpace is currently expected to pull in about half a billion in revenue this year. These companies are doing just great being walled gardens and thanks to network effects, they will probably continue to do so unless something really disruptive happens.   

    PS: Marc Canter asks if I can attend the Data Sharing Summit between Sept. 7th – 8th. I’m not sure I can since my wedding + honeymoon is next month. Consider this my contribution to the conversation if I don’t make it.

    Now playing: Wu-Tang Clan - Can It Be All So Simple


    August 2, 2007
    @ 02:40 AM

    Yesterday, I was chatting with a former co-worker about Mary Jo Foley's article Could a startup beat Microsoft and Google to market with a ‘cloud OS’? and I pointed out that it was hard to make sense of the story because she seemed to be conflating multiple concepts then calling all of them a "cloud OS". It seems she isn’t the only one who throws around muddy definitions of this term as evidenced by C|Net articles like Cloud OS still pie in the sky and blog posts from Microsoft employees like Windows Cloud! What Would It Look Like!? 

    I have no idea what Microsoft is actually working on in this space and even if I did I couldn't talk about it anyway. However I do think it is a good idea for people to have a clear idea of what they are talking about when the throw around terms like "cloud OS" or "cloud platform" so we don't end up with another useless term like SOA which means a different thing to each person who talks about it. Below are the three main ideas people often identify as a "Web OS", "cloud OS" or "cloud platform" and examples of companies executing on that vision.

    WIMP Desktop Environment Implemented as a Rich Internet Application (The YouOS Strategy)

    Porting the windows, icons, menu and pointer (WIMP) user interface, which has defined desktop computing for the last three decades, to the Web is seen by many as a logical extension of the desktop operating system. This is a throwback to Oracle's network computer of the late 1990s, where the expectation is that the average PC is not much more than a dumb terminal with enough horsepower to handle the display requirements and computational needs of whatever rich internet application platform is needed to make this work.

    A great example of a product in this space is YouOS. This seems to be the definition of a "cloud OS" that is used by Ina Fried in the C|Net article Cloud OS still pie in the sky.

    My thoughts on YouOS and applications like it were posted a year ago, my opinion hasn't changed since then.

    Platform for Building Web-based Applications (The Amazon Strategy)

    When you look at presentations on scaling popular websites like YouTube, Twitter, Flickr, eBay, etc. it seems everyone keeps hitting the same problems and reinventing the same wheels. They all start off using LAMP, thinking that’s the main platform decision they have to make. Then they eventually add on memcached or something similar to reduce disk I/O. After that, they may start to hit the limits of the capability of relational database management systems and may start taking data out of their databases, denormalizing them or simply repartitioning/resharding them as they add new machines or clusters. Then they realize that they now have dozens of machines in their data center when they started with one or two, and managing them (i.e. patches, upgrades, hard disk crashes, dynamically adding new machines to the cluster, etc.) becomes a problem.

    Now what if someone who’d already built a massively scalable website and had amassed a bunch of technologies and expertise at solving these problems decided to rent out access to their platform to startups and businesses who didn’t want to deal with a lot of the costs and complexities of building a popular Website beyond deciding whether to go with LAMP or WISC? That’s what Amazon has done with Amazon Web Services such as EC2, S3, SQS and the upcoming Dynamo.  

    The same way a desktop operating system provides an abstraction over the complexity of interacting directly with the hardware is the way Amazon’s “cloud operating system” insulates Web developers from a lot of the concerns that currently plague Web development outside of actually writing the application code and dealing with support calls from their customers.

    My thoughts on Amazon’s Web Services strategy remain the same. I think this is the future of Web platforms but there is still a long way to go for it to be attractive to today’s startup or business.

    NOTE: Some people have commented that it is weird for an online retailer to get into this business. This belies a lack of knowledge of the company’s history. Amazon has always been about gaining expertise at some part of the Web retailer value chain then opening that up to others as a platform. Previous examples include the Amazon Honor System which treats their payment system as a platform, Fulfillment by Amazon which treats their warehousing and product shipping system as a platform, zShops which allows you to sell your products on their Website, as well as more traditional co-branding deals where other sites reused their e-commerce platform.

    Web-based Applications and APIs for Integrating with Them (The Google Strategy)

    Similar to Amazon, Google has created a rich set of tools and expertise at building and managing large scale websites. Unlike Amazon, Google has not indicated an interest in renting out these technologies and expertise to startups and businesses. Instead Google has focused on using their platform to give them a competitive advantage in the time to market, scalability and capabilities of their end user applications. Consider the following… 

    If I use GMail for e-mail, Google Docs & Spreadsheets for my business documents, Google Calendar for my schedule, Google Talk for talking to my friends, Google search to find things on my desktop or on the Web and iGoogle as my start page when I get on the computer then it could be argued that for all intents and purposes my primary operating system was Google not Windows. Since every useful application eventually becomes a platform, Google’s Web-based applications are no exception. There is now a massive list of APIs for interacting and integrating with Google’s applications which make it easier to get data into Google’s services (e.g. the various GData APIs) or to spread the reach of Google’s services to sites they don’t control  (e.g. widgets like the Google AJAX Search API and the Google Maps API).

    In his blog post GooOS, the Google Operating System Jason Kottke argues that the combination of Google’s various Web applications and APIs [especially if they include an office suite] plus some desktop and mobile entry points into their services is effectively a Google operating system. Considering Google’s recent Web office leanings and its bundling deals with Dell and Apple, it seems Jason Kottke was particularly prescient given that he wrote his blog post in 2004.

    Now playing: Fabolous - Do The Damn Thing (feat. Young Jeezy)


    In my previous post, I mentioned that I'm in the early stages of building an application on the Facebook platform. I haven't yet decided on an application but for now, let's assume that it is a Favorite Comic Books application which allows me to store my favorite comic books and shows me the most popular comic books among my friends.

    After investigating using Amazon's EC2 + S3 to build my application I've decided that I'm better off using a traditional hosting solution running on either the LAMP or WISC platform. One of the things I've been looking at is which platform has better support for providing an in-memory caching solution that works well in the context of a Web farm (i.e. multiple Web servers) out of the box. While working on the platforms behind several high traffic Windows Live services I've learned that you should be prepared for dealing with scalability issues and caching is one of the best ways to get bang for the buck when improving the scalability of your service.

    I recently discovered memcached, a distributed object caching system originally developed by Brad Fitzpatrick of LiveJournal fame. You can think of memcached as a giant hash table that can run on multiple servers which automatically handles maintaining the balance of objects hashed to each server and transparently fetches/removes objects over the network if they aren't on the same machine that is accessing an object in the hash table. Although this sounds fairly simple, there is a lot of grunt work in building a distributed object cache which handles data partitioning across multiple servers and hides the distributed nature of the application from the developer. memcached is well integrated into the typical LAMP stack and is used by a surprising number of high traffic websites including Slashdot, Facebook, Digg, Flickr and Wikipedia. Below is what C# code that utilizes memcached would look like, sans exception handling code

    public ArrayList GetFriends(int user_id){

        ArrayList friends = (ArrayList) myCache.Get("friendslist:" + user_id);

        if(friends == null){
            friends = new ArrayList();

            // Open the connection
            dbConnection.Open();

            SqlCommand cmd = new SqlCommand("select friend_id from friends_list where owner_id=" + user_id, dbConnection);

            SqlDataReader reader = cmd.ExecuteReader();

            // Add each friend ID to the list
            while (reader.Read()){
                friends.Add(reader.GetInt32(0));
            }

            reader.Close();
            dbConnection.Close();

            // Cache the list so subsequent requests skip the database
            myCache.Set("friendslist:" + user_id, friends);
        }

        return friends;
    }

    public void AddFriend(int user_id, int new_friend_id){

        // Open the connection
        dbConnection.Open();

        SqlCommand cmd = new SqlCommand("insert into friends_list (owner_id, friend_id) values (" + user_id + "," + new_friend_id + ")", dbConnection);
        cmd.ExecuteNonQuery();

        // Remove key from cache since the friends list has been updated
        myCache.Delete("friendslist:" + user_id);

        dbConnection.Close(); 
    }

    The benefits of using the cache should be pretty obvious. I no longer need to hit the database after the first request to retrieve the user's friend list, which means faster performance in servicing the request and less I/O. memcached automatically handles purging items out of the cache when it hits the size limit and also decides which cache servers should hold individual key<->value pairs.

    I hang out with a number of Web developers on the WISC platform and I don't think I've ever heard anyone mention memcached or anything like it. In fact, I couldn't find a mention of it on Microsoft employee blogs, ASP.NET developer blogs or on MSDN. So I wondered what the average WISC developer uses as their in-memory caching solution.

    After looking around a bit, I came to the conclusion that most WISC developers use the built-in ASP.NET caching features. ASP.NET provides a number of in-memory caching features including a Cache class which provides a similar API to memcached, page directives for caching portions of the page or the entire page and the ability to create dependencies between cached objects and the files or database tables/rows that they were populated from via the CacheDependency and SqlCacheDependency classes. Although some of these features are also available in various Open Source web development frameworks such as Ruby on Rails + memcached, none give as much functionality out of the box as ASP.NET or so it seems.

    Below is what the code for the GetFriends and AddFriend methods would look like using the built-in ASP.NET caching features

    public ArrayList GetFriends(int user_id){

        ArrayList friends = (ArrayList) Cache.Get("friendslist:" + user_id);

        if(friends == null){
            friends = new ArrayList();

            // Open the connection
            dbConnection.Open();

            SqlCommand cmd = new SqlCommand("select friend_id from friends_list where owner_id=" + user_id, dbConnection);

            SqlCacheDependency dependency = new SqlCacheDependency(cmd);
            SqlDataReader reader = cmd.ExecuteReader();

            // Add each friend ID to the list
            while (reader.Read()){
                friends.Add(reader.GetInt32(0));
            }

            reader.Close();
            dbConnection.Close();

            // Insert friends list into cache with associated dependency
            Cache.Insert("friendslist:" + user_id, friends, dependency);
        }

        return friends;
    }

    public void AddFriend(int user_id, int new_friend_id){

        // Open the connection
        dbConnection.Open();

        SqlCommand cmd = new SqlCommand("insert into friends_list (owner_id, friend_id) values (" + user_id + "," + new_friend_id + ")", dbConnection);
        cmd.ExecuteNonQuery();

        /* no need to remove from cache because SqlCacheDependency takes care of that automatically */
        // Cache.Remove("friendslist:" + user_id);

        dbConnection.Close();
    }

    Using the SqlCacheDependency class gets around a significant limitation of the ASP.NET Cache class. Specifically, the cache is not distributed. This means that if you have multiple Web front ends, you'd have to write your own code to handle partitioning data and invalidating caches across your various Web server instances. In fact, there are numerous articles showing how to implement such a solution including Synchronizing the ASP.NET Cache across AppDomains and Web Farms by Peter Bromberg and Use Data Caching Techniques to Boost Performance and Ensure Synchronization by David Burgett.

    However, let's consider how SqlCacheDependency is implemented. If you are using SQL Server 7 or SQL Server 2000, then your ASP.NET process polls the database at regular intervals to determine whether the target(s) of the original query have changed. For SQL Server 2005, the database can be configured to send change notifications to the Web servers if the target(s) of the original query change. Either way, the database is doing work to determine if the data has changed. Compared to memcached, this still doesn't seem as efficient as we can get if we want to eke every last bit of performance out of the system, although it does lead to simpler code.
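    To make the polling model concrete, here is a rough sketch of the kind of check the web tier runs against SQL Server 2000 in the polling case. All names and details here are mine, not the actual ASP.NET implementation:

```python
import itertools


class PolledCacheEntry:
    """A cached value plus the table change counter seen when it was cached."""

    def __init__(self, key, value, change_id):
        self.key, self.value = key, value
        self.change_id = change_id


def poll_for_invalidation(cache, get_current_change_id):
    """Evict any entry whose underlying table has changed since caching.
    This is the work the web tier repeats on every polling interval."""
    stale = [k for k, entry in cache.items()
             if get_current_change_id() != entry.change_id]
    for k in stale:
        del cache[k]
    return stale


table_version = itertools.count(1)
current = next(table_version)  # table version is 1 when we populate the cache
cache = {"friendslist:42": PolledCacheEntry("friendslist:42", [7, 9], current)}

current = next(table_version)  # a write to friends_list bumps the version to 2
evicted = poll_for_invalidation(cache, lambda: current)
print(evicted)  # ['friendslist:42']
```

    Even in this toy form you can see the cost: the change counter must be re-read on every poll whether or not anything changed, which is exactly the extra database work memcached avoids by making the application delete keys explicitly on writes.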

    If you are a developer on the WISC platform and are concerned about getting the best performance out of your Web site, you should take a look at memcached for Win32. The most highly trafficked site on the WISC platform is probably MySpace, and in articles about how their platform works, such as Inside, they extol the virtues of moving work out of the database and relying on cache servers.


    Categories: Platforms | Programming | Web Development

    These are my notes from the talk Lessons in Building Scalable Systems by Reza Behforooz.

    The Google Talk team have produced multiple versions of their application. There are

    • a desktop IM client which speaks the Jabber/XMPP protocol.
    • a Web-based IM client that is integrated into GMail
    • a Web-based IM client that is integrated into Orkut
    • an IM widget which can be embedded in iGoogle or in any website that supports embedding Flash.

    Google Talk Server Challenges

    The team has had to deal with a significant set of challenges since the service launched including

    • Support displaying online presence and sending messages for millions of users. Peak traffic is in hundreds of thousands of queries per second with a daily average of billions of messages handled by the system.

    • routing and application logic has to be applied to each message according to the preferences of each user while keeping latency under 100ms.

    • handling surge of traffic from integration with Orkut and GMail.

    • ensuring in-order delivery of messages

    • needing an extensible architecture which could support a variety of clients


    The most important lesson the Google Talk team learned is that you have to measure the right things. Questions like "how many active users do you have" and "how many IM messages does the system carry a day" may be good for evaluating marketshare but are not good questions from an engineering perspective if one is trying to get insight into how the system is performing.

    Specifically, the biggest strain on the system actually turns out to be displaying presence information. The formula for determining how many presence notifications they send out is

    total_number_of_connected_users * avg_buddy_list_size * avg_number_of_state_changes

    Sometimes there are drastic jumps in these numbers. For example, integrating with Orkut increased the average buddy list size since people usually have more friends in a social networking service than they have IM buddies.
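    The formula above is simple enough to sanity-check numerically. The figures below are made up purely for illustration, not Google's actual numbers:

```python
def presence_notifications(connected_users, avg_buddy_list, avg_state_changes):
    """Total presence notifications per the formula in the talk."""
    return connected_users * avg_buddy_list * avg_state_changes


# Hypothetical: the Orkut integration grows the average buddy list from 10 to 25
before = presence_notifications(1_000_000, 10, 6)
after = presence_notifications(1_000_000, 25, 6)
print(after / before)  # 2.5
```

    Note that user count and state-change rate never moved, yet presence traffic multiplied, which is why "how many active users do you have" is the wrong engineering question here.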

    Other lessons learned were

    1. Slowly Ramp Up High Traffic Partners: To see what real world usage patterns would look like when Google Talk was integrated with Orkut and GMail, both services added code to fetch online presence from the Google Talk servers to their pages that displayed a user's contacts without adding any UI integration. This way the feature could be tested under real load without users being aware that there were any problems if there were capacity problems. In addition, the feature was rolled out to small groups of users at first (around 1%).

    2. Dynamic Repartitioning: In general, it is a good idea to divide user data across various servers (aka partitioning or sharding) to reduce bottlenecks and spread out the load. However, the infrastructure should support redistributing these partitions/shards without having to cause any downtime.
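    One common technique for repartitioning without remapping most keys is consistent hashing, sketched below. This is my illustration of the general idea, not necessarily what Google Talk does:

```python
import bisect
import hashlib


class HashRing:
    """Consistent hash ring: each server owns many points on the ring, and a
    key belongs to the server at the first ring point at or after its hash."""

    def __init__(self, servers, replicas=100):
        self.ring = []
        for server in servers:
            for i in range(replicas):  # virtual nodes smooth the distribution
                self.ring.append((self._hash(f"{server}:{i}"), server))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h,)) % len(self.ring)
        return self.ring[idx][1]


ring = HashRing(["shard-a", "shard-b", "shard-c"])
# The same user always maps to the same shard
print(ring.server_for("user:42") == ring.server_for("user:42"))  # True
```

    When a shard is added, only the keys whose ring segment it takes over move, so a live system can grow its cluster without repartitioning everything at once.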

    3. Add Abstractions that Hide System Complexity: Partner services such as Orkut and GMail don't know which data centers contain the Google Talk servers, how many servers are in the Google Talk cluster and are oblivious of when or how load balancing, repartitioning or failover occurs in the Google Talk service.

    4. Understand Semantics of Low Level Libraries: Sometimes low level details can stick it to you. The Google Talk developers found out that using epoll worked better than the poll/select loop because they have lots of open TCP connections but only a relatively small number of them are active at any time.

    5. Protect Against Operational Problems: Review logs and endeavor to smooth out spikes in activity graphs. Limit cascading problems by having logic to back off from using busy or sick servers.

    6. Any Scalable System is a Distributed System: Apply the lessons from the fallacies of distributed computing. Add fault tolerance to all your components. Add profiling to live services and follow transactions as they flow through the system (preferably in a non-intrusive manner). Collect metrics from services for monitoring both for real time diagnosis and offline generation of reports.

    Recommended Software Development Strategies

    Compatibility is very important, so making sure deployed binaries are backwards and forwards compatible is always a good idea. Giving developers access to live servers (ideally public beta servers, not main production servers) will encourage them to test and try out ideas quickly. It also gives them a sense of empowerment. Developers end up making their systems easier to deploy, configure, monitor, debug and maintain when they have a better idea of the end to end process.

    Building an experimentation platform which allows you to empirically test the results of various changes to the service is also recommended.


    Categories: Platforms | Trip Report

    These are my notes from the talk Using MapReduce on Large Geographic Datasets by Barry Brummit.

    Most of this talk was a repetition of the material in the previous talk by Jeff Dean including reusing a lot of the same slides. My notes primarily contain material I felt was unique to this talk.

    A common pattern across a lot of Google services is creating a lot of index files and loading them into memory to make lookups fast. This is also done by the Google Maps team, which has to handle massive amounts of data (e.g. there are over a hundred million roads in North America).

    Below are examples of the kinds of problems the Google Maps has used MapReduce to solve.

    Locating all points that connect to a particular road
    Input: List of roads and intersections
    Map: Create pairs of connected points such as {road, intersection} or {road, road} pairs
    Shuffle: Sort by key
    Reduce: Get list of pairs with the same key
    Output: A list of all the points that connect to a particular road
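    The road/intersection job described above can be sketched in a few lines of single-process code (a toy illustration with made-up road data, not Google's implementation):

```python
from collections import defaultdict

# Each input record pairs a road with a point (intersection) it touches
segments = [("I-90", "Exit 10"), ("I-90", "Exit 11"), ("SR-520", "Exit 10")]

# Map + shuffle: emit {road -> point} pairs and group them by road
groups = defaultdict(list)
for road, point in segments:
    groups[road].append(point)

# Reduce: collect all points that connect to each road
connected = {road: sorted(points) for road, points in groups.items()}
print(connected["I-90"])  # ['Exit 10', 'Exit 11']
```

    The other two jobs below follow the same shape; only the mapper and reducer logic change.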

    Rendering Map Tiles
    Input: Geographic feature list
    Map: Emit each feature on a set of overlapping lat/long rectangles
    Shuffle: Sort by key
    Reduce: Emit tile using data for all enclosed features
    Output: Rendered tiles

    Finding the Nearest Gas Station to an Address within Five Miles
    Input: Graph describing node network with all gas stations marked
    Map: Search five mile radius of each gas station and mark distance to each node
    Shuffle: Sort by key
    Reduce: For each node, emit the path and gas station with the shortest distance
    Output: Graph marked with nearest gas station to each node

    When issues are encountered in a MapReduce job, developers can debug them by running their MapReduce applications locally on their desktops.

    Developers who would like to harness the power of a several hundred to several thousand node cluster but do not work at Google can try Hadoop, the Apache open source implementation of MapReduce.

    Recruiting Sales Pitch

    [The conference was part recruiting event so some of the speakers ended their talks with a recruiting spiel. - Dare]

    The Google infrastructure is the product of Google's engineering culture, which has the following ten characteristics

    1. A single source code repository for all Google code
    2. Developers can check in fixes for any Google product
    3. You can build any Google product in three steps (get, configure, make)
    4. Uniform coding standards across the company
    5. Mandatory code reviews before check-ins
    6. Pervasive unit testing
    7. Tests run nightly, emails sent to developers if any failures
    8. Powerful tools that are shared company-wide
    9. Rapid project cycles, developers change projects often, 20% time
    10. Peer driven review process, flat management hierarchy


    Q: Where are intermediate results from map operations stored?
    A: In BigTable or GFS

    Q: Can you use MapReduce incrementally? For example, when new roads are built in North America do we have to run MapReduce over the entire data set or can we factor in only the changed data?
    A: Currently, you'll have to process the entire data stream again. However this is a problem that is the target of lots of active research at Google since it affects a lot of teams.


    Categories: Platforms | Trip Report

    These are my notes from the keynote session MapReduce, BigTable, and Other Distributed System Abstractions for Handling Large Datasets by Jeff Dean.

    The talk was about the three pillars of Google's data storage and processing platform: GFS, BigTable and MapReduce.


    The developers at Google decided to build their own custom distributed file system because they felt that they had unique requirements. These requirements included

    • scalability to thousands of network nodes
    • massive read/write bandwidth requirements
    • ability to handle large blocks of data which are gigabytes in size
    • extremely efficient distribution of operations across nodes to reduce bottlenecks

    One benefit the developers of GFS had was that since it was an in-house application they could control the environment, the client applications and the libraries a lot better than in the off-the-shelf case.

    GFS Server Architecture

    There are two server types in the GFS system.

    Master servers
    These keep the metadata on the various data files (in 64MB chunks) within the file system. Client applications talk to the master servers to perform metadata operations on files or to locate the actual chunk server that contains the actual bits on disk.
    Chunk servers
    These contain the actual bits on disk and can be considered to be dumb file servers. Each chunk is replicated across three different chunk servers to create redundancy in case of server crashes. Client applications retrieve data files directly from chunk servers once they've been directed to the chunk server which contains the chunk they want by a master server.

    There are currently over 200 GFS clusters at Google, some of which have over 5000 machines. They now have pools of tens of thousands of machines retrieving data from GFS clusters that hold as much as 5 petabytes of storage, with read/write throughput of over 40 gigabytes/second across a cluster.


    At Google they do a lot of processing of very large amounts of data. In the old days, developers would have to write their own code to partition the large data sets, checkpoint code and save intermediate results, and handle failover in case of server crashes, in addition to actually writing the business logic for the data processing they wanted to do, which could have been something straightforward like counting the occurrence of words in various Web pages or grouping documents by content checksums. The decision was made to reduce this duplication of effort and complexity by building a platform technology that everyone at Google could use, one which handled all the generic tasks of working on very large data sets. So MapReduce was born.

    MapReduce is an application programming interface for processing very large data sets. Application developers feed in a key/value pair (e.g. a {URL, HTML content} pair), then use the map function to extract relevant information from each record, which should produce a set of intermediate key/value pairs (e.g. {word, 1} pairs for each time a word is encountered), and finally the reduce function merges the intermediate values associated with the same key to produce the final output (e.g. {word, total count of occurrences} pairs).

    A developer only has to write the specific map and reduce operations for their data sets, which could be as little as 25 to 50 lines of code, while the MapReduce infrastructure deals with parallelizing the task and distributing it across different machines, handling machine failures and error conditions in the data, optimizations such as moving computation close to the data to reduce I/O bandwidth consumed, providing system monitoring and making the service scalable across hundreds to thousands of machines.
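    The classic word-count example of this programming model can be sketched in Python (Google's infrastructure is C++; this single-process runner only stands in for the distributed machinery, and all names here are illustrative):

```python
from collections import defaultdict

def map_words(url, html):
    """Map: emit an intermediate {word, 1} pair per occurrence."""
    for word in html.split():
        yield word, 1

def reduce_counts(word, counts):
    """Reduce: merge intermediate values sharing a key into a total."""
    return word, sum(counts)

def run_mapreduce(records):
    # The real infrastructure parallelizes these phases across machines;
    # this sketch only shows the programming model the developer sees.
    intermediate = defaultdict(list)
    for url, html in records:
        for word, count in map_words(url, html):
            intermediate[word].append(count)
    return dict(reduce_counts(w, c) for w, c in intermediate.items())

pages = [("a.html", "to be or not to be"), ("b.html", "to do")]
print(run_mapreduce(pages))
# {'to': 3, 'be': 2, 'or': 1, 'not': 1, 'do': 1}
```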

    Currently, almost every major product at Google uses MapReduce in some way. There are 6,000 MapReduce applications checked into the Google source tree, with hundreds of new applications written per month. To illustrate its ease of use, a graph of new MapReduce applications checked in over time shows a spike every summer as interns show up and create a flood of new applications.

    MapReduce Server Architecture

    There are three server types in the MapReduce system.

    Master server
    This assigns user tasks to map and reduce servers as well as keeps track of the state of these tasks.
    Map Servers
    Accepts user input and performs map operation on them then writes the results to intermediate files
    Reduce Servers
    Accepts intermediate files produced by map servers and performs reduce operation on them.

    One of the main issues they have to deal with in the MapReduce system is the problem of stragglers. Stragglers are servers that run slower than expected for one reason or another. Sometimes stragglers may be due to hardware issues (e.g. a bad hard drive controller causes reduced I/O throughput) or may just be a result of the server running too many complex jobs which utilize too much CPU. To counter the effects of stragglers, they now assign the same job to multiple servers, which counterintuitively ends up making tasks finish quicker. Another clever optimization is that all data transferred between map and reduce servers is compressed; since the servers usually aren't CPU bound, compression/decompression costs are a small price to pay for bandwidth and I/O savings.
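    The backup-task idea can be illustrated with a single-machine Python sketch that duplicates a simulated task across threads and takes whichever result arrives first. This is a toy under stated assumptions, not how Google's scheduler actually works:

```python
import concurrent.futures
import time

def run_with_backup(task, replicas=2):
    """Run the same task on `replicas` workers and return whichever
    result arrives first; duplicated work is the price paid for latency."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=replicas)
    futures = [pool.submit(task, i) for i in range(replicas)]
    done, _ = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    pool.shutdown(wait=False)  # don't block on the straggler
    return next(iter(done)).result()

def simulated_task(worker_id):
    # Worker 0 plays the straggler (bad disk controller, busy CPU, ...).
    time.sleep(0.5 if worker_id == 0 else 0.01)
    return f"done by worker {worker_id}"

start = time.time()
result = run_with_backup(simulated_task)
print(result, f"in {time.time() - start:.2f}s")  # finishes well before 0.5s
```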


    After the creation of GFS, the need for structured and semi-structured storage that went beyond opaque files became clear. Examples of situations that could benefit from this included

    • associating metadata with a URL such as when it was crawled, its PageRank™, contents, links to it, etc
    • associating data with a user such as the user's search history and preferences
    • geographical data such as information about roads and satellite imagery

    The system required would need to be able to scale to storing billions of URLs, hundreds of terabytes of satellite imagery, preferences associated with hundreds of millions of users and more. It was immediately obvious that this wasn't a task for an off-the-shelf commercial database system due to the scale requirements and the fact that such a system would be prohibitively expensive even if it did exist. In addition, an off-the-shelf system would not be able to make optimizations based on the underlying GFS file system. Thus BigTable was born.

    BigTable is not a relational database. It does not support joins nor does it support rich SQL-like queries. Instead it is more like a multi-level map data structure. It is a large scale, fault tolerant, self managing system with terabytes of memory and petabytes of storage space which can handle millions of reads/writes per second. BigTable is now used by over sixty Google products and projects as the platform for storing and retrieving structured data.

    The BigTable data model is fairly straightforward, each data item is stored in a cell which can be accessed using its {row key, column key, timestamp}. The need for a timestamp came about because it was discovered that many Google services store and compare the same data over time (e.g. HTML content for a URL). The data for each row is stored in one or more tablets which are actually a sequence of 64KB blocks in a data format called SSTable.
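    That multi-level map can be sketched as a toy Python class; the class and method names are invented for illustration and nothing here reflects BigTable's actual implementation, only the shape of its data model:

```python
import time

class TinyTable:
    """Toy model of BigTable's data model: a map from
    {row key, column key, timestamp} to an uninterpreted value."""

    def __init__(self):
        self.cells = {}  # (row, column) -> list of (timestamp, value)

    def put(self, row, column, value, timestamp=None):
        versions = self.cells.setdefault((row, column), [])
        versions.append((timestamp or time.time(), value))
        versions.sort()  # keep versions ordered by timestamp

    def get(self, row, column):
        """Return the most recent version of a cell, or None."""
        versions = self.cells.get((row, column), [])
        return versions[-1][1] if versions else None

# Services can store and compare the same data over time, e.g. the
# HTML content crawled for a URL at two different moments:
t = TinyTable()
t.put("com.example/index", "contents", "<html>v1</html>", timestamp=1)
t.put("com.example/index", "contents", "<html>v2</html>", timestamp=2)
print(t.get("com.example/index", "contents"))  # <html>v2</html>
```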

    BigTable Server Architecture

    There are three primary server types of interest in the BigTable system.

    Master servers
    Assigns tablets to tablet servers, keeps track of where tablets are located and redistributes tasks as needed.
    Tablet servers
    Handle read/write requests for tablets and split tablets when they exceed size limits (usually 100MB - 200MB). If a tablet server fails, then 100 tablet servers each pick up one new tablet and the system recovers.
    Lock servers
    These are instances of the Chubby distributed lock service. Lots of actions within BigTable require acquisition of locks including opening tablets for writing, ensuring that there is no more than one active Master at a time, and access control checking.

    There are a number of optimizations which applications can take advantage of in BigTable. One example is the concept of locality groups. For example, some of the simple metadata associated with a particular URL which is typically accessed together (e.g. language, PageRank™ , etc) can be physically stored together by placing them in a locality group while other columns (e.g. content) are in a separate locality group. In addition, tablets are usually kept in memory until the machine is running out of memory before their data is written to GFS as an SSTable and a new in memory table is created. This process is called compaction. There are other types of compactions where in memory tables are merged with SSTables on disk to create an entirely new SSTable which is then stored in GFS.
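    The compaction process described above can be sketched as follows. The sizes, names and merge policy are illustrative assumptions; the point is only the flow of writes from an in-memory table into immutable sorted files:

```python
class TinyTabletStore:
    """Sketch of compaction: buffer writes in an in-memory table, then
    freeze it as an immutable sorted run (an "SSTable") once it grows
    too large. The real system writes these runs to GFS."""

    def __init__(self, memtable_limit=3):
        self.memtable = {}
        self.sstables = []  # immutable sorted key/value runs
        self.memtable_limit = memtable_limit

    def write(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self.minor_compaction()

    def minor_compaction(self):
        # Freeze the memtable into a sorted run and start a fresh one.
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}

    def merging_compaction(self):
        # Merge all runs plus the memtable into one new SSTable;
        # later writes shadow earlier ones.
        merged = {}
        for run in self.sstables:
            merged.update(dict(run))
        merged.update(self.memtable)
        self.sstables = [sorted(merged.items())]
        self.memtable = {}

store = TinyTabletStore()
for i in range(7):
    store.write(f"row{i}", f"value{i}")
store.merging_compaction()
print(len(store.sstables), len(store.sstables[0]))  # 1 7
```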

    Current Challenges Facing Google's Infrastructure

    Although Google's infrastructure works well at the single cluster level, there are a number of areas with room for improvement including

    • support for geo-distributed clusters
    • single global namespace for all data since currently data is segregated by cluster
    • more and better automated migration of data and computation
    • lots of consistency issues when you couple wide area replication with network partitioning (e.g. keeping services up even if a cluster goes offline for maintenance or due to some sort of outage).

    Recruiting Sales Pitch

    [The conference was part recruiting event so some of the speakers ended their talks with a recruiting spiel - Dare]

    Having access to lots of data and computing power is a geek playground. You can build cool, seemingly trivial apps on top of the data which turn out to be really useful, such as Google Trends and catching misspellings of "britney spears". Another example of the kinds of apps you can build when you have enough data is treating the problem of language translation as a statistical modeling problem, which turns out to be one of the most successful methods around.

    Google hires smart people and lets them work in small teams of 3 to 5 people. They can get away with teams being that small because they have the benefit of an infrastructure that takes care of all the hard problems so devs can focus on building interesting, innovative apps.


    Categories: Platforms | Trip Report

    In his post Implementing Silverlight in 21 Days Miguel De Icaza writes

    The past 21 days have been some of the most intense hacking days that I have ever had and the same goes for my team that worked 12 to 16 hours per day every single day --including weekends-- to implement Silverlight for Linux in record time. We call this effort Moonlight.

    Needless to say, we believe that Silverlight is a fantastic development platform, and its .NET-based version is incredibly interesting and as Linux/Unix users we wanted to both get access to content produced with it and to use Linux as our developer platform for Silverlight-powered web sites.

    His post is a great read for anyone who geeks out over phenomenal feats of hackery. Going over the Moonlight Project Page it's interesting to note how useful blog posts from Microsoft employees were in getting Miguel's team to figure out the internal workings of Silverlight.

    In addition, it seems Miguel also learned a lot from hanging out with Jason Zander and Scott Guthrie which influenced some of the design of Moonlight. It's good to see Open Source developers working on Linux having such an amicable relationship with Microsoft developers.

    Congratulations to Mono team, it looks like we will have Silverlight on Linux after all. Sweet.


    Categories: Platforms | Programming | Web Development

    I read Marc Andreessen's Analyzing the Facebook Platform, three weeks in and although there's a lot to agree with, I was also confused by how he defined a platform in the Web era. After a while, it occurred to me that Marc Andreessen's Ning is part of an increased interest by a number of Web players in chasing after what I like to call the GoDaddy 2.0 business model. Specifically, a surprisingly disparate group of companies seem to think that the business of providing people with domain names and free Web hosting so they can create their own "Web 2.0" service is interesting. None of these companies have actually come out and said it, but whenever I think of Ning or Amazon's S3+EC2, I see a bunch of services that seem to be diving backwards into the Web hosting business dominated by companies like GoDaddy.

    Reading Marc's post, I realized that I didn't think of the facilities that a Web hosting provider gives you as "a platform". When I think of a Web platform, I think of an existing online service that enables developers to either harness the capabilities of that service or access its users in a way that allows the developers to add value to the user experience. The Facebook platform is definitely in this category. On the other hand, the building blocks it takes to actually build a successful online service, including servers, bandwidth and software building blocks (LAMP/RoR/etc), can also be considered a platform. This is where Ning and Amazon's S3+EC2 come in. With that context, let's look at the parts of Marc's Analyzing the Facebook Platform, three weeks in which moved me to write this. Marc writes

    Third, there are three very powerful potential aspects of being a platform in the web era that Facebook does not embrace.

    The first is that Facebook itself is not reprogrammable -- Facebook's own code and functionality remains closed and proprietary. You can layer new code and functionality on top of what Facebook's own programmers have built, but you cannot change the Facebook system itself at any level.

    There doesn't seem to be anything fundamentally wrong with this. When you look at some of the most popular platforms in the history of the software industry such as Microsoft Windows, Microsoft Office, Mozilla Firefox or Java you notice that none of them allow applications built on them to fundamentally change what they are. This isn't the goal of an application platform. More specifically, when one considers platforms that were already applications such as Microsoft Office or Mozilla Firefox it is clear that the level of extensibility allowed is intended to allow improving the user experience while utilizing the application and thus make the platform more sticky as opposed to reprogramming the core functionality of the application.

    The second is that all third-party code that uses the Facebook APIs has to run on third-party servers -- servers that you, as the developer provide. On the one hand, this is obviously fair and reasonable, given the value that Facebook developers are getting. On the other hand, this is a much higher hurdle for development than if code could be uploaded and run directly within the Facebook environment -- on the Facebook servers.

    This is one unfortunate aspect of Web development which tends to harm hobbyists. Although it is possible for me to find the tools to create and distribute desktop applications for little or no cost, the same cannot be said about Web software. At the very least I need a public Web server and the ability to pay for the hosting bills if my service gets popular. This is one of the reasons I can create and distribute RSS Bandit to thousands of users as a part time project with no cost to me except my time but cannot say the same if I wanted to build something like Google Reader.

    This is a significant barrier to adoption of certain Web platforms which is a deal breaker for many developers who potentially could add a lot of value. Unfortunately, building an infrastructure that allows you to run arbitrary code from random Web developers and gives these untrusted applications database access without harming your core service costs more in time and resources than most Web companies can afford. For now.

    The third is that you cannot create your own world -- your own social network -- using the Facebook platform. You cannot build another Facebook with it.

    See my response to his first point. The primary reason for the existence of the Facebook platform  is to harness the creativity and resources of outside developers in benefiting the social networks within Facebook. Allowing third party applications to fracture this social network or build competing services doesn't benefit the Facebook application. What Facebook offers developers is access to an audience of engaged users and in exchange these developers make Facebook a more compelling service by building cool applications on it. That way everybody wins.

    An application that takes off on Facebook is very quickly adopted by hundreds of thousands, and then millions -- in days! -- and then ultimately tens of millions of users.

    Unless you're already operating your own systems at Facebook levels of scale, your servers will promptly explode from all the traffic and you will shortly be sending out an email like this.
    The implication is, in my view, quite clear -- the Facebook Platform is primarily for use by either big companies, or venture-backed startups with the funding and capability to handle the slightly insane scale requirements. Individual developers are going to have a very hard time taking advantage of it in useful ways.

    I think Marc is over blowing the problem here, if one can even call it a problem. A fundamental truth of building Web applications is that if your service is popular then you will eventually hit scale problems. This was happening last century during "Web 1.0" when eBay outages were regular headlines and website owners used to fear the Slashdot effect. Until the nature of the Internet is fundamentally changed, this will always be the case.

    However, none of this means you can't build a Web application unless you have VC money or are a big company. Instead, you should just have a strategy for how to deal with keeping your servers up and running if your service becomes a massive hit with users. It's a good problem to have, but one needs to remember that most Web applications will never have that problem. ;)

    When you develop a new Facebook application, you submit it to the directory and someone at Facebook Inc. approves it -- or not.

    If your application is not approved for any reason -- or if it's just taking too long -- you apparently have the option of letting your application go out "underground". This means that you need to start your application's proliferation some other way than listing it in the directory -- by promoting it somewhere else on the web, or getting your friends to use it.

    But then it can apparently proliferate virally across Facebook just like an approved application.

    I think the viral distribution model is probably one of the biggest innovations in the Facebook platform. Announcing to my friends whenever I install a new application so that they can try it out themselves is pretty killer. This feature probably needs to be fine tuned so I don't end up recommending or being recommended bad apps like X Me, but that is such a minor nitpick. This is potentially a game changing move in the world of software distribution. I mean, can you imagine if you got a notification whenever one of your friends discovered a useful Firefox add-on or a great Sidebar gadget? It definitely beats using TechCrunch as your source of cool new apps.


    Categories: Platforms | Social Software

    Earlier this week, there were a flurry  of blog posts about the announcement of the Facebook platform. I've taken a look at the platform and it does seem worthy of the praise that has been heaped on it thus far.  

    What Facebook has announced is their own version of a widget platform. However, whereas most social networking sites like MySpace, Bebo and Windows Live Spaces treat widgets as second-class citizens that exist within some tiny box on the user's profile page, widgets hosted on Facebook are allowed to deeply integrate into the user experience. The page Anatomy of a Facebook Application shows ten integration points for hosted widgets, including integration into the left nav, showing up in the user's news feed and adding themselves as action links within the user's profile. This is less of a widget platform and more like the kind of plug-in architecture you see in Visual Studio, Microsoft Office or the Firefox extension infrastructure. This is an unprecedented level of integration into a Web site being offered to third party developers by Facebook.

    Widgets for the Facebook platform have to be written in a proprietary markup language called Facebook Markup Language (FBML). The markup language is a collection of "safe" HTML tags like blockquote and table, and custom Facebook-only tags like fb:create-button, fb:friend-selector and fb:if-is-friends-with-viewer. The most interesting tag I saw was fb:silverlight which is currently undocumented but probably does something similar to fb:swf. Besides restricting HTML to a "safe" set of tags there are a number of other security measures such as disallowing event handlers like onclick, stripping Javascript from CSS style attributes and requesting images referenced in FBML via their proxy server so Web bugs from 3rd party applications can't track their users.

    Facebook widgets can access user data by using either the Facebook REST API or Facebook Query Language (FQL) which is a SQL-like query language for making requests from the API for developers who think constructing RESTful requests is too cumbersome.
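    For example, fetching the names of a user's friends can be expressed as a single FQL query. The user and friend tables and their uid, name, uid1 and uid2 fields follow Facebook's published schema at the time; the uid value below is made up:

```sql
-- Roughly equivalent to chaining friends.get and users.getInfo REST calls:
SELECT uid, name
FROM user
WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 = 12345)
```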

    Color me impressed. It is clear the folks at Facebook are brilliant on several levels. Not only have they built a killer social software application but they've also pulled off one of the best platform plays I've seen on the Web yet. Kudos all around.


    Categories: Social Software | Platforms