Earlier this week, there was a flurry of blog posts about the announcement of the Facebook platform. I've taken a look at the platform and it does seem worthy of the praise that has been heaped on it thus far.

What Facebook has announced is their own version of a widget platform. However, whereas most social networking sites like MySpace, Bebo and Windows Live Spaces treat widgets as second-class citizens that exist within some tiny box on the user's profile page, widgets hosted on Facebook are allowed to integrate deeply into the user experience. The page Anatomy of a Facebook Application shows ten integration points for hosted widgets, including integration into the left nav, showing up in the user's news feed and adding themselves as action links within the user's profile. This is less of a widget platform and more like the kind of plug-in architecture you see in Visual Studio, Microsoft Office or the Firefox extension infrastructure. Facebook is offering third-party developers an unprecedented level of integration into its Web site.

Widgets for the Facebook platform have to be written in a proprietary markup language called Facebook Markup Language (FBML). The markup language is a collection of "safe" HTML tags like blockquote and table, plus custom Facebook-only tags like fb:create-button, fb:friend-selector and fb:if-is-friends-with-viewer. The most interesting tag I saw was fb:silverlight, which is currently undocumented but probably does something similar to fb:swf. Besides restricting HTML to a "safe" set of tags, there are a number of other security measures such as disallowing event handlers like onclick, stripping JavaScript from CSS style attributes and requesting images referenced in FBML via their proxy server so that Web bugs from third-party applications can't track Facebook's users.
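
To make the sanitization idea concrete, here is a rough Python sketch of the general allow-list technique described above. This is not Facebook's actual implementation; the tag set, the attribute checks and the sample markup are simplified assumptions purely for illustration, and a real sanitizer does a lot more than this.

# Rough sketch of allow-list markup sanitization, in the spirit of FBML:
# keep only approved tags, drop on* event handler attributes and any
# attribute value containing "javascript:". The tag set is illustrative,
# not FBML's real list.
from html.parser import HTMLParser

ALLOWED_TAGS = {"blockquote", "table", "tr", "td", "p", "a", "b", "i"}

def is_safe_attr(name, value):
    if name.lower().startswith("on"):      # onclick, onmouseover, ...
        return False
    if "javascript:" in value.lower():     # script smuggled into href/style
        return False
    return True

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED_TAGS:
            return                         # drop unapproved tags entirely
        safe = " ".join(f'{k}="{v}"' for k, v in attrs
                        if v is not None and is_safe_attr(k, v))
        self.out.append(f"<{tag} {safe}>" if safe else f"<{tag}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def sanitize(markup):
    s = Sanitizer()
    s.feed(markup)
    return "".join(s.out)

# Example: the onclick handler and the img tag are stripped, the rest survives.
print(sanitize('<table onclick="steal()"><tr><td><b>Hi</b>'
               '<img src="http://tracker.example/bug.gif"></td></tr></table>'))
# -> <table><tr><td><b>Hi</b></td></tr></table>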

Facebook widgets can access user data using either the Facebook REST API or Facebook Query Language (FQL), a SQL-like query language layered on top of the API for developers who find constructing RESTful requests too cumbersome.
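
Here is a minimal Python sketch showing the same data being fetched both ways, once through a named API method and once through an FQL query. The endpoint, method names, parameters and signing scheme are recalled from the 2007-era documentation and should be treated as illustrative rather than authoritative; the keys are placeholders.

# Illustrative sketch of calling the legacy Facebook REST API two ways.
# Endpoint, parameter names and the md5 signing scheme are from memory of
# the original API docs and may not be exact.
import hashlib
import time
import urllib.parse
import urllib.request

API_URL = "http://api.facebook.com/restserver.php"   # legacy REST endpoint
API_KEY = "YOUR_API_KEY"                              # placeholders
SECRET = "YOUR_APP_SECRET"
SESSION_KEY = "YOUR_SESSION_KEY"

def call_method(method, **params):
    """Sign and POST a request to the legacy REST API, returning raw XML."""
    params.update({
        "method": method,
        "api_key": API_KEY,
        "session_key": SESSION_KEY,
        "call_id": str(time.time()),
        "v": "1.0",
    })
    # Old-style signature: md5 of the sorted "key=value" pairs plus the secret.
    raw = "".join(f"{k}={params[k]}" for k in sorted(params)) + SECRET
    params["sig"] = hashlib.md5(raw.encode("utf-8")).hexdigest()
    data = urllib.parse.urlencode(params).encode("utf-8")
    with urllib.request.urlopen(API_URL, data) as resp:
        return resp.read().decode("utf-8")

# Plain REST-style method call: fetch a user's name.
print(call_method("facebook.users.getInfo", uids="12345", fields="name"))

# The same data via FQL, for developers who prefer a SQL-like query.
print(call_method("facebook.fql.query",
                  query="SELECT name FROM user WHERE uid = 12345"))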

Color me impressed. It is clear the folks at Facebook are brilliant on several levels. Not only have they built a killer social software application but they've also pulled off one of the best platform plays I've seen on the Web yet. Kudos all around.


 

Categories: Social Software | Platforms

A little while ago Facebook added the News Feed feature, which is basically a river of news on your Facebook home page showing what the people in your social network are up to. What I hadn't noticed until today is that sometimes there are ads in there masquerading as updates from people in your friends list. Here's what I saw when I logged in today to take a look at the much-ballyhooed Facebook developer platform.

It's one thing to inject ads into a social experience like Facebook has done here; it's another thing for the ad to contain disgusting imagery from a big-budget snuff film. Seriously...WTF?


 

Categories: Rants | Social Software

May 26, 2007
@ 01:00 AM

Mike Champion has a blog post entitled WS-* and the Hype Cycle where he writes

There's a persistent theme talked up by WS-*ophobes that it's all just a fad, rapidly sliding down toward the "Trough of Disillusionment" in the Gartner Hype Cycle. I've come to the opposite conclusion after six weeks back in the web services world. The WS technologies are taking hold, deep down in the infrastructure, doing the mundane but mission critical work for which they were designed. Let's consider one example, WS-Management, which I had barely heard of when I started in CSD Interoperability.
...
At first glance this appears to duplicate widely deployed bits of the web.  For example, it depends on the oft-criticized WS-Transfer spec, and some are advocating using Atom and the Atom Publishing Protocol rather than WS-* for describing collections and subscribing to notifications of their contents.  On closer examination, WS-Management is widely used today in situations where the web-scale alternatives really don't fit, such as deep within operating systems or in the firmware of chips.
...
In short, from what I have learned recently, the trajectory of WS-* isn't pointing toward oblivion, it looks headed toward the same sort of pragmatic ubiquity that XML has achieved. That's not to say all is rosy; there is lots of confusion and dissension, again just like there was in the early days of the Web and XML. Likewise, "ubiquity" doesn't mean that the WS technologies are the best or the only option across the board, but that they are increasingly available as a very viable option when developers need protocol-neutrality, security, identity, management capability, etc.

I was going to post a response when I first read his post on Monday, but I decided to wait a few days because I couldn't think of anything nice to say about it and Mike Champion is someone I consider a friend. After a few days of calm reflection, the only thing I can say about this post is...So what?

It seems Mike is trying to argue that, contrary to popular belief, WS-* technologies are still useful for something. Seriously, who cares? The general craptacular nature of WS-* technologies was a major concern when people were marketing them as a way to build services on the Web. Now it is quite obvious to anyone with a clue about Web development that this is not the case. None of the major Web companies or "Web 2.0" sites is taking a big bet on WS-* Web services for providing APIs on the Web; instead, they are all providing RESTful XML or JSON Web services.

Now if WS-* technologies want to own the niche of one proprietary platform technology talking to another in a homogeneous, closed environment...who cares? Good riddance, I say. Just keep that shit off the Web.


 

Categories: XML Web Services

Duncan Riley over at TechCrunch lets us know in his post Digg API Visualization Contest Delivers Apollo Powered Applications; specifically

The Digg API Visualization Contest held to celebrate the launch of the Digg API is now in its final stages with 10 shortlisted candidates.

Four of the ten finalists are Adobe Apollo based applications, remarkable for a platform launched just over 2 months ago.

Agreed, it's pretty remarkable to see so many desktop applications in a Web mashup contest. As John Dowdell warns in his post Apollo ain't casual, an Apollo application is a desktop app with all the security implications that come with that. So it is definitely impressive, and a little scary, to see so many people downloading random executables off the Web and voting for them in what you'd expect to be a Web-based mashup contest.

It's also somewhat interesting that all the apps seem to be written using some variation of the Flash platform: Apollo, Flex or Flash Lite. I guess it just goes to show that for snazzy data visualization, you really can't beat Flash today. On reading the contest rules, it seems the contest is sponsored by Adobe, given that the prizes are primarily Adobe products and submissions are required to be written in Flash. Too bad; it would have been interesting to see some AJAX/DHTML or Silverlight visualizations going up against the Flash apps.


 

Categories: Programming

I've been reading the Google Data APIs blog for a few months and have been impressed at how Google has been quietly executing on the plan of having a single uniform RESTful Web service interface to their various services. If you are unfamiliar with GData, you can read the GData overview documentation. In a nutshell, GData is Google's implementation of the Atom 1.0 syndication format and the Atom Publishing Protocol with some extensions. It is a RESTful XML-based protocol for reading and writing information on the Web. Currently one can use GData to access and manipulate data from a number of Google services, including Blogger, Google Calendar and Picasa Web Albums, with more on the way.

Contrast this with the API efforts on the Yahoo! Developer Network or Windows Live Dev, which are an inconsistent glop of incompatible RESTful protocols, SOAP APIs and XML-RPC methods all under the same roof. In the Google case, an app that can read and write data to Blogger can also do so to Google Calendar or Picasa Web Albums with minimal changes. This is not the case when using APIs provided by two Yahoo! services (e.g. Flickr and del.icio.us) or two Windows Live services (e.g. Live Search and Windows Live Spaces), which use completely different protocols, object models and authentication mechanisms even though they are provided by the same vendor.

One way to smooth over this disparity is to provide client libraries that aim to present a uniform interface to all of a vendor's services. However, even in that case, the law of leaky abstractions holds, so the fact that these services use different protocols, object models and authentication mechanisms ends up surfacing in the client library. Secondly, not only is it now possible to create a single library that knows how to talk to all of Google's existing and future Web services since they all use GData, it is also a lot easier to provide "tooling" for these services than it would be for Yahoo's family of Web services given that they share a simple and uniform interface. So far none of the big Web vendors has done a good job of providing deep integration with popular development environments like Eclipse or Visual Studio. However I suspect that when they do, Google will have an easier time than the others due to the simplicity and uniformity of GData.
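
A minimal sketch of what that uniformity buys you: because every GData service speaks Atom, the same handful of lines of Python can list entries from any of them, and only the feed URL changes. The feed URLs below are placeholder examples rather than guaranteed exact paths.

# Minimal sketch: one generic Atom feed reader works against any GData
# service, since they all return Atom 1.0. The URLs are placeholders.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def list_entry_titles(feed_url):
    """GET an Atom feed and return the titles of its entries."""
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.parse(resp).getroot()
    return [entry.findtext(ATOM + "title") for entry in root.findall(ATOM + "entry")]

# The same function handles a Blogger feed or a Calendar feed; a Flickr or
# del.icio.us client would need service-specific code instead.
for url in [
    "http://example.blogspot.com/feeds/posts/default",                      # hypothetical Blogger blog
    "http://www.google.com/calendar/feeds/user%40example.com/public/full",  # hypothetical public calendar
]:
    print(url, list_entry_titles(url))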


 

May 25, 2007
@ 09:19 PM

Via Robert Scoble's blog post entitled Microsoft postpones PDC we learn

Mary Jo Foley (she’s been covering Microsoft for a long time) has the news: Microsoft has postponed the PDC that it had planned for later this year.

The PDC stands for “Professional Developer’s Conference.” It happens only when Microsoft knows it’ll have a major new platform to announce. Usually a new version of Windows or a new Internet strategy.

So, this means a couple of things: no new Windows and no major new Internet strategy this year.
...
Now that Google, Amazon, and Apple are shipping platforms that are more and more interesting to Microsoft’s developer community, Microsoft has to play a different game. One where they can’t keep showing off stuff that never ships. The stakes are going up in the Internet game and Microsoft doesn’t seem to have a good answer to what’s coming next.

Interesting analysis from Robert; I agree with him that Microsoft no longer has the luxury of demoing platforms it can't or won't ship, given how competent a number of its competitors have shown themselves to be on the platform front. The official Microsoft cancellation notice states

As the PDC is the definitive developer event focused on the future of the Microsoft platform, we try to align it to be in front of major platform milestones. By this fall, however, upcoming platform technologies including Windows Server 2008, SQL Server codenamed "Katmai", Visual Studio codenamed "Orcas" and Silverlight will already be in developers’ hands and approaching launch

This makes sense; all the interesting near-term stuff has already been announced at other recent events. In fact, when you think about it, it is kinda weird for Microsoft to have one conference for showing next generation Web platform stuff (i.e. MIX) and another for showing general next generation platform stuff (i.e. PDC). Especially since the Web is the only platform that matters these days.

My assumption is that Microsoft conference planners will figure this out and won't make the mistake of scheduling MIX and PDC a few months from each other next time.
 

Every couple of months I like to give a shout out to the blogs I'm currently reading and think are worth recommending. Below is my current list of top five blogs.

  1. Jeff Atwood: Every modern developer worth their salt should have read The Mythical Man-Month, should know the common refactorings by heart, and should be reading Jeff Atwood's blog. It's that good. He covers a broad range of topics that are always of interest to developers, from interesting glimpses into our shared computing history in posts such as Meet The Inventor of the Mouse Wheel and EA's Software Artists, to excellent advice on designing applications for non-programmers such as his post Reducing User Interface Friction, as well as the occasional rant on pet peeves that a lot of developers share, such as when he pointed out C# and the Compilation Tax.

  2. The Secret Diary of Steve Jobs: This is the best fake celebrity blog I've ever seen. The author is definitely up on his knowledge of Steve Jobs and Apple. The funniest posts are the ones where he gives an [evil] Steve Jobs perspective on current Apple affairs in posts such as So, you leaked an email to Engadget?, They call me Mr. Integrity and Congratulations, Jon Ive.

  3. Pat Helland: An old school Microsoft architect from Developer Division who recently came back to Microsoft after a two year stint at Amazon. Before leaving Microsoft two years ago, Pat wrote some well respected articles on building distributed systems such as Metropolis & Data on the Outside vs. Data on the Inside. He has now come back to the company with some practical experience from working on one of the largest Web sites on the planet. His most recent post, SOA and Newton's Universe, introduced me to the CAP Conjecture: Consistency, Availability, and Partition-tolerance. Pick two. Specifically, trying to maintain data consistency in a distributed system is in direct opposition to having high availability. I'd observed this anecdotally while working on services in Windows Live, but it was still interesting to read papers explaining it, complete with mathematical proofs of why this is the case.

  4. Uncov: This site picks up where Dead 2.0 left off as the anti-TechCrunch by attempting to inject some snarky reality in the face of all the overhyped, me-too, built-to-flip "Web 2.0" startups we keep hearing about these days. Some of the more amusing recent posts are Meebo: Yahoo Chat was awesome in 1997, Mpire: Liked It Better When It Was Called Pricewatch and of course Web 2.0: So great you can't define it.

  5. Casey Serin: Since I recently became a homeowner, I've become interested in all this talk of real estate collapses and subprime loan crises. The USA Today article 10 mistakes that made flipping a flop describes Casey Serin as a poster child for everything that went wrong in the real estate boom. In under a year, the 24-year-old website-designer-turned-real-estate-flipper bought eight homes in four states, and in every case but one he put no money down. Over half of the homes have been foreclosed on and he now has over $140,000 in debt. His blog documents his trials and tribulations trying to get out of debt. The comments are the best part; it seems his audience is split down the middle between people who cheer him on for trying to get out of debt and others who attack him for seemingly getting away with abusing the system.

Do you have any similar recommendations?


 

From Mike Arrington's post $100 Million Payday For Feedburner - This Deal Is Confirmed we learn

Rumors about Google acquiring RSS management company Feedburner from last week, started by ex-TechCrunch UK editor Sam Sethi, are accurate and are now confirmed according to a source close to the deal. Feedburner is in the closing stages of being acquired by Google for around $100 million. The deal is all cash and mostly upfront, according to our source, although the founders will be locked in for a couple of years.

I use FeedBurner to track stats for my blog and RSS feed so this is great news because it means the service is here to stay. I've exchanged mail with Eric Lunt a bunch of times about issues I've had with the service and he was always quick to respond with a solution or an ETA for a fix. Google has landed some great folks who built a killer service.

I hope that now that they have Google-level resources at their disposal, users of the service can get historical statistics for their blogs and feeds instead of being limited to only the last 30 days. I'm curious about what my most popular posts of all time are, not just the most popular in the last 30 days.


 

Thanks to books like The Innovator's Dilemma, it is now an oft-repeated bit of business lore, especially within the technology industry, that you should kill your cash cows before two guys in a garage do it for you. The skeptic in me suspects that this bit of industry truism is part of The Halo Effect at work. People have sought out examples that confirm this statement and ignored the hundreds of counter-examples that show how dangerous this kind of thinking can be to a business.

Recently I wrote a blog post entitled Arguing Intelligently About Copyright on the Internet which addressed some of the most common anti-copyright arguments you see on the Web on sites such as TechDirt. Mike Masnick of TechDirt took umbrage at my post and followed up with a comment on my post as well as a TechDirt article entitled The Grand Unified Theory On The Economics Of Free. In that article, Mike Masnick makes a number of assertions that are similar to the truism about killing your cash cows.

Mike Masnick writes

First off, and this is key, none of what I put forth is about defending unauthorized downloads. I don't download unauthorized content (never have) and I certainly don't suggest you do either. You may very well end up in a lawsuit and you may very well end up having to pay a lot of money. It's just not a good idea. This whole series is from the other perspective -- from that of the content creator and hopefully explaining why they should encourage people to get their content for free. That's because of two important, but simple points:
  1. If done correctly, you can increase your market-size greatly.
  2. If you don't, someone else will do it correctly, and your existing business model will be in serious trouble
If that first point is explained clearly, then hopefully the second point becomes self-evident. However, many people immediately ask, how is it possible that giving away a product can guarantee that you've increased your market size? The first thing to understand is that we're never suggesting people just give away content and then hope and pray that some secondary market will grant them money. Giving stuff away for free needs to be part of a complete business model that recognizes the economic realities. We'll get to more details on that in a second.

As a business, increasing your market size is nice but maintaining your profits is even nicer. If you have 200,000 customers and make $80 profit per customer, would you be interested in doubling your customer base while making only $20 profit per customer because you lowered your prices? That takes you from $16 million in profit down to $8 million. The point here is that simply increasing the size of your market or the number of your customers does not translate to increasing the business's bottom line. As for the second point listed above, healthy paranoia is good but it shouldn't replace good business sense. After all, the list of successful fast followers includes some of the biggest companies in the world. If it worked for Google and Microsoft, it can work for your business.

Mike Masnick also outlines his business advice for purveyors of digital content, which is excerpted below

So, the simple bulletpoint version:
  1. Redefine the market based on the benefits
  2. Break the benefits down into scarce and infinite components.
  3. Set the infinite components free, syndicate them, make them easy to get -- all to increase the value of the scarce components
  4. Charge for the scarce components that are tied to infinite components
You can apply this to almost any market (though, in some it's more complex than others). Since this post is already way too long, we'll just take an easy example of the recording industry:
  1. Redefine the market: The benefit is musical enjoyment
  2. Break the benefits down (not a complete list...): Infinite components: the music itself. Scarce components: access to the musicians, concert tickets, merchandise, creation of new songs, CDs, private concerts, backstage passes, time, anyone's attention, etc. etc. etc.
  3. Set the infinite components free: Put them on websites, file sharing networks, BitTorrent, social network sites wherever you can, while promoting the free songs and getting more publicity for the band itself -- all of which increases the value for the final step
  4. Charge for the scarce components: Concert tickets are more valuable. Access to the band is more valuable. Getting the band to write a special song (sponsorship?) is more valuable. Merchandise is more valuable.
What the band has done in this case is use the infinite good to increase the value of everything else they have to offer.

The implicit assumption Mike Masnick makes here is that the profits lost from cheaply copyable and easily distributed digital content will be made up by selling goods and services related to that content. I am highly suspicious of the theory that replacing the profits from CD, digital music and ringtone sales with the profits from increased concert ticket prices ends up being a net positive for successful musicians. The key reason is that there are physical limits on how many concerts a band can perform or how many people can attend in a given location, but such limits barely exist when it comes to distributing digital content.

A concrete example is a comparison of the relative profits of proprietary software companies with those of Open Source software companies. In a recent blog post entitled The 'we win by killing' days are passing, Tim O'Reilly wrote

1. Pure open source software businesses are orders of magnitude less profitable than their closed source brethren even as they close in on them in terms of the number of customers. (Compare Red Hat and Microsoft, MySQL and Oracle.) Meanwhile, companies built on top of open source but with new layers of closed source (iconically, Google) are building the kinds of outsized profits that once were the sole province of old style software companies. As growth slows, as it inevitably will (even if it takes another decade), these companies too will seek to maintain their outsized profits.

2. Outsized profits come from lock-in of one kind or another. Yes, there are companies that have no lock-in that gain outsized profits merely by means of scale, but they are few and far between.

The experiences of the software industry seem to contradict Mike Masnick's diagnoses and recommendations for the music industry. Giving away your most valuable asset and hoping to make it up by selling peripheral services and add-ons is more likely to destroy your company than become your redemption.

Counter arguments welcome.


 

Categories: Ramblings

May 22, 2007
@ 02:44 AM

Pete Lacey has a blog post entitled Rethinking Apollo where he writes

So I dug around in Apollo a little bit, and I did a little bit more thinking about my reflexive dismissal of the technology. And I admit to misunderstanding and miscategorizing Apollo. Here’s what I learned.

Apollo is not a browser plugin, nor does it leverage or extend your browser in any way. It runs completely outside the browser. It is a run-time environment for building cross-platform desktop applications.
...
Let’s say you want to build an RSS/Atom reader...But let’s add a requirement: my news reader must be cross-platform. That eliminates .NET as a development platform, but still leaves C++. However, with C++ I have to carefully separate my common functionality from my OS-specific functionality and become more of an expert on OS and windowing quirks than I would like, so that’s out. Fortunately, there’s quite a few other ways to go:

  1. Browser based
  2. Java based
  3. Dynamic language based: Perl, Python, Ruby, Tcl
  4. Native cross-platform development environment, e.g. Qt
  5. Apollo
  6. Others, e.g. Eclipse RCP

All of these have pros and cons. Browsers are limited in functionality, and quirky. Java is a pain to develop towards (especially GUI apps), has spotty HTML rendering ability, and a non-native look and feel. The dynamic languages are far from guaranteed to be installed on any particular machine—especially Windows machines, and (likely) also have their own look and feel issues. Qt still leaves me in C++ land; that is, it’s hard to develop towards. Apollo also has its own look and feel, and will require a download of the runtime environment if it’s not already there (I’m ignoring its alpha release state). I don’t care about any other cross-platform techniques right now.

What I've found interesting is how a lot of blogosphere pundits have been using Microsoft's Silverlight and Adobe's Apollo in the same sentence as if they are similar products. I guess it's more proof that the popular technology blog pundits don't do much research and in many cases aren't technical enough to do the research anyway.

Although Pete does a good job of explaining the goals of Adobe Apollo with a great example, I think there is a simpler and more cynical way of spelling out the difference between Silverlight and Apollo. I'd describe the projects as 

Apollo is Adobe's Flash-based knock-off competitor to the .NET Framework, while Silverlight is Microsoft's .NET Framework-based knock-off competitor to the Flash platform.
A lot shorter and more to the point. :)

PS: Shame on Pete for equating dynamic languages with the runtimes of certain popular Open Source dynamic programming languages. The programming language is not the platform and vice versa. After all, both Jython and IronPython are implementations of a dynamic programming language that don't have any of the problems he listed as reasons to eliminate a dynamic programming language as a choice for building a cross-platform desktop application. :)


 

Categories: Programming | Web Development