December 17, 2005
@ 05:15 PM

A friend of mine called me yesterday to ask for my opinion on the news that Meebo just got a few million dollars in funding. For those not in the know, Meebo is an AJAX version of Trillian. And what is Trillian? It's an instant messaging client that supports AIM, ICQ, MSN, Yahoo! Messenger, and IRC.

In his post How Much Did Meebo Get? Om Malik asks

Here is the rub: Since the company basically aggregates all four major IM networks in a browser, all the four major IM owners - AMYG are out of the acquisition game. One of them buys the company, the others shut down access to their respective networks. The very quality that makes Meebo attractive to end-users will make it difficult for them to be acquired. But there is one option: eBay. When all fails, you know who to call. Skype did. Interactive Corp is another long shot, but they are bargain hunters not premium payers.

Regarding acquisitions, there are three reasons why one of the major players would buy a company: users, technology and people. Unless the startup is in a brand new market that the major player isn't in, buying a company for its users is rarely the motivation. Big players like Google, Yahoo! and Microsoft usually either have orders of magnitude more users than the average 'popular' startup or could attract just as many users by shipping a rival service. The more common reason for a big player like Microsoft or Yahoo! to buy a company is exclusive technology/IP or the development team. Yahoo! buying del.icio.us or Flickr isn't about getting access to the 250,000 - 300,000 users of these services, given that they have fewer users than the least popular services on Yahoo!'s network. Instead it's about getting people like Joshua Schachter, Caterina Fake and Stewart Butterfield building the next generation of Yahoo!'s products. Don Dodge covers this in slightly more detail in his post Microsoft will acquire my company.

Let's talk about Meebo specifically. The user base is too small to be a factor, so the interesting questions are about the technology and the people. First, the technology. An AJAX instant messaging client isn't novel; companies like Microsoft have been providing one for years. A framework built on reverse engineering IM protocols is cool but of limited value. As Om Malik points out, the major players tolerate the activities of companies like Meebo and Trillian because it is counterproductive for, say, AOL to sue a tiny company like Trillian for misusing its network. On the other hand, they wouldn't tolerate the same behavior from a major player like Microsoft, primarily because that would put a significant amount of traffic on their networks and licensing access would become a valid revenue generating scenario. Thus, the technology is probably not worth a great deal to one of the big players. That leaves the people. According to the Meebo team page there are three people: a server dev, a DHTML/AJAX dev and a business guy (likely to be useless overhead in an acquisition). The question then is: how many millions of dollars would Google, Yahoo! or Microsoft decide the skills of the two [most likely excellent] developers are worth? Then you have to factor in the markup because the company got VC funding...

You can probably tell that I agree with Om Malik that it is unlikely that this company would be of interest to any of the four major IM players.

If you are building a Web startup with the intention of flipping it to one of the majors, only three things matter: technology/IP, users and the quality of your technical team. Repeatedly ask yourself: Would Microsoft want our users? Would Google want our technology? Would Yahoo! want our people?

It's as simple as that.


 

Categories: Technology

Clemens Vasters has written two interesting articles on building RESTful services using Indigo/Windows Communication Foundation entitled Teaching Indigo to do REST/POX, Part 1 and Teaching Indigo to do REST/POX, Part 2. His first article begins

A little bit more than half a year ago I got invited to a meeting at Microsoft in Redmond and discussed with Steve Swartz, Yasser Shohoud and Eugene Osovetsky how to implement POX and REST support for Indigo. ... I witnessed the definition of a special capability for the HTTP transport that I am exploiting with a set of Indigo extensions that I’ll present in this series of blog posts. The consensus in the meeting was that the requirements for building POX/REST support into the product weren’t generally clear enough in the sense that when you ask 100 people in the community you get 154 ever-changing opinions about how to write such apps. As a consequence it would not really be possible to define a complete programming model surface that everyone would be happy with, but a simple set of hooks could be put into the product that people could use to build programming models rather easily.

And so they did, and so I did. This new capability of the HTTP transport first appeared in the September CTP of Indigo/WCF and surfaces to the developer as properties in the Message class Properties collection or the OperationContext.Incoming/OutgoingMessageProperties.

If you are using the Indigo HTTP transport on the server, the transport will always stick a HttpRequestMessageProperty instance into the incoming message properties, which provides access to the HTTP headers, the HTTP method (GET, POST, PUT, etc.) and the full query string. On the client, you can create an instance of this property class yourself and stick it into any outbound message’s properties collection  and, with that, control how the transport performs the request. For sending replies from the server, you can put a HttpResponseMessageProperty into the message properties (or, again, into the OperationContext) and set the HTTP status code and description and of course the HTTP reply headers.  

And since I have nothing better to do, I wanted to know whether this rather simple control feature for the HTTP transport would indeed be enough to build a POX/REST programming model and application when combined with the rest of the Indigo extensibility features. Executive Summary: Yes.
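To make the mechanics concrete, here is a minimal sketch of a catch-all service operation that uses the two message properties described above. To be clear, this is my own illustration based solely on the class and property names in the excerpt, not Clemens's actual extensions, and the CTP-era API surface may differ in its details:

    using System.Net;
    using System.ServiceModel;
    using System.ServiceModel.Channels;

    [ServiceContract]
    public class PoxService
    {
        // A universal operation that receives every request regardless of action.
        [OperationContract(Action = "*", ReplyAction = "*")]
        public Message ProcessRequest(Message request)
        {
            // The HTTP transport sticks an HttpRequestMessageProperty into every
            // incoming message; it exposes the verb, headers and query string.
            HttpRequestMessageProperty httpRequest = (HttpRequestMessageProperty)
                request.Properties[HttpRequestMessageProperty.Name];
            string verb = httpRequest.Method;       // "GET", "POST", "PUT", ...
            string query = httpRequest.QueryString; // everything after the '?'

            // To shape the HTTP reply, put an HttpResponseMessageProperty into the
            // outgoing message properties and set the status code and headers.
            HttpResponseMessageProperty httpResponse = new HttpResponseMessageProperty();
            httpResponse.StatusCode = (verb == "GET")
                ? HttpStatusCode.OK : HttpStatusCode.MethodNotAllowed;
            httpResponse.Headers["Content-Type"] = "text/xml";
            OperationContext.Current.OutgoingMessageProperties[HttpResponseMessageProperty.Name] = httpResponse;

            return Message.CreateMessage(request.Version, "*"); // empty reply body
        }
    }

A dispatching layer like the one Clemens builds in his series would presumably route on the verb and query string extracted this way.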

One of the first work items I got assigned when I joined my current team at MSN (now Windows Live) was responsibility for our Indigo migration. This was pretty vague, and it turned out to be a placeholder work item for "Figure out why we should move to Indigo and when". So I talked to our devs for a while and, after thinking about where I saw our services evolving, it seemed there were two main things we wanted from Indigo: better performance and support for a more diverse view of what it means to be a Web service.

Why we need better performance is pretty straightforward. We provide services that are depended on by hundreds of millions of users across a variety of MSN properties including Hotmail, MSN Messenger, and MSN Spaces as well as newer Windows Live properties like Windows Live Favorites and Windows Live Fremont. The more performance we can squeeze out of our services stack, the better things look on our bottom line. 

Then there is support for a broader view of what it means to be a Web service. When I worked on the XML team, I used to interact regularly with the Indigo folks. At the time, I got the impression that they had two clear goals: (i) build the world's best Web services framework on top of SOAP & WS-* and (ii) unify the diverse distributed computing offerings produced by Microsoft. As I spent time at my new job, I realized that the first goal of the Indigo folks didn't jibe with the reality of how we built services. Despite how much various evangelists and marketing folks have tried to make it seem otherwise, SOAP-based Web services aren't the only Web services on the planet. Technically they aren't even the most popular. If anything, the most popular Web service is RSS, which for all intents and purposes is a RESTful Web service. Today, across our division we have services that talk SOAP, RSS, JSON, XML-RPC and even WebDAV. The probability of all of these services being replaced by SOAP-based services is zero. I remember sending mail about this disconnect to a number of folks on the Indigo team, including Doug Purdy, Don Box, Steve Swartz and Omri Gazitt. I remember there being some initial resistance to my feedback, but eventually opinions started to thaw, and today I'm glad to read posts where Doug Purdy calls himself one of the original REST-heads in the big house.

Anyway, the point is that there is more than one way to build services on the Web. Services frameworks like Indigo/Windows Communication Foundation need to support this diverse world instead of trying to shove one solution down their customers' throats. We at Microsoft now understand this; I hope the rest of the industry eventually does as well.


 

Categories: XML Web Services

Alan Kleymeyer has a post entitled NewsGator Online where he writes

I've switched my online news aggregator from Bloglines to NewsGator. First, I wanted to try it out and compare it to Bloglines. I like the interface better, especially in how you mark things as read. I've switched for good. I mainly switched so that I can continue using RSS Bandit and get the benefit of syncing between it and an online news aggregator (supported in the latest RSS Bandit 1.3.0.38 release).

Alan's post describes exactly why creating APIs for your online service, and treating it as a Web platform rather than just a Web site, is important. What would you rather use: a web-based aggregator which provides limited integration with a few desktop aggregators (i.e. Bloglines) OR a web-based aggregator which provides full integration with a variety of free and payware aggregators including RSS Bandit, NetNewsWire and FeedDemon? Building out a Web platform is about giving users choice, which is what the NewsGator guys have done by providing the NewsGator API.

The canonical example of the power of Web APIs and Web platforms is RSS. Providing an RSS feed liberates your readers from the limitations of using one application (the Web browser) and one user interface (your HTML website) to view your content. They can consume it on their own terms using the applications that best fit their needs. Blogging wouldn't be as popular as it is today if not for this most fundamental of web services.

The topic of my ThinkWeek paper was turning Web sites into Web platforms, and I was hoping to get to give a presentation about it at next year's O'Reilly Emerging Technology Conference, but my proposal got rejected. I guess I'll just have to keep shopping it around. Perhaps I can get it into Gnomedex or Mix '06. :)


 

December 15, 2005
@ 06:25 PM

Don Demsak has a post entitled XSLT 2.0, Microsoft, and the future of System.Xml which has some insightful perspectives on the future of XML in the .NET Framework

Oleg accidentally restarted the XSLT 2.0 on .Net firestorm by trying to start up an informal survey. Dare chimed in with his view of how to get XSLT 2.0 in .Net. M. David (the guy behind Saxon.Net, which lets .Net developers use Saxon on .Net) jumped in with his opinion.
...

One of the things that I’ve struggled with in System.Xml is how hard it is sometimes to extend the core library.  The XML MVPs have done a good job with some things, but other things (like implementing XSLT 2.0 on top of the XSLT 1.0 stuff) are impossible because so much of the library is buried in internal classes.  When building a complex library like System.Xml, there are 2 competing schools of thought:

  1. Make the library easy to use and create a very small public facing surface area.
  2. Make the library more of a core library with most classes and attributes public, and let others build easy (and very specific) object models on top of it.

The upside of the first methodology is that it is much easier to test, and the library just works out of the box. The downside is that it is very hard to extend the library, so it can only be used in very specific ways.

The upside of the second methodology is that you don’t have to try to envision all the ways the library might be used. Over time others will extend it to accomplish things that the original developers never thought of. The downside is that you have a much larger surface area to test, and you are totally reliant on other projects to make your library useful. This goes for both projects internal to Microsoft and external projects like the Mvp.Xml lib.

The System.Xml team has tended to use the first methodology, whereas the ASP.Net team tends to build their core stuff according to the second methodology and then have a sub-team create another library using the first methodology, so developers have something to use right out of the box (think of System.Web.UI.HtmlControls as the low level API and System.Web.UI.WebControls as the higher level API). The ASP.Net team builds their API this way because, from the beginning, they have always envisioned 3rd parties extending their library. At the moment, this is not the case for the System.Xml library. But the question is: should System.Xml be revamped to become a lower level API, relying on 3rd parties (like the Mvp.Xml project) to create more specific and easier to use APIs? Obviously this is not something to be taken lightly. It will be more costly to expose more of the internals of System.Xml. But if only the lower level API were part of the core .Net framework, it might then be possible to roll out newer, higher level APIs on a release schedule different than the .Net framework schedule. That way projects like XSLT 2.0 could be released without having to wait for the next version of the framework.

 I’ve always been of the opinion that XSLT 2.0 does not need to be part of the core .Net framework. Oleg doesn’t believe that the .Net open source community is as passionate as some of the other communities, so he would like to see Microsoft build XSLT 2.0. I’d rather see the transformation of the System.Xml team into more of an ASP.Net-like team. If .Net is the future of development on the Windows platform, and XML is the future of Microsoft, then the System.Xml team needs to grow beyond its legacy as just an offshoot of the SQL Server team. The System.Xml team still resides in the SQL Server building. Back before .Net, the System.Xml team was known as the SQL Web Data team and, unfortunately, it still carries some of that mentality. Folks like Joshua Allen and Dare (who are both no longer on the team) fought to bring the team out from the shadows of SQL Server. With new XML related groups, like XLinq and Windows Communication Foundation, popping up within the company, the System.Xml group is at a major crossroads. They will either grow (in status and budget) and become more like the ASP.Net team, or they will get absorbed into one of the new groups.

 I’d prefer to see the System.Xml team grow and become full partners with teams like ASP.Net and the CLR team. I’d like to see the XML based languages become first class programming languages within the Visual Studio IDE. That means not only using things like XSLT and XML Schema as dynamic languages, but also being able to compile them down to IL alongside the other .Net languages. I want to be able to create projects that contain not only VB or C#, but also XSLT and XML Schema (to name a couple), and have them compile into one executable. Then developers can use things like XSLT 2.0, or the next in-vogue XML based language, and take advantage of that language’s unique benefits without having to choose between a compiled procedural language (like C# or VB) and a dynamic functional language like XSLT. Linq is starting to bring more of the functional programming style to the average procedural programmer, so I can start to see public awareness of functional programming rising. It is only a matter of time before the average programmer feels as comfortable with functional programming as they do with procedural programming, so we need to look towards including these languages within the Visual Studio IDE (which then leads into my discussion about evolving Visual Studio into more of an IDE framework, extended with add-ins).

There is a lot of stuff in Don's post that I agree with, which is why I forwarded it to some of the folks on the XML team. I'll be having lunch over there today to talk about some of the topics it raised.

Don does gloss over something when it comes to the decision of whether Microsoft should implement a technology like XSLT 2.0 itself or simply make it easy for third parties to do so. The truth is that Microsoft now has a large number of products which utilize XML-related technologies. For example, implementing something like XSLT 2.0 isn't just about providing a new version of the System.Xml.Xsl.XslCompiledTransform class in the .NET Framework. It's also about deciding whether to update the XSLT engine used by Internet Explorer to support XSLT 2.0 (an entirely different code base), adding XSLT 2.0 support to the XSLT debugger in Visual Studio, and maybe even updating the BizTalk Mapper. Users of Microsoft products expect a cohesive and comprehensive experience, and in many cases only Microsoft can provide that experience (e.g. supporting XSLT 2.0 across our entire breadth of technologies and products that use XSLT). When I was on the XML team, it was a really tough balance deciding what we should make extensible and what was too expensive to make extensible, since we'd probably be the only ones who could take proper advantage of it. I'm glad to see that some of our MVPs understand how delicate a balancing act shipping platform technologies can be.
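For context, the class in question is the XSLT 1.0 entry point that shipped in .NET 2.0, and a typical use looks like the sketch below (the file names are placeholders). Any XSLT 2.0 implementation would have to slot into, or sit alongside, this API in addition to all the other XSLT hosts mentioned above:

    using System.Xml.Xsl;

    class TransformExample
    {
        static void Main()
        {
            // XslCompiledTransform compiles the stylesheet to IL before executing it.
            XslCompiledTransform transform = new XslCompiledTransform();
            transform.Load("stylesheet.xsl");
            transform.Transform("input.xml", "output.html");
        }
    }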


 

Categories: Life in the B0rg Cube | XML

I'm a day late to blog this but it looks like we announced releases in both the consumer and business instant messaging space yesterday.

From the InfoWorld article Microsoft uses Ajax to Web-enable corporate IM we learn

Microsoft Corp. Tuesday released a Web-based version of its corporate instant-messaging software that gives users access when they are working remotely or from non-Windows computers. Gurdeep Singh Pall, a Microsoft corporate vice president, unveiled the product, Office Communicator Web Access, in a keynote at the Interop New York 2005 show.

Office Communicator Web Access includes support for Ajax (Asynchronous Javascript and XML), a programming technology that enables developers to build applications that can be altered dynamically on a browser page without changing what happens on the server. The product provides a Web front end to Microsoft's Office Communicator desktop application, and is available to customers of Live Communications Server 2005 for immediate download at www.microsoft.com/rtc, said Paul Duffy, a senior product manager at Microsoft.

I'm confused as to why InfoWorld feels the need to mention AJAX in the story. It's not like they trumpet the fact that a product is built using C++ or ASP.NET when it is announced. The AJAX hype is definitely getting ridiculous.

From the blog post Windows Live Messenger Beta - Released from the Windows Live Messenger team's blog we learn

 Windows Live Messenger Beta is now available for use and testing to a limited set of users in the US, UK, Japan, Australia, Canada, China, France, Germany, Brazil, Korea, Netherlands, and Spain. More and more of you will be invited to join over the coming weeks/months.

They also have a blog post on the Official Feature List for Windows Live Messenger. Unfortunately, none of the features I'm working on are in this release. I can't wait until the features I'm working on finally get out to the public. :)


 

Categories: Social Software | Windows Live

December 14, 2005
@ 05:44 PM

An artist's transition from gangsta rapper to pop star is always a weird one for fans. For example, there was a joke on this week's episode of the Boondocks about how Ice Cube "the guy who makes family movies" used to be a hard core gangsta rapper. I've personally been amused by how the subject matter of their songs changes as they realize that their fan base is dominated by prepubescent and teenage suburbanites as opposed to hip hop heads from the 'hood. 

On the album Blueprint 2: The Gift & The Curse, Jay-Z has a song called Poppin' Tags, which is about going to the mall and shopping. The subject matter of the song is the kind of thing you'd expect from Hilary Duff, not Jigga.

However, 50 Cent has Jay-Z beat when it comes to songs targeted at the teenage mallrat crowd. On the soundtrack to his movie Get Rich or Die Tryin', 50 has two songs that belie his status as a gangsta rapper. There's the poorly crooned Window Shopper, about how 50 Cent gets to go to the mall and buy stuff you can't afford. Then there's Best Friend, where he begs to be some girl's "best friend" if the other guy in her life is "just a friend".

But it gets worse.

Mike Torres sent me a link to a post entitled 50 Cent Caught Red Handed which is excerpted below

Remember that story about 50 Cent performing at some little girl's bat mitzvah? Yeah, you wish it didn't really happen. Nothing says hardcore gangster rapper like a teenie-bop white girl dancing to your music with two hundred of her closest white teenie-bop friends.

More pictures from the $500,000 bat mitzvah after the jump.

UPDATE: You can see all the photos from the bat mitzvah here.

Keep it real, Fiddy.


 

Categories: Music

December 14, 2005
@ 02:21 AM

One of the interesting things I've noticed in the discussion about Yahoo!'s purchase of del.icio.us is how differently success is judged in the post-dotbomb technology startup scene. Specifically, I'll focus on two posts that gave me pause this afternoon.

In his post Learning from mistakes Anil Dash writes

Best post I've seen today: Ari Paparo talks about the differences between del.icio.us and Blink. Blink was Ari's startup during the bubble, which raised $13 million (!) to build an online bookmarking service, but didn't take off with users.

The only way any of us gets to be a successful entrepreneur is by learning from others' mistakes, yet a lot of business culture focuses around never admitting that errors are ever made. So kudos to Ari, not just for being brave enough to be self-critical, but for helping a lot of new aspiring entrepreneurs to succeed

The only quibble I'd have is that Ari presents del.icio.us as having succeeded already. Josh and his team at del.icio.us have built a great app, but for as popular as they are with geeks, the hard work is to bring the concept of social bookmarking (or, if you prefer, a shared recollection tool) to a larger audience.

In his post Getting it right Ari Paparo writes

Congratulations to Josh on the del.icio.us acquisition. Yahoo will make a great partner for the bookmarking service.

Now a little part of me is cringing as I write this. Having founded a bookmarking company in 1999 with pretty much the exact same vision as the new crop of services, I’ve got to feel, well, a little stupid. (or angry, or depressed, or whatever). Maybe writing about it will make me feel better and maybe even help me make a point or two about product development.

When we founded Blink.com (no link love, it’s a crappy search site now) the founders and I imagined a self-reinforcing product cycle:

1. Consumers needed portable bookmarks so they wouldn’t lose them, would be able to access them from any computer, and could share them with friends or coworkers;

2. As part of the process of bookmarking sites and organizing them into "folders" users would be indicating a measure of quality and connectedness among the URLs;

3. Profit!

OK, step 3 was a little more complicated. But the essence was that we would use the personal-backup product attributes to create a public search engine and "discovery engine" (I believe the marketing folks wanted to use that phrase!) based on user bookmarks.

This really shouldn’t sound too different from what del.icio.us was able to do, and we had something like $13 million to play with to make it happen. Not to mention that there were others with the same idea. Remember Backflip? So (besides the money), why did we fail and del.icio.us and the other Web 2.0 companies succeed?

I don’t think it was that we were "too early" or that we got killed when the bubble burst. I believe it all came down to product design, and to some very slight differences in approach.

To start, we launched Blink with a bevy of marketing dollars and a message very much focused on the individual storage benefits. We were very successful at attracting users (at its height Blink had 1.5 million members; del.icio.us currently has 300,000) and getting them to import their bookmarks into our system.

What I find interesting about this pair of posts is the thought that a company that had 5 times the user base of del.icio.us could be considered a failure while del.icio.us is not. This makes me wonder what defines success here... That the VCs made a profit? I assume that must have been the case with the del.icio.us sale, while it clearly was not with the original Blink.com service. Perhaps it's that the founders end up as millionaires? Whatever it is, it definitely doesn't seem to be about users.

I tend to agree with Anil Dash: del.icio.us isn't yet a success, except at making the founders and VCs a good return on their investment. If a service can grow to be 5 times as large and still be considered a failure, then I think it is safe to say that calling del.icio.us a success is at best premature.

My question for all you budding entrepreneurs out there: what are your definitions of success and failure?


 

Categories: Social Software

For the developers out there who'd like to ask questions about or report bugs in our implementation of the MetaWeblog API for MSN Spaces, there is now a place to turn.

The MSN Spaces Development forum is where to go to ask questions about the MetaWeblog API for MSN Spaces, file bug reports, and discuss with members of our developer community or the Spaces team what you'd like to see us open up next.

There is even an RSS feed so I can keep up to date with recent postings using my favorite RSS reader. If you are interested in our API story, you should subscribe.
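For those who haven't programmed against it before, the MetaWeblog API is just XML-RPC over HTTP POST, so creating a post boils down to a single request. Here's a rough sketch in C#; the endpoint URL, blog ID and credentials below are placeholders, so check the forum for the real values for your space:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class NewPostExample
    {
        static void Main()
        {
            // metaWeblog.newPost takes five parameters: blog ID, user name,
            // password, a struct with the post content, and a publish flag.
            string request = @"<?xml version=""1.0""?>
    <methodCall>
      <methodName>metaWeblog.newPost</methodName>
      <params>
        <param><value><string>BLOG_ID</string></value></param>
        <param><value><string>USERNAME</string></value></param>
        <param><value><string>PASSWORD</string></value></param>
        <param><value><struct>
          <member><name>title</name><value><string>Hello from the API</string></value></member>
          <member><name>description</name><value><string>Posted via metaWeblog.newPost</string></value></member>
        </struct></value></param>
        <param><value><boolean>1</boolean></value></param>
      </params>
    </methodCall>";

            // Placeholder endpoint; use the URL given in the MSN Spaces documentation.
            HttpWebRequest http = (HttpWebRequest)WebRequest.Create("https://example.com/metaweblog.aspx");
            http.Method = "POST";
            http.ContentType = "text/xml";
            byte[] body = Encoding.UTF8.GetBytes(request);
            using (Stream stream = http.GetRequestStream())
            {
                stream.Write(body, 0, body.Length);
            }
            using (WebResponse response = http.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                // The XML-RPC response contains the ID of the newly created post.
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }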


 

Categories: Windows Live

December 13, 2005
@ 06:27 PM

Nicholas Carr has a post entitled Sun and the data center meltdown which has an insightful excerpt on the kind of problems that sites facing scalability issues have to deal with. He writes

a recent paper on electricity use by Google engineer Luiz André Barroso. Barroso's paper, which appeared in September in ACM Queue, is well worth reading. He shows that while Google has been able to achieve great leaps in server performance with each successive generation of technology it's rolled out, it has not been able to achieve similar gains in energy efficiency: "Performance per watt has remained roughly flat over time, even after significant efforts to design for power efficiency. In other words, every gain in performance has been accompanied by a proportional inflation in overall platform power consumption. The result of these trends is that power-related costs are an increasing fraction of the TCO [total cost of ownership]."

He then gets more specific:

A typical low-end x86-based server today can cost about $3,000 and consume an average of 200 watts (peak consumption can reach over 300 watts). Typical power delivery inefficiencies and cooling overheads will easily double that energy budget. If we assume a base energy cost of nine cents per kilowatt hour and a four-year server lifecycle, the energy costs of that system today would already be more than 40 percent of the hardware costs.

And it gets worse. If performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin ... For the most aggressive scenario (50 percent annual growth rates), power costs by the end of the decade would dwarf server prices (note that this doesn’t account for the likely increases in energy costs over the next few years). In this extreme situation, in which keeping machines powered up costs significantly more than the machines themselves, one could envision bizarre business models in which the power company will provide you with free hardware if you sign a long-term power contract.

The possibility of computer equipment power consumption spiraling out of control could have serious consequences for the overall affordability of computing, not to mention the overall health of the planet.

If energy consumption is a problem for Google, arguably the most sophisticated builder of data centers in the world today, imagine where that leaves your run-of-the-mill company. As businesses move to more densely packed computing infrastructures, incorporating racks of energy-gobbling blade servers, cooling and electricity become ever greater problems. In fact, many companies' existing data centers simply can't deliver the kind of power and cooling necessary to run modern systems. That's led to a shortage of quality data-center space, which in turn (I hear) is pushing up per-square-foot prices for hosting facilities dramatically. It costs so much to retrofit old space to the required specifications, or to build new space to those specs, that this shortage is not going to go away any time soon.
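As an aside, Barroso's 40 percent figure is easy to verify: 200 watts doubled to account for power delivery and cooling overhead is 400 watts, which over a four-year lifecycle (roughly 35,000 hours) comes to about 14,000 kilowatt-hours. At nine cents per kilowatt-hour that's around $1,260, or about 42% of the $3,000 hardware cost, which matches the "more than 40 percent" he cites.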

When you are providing a service that becomes popular enough to attract millions of users, your worries begin to multiply. Instead of just worrying about efficient code and optimal database schemas, things like power consumption of your servers and data center capacity become just as important.

Building online services requires more than the ability to sling code and hack databases. Lots of stuff gets written about the more trivial aspects of building an online service (e.g. switching to sexy new platforms like Ruby on Rails) but the real hard work is often unheralded and rarely discussed.