August 31, 2006
@ 07:25 PM

"Social Search" is like "Web 2.0" in that if you ask five people what it means you'll get back seven different definitions. To me, the user experience for 'Social Search' is pretty straightforward. I'd like to ask questions and get back answers that take advantage of the knowledge the application has about my social circle (e.g. friends, family, trusted experts, etc).

The incident that got me interested in social search happened two years ago. The apartment complex I lived in [Avalon Belltown -- don't stay here, my car got broken into in their "secure" parking deck and they acted like it was my fault] raised my rent by $300 when my lease was up. I thought that was fairly obnoxious but didn't have the time to do an exhaustive survey of apartment complexes in the Seattle area to find one that met my desires. I knew that one or more of my co-workers or friends would be able to give me a suggestion for a cheaper apartment complex that would meet my requirements but short of spamming a bunch of people at work, I didn't have a good way to get this information out of my social circle. So I stayed there after renegotiating the lease [which they later reneged on but that is another story].

Since then I've been interested in the notion of 'social search' and other ways to make the user experience on the Web better by taking advantage of the knowledge applications have about our relationships to other people. This is why I ended up working on the team that I work on today and have been involved in building features such as Social Networking for Windows Live. I believe that we are now about halfway to what I'd like to see in the 'social search' arena at Windows Live. We have Windows Live QnA, Windows Live Expo, and Windows Live Spaces which I see as different pieces of the puzzle.

The next step for me has been thinking about how to extend this notion of applications being smarter because they know about our relationships beyond Windows Live, by exposing APIs to the different kinds of relationship information we have today. This is one of the reasons I find the Facebook API quite fascinating. However, I'm not sure what the right forum is to get feedback on what kinds of APIs people would like to see from us. Maybe asking here in my blog will get some bites. :)


Categories: Social Software | Windows Live

August 31, 2006
@ 07:08 PM

In another episode of the "Google is the new Microsoft" meme, I've been amused to see some VCs brag about how they plan not to invest in any company that potentially competes with Google in any space. Below are two examples I've noticed so far; I'm sure there are more that I've missed.

In his blog post entitled The Kiko Affair, Paul Graham writes

Google may be even more dangerous than Microsoft, because unlike Microsoft it's the favorite of technically minded users. When Microsoft launched an application to compete with yours, the first users they'd get would always be the least sophisticated-- the ones who just used whatever happened to be already installed on their computer. But a startup that tries to compete with Google will have to fight for the early adopters that startups can ordinarily treat as their birthright.
The best solution for most startup founders would probably be to stay out of Google's way. The good news is, Google's way is narrower than most people realize. So far Google only seems to be good at building things for which Google employees are the canonical users. That's because they develop software by using their own employees as their beta users.
They tried hard; they made something good; they just happened to get hit by a stray bullet. Ok, so try again. Y Combinator funded their new idea yesterday. And this one is not the sort of thing Google employees would be using at work. In fact, it's probably the most outrageous startup idea I've ever heard. It's good to see those two haven't lost their appetite for risk.

In his blog post entitled Thoughts on Google Apps, Paul Kedrosky writes

Finally, and this is mostly directed at people sending "Enterprise 2.0" business plans my way: If you're thinking of doing something squarely in Google's enterprise-lusting aim you need to ask yourself one question only: Why? What makes you think that you can do it so much better than Google can that the inevitable free Google Apps product doesn't kick your ass out of the office market? I'm not saying it's impossible, and there are plenty of things outside Google's aim -- including apps that are much more social by design than what Google builds -- but the gate is 99% closed for bringing vanilla, mass-market office apps to the web.

I guess these VCs are planning to stop investing in software companies altogether, since Google seems to be increasingly involved in almost every category of software products. I thought the entire point of being a VC was accepting a certain element of risk?


August 31, 2006
@ 06:48 PM

I've slowly begun to accept the fact that the term Web 2.0 is here to stay. This means I've had to come up with a definition of the term that works for me. Contextually, the term is still meant to capture the allure of geek-loved sites like Flickr and del.icio.us. Being "Web 2.0" means having the same characteristic features as these sites: open APIs, tagging and AJAX.

One of the things I've realized while reading TechCrunch and sitting in meetings at work is that there is a big difference between folks like Caterina Fake or Joshua Schachter and the thousands of wannabes walking the halls in Redmond and Silicon Valley. The difference is between building features because you want to improve your users' experience and building features because you've been told that's how to improve your users' experience.

Every time I see some website that provides APIs that aren't useful enough to build anything interesting I think "There's somebody who heard or was told that building APIs was important without understanding why it was important". Every time I see some website implement tagging systems that are not folksonomies I think "There's somebody who doesn't get why tagging is useful". And every single time I see some site add AJAX or Flash based features that make it harder to use the site than when it was more HTML-based I wonder "What the fuck was the point?".

I guess the truth is that TechCrunch depresses me. There is such a lack of original thinking, failure to empathize with end users, and general unawareness of the trends that led to the features we describe as being Web 2.0 in our industry today. Sad.


Categories: Ramblings

August 29, 2006
@ 09:00 PM

From the Windows Live QnA team blog post entitled Welcome to the public beta for! we learn

It’s with great pleasure, a lot of late nights, and barrels of caffeine, that our team launches the public Windows Live QnA beta.

For all you thousands of beta testers who took a chance on us, nagged us, mocked us, and made us better – we thank you. Keep doing it. Enjoy. Obey the code of conduct. We see you getting hooked.

The site is now available to everyone. Try it out and let the team know what you think.

Update: There is an interview with Betsy Aoki about Windows Live QnA; if you look closely, you'll also notice a cameo by RSS Bandit.


Categories: Windows Live

August 28, 2006
@ 05:07 PM

I was surprised by the contents of two blog posts I read over the weekend on the same topic. In his post Web 2 bubble ain’t popped yet: Kiko sells for $258,100 Robert Scoble writes

How many employees did Kiko have again? Three, right? Well, they just sold their “failure” for $258,100. Not too shabby!

On the same topic, Om Malik writes in his post Kiko Sells for $258,100

The company recently became talk of the blogs, when the founders decided to cut their losses, and put the company on sale on eBay. Niall and I devoted a big portion of our latest podcast, Snakes on a Business Plan to the Kiko affair. Well, the auction just closed and brought in $258,100. A tidy sum! This explains why Paul was smiling today at The FOO Camp. Apparently, Kiko’s angel round was $50,000 in convertible debt, and this sale should cover that. Graham’s YCombinator which did the seed round could come out ahead as well.

I'm confused as to how anyone can define this as good. Once you take out whatever the investors get back on their $50,000, there really isn't much left for the three employees to split, especially when you remember that one of the things you do as the founder of a startup is not pay yourself much. At best I can see this coming out as a wash (i.e. the money made from the sale of Kiko is about the same as if the founders had spent that time as paid full-time employees at Google or Yahoo!). I could be wrong, but I'd be surprised if it was otherwise.
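Here is the back-of-the-envelope math behind that skepticism, as a sketch. The sale price and the $50,000 note come from the posts quoted above; assuming the note is repaid in full and ignoring eBay fees, taxes and any other costs:

```javascript
// Back-of-the-envelope math on the Kiko sale.
var salePrice = 258100;   // winning eBay bid, per Scoble and Om Malik
var angelNote = 50000;    // convertible debt from the angel round
var founders = 3;         // "three employees"

// What's left after the note is repaid, split evenly (an assumption).
var leftover = salePrice - angelNote;   // 208100
var perFounder = leftover / founders;   // about $69,000 per person, pre-tax

console.log(Math.round(perFounder));
```

Roughly a year of big-company salary each, spread over however long they worked on Kiko, which is why "wash" seems like the generous reading.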


One of the biggest surprises for me over the past year is that it's Amazon, rather than Sun or IBM, that looks set to become the defining face of utility computing in the new millennium. Of course, the other shoe dropped when I read about Amazon Elastic Compute Cloud (Amazon EC2), which is described below

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.

Just as Amazon Simple Storage Service (Amazon S3) enables storage in the cloud, Amazon EC2 enables "compute" in the cloud. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use.

All Amazon needs to do is add some SQL-like capabilities on top of Amazon S3, and I can't see any reason why any self-respecting startup would want to host its own datacenters with the high bandwidth, staff, server, space and power costs that entails. Anecdotes such as SmugMug now storing 300 terabytes of data on Amazon's servers for less than it would cost them to do it themselves will only fuel this trend. I definitely didn't see this one coming, but now that it is here it seems pretty obvious. Companies like IBM and Sun simply don't have the expertise at building something that has to handle traffic/capacity at mega-service scale yet be as cheap as possible. Companies like Yahoo!, MSN/Windows Live and Google have this expertise, but these are competitive advantages that they likely won't or can't give away for a variety of reasons. However, Amazon does have the expertise at building a mega-scale service as cheaply as possible, as well as the experience of opening it up as a platform for other companies to build businesses on. With the flood of startups looking to build out services cheaply due to the "Web 2.0" hype, this is a logical extension of Amazon's business of enabling companies to build eCommerce businesses on its platform.

With a few more high profile customers like SmugMug, Amazon could easily become the "dot in dot-com" of Web 2.0. Of course, this means that like Sun during the '90s, they'll be pretty vulnerable once the bubble pops.


It looks like the big news this morning is that Google just announced Google Apps for Your Domain. From the press release Google Launches Hosted Communications Services we learn

Google Apps for Your Domain, an expansion of the Gmail for Your Domain service that launched in February 2006, currently includes Gmail web email, the Google Talk instant messaging and voice calling service, collaborative calendaring through Google Calendar, and web page design, publishing and hosting via Google Page Creator. Domain administrators use a simple web-based control panel to manage their user account list, set up aliases and distribution lists, and enable the services they want for their domain. End users with accounts that have been set up by their administrator simply browse to customized login pages on any web-connected computer. The service scales easily to accommodate growing user bases and storage needs while drastically reducing maintenance costs.

Google will provide organizations with two choices of service.

  • A standard edition of Google Apps for Your Domain is available today as a beta product without cost to domain administrators or end users. Key features include 2 gigabytes of email storage for each user, easy to use customization tools, and help for administrators via email or an online help center. Furthermore, organizations that sign up during the beta period will not ever have to pay for users accepted during that period (provided Google continues to offer the service).
  • A premium version of the product is being developed for organizations with more advanced needs. More information, including details on pricing, will be available soon.

If this sounds familiar to you, that's because it is. This is pretty much the same sales pitch as Microsoft's Office Live, right down to having tiered versions that range from free (i.e. Office Live Basics) to paid SKUs for businesses with more 'advanced' needs (i.e. Office Live Essentials). With Google officially entering this space, I expect that the Office Live team will now have some pressure on their pricing model as well as an incentive to reduce or remove some of the limitations in the services they offer (e.g. the fairly low limits on the number of email addresses one can create per domain).

As usual, the technology blogs are full of the Microsoft vs. Google double standard. When Microsoft announced Office Live earlier this year, the response was either muted or downright disappointed because it wasn't a Web-based version of Microsoft Office. An example of such responses is Mike Arrington's post entitled Microsoft Office Live goes into Beta. On the flip side, the announcement of Google Apps for Your Domain, which is basically a "me too" offering from Google, is heralded by Mike Arrington in his post Google Makes Its Move: Office 2.0 as the second coming of the office suite. The difference in the responses to what are almost identical product announcements is an obvious indication of how both companies are perceived by the technology press and punditry.

I personally prefer Om Malik's take in his post Web Office Vs Microsoft Office where he states

"Web Office should not be about replacing the old, but inventing the new web apps that solve some specific problems".

This is pretty much the same thing I heard Ray Ozzie and Sergey Brin say at last year's Web 2.0 conference when they were both asked [on different occasions] about the possibility of replacing desktop office suites with Web-based software. Enabling people in disparate locations to collaborate and communicate is the next major step for office productivity suites. One approach could be replacing everything we have today with Web-based alternatives; the other could be making the desktop software we have today more Web savvy (or "live" if you prefer the Microsoft lingo). I know which one I think is more realistic and more likely to be acceptable to businesses today. What do you think?

My next question is whether Google is going to ship consumer-targeted offerings as Microsoft has done with Windows Live Custom Domains, or is the free version of Google Apps for Your Domain expected to be the consumer version?

Disclaimer: The above statements are my opinions and do not in any way reflect the plans, thoughts, intentions or strategies of my employer.


Recently I asked one of the Javascript devs on the Windows Live Spaces team to review the code for some of my gadgets to see if he could point out areas for improvement. One thing he mentioned was that there were a ton of memory leaks in my gadgets. This took me by surprise since the thought of a memory leak in some AJAX code running in a browser sounded like a throwback to the days of writing apps in C/C++. So I went back and took a look at the Windows Live gadget SDK, and sure as hell there was a section of the documentation entitled Avoiding Memory Leaks which states

Memory leaks are the number one contributor to poor performance for AJAX-style websites. Often code that appears to be written correctly will turn out to be the source of a massive memory leak, and these leaks can be very difficult to track down. Luckily, there are a few simple guidelines which can help you avoid most memory leaks. The developers follow these rules religiously, and recommend that you do the same when implementing Gadgets.
  • Make sure you null any references to DOM elements (and other bindings for good measure) in dispose().
  • Call the base dispose method at the end of your dispose method. (conversely, call the base initialize at the beginning of your initialize method)
  • Detach all events that you attach.
  • For any temp DOM element vars created while constructing the DOM, null the temp var before exiting the method.
  • Any function parameters that are DOM elements coming in should be nulled before returning from the method.
There are a number of websites and blog entries that document approaches for identifying and fixing memory leaks in AJAX applications. One such helpful site can be found here.

A great way to see whether your Gadget is leaking memory is to use the following URL to load your Gadget: Open Task Manager in Windows and monitor the memory usage of the Internet Explorer window. Keep reloading the Gadget in Internet Explorer to see if the memory usage increases over time. If the memory usage increases, it is indicative that your Gadget is leaking memory.

This is the entirety of the documentation on avoiding memory leaks in Windows Live gadgets. Granted, there is some useful information in the blog post referenced from the SDK. The post implies that memory leaks in AJAX code are an Internet Explorer problem as opposed to a general browser issue.
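To make the guidelines quoted above concrete, here is a sketch of the leak pattern and the dispose() fix. The Gadget class and FakeElement are made up for illustration; in a real gadget `el` would be a DOM element, and the cycle below (element → handler → gadget → element) is exactly the kind of circular reference between JScript objects and COM objects that IE's garbage collector fails to reclaim:

```javascript
// Stand-in for a DOM node so the sketch is self-contained.
function FakeElement() { this.handlers = []; }
FakeElement.prototype.attachEvent = function (type, fn) {
  this.handlers.push(fn);
};
FakeElement.prototype.detachEvent = function (type, fn) {
  var i = this.handlers.indexOf(fn);
  if (i >= 0) this.handlers.splice(i, 1);
};

function Gadget(el) {
  var self = this;
  this.el = el;                                   // gadget -> element
  this.clicks = 0;
  this.onClick = function () { self.clicks++; };  // handler closes over gadget
  el.attachEvent('onclick', this.onClick);        // element -> handler: the cycle is complete
}

Gadget.prototype.dispose = function () {
  this.el.detachEvent('onclick', this.onClick);   // detach every event we attached
  this.el = null;                                 // null all DOM references...
  this.onClick = null;                            // ...and other bindings for good measure
};
```

Without the dispose() call, every page refresh that recreates the gadget strands another element/handler/gadget cycle, which is how memory usage creeps up over time.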

Most of the guidelines in the above excerpt were pretty straightforward except for the one about detaching all events you attach. I wasn't sure how event handling differed between Firefox and IE (the only browsers I test gadgets on), so I started down the path of doing some research, and this led me to a number of informative posts on Quirksmode. They include

  1. Traditional event registration model
  2. Advanced event registration models
  3. addEvent() considered harmful

The information in the above pages is worth its weight in gold if you're a Javascript developer. I can't believe I spent all this time without ever reading Quirksmode. The Windows Live gadgets team would be doing gadget developers a big favor by adding the above links to their documentation.
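The practical upshot of those Quirksmode articles, sketched as code: the W3C model (addEventListener) and the Microsoft model (attachEvent) differ in both method name and event name, so a pair of small wrappers lets you register and, crucially for the memory-leak guidelines, later detach the same listener in either browser:

```javascript
// Minimal cross-browser event registration pair in the spirit of the
// Quirksmode articles. Keeping a matching removeEvent is what makes the
// "detach all events that you attach" guideline practical.
function addEvent(el, type, fn) {
  if (el.addEventListener) {
    el.addEventListener(type, fn, false);   // W3C model (Firefox et al.)
  } else if (el.attachEvent) {
    el.attachEvent('on' + type, fn);        // Microsoft model (IE)
  }
}

function removeEvent(el, type, fn) {
  if (el.removeEventListener) {
    el.removeEventListener(type, fn, false);
  } else if (el.detachEvent) {
    el.detachEvent('on' + type, fn);
  }
}
```

Note that removal only works if you pass the exact same function object you registered, which is why anonymous inline handlers are a trap if you ever intend to clean up after yourself.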

There is also an interesting observation here about end user perceptions of who's to blame for badly written gadgets. The comment about memory leaks in my gadgets answered the question of why Internet Explorer uses as much as 200MB of memory when running my start page. At first I assumed the problem was with the site itself, and then after switching to Firefox and seeing some improvement I assumed the problem was with IE. It never crossed my mind that the problem was the poor coding in the gadgets on my page. This may just be because I was the author of many of the gadgets on my page, but I suspect that when the average end user hits problems caused by poorly written gadgets on their start page or Windows Live Spaces pages, Microsoft is the entity that gets the blame, not the gadget developers.

Just like with Windows®, poorly written applications often reflect badly on the platform and not just the application. Interesting food for thought if you are interested in building Web platforms. 


Categories: Web Development | Windows Live

The Windows Live Wifi team has an introductory blog post entitled Hello World... which is excerpted below

I’m Stefan Weitz, director of planning for Windows Live WiFi. The team has been developing Windows Live WiFi Center over the past few months and it’s now time to let others experiment with it. The beta is currently limited to 5,000 people but will open up more broadly in the coming months. If you are interested in participating, please email us your Windows Live ID and we’ll get you on the list of interested parties.

Getting online in a WiFi world
Windows Live is all about unifying our customer’s online experience.  Well, let’s face it – you need to be connected (in one way or another) to have that world unified :).  The Windows Live WiFi Center is all about helping people get connected in a secure way – it’s essentially our first step at creating an integrated software solution that helps people find and securely connect to wireless networks around the world.  The Windows Live WiFi Center has a number of great features in this beta version (hint: beta = more features are coming soon…).

  • Hotspot locator: Provides you with the ability to search for free and fee-based wireless networks in countries around the world. The locator shows you the address, description, available amenities, service providers and a map of the location.

  • Network Management: Helps you see what networks are available and makes it easy to get technical information about them, including their security configuration, signal strength, etc. In addition, you can tag networks as ‘favorites’ for future connections, track connection history, and manage network preferences.

  • Security: Our built-in security, using VPN technology, allows you to secure a wireless Internet connection on unsecured networks like those in hotels and coffee shops. This security feature comes free with the Windows Live WiFi Center product.

Sounds pretty sweet. I've known this product was coming but hadn't tried it out yet. Looks like I need to get hooked up with the beta. The HotSpot Locator sounds particularly interesting to me.


Categories: Windows Live

Nick Gall has a blog post entitled What were we thinking? where he writes

It just struck me, after 5+ years of analyzing the ins and outs of SOAP, how little need there has turned out to be for the SOAP binding model (i.e., binding SOAP onto various "transport" protocols). If some endpoint is going to go to all the trouble of enabling processing of URIs and XML (a prerequisite for processing the SOAP envelope), what are the chances that said endpoint would not go ahead and enable HTTP processing? The scenario of a mainframe endpoint that is able to process a SOAP envelope, but is unable to process HTTP to receive the envelope strikes me as ludicrous.
So who really cares that SOAP is able to be bound to MQ or IIOP or SMTP, today? Apparently, very few--since there has been virtually no progress towards standardizing any SOAP binding other than to HTTP for years.
Accordingly, it seems to me that the WS-* stack could be made a lot less complex for the average developer if the SOAP and WSDL binding models were simply deprecated and replaced with a simpler "native HTTP" binding.

This is one of those blog posts where I simultaneously agree and disagree with the author. I agree that a lot of the complexity in WS-*/SOAP/WSDL/etc has to do with the notion of "protocol independence". As I mentioned in a previous post entitled Protocol Independence is a Leaky Abstraction, the way SOAP and the various WS-* technologies achieve protocol independence is by basically ignoring the capabilities of the Web (i.e. HTTP and URIs) and re-inventing them in various WS-* specifications. This leads to a lot of unnecessary complexity and layering when you are already using HTTP as the transport protocol (i.e. the most common usage of SOAP).
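As a concrete illustration of that most common case, here is a sketch of what a SOAP call over HTTP actually amounts to on the wire. The envelope namespace is the standard SOAP 1.1 one; the operation and service names are entirely made up:

```javascript
// Build a SOAP 1.1 envelope; the payload travels as the body of an
// ordinary HTTP POST, which is the point: the HTTP "binding" adds almost
// nothing that HTTP and URIs don't already provide.
function buildSoapEnvelope(bodyXml) {
  return '<?xml version="1.0" encoding="utf-8"?>' +
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
      '<soap:Body>' + bodyXml + '</soap:Body>' +
    '</soap:Envelope>';
}

var envelope = buildSoapEnvelope('<GetUserProfile><userId>42</userId></GetUserProfile>');
// On the wire this is just:
//   POST /ProfileService HTTP/1.1
//   Content-Type: text/xml; charset=utf-8
//   SOAPAction: "GetUserProfile"
//   ...with the envelope as the entity body.
```

Which is Nick Gall's point in a nutshell: if an endpoint can already parse that XML and dereference URIs, it can almost certainly speak HTTP too.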

On the flip side, there is something to be said for being able to use one distributed application model and support multiple protocols. For example, if you read the Windows Communication Foundation whitepaper you'll see it mentioned that WCF supports sending messages via HTTP, as well as the Transmission Control Protocol (TCP), named pipes, and Microsoft Message Queuing (MSMQ). I've actually been working on using this functionality in some of our internal Windows Live platform services, since the performance benefits of raw TCP over SOAP/HTTP are considerable. However, we'll still have SOAP/HTTP endpoints since that is the lowest common denominator that SOAP-based services interacting with our services understand. In addition, I'd like to see some straight up PlainOldXml/HTTP or RESTful endpoints as well.

One of the main problems we've faced in our evaluation of moving to multiprotocol SOAP services is how much of a leaky abstraction the "protocol independent" nature of SOAP tends to be in real life. My favorite issue thus far is that we actually use HTTP redirects in our current SOAP web services. Guess what? There is no "protocol independent" WS-Redirect specification. So we have to roll our own solution for non-HTTP protocols.

We've hit a couple of other minor issues but in general the support we've gotten from Omri, Doug Purdy and others on the WCF team has been great. In fact, I've started to lose some of my skepticism about the WS-* family of technologies. I still think they are overkill for the Web though. ;)


Categories: XML Web Services