August 31, 2006
@ 07:25 PM

"Social Search" is like "Web 2.0" in that if you ask five people what it means you'll get back seven different definitions. To me, the user experience for 'Social Search' is pretty straightforward. I'd like to ask questions and get back answers that take advantage of the knowledge the application has about my social circle (e.g. friends, family, trusted experts, etc).

The incident that got me interested in social search happened two years ago. The apartment complex I lived in [Avalon Belltown -- don't stay here, my car got broken into in their "secure" parking deck and they acted like it was my fault] raised my rent by $300 when my lease was up. I thought that was fairly obnoxious but didn't have the time to do an exhaustive survey of apartment complexes in the Seattle area to find one that met my desires. I knew that one or more of my co-workers or friends would be able to give me a suggestion for a cheaper apartment complex that would meet my requirements but short of spamming a bunch of people at work, I didn't have a good way to get this information out of my social circle. So I stayed there after renegotiating the lease [which they later reneged on but that is another story].

Since then I've been interested in the notion of 'social search' and other ways to make the user experience on the Web better by taking advantage of the knowledge applications have about our relationships to other people. This is why I ended up working on the team that I work on today and have been involved in building features such as Social Networking for Windows Live. I believe that we are now about halfway to what I'd like to see in the 'social search' arena at Windows Live. We have Windows Live QnA, Windows Live Expo, and Windows Live Spaces which I see as different pieces of the puzzle.

The next step for me has been thinking about how to extend this notion of applications being smarter because they know about our relationships beyond Windows Live, by exposing APIs to the different kinds of relationship information we have today. This is one of the reasons I find the Facebook API quite fascinating. However, I'm not sure what the right forum is to get feedback on what kinds of APIs people would like to see from us. Maybe asking here in my blog will get some bites. :)


 

Categories: Social Software | Windows Live

August 31, 2006
@ 07:08 PM

In another episode of the "Google is the new Microsoft" meme, I've been amused to see some VCs brag about how they plan not to invest in any company that potentially competes with Google in any space. Below are two examples I've noticed so far; I'm sure there are more that I've missed.

In his blog post entitled The Kiko Affair, Paul Graham writes

Google may be even more dangerous than Microsoft, because unlike Microsoft it's the favorite of technically minded users. When Microsoft launched an application to compete with yours, the first users they'd get would always be the least sophisticated-- the ones who just used whatever happened to be already installed on their computer. But a startup that tries to compete with Google will have to fight for the early adopters that startups can ordinarily treat as their birthright.
...
The best solution for most startup founders would probably be to stay out of Google's way. The good news is, Google's way is narrower than most people realize. So far Google only seems to be good at building things for which Google employees are the canonical users. That's because they develop software by using their own employees as their beta users.
...
They tried hard; they made something good; they just happened to get hit by a stray bullet. Ok, so try again. Y Combinator funded their new idea yesterday. And this one is not the sort of thing Google employees would be using at work. In fact, it's probably the most outrageous startup idea I've ever heard. It's good to see those two haven't lost their appetite for risk.

In his blog post entitled Thoughts on Google Apps, Paul Kedrosky writes

Finally, and this is mostly directed at people sending "Enterprise 2.0" business plans my way: If you're thinking of doing something squarely in Google's enterprise-lusting aim you need to ask yourself one question only: Why? What makes you think that you can do it so much better than Google can that the inevitable free Google Apps product doesn't kick your ass out of the office market? I'm not saying it's impossible, and there are plenty of things outside Google's aim -- including apps that are much more social by design than what Google builds -- but the gate is 99% closed for bringing vanilla, mass-market office apps to the web.

I guess these VCs are planning to stop investing in software companies, since Google seems to be increasingly involved in almost every category of software products. I thought the entire point of being a VC was accepting the element of risk involved?


 

August 31, 2006
@ 06:48 PM

I've slowly begun to accept the fact that the term Web 2.0 is here to stay. This means I've had to come up with a definition of the term that works for me. Contextually, the term is still meant to capture the allure of geek-loved sites like Flickr and http://del.icio.us. Being "Web 2.0" means having the same characteristic features of these sites, like open APIs, tagging and AJAX.

One of the things I've realized while reading TechCrunch and sitting in meetings at work is that there is a big difference between folks like Caterina Fake or Joshua Schachter and the thousands of wannabes walking the halls in Redmond and Silicon Valley. The difference is between building features because you want to improve your users' experience and building features because you've been told those features are how to improve your users' experience.

Every time I see some website that provides APIs that aren't useful enough to build anything interesting, I think "There's somebody who heard or was told that building APIs was important without understanding why it was important". Every time I see some website implement tagging systems that are not folksonomies, I think "There's somebody who doesn't get why tagging is useful". And every single time I see some site add AJAX or Flash based features that make it harder to use the site than when it was more HTML-based, I wonder "What the fuck was the point?".

I guess the truth is that TechCrunch depresses me. There is such a lack of original thinking, failure to empathize with end users and general unawareness of the trends that led to the features we describe as being Web 2.0 in our industry today. Sad.


 

Categories: Ramblings

August 29, 2006
@ 09:00 PM

From the Windows Live QnA team blog post entitled Welcome to the public beta for Qna.live.com! we learn

It’s with great pleasure, a lot of late nights, and barrels of caffeine, that our team launches the public Windows Live QnA beta.

For all you thousands of beta testers who took a chance on us, nagged us, mocked us, and made us better – we thank you. Keep doing it. Enjoy. Obey the code of conduct. We see you getting hooked.

The site is now available to all at http://qna.live.com/. Try it out and let the team know what you think.

Update: There is an interview with Betsy Aoki about Windows Live QnA on On10.net. If you look closely, you'll also notice a cameo by RSS Bandit.


 

Categories: Windows Live

August 28, 2006
@ 05:07 PM

I was surprised by the contents of two blog posts I read over the weekend on the same topic. In his post Web 2 bubble ain’t popped yet: Kiko sells for $258,100 Robert Scoble writes

How many employees did Kiko have again? Three, right? Well, they just sold their “failure” for $258,100. Not too shabby!

On the same topic, Om Malik writes in his post Kiko Sells for $258,100

The company recently became talk of the blogs, when the founders decided to cut their losses, and put the company on sale on eBay. Niall and I devoted a big portion of our latest podcast, Snakes on a Business Plan to the Kiko affair. Well, the auction just closed and brought in $258,100. A tidy sum! This explains why Paul was smiling today at The FOO Camp ;) Apparently, Kiko’s angel round was $50,000 in convertible debt, and this sale should cover that. Graham’s YCombinator which did the seed round could come out ahead as well.

I'm confused as to how anyone can describe this as a good outcome. After you take out however much the investors get back after investing $50,000, there really isn't much left for the three employees to split, especially when you remember that one of the things you do as the founder of a startup is not pay yourself much. At best I can see this coming out as a wash (i.e. the money made from the sale of Kiko is about the same as if the founders had spent the time getting paid as full-time employees at Google or Yahoo!), but I could be wrong. I'd be surprised if it were otherwise.


 

One of the biggest surprises for me over the past year is how, instead of Sun or IBM, it's Amazon that looks set to become the defining face of utility computing in the new millennium. Of course, the other shoe dropped when I read about Amazon Elastic Compute Cloud (Amazon EC2), which is described below

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.

Just as Amazon Simple Storage Service (Amazon S3) enables storage in the cloud, Amazon EC2 enables "compute" in the cloud. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use.

All Amazon needs to do is add some SQL-like capabilities on top of Amazon S3, and I can't see any reason why any self-respecting startup would want to host its own datacenters, with the high bandwidth, staff, server, space and power costs that entails. Anecdotes such as the fact that SmugMug is now storing 300 terabytes of data on Amazon's servers for less than it would cost to do it themselves will only fuel this trend. I definitely didn't see this one coming, but now that it is here it seems pretty obvious. Companies like IBM and Sun simply don't have the expertise at building something that has to handle traffic/capacity at mega-service scale yet be as cheap as possible. Companies like Yahoo!, MSN/Windows Live and Google have this expertise, but these are competitive advantages that they likely won't or can't give away for a variety of reasons. However, Amazon does have the expertise at building a mega-scale service as cheaply as possible, as well as the experience of opening it up as a platform for other companies to build businesses on. With the flood of startups looking to build out services cheaply due to the "Web 2.0" hype, this is a logical extension of Amazon's business of enabling companies to build eCommerce businesses on their platform.

With a few more high profile customers like SmugMug, Amazon could easily become the "dot in dot-com" of Web 2.0. Of course, this also means that, like Sun during the 90s, they'll be pretty vulnerable once the bubble pops.


 

It looks like the big news this morning is that Google just announced Google Apps for Your Domain. From the press release Google Launches Hosted Communications Services we learn

Google Apps for Your Domain, an expansion of the Gmail for Your Domain service that launched in February 2006, currently includes Gmail web email, the Google Talk instant messaging and voice calling service, collaborative calendaring through Google Calendar, and web page design, publishing and hosting via Google Page Creator. Domain administrators use a simple web-based control panel to manage their user account list, set up aliases and distribution lists, and enable the services they want for their domain. End users with accounts that have been set up by their administrator simply browse to customized login pages on any web-connected computer. The service scales easily to accommodate growing user bases and storage needs while drastically reducing maintenance costs.

Google will provide organizations with two choices of service.

  • A standard edition of Google Apps for Your Domain is available today as a beta product without cost to domain administrators or end users. Key features include 2 gigabytes of email storage for each user, easy to use customization tools, and help for administrators via email or an online help center. Furthermore, organizations that sign up during the beta period will not ever have to pay for users accepted during that period (provided Google continues to offer the service).
  • A premium version of the product is being developed for organizations with more advanced needs. More information, including details on pricing, will be available soon.

If this sounds familiar to you, that's because it is. This is pretty much the same sales pitch as Microsoft's Office Live, right down to having tiered versions that range from free (i.e. Office Live Basics) to paid SKUs for businesses with more 'advanced' needs (i.e. Office Live Essentials). With Google officially entering this space, I expect that the Office Live team will now have some pressure on their pricing model as well as an incentive to reduce or remove some of the limitations in the services they offer (e.g. the fairly low limit on the number of email addresses one can create per domain).

As usual, the technology blogs are full of the Microsoft vs. Google double standard. When Microsoft announced Office Live earlier this year, the response was either muted or downright disappointed because it wasn't a Web-based version of Microsoft Office. An example of such responses is Mike Arrington's post entitled Microsoft Office Live goes into Beta. On the flip side, the announcement of Google Apps for Your Domain, which is basically a "me too" offering from Google, is heralded by Mike Arrington in his post Google Makes Its Move: Office 2.0 as the second coming of the office suite. The difference in the responses to what are almost identical product announcements is an obvious indication of how both companies are perceived by the technology press and punditry.

I personally prefer Om Malik's take in his post Web Office Vs Microsoft Office where he states

"Web Office should not be about replacing the old, but inventing the new web apps that solve some specific problems".

This is pretty much the same thing I heard Ray Ozzie and Sergey Brin say at last year's Web 2.0 conference when they were both asked [on different occasions] about the possibility of replacing desktop office suites with Web-based software. Enabling people in disparate locations to collaborate and communicate is the next major step for office productivity suites. One approach is replacing everything we have today with Web-based alternatives; the other is making the desktop software we have today more Web savvy (or "live" if you prefer the Microsoft lingo). I know which one I think is more realistic and more likely to be acceptable to businesses today. What do you think?

My next question is whether Google is going to ship consumer-targeted offerings as Microsoft has done with Windows Live Custom Domains, or whether the free version of Google Apps for Your Domain is expected to be the consumer version.

Disclaimer: The above statements are my opinions and do not in any way reflect the plans, thoughts, intentions or strategies of my employer.


 

Recently I asked one of the Javascript devs on the Windows Live Spaces team to review the code for some of my gadgets to see if he could point out areas for improvement. One thing he mentioned was that there were a ton of memory leaks in my gadgets. This took me by surprise since the thought of a memory leak in some AJAX code running in a browser sounded like a throwback to the days of writing apps in C/C++. So I went back and took a look at the Windows Live gadget SDK, and sure as hell there was a section of the documentation entitled Avoiding Memory Leaks which states

Memory leaks are the number one contributor to poor performance for AJAX-style websites. Often code that appears to be written correctly will turn out to be the source of a massive memory leak, and these leaks can be very difficult to track down. Luckily, there are a few simple guidelines which can help you avoid most memory leaks. The Live.com developers follow these rules religiously, and recommend that you do the same when implementing Gadgets.
  • Make sure you null any references to DOM elements (and other bindings for good measure) in dispose().
  • Call the base dispose method at the end of your dispose method. (conversely, call the base initialize at the beginning of your initialize method)
  • Detach all events that you attach.
  • For any temp DOM element vars created while constructing the DOM, null the temp var before exiting the method.
  • Any function parameters that are DOM elements coming in should be nulled before returning from the method.
There are a number of websites and blog entries that document approaches for identifying and fixing memory leaks in AJAX applications. One such helpful site can be found here.

A great way to see whether your Gadget is leaking memory is to use the following URL to load your Gadget: http://gadgets.start.com/gadget.aspx?manifestUrl=gadgetManifestUrl. Open Task Manager in Windows and monitor the memory usage of the Internet Explorer window. Keep reloading the Gadget in Internet Explorer to see if the memory usage increases over time. If the memory usage increases, it is indicative that your Gadget is leaking memory.

This is the entirety of the documentation on avoiding memory leaks in Windows Live gadgets. Granted there is some useful information in the blog post referenced from the SDK. The post implies that memory leaks in AJAX code are an Internet Explorer problem as opposed to a general browser issue. 

Most of the guidelines in the above excerpt were pretty straightforward except for the one about detaching all events you attach. I wasn't sure how event handling differed between Firefox and IE (the only browsers I test gadgets on) so I started down the path of doing some research, and this led me to a number of informative posts on Quirksmode. They include

  1. Traditional event registration model
  2. Advanced event registration models
  3. addEvent() considered harmful

The information in the above pages is worth its weight in gold if you're a Javascript developer. I can't believe I spent all this time without ever reading Quirksmode. The Windows Live gadgets team would be doing gadget developers a lot of favors by adding the above links to their documentation.
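To make the SDK's "detach all events that you attach" guideline concrete, here is a minimal sketch combining a cross-browser attach/detach helper of the kind the Quirksmode articles describe with the SDK's dispose() rules. The MyGadget type and its members are hypothetical, not part of the gadget SDK:

  // Cross-browser attach/detach pair: IE uses attachEvent/detachEvent,
  // Mozilla-based browsers use addEventListener/removeEventListener.
  function addEvent(element, type, handler) {
    if (element.addEventListener)
      element.addEventListener(type, handler, false);
    else if (element.attachEvent)
      element.attachEvent("on" + type, handler);
  }

  function removeEvent(element, type, handler) {
    if (element.removeEventListener)
      element.removeEventListener(type, handler, false);
    else if (element.detachEvent)
      element.detachEvent("on" + type, handler);
  }

  // Hypothetical gadget: keep a reference to every handler you attach so
  // that dispose() can detach it and null out DOM references, breaking the
  // JScript<->DOM circular references that leak memory in IE.
  function MyGadget(rootElement) {
    this._root = rootElement;
    this._onClick = function() { /* handle the click */ };
    addEvent(this._root, "click", this._onClick);
  }

  MyGadget.prototype.dispose = function() {
    removeEvent(this._root, "click", this._onClick);
    this._onClick = null;
    this._root = null; // null DOM references, per the SDK guidelines
  };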

There is also an interesting observation to be made about end-user perceptions of who's to blame for badly written gadgets. The comment about memory leaks in my gadgets answered the question of why Internet Explorer uses as much as 200MB of memory when running my Live.com start page. At first, I assumed the problem was with Live.com, and then after switching browsers to Firefox I saw some improvement and assumed the problem was with IE. It never crossed my mind that the problem was the poor coding in the gadgets on my page. This may just be because I was the author of many of the gadgets on my page, but I suspect that when the average end user hits problems with poorly written gadgets causing issues with Live.com or Windows Live Spaces pages, Microsoft is the entity that gets the blame, not the gadget developers.

Just like with Windows®, poorly written applications often reflect badly on the platform and not just the application. Interesting food for thought if you are interested in building Web platforms. 


 

Categories: Web Development | Windows Live

The Windows Live Wifi team has an introductory blog post entitled Hello World... which is excerpted below

I’m Stefan Weitz, director of planning for Windows Live WiFi. The team has been developing Windows Live WiFi Center over the past few months and it’s now time to let others experiment with it. The beta is currently limited to 5,000 people but will open up more broadly in the coming months.  If you are interested in participating please email your Windows Live ID (ex. JaneDoe@hotmail.com) to BellBeta@microsoft.com and we’ll get you on the list of interested parties.

Getting online in a WiFi world
Windows Live is all about unifying our customer’s online experience.  Well, let’s face it – you need to be connected (in one way or another) to have that world unified :).  The Windows Live WiFi Center is all about helping people get connected in a secure way – it’s essentially our first step at creating an integrated software solution that helps people find and securely connect to wireless networks around the world.  The Windows Live WiFi Center has a number of great features in this beta version (hint: beta = more features are coming soon…).

  • Hotspot locator: Provides you with the ability to search for free and fee-based wireless networks in countries around the world. The locator shows you the address, description, available amenities, service providers and shows you a map of the location.

  • Network Management: Helps you see what networks are available and makes it easy to get technical information about them, including their security configuration, signal strength, etc. In addition, you can tag networks as 'favorites' for future connections, track connection history, and manage network preferences.

  • Security: Our built-in security, using VPN technology, allows you to secure a wireless Internet connection on unsecured networks like those in hotels and coffee shops. This security feature comes free with the Windows Live WiFi Center product.

Sounds pretty sweet. I've known this product was coming but hadn't tried it out yet. Looks like I need to get hooked up with the beta. The HotSpot Locator sounds particularly interesting to me.


 

Categories: Windows Live

Nick Gall has a blog post entitled What were we thinking? where he writes

It just struck me, after 5+ years of analyzing the ins and outs of SOAP, how little need there has turned out to be for the SOAP binding model (i.e., binding SOAP onto various "transport" protocols). If some endpoint is going to go to all the trouble of enabling processing of URIs and XML (a prerequisite for processing the SOAP envelope), what are the chances that said endpoint would not go ahead and enable HTTP processing? The scenario of a mainframe endpoint that is able to process a SOAP envelope, but is unable to process HTTP to receive the envelope strikes me as ludicrous.
...
So who really cares that SOAP is able to be bound to MQ or IIOP or SMTP, today? Apparently, very few--since there has been virtually no progress towards standardizing any SOAP binding other than to HTTP for years.
...
Accordingly, it seems to me that the WS-* stack could be made a lot less complex for the average developer if the SOAP and WSDL binding models were simply deprecated and replaced with simpler "native HTTP" binding

This is one of those blog posts where I simultaneously agree and disagree with the author. I agree that a lot of the complexity in WS-*/SOAP/WSDL/etc has to do with the notion of "protocol independence". As I mentioned in a previous post entitled Protocol Independence is a Leaky Abstraction, the way SOAP and the various WS-* technologies achieve protocol independence is by basically ignoring the capabilities of the Web (i.e. HTTP and URIs) and re-inventing them in various WS-* specifications. This leads to a lot of unnecessary complexity and layering when you are already using HTTP as the transport protocol (i.e. the most common usage of SOAP).

On the flip side, there is something to be said for being able to use one distributed application model and support multiple protocols. For example, if you read the Windows Communication Foundation whitepaper you'll see it mentioned that WCF supports sending messages via HTTP, as well as the Transmission Control Protocol (TCP), named pipes, and Microsoft Message Queuing (MSMQ). I've actually been working on using this functionality in some of our internal Windows Live platform services, since the performance benefits of using TCP for communications instead of SOAP over HTTP are considerable. However we'll still have SOAP/HTTP end points since that is the lowest common denominator that SOAP-based services that interact with our services understand. In addition, I'd like to see some straight up PlainOldXml/HTTP or RESTful end points as well.
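As a sketch of what this looks like in practice, the same WCF service contract can be exposed over both protocols purely through configuration. The service and contract names here are hypothetical; basicHttpBinding and netTcpBinding are WCF's standard bindings for SOAP over HTTP and binary TCP respectively:

  <system.serviceModel>
    <services>
      <service name="Example.ProfileService">
        <!-- SOAP/HTTP endpoint: the lowest common denominator for external callers -->
        <endpoint address="http://localhost:8000/profile"
                  binding="basicHttpBinding"
                  contract="Example.IProfileService" />
        <!-- TCP endpoint for internal callers where performance matters -->
        <endpoint address="net.tcp://localhost:8001/profile"
                  binding="netTcpBinding"
                  contract="Example.IProfileService" />
      </service>
    </services>
  </system.serviceModel>

The appeal of the model is that the same service code sits behind both endpoints; the catch, as described below, is that anything inherently HTTP-specific has no analogue on the TCP endpoint.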

One of the main problems we've faced in our evaluation of moving to multiprotocol SOAP services is how much of a leaky abstraction the "protocol independent" nature of SOAP tends to be in real life. My favorite issue thus far is that we actually use HTTP redirects in our current SOAP web services. Guess what? There is no "protocol independent" WS-Redirect specification. So we have to roll our own solution for non-HTTP protocols.

We've hit a couple of other minor issues but in general the support we've gotten from Omri, Doug Purdy and others on the WCF team has been great. In fact, I've started to lose some of my skepticism about the WS-* family of technologies. I still think they are overkill for the Web though. ;)


 

Categories: XML Web Services

August 25, 2006
@ 12:25 AM

It looks like I'm now writing a Windows Live gadget every week. My latest gadget is a port of the Flickr badge to a Windows Live gadget. It's in the approval pipeline and should show up in the list of gadgets I've written in the next day or so. To get the gadget working, I had to use the Flickr API. Specifically, I used the flickr.people.findByUsername method to convert a username to an internal Flickr ID. Coincidentally, I had recently read something by Alex Bosworth criticizing this very aspect of the Flickr API in his post How To Provide A Web API, where he wrote

Simple also means don’t be too abstract. Flickr for example chooses in its API to require the use of its internal ids for all API calls. This means for example that every call to find information about a user requires a call first to find the internal id of the user. Del.icio.us on the other hand just requires visible names, in fact internal ids are hidden everywhere.

Actually, it's much worse than this. It seems that Flickr is inconsistent in how it maps user names back to internal IDs. For example, take Mike Torres, who has 'mtorres' as his Flickr ID. I can access his Flickr photos by going to http://flickr.com/photos/mtorres. When I use the Flickr API explorer for flickr.people.findByUsername and pass in 'mtorres' as the username I get back the following ID: 25553748@N00. But when I go to http://flickr.com/photos/25553748@N00 I end up on the page of some other person who seems to be named 'mtorres' as well.

However, when I plug "Mike Torres" into the flickr.people.findByUsername method instead of 'mtorres' I get '37996581086@N01', which turns out to be the right ID since going to http://www.flickr.com/photos/37996581086@N01 takes me to the same page as http://flickr.com/photos/mtorres. Weird.
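For reference, here's roughly what the lookup looks like from Javascript. The API key is a placeholder, error handling is omitted, and in a real gadget the request would have to go through a proxy since browsers block cross-domain XMLHttpRequest calls:

  // Look up a Flickr user's internal ID from a username via the REST API.
  function findFlickrId(username, callback) {
    var url = "http://api.flickr.com/services/rest/" +
              "?method=flickr.people.findByUsername" +
              "&api_key=YOUR_API_KEY" +  // placeholder
              "&username=" + encodeURIComponent(username);
    var req = new XMLHttpRequest();
    req.open("GET", url, true);
    req.onreadystatechange = function() {
      if (req.readyState == 4 && req.status == 200) {
        // The <user> element carries the internal ID in its nsid attribute,
        // e.g. nsid="37996581086@N01" for "Mike Torres".
        var users = req.responseXML.getElementsByTagName("user");
        callback(users.length > 0 ? users[0].getAttribute("nsid") : null);
      }
    };
    req.send(null);
  }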

Perhaps this is a naming collision caused by the merging of Flickr & Yahoo! IDs?


 

Over the weekend I blogged that Kiko was an example of sprinkling AJAX on old ideas that had failed in the 1990s. I found an even better example in a blog post by Richard MacManus entitled Ex-Googler starts Webwag, new personalized start page where he writes

"According to Poisson, Webwag’s revenue streams will include affiliate marketing – something Netvibes is doing via Kelkoo - and B2B deals, an as yet unexplored area. Chris previously suggested that white labelling this technology is one key revenue opportunity for these firms to consider.

Poisson said: "As Web 2.0 develops over the next three to five years, two things will remain. Firstly, everyone will have their own blog, and over 75% of people will have their own personalised start pages.

"My belief is the big search portals (My Yahoo etc) will get 50% of that market, and 50% will be taken by three to four independents.”"

Personally, I think that 50% figure for independents is too ambitious. I also question his claim that 75% of people will have a start page in 3-5 years, unless you count the likes of Yahoo.com as a 'personalized start page' (actually I suspect the distinction will be moot in 5 years' time).

If someone had told me that AJAX versions of My Yahoo! (i.e. a portal homepage), except with none of the integration with a family of websites that a portal provides, would be a hot startup idea, I'd have laughed them out of my office. This market was saturated five years ago. The thought that adding drag & drop to a portal homepage, without any rich integration with a family of sites, is a viable business seems pretty absurd to me. The barrier to entry is pretty much zero: all you need to do is grab the Yahoo! Javascript UI library and write some code to parse RSS feeds to get started, as the sketch below shows. That's beside the fact that the portal homepage is primarily a means to an end, since it exists to encourage people to stick around on the site and use other services (i.e. search), rather than being the core product in itself.
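Here is that sketch: fetch a feed, pull out titles and links, render a list. The feed URL is hypothetical, and a real page would fetch it via a same-domain proxy and escape the values before injecting them into the page:

  // Fetch an RSS feed and render its first ten items as a linked list.
  function renderFeed(feedUrl, targetElement) {
    var req = new XMLHttpRequest();
    req.open("GET", feedUrl, true);
    req.onreadystatechange = function() {
      if (req.readyState != 4 || req.status != 200) return;
      var items = req.responseXML.getElementsByTagName("item");
      var html = [];
      for (var i = 0; i < items.length && i < 10; i++) {
        var title = items[i].getElementsByTagName("title")[0].firstChild.nodeValue;
        var link = items[i].getElementsByTagName("link")[0].firstChild.nodeValue;
        html.push('<li><a href="' + link + '">' + title + "</a></li>");
      }
      targetElement.innerHTML = "<ul>" + html.join("") + "</ul>";
    };
    req.send(null);
  }

  // Usage: renderFeed("http://example.com/blog/rss.xml", document.getElementById("feed1"));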

The quote that 75% of people will have a personalized start page is the best part though. As if the lack of AJAX and drag & drop is the reason that 75% of the population don't have My Yahoo!, My MSN or My AOL pages. Yeah, right. 

This reminds me of a conversation I was having with Eric Fleischman yesterday about blogging and RSS becoming mainstream. We agreed that blogging is already mainstream because everyone has a MySpace, from politicians and school teachers to movie stars and DJs. On the other hand, I didn't think subscribing to feeds in a conventional aggregator would ever be used by a large percentage of the population. Subscribing to feeds seems cool to geeks because it solves a geek problem: having too many sources of information to keep track of and optimizing how this is done. The average person doesn't think it's cool to be able to keep track of 10 - 20 websites a day using some tool because they aren't interested in 10 - 20 websites on a daily basis in the first place. I'm sure a light sprinkling of AJAX can solve that problem as well.

*sprinkle* *sprinkle* *sprinkle*


 

Categories: Web Development

August 21, 2006
@ 11:14 PM

Matt Mullenweg has a blog post entitled MSN Spaces Numbers where he writes

Scoble has been questioning the claimed numbers of MSN Spaces and somehow the conversation got sidetracked in the technicalities of “what’s a blog?” I’m not sure what Microsoft hopes to gain by inflating their numbers so much, now claiming 70 million “blogs”, but it’s interesting to note back in March they were claiming 123 million blogs at SxSW (Flickr photo of their booth). Of course that was like 2 name changes and reorgs ago. Maybe 50 million people left the service?

I wasn't planning to blog about the recent round of player hating on Windows Live Spaces by certain bloggers, but the above claim by Matt Mullenweg that we are 'inflating' our numbers really got my goat.

First of all, the two numbers quoted above by Matt are unrelated metrics. The count of 123 million users is explained in the press release MSN Spaces Now Largest Blogging Service Worldwide, which states that comScore Media Metrix has measured the service's reach as 100 million unique visitors a month, in addition to 20 million unique visitors using the Chinese version of MSN Spaces. The 70 million number is the number of spaces that have been created since inception. This number isn't particularly interesting since it doesn't correlate to how many people are actually getting value out of the service.

For example, according to the LiveJournal statistics page their current statistics are

How many users, and how many of those are active?

  • Total accounts: 10,945,719
  • ... active in some way: 1,870,731
  • ... that have ever updated: 7,278,240
  • ... updating in last 30 days: 1,164,416
  • ... updating in last 7 days: 679,693
  • ... updating in past 24 hours: 204,465

According to those statistics, fewer than 1 in 5 LiveJournal accounts (about 17%) is active in any way. Of course, it sounds more impressive to tout 11 million LiveJournal accounts even though the number of active accounts is much smaller. For that reason, the number of spaces on Windows Live Spaces isn't a particularly interesting metric to me, nor is it to anyone I know who works on the product. We are more interested in the number of people who actually use our service and get value added to their lives by being able to share, discuss and communicate with their friends, families and total strangers.


 

Categories: Windows Live

August 21, 2006
@ 05:22 PM

It's hard for me to believe that it's been five years since I was an intern at Microsoft. It's still fun to go back and read my blog posts about my Microsoft interview, my impressions halfway through the experience and my parting thoughts at the end of the experience. I've started thinking about my internship again because I'm going to be the mentor/manager of an intern in a couple of weeks, and I've been taking strolls down memory lane trying to remember the experiences that made my internship worthwhile.

My favorite experience is the story behind how I got the article Using the ECMA Standards: An Interview with Miguel de Icaza published on MSDN while I was still in college, and Microsoft had only said negative things about Miguel's Mono project up until that article was published.

It all started with an article on C|Net entitled Open source steps in to duplicate .Net, which implied that Microsoft's licensing terms might not be favorable to Open Source implementations of the .NET Framework such as Mono and DotGNU. At the time, I thought it was rather two-faced of Microsoft to claim that the CLI and C# were going to be open ECMA standards but then threaten to prohibit Open Source implementations. So I fired off a ranting mail to the internal discussion list focused on the .NET Framework pointing out this inconsistency in Microsoft's position. At first, I got a bunch of replies smacking me down for daring to question Microsoft's strategy, but after a couple of supportive mails from coworkers like Fadi Fakhouri, Omri Gazitt and a couple of others, I eventually got routed to the right person. I met with Tony Goodhew, who was quoted in the C|Net article, and he set me straight. When I found out that the licensing threat wasn't real, I mentioned that it would be a great sign of goodwill to the Open Source community if Microsoft showed just how supportive it was of such projects. Since I'd also gotten to know the author of the Dr. GUI columns on MSDN via another flame war email discussion, I had connections at MSDN and mentioned the idea to them as well. The MSDN folks liked the idea, and when I pitched it to Miguel de Icaza he did as well. Although it only took a few email exchanges between Miguel and me to get the meat of the interview done, I didn't get the article completely edited and approved by MSDN until after my internship was done.

It was a pretty big deal for me when the article was published, especially since Slashdot ran the story multiple times. The fact that I was just some punk intern and I got Microsoft to officially endorse Mono on MSDN was a big deal to me. The entire event made me appreciate Microsoft as a company and was a key factor in my decision to come to work for Microsoft full-time.

Now I'm trying to make sure I create an environment where the intern I'll be mentoring over the next few months can have similar experiences. If you are or have been an intern at Microsoft and don't mind sharing what rocked or sucked about your internship, I'd appreciate your comments.


 

Categories: Life in the B0rg Cube

The Windows Live Dev website has a new entry entitled New! Windows Live Contacts Gadget (beta) which states

Learn how, with nothing more than a little JavaScript, you can allow customers to use their Windows Live Contacts (Hotmail/Windows Live Mail and Messenger contacts) directly from your Web site.

To get started check out all of our developer info, the two working samples we’ve posted, and read the blog posts by one of the guys who developed it: Danny Thorpe.

What the gadget does is pretty simple yet powerful. It allows you to add a gadget to your page which logged-in Windows Live users can use to retrieve information about their Windows Live Messenger or Hotmail contacts and then feed that data into your service. Think of it as adding form-fill or address auto-complete functionality to your site, powered by that person's address book in Windows Live services.


 

Categories: Windows Live

UPDATE: On inspecting the code, it seems that my assertions in this post are incorrect. The change we made in the last release was not to enable Javascript by default; instead it was to always ignore the Javascript setting chosen by the user for the newspaper view. This means that the current release of RSS Bandit is vulnerable to the majority of the flaws outlined in the article linked below. I'll work on getting a release out that addresses this issue as soon as I can, although this is complicated by the fact that we may not have a snapshot for the last release AND the first half of this week is very busy for me at work. If this security issue is a serious concern to you, my advice is to not use RSS Bandit until a release that addresses these issues is available, or to switch to v1.3.0.29 of RSS Bandit, which does honor the specified security restrictions for the newspaper view.

A number of people have either sent me email or posted on the RSS Bandit forums asking whether RSS Bandit is vulnerable to the various issues raised in the article Blog feeds may carry security risk which states

LAS VEGAS--Reading blogs via popular RSS or Atom feeds may expose computer users to hacker attacks, a security expert warns.

Attackers could insert malicious JavaScript in content that is transferred to subscribers of data feeds that use the popular RSS (Really Simple Syndication) or Atom formats, Bob Auger, a security engineer with Web security company SPI Dynamics, said Thursday in a presentation at the Black Hat security event here.
...
"A large percentage of the readers I tested had some kind of an issue," he said. In his presentation, Auger listed Bloglines, RSS Reader, RSS Owl, Feed Demon, and Sharp Reader as vulnerable.

As protection, people could switch to a nonvulnerable reader. Also, feed publishers could ensure that their feeds don't include malicious JavaScript or any script at all, Auger said. Some services, however, rely on JavaScript to deliver ads in feeds, he noted.

To prevent these sorts of issues, RSS Bandit allows users to optionally disable the running of Javascript, ActiveX or Java code in its Options dialog. Up until the last release we disabled Javascript, ActiveX and Java by default. However, in the last release we switched on Javascript by default to enable a particular feature (specifically, clicking the envelope or flag on an item in the newspaper view to change the item's read or flagged state). This means that by default RSS Bandit is vulnerable to the Javascript-related issues mentioned in this article.

How to change this state of affairs is mentioned in the section of our user documentation entitled Changing the web browser security settings which has a screenshot of the Web Browser tab of the Options dialog where browser security restrictions can be set. 

Our users should configure the options to whatever best eases their security concerns. I'm still debating what we need to do here in the long term, but one thing I doubt we'll do is strip potentially malicious HTML tags, since this seems to be a sledgehammer-like approach which may strip valid markup (e.g. <style> tags) from content. It's more likely that I'll remove the features that require enabling Javascript by default than go that route. I'd appreciate thoughts from our users on this.

Update: I was one of the developers contacted by James Snell and have failed to get back to him since I haven't gone through all of the tests he sent me yet.


 

Categories: RSS Bandit

Jon Udell proves again why he's my favorite technology journalist with his piece Why Microsoft should open XAML where he writes

The WPF/E runtime won’t implement all of XAML (XML Application Markup Language), a .Net language tuned for declarative application layout. But “the portion of XAML we’ve picked,” Gates told me, “will be everywhere, absolutely everywhere, and it has to be.”

“Everywhere” means the kind of ubiquity that the Flash player enjoys on Windows and Mac desktops, and to a lesser extent on Unix and handheld devices. And it sets up an arms race between Adobe and Microsoft, each giving away razors (that is, players) in order to sell blades (development tools).

Here’s a crazy idea: Open-source the WPF/E, endorse a Mono-based version, and make XAML an open standard. Why? Because an Adobe/Microsoft arms race ignores the real competition: Web 2.0, and the service infrastructure that supports it.

The HTML/JavaScript browser has been shown to be capable of tricks once thought impossible. Meanwhile, though, we’re moving inexorably toward so-called RIAs (rich Internet applications) that are defined, at least in part, by such declarative XML languages as Adobe’s MXML, Microsoft’s XAML, Mozilla’s XUL (XML User Interface Language), and a flock of other variations on the theme.

Imagine a world in which browsers are ubiquitous, yet balkanized by incompatible versions of HTML. That’s just where RIA players and their XML languages are taking us. Is there an alternative? Sure. Open XAML. There’s a stake in the ground that future historians could not forget.

When building rich internet applications today, the primary choices are AJAX and Flash. The reason that these are the two primary choices versus other options like Java, ActiveX, XUL, etc. is their ubiquity. And AJAX is typically preferred over Flash because it doesn't require expensive development tools and there is the perception that AJAX is less proprietary than Flash.

Any technology that aims to compete with Flash and AJAX has to be cross platform (i.e. work in Firefox and Internet Explorer at the minimum) and ubiquitous. Ubiquity can be gained either by taking advantage of the existing technologies within the browsers or by ensuring that the process for getting the runtimes on users' machines is seamless. I have no doubt that Microsoft can eventually get development platforms ubiquitous on Windows. Earlier this week, I was reading a number of blog posts from people who tried out Windows Live Writer and don't remember anyone complaining about needing to have the .NET Framework installed to run it. It took a few years, but if those blog posts are any indication, the .NET Framework is now on a majority of PCs running Windows.

If WPF/E is meant to be used in the same situations that AJAX and Flash are used in today, then it needs to give developers better advantages than the incumbents. Being ubiquitous and cross platform would still just get it in the door. Jon Udell's idea to make it an open platform, on the other hand, may take it to the tipping point. At the end of the day, Microsoft should favor building the ecosystem of rich internet applications that are accessible from Windows PCs over competing with Adobe for dollars from selling development tools for rich internet applications. That seems to be the better strategy to me.

Disclaimer: The above post contains my own opinions and does not reflect the intentions, strategies, plans or thoughts of my employer


 

Categories: Programming | Web Development

I was just reading Paul Graham's post entitled The Kiko Affair, which talks about the recent failure of Kiko, an AJAX web-calendaring application. I was quite surprised to see the following in his post

The killer, unforeseen by the Kikos and by us, was Google Calendar's integration with Gmail. The Kikos can't very well write their own Gmail to compete.

Integrating a calendaring application with an email application seems pretty obvious to me, especially since the most popular use of calendaring applications is scheduling meetings via Outlook/Exchange in corporate environments. What's surprising to me is how surprised people are that an idea that failed in the 1990s will turn out any differently now just because you sprinkle the AJAX magic pixie dust on it.

Kiko was a feature, not a full-fledged online destination let alone a viable business. There'll be a lot more entrants into the TechCrunch deadpool that are features masquerading as companies before the "Web 2.0" hype cycle runs its course. 


 

I just uploaded a few gadgets to Windows Live Gallery and thought I should share something cool I learned from Jay Fluegel, the PM for gadgets in Windows Live Spaces. If you see a cool gadget on someone's space that you'd like to add to your space or portal page, all you need to do is click the '+' in the top-right corner of the gadget, as shown in the screenshot below, and voilà.

That's pretty hot and brain-dead simple too. Definitely beats having to trawl Windows Live Gallery every time you see a cool gadget that you'd like to add to your space or personalized home page.


 

Categories: Windows Live

Caterina Fake of Flickr has a blog post entitled BizDev 2.0 where she writes

Several companies -- probably more than a dozen -- have approached us to provide printing services for Flickr users, and while we were unable to respond to most of them, given the number of similar requests and other things eating up our time, one company, QOOP, just went ahead and applied for a Commercial API key, which was approved almost immediately, and built a fully-fleshed out service. Then after the fact, business development on our side got in touch, worked out a deal -- and the site was built and taking orders while their competitors were still waiting for us to return their emails. QOOP even patrols the discussions on the Flickr boards about their product, and responds and makes adjustments based on what they read there. Now that's customer service, and BizDev 2.0.

Traditional business development meant spending a lot of money on dry cleaning, animating your powerpoint, drinking stale coffee in windowless conference rooms and scouring the thesaurus looking for synonyms for "synergy". Not to mention trying to get hopelessly overbooked people to return your email. And then after the deal was done, squabbling over who dealt with the customer service. Much, much better this way!

I know exactly where Caterina is coming from. Given that I work on the platform that powers Windows Live Spaces, which has over 100 million users and 5.2 billion photos with over 6 million more uploaded daily, I've been on the receiving end of similar conversations about business partnerships revolving around integrating with the blogs, photo albums, lists and user profiles in our service. All of these partnerships have sounded obsolete to me in the age of open APIs. It seems much better to support de-facto industry standards like the MetaWeblog API, which enables any tool or website to integrate with our service, than to have proprietary APIs that can only be accessed by people who've struck exclusive business deals with us. That seems better for our service and better for our users to me.
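For context on why an open API changes the game here: the MetaWeblog API is plain XML-RPC over HTTP, so any tool that can POST a payload like the sketch below to a service's API endpoint can publish a post, with no exclusive deal required. The blog ID and credentials are placeholders:

  <?xml version="1.0"?>
  <methodCall>
    <methodName>metaWeblog.newPost</methodName>
    <params>
      <param><value><string>BLOG_ID</string></value></param>
      <param><value><string>username</string></value></param>
      <param><value><string>password</string></value></param>
      <param><value><struct>
        <member><name>title</name>
          <value><string>Hello world</string></value></member>
        <member><name>description</name>
          <value><string>Posted from any MetaWeblog-capable tool.</string></value></member>
      </struct></value></param>
      <param><value><boolean>1</boolean></value></param>
    </params>
  </methodCall>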

This definitely changes the game with regards to how our business development folks approach certain types of business partnerships. I probably wouldn't have called it BizDev 2.0 though. ;) 


 

August 16, 2006
@ 12:56 PM

In the post entitled Something went wrong at the W3C? Anne van Kesteren has a collection of links to rants about the W3C from Web-standards geeks that make for sobering reading. The post is excerpted below

Something went wrong at the W3C? Lets see:

  1. To Hell with WCAG 2
  2. Leaving W3C QA Dev.
  3. An angry fix
  4. SVG12: brief clarification on formal objections
  5. SVG Tiny 1.2 in Candidate Wreckommendation stage
  6. What's Wrong With The SVG Working Group
  7. Angry Indeed

Reading some of these rants takes me back to the days when I worked on the XML team at Microsoft and how I grew to loathe the W3C and standards bodies in general. All of the above links are recommended reading for anyone who is interested in Web standards. An observation that stood out for me comes from Joe Clark's rant, To Hell with WCAG 2, where he wrote

And now a word about process, which you have to appreciate in order to understand the result. The Web Content Accessibility Guidelines Working Group is the worst committee, group, company, or organization I’ve ever worked with. Several of my friends and I were variously ignored; threatened with ejection from the group or actually ejected; and actively harassed. The process is stacked in favour of multinationals with expense accounts who can afford to talk on the phone for two hours a week and jet to world capitals for meetings.

The WCAG development process is inaccessible to anyone who doesn’t speak English. More importantly, it’s inaccessible to some people with disabilities, notably anyone with a reading disability (who must wade through ill-written standards documents and e-mails—there’s already been a complaint) and anyone who’s deaf (who must listen to conference calls). Almost nobody with a learning disability or hearing impairment contributes to the process—because, in practical terms, they can’t.

This sounds like an apt description of the W3C working groups I used to track, namely the XML Schema working group and the XML Query working group, both of which [in my opinion] have done more harm than good for the Web and XML by simply existing and retarding progress on the technologies they have failed to produce.

The question I sometimes ponder is: what's the alternative? De-facto standards based on proprietary technologies seem to be one option, as evidenced by the success of RSS and IXMLHttpRequest. There is also something to be said for the approach taken by the Microformats community. Either approach seems preferable to the current mess we have with the W3C's approach to standards development.


 

Categories: Web Development

August 16, 2006
@ 11:01 AM

Robert Scoble has a blog post entitled Blogs and Digg, not geeky enough? where he writes

I notice a general trend looking through blogs, TechMeme, and Digg. There aren’t many coders anymore.

Five years ago the discussions were far more technical and geeky. Even insiderish. When compared to the hype and news of today.

It makes me pine for ye old RSS vs. Atom geek flamefests.

Anyone else notice this trend?

Sites like TechMeme and Digg home in on what is popular to the general audience, even if it is the general audience interested in software. There are more people interested in the impact of software-powered companies like Google, Yahoo!, Microsoft, MySpace, YouTube, and so on than there are people interested in the technology that powers these companies. There are going to be more people speculating about Google's next new service than those interested in a dissection of how the AJAX on one of Google's sites works. There are more people talking about Google Maps mashups than there are people talking about how to build them. There are more people interested in the next "Web 2.0" startup that Yahoo! is going to buy than are interested in technical language wars about whether Flash or AJAX is the way to go in building such sites. That's why you won't see Raymond Chen, Simon Willison or Jon Udell on TechMeme and Digg as often as you'll see the Michael Arringtons, Robert Scobles and Om Maliks of the world.

This doesn't mean "there aren't many coders anymore" as Robert Scoble suggests. It just means that there are more people interested in the 'industry' part of the "software industry" than in the 'software' part. What else is new?


 

Categories: Technology

August 16, 2006
@ 03:45 AM

Today I learned about developers.facebook.com which proclaims

Welcome to Facebook Developers (beta), where you can create your own projects using our application programming interface (API).

In case you've been living under a rock for the past couple of months, The Facebook is like MySpace but for college kids. It seems to have made the transformation from a cool application of the moment (a.k.a. a fad) to an actual must-have utility among the college students I've talked to about it. I've heard college girls say they look guys up on The Facebook as part of pre-date screening, and these were sorority girls, not geeks. The fact that they are providing an API is a very interesting turn of events, especially when you consider their dominant position in their market.

I'm particularly interested in the Facebook API because I've been thinking about what we should do to expose the Windows Live friends list via an API. The problem with exposing APIs for social networks and contact lists is that in the worst case your API gets a lot of usage from your competitors (e.g. Zooomr vs. Flickr). I've been thinking about this problem on-and-off for the past couple of months and was interested to see how the Facebook API handled it, or whether, like me, they'd come to the conclusion that if the main use of your API is people trying to leave your service then you've got bigger problems than the API. I checked out the definition of the facebook.friends.get method and was left with more questions than answers. The API states

facebook.friends.get

Returns the identifiers of the current user's Facebook friends. The current user is determined from the session_key. The values returned from this call are not storable.

Response

The friend ids returned are those friends visible to the calling application. If no friends are found, the method will return an empty result element.

Two caveats stand out: that 'the values returned from this call are not storable' and that only friends 'visible to the calling application' are returned. I wonder what exactly is meant by 'values returned from this call are not storable'. Is this legal wording? Are the values encrypted in some way? What exactly does that mean? It looks like I may need to do some sleuthing around the forums, except I don't have a Facebook account. Hmmm...

I was also interested in the authentication model used by the Facebook API. From reading the documentation, their authentication scheme reminds me of Yahoo's Browser-Based Authentication scheme in that it requires users to always log in from a browser and then either be redirected back to the calling page (much like Microsoft's Passport, now Windows Live ID) or, in the case of a desktop application, have the application re-use the URL it got after the browser was closed. Surely there must be a better way to authenticate desktop applications against online services than having them launch a Web browser and forcing a separate, dissonant sign-in process.

PS: If the Facebook API sounds interesting to you and you'd like to do similar things with the Windows Live friends list I'd love to hear what your scenarios are. Holla at me. 


 

I've been spending some time over the past couple of months thinking about Web services and Web APIs. Questions like when a web site should expose an API, what form the API should take and what technologies/protocols should be used are topics I've rehashed quite a lot in my head. Recently I came to the conclusion that if one is going to provide a Web service that is intended to be consumed by as many applications as possible, then one should consider exposing the API using multiple protocols. I felt that at least two protocols should be chosen: SOAP over HTTP (for the J2EE/.NET crowd) and Plain Old XML (POX) over HTTP (for the Web developer crowd).

However, I've recently started spending a bunch of time writing Javascript code for various Windows Live gadgets, and I've begun to appreciate the simplicity of using JSON over parsing XML by hand in my gadgets. I've heard similar comments echoed by co-workers such as Matt, who's been spending a bunch of time writing Javascript code for Live Clipboard, and Yaron Goland, who's one of the minds working on the Windows Live developer platform. JSON has similar goals to XML-RPC and W3C XML Schema in that it provides a platform-agnostic way to transfer data which is encoded as structured types consisting of name<->value pairs and collections of name<->value pairs. It differs from XML-RPC by not getting involved with defining a mechanism for remote procedure calls, and from W3C XML Schema by being small, simple and focused.

Once you start using JSON in your AJAX apps, it gets pretty addictive and it begins to seem like a hassle to parse XML even when it's just plain old XML such as RSS feeds, not complex crud like SOAP packets. However, being an XML geek, there are a couple of things I miss from XML that I'd like to see in JSON, especially if its usage grows to become as widespread as XML's is on the Web today. Yaron Goland feels the same way and has started a series of blog posts on the topic.

In his blog post entitled Adding Namespaces to JSON Yaron Goland writes

The Problem

If two groups both create a name "firstName" and each gives it a different syntax and semantics how is someone handed a JSON document supposed to know which group's syntax/semantics to apply? In some cases there might be enough context (e.g. the data was retrieved from one of the group's servers) to disambiguate the situation but it is increasingly common for distributed services to be created where the original source of some piece of information can trivially be lost somewhere down the processing chain. It therefore would be extremely useful for JSON documents to be 'self describing' in the sense that one can look at any name in a JSON document in isolation and have some reasonable hope of determining if that particular name represents the syntax and semantics one is expecting.

The Proposed Solution

It is proposed that JSON names be defined as having two parts, a namespace name and a local name. The two are combined as namespace name + "." + local name to form a fully qualified JSON name. Namespace names MAY contain the "." character. Local names MUST NOT contain the "." character. Namespace names MUST consist of the reverse listing of subdomains in a fully qualified DNS name. E.g. org.goland or com.example.bigfatorg.definition.

To enable space savings and to increase both the readability and write-ability of JSON a JSON name MAY omit its namespace name along with the "." character that concatenated it to its local name. In this case the namespace of the name is logically set to the namespace of the name's parent object. E.g.

{ "org.goland.schemas.projectFoo.specProposal" :
"title": "JSON Extensions",
"author": { "firstName": "Yaron",
"com.example.schemas.middleName":"Y",
"org.goland.schemas.projectFoo.lastName": "Goland",
}
}

In the previous example the name firstName, because it lacks a namespace, takes on its parent object's namespace. That parent is author, which also lacks a namespace, so recursively author looks to its parent specProposal, which does have a namespace: org.goland.schemas.projectFoo. middleName introduces a new namespace, "com.example.schemas"; if its value were an object then the names in that object would inherit the com.example.schemas namespace. Because the use of the compression mechanism is optional, the lastName value can be fully qualified even though it shares the same namespace as its parent.

My main problem with the above approach is echoed by the first comment in response to Yaron's blog post: the namespace scheme defined above isn't compatible with XML namespaces. This means that if I have a Web service that emits both XML and JSON, I'll have to use different namespace names for the same elements even though all that differs is the serialization format. That disagreement over the syntax of namespace names aside, I think this would be a worthwhile addition to JSON.
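To make the incompatibility concrete, compare how the same element would be qualified in each serialization (the names below are illustrative). An XML namespace name is a URI, while the proposal uses reverse-DNS names, so a service emitting both formats can't share one namespace name between them:

    <!-- XML: the namespace name is a URI -->
    <specProposal xmlns="http://schemas.example.com/projectFoo">
      <title>JSON Extensions</title>
    </specProposal>

    // JSON under the proposal: the namespace is a reverse-DNS prefix
    { "com.example.schemas.projectFoo.specProposal":
        { "title": "JSON Extensions" } }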

In another blog post entitled Adding Extensibility to JSON Data Formats Yaron Goland writes

The Problem

How does one process JSON messages so that they will support both backwards and forwards compatibility? That is, how does one add new content into an existing JSON message format such that those who do not understand the extended content will be able to safely ignore it?

The Proposed Solution

In the absence of additional information providing guidance on how to handle unrecognized members a JSON processor compliant with this proposal MUST ignore any members whose names are not recognized by the processor.

For example, if a processor was expecting to receive an object that contained a single member with the name "movieTitle" and instead it receives an object with multiple members including "movieTitle", "producer" and "director" then the JSON processor would, by default, act as if the "producer" and "director" members were not present.

An exception to this situation would be a member named "movie" whose value is an object where the semantics of the members of that object is "the local name of the members of this object are suitable for presenting as titles and their values as text under those titles". In that case regardless of the processor's direct knowledge of the semantics of the members of the object (e.g. the processor may actually know about movieTitle but not "producer" or "director") the processor can still process the unrecognized members because it has additional information about how to process them.

This requirement does not apply to incorrect usage of recognized names. For example, if the definition of an object only allowed a single "movieTitle" member then having two "movieTitle" members is simply an error and the ignore rule does not apply.

This specification does not require that ignored members be removed from the JSON structure. It is quite possible that other processors who will deal with the message may recognized members the current processor does not. Therefore it would make sense to let unrecognized members remain in the JSON structure so that others who process the structure may benefit from the extended information.

Definition: Simple value - A value of a type other than array or object.

If a JSON processor encounters an array where it had expected to encounter a simple value the processor MUST retrieve the first simple value in the array and treat that as the value it was expecting and ignore the other elements in the array.

Again, it looks like I'm going to go ahead and parrot the same feedback as a commenter on the original blog post. Defining an extensibility model where simple types can be converted to arrays in a future version seems like overkill and unnecessary complexity. It's not like it's that hard to add another field to the type. The other thing that struck me about this blog post is that it seems to define a problem that doesn't really exist. It's not like there are specialized JSON parsers in widespread use that barf if they see a field they don't understand. Requiring that the fields of various types be defined up front, and barfing when encountering undefined fields over the wire, is primarily a limitation of statically typed languages and isn't really a problem for dynamic languages like JavaScript. Or am I missing something?
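Here's what I mean, using the movie fields from Yaron's own example. A Javascript consumer gets the 'must ignore unknown members' rule for free, and even the proposed array rule is a one-liner rather than something that needs a spec:

    // Unknown members are ignored for free: this consumer only ever
    // touches movieTitle, so "producer" and "director" never matter.
    function getTitle(jsonText) {
        var movie = eval("(" + jsonText + ")"); // trusted data assumed
        return movie.movieTitle;
    }

    // Yaron's array rule in one line: if a later version of the format
    // turns movieTitle into an array, take its first simple value.
    function getTitleV2(movie) {
        var t = movie.movieTitle;
        return (t instanceof Array) ? t[0] : t;
    }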


 

Categories: XML Web Services

August 14, 2006
@ 07:24 PM

I've been late to blog about this because I was out on vacation but better late than never. J.J. Allaire (yes, that one) has a blog post entitled Introducing Windows Live Writer which announces Microsoft's desktop blogging tool called Windows Live Writer. He writes

Introducing Windows Live Writer

Welcome to the Windows Live Writer team blog! We are excited to announce that the Beta version of Windows Live Writer is available for download today.

Windows Live Writer is a desktop application that makes it easier to compose compelling blog posts using Windows Live Spaces or your current blog service. Blogging has turned the web into a two-way communications medium. Our goal in creating Writer is to help make blogging more powerful, intuitive, and fun for everyone. Writer has lots of features which we hope make for a better blogging experience. Some of the ones we are most excited about include:

WYSIWYG Authoring

 The first thing to notice about Writer is that it enables true WYSIWYG blog authoring. You can now author your post and know exactly what it will look like before you publish it. Writer knows the styles of your blog such as headings, fonts, colors, background images, paragraph spacing, margins and block quotes and enables you to edit your post using these styles. ...

Photo Publishing

Writer makes inserting, customizing, and uploading photos to your blog a snap. You can insert a photo into your post by browsing image thumbnails through the “Insert Picture” dialog or by copying and pasting from a web page...Photos can be either uploaded directly to your weblog provider (if they support the newMediaObject API) or to an FTP server.

Writer SDK

 Already thinking of other cool stuff you want to insert into your blog? Good!

The Windows Live Writer SDK allows developers to extend the capabilities of Writer to publish additional content types. Examples of content types that can be added include:

  1. Images from online photo publishing sites
  2. Embedded video or audio players
  3. Product thumbnails and/or links from e-commerce sites
  4. Tags from tagging services

This is one project I've been dying to blog about for months. Since I was responsible for the blogging and the upcoming photo publishing APIs for Windows Live Spaces, I've spent the last couple of weeks working with the team to make sure that the user experience when using Windows Live Writer and Windows Live Spaces is great. I'd like to hear if you think we've done a good job.
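Since the announcement mentions the newMediaObject API, here's roughly what that call looks like on the wire for anyone curious. It's an XML-RPC call from the MetaWeblog API whose struct takes name, type and bits members per the MetaWeblog spec; all the values below are placeholders (bits carries the base64-encoded file).

    <?xml version="1.0"?>
    <methodCall>
      <methodName>metaWeblog.newMediaObject</methodName>
      <params>
        <param><value><string>MyBlog</string></value></param>
        <param><value><string>someuser@example.com</string></value></param>
        <param><value><string>password</string></value></param>
        <param><value><struct>
          <member><name>name</name>
            <value><string>vacation/sunset.jpg</string></value></member>
          <member><name>type</name>
            <value><string>image/jpeg</string></value></member>
          <member><name>bits</name>
            <value><base64>UGxhY2Vob2xkZXIgYnl0ZXM=</base64></value></member>
        </struct></value></param>
      </params>
    </methodCall>

The response is a struct containing the url of the uploaded file, which the tool then references from the post body.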

PS: The application is not only chock-full of features but is also very extensible. Tim Heuer has already written plugins to enable integration with Flickr and add tagging support. If you are a developer, you should also download the SDK and see what extensions you can hack into the app.


 

Categories: Windows Live

In my previous post, I talked about some of the issues I saw with the idea of doing away with operations teams and merging their responsibilities into the development team's tasks [as practised at companies like Amazon]. Justin Rudd, who is a developer at Amazon, posts his first-hand perspective of this practice in his blog post entitled Expanding on the Pain where he writes

Since I am a current employee of Amazon in the software development area, I probably shouldn’t be saying this, but…
...

First a few clarifications - there is no dedicated operations team for Amazon as a whole that is correct.  But each team is allowed to staff as they see fit.  There are teams within Amazon that have support teams that do handle quite a bit of the day to day load.  And their systems tend to be more “smooth” because this is what that team does - keep the system up and running and automate keeping the system up and running so they can sleep at night.

There are also teams dedicated to networking, box failures, etc.  So don’t think that developers have to figure out networking issues all the time (although there are sometimes where networking doesn’t see a problem but it is affecting a service).

Now for those teams that do not have a support team (and I am on one of them), at 3 in the morning you tend to do the quickest thing possible to get the problem rectified.  Do you get creative?  After being in bed for 3 hours (if you’re lucky) and having a VP yell at you on the phone that this issue is THE most important issue there is or having someone yell at you that they are going to send staff home, how creative do you think you can be?  Let me tell you, not that creative.  You’re going to solve the problem, make the VP happy (or get the factory staff back to work), and go back to bed with a little post it note to look for root cause of the problem in the AM.

Now 1 of 2 things happens.  If you have a support team, you let them know about what happened, you explain the symptoms that you saw, how you fixed it, etc.  They take your seed of an idea, plant it, nurture it, and grow it.

If you don’t have a support team and you are lucky, in the morning there won’t be another THE most important thing to work on and you can look at the problem with some sleep and some creativity.  But the reality is - a lot of teams don’t have that luxury.  So what happens?  You end up cronning your solution which may be to bounce your applications every 6 hours or run a perl script that updates a field at just the right place in the database, etc.

We all have every intention of fixing it, but remember that VP that was screaming about how this issue had to be fixed?  Well now that it isn’t an issue anymore and it’s off his radar screen, he has new features that he wants pushed into your code.  And those new features are much more important than you fixing the issue from the night before because the VP really doesn’t care if you get sleep or not at night.

Justin's account jibes with what I've heard [second hand] from ex-Amazon developers about what it means to live without an operations team. Although it sounds good on paper to have the developers who write the code also be responsible when there are issues with that code on the live site, it leads to burning the candle at both ends. Remember, division of labor exists for a reason.
 

Categories: Web Development

A few weeks ago, I bookmarked a post from Sam Ruby entitled Collapsing the Stack where he wrote

Werner Vogels: Yep, the best way to completely automate operations is to have the developers be responsible for running the software they develop. It is painful at times, but also means considerable creativity gets applied to a very important aspect of the software stack. It also brings developers into direct contact with customers and a very effective feedback loop starts. There is no separate operations department at Amazon: you build it; you run it.

Sounds like a very good idea.

I don't see how this sounds like a good idea. It reminds me of a conversation I once had with someone at Microsoft who thought it would be a good idea to get rid of their test team and replace them all with developers once they moved to Test Driven Development. I used to be a tester when I first joined Microsoft and this seemed to me like the kind of statement made by someone who assumes that the only thing testers do is write unit tests. Good test teams don't just write unit tests. They develop and maintain test tools. They perform system integration testing. They manage the team's test beds and test labs. They are the first line of defence when attempting to reproduce customer bug reports before pulling in developers who may be working on your next release. All of this can be done by the development team, but it means that your developers spend less time developing and more time testing. This cost will show up either as an increase in the time it takes to get to market or as a reduction in quality if schedules are not adjusted to account for this randomization of the development team. Eventually you'll end up recreating your test team so there are specific people responsible for test-related activities [which is why software companies have test teams in the first place].

The same reasoning applies to the argument for folding the responsibilities of your operations team into the development team's tasks. A good operations team isn't just responsible for the deployment/setup of applications on your servers and monitoring the health of the Web servers or SQL databases in your web farm. A good operations team is involved in designing your hardware SKUs and understanding your service's peak capacity so as to optimize purchase decisions. A good operations team makes the decisions around your data centers, from picking locations with the best power prices to ensuring that you're making optimal use of all the physical space in your data center. A good operations team is the first line of defence when your service is being hit by a Denial of Service attack. A good operations team insulates the team from worrying about operating system, web server or database patches being made to the live site. A good operations team is involved in the configuration, purchase, deployment and [sometimes] development of load balancing, database partitioning and database replication tools. Again, you can have your development team do all this, but eventually it makes more sense for these tasks to be owned by specific individuals instead of being split across the developers building your core product.

PS: I've talked to a bunch of folks who know ex-Amazon developers and they tend to agree with my analysis above. I'd be interested in getting the perspective of ex-Amazon developers like Greg Linden on replacing your operations team with your core development staff.

PPS: Obviously this doesn't apply if you are a small 2 to 5 person startup. Everyone ends up doing everything anyway. :)


 

Categories: Web Development

August 8, 2006
@ 08:30 PM

This morning I got an IM from Niall Kennedy letting me know that he was Leaving Microsoft. He begins his blog post about leaving by writing

I am leaving Microsoft to start my own company. My last day at Microsoft is next Friday, August 18. It's uncertain whether Microsoft will continue the feed platform work I started, but it's some good stuff so I hope they do.

As the person who referred Niall to the company and gave him some advice when he was weighing whether to join Windows Live, I am sad to see him leave so soon. I sympathize with his reasons for leaving although some of what he wrote is inaccurate and based on speculation rather than the actual facts of the matter. That said, I found Niall to be quite knowledgeable, smart and passionate, so I expect him to do well in his endeavors.

Good luck, Niall.


 

Don Demsak has a blog post entitled Open Source Projects As A Form Of Community Service which links to a number of blog posts about the death of the NDoc project. He writes

Open source projects have been the talk of the tech blogs recently with the announcement that NDoc 2 is Officially Dead, along with the mention that the project's sole developer was the victim of an automated mail-bomb attack because the project wasn't getting a .Net 2.0 version out fast enough for some people's liking.  Kevin has decided to withdraw from the community, and fears for himself and his family.  The .Net blogging community has had a wide range of reactions:

  • Phil Haack talks about his ideas behind helping/saving the open source community and laid down a challenge. 
  • Eric Wise mentions that he will not work on another FOSS project. 
  • Scott Hanselman laments that Microsoft hasn't put together an INETA-like organization to handle giving grants to open source projects, and also shows how easy it is to submit a patch/fix to a project.
  • Peter Provost worries that bringing money into the equation may spoil the cool part of community developed software, and that leadership is the key to good open source projects.
  • Derek Denny-Brown says that "Microsoft needs to understand that Community is more than just lots of vendors creating commercial components, or MVPs answering questions on newsgroups".

I've been somewhat disappointed by the Microsoft developer division's relationship with Open Source projects based on the .NET Framework and its attitude towards source code availability in general. Derek Denny-Brown's post entitled Less Rambling Venting about Developing for .Net hit the nail on the head for me. There are a number of issues with the developer community around Visual Studio and the .NET Framework that are raised in Derek's post and the others mentioned above. The first is what seems like a classic case of Not Invented Here (NIH): Microsoft has not only failed to support Open Source projects that fill useful niches in the Visual Studio ecosystem but eventually competes with them (NAnt vs. MSBuild, NUnit vs. Visual Studio Team System and now Sandcastle vs. NDoc). My opinion is that this is a consequence of Microsoft's strategy of integrated innovation, which encourages Microsoft's product teams to pursue a seamless end-to-end experience where every software application in the business process is a Microsoft product.

Another issue is Microsoft's seeming ambivalence and sometimes antipathy towards Open Source software. This is related to the fact that the ecosystem around Microsoft's software platforms (i.e. customers, ISVs, etc) is heavily tilted towards commercial software development. Or is that vice versa? Either way, commercial software developers tend to view Open Source as the bane of their existence. This is unfortunate given that practically every major software development platform that the .NET Framework and Visual Studio compete with is either Open Source (e.g. PHP, Perl, Python, Ruby) or at the very least encourages source code availability (e.g. Java). Quite frankly, I personally would love to see the .NET Framework class libraries become Open Source or at the very least have their source code available in the same way Sun has done with the JDK. I know that there is the Shared Source Common Language Infrastructure (SSCLI), which I have used on occasion when running into issues during RSS Bandit development, but it isn't the same.

So we have a world where the developer community around Microsoft's products is primarily interested in building and using commercial software while the company pursues an integration strategy that guarantees that it will compete with projects that add value on its platform. The questions then are whether this is a bad thing and if so, how do we fix it?


 

August 8, 2006
@ 01:29 AM

Like every other company out there, Microsoft likes to encourage employees to refer people they know who might be a good fit to join the company. It seems the HR department sent out a mass mailing last week and guess whose ugly mug was used as part of the campaign?

Front

Back

I assume this was just reusing the stock photo from my page on the Microsoft Careers site as opposed to an exhortation to Microsoft employees to hire more people like me. We only need so many paper pushing PMs. ;)

It's still pretty sweet though.


 

Categories: Life in the B0rg Cube

August 6, 2006
@ 04:47 PM

I've seen Yochai Benkler mentioned twice in the past few weeks in blogs I read semi-regularly. It seems that he recently tangled with Jason Calacanis over Calacanis's attempt to pay top contributors of social bookmarking sites like Digg to use his service. Jason Calacanis documents their encounter in his post entitled Calacanis vs. Benkler Round One. Yochai Benkler also posted a comment in Nick Carr's post, which Carr then elevated to a post of its own. Below is an excerpt from Yochai Benkler's comment.

The reason is that the power of the major sites comes from combining large-scale contributions from heterogeneous participants, with heterogeneous motivations. Pointing to the 80/20 rule on contributions misses the dynamic that comes from being part of a large community and a recognized leader or major contributor in it, for those at the top, and misses the importance of framing this as a non-priced social process. Adding money alters the overall relationship. It makes some people "professionals," and renders other participants, "suckers." It is not impossible to mix paid and unpaid participants, as we see in free and open source software and even to a very limited extent in Wikipedia. It is just hard, and requires a cultural form that is definitely not "now at long last we can tell who's worth something and pay them, while everyone else is just worthless." What Calacanis is doing now with his posts about the top contributors to Digg is trying to alter the cultural interpretation of what they are doing: from leaders in an engaged community, to suckers who are being taken for a ride by Rose. Maybe he will succeed in raining on Digg's parade, though I doubt it, but that does not mean that he will succeed in building an alternative sustained social process of peer production, or in replacing peer production with a purely paid service. Once you frame the people who aren't getting paid as poor sods being taken for a ride, for example, the best you can hope for is that some of the "leaders" elsewhere will come and become your low-paid employees (after all, what is $1,000 a month relative to the millions Calacanis would make if his plan in fact succeeds? At that point, the leaders are no longer leaders of a community, and they turn out to be suckers after all, working for a pittance, comparatively speaking.)

I'm quite surprised to see Benkler mention and dismiss the example of Open Source software, since what is happening between Calacanis and Digg seems to be history repeating itself. Back in the day, Open Source software like Linux was primarily built by hobbyists who worked on such projects in their free time without any intention of being financially rewarded. Later on, companies showed up that wanted to make money from Open Source software and there was a similar kind of angst to what we are seeing today about social bookmarking. If you want to take a trip down memory lane, go back and read all the comments on the various stories about the Redhat IPO on Slashdot to see the same kind of arguments and hurt feelings that you see in the arguments made by Benkler and by people such as the Backstabbed by Netscape blogger.

The fact is that since we are human, the 80/20 rule still applies when it comes to the value of the contributions by individuals. This means that it is beneficial to the 'community' if those that contribute the most value to the system are given as much incentive as possible to contribute. After all, I doubt that there is anyone who would argue that the fact that Linus Torvalds and Alan Cox are paid to work on Linux or that Miguel De Icaza is paid to work on Mono is harmful to the communities around these projects or that it makes the unpaid contributors to these projects "suckers".

I think where people are getting confused is that they are mixing up giving the most valuable contributors to the system more incentive with trying to incentivize the entire community with financial reward. They are not the same thing. Open Source projects wouldn't be successful if everyone contributing to a project did so with the expectation of being paid. On the flip side, Open Source projects benefit the most when their top contributors can dedicate 100% of their effort to the project without having to worry about a day job. That's the difference.

Also, Benkler seems to think that Whuffie (aka respect from the community) is a better incentive than money when it comes to influencing top contributors. I think that's a pipe dream which will only come true when we live in a world where money can't buy you anything, such as the one in Cory Doctorow's Down and Out in the Magic Kingdom.


 

Categories: Social Software

August 4, 2006
@ 07:55 PM

Jason Fried over at the Signal vs. Noise blog has an entry entitled Don’t believe BusinessWeek’s bubble-math where he writes

This week’s BusinessWeek cover story features a beaming Kevin Rose from Digg. Across his chest it says “How this kid made $60 million in 18 months.” Wow, now that sounds like a great success story.

Too bad it’s a blatent lie. BusinessWeek knows it. They prove it themselves in the article:

So far, Digg is breaking even on an estimated $3 million annually in revenues. Nonetheless, people in the know say Digg is easily worth $200 million.

$3 million in revenues and they’re breaking even. That means no meaningful profits. That’s the first hint no one has made $60,000,000. Their gross revenues aren’t even anywhere close to that number. And let’s leave out the “people in the know say it’s easily worth” fantasy numbers. And certainly don’t use those numbers to do the math that makes the cover (we’ll get to that in a minute).

I can't believe BusinessWeek ran such a misleading cover story. I guess sensational, fact-less headlines aren't just for the supermarket tabloids these days.


 

Earlier this week I was showing off the new Windows Live Spaces to my girlfriend in an attempt to explain what I do at Microsoft all day. When I showed her the Friends list feature she was surprised that the site had morphed from a blogging service into a social networking service and wondered whether our users wouldn't react negatively to the change. Actually, that's paraphrasing what she said. What she said was

If I was using your site to post my blog and my pictures for people I know, I'd be really annoyed if I started having to deal with strangers asking me to be their friend. If I wanted to deal with that crap I'd have just gone to MySpace.
That's valid criticism and is something the people who worked on the design of this feature (i.e. me, Mike, Neel, Matt, John and a bunch of others) took into account. One of the key themes of Windows Live is that it puts users in control of their Web experience. Getting repeated email requests to be some stranger's "friend" without a way to stop them doesn't meet that requirement. This is why there is a communications preferences feature in Windows Live Spaces which can be reached by clicking on http://[yourspacename].spaces.live.com/Settings/Communication/

Below is a screenshot of the page

Don't like getting friend requests as email? Disable this by unchecking 'Also send invitations and e-mails to my email address'. Don't want to deal with requests from total strangers wanting to be on your friends list? Then move the setting on 'These people can request to be on your friends list' to something more restrictive like 'Messenger buddies' so that you only get friend requests from people on your IM buddy list who you already know. You hang out a virtual DoNotDisturb sign and we'll honor it.

Making sure that our users have total control over who they communicate and share information with is key to how we approach building social software for Windows Live. Thanks to all the users of Windows Live Spaces who have made it the most popular blogging service in the world. 


 

Categories: Windows Live

I've gotten some reports from people that they've had problems using their blogging tool of choice to post to Windows Live Spaces using the MetaWeblog API. I haven't had any problems posting using W.Bloggar but have had some issues using Blogjet recently. I'd appreciate it if anyone else who's having issues posting to their blog responds with a comment to this blog post with information about which blogging tool you are using and what error message the tool reports.
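For anyone trying to narrow down where their tool is failing, this is roughly the XML-RPC request a metaWeblog.newPost call puts on the wire (credentials and content below are placeholders). Comparing what your tool actually sends against something like this may help pinpoint the problem:

    <?xml version="1.0"?>
    <methodCall>
      <methodName>metaWeblog.newPost</methodName>
      <params>
        <param><value><string>MyBlog</string></value></param>
        <param><value><string>someuser@example.com</string></value></param>
        <param><value><string>password</string></value></param>
        <param><value><struct>
          <member><name>title</name>
            <value><string>Test post</string></value></member>
          <member><name>description</name>
            <value><string>Hello, world.</string></value></member>
        </struct></value></param>
        <param><value><boolean>1</boolean></value></param>
      </params>
    </methodCall>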

Thanks.


 

Categories: Windows Live

A couple of days ago, I wrote that based on my experiences with Kuro5hin I wouldn't be surprised to see a small set of users dominating the content and focus of Digg, since it is practically the same kind of site. Duggtrends has a blog post entitled Digg user statistics & trends which confirms these suspicions. It states

From our database, for the period of 6/19/2006 9:31:28 PM to 7/30/2006 4:41:34 PM, a total of 6013 stories were promoted to the front page. Of these:
  • top 10 users contributed 1792 i.e 29.8%
  • top 100 contributed 3324 stories i.e 55.28% (which is again what exactly SEOMOZ reported)
This clearly shows the shift from the Kevin Rose reported numbers from 26.8% to 55.28%; top users are contributing more & more to digg

As per Jason Martinez (and as Calacanis points out in his blog), 60% of Digg's front page comes from the top 0.03% of users.

It looks like the 1% rule is in full effect. This has always been the nature of online communication from mailing lists and newsgroups to Web-based message boards and social bookmarking sites. Human nature will always shine through at the end.

PS: Has anyone else seen that the Digg REST API has now been documented? Thanks to Michael Brundage for the tip.


 

I just read the rather brief Feed Access Control RSS and ATOM specification from the Bloglines team. It defines the access:restriction element as

<access:restriction> element
Sub element of <rss> or <feed>. Used to indicate the re-distribution restrictions for a feed. The 'relationship' attribute is used to indicate whether a feed will 'allow' or 'deny' access.

To 'allow' access means a feed may be redistributed to other public sources, including search. To allow access, for example:

    <access:restriction relationship="allow" />

To 'deny' access means a feed should not be redistributed to other public sources, including search. To deny access, for example:

    <access:restriction relationship="deny" />

The default relationship is to allow access. However, if a feed is currently set to 'deny', the relationship must be explicitly set back to 'allow' for it to be registered (simply omitting it from the feed is not sufficient to turn access back on).

The problem with this 'specification' is that it says nothing about its goals, scenarios or expected use cases. Without these it is hard to tell whether this is a good idea or a bad idea. Danny Ayers points out that this mimics the behavior of the Robots META tag that can be placed in HTML pages. That tag prevents search engines from indexing your page and showing it in search results, which makes sense in certain limited scenarios. For example, it makes sense to exclude a search engine from indexing the search results page of another search engine or the RSS feed of some search results. Hints like the Robots META tag and robots.txt are good ways to prevent this from happening for HTML pages, and I guess this proposal does the same for RSS and Atom feeds.
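For illustration, here's what I'd expect a feed using this element to look like. Note two assumptions: the spec excerpt above doesn't declare a namespace URI, so the one below is my guess, and the spec says the element is a sub-element of <rss> or <feed> itself rather than of <channel>.

    <?xml version="1.0"?>
    <rss version="2.0"
         xmlns:access="http://www.bloglines.com/about/specs/fac-1.0">
      <access:restriction relationship="deny" />
      <channel>
        <title>Search results for 'social search'</title>
        <link>http://search.example.com/?q=social+search</link>
        <description>A feed that shouldn't be re-indexed by crawlers</description>
      </channel>
    </rss>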

On the other hand, it is definitely not an access control mechanism. You wouldn't want your bank to tell you that the way that they prevent anyone from viewing your bank account details is via robots.txt would you?


 

August 2, 2006
@ 02:56 PM

Windows Live Spaces is live. This is pretty sweet since the most visible feature I've worked on while at Microsoft is now available to the general public. On the Windows Live Spaces team blog we get the post Windows Live Spaces - It’s Here! which states

1. Set-up your friends list.  Simply add the new Friends Module to your space, or click here to automatically add it to your space, and start inviting your friends.  Once your friends accept your invitation, they will appear to visitors of your space. You can also explore your contacts’ friends (and their friends too) directly from Windows Live Messenger.  Simply click on your contact’s Messenger icon to view their contact card, and then click on the “View this contact’s Friends list” icon on the bottom right hand corner of the contact card.  This will launch the cool new Friends Explorer feature that will allow you to easily navigate through lists of friends. 

2. Add gadgets.  Jazz up your space by adding cool new gadgets.  All you need to do is click on the “Customize” link in your Space when you are in the editor mode, and then click on the link titled “Add gadgets from Windows Live Gallery” to be taken to the Windows Live Gallery where you can select gadgets you want to add to your space. Check out the “Updated Spaces” gadget we just added to The Spacecraft!  You can add this gadget automatically to your space too by simply clicking here.

The platform behind the Friends List was one of my features and I'm glad to see it rolling out to the hundreds of millions of Windows Live Spaces and Windows Live Messenger users. You can check out my friends list to browse my social network, which currently consists of Microsoft employees.

Like most features in Windows Live Spaces, the Friends List is integrated with Windows Live Messenger. Users of Windows Live Messenger have three integration points for interacting with the Friends List. The first is that one can right-click on a Messenger contact and select "View->Friends List" to browse their Friends List. Another is that one can respond to pending requests from people asking to be added to your Friends List directly from the Messenger client (as is also the case with other features like Live Contacts). Finally, one can browse the Friends List from a contact's Contact Card. Below is a screenshot of what happens when a Windows Live Messenger user right-clicks on one of their Messenger contacts and selects "View->Friends List".

friends list in Windows Live Messenger

However, it is the announcement of support for gadgets in Windows Live Spaces that I find even cooler than the fact that my feature is finally shipping. With this release, one can add almost any gadget from the Windows Live Gallery to one's space. You'll find some screenshots of gadgets on a space in Mike Arrington's post entitled Windows Live Spaces Launches, Replaces MSN Spaces.

If you are a developer who'd like to build gadgets for Windows Live Spaces you should check out the post in the Windows Live Spaces developer platform blog Gadget devs, come out and play! which provides the following information for developers interested in building gadgets

How do I get started?

1.  Build a Windows Live web gadget according to the SDK available at the Windows Live Dev site
 
2.  If your gadget has any settings/edit UI that visitors shouldn't see, then use the following code to detect whether Spaces is running the gadget in author mode and show/hide the UI accordingly.  There is a p_args argument outlined in the gadgets SDK and we've added a new method off of that called getMode().  You can do a simple comparison of the value returned from that method call to determine author vs. visitor mode.  
Something like the following:
          
         foo = function(p_elSource, p_args, p_namespace) {
             // true when Spaces is running the gadget in author mode
             return p_args.module.getMode() == Web.Gadget.Mode.author;
         };
3.  Add the gadget to your own space using the following Spaces API: 
 
Switch between "Edit your space" and "View your space" to see how it behaves in both author and visitor modes.  If your manifest file, Javascript, and CSS are hosted anywhere but Windows Live Gallery (gallery.live.com), the gadget can only be added for editing/viewing by the space owner.  It will be hidden to visitors.    
4.  Zip up your manifest file and supporting Javascript/CSS files and submit that gadget package to the Windows Live Gallery so other visitors can add it to their space by going to Customize --> Modules --> "Add gadgets from Windows Live Gallery".
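Putting steps 2 and 3 together, a gadget that hides its settings UI from visitors might look something like the sketch below. Only the function signature and the p_args.module.getMode() comparison come from the snippet above; everything else, including what p_elSource refers to, is my assumption about how the pieces fit together.

    // Sketch of a gadget hiding its settings UI from visitors. Only
    // the signature and the getMode() check are from the SDK snippet
    // above; treat the rest as hypothetical.
    myGadget = function(p_elSource, p_args, p_namespace)
    {
        var authoring = (p_args.module.getMode() == Web.Gadget.Mode.author);
        var settings = document.createElement("div");
        settings.innerHTML = "Settings only the space owner should see";
        settings.style.display = authoring ? "" : "none";
        p_elSource.appendChild(settings); // assumes p_elSource is the
                                          // gadget's root element
    };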

Now that Windows Live Spaces has shipped, I can finally write that article I've been promising for XML.com on building RSS-powered Windows Live gadgets. You can expect a bunch of gadgets from me over the next few weeks.


 

Categories: Windows Live

I have some bad news and some good news. RSS Bandit is built using user interface controls that are not provided by default in the .NET Framework, to enhance its look and feel. A common practice among vendors of such user interface controls is to offer them to developers for free to gain mindshare and then, once these developers are 'hooked on their product', withdraw the free version. This means that developers of applications that use these user interface controls end up having to pay the vendors if they want to use newer versions of the controls. This has happened twice to me with RSS Bandit. The first time was when DotNetMagic went from free to being only available for purchase. The second time happened a few months ago when Divelements cancelled the free version of their controls, which are used extensively in current versions of RSS Bandit.

We were left in the lurch, and just as I thought I'd have to start some sort of blog fundraiser so we could pay for these controls and keep RSS Bandit free to use, Torsten was contacted by the good folks at Infragistics who've donated the use of their controls to our project. This means that we'll be replacing some of the user interface controls used by RSS Bandit and adding some new functionality to the UI. See the screenshot below for some of these visual enhancements.

I'd also like to take this time to welcome Ariel Selig to the RSS Bandit development team. He's already made some decent contributions in replacing our old UI components with the new ones.


 

Categories: RSS Bandit

User interfaces for computers in general and web sites in particular seem to be getting on my nerves these days. It's really hard to browse for what you are looking for on a number of buzzword-compliant websites today. Most of them seem to throw a tag cloud and/or search box at you and call it a day.

Search boxes suck as a navigation interface because they assume I already know what I'm looking for. I went to the Google Code - Project Hosting website and wanted to see the kinds of projects hosted there. Below is a screenshot of the website from a few minutes ago.

Notice that the list of project labels (aka tags) shown below the search box are just 'sample labels' as opposed to a complete classification scheme or hierarchy. They don't even list fairly common programming topics like VisualBasic, Javascript or FreeBSD. If I want to browse any of these project labels, I have to resort to poring over search results pages with minimal information about the projects.

Using tag clouds as a navigation mechanism is even more annoying. I recently visited Yahoo! Gallery to see what the experience was like when downloading new plugins for Yahoo! Messenger. On the main page, there is a link that says Browse Applications which takes me to a page that has the following panel on the right side. So far so good.

I clicked on the Messenger link and was then taken to the following page.

What I dislike about this page is how much space is taken up by useless crap (i.e. the tag cloud full of uninformative tags) while the actually useful choices for browsing, such as 'most popular' and 'highest rated', are given so little screen real estate that they don't even show up on some screens without scrolling down. The tag cloud provides little to no value on this page except to point out that whoever designed it is hip to all the 'Web 2.0' buzzwords.

PS: Before anyone bothers to point this out, I realize a number of Microsoft sites also have similar issues.


 

Instead of seeing Clerks II last week, I ended up seeing My Super Ex-Girlfriend, which turned out to be a bad choice. I wanted to catch Clerks II this past weekend but my girlfriend decided it was her turn to pick the movie after My Super Ex-Girlfriend was such a disappointment. So we saw Miami Vice instead. Below are some brief thoughts on both movies.

Miami Vice

This was nothing like the TV show. It was more like a darker, grittier version of Bad Boys 2 complete with trips to exotic South American locales to cavort with drug dealers. There wasn't a lot of action but whenever the guns did blaze the scenes were pretty intense.

My only complaint was that Colin Farrell's acting seemed pretty wooden at times.

Rating: **** out of *****

My Super Ex-Girlfriend
The core premise of the movie (what if your crazy ex-girlfriend had super powers?) seemed interesting, and I expected good things from a movie starring Luke Wilson and Uma Thurman. The movie started off well enough with a number of funny scenes, including a few about sex with super heroes. However, the movie peaked somewhere around the halfway mark in the scene where Uma Thurman spies on Luke Wilson and Ana Faris (the girl from the Scary Movie movies) having sex and throws a shark at them through the bedroom window.

It goes downhill pretty fast from there; it's as if the writers only had a few gags involving super-powered revenge pranks without any thought of how to conclude the movie. The last couple of scenes, such as the super heroine cat fight and the over-the-top obnoxiousness of Rainn Wilson's character, left a foul taste in my mouth. It was a promising movie which failed to live up to its promise.

Another problem I had with the movie was failing to believe that anyone would pick a ditzy Ana Faris over a super-powered Uma Thurman, regardless of how 'psycho' a girlfriend she was. That was a bad casting choice in my book.

Rating: *** out of *****


 

Categories: Movie Review