If you are a geek, you may have heard of the Firefox extension called Greasemonkey which lets you add bits of DHTML ("user scripts") to any web page to change its design or behavior. This basically enables you to remix the Web and either add features to your favorite web sites or fix broken UI design.

Over the Memorial Day weekend, I got the hookup on where to obtain a similar application for Internet Explorer named Trixie. Below are some excerpts from the Trixie website.

What is a Trixie script?

Any Greasemonkey script is a Trixie script.  Though, due to differences between Firefox and Internet Explorer, not all Greasemonkey scripts can be executed within IE.  Trixie makes no attempt to allow Greasemonkey scripts to run unaltered, since it is best to have the script author account for the differences and have the script run on both browsers if he/she so chooses.

Refer to the excellent Greasemonkey documentation to learn how to write Greasemonkey/Trixie scripts.  Note that some of the information there won't be applicable to Internet Explorer and Trixie.
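To make the idea concrete, here is a minimal user script of the kind both Greasemonkey and Trixie understand; the ==UserScript== metadata block at the top tells the extension which pages the script applies to. This is just an illustrative sketch of my own (the script name and behavior are made up), not one of the scripts Trixie ships with:

```javascript
// ==UserScript==
// @name        Highlight External Links
// @namespace   http://example.com/userscripts
// @description Draws a border around links that leave the current site
// @include     http://*
// ==/UserScript==

// Decide whether a link points off the current site. Kept as a plain
// function so the logic is the same in Firefox and IE.
function isExternalLink(href, currentHost) {
  var match = /^https?:\/\/([^\/]+)/.exec(href);
  if (!match) {
    return false; // relative links stay on the same site
  }
  return match[1] !== currentHost;
}

// Only touch the DOM when one is actually available (i.e. in a browser).
if (typeof document !== 'undefined') {
  var links = document.getElementsByTagName('a');
  for (var i = 0; i < links.length; i++) {
    if (isExternalLink(links[i].href, document.location.host)) {
      links[i].style.border = '1px dashed red';
    }
  }
}
```

Since Trixie runs scripts in IE and Greasemonkey runs them in Firefox, keeping the page logic in plain functions like isExternalLink is one way to have a single script work in both browsers, which is what the Trixie site recommends script authors aim for.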

Installing Trixie

Trixie requires the Microsoft .NET framework to be installed.

To install Trixie, download and run TrixieSetup  (latest version: 0.2.0).  You should ideally close all instances of Internet Explorer before doing this.  By default, TrixieSetup installs Trixie to your Program Files\Bhelpuri\Trixie directory (you can of course change this location).  It also installs a few scripts to the Program Files\Bhelpuri\Trixie\Scripts directory.

Restart IE after installing Trixie.  Once you have restarted, go to the Tools menu.  You should see a menu item called "Trixie Options".  Selecting that will show you the list of scripts installed and which ones are enabled or disabled.

Once you have installed Trixie, you browse the Web just like you always do.  Trixie works in the background executing the scripts on the designated pages and customizing them to your needs.

I've been using Trixie for the past few days and so far it rocks. I also looked at the code via Reflector and it taught me a thing or two about writing browser plugins with C#. So far my favorite Trixie Script is the one that adds site statistics to an MSN Spaces blog when you view your own space.

It looks like I need to spend some time reading Dive Into Greasemonkey so I can write my own user scripts to fix annoyances in web sites I use frequently. Remix the web, indeed.

Update: It took me a few minutes to figure out why the project was called Trixie. If you're a Speed Racer fan, you probably figured it out after reading the second paragraph of this post. :)


Over the past week and a half, I've been fixing bugs in the implementation of newsgroups support in RSS Bandit. Yesterday I celebrated fixing the last major networking issue by posting a somewhat empty message to the microsoft.public.xml newsgroup from RSS Bandit. The main thrust of what's left is for Torsten and me to decide how we want newsgroups to be exposed in the user interface.

It is time to start thinking about another item on the roadmap for our next release. This time I want to focus on what we'll do to better support RSS enclosures and podcasting. If you are unfamiliar with either of these concepts, you should read Dave Winer's blog post Payloads for RSS from 2001 which describes the concept of subscribing to digital media content such as songs and videos as opposed to just text as is common today.  

Although I am not interested in any of the amateur talk radio or audioblogging that is typically associated with podcasting, I still think this is a very interesting development that should be supported by RSS Bandit. I would love to be able to subscribe to the G-Unit website and get new songs that have been released to the public or subscribe to the various XBox 360 rumor mills to get new videos of game demos.

Below is a list of features I'd like to see in RSS Bandit.

  1. Ability to download RSS enclosures to a folder of my choosing. This folder can be configured per feed so I can have some stuff go to C:\G-Unit and other stuff go to C:\XBox360.
  2. A list of files currently being downloaded or that have been downloaded.
  3. A download meter to indicate how much of a file has been downloaded.
  4. Ability to add files to my Windows Media Player or iTunes playlist of my choosing on successful download. The level of configurability of this feature will depend on how much work is entailed. :)
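To give a sense of what the first and third features involve, here's a rough sketch in JavaScript (RSS Bandit itself is written in C#, and all of the function names, feed URLs, and folder paths below are my own made-up examples, not actual RSS Bandit code):

```javascript
// Hypothetical per-feed configuration: enclosure download folders keyed
// by feed URL, with a fallback default when a feed has no entry.
var DEFAULT_FOLDER = 'C:\\Downloads\\Enclosures';

function resolveDownloadFolder(feedUrl, folderConfig) {
  // Use the per-feed folder when one is configured, otherwise the default.
  if (folderConfig && folderConfig[feedUrl]) {
    return folderConfig[feedUrl];
  }
  return DEFAULT_FOLDER;
}

// A download meter boils down to a percentage of bytes received.
function downloadProgress(bytesReceived, totalBytes) {
  if (!totalBytes) {
    return 0; // enclosure length unknown: report nothing rather than NaN
  }
  return Math.min(100, Math.round((bytesReceived / totalBytes) * 100));
}

// Example configuration matching the post: songs and game videos
// land in different folders.
var config = {
  'http://example.com/gunit.rss': 'C:\\G-Unit',
  'http://example.com/xbox360.rss': 'C:\\XBox360'
};
```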

If you have any ideas that I have left out or comments about the above list, just go ahead and reply to this post.


Categories: RSS Bandit

May 27, 2005
@ 03:52 PM

Of the major online service providers, Yahoo! has always been one of my favorites. I may use the Google search engine to find stuff online or MSN Messenger to chat with friends and coworkers, but I utilize more services from Yahoo! than from any other Web company. In fact, I'm the main reason that all the Windows machines in the computer labs of the College of Computing at Georgia Tech had the Yahoo! Toolbar installed on them when I was in college.

Now that I work at MSN, I get to work on stuff that makes me as excited about software as I used to be in college when I first marvelled at the utility of Yahoo!'s toolbar. Even better is seeing a lot of the stuff I work on either directly or indirectly influence one of my favorite online services.

In a recent blog post entitled Not Nerdvana, But Maybe The Suburbs, Stowe Boyd writes

I was out in California on the 12th, getting briefed by the Yahoo Messenger folks about the newest release of their instant messaging suite
As I sketched in a recent post (Nerdvana: A Better Tool For Communication (I Can Dream, Can't I?)), I would really like a rich client on my desktop that put the buddy list firmly at the center of the universe, and all other stuff -- email, blog posts, to-dos, appointments, geographical location, whatever -- hanging off the buddy list as a collection of attributes. Because people and social relatedness is the center of the universe, not documents, calendars, email, etc.
Note the 'contact card' in the screenshot above, where various elements of Jessica's digital relationship to me are displayed. We see various icons, representing ways I can contact her. But better, much better, we see the music she is playing, and new profile info and blog entry.

The contact card is something we pioneered in the most recent release of MSN Messenger. It's illuminating to compare Stowe's screenshot of the Yahoo! Messenger contact card with a screenshot I took last year of the MSN Messenger 7.0 beta contact card.  I totally agree with Stowe Boyd that software should help people better communicate and interact with each other in social contexts. This is something that lots of folks at MSN are passionate about and it is edifying to see our competition follow our lead in this area.

This morning I saw an article in Infoworld titled The battle for the blogosphere which had some interesting comments about competition between Yahoo! and MSN in the blogging arena. Some of the excerpts from the article are

Introduced in beta form just last December, MSN Spaces now hosts over 10 million blogs, an eye-popping adoption rate that has blown past internal Microsoft expectations. "MSN Spaces is the fastest growing service MSN has ever introduced," said Brooke Richardson, lead product manager at MSN communication services.

The significant thing for the blogging market is that Microsoft is doing it its way, designing MSN Spaces to have a central text-blogging core but complemented by and integrated with a suite of MSN online services, such as instant messaging, e-mail, music playlist posting, and photo sharing. Microsoft also built into the service access control features to let users determine who can view their blogs, although they can make their blogs totally open if they want. MSN Spaces will also notify users when blogs from friends have been updated.

In March, Yahoo introduced in limited beta a service called Yahoo 360 whose concept and design are similar to MSN Spaces. This service comes as no surprise, because Yahoo, like Microsoft's MSN, has a wide variety of online services with which to surround its blogging service. As two leading Web portals, MSN and Yahoo have an amount and variety of online services under one roof that few others can rival, and blogging is something they're weaving into their overall fabric.

Later on in the article, the author takes issue with the lack of customizability of MSN Spaces in comparison to other online services such as Google's Blogger and even mentions Robert Scoble's post from last year, MSN Spaces isn't the blogging service for me. We've gotten a lot of feedback about customizability and we definitely will be looking into how we can offer more flexibility.

From my perspective, the competition between Yahoo! and MSN around who can build a better social computing experience for end users is a lot more exciting than the competition around search engines that have garnered all the press. This is definitely a fun time to be working at MSN.


Categories: MSN

There have been a number of events over the past week that have gotten me thinking about XML Web Services and how the technologies have been handicapped by the complexities of the W3C's XML Schema Definition Language (XSD) and Microsoft & IBM's Web Service Definition Language (WSDL). The events that brought this issue to mind have been

  • Contributing to Mike Champion's efforts in putting together Microsoft's submission to the W3C Workshop on XML Schema 1.0 User Experiences and having to revisit the various interoperability problems caused by the complexity of XSD.

  • A recent paper by Steve Loughran and Edmund Smith entitled Rethinking the Java SOAP Stack which argues that since the complexity of Object<->XML mapping based on XSD/WSDL is too difficult that Java SOAP toolkits should simply work with XML directly instead of doing data binding of XML to objects.

  • A recent post by Steve Maine, who works on the XML Web Services team at Microsoft, entitled Travelling through hyperspace where in response to the various arguments about the complexities of XSD and WSDL he states "WSDL, XSD, and SOAP are facts of life in the web services world"

  • The W3C's creation of the public-web-http-desc@w3.org mailing list to discuss the creation of a web services description format (i.e. a WSDL replacement) for RESTful web services

These are definitely interesting times for XML Web Services. The complexity of the technologies that form the foundations of SOA is now pretty much acknowledged across the industry. At the same time more and more people are taking the idea of building web services using REST very seriously. I suspect that there might be an opportunity here for Microsoft to miss the boat if we aren't careful. 


Categories: XML Web Services

May 24, 2005
@ 05:28 PM

Paul Thurrott has an interesting series of articles about MSN's past, present and future. In some ways it could be called How MSN Got Its Groove.

    1. Part One: Beginnings
    2. Part Two: Fly, Butterfly
    3. Part Three: Services That Communicate

The article does a good job of capturing a bunch of the changes that have happened in the organization over time as well as offers some hints as to some of our future plans. My favorite part of the article is this excerpt from its conclusion.

When I speak with people from the Windows division at Microsoft about new product releases, there's always an unspoken assumption that I won't see these people again for many months or even years. With MSN, it's sort of a running joke that I'll often be speaking with them just days later. They just have so much going on.

And we have a bunch of stuff planned for this year. The recently announced MSN Virtual Earth is just one of many cool things we'll be shipping this year.


Categories: MSN

There is an article on PC World entitled Gates Unveils MSN Virtual Earth which describes upcoming releases from MSN that were recently announced by Bill Gates at a recent event. Excerpts from the article are provided below

Microsoft's MSN division will enhance its search engine in the next couple of weeks by adding a local search index for finding business directory listings. Later, the company will supplement this with MSN Virtual Earth, a free new service that will pinpoint places in maps and satellite images.

Local search is a gaping hole in the MSN search engine. All other major search engine providers, including Google, Yahoo, Ask Jeeves, and America Online, have a local search tab on their search Web sites. As part of its local search service, Google also provides maps and satellite images, offering functionality that is similar to what Microsoft is aiming for with MSN Virtual Earth.
In addition to complementing MSN's local search index, MSN Virtual Earth will let users overlay maps and satellite photos in order to create hybrid images that combine the best of both mediums, says Stephen Lawler, general manager of Microsoft's MapPoint unit.

MSN Virtual Earth will also be integrated with users' preferred e-mail application and, with a single click, will place links in e-mail messages to maps and images, says Steve Lombardi, a MapPoint program manager. Users will also be able to post images to their MSN Spaces Weblog from within the MSN Virtual Earth interface, Lombardi says.

Users will be encouraged to provide feedback on business directory listings so that the MSN local search service will be able to give users a sense of what a particular area is like, Lawler says.

MSN Virtual Earth uses technology from MapPoint and from Terra Server, a database of satellite images Microsoft has had for about ten years, the officials say.

I was in a meeting with Steve Lombardi and a couple of other folks from the Virtual Earth team a while ago and they definitely have built a compelling product that they are quite passionate about. As mentioned in the article, there are interesting avenues for integration with other MSN properties such as MSN Search and MSN Spaces.

It's great to see the innovation and competition coming out of the online mapping space. 

Update: There is an interview with the Virtual Earth team complete with demos up on Channel 9.


Categories: MSN

May 21, 2005
@ 02:35 AM

From Bob Wyman's post Microsoft to support Atom!

Robert Scoble, a Microsoft employee/insider very familiar with Microsoft's plans for syndication, declares in comments on his blog "we are supporting Atom in any aggregator we produce." Microsoft's example in supporting Atom should be followed by all other aggregator developers in the future and Microsoft should be commended for supporting the adoption of openly defined standards for syndication.

Given that virtually every aggregator in use today and virtually every blog hosting and syndication platform (except MSN Spaces) already supports both RSS and Atom, it is clear that the heyday for the historical RSS format has passed. RSS is a historical format, Atom represents the future. We don't need two formats -- or twenty... We should consolidate on that format which incorporates the most learning and experience with the syndication problem. That format is Atom V1.0.

I'm surprised that Bob Wyman is crowing over such a non-issue. Supporting both versions of Atom (Atom 0.3 and Atom 1.0) is a must for any information aggregator that wants to be taken seriously. This is all covered in my post from a year and a half ago (damn, has this debate been going on for that long?) entitled Mr. Safe's Guide to the RSS vs. ATOM debate where I wrote

The Safe Syndication Consumer's Perspective
If you plan to consume feeds from a wide variety of sources then one should endeavor to support as many syndication formats as possible. The more formats a feed consumer supports the more content is available for its users.

Based on their current popularity, degree of support and ease of implementation one should consider supporting the major syndication formats in the following order of priority

  1. RSS 0.91/RSS 2.0
  2. RSS 1.0
  3. Atom

RSS 0.91 support is the simplest to implement and most widely supported by websites while Atom is the most difficult to implement being the most complex and will be least supported by websites in the coming years.

The Safe Syndication Producer's Perspective
The average user of a news aggregator will not be able to tell the difference between an Atom or RSS feed from their aggregator if it supports both. However users of aggregators that don't support Atom will not be able to subscribe to feeds in that format. In a few years, the differences between RSS and Atom will most likely be like those between RSS 1.0 and RSS 0.91/RSS 2.0; only of interest to a handful of XML syndication geeks. Even then the simplest and safest bet would still be to use RSS as a syndication format. This is the same as the fact that even though the W3C has published XHTML 1.0 & XHTML 1.1 and is working on XHTML 2.0, the safest bet to get the widest reach with the least problems is to publish a website in HTML 3.2 or HTML 4.01.

So far this thinking has aligned with the thinking around RSS I have seen at MSN, so it is extremely unlikely that MSN Spaces will do something as disruptive as switching its RSS feeds to Atom feeds or as confusing to end users as providing multiple feeds in different formats.
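As an implementation note, the format sniffing a "safe" syndication consumer needs in order to support all of the formats above is fairly mechanical: the feed's root element and namespace identify the format. A sketch of the idea (the namespace URIs are the published ones for each format; the function itself is my own illustration, not code from any particular aggregator):

```javascript
// Identify a syndication format from the feed's root element local name
// and namespace URI, covering the formats in the priority list above.
function detectFeedFormat(rootName, namespaceUri) {
  if (rootName === 'rss') {
    return 'RSS 0.91/2.0';  // <rss version="..."> with no namespace
  }
  if (rootName === 'RDF' &&
      namespaceUri === 'http://www.w3.org/1999/02/22-rdf-syntax-ns#') {
    return 'RSS 1.0';       // RDF-based feeds
  }
  if (namespaceUri === 'http://purl.org/atom/ns#' ||
      namespaceUri === 'http://www.w3.org/2005/Atom') {
    return 'Atom';          // Atom 0.3 or Atom 1.0 respectively
  }
  return 'unknown';
}
```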


[Image: from the RSS feed for the post MSN: RSS Everywhere]


MSDN just published my article, Fun with IXMLHttpRequest and RSS. The article attempts to illuminate two growing trends: using DHTML & IXMLHttpRequest to build dynamic web applications and the building of interesting applications layered on top of RSS.
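The trick the first trend relies on is that both IE and Mozilla-family browsers expose a scriptable HTTP request object, just under different names: IE 5/6 via an ActiveX control and Mozilla/Safari via a native XMLHttpRequest constructor. A minimal sketch of the classic cross-browser pattern (fetchFeed and its callback are illustrative examples of mine, not code from the article):

```javascript
// Create an HTTP request object in a cross-browser way. Returns null
// when neither flavor is available (e.g. outside a browser).
function createXmlHttpRequest() {
  if (typeof XMLHttpRequest !== 'undefined') {
    return new XMLHttpRequest();       // Mozilla, Safari, Opera
  }
  if (typeof ActiveXObject !== 'undefined') {
    return new ActiveXObject('Microsoft.XMLHTTP'); // IE 5/6
  }
  return null;
}

// Typical usage: fetch an RSS feed asynchronously and hand the parsed
// XML document to a callback once the request completes.
function fetchFeed(url, callback) {
  var request = createXmlHttpRequest();
  if (!request) { return; }
  request.onreadystatechange = function () {
    if (request.readyState === 4 && request.status === 200) {
      callback(request.responseXML); // DOM document for the feed
    }
  };
  request.open('GET', url, true);
  request.send(null);
}
```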

In my recent post Ideas for my next Extreme XML column on MSDN, I asked what people would like to see me write about next. Although this topic came second, I felt that it highlighted some interesting disruptive trends that warranted writing about sooner rather than later.

Coincidentally, I checked my favorite RSS reader this morning and found out that our CEO decided to downplay the importance of RSS in favor of XML Web Services in a Q&A on the RSS weblog. I find it interesting that his core argument against RSS is that it is not as complex as XML Web Service technologies. On the flip side, we have Mark Lucovsky who in his post Don Box and Hailstorm argues that the simple technologies and techniques of RSS may succeed in building an ecosystem of applications built on open data access where his attempt with Hailstorm at Microsoft failed. Combining this with the thinking in Adam Bosworth's Web of Data, it seems clear that key people at Google are beginning to understand the power of REST in combination with the flexible nature of RSS.

This all seems like classic Innovator's Dilemma stuff. Thankfully, in this case even though there are lots of people who want us [Microsoft] to bury our heads in the sand when it comes to recognizing these emerging trends, there are also annoying people like me at work who keep preaching this stuff to anybody who is willing to listen.

Will RSS change the world? That's a silly question, it already has.


Chris Anderson has been learning Python via Jim Hugunin's excellent IronPython and came to the conclusion that Microsoft has been missing the boat in programming language trends. In his post The Hobbyist and the Script  he writes

Scripting is great for glue code - the same thing that VB1 and 2 used to be great at. VB3 started to get into the serious app development with rich data integration, and VB4 brought us into the 32-bit era, and in a way back to glue code. VB4's embrace of ActiveX controls gave VB developers an entirely new place to play. I remember working on an application using a beta of VB5 and writing my "hard core code" in MFC ActiveX controls. After a while I started writing more and more of the code in VB, because it worked and wasn't the bottleneck in any way for the application.

I think that scripting and many dynamic languages are in the same camp. They are great for small applications and writing glue code. Look at Google Maps, the real processing is on the server, and the client side AJAX is just the glue between the user, the browser, and the backend server. I would argue that something more beefy like Outlook Web Access (a Microsoft AJAX application, written before AJAX was the name) demonstrates more of the limitations of writing the majority of your client interface in script.

Regardless of the limitations, our singular focus on strongly typed compiled languages has blinded us to the amazing productivity and approachability of dynamic scripting languages like Python and Ruby. I'm super excited that we are changing this. Hiring Jim Hugunin is a great start. I hope we continue this, and really look to add a strong dynamic language and scripting story to our developer portfolio.

I've actually been quite upset by how many programming language camps Microsoft has neglected in its blind pursuit of competition with Java and the JVM. Besides the scripting language camps, there are significant customer camps we have neglected as well. We have neglected Visual Basic developers; we gave them a poor replacement in Visual Basic.NET, which the market has rejected, as can be gleaned from posts such as Chris Sells's pointing out that publishers don't want VB.NET books and the Classic VB petition. We have neglected DHTML developers who have been building web sites and web applications against Internet Explorer; we don't even have a Javascript IDE, let alone a story for people trying to build complex AJAX/DHTML applications like GMail or Google Maps.

The main problem is that Microsoft is good at competing but not good at caring for customers. The focus of the developer division at Microsoft is the .NET Framework and related technologies which is primarily a competitor to Java/JVM and related technologies. However when it comes to areas where there isn't a strong, single competitor that can be focused on (e.g. RAD development, scripting languages, web application development) we tend to flounder and stagnate. Eventually I'm sure customer pressure will get us off our butts; it's just unfortunate that we have to be forced to do these things instead of doing them right the first time around.

On a similar note, thank God for Firefox.


The Inside Microsoft blog has an entry describing the Syndicate Conference keynote by Phil Holden of MSN. Phil talks about MSN Spaces and the various things we are doing on Start.com. Excerpts from the post are below

Next, Phil shows off MSN Spaces, which is very consumer oriented and easy to set up. He says, "The product is clearly not for everyone". It is very much targeted to the Friends/Friends market. They conducted a survey to see what people thought Spaces did well. Number one was sharing photos, while in last place was staying up to date on hot topics.

He shows how many people use MSN Spaces. In December, it was one million. In January, 2 million; February, 3.5 million; and March, 4 million. However, since Messenger 7 went live and Spaces left beta, Spaces exploded, going up to 10 million Spaces at the end of April. Wow. They are adding 100,000 Spaces a day.

Moving on, Phil gets into notifications. He mentions "gleaming", where Microsoft leverages Messenger to notify people that their friends have updated their Space. Clicking on a person reveals their contact card, and that can take you to the blog. This is the single-source personal notification system.

Phil shows how a less personal, single-source public contact notification occurs, through MSN Alerts. This is what they acquired MessageCast for last week. It will work to notify people of updated blogs.

The multiple-source public notification is through MSN's "What's Your Story" page, which highlights the most interesting Spaces.

Then he introduces Kyle Von Haden, program manager of Global Site & Development at MSN, who shows off Start.com/1/, which is to be the multiple-source personal notification system, or RSS reader. He explains how they have been silently releasing and updating stuff on Start.com, with no publicity for the time being. Start.com can be a superfast loading home page. The whole thing is built in Ajax, so they can change the page without refreshing. They plan to add sections that suggest feeds that a person's friends like. The focus is making it easy, simple, fast, and with no learning curve.

Then he shows off something new, what will be next: Start.com/3/, a much richer interface, with folders, logos, custom pages. You can add custom feeds, like weather feeds. It looks very, very impressive.
Phil says one good idea would be to build RSS readers into existing applications, making it easier for users to adopt and use. Challenges exist in making it easier for users to understand what's going on, so they don't just get confused by little orange boxes. Industry challenges exist from different formats and authentication for private content.

Kyle gets back up and shows off the next thing from MSN: a (currently alpha) screensaver that uses RSS to show recent news, much like Apple's Tiger has, but much cleaner looking. It's very easy to add any feed, especially easy for Spaces. You can subscribe to image feeds and have those feeds supply the photos on the screen, as well as showing blog entries and news articles. The product didn't work perfectly yet, but seeing the photos full-screen with other articles in the corner looked very useful.

Phil wraps things up with some goals, including serving multiple segments of consumers, new ways of getting content (like the screensaver), and strategies to let the public know how useful syndication can be.

A great presentation. MSN clearly gets RSS better than most, and they've got some very interesting stuff coming down the pike.

A few weeks ago I was chatting with Steve Rider about a feature I'd like to see on http://www.start.com/2 which I thought would be rather cool. Yesterday I got a demo of the next version of the site from Scott Isaacs, and not only was my idea implemented but the implementation was a lot better than what I requested. Excellent!!

Now I can't wait for the next version to ship.  


Categories: MSN

From the Business Week article, The World According to Ballmer

Clearly alluding to Microsoft's key Internet search rival, Ballmer said: "The hottest company right now -- the one nobody thinks can do any wrong -- may just be a one-hit wonder."

Actually Google is already at least a two-hit wonder: AdSense/AdWords and the Google search engine. Given that revenues from AdSense are growing to match those from the Google website, it seems inevitable that in a few years it'll be more important to Google that they are the #1 ad provider on the Web than whether they are the #1 search engine.




A few years ago I used to participate in an online community called Kuro5hin which was founded by Rusty Foster. K5, as we affectionately called it, eventually became a haven for people trying to escape from the ills of Slashdot. There were several problems with Slashdot that folks like Rusty and Karsten Self planned to fix with K5. These included
  • Lack of democracy in selecting stories
  • Moderation system that encouraged group think and punished not toeing the party line
  • Visible karma score which encouraged treating participating in the community as a game

The solution was to allow all users to create stories, vote on the stories and to rate comments. There were a couple of other features that distinguished the K5 community such as diaries but the democratic aspect around choosing what was valuable content was key. K5 was a grand experiment to see if one could build a better Slashdot. For a while, it worked fairly well, although the cracks had already begun to show within the first year. A lot of the spirit of the first year of the site can be gleaned from the post K5, A One Year Retrospective.

Now five years later, I still read Slashdot every day but only check K5 out every couple of months out of morbid curiosity. The democracy of K5 caused two things to happen that tended to drive away the original audience. The first was that the focus of the site ended up not being about technology, mainly because it is harder for people to write technology articles than to write about everyday topics that are nearer and dearer to their hearts. Another was that there was a steady influx of malicious users who eventually drove away a significant proportion of K5's original community, many of whom migrated to HuSi. This issue is lamented all the time on K5 in comments such as "an exercise for rusty and the editors" and "You don't understand the nature of what happened."

Besides the malicious users, one of the other interesting problems we had on K5 was that the number of people who actually did things like rate comments was very small relative to the number of users on the site. Anytime proposals came up for ways to fix these issues, there would often be someone who dismissed the idea by stating that we were "seeking a technical solution to a social problem". This interaction between technology and social behavior was the first time I really thought about social software.

Fast forward a few years to earlier this week. I wrote a blog post entitled When did Blogrolls Become Evil? which started an interesting dialog in the comments which I've been thinking about for a few days. Below are excerpts from my post and its comments which got me thinking

Dare Obasanjo:  I was going to write a lengthy counterargument to the various posts by Shelley Powers about blogrolls then wondered whether the reason I even cared about this was that her writing had convinced Uche Ogbuji to drop me from his blogroll? Wouldn't I then be justifying some of the arguments against blogrolls? It's all so confusing...

While I'm still trying to figure this out, you should read Shelley's original post, Steve Levy, Dave Sifry, and NZ Bear: You are Hurting Us and see whether you think the arguments against blogrolls are as wrong as I think they are.

Kingsley Idehen: Anyway, Shelley raises a really important issue which actually highlights what is more than likely a flaw in the concept of blogrolls. I certainly saw you vanish from Uche's blogroll, and I was really curious about the underlying algorithm that led to this (I suspected that Uche wouldn't have done this by hand :-) ).

Blogrolls are static and to some degree completely ambiguous; what are they really? My published blogroll is at most .01% or lower of the total blogs that I subscribe to, and actually read with varying degrees of frequency.

Dare Obasanjo: There is a social and technological aspect to blogrolls. Given that to some degree a blogroll is a way for people to indicate what community they belong to and/or share links to people they find interesting, it is social. Whether the blogroll is static or dynamically updated is an implementation detail that is wholly technological.

The fact that Technorati uses blogroll links as part of their mechanism for calculating 'authority' is quite broken from my perspective and too easily gamed. Just a few short months ago they had to scramble because they were counting the various links in the 'Updated Spaces' and 'Newly Created Spaces' modules that exist on millions of spaces. We've since used rel=nofollow on these links to prevent other simplistic link calculators from being similarly confused.

Going back to the social aspect of blogrolls, I liked the fact that one could go to Uche's blog and find links to other Nigerians who were doing interesting things in the technology industry. I don't think the fact that Technorati would use that link to count one more point of 'authority' to our URLs is any reason to stop doing this. We shouldn't change social behavior due to design flaws in software.

As for blogrolls being static or dynamic, to me this is just an implementation detail and where there is a will there is a way.

Uche Ogbuji: Kingsley,

I've learned an important lesson from the fact that both you and Dare noticed that he went missing from the Copia blogroll. We hacked at the blogroll in a bit of a careless manner and that was wrong. We always planned to go back and fix things as we clarified what those links actually meant, but things got in the way, and the result looked worse than the intention.


Good points.

"I liked the fact that one could go to Uche's blog and find links to other Nigerians who were doing interesting things in the technology industry."

Very important note. I think it's very fair for such a dispersed group as we are to have these sorts of mini-hubs.

"The fact that Technorati uses blogroll links as part of their mechanism for calculating 'authority' is quite broken from my perspective and too easily gamed."

Yikes! That is friggin' broken, and perhaps helps explain some of the sensitivity of this issue when Shelley took it on.

Blogrolls are definitely a social construct, as are popularity lists. However, how they are generated and calculated often boils down to a technology issue. This overlap between technology and social behavior is what I find so interesting about social software.

We face a lot of the issues surrounding the overlap between technology and social behavior every day at work as we build software that millions of people use to communicate with each other. We regularly have to answer questions like how to deal with rejection in instant messenger invitation scenarios (Joe asks Jess to be his IM buddy but she declines), how to deal with people impersonating other users in blog comments, and whether it is a good idea to let people have access to the blogs of their IM buddies in two clicks.

I had assumed that the renewed interest in social software in recent years would create an environment where discourse about the overlap between technology and social behavior would become more commonplace. However, I haven't found good communities or blogs where this discourse abounds. I recently unsubscribed from the Many-to-Many weblog because I got tired of reading Clay Shirky raving about how tagging/folksonomies are the second coming, David Weinberger and Ross Mayfield's verbose yet obvious observations on the state of the art in social software, and Danah Boyd's inane prattling. I also subscribed to the blogs of various members of the Social Computing Group at Microsoft Research, but not only do they post infrequently, most of them also just got re-orged into the Windows division to work on Longhorn.

Any pointers to other communities on social software or even just interesting research papers would be much appreciated. Holla back!


Categories: Social Software

I intended for us to have newsgroup support in the last version of RSS Bandit, but we eventually had to pull it out so we could get the release out earlier. I can definitely promise that you will be able to subscribe to whatever newsgroups tickle your fancy in the Nightcrawler release of your favorite RSS reader.

I've now checked support for this functionality into CVS. There are still some bugs to work out, such as dealing with various funky headers and binary attachments, as well as design questions such as whether to merge the UI for subscribing to feeds and newsgroups or keep them separate.

Below is a screenshot showing the microsoft.public.xml newsgroup (note that the dates might be wrong).


Categories: RSS Bandit

It seems Jonathan Marsh has joined the blogosphere with his new blog Design By Committee. If you don't know Jonathan Marsh, he's been one of Microsoft's representatives at the W3C for several years and has been an editor of a variety of W3C specifications including XML:Base, XPointer Framework, and XInclude.

In his post XML Base and open content models Jonathan writes

There is a current controversy about XInclude adding xml:base attributes whenever an inclusion is done.  If your schema doesn't allow those attributes to appear, your document won't validate.  This surprises some people, since the invalid attributes were added by a previous step in the processing chain (in this case XInclude), rather than by hand.  As if that makes a difference to the validator!

Norm Walsh, after a false start, correctly points out this behavior was intentional.  But he doesn't go the next step to say that this behavior is vital!  The reason xml:base attributes are inserted is to keep references and links from breaking.  If the included content has a relative URI, and the xml:base attribute is omitted, the link will no longer resolve - or worse, it will resolve to the wrong thing.  Can you say "security hole"?

Sure it's inconvenient to fail validation when xml:base attributes are added, especially when there are no relative URIs in the included content (and thus the xml:base attributes are unnecessary.)  But hey, if you wanted people or processes to add attributes to your content model, you should have allowed them in the schema! 

I agree that the working group tried to address a valid concern, but this seems to me to be a case of the solution being worse than the problem. To handle a situation for which workarounds exist in practice (i.e. document authors can use absolute URIs instead of relative URIs in documents), the XInclude working group handicapped the use of XInclude in the processing chain for documents that will be validated with XML Schema.

Since the problem they were trying to solve exists in instance documents, even if the document author doesn't follow a general guideline of favoring absolute URIs over relative URIs, these URIs can be expanded to absolute form in a single pass (using XSLT, for example) before the document is processed by XInclude. On the other hand, if a schema doesn't allow xml:base attributes everywhere (basically every XML format in existence) then one cannot use XInclude as part of the pipeline that creates the document if the final document will undergo schema validation.
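The fixup exists because relative URI resolution depends on the base URI, which changes when content moves between documents. A quick sketch with hypothetical URIs shows what goes wrong when xml:base is dropped:

```python
from urllib.parse import urljoin

# Hypothetical URIs for illustration: content from one document is
# XIncluded into a document that lives somewhere else.
including_base = "http://example.org/main/report.xml"
included_base = "http://example.org/archive/chapter.xml"
relative_link = "figures/fig1.png"

# With xml:base preserved, the link resolves against the included
# document's own base URI:
correct = urljoin(included_base, relative_link)
print(correct)  # http://example.org/archive/figures/fig1.png

# If xml:base is dropped, the link silently resolves against the
# including document's base instead, pointing at the wrong resource:
wrong = urljoin(including_base, relative_link)
print(wrong)    # http://example.org/main/figures/fig1.png
```

This is also why the XSLT pre-expansion workaround is viable: once every URI in the included content is absolute, the base URI no longer matters and the xml:base attribute is unnecessary.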

I think the working group optimized for an edge case but ended up breaking a major scenario. Unfortunately this happens a lot more than it should in W3C specifications.


Categories: XML

From Greg Reinacker we find out that Newsgator Acquires FeedDemon, and Nick Bradbury confirms this in his post NewsGator Acquires FeedDemon, TopStyle...and Me!. I think this is a great acquisition for Newsgator. Acquisitions are usually about getting great people, key technology or lots of users. With this acquisition Newsgator gets all three, although it'll be interesting to see how they manage to rationalize the existence of two desktop clients, even if one of them is a Microsoft Office Outlook plugin.

I think this will have an interesting ripple effect on the aggregator market. Both Nick and Greg have already raised the bar for RSS readers to include synchronization with a web-based aggregator. Desktop aggregators that don't do this will eventually be left in the dust. So far Newsgator Online and Bloglines have been the premier web-based aggregators, but it's been difficult building applications that synchronize with them. Newsgator Online doesn't seem to have any public documentation for its API, while the Bloglines API is not terribly useful for synchronization.

This definitely puts pressure on Bloglines to provide a richer API since two of the most popular desktop aggregators on the Windows platform will now have a richer synchronization story with its most notable competitor. It also puts pressure on other desktop aggregators to figure out a strategy for their own synchronization stories. For example, I had planned to add support for synchronizing with both services in the Nightcrawler release of RSS Bandit but now it is unclear whether there is any incentive for Newsgator to provide an API for other RSS readers. Combining this with the fact that the Bloglines API isn't terribly rich means I'm between a rock and a hard place when it comes to providing synchronization with a web-based aggregator in the next version of RSS Bandit.

Sometimes I wonder whether my energies wouldn't be better spent convincing Steve Rider to let me hack an API for http://www.start.com/1. :)


It was announced at E3 this week that the Xbox 360 will be backwards compatible with the original Xbox. My good friend, Michael Brundage, was the developer for this feature. Read about Xbox backwards compatibility from the horse's mouth. My favorite quote from his article is

The first impression you should get is that these numbers are fantastic for high-definition Xbox 360 games. Wow! But on further reflection, they're not so good for emulating Xbox games at 30 fps. On my 1.25GHz G4 PowerBook, VPC 7 emulates a 295MHz x86 processor -- so the emulator is more than 4 times faster than the machine it's emulating. So most people look at these numbers and conclude that Xbox backwards compatibility can't be done.

Then there are a few people who understand emulators at a very technical level, or understand both Xbox systems inside and out to an expert level of detail that I'm not about to go into here. They perform more sophisticated calculations using the Art of Software Engineering, but ultimately reach the same conclusions as those not skilled in the Art: Backwards compatibility can't be done. [One such skeptic interviewed me for my current job, and pointedly asked during the interview how I planned to handle the project's certain future cancellation.]

And yet, here it is. It's magic!

Last year I got to meet J Allard, and one of the questions I asked was about backwards compatibility in the next version of the Xbox. He gave the impression that they wanted to do it but that it would be a very difficult task. I would never have guessed that Mr. XQuery would be the one to get the job done.

Great job, Michael.


Categories: Technology

James Snell posted IBM's blogging guidelines today in his post Blogging@IBM. Some have heralded this as another triumph for corporate blogging; I'm not so sure this is the case. The particular sticking points for me are the following:

2. Blogs, wikis and other forms of online discourse are individual interactions, not corporate communications. IBMers are personally responsible for their posts. Be mindful that what you write will be public for a long time -- protect your privacy.

3. Identify yourself -- name and, when relevant, role at IBM -- when you blog about IBM or IBM-related matters. And write in the first person. You must make it clear that you are speaking for yourself and not on behalf of IBM.

Basically, IBM states in these two bullet points that blogging isn't a way for IBM as a corporate entity to engage in conversation with its customers, partners and competitors. Instead it's a way for regular people to talk about their lives, including their work. What IBM seems to have done is give its employees permission to state that they work for IBM, and recommended that its employees post a disclaimer. For people like Sam Ruby of IBM, this is actually a step back since he now has to post a disclaimer on his personal weblog.

As I mentioned in a comment on Sam Ruby's blog, I guess I must be tainted by Microsoft, where product teams use blogs to announce features (e.g. the IE team) or engage in conversations with customers about product pricing (e.g. a conversation and its results). Simply giving your employees permission to mention their employer in their personal blogs doesn't a corporate blogging initiative make. In addition, the position that one has to give employees permission to state where they work when communicating in public is also rather startling.

What I like about blogging as a Microsoft employee is that it allows me to have conversations with our customers, partners and competitors. It isn't just me spouting off about my likes and dislikes; it is a way to communicate with our customers and partners. I've lost count of the number of times I've referred people to posts like What Blog Posting APIs are supported by MSN Spaces? or Why You Won't See XSLT 2.0 or XPath 2.0 in the Next Version of the .NET Framework. Posts like these led to interesting conversations both internal and external to Microsoft and informed our decision-making processes. Additionally, our customers appreciated the fact that we were up front with them about our plans and kept them in the loop.

I think communication via "official channels" and "company spokesmen" pales in comparison to the various conversations I've mentioned above. IBM has taken the first step towards accepting corporate blogging. Hopefully, they'll eventually go all the way.


Categories: Life in the B0rg Cube

Stan Kitsis, who replaced me as the XML Schema program manager on the XML team, has a blog post about XInclude and schema validation where he writes

A lot of people are excited about XInclude and want to start using it in their projects.  However, there is an issue with using both XInclude and xsd validation at the same time.  The issue is that XInclude adds xml:* attributes to the instance documents while xsd spec forces you to explicitly declare these attributes in your schema.  Daniel Cazzulino, an XML MVP, blogged about this a few months ago: "W3C XML Schema and XInclude: impossible to use together???"

To solve this problem, we are introducing a new system.xml validation flag AllowXmlAttributes in VS2005.  This flag instructs the engine to allow xml:* attributes in the instance documents even if they are not defined in the schema.  The attributes will be validated based on their data type.

This design flaw in the aforementioned XML specifications is a showstopper that prevents one from performing schema validation using XSD on documents that were pre-processed with XInclude unless the schema designer decided up front that they want their format to be used with XInclude. This is fundamentally broken. The sad fact is that as Norm Walsh pointed out in his post XInclude, xml:base and validation this was a problem the various standards groups were aware of but decided to dump on implementers and users anyway. I'm glad the Microsoft XML team decided to take this change and fix a problem that was ignored by the W3C standards groups involved. 


Categories: XML

From the press release Microsoft Delivers Powerful Upgrade to Desktop Search Capability for Windows Customers we find out

The MSN® network of Internet services today launched the new MSN Search Toolbar with Windows® Desktop Search, a suite of tools that helps people rapidly search across the Web or their PC and provides easy access to world-leading MSN services. The final version of the MSN Search Toolbar includes free enhancements for Windows® 2000 and Windows XP customers, providing a dramatically upgraded desktop search experience. These new innovations for Windows customers will make it easier than ever to find and retrieve documents, e-mail, images, video and more on their Windows-based personal computers.

"By offering the most integrated desktop search capabilities for Windows, now people can search their PC as fast as they can search the Web," said Yusuf Mehdi, senior vice president for the MSN Information Services & Merchant Platform division at Microsoft. "The new MSN Search Toolbar makes it easy for customers to find precisely what they're looking for, no matter where it resides."

The MSN Search Toolbar, available for free download today at http://desktop.msn.com, enables people to conveniently search their desktops from within familiar applications they use every day -- including Microsoft Windows Explorer, Internet Explorer and Microsoft Office Outlook® -- combining the one-step ease of a Web search with the richness and power of the PC environment.

I was a bit confused by some aspects of this release. One of the reasons for this is that http://desktop.msn.com, http://toolbar.msn.com and http://toolbar.msn.com/default.aspx all have different content. It looks like the various toolbar related subdomains are now deprecated in favor of http://desktop.msn.com.

As mentioned in my post from last year, Some Thoughts On the MSN Toolbar Suite Beta, the main feature I use is desktop search within Microsoft Office Outlook®. I notice that there has been one nice improvement in this feature; the inclusion of a preview pane for search results. This is very useful for skimming through mail messages returned by the search engine without having to open multiple mails. Even cooler is that the preview pane supports lots of different file types not just mail. Sweet.

One interesting data point is that at around the same time the MSN Search team blogged about adding tabbed browsing to Internet Explorer in a future release, the Internet Explorer team announced that IE 7 will have tabbed browsing. It looks like one way or another I'm going to get tabbed browsing in Internet Explorer in the near future.


Categories: MSN

In his post Reconsidering blogrolls (and what the heck are "folks", anyway?) Uche Ogbuji writes

In Shelley Powers' entries "Ms Pancake" and "Let’s keep the Blogroll and throw away the writing", I've learned that there is some controversy about blogrolls. When I threw together Copia I tossed in a blogroll, which was just a random list of blogs I read. I hardly worried that the list would grow too long because I have limited time for reading blogs.

Shelley's posts made me think about the matter more carefully. To draw the basic lesson out of the long and cantankerous points in her blog entries (and comments), a blog is about communication, and in most cases communication within a circle (if an open and, one hopes, expanding one). Based on that line of thinking, Chime and I had a discussion and thought it would be best if rather than having a "blogroll" list of blogs we read, we had a list of other Weblogs with which we have some more direct and reciprocal connection. This includes people with whom we've had personal and professional relationships, and also people who have taken the time to engage us here on Copia. There is still some arbitrariness to this approach, and there is some risk of turning such a listing into the manifestation of a mutual back-slapping club, but it does feel more right to me. We do plan to post an OPML as a link on the page template, so people can check out what feeds we read (if they care); this feels like the right compromise to me.

I was going to write a lengthy counterargument to the various posts by Shelley Powers about blogrolls then wondered whether the reason I even cared about this was that her writing had convinced Uche Ogbuji to drop me from his blogroll? Wouldn't I then be justifying some of the arguments against blogrolls? It's all so confusing...

While I'm still trying to figure this out, you should read Shelley's original post, Steve Levy, Dave Sifry, and NZ Bear: You are Hurting Us and see whether you think the arguments against blogrolls are as wrong as I think they are.


Today I was reading a blog post by Dave Winer entitled Platforms where he wrote

It was both encouraging and discouraging. It was encouraging because now O'Reilly is including this vital topic in its conferences. I was pitching them on it for years, in the mid-late 90s. It should have been on the agenda of their open source convention, at least. It was discouraging, because with all due respect, they had the wrong people on stage. This is a technical topic, and I seriously doubt if any of the panelists were actually working on this stuff at their companies. We should be hearing from people who are actually coding, because only they know what the real problems are.

I was recently thinking the same thing after seeing the attendance list for the recent O'Reilly AJAX Summit. I was surprised not only by the people I expected to see on the list but didn't, but also by who they did decide to invite. There was only one person from Google even though their use of DHTML and IXMLHttpRequest is what started the AJAX fad. Nobody from Microsoft even though Microsoft invented DHTML and IXMLHttpRequest and has the most popular web browser on the planet. Instead they had Anil Dash talk about the popularity of LiveJournal and someone from Technorati talk about how they plan to jump on the AJAX bandwagon.

This isn't to say that some good folks weren't invited. One of the guys behind the Dojo toolkit was there and I suspect that toolkit will be the one to watch within the next year or so. I also saw from the comments in Shelley Powers's post, Ajax the Manly Technology, that Chris Jones from Microsoft was invited. Although it's good to see that at least one person from Microsoft was invited, Chris Jones wouldn't be on my top 10 list of people to invite. As Dave Winer stated in the post quoted above, you want to invite implementers to technical conferences not upper management.

If I were going to have a serious AJAX summit, I'd definitely send invites to at least the following people at Microsoft:

  1. Derek Denny-Brown: Up until a few weeks ago, Derek was the development lead for MSXML which is where IXMLHttpRequest comes from. Derek has worked on MSXML for several years and recently posted on his blog asking for input from people about how they'd like to see the XML support in Internet Explorer improve in the future.

  2. Scott Isaacs: The most important piece of AJAX is that one can modify HTML on the fly via the document object model (DOM) which is known by many as DHTML. Along with folks like Adam Bosworth, Scott was one of those who invented DHTML while working on Internet Explorer 4.0. Folks like Adam Bosworth and Eric Sink have written about how significant Scott was in the early days of DHTML.  Even though he no longer works on the browser team, he is still involved with DHTML/AJAX as an architect at MSN which is evidenced by sites such as http://www.start.com/2 and http://spaces.msn.com

  3. Dean Hachamovitch: He runs the IE team. 'nuff said.

I'd also tell them that the invitations were transferable, so that if they think there are folks who would be more appropriate to invite, they can send them along instead.

It's kind of sad to realize that the various invite-only O'Reilly conferences are just a way to get the names of the same familiar set of folks attached to the hot new technologies, as opposed to being an avenue for getting relevant people from the software industry together to figure out how they can work together to advance the state of the art.


Categories: Technology

A few months ago Robert Scoble wrote a post titled Yahoo announces API for its search engine where he asked

Seriously. Blogs are increasing noise to lots of searches. We already have good engines that let you search blogs (Feedster, Pubsub, Newsgator, Technorati, and Bloglines all are letting you search blogs). What about an engine that lets you search everything BUT blogs? Where's that?

Is Yahoo's API good enough to do that? It doesn't look like it. It looks like Yahoo just gave us an API to embed its search engine into our applications. Sigh. That's not what I want. OK, MSN, your turn. Are you gonna really give us an API that'll let us build a custom search engine and let us have access to the variables that determine the result set?

The first question Robert asks is hard, but you can take shortcuts to get approximate results. How do you determine what a blog is? Do you simply exclude all results from LiveJournal, Blogspot and MSN Spaces? That would exclude millions of blogs, but it wouldn't catch the various blogs on self-hosted domains like mine. Of course, you could get even trickier by also excluding pages that match certain words like "DasBlog", "Movable Type" or "WordPress", which would probably take out another large chunk. By then the search results would probably be as blog-free as you can get without resorting to expensive matching techniques. For icing on the cake, it would probably be useful to also be able to skew results by popularity or freshness.
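To make the shortcut concrete, here's a rough first-pass filter along those lines (Python; the domain list, marker strings and sample results are all just illustrative):

```python
from urllib.parse import urlparse

# Hosted blog services and blog-engine fingerprints; both lists are
# examples, not an exhaustive catalog.
BLOG_HOSTS = ("livejournal.com", "blogspot.com", "spaces.msn.com")
BLOG_MARKERS = ("dasblog", "movable type", "wordpress")

def looks_like_blog(url, page_text):
    """Rough first-pass guess at whether a search result is a blog."""
    host = urlparse(url).hostname or ""
    if host.endswith(BLOG_HOSTS):  # hosted blog services
        return True
    # Self-hosted blogs often leave an engine fingerprint in the page.
    text = page_text.lower()
    return any(marker in text for marker in BLOG_MARKERS)

results = [
    ("http://someone.blogspot.com/2005/05/post.html", "a weblog post"),
    ("http://www.example.com/paper.html", "A technical report"),
    ("http://www.25hoursaday.com/weblog/", "Powered by dasBlog"),
]
non_blog = [url for url, text in results if not looks_like_blog(url, text)]
print(non_blog)  # only the non-blog result survives
```

Crude, but it illustrates why the approximation is cheap: a domain check plus a substring scan, with no expensive matching.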

The second question Scoble asks is whether there is a search engine that gives you an API that can do all this stuff. Well, MSN Search gives you RSS feeds, which, as I've mentioned in a previous post, are sometimes the only API your website needs. More importantly, as pointed out in a recent post by Andy Edmonds entitled Search Builder Revealed, one can control how variables such as popularity or freshness affect search results. For example,

  1. Search results for "star wars revenge of the sith" by popularity

  2. Search results for "star wars revenge of the sith" by freshness

One could probably write a first cut at the search engine Robert is asking for using the MSN Search RSS feeds in about an hour or so. In a day it could be made quite polished, with most of the work being in the user interface. Yet another coding project for a rainy day.
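The parsing half of that rainy-day project is almost trivial; something like this sketch would do (Python, with an invented sample feed rather than actual MSN Search output):

```python
import xml.etree.ElementTree as ET

# Shape of the parsing step only; this sample feed is made up for
# illustration and is not what MSN Search actually returns.
sample_feed = """<rss version="2.0"><channel>
  <title>Search results</title>
  <item><title>Result one</title><link>http://example.org/1</link></item>
  <item><title>Result two</title><link>http://example.org/2</link></item>
</channel></rss>"""

def parse_results(feed_xml):
    """Extract (title, link) pairs from an RSS 2.0 search results feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in parse_results(sample_feed):
    print(title, link)
```

Fetch the feed for a query, run each result through a blog filter, and render the survivors: that's the whole first cut.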


Categories: MSN | Syndication Technology

Since we launched MSN Spaces I've found it interesting to see how our competitors have reacted by producing competing services or features. First it was Yahoo! 360 which showed up 4 months after we went into widespread beta with a press release whose primary pitch -- an innovative new service that allows people to easily share what matters most to them and keep connected with the people they know -- was a lot like ours.

Now according to the Infoworld article Google ponders Blogger, Gmail integration we find out

Google is also evaluating an enhancement that lets users natively upload images to their blogs from within the Blogger interface, Stone said. Currently, images can be posted to Blogger via e-mail or using other indirect methods, such as Google's Hello image-transmission service. "There is a button there now [in the Blogger interface for image uploading] so we're working on making that a useful button," Stone said. "We're looking into that right now."
Although users can password protect their Blogger blogs with third-party software or services, Blogger currently doesn't offer native ways for users to limit access to their blogs. However, Google is mulling over the possibility of adding some native privacy features, such as the ability for users to create private groups and that way control who can view their blogs, Stone said.
Google introduced the latest enhancement to Blogger last week, when it launched Blogger Mobile, a feature that lets users create a new blog and post to it from mobile devices. "There's lots of people walking around with little blogging appliances which others may call mobile phones," Stone said.

Although it is interesting that most of the features described in the article are things we shipped last year, what I find even more interesting is that they aren't pre-announcing any features that aren't playing catchup with MSN Spaces or Yahoo! 360. I wonder whether this is because they want to underpromise and overdeliver or whether this means 2005 is the year Blogger ends up playing catchup with the rest of the industry. Only time will tell.


Categories: MSN

May 12, 2005
@ 11:02 AM

From the press release MSN Acquires MessageCast to Expand Automated Alerting Services 

The MSN® network of Internet services today announced that it has acquired substantially all the assets of MessageCast Inc., a leading provider of automated alerting and messaging technology that currently supports the MSN Alerts service. This acquisition builds on the robust MSN Alerts platform, enabling tighter integration with the MessageCast technology. It also extends MSN Alerts to new content channels, helping to expand the ability of MSN to connect consumers to the information and people they care about most.

MSN Alerts empowers consumers to receive time-sensitive information from sources such as MSNBC, Fox Sports, Xbox®, MSN Hotmail®, MSN Money and other content partners. MSN has worked with MessageCast since 2003 to make it easier for content providers to offer free MSN Alerts services to MSN customers. These alerts can be delivered via any combination of MSN Messenger, Microsoft Windows® Messenger, e-mail and text messaging on mobile or handheld devices that are compatible with MSN Mobile. The service is currently offered in Chinese, English, French, German, Japanese, Korean and Spanish, and is available throughout the U.S. as well as select markets in Europe and Asia.

MessageCast was an obvious acquisition target; we've been working so intimately with them that I'd wondered why they weren't already part of MSN. By the way, if you're an MSN Spaces user you can read Ryan's post Got Alerts? from last year to find out how to enable MessageCast alerts for your space. This gives your readers yet another way to get updated about changes to your space in case they aren't savvy enough to use an RSS reader.


Categories: MSN

Oleg Tkachenko has a post about one of the changes I was involved in while the program manager for XML programming models in the .NET Framework. In the post foreach and XPathNodeIterator - finally together Oleg writes

This one little improvement in System.Xml 2.0 Beta2 is sooo cool anyway: the XPathNodeIterator class at last implements IEnumerable! Such unification with the .NET iteration model means we can finally iterate over nodes in an XPath selection using the standard foreach statement:

XmlDocument doc = new XmlDocument();
doc.Load("orders.xml");
XPathNavigator nav = doc.CreateNavigator();
foreach (XPathNavigator node in nav.Select("/orders/order"))
    Console.WriteLine(node.Name); // process each selected node

Compare this to what we have to write in .NET 1.X:

XmlDocument doc = new XmlDocument();
doc.Load("orders.xml");
XPathNavigator nav = doc.CreateNavigator();
XPathNodeIterator ni = nav.Select("/orders/order");
while (ni.MoveNext())
    Console.WriteLine(ni.Current.Name); // process the current node

Needless to say, that's a case where just a dozen lines of code can radically simplify a class's usage and improve overall developer productivity. Why this wasn't done in .NET 1.1, I have no idea.

One of the reasons we were hesitant about adding support for the IEnumerable interface to the XPathNodeIterator class is that the IEnumerator returned by the IEnumerable.GetEnumerator method has to have a Reset method. However, a run-of-the-mill XPathNodeIterator does not have a way to reset its current position. That means that code like the following has problems:

XmlDocument doc = new XmlDocument();
doc.Load("test.xml");
XPathNodeIterator it = doc.CreateNavigator().Select("/root/*");
foreach (XPathNavigator node in it)
    Console.WriteLine(node.Name); // first pass prints every child of root
foreach (XPathNavigator node in it)
    Console.WriteLine(node.Name); // second pass: the iterator is exhausted

The problem is that after the first loop the XPathNodeIterator is positioned at the end of the sequence of nodes, so the second loop should not print any values. However, this violates the contract of IEnumerable and the behavior of practically every other class that implements the interface. We considered adding an abstract Reset() method to the XPathNodeIterator class in Whidbey, but this would have broken implementations of that class written against previous versions of the .NET Framework.

Eventually we decided that even though the implementation of IEnumerable on the XPathNodeIterator would behave incorrectly when looping over the class multiple times, this was an edge case that shouldn't prevent us from improving the usability of the class. Of course, it is probable that someone may eventually be bitten by this weird behavior, but we felt the improved usability was worth the trade-off.
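For readers who want a feel for the behavior we were debating, ordinary Python iterators are single-pass in exactly the same way; a second loop over an exhausted iterator silently yields nothing:

```python
# Stand-in for an XPathNodeIterator: a plain single-pass iterator
# with no way to reset its position.
nodes = iter(["order1", "order2", "order3"])

first_pass = [n for n in nodes]
second_pass = [n for n in nodes]  # iterator is already exhausted

print(first_pass)   # ['order1', 'order2', 'order3']
print(second_pass)  # []
```

A class whose GetEnumerator can't rewind ends up with exactly this surprise, which is why violating the usual IEnumerable contract was a real trade-off and not just pedantry.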

Yes, backwards compatibility is a pain.

UPDATE: Andrew Kimball, one of the developers working on XSLT and XPath technologies in System.Xml, posted a comment that corrected some of my statements. It seems some different implementation decisions were made after I left the team. He writes

"You know how I hate to contradict you, but the example you give actually does work correctly in 2.0. The implementation of IEnumerable saves a Clone of the XPathNodeIterator so that Reset() can simply reset to the saved Clone. There were a couple of limitations/problems, but neither was serious enough to forego implementing IEnumerable:

1. Performance -- 2 clones of the XPathNodeIterator must be taken, one in case Reset is called, and one to iterate over. In addition, getting the Current property must clone the current navigator so that the navigator's position is independent of the iterator's position.

2. Mid-Iteration Weirdness -- If MoveNext() is called several times on the XPathNodeIterator, and *then* GetEnumerator() is called, the enumerator will only enumerate the remaining nodes, not the ones that were skipped over. Basically, users should either use the XPathNodeIterator methods to iterate, *or* the IEnumerable/IEnumerator methods, not both."

I guess it just goes to show how quickly knowledge can get obsoleted in the technology game. :)


Categories: XML

David Orchard has a post entitled Underlying Protocol is a completely leaky abstraction which does a great job of explaining why the notion of protocol independence that is espoused by some SOA proponents is problematic. An example of such thinking is the article Protocol independence in SOA which argues that one can use SOAP to abstract away whether the underlying protocol is SMTP, FTP, HTTP, IIOP or some custom proprietary protocol.

David's post is full of esoteric information from various XML Web Services specs like WSDL, WS-Addressing & SOAP so it may be hard to follow. The gist of his post is captured in this excerpt

Point #1: The myth of protocol independence

This comparison of in vs robust in-only has shown that the application level WSDL MEP will be determined by the underlying protocol, assuming that the underlying protocol's information is fully utilized.

Another possibility is to throw out anything "extra" from the underlying protocol, that is effectively dumbing HTTP down to UDP. In general, Web services using SOAP and WSDL 1.1 has already done that by ignoring the HTTP Operation. The Web services "architecture" has further done that by ignoring the protocol capabilities, such as security, encoding, caching. We could have utilized the capabilities of HTTP and other protocols if we'd agreed how to describe the capabilities, but the "features and properties" work of SOAP 1.2 and WSDL 2.0 looks pretty much DOA.

Corollary to point #1: True protocol independence means dumbing down every protocol to UDP OR a framework for expressing protocol capabilities
I've shown how we are achieving true protocol independence by throwing away everything that makes up HTTP: the operations, status code, response, encodings, security, and all that.

The way SOAP and the various WS-* technologies achieve protocol independence is by basically ignoring the capabilities of the Web (i.e. HTTP and URIs) and re-inventing them in various WS-* specifications. This is one of the reasons I prefer the term Service Oriented Architecture (SOA) for describing this family of technologies rather than XML Web Services, since a core design goal is not to solely target the Web or use XML. One of the simple examples David uses in his post is to point out that HTTP is a request/response protocol where error codes can be returned in the response, while protocols like UDP are fire & forget. To build a service that is independent of both protocols, one must either ignore the request/response capabilities of HTTP (thus treating it like UDP) or build a protocol-independent request/response model on top of UDP (thus treating it like HTTP).
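To make the second half of that trade-off concrete, here is a minimal sketch of how a request/response exchange can be layered on a fire-and-forget transport by correlating message IDs, which is roughly the approach WS-Addressing takes with its MessageID and RelatesTo headers. The dictionary envelope and field names here are illustrative, not any spec's wire format:

```python
import uuid

# Sketch: request/response over a one-way transport via correlation IDs,
# similar in spirit to WS-Addressing's MessageID/RelatesTo headers.
def make_request(body):
    return {"MessageID": str(uuid.uuid4()), "Body": body}

def make_response(request, body):
    # No connection state ties the two messages together; the response
    # points back at the request it answers via RelatesTo.
    return {"MessageID": str(uuid.uuid4()),
            "RelatesTo": request["MessageID"],
            "Body": body}

req = make_request("GetStockQuote")
resp = make_response(req, "42.00")
```

The correlation header does at the message level what an HTTP connection gives you for free, which is the sense in which protocol independence re-invents HTTP's capabilities.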

The approach of re-inventing the capabilities of HTTP in the various WS-* specifications does cause confusion and irritation among developers who plan to only use these technologies on the Web. For example, in a recent post entitled Is WS-MetadataExchange really necessary? Phil Windley argues that WS-MetadataExchange, a SOAP-based mechanism for retrieving WSDL, Policy and XSD files for a service endpoint, can be replaced by a URI such as http://www.example.com/service_path?meta=wsdl . In a response entitled WS-MetadataExchange, Don Box points out that one of the reasons the aforementioned specification exists is so that metadata discovery can happen in a protocol-agnostic manner. There have been similar discussions about WS-Addressing and HTTP which are summed up rather well in Stefan Tilkov's posts WS-Addressing and Protocol Independence and Mapping WS-Addressing to HTTP.

The main message here is that there is no free lunch. If developers want to build protocol independent services then there is an added complexity cost which hardly seems justified if one can simply build XML Web services as opposed to SOAs.  

One of the things I've been nagging the XML Web Services folks at Microsoft about since getting to MSN is to do a better job of targeting folks who want to build XML web services, as opposed to focusing all their energy on SOA scenarios. I've tried to frame this as distinguishing between 'enterprise' scenarios and 'web' scenarios, but would love to hear more articulate ways of phrasing the distinction.


Categories: XML Web Services

A little while ago I noticed a post by Oleg Tkachenko entitled Microsoft licensed Mvp.Xml library where he wrote

On behalf of the Mvp.Xml project team our one and the only lawyer - XML MVP Daniel Cazzulino aka kzu has signed a license for Microsoft to use and distribute the Mvp.Xml library. That effectively means Microsoft can (and actually wants to) use and distribute XInclude.NET and the rest Mvp.Xml goodies in their products. Wow, I'm glad XML MVPs could come up with something so valuable than Microsoft decided to license it.

Mvp.Xml project is developed by Microsoft MVPs in XML technologies and XML Web Services worldwide. It is aimed at supplementing .NET framework functionality available through the System.Xml namespace and related namespaces such as System.Web.Services. Mvp.Xml library version 1.0 released at January 2005 includes Common, XInclude.NET and XPointer.NET modules.

As a matter of interest - Mvp.Xml is an open-source project hosted at SourceForge.

Since Oleg wrote this I've seen other Microsoft XML MVPs mention the event including Don Demsak and Daniel Cazzulino. I think this is very cool and something I feel somewhat proud of since I helped get the XML MVP program started.

A few years ago, as part of my duties as the program manager responsible for putting together a community outreach plan for the XML team, I decided that we needed an MVP category for XML. I remember the first time I brought it up with the folks who run the Microsoft MVP program; they thought it was a weird idea since there were already categories for languages like C# and VB, and they saw XML as just a config file format that didn't require significant expertise. I was persistent and pointed out that a developer could be a guru at XSLT, XPath, XSD, DOM, etc. without necessarily being a C# or VB expert. Eventually they buckled and an MVP category for XML was created. Besides getting the XML Developer Center on MSDN launched, getting the XML MVP program started was probably my most significant achievement as part of my community outreach duties while on the XML team at Microsoft.

Now it is quite cool to see this community of developers contributing so much value to the .NET XML community that Microsoft has decided to license their technologies.

I definitely want to build a similar developer community around the stuff we are doing at MSN once we start shipping our APIs. I can't wait to get our next release out the door so I can start talking about stuff in more detail.


Categories: Life in the B0rg Cube | XML

Phil Haack, who has helped out quite a bit with developing RSS Bandit, recently posted on a new project he is starting in his post Announcing Subtext, A Fork Of .TEXT For Your Blogging Pleasure. Phil writes

What is .TEXT?

.TEXT is a popular (among .NET loving geeks), scalable, and feature rich blogging engine started by Scott Watermasysk and released as an open source project under a BSD license . Scott did a wonderful job with .TEXT as evidenced by its widespread use among bloggers and being the blogging engine for http://blogs.msdn.com/ among others.

Sounds great. So why fork it?

There are several reasons I think a fork is warranted.

.TEXT is dead as an open source product.

.TEXT is dead as a BSD licensed open source project. Out of its ashes has risen Community Server, which integrates a new version of the .TEXT source code with Forums and Photo Galleries. Community Server is now a product being sold by Telligent Systems. There is a non-commercial license available, but it requires displaying the Telligent logo and restricts usage to non-commercial purposes. I'd prefer to use a blogging engine with an OSI approved license, in particular the BSD license works well for me.

I wish Phil luck with his new project. While on IM with Phil I mentioned that my experience with Open Source projects is that things work best when you have working code before announcing your project. In a recent post entitled Finding Discord in Harmony, Charles Miller pointed out that working code attracts people who want to code, while design documents attract people who want to talk about coding. I have found this to be very true. Another thing I pointed out was that he shouldn't be fooled by offers for help. With RSS Bandit, for every 100 or so people who offer to help, about 10 come through with actual code, and of those maybe 1 person comes through with a substantial change.

I'm also glad to see that the .TEXT codebase isn't going to die. I've used Community Server briefly and I disliked it quite a bit. The main reason I stopped updating http://blogs.msdn.com/dareobasanjo is because they switched the site from .TEXT to Community Server which led to a number of small annoyances that piled up. The annoyances ranged from removing the login link from the blog page to sporting Atom 0.3 feeds which I've mentioned before is a sign of incompetence.

Competition is always good for users.


The recent flap about the anti-discrimination bill and Microsoft got a couple of us talking at dinner about how progressive Microsoft's benefits package was compared to the rest of the software industry. One of the things that came up is also mentioned in the Music for America blog which points out

Microsoft has a stellar record as pioneering same-sex partner rights, and they haven't reneged on this stance internally. Microsoft continues to offer same-sex partner domestic benefits – benefits which are exemplary, especially for health insurance coverage.

Since Microsoft is the only company I've worked for as a full-time employee, I take a bunch of the benefits we get for granted. Still, I often end up surprised to hear that other companies in the software industry fall far short of what Microsoft offers.

For example in his post, uh oh, what happened to my bank account? Mark Jen wrote

next, let's look at the health care benefit provided. arguably, this is the biggest benefit companies pay out for their employees. google definitely has a program that is on par with other companies in the industry; but since when does a company like google settle for being on par? microsoft's health care benefits shame google's relatively meager offering. for those of you who don't know, microsoft pays 100% of employees' premiums for a world-class PPO. everything you can possibly imagine is covered. the program has no co-pays on anything (including prescription drugs); you can self-refer to any doctor in the blue cross blue shield network, which pretty much means any licensed professional; and you can even get up to 24 hour-long massage sessions per year.

I also saw similar sentiments in a recent post by Jim Stroud entitled Life with Bill... where he wrote

To all concerned, I am now on the Microsoft payroll effective immediately. Google was a groovy gig, but Bill gave me an offer I could not refuse. (Ask me about the benefits. WOW!) I am SO happy and can truly see myself here for the long haul.

I guess it's a sign I'm getting older that I actually care about things like benefits packages. It's likely that companies like Google tailor their compensation package to what the young, single geek fresh out of college is interested in, while Microsoft's compensation package has targeted the older, married geek as the average age of its employees has gone up.



Categories: Life in the B0rg Cube

From the Text of Steve Ballmer E-Mail to U.S. Microsoft Employees Regarding Public Policy Engagement

Date: May 6, 2005

To: All U.S. Microsoft Employees

Subject: Microsoft’s principles for public policy engagement

During the past two weeks I’ve heard from many of you with a wide range of views on the recent anti-discrimination bill in Washington State, and the larger issue of what is the appropriate role of a public corporation in public policy discussions. This input has reminded me again of what makes our company unique and why I care about it so much.

One point really stood out in all the e-mails you sent me. Regardless of where people came down on the issues, everyone expressed strong support for the company’s commitment to diversity. To me, that’s so critical. Our success depends on having a workforce that is as diverse as our customers – and on working together in a way that taps all of that diversity.

I don’t want to rehash the events that resulted in Microsoft taking a neutral position on the anti-discrimination bill in Washington State. There was a lot of confusion and miscommunication, and we are taking steps to improve our processes going forward.

To me, this situation underscores the importance of having clearly-defined principles on which we base our actions. It all boils down to trust. Even when people disagree with something that we do, they need to have confidence that we based our action on thoughtful principles, because that is how we run our business.

I said in my April 22 e-mail that we were wrestling with the question of how and when the company should engage on issues that go beyond the software industry. After thinking about this for the past two weeks, I want to share my decision with you and lay out the principles that will guide us going forward.

First and foremost, we will continue to focus our public policy activities on issues that most directly affect our business, such as Internet safety, intellectual property rights, free trade, digital inclusion and a healthy business climate.

After looking at the question from all sides, I’ve concluded that diversity in the workplace is such an important issue for our business that it should be included in our legislative agenda. Since our beginning nearly 30 years ago, Microsoft has had a strong business interest in recruiting and retaining the best and brightest and most diverse workforce possible. I’m proud of Microsoft’s commitment to non-discrimination in our internal policies and benefits, but our policies can’t cover the range of housing, education, financial and similar services that our people and their partners and families need. Therefore, it’s appropriate for the company to support legislation that will promote and protect diversity in the workplace.

Accordingly, Microsoft will continue to join other leading companies in supporting federal legislation that would prohibit employment discrimination on the basis of sexual orientation -- adding sexual orientation to the existing law that already covers race, sex, national origin, religion, age and disability. Given the importance of diversity to our business, it is appropriate for the company to endorse legislation that prohibits employment discrimination on all of these grounds. Obviously, the Washington State legislative session has concluded for this year, but if legislation similar to HB 1515 is introduced in future sessions, we will support it.

I also want to be clear about some limits to this approach. Many other countries have different political traditions for public advocacy by corporations, and I’m not prepared to involve the company in debates outside the US in such circumstances. And, based on the principles I’ve just outlined, the company should not and will not take a position on most other public policy issues, either in the US or internationally.

I respect that there will be different viewpoints. But as CEO, I am doing what I believe is right for our company as a whole.

This situation has also made me stop and think about how well we are living our values. I’m deeply encouraged by how many employees have sent me passionate e-mails about the broad respect for diversity they experience every day at Microsoft. I also heard from some employees who underscored the importance of feeling that their personal values or religious beliefs are respected by others. I’m adamant that we must do an even better job of pursuing diversity and mutual respect within Microsoft. I expect everyone at this company -- particularly managers -- to take a hard look at their personal commitment to diversity, and redouble that commitment.

The questions raised by these issues are important. At the same time, we have a lot of other important work to do. Over the next 18 months we’ll release a broader, more advanced and more exciting set of products than at any time in the company’s history. Let’s all recommit to the job ahead, using our diversity as a strength to work together creatively and with respect for each other.



Wow. They listened to us. I guess when a couple thousand employees let you know how they feel about decisions the company is making, it's smart to take their opinions into consideration.

Microsoft definitely does not suck as an employer.


Categories: Life in the B0rg Cube

In a recent post entitled Replacing WSDL, Twice  Tim Bray writes

Let's make three assumptions: First, that Web Services are important. Second, that to make Web Services useful, you need some sort of declaration mechanism. Third, that WSDL and WSDL 2, despite being the work of really smart people, are so complex and abstract that they have unacceptably poor ease-of-use. What then? Naturally, the mind turns to a smaller, simpler successor, sacrificing generality and eschewing abstraction; in exactly the same way that XML was a successor for SGML. Well anyhow, that's the direction my mind turned. So did Norm Walsh's; his proposal for NSDL also includes a helpful explanation of why Web-Service description is important. My sketch is called SMEX-D.

Although the XML geek in me wants this blog post to be a critical analysis of NSDL and SMEX-D, I think a more valuable discussion is questioning the premise behind Tim Bray's post. I agree with his first assumption; web services are important. I also agree with his third assumption that the various flavors of WSDL are too abstract and complex to be easily useful. More importantly, most of the interoperability problems in the XML Web Service space are usually the fault of WSDL and the XSD type system it depends on.

However I happen to disagree with his second premise: that for Web Services to be useful you need some sort of declaration mechanism. Just to contradict by example, I can point to a large number of web services, such as the Yahoo! Search web services, Flickr web services, del.icio.us web services, 43 Things web services, Bloglines web services and every web site that provides an RSS feed, as examples of useful web services that don't use any sort of declarative mechanism to describe themselves.
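As a concrete illustration, consuming one of these services takes nothing more than fetching a URL and parsing the XML; no description file is involved at any point. The feed below is inlined so the sketch is self-contained (in practice you would fetch it over HTTP from a real endpoint):

```python
import xml.etree.ElementTree as ET

# Consuming an RSS feed requires no WSDL or other service description:
# fetch a URL, parse the XML, read the elements you care about.
RSS = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
</channel></rss>"""

channel = ET.fromstring(RSS).find("channel")
titles = [item.findtext("title") for item in channel.findall("item")]
print(titles)
```

The "contract" here is just the RSS format itself plus human-readable documentation, which is how all the services listed above are actually consumed.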

Before deciding to reinvent WSDL, people like Tim Bray and Norm Walsh should ask themselves what purpose a description language like WSDL serves. This is exactly what Mark Nottingham does in his post Questions Leading to a Web Description Format where he writes

A while back, I published a series of entries ( 1 , 2 , 3 , 4 ) about would-be Web Description Formats, with the intent of figuring out which (if any) is suitable, or whether a new one is required.

As I said, I've talked about specific use cases for a Web description format before, but to recap, the big ones are:

Code Generation: It's very useful to start with a description of the site, and then code by filling in the blanks. In the Web services world, this is referred to as coding to the contract or contract-first development, and it makes a lot of sense (although I think contract is needlessly legalistic, and misleading too; it implies a closed world, when in reality a description is very open, in that it's always subject to additional information being discovered).

A couple of ways that this might manifest is through stub and skeleton generation, and auto-completion and other whizz-bang hinting in tools. Wouldn't it be nice for Eclipse to give you a drop-down of the valid URIs that will give you a certain type when you're coding?


Dynamic Configuration: I've complained about the poor state of Web server configuration before, so I'll spare you a repeat of the full polemic. A proper description format would be one mechanism to allow more transparent configuration of servers, and better use of the Web (and HTTP).


Application Modeling and Visualisation: Finally, there's a considerable amount of value in having a standard representation (that's intentional, folks) of a site's layout and configuration; you can discuss it with peers, evolve it over time in a manner that's independent of the implementation, develop tools to manipulate and visualise it, and so forth.

Looking at this list of benefits of having a description language for web services, I don't see anything that leaps out at me as a must-have. Being able to generate skeleton code from an XML description of a web service is nice but isn't a showstopper. And in the past, the assumptions baked into such toolkits have led to interoperability problems, so I'm not enthusiastic about code generation being a good justification for anything in this area.

The dynamic configuration bit is interesting but it is unlikely that we will come up with a description language generic enough to unify formats as diverse as RDDL, RSD, P3P, WSDL, etc that won't be too complex or too simple to be useful.

As for application modelling and visualization, I'm not sure why we need an XML format for that. If someone decides to come up with a dialect of UML for describing web services, there doesn't need to be a reason for it to have an XML serialization format, except for code generation, which as previously stated isn't such a great idea anyway.

For me the question isn't whether we should replace WSDL but rather whether we even needed it in the first place.


Categories: XML Web Services

Every once in a while I find myself still having to check a few websites directly instead of reading them in my favorite RSS reader. The top 5 sites I still check by hand which I'd love to see get RSS feeds are

  1. The Latest Editorial Cartoons of Clay Bennett 
  2. Paul Graham's Essays
  3. The Misanthropic Bitch
  4. The Tucker Max Stories
  5. Malcolm Gladwell's Articles

What's on your list of sites that should have RSS feeds but don't? Maybe we can get together and sign a petition. :)


Russell Beattie has a post entitled Spotlight Comments are the Perfect Spot for Tags! where he writes

I read just about every sentence of the Ars Technica overview of OSX Tiger and learned a lot, especially the parts where the author drones on about OSX's support for meta-data in the filesystem. I originally thought the ability to add arbitrary meta data to any file or folder was an interesting capability, albeit not particularly useful in day-to-day activities. But then I was just playing around and saw the Spotlight Comments field that's now included at the very top of a file or folder's Info box and I grokked it! Now that there's actually an easy way to both add and to search for meta-data on files and folders, then there's actually a reason to put it in! But not just any meta-data... What's the newest and coolest type of meta-data out there? Yep, tags! And the comments fields is perfect for this!

Obviously nothing has changed in terms of the UI or search functionality, just the way I think about meta data. Before I may have ignored an arbitrary field like "comments" even if I could search on it (haven't I been able to do something similar in Windows?). But now that I "get" tagging, I know that this isn't the place for a long-winded description of the file or folder, just keywords that I can use to refer to it later. Or if those files are shared on the network, others can use these tags to find the files as well. Fantastic!

This sounds like a classic example of "when you have a hammer, everything looks like a nail". One of the interesting things about the rush by many folks to embrace tagging is the refusal to look at the success of tagging in context. Specifically, how did successful systems like del.icio.us get around the Metacrap problem which plagues all attempts to create metadata systems? I see two aspects of the way del.icio.us applied tagging which I believe were key to it becoming a successful system.

  1. Tagging is the only organizational mechanism: In del.icio.us, the only way to organize your data is to apply tags to it. This basically forces users to tag their data if they want to use the service otherwise it quickly becomes difficult to find anything.

  2. It's about the folksonomy: What really distinguishes services like del.icio.us from various bookmarks sites that have existed since I was in college is that users can browse each other's data based on their metadata. The fact that del.icio.us is about sharing data encourages users to bookmark sites more than they typically do and to apply tags to the data so that others may find the links.

Neither of the above applies when tagging files on your hard drive. My personal opinion is that tagging the file system takes an idea that works in one context and applies it in another without understanding why it worked in the first place.


Categories: Technology

Over the past few weeks there have been a bunch of reports on internal mailing lists about problems with MSN Spaces RSS feeds and Bloglines. The specific problem is that every once in a while old posts containing photos are marked as being new in Bloglines. There have also been some complaints that indicate this problem also manifests itself in Newsgator as well.

After some investigation we discovered that this problem seemed to only occur in RSS items containing links to photos hosted on our storage servers, such as blog posts with photo attachments or photo albums. This led to a hunch that the problem only affected RSS readers that mark old posts as new if any content in the <description> element changes. Once this was confirmed we had our answer. For certain reasons, the permalink URL to an image stored on our storage servers changes over time*. Whenever one of these URL changes takes place, RSS readers that detect changes to the content of the <description> element of a feed will indicate that the post has been altered.

A brief discussion with the folks behind Bloglines indicates that there isn't a straightforward solution to this problem. It is unlikely that they will change their RSS parsing code to deal with the idiosyncrasies of RSS feeds provided by MSN Spaces. Being the author of an RSS reader as well, I can understand not wanting to litter the code with special cases. Similarly, it is unlikely that we will be changing the behavior that causes URLs to images hosted on our servers to change in the short term.

After chatting with Mike and Jason about this, one of the solutions we came up with was to use the dcterms:modified element in our RSS feeds. The element would contain the date of the last time a user-directed change was made to the item, in this case a blog post or photo album. This means that RSS readers can simply test the value of the dcterms:modified element to determine if a post was changed by the user instead of performing inefficient textual comparisons of the contents of the post. In fact, the main reason I don't support detecting changes in RSS items in RSS Bandit is the high rate of false positives, as well as the slowdowns caused by performing lots of text comparisons. Having this element in RSS feeds would make it a lot easier for me to support detecting changes to the contents of items in an RSS feed without degrading the user experience in the general case.
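A sketch of what the reader-side check might look like, assuming the feed carries the element from the standard DCMI terms namespace (http://purl.org/dc/terms/). The feed snippet, the guid, and the last_seen cache are all illustrative:

```python
import xml.etree.ElementTree as ET

# Instead of diffing <description> text (which flags every image URL
# churn as an edit), compare the item's dcterms:modified timestamp
# against the one recorded when the item was last seen.
ITEM = """<item xmlns:dcterms="http://purl.org/dc/terms/">
  <guid>http://spaces.example.com/blog/post-1</guid>
  <description>A post whose image URLs may silently change</description>
  <dcterms:modified>2005-05-10T09:30:00Z</dcterms:modified>
</item>"""

DCTERMS = "{http://purl.org/dc/terms/}"
item = ET.fromstring(ITEM)
guid = item.findtext("guid")
modified = item.findtext(DCTERMS + "modified")

# Timestamp recorded on the previous fetch (illustrative cache).
last_seen = {"http://spaces.example.com/blog/post-1": "2005-05-01T12:00:00Z"}
is_updated = last_seen.get(guid) != modified
print(is_updated)
```

A single string comparison per item replaces a full-text diff, and it only fires when the publisher actually stamped a new modification date.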

Of course, unless RSS readers decide to support the dcterms:modified element in RSS feeds, this will continue to be a problem. I need to send some mail to Mark Fletcher and the RSS-AggDev mailing list to see what people think about supporting this element as a way to get around the "bogus new items" problem.

* Note that this doesn't break links that reference that image with the old URL.


Categories: MSN | Syndication Technology

The Microsoft Careers website has a section entitled Meet Our People where profiles of employees from various divisions at Microsoft are highlighted. For whatever reason, I was one of the people picked this year and you can find my employee profile there.

The process was quite lightweight. I spoke to an interviewer on the phone for several minutes and then she sent me a transcript of key parts of our conversation which I got to edit or veto. After that there was the photo shoot and a couple of weeks later, voila.

Reading the profile, some stuff stands out to me. I've already been here 3 years, so I'm no longer a new guy, yet I still don't feel like part of the B0rg. There is also the point about management being interested in your career and in you as a person.

  • A while ago Mike Torres and I gave a presentation to Brian Arbogast, which was my first presentation for anyone with Vice President in their job title. While I was setting up my laptop, Brian chatted with Mike about a post he'd read on Mike's blog about wanting to upgrade his home sound system. Brian also mentioned that he remembered seeing my name come up in a lengthy internal email thread where I was in a debate with Vic Gundotra and didn't back down; he commended me for sticking to my guns. That was cool.

  • After a recent all-hands meeting I disagreed with parts of the presentations made by David Cole and Steve Liffick, so I sent them some critical feedback. They not only received it well but after exchanging some mail, Steve suggested we meet in person to discuss things further. When I eventually met with Steve, he readily accepted my feedback, so we spent most of the time talking about Social Software as the Platform of the Future. That was fun.

  • Our dev manager, Farookh Mohammed, is just as interested as I am in making sure we have something akin to an MSN Developer Network when we start shipping our APIs for accessing MSN Spaces, and we'll be working together to evangelize some ideas around this topic to upper management.

  • My boss's boss knows I'm fairly good at communicating my ideas in writing and that I'm passionate about social software so he's been encouraging me to produce a whitepaper that can be shared with various folks around work. 

  • My boss, Mike Pacholec, has told me I need to attend more conferences and has suggested that I should plan to attend at least one more conference before the end of the year.  

At the end of the profile it states that I have my dream job which is true. I get to work on software that I like using and which directly affects millions of users. All of this in a ship cycle that is measured in months instead of years. What's not to love?


Categories: Life in the B0rg Cube

May 2, 2005
@ 06:40 PM

Happy 50th birthday to Dave Winer. He is one of the few people in our industry who can say he has changed the world with the work he has done. From being one of the co-authors of the SOAP 1.1 specification and the author of the XML-RPC specification to being a key evangelist of the power of weblogging and RSS, Dave's impact has been felt across the world.

Dave and I have had our differences in the past (and probably still do now), but there are few people whom I can point to in the software industry whose existence has been as much of a net positive to us all.


A bunch of folks from the Spaces team were in Asia recently talking to customers about how they used social software applications like instant messaging, blogging tools, email, social networking services, chat and dating sites. Recently, Moz Hussain, one of the product managers for the Spaces team, provided some insights on current thinking about classifying users of social networking applications.

Below are excerpts from his blog post:

In a people centric world, I see two major dimensions of people interaction: who I want to know about and who I want to share information with. This leads to four distinct segments as shown below.


1) The Content Consumer 

This group values its privacy but is voyeuristic in its desire to learn about others. One focus group participant explained how they like to compare thoughts and lifestyles of people in their social status and age bracket. They primarily use the Internet to search for information but rely on traditional communication methods for keeping in touch with close friends.

This group needs easier ways to find information, including user generated content, on a particular topic. A company that does this well in Japan is Livedoor with rich categorization and editorials on user generated content.

This group is often slightly older, but not exclusively so.

2) The Relationship Builder

This group is interested ONLY in their close circle of friends. They neither care about nor want to share information with strangers. We have seen much higher prevalence of this group in Europe than elsewhere.

Relationship Builders use a variety of online and offline communications tools to share private thoughts and memories with those close to them. This can include photos, opinions, and what's going on in their lives. The reason can be to keep in touch, or just for fun with friends. In China, MSN Messenger is seen as a great product for this group. In Japan, MIXI is the leading web-based service for this group.

As many social networking tools are new to this group, they would benefit from greater education on the scenarios that are applicable to them.

This group is often slightly older, but not exclusively so.

3) The Social Networker

This group enjoys meeting people, even strangers, online and interacting to kill time. They enjoy chat rooms, dating services and generally having multiple superficial relationships. It is not uncommon for this group to have more than 200 contacts in their Messenger contact list.

This group is often younger and accesses the Internet from net cafes or mobile phones, i.e. away from the prying eyes of parents and roommates. They often use paid-for content to enrich their entertainment experience.

Many early web based social networking products such as Friendster and Orkut effectively targeted this segment.

4) The Content Creator

This group is the classic "Maven". They consider themselves experts in a field (ranging from shopping to high tech) or have a desire to express their creativity publicly. They want to get their opinions heard. They use the Internet to research topics of interest, and then create blogs to write their take on the situation.

This group is also interested in rewards for their content and opinions. There is an opportunity to align this group to service provider interests with appropriate reward mechanisms.

This group is also slightly older and has a narrower range of feature interests.

These groups are present in every geography, though their relative sizes vary. The challenge now lies in addressing their needs and figuring out how to use each group to create a synergistic ecosystem of viewers and authors.

All great fun and why I love this job so much!

I think this classification of users of social networking services hits the mark. I also think it is quite cool that we are actually sharing this kind of information with the community of social software enthusiasts as opposed to keeping it as private market research. I wonder what the various folks on the Corante Many2Many blog would have to say about the above classification scheme.

I like the fact that this classification takes into account online social butterflies like Robert Scoble as well as people who simply want to use social software to enhance their existing real-world relationships.

Putting the above data together with the Degrees of Kevin Bacon post by Mike Torres seems to imply that we are very interested in how people use social networking applications. I wonder why?



Categories: MSN