September 30, 2005
@ 08:14 PM

There have been a number of amusing discussions in the recent back and forth between Robert Scoble and several others on whether OPML is a crappy XML format. In posts such as OPML "crappy" Robertson says and More on crappy formats, Robert defends OPML. I've seen some really poor arguments made as people rushed to bash Dave Winer and OPML, but none made me want to join the discussion until this morning.

In the post Some one has to say it again… brainwagon writes

Take for example Mark Pilgrim's comments:

I just tested the 59 RSS feeds I subscribe to in my news aggregator; 5 were not well-formed XML. 2 of these were due to unescaped ampersands; 2 were illegal high-bit characters; and then there's The Register (RSS), which publishes a feed with such a wide variety of problems that it's typically well-formed only two days each month. (I actually tracked it for a month once to test this. 28 days off; 2 days on.) I also just tested the 100 most recently updated RSS feeds listed on (a weblog tracking site); 14 were not well-formed XML.

The reason just isn't that programmers are lazy (we are, but we also like stuff to work). The fact is that the specification itself is ambiguous and weak enough that nobody really knows what it means. As a result, there are all sorts of flavors of RSS out there, and parsing them is a big hassle.

The promise of XML was that you could ignore the format and manipulate data using standard off-the-shelf tools. But that promise is largely negated by the ambiguity in the specification, which results in ill-formed RSS feeds, which cannot be parsed by standard XML tools. Since Dave Winer himself managed to get it wrong as late as the date of the above article (probably due to an error that I myself have made, cutting and pasting unsafe text into WordPress) we really can't say that it's because people don't understand the specification unless we are willing to state that Dave himself doesn't understand the specification.

As someone who has (i) written a moderately popular RSS reader and (ii) worked on the XML team at Microsoft for three years, I know a thing or two about XML-related specifications. Blaming malformed XML in RSS feeds on the RSS specification is silly. That's like blaming the large number of HTML pages that don't validate on the W3C's HTML specification instead of on the fact that web browsers try to render invalid pages rather than rejecting them with an error. If web browsers didn't render invalid web pages then such pages wouldn't exist on the Web.

Similarly, if every aggregator rejected invalid feeds then they wouldn't exist. However, just like in the browser wars, aggregator authors consider it a competitive advantage to be able to handle malformed feeds. This has nothing to do with the quality of the RSS specification [or the HTML specification]; this is all about applications trying to gain market share.
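To see concretely why lenient aggregators are tempting, here is a small sketch (Python, with a made-up feed fragment) of what a conforming XML parser does with the most common well-formedness error Pilgrim mentions, the unescaped ampersand:

```python
import xml.etree.ElementTree as ET

# A feed fragment with the most common well-formedness error:
# an unescaped ampersand in the title.
malformed = "<rss><channel><title>News & Views</title></channel></rss>"

try:
    ET.fromstring(malformed)
    strict_ok = True
except ET.ParseError:
    strict_ok = False  # a conforming XML parser must reject this document

# Escaping the ampersand makes the feed well-formed again,
# and any off-the-shelf XML tool can then read it.
wellformed = malformed.replace(" & ", " &amp; ")
title = ET.fromstring(wellformed).findtext("channel/title")  # "News & Views"
```

A strict aggregator would show the user an error for the first feed; a lenient one quietly guesses what the publisher meant, which is exactly the competitive dynamic described above.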

As for whether OPML is a crappy spec? I've had to read a lot of technology specifications in my day, from W3C recommendations and IETF RFCs to API documentation and informal specs. They all suck in their own ways. However, experience has taught me that the bigger the spec, the more it sucks. Given that, I'd rather have a short, human-readable spec that sucks a little (e.g. RSS, XML-RPC, OPML, etc.) than a large, jargon-filled specification which sucks a whole lot more (e.g. WSDL, XML Schema, C++, etc.). Then there's the issue of using the right tool for the job but I'll leave that rant for another day.


Categories: XML

While using Firefox this morning, I just realized something was missing. There is a Google Toolbar for Firefox, there is a Yahoo! Toolbar for Firefox, so how come there isn't an MSN Toolbar for Firefox? Just yesterday, Ken Moss who runs the MSN Search team posted on their blog about MSN Search Plugins for Firefox where he wrote

However, some of our customers prefer using Firefox and we respect that choice.  Some developers in our user community have created Firefox plug-ins to make it easy to do searches on MSN from the Firefox search box.  Even though it’s currently buried in Firefox under “Add Engines… Find lots of other search engines…”, it seems that our customers have been finding it since we’re listed as one of the most popular search engine plugins.

I use Firefox sometimes in the course of my job – and when I do, I love having the MSN Search engine plugged-in up in the chrome.  If you’re currently a Firefox user – I hope you’ll enjoy this little nugget. For more MSN Search fun with Firefox (or IE!), try out the PDC version of MSN Search enabled by a Trixie / Greasemonkey script.

It's cool to see the MSN Search team giving a shout out to plugins built by the developer community but I think it would be even cooler if we stepped up to the plate like Yahoo! and Google have done by providing an official, full-fledged toolbar for Firefox.


Categories: MSN

September 29, 2005
@ 07:30 PM

Kitty came by my office to remind me that the Web 2.0 conference is next week. As part of the lead up to the conference I can see the technology geek blogosphere is buzzing with the What is Web 2.0? discussion which was sparked off by Tim O'Reilly's posting of the Web 2.0 meme map created during FooCamp. The meme map is below for the few folks who haven't seen it.

The meme map is a visual indication that "Web 2.0" has joined "SOA" as a buzzword that is too ill-defined to have a serious technical discussion about. It is now associated with every hip trend on the Web. Social Networking? That's Web 2.0. Websites with APIs? That's Web 2.0. The Long Tail? That's Web 2.0. AJAX? That's Web 2.0. Tagging and Folksonomies? That's Web 2.0 too. Even blogging? Yep, Web 2.0.

I think the idea and trend towards the 'Web as a platform' is an important one and I find it unfortunate that the discussion is being muddied by hypesters who are trying to fill seats in conference rooms and sell books.

I'm in the process of updating my Bill Gates Thinkweek paper on MSN and Web platforms to account for the fact that some of my recommendations are now a reality (I helped launch one of them) and, more importantly, given recent developments it needs to change in tone from a call to action to being more prescriptive. One of the things I'm considering is removing references to "Web 2.0" in the paper given that it may cause a bozo bit to be flipped. What do you think?


Categories: Web Development

We're almost ready to begin public beta testing of our implementation of the MetaWeblog API for MSN Spaces. As with most technology specifications, the devil has been in the details of figuring out how common practice differs from what is in the spec.

One place where we hit on some gotchas is how dates and times are defined in the XML-RPC specification which the MetaWeblog API uses. From the spec

Scalar <value>s

<value>s can be scalars, type is indicated by nesting the value inside one of the tags listed in this table:

Tag                 Type       Example
<dateTime.iso8601>  date/time  19980717T14:08:55

The reason the above definition of a date/time type is a gotcha is that the date in the example is in the format YYYYMMDDTHH:MM:SS. Although this is a valid ISO 8601 date, most Web applications that handle ISO 8601 dates support only the subset defined in the W3C Note on Dates and Time Formats, which is of the form YYYY-MM-DDTHH:MM:SS. Subtle but important difference.
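The difference is easy to see in code. Python's standard XML-RPC library happens to serialize dates in the spec's compact form, while most web software expects the dashed W3C profile, so a tolerant MetaWeblog endpoint may need to accept both (a sketch; the helper function is mine, not part of any spec):

```python
import datetime
import xmlrpc.client

dt = datetime.datetime(1998, 7, 17, 14, 8, 55)

# XML-RPC's own serialization uses the compact form from the spec's example...
xmlrpc_form = str(xmlrpc.client.DateTime(dt))  # '19980717T14:08:55'

# ...while most web software emits the W3C profile with dashes.
w3c_form = dt.isoformat()                      # '1998-07-17T14:08:55'

def parse_xmlrpc_date(s):
    """Hypothetical lenient parser: accept either ISO 8601 variant."""
    for fmt in ("%Y%m%dT%H:%M:%S", "%Y-%m-%dT%H:%M:%S"):
        try:
            return datetime.datetime.strptime(s, fmt)
        except ValueError:
            pass
    raise ValueError("unrecognized dateTime.iso8601 value: %r" % s)
```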

Another thing that had me scratching my head was related to timezones in XML-RPC. The spec states

  • What timezone should be assumed for the dateTime.iso8601 type? UTC? localtime?

    Don't assume a timezone. It should be specified by the server in its documentation what assumptions it makes about timezones.

This just seems broken to me. What if you are a generic blog posting client like Blogjet or W.Bloggar which isn't tied to one particular server? It would seem that the only sane thing that can happen here is for the dates & times from the server to always be used and dates & times from clients be ignored since they are useless without timezones. If I get a blog post creation date of September 29th at 4:30 PM from a client, I can't use it since without a timezone I'll likely date the entry incorrectly by anything from a few hours to an entire day.

It probably would have been better to retrofit timezones into the spec than just punting on the problem as is the case now. I wonder what other interesting gotchas are lurking out there for our beta testers to find. :)


I've been in the process of moving apartments so I haven't had free time to work on RSS Bandit. In the meantime, we've been getting lots of excellent bug reports from people using the alpha version of the Nightcrawler release. One of the bug reports we've gotten was that somewhere along the line we introduced a bug that caused significant memory consumption (hundreds of megabytes). Since I've been busy, Torsten tracked it down and wrote about it in his post More to Know About .NET Timers. He wrote

As you may know, .NET 1.1 supports three different Timer classes:

  1. Windows timers with the System.Windows.Forms.Timer class
  2. Periodical delegate calling with System.Threading.Timer class
  3. Exact timing with the System.Timers.Timer class

The timings are more or less accurate (see CodeProject: Multithreading in .NET), but that is not the point I want to highlight today. Two sentences from the mentioned codeproject article are important for this post:

"... Events raised from the windows forms timer go through the message pump (together with all mouse events and UI update messages)..."


"...the System.Timers.Timer class. It represents server-based timer ticks for maximum accuracy. Ticks are generated outside of our process..."

To report state and newly retrieved items from requested feeds, we used a timer to serialize the asynchronously received results coming from background threads. This was introduced in the NightCrawler Alpha Dare Obasanjo posted last week for external tests. Some users reported strange failures, memory hogging and bad UI behavior with this Alpha, so I would suggest not using it for testing anymore if your subscribed feeds count is higher than 20 or 30 feeds.

The idea was not as bad as it seems (if you only look at the issues above). The real issue in our case was simply using the wrong timer class! The UI state refresh includes an update of the unread counters that is reported to the user within the treeview as number postfixes and (more important here) a font refresh (as the user decides; the default is to mark the feed caption text bold).

So what happens exactly when the timer fires? I used the CLR Profiler to get the following exciting results. The event is called in sync with the SynchronizingObject, meaning Control::WndProc(m) calls into System.Windows.Forms.Control::InvokeMarshaledCallbacks void(), MulticastDelegate::DynamicInvokeImpl()... and then our event method OnProcessResultElapsed(). The allocation graph mentions 101 MB (44.78%) used here!

So what to do to fix the problem(s)? Simply use the Windows.Forms.Timer! Think about it: it is driven by the main window message pump and always runs in the right context of the main UI thread (no .InvokeRequired calls). Timing isn't an important point here; we just want to process one result each time we are called. Further: no cross-AppDomain security check should happen anymore! With that timer it is just a simple update of the control(s) with some fresh data!

So take care of the timer class(es) you may use in your projects! Check their implications!

Tracking down bugs is probably one of the most satisfying and yet frustrating things about programming. I'm glad we got to the root of this problem.
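The pattern Torsten describes is language-neutral: background threads never touch the UI; they hand results to a thread-safe queue that a timer running on the UI thread drains, one item per tick. A minimal sketch of that pattern in Python (the feed IDs and tick function are invented for illustration, and the "UI update" is simulated by collecting results):

```python
import queue
import threading

results = queue.Queue()  # thread-safe handoff from workers to the UI thread

def fetch_feed(feed_id):
    # Background worker: do the slow download off the UI thread,
    # then enqueue the result instead of updating controls directly.
    results.put((feed_id, ["item A", "item B"]))

workers = [threading.Thread(target=fetch_feed, args=(i,)) for i in range(5)]
for w in workers:
    w.start()
for w in workers:
    w.join()

def on_timer_tick():
    # Plays the role a Windows.Forms.Timer tick plays in RSS Bandit:
    # runs on the UI thread, drains at most one result per tick.
    try:
        return results.get_nowait()
    except queue.Empty:
        return None

processed = []
while (tick := on_timer_tick()) is not None:
    processed.append(tick)  # in a real app: update treeview, counters, fonts
```

Because the consumer runs on the thread that owns the UI, no cross-thread marshaling (the .NET InvokeMarshaledCallbacks path that showed up in the profile) is needed at all.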

By the way, don't forget to cast your vote in the RSS Bandit Logo Design contest. The time has come for us to update the imagery related to the application and we thought it'd be great to have both the new logo and the decision on what it should be in the hands of our users.



Categories: RSS Bandit

Robert Scoble has a post entitled Search for Toshiba music player demonstrates search engine weakness where he complains about relevance of search results returned by popular web search engines. He writes

Think search is done? OK, try this one. Search for:

Toshiba Gigabeat

Did you find a Toshiba site? All I see is a lot of intermediaries.

I interviewed a few members of the MSN Search team last week and I gave them hell about this. When I'm writing I want to link directly to the manufacturer's main Web site about their product. Why? Because that's the most authoritative place to go.

But I keep having trouble finding manufacturer's sites on Google, MSN, and Yahoo.

Relevancy ratings on search engines still suck. Let's just be honest about it as an industry.

Can the search researchers find a better algorithm? I sure hope so.

Here, compare for yourself. If you're looking for the Toshiba site, did you find what you're looking for when you do searches on Google? On Yahoo? On MSN?

Here's the right answer: . Did you find it with any of the above searches? I didn't.

The [incorrect] assumption in Robert Scoble's post is that the most relevant website for a person searching for information on a piece of electronic equipment is the manufacturer's website. Personally, if I'm considering buying an MP3 player or other electronic equipment I'm interested in (i) reviews of the product and (ii) places where I can buy it. In both cases, I'd be surprised if the manufacturer's website were the best place to get either.

Relevancy of search results often depends on context. This is one of the reasons why the talk on Vertical Search and at ETech 2005 resonated so strongly with me. The relevancy of search results sometimes depends on what I want to do with the results. tries to solve this by allowing users to customize the search engines they use when they come to the site. Google has attempted to solve this by mixing traditional web search results with vertical results inline. For example, searching for MSFT on Google returns traditional search results and a stock quote. Also, searching for "Seattle, WA" on Google returns traditional web search results and a map. And finally, searching for "Toshiba Gigabeat" on Google returns traditional web search results and a list of places where you can buy one.

Even with these efforts, it is unlikely any of them would solve the problem Scoble had as well as if he just used less ambiguous searches. For example, a better test of relevance is which search engine gives the manufacturer's website for the search for "toshiba gigabeat website".

I found the results interesting and somewhat surprising. There definitely is a ways to go in web search.


Categories: Technology

From the press release MSN Launches Paid-Search Service in France and Singapore we learn

NEW YORK — Sept. 26, 2005 — Today, Yusuf Mehdi, senior vice president of the MSN® Information Services & Merchant Platform division, opened the second annual Advertising Week 2005 in New York City by announcing the official launch of adCenter in France and Singapore. adCenter powers a paid-search service from MSN that provides advanced audience intelligence and targeting capabilities to help advertisers improve their return on investment when it comes to paid-search advertising. The official launch of adCenter in France today and Singapore on Aug. 31 follows successful pilot programs in both countries. U.S. testing of adCenter is set to begin in October.
Powerful campaign management tools and deep audience intelligence unique to MSN make it easy for advertisers to optimize and refine their campaigns to reach a specific audience. Some of those tools include the following:

  •  Keyword Selection allows advertisers to indicate whom they want to reach based on geographic location, gender, age range, time of day and day of week, and suggests keywords based on the desired audience.

  • Site Analyzer assists advertisers by suggesting keywords based on the content of their Web site, rather than on another keyword.

  • Audience Profiler provides advertisers with an expected profile of those customers who are most likely to search for specific keywords.

  • Cost Estimator helps advertisers remain within their budget by estimating rank, traffic and cost per month per keyword.

  • Campaign Optimization allows advertisers to respond quickly and decisively throughout the campaign to easily refine budget allocations and keywords, as well as apply targeting filters such as geographic, demographic and dayparting.

  • Post Sales Audience Intelligence & Reporting provides advertisers with detailed reports on campaign performance and audiences reached including click-through rate, estimated position and spending levels.

"We’re excited by the positive feedback we have received from advertisers thus far," Mehdi said. "The launch of adCenter in France and Singapore is a great first step to delivering on our global vision to connect advertisers to consumers in a much more meaningful way."

In the near future, adCenter will become a one-stop shop from which advertisers can manage all their MSN advertising campaigns, end to end, including display and direct ads. In addition, advertisers will be able to use advanced targeting tools and audience intelligence data to reach their desired audiences across the MSN network. Advertisers interested in learning more or signing up for adCenter can go to

Our team got a demo of adCenter a few months ago and it definitely looks like it hits all the right points, which is impressive for a version 1.0 offering. Given how much revenue MSN gets from advertising, it's good to see us giving advertisers more tools to improve the value they get from advertising on MSN. Based on the fact that competitors like Yahoo! and Google already have offerings in this space, adCenter is an overdue addition to our stable of services.


Categories: MSN

September 23, 2005
@ 02:06 PM

From Omar Shahine's post we learn

Well, we launched Kahuna Milestone 3 (M3) yesterday with a new URL ( We are building Kahuna iteratively, and plan on releasing much goodness on a frequent basis. This is very different from the way that Hotmail and MSN have typically released software, but we feel it's the best way to achieve success.

As I mentioned recently I've been using the Hotmail beta for a while now and it's a phenomenal improvement over the current version. Below is a screenshot of the beta in action.

Hotmail Beta screenshot

If you'd like invites to the beta, you should keep an eye on the Hotmail team's space. You can also find more screenshots of the Hotmail beta on their space as well.


Categories: MSN

September 23, 2005
@ 01:40 PM

I got an email about Six Apart's Project Comet yesterday and it definitely made me smile. It's good to see more validation of the idea that personal publishing (weblogging) should evolve into personal self expression (my blog, my photos, my relationships, my media, etc). This is the same direction taken by services such as MySpace, MSN Spaces and Yahoo! 360°.

Weblogs are replacing the personal homepage and thus to reflect all the facets of one's personality, they need to be more than 'my online journal'. Most of the big vendors in this space have cottoned onto this idea slowly but surely. I wonder when the coin will drop over in the Blogger offices at Google.


Categories: Social Software

I've been participating in the inaugural podcast for the Microsoft Architecture series. From the website

ARCast is an ongoing podcast series created by the Architect Strategy Team with the goal of spawning insightful, enlightening and sometimes contentious conversations about the hottest topics in Architecture today

The topic for the first series of podcasts is the problems facing interoperability in web services today. The participants are myself, Jeffrey Schlimmer, Michele Leroux Bustamante, Roger Sessions, and Chris Haddad. Most of the discussion has been about adoption of WS-*; even though I work on web services that are utilized by millions of people every day, I've never really found the technologies outside the core XML web services standards (SOAP/WSDL/XSD) to be of much importance to our main scenarios.


Categories: XML Web Services

September 22, 2005
@ 03:46 PM

Feeding poor people is useful tech, but it's not very sexy and it won't get you on the cover of Wired. Talk about it too much and you sound like an earnest hippie. So nobody wants to do that.

They want to make cell phones that can scan your personal measurements and send them real-time to potential sex partners. Because, you know, the fucking Japanese teenagers love it, and Japanese teenagers are clearly the smartest people on the planet.

The upshot of all of this is that the Future gets divided; the cute, insulated future that Joi Ito and Cory Doctorow and you and I inhabit, and the grim meathook future that most of the world is facing, in which they watch their squats and under-developed fields get turned into a giant game of Counterstrike between crazy faith-ridden jihadist motherfuckers and crazy faith-ridden American redneck motherfuckers, each doing their best to turn the entire world into one type of fascist nightmare or another.

Of course, nobody really wants to talk about that future, because it's depressing and not fun and doesn't have Fischerspooner doing the soundtrack. So everybody pretends they don't know what the future holds, when the unfortunate fact is that -- unless we start paying very serious attention -- it holds what the past holds: a great deal of extreme boredom punctuated by occasional horror and the odd moment of grace.

By Joshua Ellis, found via Jamie Zawinski.


September 22, 2005
@ 03:36 PM

A few days ago I mentioned in my post, Microsoft's Innovation Pipeline, that I suspect that a lot of folks at Microsoft will be looking to try something new after working on large software projects that in some cases have taken 3 to 5 years to ship. I personally have had conversations with half a dozen people over the last month or so who are either looking for something new or have found it.

In his post, Something New, Derek Denny-Brown writes

What kept me at Microsoft, and what I will miss the most, is the people. I worked with such a diverse collection of wonderful people... mostly. Not that you can't get that elsewhere, but the 'individual contributors' (as they are called at MS) are really one of Microsoft's assets. I felt like I was leaving my family. I have worked with some of these people for my entire time at Microsoft. That is a long, and intense, time to build a friendship.

I've had almost everyone I know ask me "why are you leaving?" Some factors: Whidbey is basically done, as is Yukon. Microsoft definitely is more bureaucratic than it used to be, as well. Mostly though, it was just time to move on. I was presented with an opportunity that fit my interests. (And no, I'm not going to Google... too big. I decided long ago, that if I was going to leave, I wanted it to be for a small company, something less than 100 people.)

Derek is a good friend and I'll hate to see him leave. At least he's staying in Seattle so we'll still get to hang out every couple of weekends. I didn't try really hard to pitch him on coming to MSN but after a second friend [who hasn't posted to his blog about leaving yet] told me he was leaving the company I've switched tactics. All my friends are getting the pitch now. :)


Categories: Life in the B0rg Cube

There are two videos about MSN's AJAX efforts on Channel 9 today.

  1. Omar Shahine and team - New Hotmail "Kahuna": Hundreds of millions of people use Hotmail. Here's the first look at the next-generation of Hotmail, code-named "Kahuna."

    You meet the team which is located at Microsoft's Silicon Valley campus, hear their design philosophy, and get a first look.

  2. Scott Isaacs - MSN DHTML Foundation unveiled: Scott Isaacs is one of the inventors of DHTML. He is one of Microsoft's smartest Web developers and built a framework that's being used on, the future Hotmail, and other places like the new gadgets in Windows Vista. Hope you enjoy meeting Scott, sorry for the bad lighting at the beginning. If you've done any AJAX development, you'll find this one interesting and you'll get a look at some bleeding-edge Web development that MSN is doing.

I've been using the Hotmail beta and it is definitely hot. My girlfriend saw me using it and when I told her that was the next version of Hotmail she told me to send hugs and kisses to the Hotmail team. Kudos to Omar, Aditya, Steve Kafka, Imran, Walter, Reeves and all the other folks at Hotmail who're making Kahuna happen.

Additionally it looks like I'll be working on Hotmail features in our next release. So maybe I'll get some of those hugs and kisses next time. ;)


Categories: MSN

From the press release Microsoft Realigns for Next Wave of Innovation and Growth

REDMOND, Wash. Sept. 20, 2005 — In order to drive greater agility in the execution of its software and services strategy, Microsoft Corp. today announced a realignment of the company into three newly formed divisions, each of which will be led by its own president.  The Microsoft Platform Products & Services Division will be led by Kevin Johnson and Jim Allchin as co-presidents; Jeff Raikes has been named president of the Microsoft Business Division; and Robbie Bach has been named as president of Microsoft Entertainment & Devices Division. In addition, the company said Ray Ozzie will expand his role as chief technical officer by assuming responsibility for helping drive its software-based services strategy and execution across all three divisions.

The company also announced that Allchin plans to retire at the end of calendar year 2006 following the commercial availability of Windows Vista™, the next-generation Microsoft® Windows® operating system....

Microsoft Platform Products & Services Division

Johnson will succeed Allchin, taking ownership of the Microsoft Platform Products & Services Division, which comprises Windows Client, Server and Tools, and MSN®. To ensure a smooth transition, Johnson and Allchin will serve as co-presidents until Allchin’s retirement next year. The new division’s mission is to enable exciting user experiences and drive customer value through continued innovation in the software platform and software services delivered over the Internet.

"We are focused on creating exciting user experiences and enabling developers to build great applications with the combination of software and software-based services," Ballmer said. "Our MSN organization has great expertise in innovating quickly and delivering software-based services at scale. The platform groups have great expertise in creating a software platform and user experience that touches millions of people. By combining these areas of expertise, we will deliver greater value to our customers. David Cole, senior vice president, will continue to lead MSN, reporting to Johnson.

It seems I was wrong about how long it would take MSN to become more like Windows. The fun thing about Microsoft is that just when you think you have the org chart figured out, we have a reorg.  :)


Categories: Life in the B0rg Cube

September 20, 2005
@ 01:06 PM

In a recent interview with Business Week, Microsoft's CEO stated

We certainly have the best pipeline of new innovation [over the next 12 months] we've ever had in our history.

I was thinking about that line on my drive back from work yesterday and I think he has a point. Over the next year or so Microsoft is going to ship Windows Vista, Office 12, IE 7, Visual Studio 2005, BizTalk Server 2006, SQL Server 2005, .NET Framework v2.0, Windows Communication Foundation (Indigo), Windows Presentation Foundation (Avalon), Xbox 360 as well as the next iterations of various offerings from the MSN division including Hotmail, MSN Spaces, MSN Messenger, MSN Virtual Earth, etc. That's a lot of stuff and is probably more than has ever shipped in a 12 - 18 month time period in the company's history.

I suspect that one of the interesting consequences of this will be a significant diffusion of talent across the company and perhaps across the industry. A lot of people have been working on big pieces of software for several years and will be looking for something new. The most restless have already started moving around (e.g. I went from the XML team to MSN, Joshua went from the XML team to Internet Explorer via the Passport team, and Michael went from the XML team to Xbox). I've started having more conversations with folks interested in a change and I expect this will only increase over the next 12 months. Definitely interesting times ahead.

On an unrelated note, I have updated the track of the week from my high school days on my space. Just click on the play button on the Windows Media player module to hear this week's track.


Categories: Life in the B0rg Cube

Below is an excerpt from a transcript of an interview with Bill Gates by Jon Udell during last week's Microsoft Professional Developer Conference (PDC).

JU: So a few people in the audience spontaneously commented when they saw the light version of the presentation framework, I heard the words "Flash competitor" in the audience. Do you think that's a fair observation? And do you think that that's potentially a vehicle for getting Avalon interfaces onto not just devices but non-Windows desktops? To extend the reach of Avalon that way?

BG: From a technology point of view, what the Windows Presentation Foundation/Everywhere thing does -- I think it was called Jolt internally. It overlaps what Flash does a lot. Now, I don't think anybody makes money selling lightweight presentation capability onto phones and devices and stuff like that. We're making this thing free, we're making it pervasive. I don't think anybody's business model is you get a bunch of royalties for a little presentation runtime. So there'll certainly be lots of devices in the world that are going to have Flash and they're going to have this WPF/E -- which they say they're going to rename, but that's the best they could do for now -- there'll be lots of devices that have both of those, but they don't conflict with each other. It's not like a device maker say -- oh my god, do I pick Flash, do I pick WPF/E? You can have both of those things and they co-exist easily. They're not even that big.

JU: And it's a portable runtime at this point, so is it something that conceivably takes XAML apps to a Mac desktop or a Linux desktop? Is that a scenario?

BG: The Mac is one of the targets that we explicitly talked about, so yes. Now it's not 100 percent of XAML, we need to be clear on that. But the portion of XAML we've picked here will be everywhere. Absolutely everywhere. And it has to be. You've got to have, for at least reading, and even some level of animation, you've got to have pervasiveness. And will there be multiple things that fit into that niche? Probably. Because it's not that hard to carry. As a user you don't even know when you're seeing something that's WPF/E versus Flash versus whatever. It just works.

One of my concerns when it came to the adoption of the Windows Presentation Foundation (formerly Avalon) has been the lack of cross-platform/browser support. A couple of months ago, I wrote about this concern in my post The Lessons of AJAX: Will History Repeat Itself When it Comes to Adoption of XAML/Avalon?. Thus it is great to see that the Avalon folks have had similar thoughts and are working on a cross-platform story for the Windows Presentation Foundation.

I spoke to one of the guys behind WPF/E yesterday and it definitely looks like they have the right goals. This will definitely be a project to watch.


Categories: Technology | Web Development

September 20, 2005
@ 12:22 PM

I have to agree with Robert Scoble that Google's blog search is not as good at link searching.

The only feature I use the various blog search engines like Feedster, Technorati, IceRocket and Google Blog Search for is looking for references to my posts which may not have shown up in my referer logs. Therefore, the only feature I care about is link searching and my main quality criterion is how fresh the index is. Here, Bloglines Citations Search is head and shoulders above everything else out there today. I've been using the various blog search engines every day for the past few weeks and Bloglines is definitely at the head of the pack.

Compare and contrast the link search results from each of them for yourself.


I mentioned last week that currently with traditional portal sites like MyMSN or MyYahoo, I can customize my data sources by subscribing to RSS feeds but not how they look. Instead all my RSS feeds always look like a list of headlines. Start.com fundamentally changes this model by turning it on its head. I can create an RSS feed and specify how it should render on Start.com using JavaScript in extension elements, which basically makes it a gadget, no different from the default ones provided by the site. For example, I can create an RSS feed for weekly weather reports and specify that it should be rendered as a rich weather gadget within Start.com.

Scott Isaacs gives some descriptions of how the RSS extensions used by Start.com work in his post Declaring Gadgets for Start.com using "RSS". He writes

Introduction to the Gadget Manifest

First, let's look at the Gadget manifest format. For defining manifests, we basically reused the RSS schema.  This format decision was driven by the fact we already have a parser in Start.com's application model for RSS, there is broad familiarity with the RSS format, and I personally did not want to invent yet another schema :-). While we reused the RSS schema, we do recognize that these are not typical RSS feeds as they are not intended to be consumed and directly rendered by an aggregator. Therefore, we are considering whether we should use a different file extension or root element (e.g., replace RSS with Manifest) but still leverage the familiar tags. For the sake of simplicity, we chose to ship reusing RSS as the format and then listen to the community on how to proceed. We are very open to suggestions.

Looking at the Gadget manifest, we extended the RSS schema with one custom tag, and one custom attribute. We defined those under the binding namespace. Below is a sample Gadget manifest:

<?xml version="1.0"?>
<rss version="2.0" xmlns:binding=" ">
  <channel>
    <title>Derived Hello World</title>
    <description>A sample hello world binding.</description>
    <pubDate>Wed, 27 Apr 2005 04:00:00 GMT</pubDate>
    <lastBuildDate>Wed, 27 Apr 2005 04:00:00 GMT</lastBuildDate>
    <binding:type> </binding:type>
    <item>
      <link binding:type="inherit"> </link>
    </item>
    <item>
      <link binding:type="css"> </link>
    </item>
  </channel>
</rss>

Looking at the Gadget manifest, until we reach an RSS item, the semantics of the existing RSS tags are maintained. The title serves as the Gadget title, link typically points to your home page or page about your Gadgets, description is your Gadget's description, and so on.  The added binding:type element serves as the Gadget class to instantiate from the associated resources.

Looking at each item, we do know that we left off the required title and description since this file is not intended to be directly viewed. However, adding those tags could be useful to help describe the resources being used.

The last change is we added a binding:type attribute to each resource. We currently support three types: script (the default), css, and inherit. Inherit would point to another "RSS" manifest file that would be further consumed.

Associating a Manifest with a Feed

Start.com supports loading stand-alone Gadgets directly from a manifest. In addition, you can now define a Gadget that presents a custom experience for your feed. This is very useful for a number of scenarios...The custom experiences are defined using the "RSS" Manifest format described above. However, since these Gadgets for RSS feeds are driven by the feed itself, we needed to extend traditional RSS with a single extension. This extension associates a manifest with the feed. We created a new channel element, binding:manifest, that can be included in any RSS feed. This element specifies the Gadget manifest to use for the feed.

<binding:manifest environment="Start" version="1.0"> </binding:manifest>

We created this element to not be coupled to any single implementation. Hence the required environment attribute. Aggregators that understand the manifest tag can examine the environment value. If they support the specified environment, they can choose to present the custom experience.
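
To make the mechanism concrete, here is a sketch of what a feed carrying the extension might look like. The feed content, URLs and namespace URI below are made up for illustration; the actual binding namespace URI is not given in the excerpt above:

```xml
<rss version="2.0" xmlns:binding="http://example.com/schemas/binding">
  <channel>
    <title>Seattle Weekly Weather</title>
    <link>http://example.com/weather</link>
    <description>Weekly weather reports for Seattle</description>
    <!-- points aggregators that understand the extension at the Gadget manifest -->
    <binding:manifest environment="Start" version="1.0">http://example.com/weatherGadget.xml</binding:manifest>
    <item>
      <title>Monday: 62F, partly cloudy</title>
    </item>
  </channel>
</rss>
```

An aggregator that doesn't understand the binding namespace simply ignores the element and renders the feed as a plain list of headlines.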

Despite the fact that I kicked off some of the initial discussions with Steve Rider for what are now Start.com gadgets, I haven't paid much attention to the design since the site is a work in progress. Based on the current design, I have two primary pieces of feedback.

  1. I'd suggest picking a different namespace URI. XML namespace URIs usually point to documentation about the format; in the cases where they don't, it is often a cause of consternation amongst developers. For example, most XML namespaces used by Microsoft are from the microsoft.com domain and often point to schemas for the various Microsoft XML vocabularies. In the cases where they don't, it is likely that they will in future.

  2. If Gadget manifests aren't supposed to be consumed by traditional RSS aggregators then Start.com should not use RSS as its manifest format. The value of using RSS is that even if a client doesn't understand your extensions then the feed is still useful. Start.com currently breaks that assumption, which to me is an abuse of RSS.

Scott is currently seeking feedback for the RSS extensions and I'd suggest that interested parties should post some comments about what they like or dislike so far in response to his blog post.

Update: Since writing this post I've exchanged some mail with the Start.com team and in addition to my feedback we've discussed feedback from folks like Phil Ringnalda and James Snell. The team used RSS as the gadget manifest file format as an experiment in the spirit of the continuous experiment that is Start.com. Based on the feedback from the community, alternatives will be considered and fully documented when the choices have been made. Given my experience in XML syndication technologies I'll be working with the team on exploring alternatives to the current techniques used for creating gadget manifests as well as documenting them.

Keep the feedback coming.


Recently, Sam Ruby announced that the Atom 0.3 syndication format would be deprecated by the Feed Validator. When I first read his post I half wondered what would happen if someone complained about being told their previously valid feed was no longer valid simply because it was now using an "old" format. This afternoon I found an email from Donald Knuth (yes, that one) to the W3C validator mailing list complaining about just that. In the mail, titled "note from Prof Knuth", he writes

Dear Validators,

I've been happily using your service for many years --- even before w3c
took it over. I've had a collection of web pages at Stanford since
1995 or so; it now amounts to hundreds of pages, dozens of which have
tens of thousands of hits, several of which have hits in the millions.

Every time I make a nontrivial change, I've been asking the validator
to approve it. And every time, I've won the right to display the
"HaL HTML Netscape checked" logo.

Until today. Alluva sudden you guys have jerked the rug out from
under my feet.

I protest! I feel like screaming! Unfair!

I'm not accustomed to flaming, but I have to warn you that I am just
now more than a little hot under the collar and trying not to explode.

For years and years, I have started each webpage with the formula
I found in the book from which I learned HTML many years ago, namely
  <!DOCTYPE HTML PUBLIC "-//Netscape Comm. Corp.//DTD HTML//EN">

Today when I tried to validate a simple edit of one page, I found
that your system no longer is happy --- indeed, it hates every
one of my webpages. (If you need a URL, Google for "don" and take
the topmost page, unless you are in France.)

For example, it now finds 19 errors on my home page, which was 100%
valid earlier this month. The first error is "unknown parse mode!".
Apparently Stanford's Apache server is sending the page out as text/html.
You are saying text/html is ambiguous, but that you are going to continue
as if it were SGML mode. Fine; but if I get the Stanford folks to
change the MIME type to SGML mode, I'll still have 18 more errors.

The next error is "no DOCTYPE found". But guys, it is there as
plain as day. Henceforth you default to HTML 4.01 Transitional.

Then you complain that I don't give "alt" specifications with
any of the images. But the Netscape DTD I have used for more
than 3000 days does not require it.

Then you don't allow align="absmiddle" in an image.

I went to your help page trying to find another DTD that might
suit. Version 2.0 seemed promising; but no, it failed in other
ways --- like it doesn't know the bgcolor and text color attributes
in the <body> of my page.

Look folks, I know that software rot (sometimes called "progress")
keeps growing, and backwards compatibility is not always possible.
At one point I changed my TeX78 system to TeX82 and refused to
support the older conventions.

But in this case I see absolutely no reason why system people who
are supposedly committed to helping the world's users from all
the various cultures are suddenly blasting me in the face and
telling me that you no longer support things that every decent
browser understands perfectly well.

To change all these pages will cost me a week's time. I don't
want to delay The Art of Computer Programming by an unnecessary week;
I've been working on it for 43 years and I have 20 more years of work
to do, and who knows what illnesses and other tragedies are in store.
Every week is precious, especially when it seems to me that there
is no valid validation reason for a competent computer system person
to be so fascistic. For all I know, you'll be making me spend
another week on this next year, and another the year after that.

So, my former friends, please tell me either (i) when you are
going to fix the problem, or (ii) who is your boss so that I
can complain at a higher level.

Excuse me, that was a bit flamey wasn't it, and certainly egocentric.
But I think you understand why I might be upset.

Sincerely, Don Knuth


September 18, 2005
@ 04:15 AM

I've been a long time skeptic when it comes to RDF and the Semantic Web. Every once in a while I wonder if perhaps what I have a problem with is the W3C's vision of the Semantic Web as opposed to RDF itself. However in previous attempts to explore RDF I've been surprised to find that its proponents seem to ignore some of the real world problems facing developers when trying to use RDF as a basis for information integration.

Recently I've come across blog posts by RDF proponents who've begun to question the technology. The first is the blog post entitled Crises by Ian Davis where he wrote

We were discussing the progress of the Dublin Core RDF task force and there were a number of agenda items under discussion. We didn’t get past the first item though - it was so hairy and ugly that no-one could agree on the right approach. The essence of the problem is best illustrated by the dc:creator term. The current definition says An entity primarily responsible for making the content of the resource. The associated comments states Typically, the name of a Creator should be used to indicate the entity and this is exactly the most common usage. Most people, most of the time use a person’s name as the value of this term. That’s the natural mode if you write it in an HTML meta tag and it’s the way tens or hundreds of thousands of records have been written over the past six years...Of course, us RDFers, with our penchant for precision and accuracy take issue with the notion of using a string to denote an “entity”. Is it an entity or the name of an entity. Most of us prefer to add some structure to dc:creator, perhaps using a foaf:Person as the value. It lets us make more assertions about the creator entity.

The problem, if it isn’t immediately obvious, is that in RDF and RDFS it’s impossible to specify that a property can have a literal value but not a resource or vice versa. When I ask “what is the email address of the creator of this resource?” what should the (non-OWL) query engine return when the value of creator is a literal? It isn’t a new issue, and is discussed in-depth on the FOAF wiki.

There are several proposals for dealing with this. The one that seemed to get the most support was to recommend the latter approach and make the first illegal. That means making hundreds of thousands of documents invalid. A second approach was to endorse current practice and change the semantics of the dc:creator term to explictly mean the name of the creator and invent a new term (e.g. creatingEntity) to represent the structured approach.
That’s when my crisis struck. I was sitting at the world’s foremost metadata conference in a room full of people who cared deeply about the quality of metadata and we were discussing scraping data from descriptions! Scraping metadata from Dublin Core! I had to go check the dictionary entry for oxymoron just in case that sentence was there! If professional cataloguers are having these kinds of problems with RDF then we are fucked.

It says to me that the looseness of the model is introducing far too much complexity as evidenced by the difficulties being experienced by the Dublin Core community and the W3C HTML working group. A simpler RDF could take a lot of this pain away and hit a sweet spot of simplicity versus expressivity.
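
The dilemma Davis describes is easy to see in concrete RDF/XML. Both of the following are legal, and RDFS gives a schema author no way to mandate one form over the other (the resource URI and person here are made up):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">

  <!-- Common practice: dc:creator is just a name, a literal -->
  <rdf:Description rdf:about="http://example.org/report">
    <dc:creator>Jane Smith</dc:creator>
  </rdf:Description>

  <!-- What the "RDFers" prefer: dc:creator is a structured entity -->
  <rdf:Description rdf:about="http://example.org/report">
    <dc:creator>
      <foaf:Person>
        <foaf:name>Jane Smith</foaf:name>
        <foaf:mbox rdf:resource="mailto:jane@example.org"/>
      </foaf:Person>
    </dc:creator>
  </rdf:Description>
</rdf:RDF>
```

A query that asks for the email address of the creator gets a useful answer from the second form and a dead end from the first, which is exactly the problem the Dublin Core task force was wrestling with.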

Ian Davis isn't the only RDF head wondering whether there is too much complexity involved when trying to use RDF to get things done. Uche Ogbuji also has a post entitled Is RDF moving beyond the desperate hacker? And what of Microformats? where he writes

I've always taken a desperate hacker approach to RDF. I became a convert to the XML way of expressing documents right away, in 1997. As I started building systems that managed collections of XML documents I was missing a good, declarative means for binding such documents together. I came across RDF, and I was sold. I was never really a Semantic Web head. I used RDF more as a desperate hacker with problems in a fairly well-contained domain.
I've developed an overall impression of dismay at the latest RDF model semantics specs. I've always had a problem with Topic Maps because I think that they complicate things in search of an unnecessary level of ontological purity. Well, it seems to me that RDF has done the same thing. I get the feeling that in trying to achieve the ontological purity needed for the Semantic Web, it's starting to leave the desperate hacker behind. I used to be confident I could instruct people on almost all of RDF's core model in an hour. I'm no longer so confident, and the reality is that any technology that takes longer than that to encompass is doomed to failure on the Web. If they think that Web punters will be willing to make sense of the baroque thicket of lemmas (yes, "lemmas", mi amici docte) that now lie at the heart of RDF, or to get their heads around such bizarre concepts as assigning identity to literal values, they are sorely mistaken. Now I hear the argument that one does not need to know hedge automata to use RELAX NG, and all that, but I don't think it applies in the case of RDF. In RDF, the model semantics are the primary reason for coming to the party. I don't see it as an optional formalization. Maybe I'm wrong about that and it's the need to write a query language for RDF (hardly typical for the Web punter) that is causing me to gurgle in the muck. Assuming it were time for a desperate hacker such as me to move on (and I'm not necessarily saying that I am moving on), where would he go from here?

Uche is one of the few RDF heads whose opinions seem grounded in practicality (Joshua Allen is another) so it is definitely interesting to see him begin to question whether RDF is the right path.

I definitely think there is some merit to disconnecting RDF from the Semantic Web and seeing if it can hang on its own from that perspective. For example, XML as a Web document format is mostly dead-on-arrival but it has found a wide variety of uses as a general data interchange format instead. I've wondered if there is similar usefulness lurking within RDF once it loses its Semantic Web baggage.


Categories: Web Development | XML

September 16, 2005
@ 05:27 PM

The announcements from PDC about Microsoft's Linq project just keep getting better and better. In his post XML, Dynamic Languages, and VB, Mike Champion writes

Thursday at PDC saw lots of details being put out about another big project our team has been working on -- the deep support for XML in Visual Basic 9...On the VB9 front, the big news is that two major features beyond and on top of LINQ will be supported in VB9:

"XML Literals" is  the ability to embed XML syntax directly into VB code. For example,

Dim ele as XElement = <Customer/>

Is translated by the compiler to

Dim ele as XElement =  new XElement("Customer")

The syntax further allows "expression holes" much like those in ASP.NET where computed values can be inserted.

"Late Bound XML" is the ability to reference XML elements and attributes directly in VB syntax rather than having to call navigation functions.  For example

Dim books as IEnumerable(Of XElement) = bib.<book>

Is translated by the compiler to

Dim books as IEnumerable(Of XElement) = bib.Elements("book")

 We believe that these features will make XML even more accessible to Visual Basic's core audience. Erik Meijer, a hard core languages geek who helped devise the Haskell functional programming language and the experimental XML processing languages X#, Xen, and C-Omega, now touts VB9 as his favorite.

Erik Meijer and I used to spend a lot of time talking about XML integration into popular  programming languages back when I was on the XML team. In fact, all the patent cubes on my desk are related to work we did together in this area. I'm glad to see that some of the ideas we tossed around are going to make it out to developers in the near future. This is great news.


Categories: XML

A few months ago in my post GMail Domain Change Exposes Bad Design and Poor Code, I wrote "Repeat after me, a web page is not an API or a platform". It seems some people are still learning this lesson the hard way. In the post The danger of running a remix service, Richard MacManus describes a remix service that used data from a social bookmarking site to create a site with enhanced statistics and a better variety of 'popular' links. The service has just been taken off air, because its developer can no longer get the required information from the source site. The developer wrote:

" doesn't serve its homepage as it did and I'm not able to get all needed data to continue Right now doesn't show all the bookmarked links in the homepage so there is no way I can generate real statistics."

This plainly illustrates the danger for remix or mash-up service providers who rely on third party sites for their data. The source site can not only giveth, it can taketh away.

It seems Richard MacManus has missed the point. The issue isn't depending on a third party site for data. The problem is depending on screen scraping their HTML webpage. An API is a service contract which is unlikely to be broken without warning. A web page can change depending on the whims of the web master or graphic designer behind the site.

Versioning APIs is hard enough, let alone trying to figure out how to version an HTML website so screen scrapers are not broken. Web 2.0 isn't about screenscraping. Turning the Web into an online platform isn't about legitimizing bad practices from the early days of the Web. Screen scraping needs to die a horrible death. Web APIs and Web feeds are the way of the future.
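
Since the point is easy to demonstrate, here is a minimal sketch in Python (with made-up markup and a made-up feed) of why feeds beat scraping: the scraper is coupled to a class name the designer may change on a whim, while the feed consumer is coupled only to the RSS contract.

```python
import re
import xml.etree.ElementTree as ET

# A site's HTML: presentation markup the designer may change at will.
html_v1 = '<div class="headline"><a href="/a">Hello</a></div>'
html_v2 = '<div class="story-title"><a href="/a">Hello</a></div>'  # after a redesign

# The same content exposed as an RSS feed: a documented contract.
rss = """<rss version="2.0"><channel>
  <title>Example</title>
  <item><title>Hello</title><link>http://example.org/a</link></item>
</channel></rss>"""

def scrape_titles(html):
    # Screen scraping: coupled to the "headline" class, a design detail.
    return re.findall(r'class="headline"><a[^>]*>([^<]+)</a>', html)

def feed_titles(xml):
    # Feed consumption: coupled only to RSS's <item>/<title> structure.
    return [i.findtext("title") for i in ET.fromstring(xml).iter("item")]

print(scrape_titles(html_v1))  # ['Hello']
print(scrape_titles(html_v2))  # [] -- the redesign silently broke the scraper
print(feed_titles(rss))        # ['Hello']
```

The moment the site redesigns, the scraper silently returns nothing, while the feed consumer keeps working.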


Categories: Web Development

BusinessWeek has a cover story titled Troubling Exits at Microsoft which contains some excerpts from my blog. The relevant excerpt is

While Microsoft's internal reformers don't directly criticize Gates, they're frustrated with the sluggish pace of product development. As the company's chief software architect, Gates bears that responsibility. He's the author of a strategy called "integrated innovation." The idea is to get Microsoft's vast product groups to work closely together to take advantage of the Windows and Office monopolies and bolster them at the same time. But with so much more effort placed on cross-group collaboration, workers spend an immense amount of time in meetings making sure products are in sync. It "translates to more dependencies among shipping products, less control of one's product destiny, and longer ship cycles," writes Dare Obasanjo, a program manager in Microsoft's MSN division, on his blog.

To shake Microsoft out of its malaise, radical surgery may be in order.

Wow. Almost every month I am reminded that the stuff I write here can be and is read by journalists. At the very least Jon Udell and Steve Gillmor seem to be somewhat regular readers given the number of times I've been quoted by them. This does make it hard for my blog to be as 'personal' as I want it to be.

This is the second time this year my blog has been quoted in a major business paper. The first was the mention in the Wall Street Journal over a back and forth blog discussion between myself, Adam Bosworth and Krzysztof Kowalczyk about Google's contributions to Open Source. Since then Google has launched efforts like the Summer of Code contest and the Google Code website as ways to contribute back to the Open Source movement which has benefitted them so much.

I hope we'll see as much positive change within Microsoft in the future. It seems that we are already trying to move in the right direction with the recent stories about Microsoft's plans to overhaul its development strategy.


You know you're a geek when it's not even 7AM but you've already spent half the morning reading a whitepaper about Microsoft's plans to integrate XML and relational query language functionality into the .NET Framework with Linq.  C# 3.0 is going to be hot.

Like its forefathers X#, Xen and Cω, XLinq does an amazing job of integrating XML directly into the Common Language Runtime and the C#/VB.NET programming languages. Below are some code samples to whet your appetite until I can get around to writing an article later this year.

  1. Creating an XML document

    XDocument contactsDoc =
        new XDocument(
            new XDeclaration("1.0", "UTF-8", "yes"),
            new XComment("XLinq Contacts XML Example"),
            new XProcessingInstruction("MyApp", "123-44-4444"),
            new XElement("contacts",
                new XElement("contact",
                    new XElement("name", "Patrick Hines"),
                    new XElement("phone", "206-555-0144"),
                    new XElement("address",
                        new XElement("street1", "123 Main St"),
                        new XElement("city", "Mercer Island"),
                        new XElement("state", "WA"),
                        new XElement("postal", "68042")))));

  2. Creating an XML element in the "" namespace

    XElement contacts = new XElement("{}contacts");

  3. Loading an XML element from a file

    XElement contactsFromFile = XElement.Load(@"c:\myContactList.xml");

  4. Writing out an array of Person objects as an XML file

    class Person {
        public string Name;
        public string[] PhoneNumbers;
    }

    var persons = new [] {
        new Person {
            Name = "Patrick Hines",
            PhoneNumbers = new string[] { "206-555-0144", "425-555-0145" }
        },
        new Person {
            Name = "Gretchen Rivas",
            PhoneNumbers = new string[] { "206-555-0163" }
        }
    };

    XElement contacts = new XElement("contacts",
        from p in persons
        select new XElement("contact",
            new XElement("name", p.Name),
            from ph in p.PhoneNumbers
            select new XElement("phone", ph)));


  5. Print out all the element nodes that are children of the <contact> element

    foreach (XElement x in contact.Elements()) {
        Console.WriteLine(x);
    }

  6. Print all the <phone> elements that are children of the <contact> element

    foreach (XElement x in contact.Elements("phone")) {
        Console.WriteLine(x);
    }

  7. Adding a <phone> element as a child of the <contact> element

    XElement mobilePhone = new XElement("phone", "206-555-0168");
    contact.Add(mobilePhone);

  8. Adding a <phone> element as a sibling of another <phone> element

    XElement mobilePhone = new XElement("phone", "206-555-0168");
    XElement firstPhone = contact.Element("phone");
    firstPhone.AddAfterSelf(mobilePhone);

  9. Adding an <address> element as a child of the <contact> element

    contact.Add(new XElement("address",
        new XElement("street", "123 Main St"),
        new XElement("city", "Mercer Island"),
        new XElement("state", "WA"),
        new XElement("country", "USA"),
        new XElement("postalCode", "68042")));

  10. Deleting all <phone> elements under a <contact> element

    contact.Elements("phone").Remove();


  11. Delete all children of the <address> element which is a child of the <contact> element

    contact.Element("address").RemoveNodes();


  12. Replacing the content of the <phone> element under a <contact> element

    contact.Element("phone").SetValue("425-555-0155");


  13. Alternate technique for replacing the content of the <phone> element under a <contact> element

    contact.SetElement("phone", "425-555-0155");

  14. Creating a contact element with multiple phone numbers distinguished by type attributes

    XElement contact =
        new XElement("contact",
            new XElement("name", "Patrick Hines"),
            new XElement("phone",
                new XAttribute("type", "home"),
                "206-555-0144"),
            new XElement("phone",
                new XAttribute("type", "work"),
                "425-555-0145"));

  15. Printing the value of the <phone> element whose type attribute has the value "home"

    foreach (XElement p in contact.Elements("phone")) {
        if ((string)p.Attribute("type") == "home")
            Console.Write("Home phone is: " + (string)p);
    }

  16. Deleting the type attribute of the first <phone> element under the <contact> element

    contact.Element("phone").Attribute("type").Remove();


  17. Transforming our original <contacts> element to a new <contacts> element containing a list of <contact> elements whose children are <name> and <phoneNumbers>

    new XElement("contacts",
        from c in contacts.Elements("contact")
        select new XElement("contact",
            c.Element("name"),
            new XElement("phoneNumbers", c.Elements("phone"))));

  18. Retrieving the names of all the contacts from Washington, sorted alphabetically 

    from c in contacts.Elements("contact")
    where (string) c.Element("address").Element("state") == "WA"
    orderby (string) c.Element("name")
    select (string) c.Element("name");

All examples were taken from the XLinq: .NET Language Integrated Query for XML Data  white paper.


Categories: XML

A lot of the comments in the initial post on the Microsoft Gadgets blog are complaints that Microsoft is copying ideas from Apple's dashboard. First of all, people should give credit where it is due and acknowledge that Konfabulator is the real pioneer when it comes to desktop widgets. More importantly, the core ideas in Microsoft Gadgets were pioneered by Microsoft, not Apple or Konfabulator.

From the post A Brief History of Windows Sidebar by Sean Alexander

Microsoft "Sideshow*" Research Project (2000-2001)

While work started prior, in September 2001, a team of Microsoft researchers published a paper entitled, "Sideshow: Providing peripheral awareness of important information" including findings of their project. 
The research paper provides screenshots that bear a striking resemblance to the Windows Sidebar.  The paper is a good read for anyone thinking about Gadget development.  For folks who have visited Microsoft campuses, you may recall the posters in elevator hallways and Sidebar running on many employees desktops.  Technically one of the first teams to implement this concept

*Internal code-name, not directly related to the official, “Windows SideShow™” auxiliary display feature in Windows Vista.

Microsoft “Longhorn” Alpha Release (2003)

In 2003, Microsoft unveiled a new feature called, "Sidebar" at the Microsoft Professional Developer’s Conference.  This feature took the best concepts from Microsoft Research and applied them to a new platform code-named, "Avalon", now formally known as Windows Presentation Foundation...

 Microsoft Windows Vista PDC Release (2005)

While removed from public eye during the Longhorn plan change in 2004, a small team was formed to continue to incubate Windows Sidebar as a concept, dating back to its roots in 2000/2001 as a research exercise. Now Windows Sidebar will be a feature of Windows Vista.  Feedback from customers and hardware industry dynamics are being taken into account, particularly adding support for DHTML-based Gadgets to support a broader range of developers and designers, enhanced security infrastructure, and better support for Widescreen (16:10, 16:9) displays.  Additionally a new feature in Windows Sidebar is support for hosting of Web Gadgets which can be hosted on sites such as Start.com or run locally.  Gadgets that run on the Windows desktop will also be available for Windows XP customers – more details to be shared here in the future.

So the desktop version of "Microsoft Gadgets" is the shipping version of Microsoft Research's "Sideshow" project. Since the research paper was published, a number of parties have shipped products inspired by that research including MSN Dashboard, Google Desktop and Desktop Sidebar, but this doesn't change the fact that Microsoft is the pioneer in this space.

From the post Gadgets and Start.com by Sanaz Ahari: Start.com was initially released in February 2005 and since then we've been innovating regularly, working towards accomplishing our goals:

  • To bring the web’s content to users through:
    • Rich DHTML components (Gadgets)
    • RSS and behaviors associated with RSS
    • High customizability and personalization
  • To enable developers to extend their start experience by building their own Gadgets

Yesterday marked a humble yet significant milestone for us – we opened our "Atlas" framework enabling developers to extend their Start.com experience. You can read more about it here. The key differentiators about our Gadgets are:

  • Most web applications were designed as closed systems rather than as a web platform. For example, most customizable "aggregator" web-sites consume feeds and provide a fair amount of layout customization. However, the systems were not extensible by developers. With Start.com, the experience is now an integrated and extensible application platform.
  • We will be enriching the gadgets experience even further, enabling these gadgets to seamlessly work on Windows Sidebar

The Start.com stuff is really cool. Currently with traditional portal sites like MyMSN or MyYahoo, I can customize my data sources by subscribing to RSS feeds but not how they look. Instead all my RSS feeds always look like a list of headlines. These portal sites usually use different widgets for displaying richer data like stock quotes or weather reports but there is no way for me to subscribe to a stock quote or weather report feed and have it look the same as the one provided by the site. Start.com fundamentally changes this model by turning it on its head. I can create a custom RSS feed and specify how it should render on Start.com using JavaScript which basically makes it a gadget, no different from the default ones provided by the site.

From my perspective, we're shipping really innovative stuff but because of branding that has attempted to cash in on the "widgets" hype, we end up looking like followers and copycats.

Marketing sucks.


Categories: MSN

September 13, 2005
@ 11:32 PM

Start.com has always been an innovative service but today's announcements have kicked it up a notch. In his post A Preview of Web 3.0, Scott Isaacs writes

Today's preview of the Start.com Developer illustrates fundamental shifts in web programming patterns:

  • DHTML-based Gadgets
    Start.com consumes DHTML-based components called Gadgets. These Gadgets can be created by any developer, hosted on any site, and consumed into the Start.com experience. The model is completely distributed. You can develop components derived from other components on the web.
  • Adding Behavior to RSS
    RSS (Really Simple Syndication) is an incredible platform for sharing content and information. Today all RSS feeds are treated equally by aggregators. Start.com integrates the world of RSS with Gadgets, enabling any feed to optionally be associated with a rich, interactive experience. Some feeds present information that may be better presented in an alternative format. Other feeds leverage extensions or provide extra semantics beyond standard RSS (e.g., OpenSearch, geo-based coordinates, etc.). By enabling a feed to define a unique experience or consume an existing one, the richness of the aggregator experience can improve organically without requiring a new application. Of course, we also allow the user to control whether a custom experience is displayed for a feed.
  • Open-ended Application Model
    Start.com is what I call an open-ended application. An open-ended application consumes Gadgets and provides core application services and experiences. This is and has been the model since its inception (how do you think they released new features every week?). By opening up, we have removed the boundaries around features and experiences. The community of developers and publishers can now define and control the richness of the experience.

These are the web applications of the future: applications that can integrate not only content (e.g., RSS) but associated behaviors and services. Today, via Start.com, the developer community can preview MSN's client technology and infrastructure. There you will find early samples and documentation. This site will be continually improved with more documentation and samples. Go and build Gadgets and custom experiences for your feeds. Most importantly, since we are far from finished, please give us feedback. The platform can only improve with your feedback. Also, we are always looking for interesting Gadgets and custom RSS experiences.

I'm not sure I'm feelin' the "Web 3.0" moniker but the extensibility of the site is definitely cool beans. I remember a conversation I had with Steve Rider during the early days of the site where I asked if it would be possible to customize how different RSS feeds were displayed. At the time, I had noticed that there were three primary widget types: weather reports, stock quotes and headlines. I suggested that it would be cool if people could add annotations to an RSS feed to tell the site how to display it. Being an XML geek, I was thinking of extensions such as a start:display-style element which could have values like "columns", "headlines" or "rows".

Steve thought my idea was cool and chatted with Scott Isaacs about it. Since Scott is the DHTML guru of DHTML gurus, he kicked the idea up a notch and actually designed an infrastructure where sophisticated rendering behavior could be associated with an RSS feed using JavaScript. The rest is history.
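
For the curious, here is roughly what that annotation idea would have looked like on the wire. The namespace URI and feed content below are made up for illustration; only the start:display-style element name comes from the idea described above.

```python
from xml.etree import ElementTree as ET

# Hypothetical sketch of the original annotation idea: an extension
# element in the feed telling an aggregator how to render it.
START_NS = "http://example.com/start-display"  # made-up namespace URI
ET.register_namespace("start", START_NS)

channel = ET.Element("channel")
ET.SubElement(channel, "title").text = "Seattle Weather"
style = ET.SubElement(channel, "{%s}display-style" % START_NS)
style.text = "columns"  # could also be "headlines" or "rows"

rss_snippet = ET.tostring(channel, encoding="unicode")
```

An aggregator that understood the extension could pick a widget based on the element's value and fall back to a plain list of headlines otherwise.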

Damn, I love working here.


Categories: MSN | Web Development

September 13, 2005
@ 11:02 PM

My former co-workers on the Microsoft XML team have been hard at work with the C# language team to bring XML query integration into the core languages of the .NET Framework. From Dave Remy's post Anders unveils LINQ! (and XLinq) we learn

In Jim Allchin's keynote at PDC 2005 today, Anders Hejlsberg showed the LINQ project for the first time. LINQ stands for Language Integrated Query. The big idea behind LINQ is to provide a consistent query experience across different "LINQ-enabled" data access technologies AND to allow querying these different data access technologies in a single query. Out of the box there are three LINQ-enabled data access technologies being shown at PDC. The first is any in-memory .NET collection that you foreach over (any .NET collection that implements IEnumerable<T>). The second is DLinq, which provides LINQ over a strongly typed relational database layer. The third, which I have been working on for the last 6 months or so (along with Anders and others on the WebData XML team), is XLinq, a new in-memory XML programming API that is Language Integrated Query enabled. It is great to get the chance to take this technology to the next stage of development and get all of you involved. The LINQ Preview bits (including XLinq and DLinq) are being made available to PDC attendees. More information on the LINQ project (including the preview bits) is also available online.

This is pretty innovative stuff and I definitely can't wait to download the bits when I get some free time. Perhaps I need to write an article exploring LINQ the way I did with my Introducing C-Omega article? Then again, I still haven't updated my C# vs. Java comparison to account for C# 2.0 and Java 1.5. It looks like I'll be writing a bunch of programming language articles this fall.
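
The unification LINQ aims for is easiest to see with one query shape that runs identically over objects and over XML. C#'s actual query syntax is not mirrored here; this Python sketch, with made-up data, just illustrates the one-query-over-many-sources idea that LINQ and XLinq bring to .NET.

```python
from xml.etree import ElementTree as ET

# The same filter-and-project query applied to an in-memory collection
# and to an XML document.
people = [{"name": "Anders", "team": "C#"}, {"name": "Dave", "team": "XML"}]
doc = ET.fromstring(
    "<people>"
    "<person team='C#'>Anders</person>"
    "<person team='XML'>Dave</person>"
    "</people>")

# Query 1: over the in-memory collection (LINQ's IEnumerable<T> case).
from_objects = [p["name"] for p in people if p["team"] == "XML"]

# Query 2: the same shape over XML (the XLinq case).
from_xml = [p.text for p in doc.findall("person") if p.get("team") == "XML"]
```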

Which article would you rather see?


Categories: XML

While perusing my referrer logs I noticed that I was receiving a large number of requests from Google Desktop. In fact, I was serving up more pages to Google Desktop users than to RSS Bandit users. Considering that my RSS feed is included by default in RSS Bandit and there have been about 200,000 downloads of RSS Bandit this year, it seemed extremely unlikely that more people were reading my feed from Google Desktop than from RSS Bandit.

After grepping my referrer logs, I noticed an interesting pattern when it came to accesses from the Google Desktop RSS reader. Try and see if you notice it from this snapshot of a ten-minute window of time.

2005-09-13 16:31:05 GET /weblog/SyndicationService.asmx/GetRss - 80.58.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:32:13 GET /weblog/SyndicationService.asmx/GetRss - 65.57.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:32:36 GET /weblog/SyndicationService.asmx/GetRss - 209.221.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:33:05 GET /weblog/SyndicationService.asmx/GetRss - 64.116.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:33:12 GET /weblog/SyndicationService.asmx/GetRss - 209.204.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:33:20 GET /weblog/SyndicationService.asmx/GetRss - 68.188.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:34:48 GET /weblog/SyndicationService.asmx/GetRss - 209.221.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:35:25 GET /weblog/SyndicationService.asmx/GetRss - 64.116.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:35:32 GET /weblog/SyndicationService.asmx/GetRss - 209.204.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:35:40 GET /weblog/SyndicationService.asmx/GetRss - 68.188.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:36:14 GET /weblog/SyndicationService.asmx/GetRss - 80.58.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:37:33 GET /weblog/SyndicationService.asmx/GetRss - 65.57.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:37:46 GET /weblog/SyndicationService.asmx/GetRss - 209.204.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:37:55 GET /weblog/SyndicationService.asmx/GetRss - 68.188.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:37:55 GET /weblog/SyndicationService.asmx/GetRss - 64.116.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:39:19 GET /weblog/SyndicationService.asmx/GetRss - 196.36.*.1* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:39:31 GET /weblog/SyndicationService.asmx/GetRss - 12.103.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:39:55 GET /weblog/SyndicationService.asmx/GetRss - 18.241.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:40:11 GET /weblog/SyndicationService.asmx/GetRss - 63.211.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:40:15 GET /weblog/SyndicationService.asmx/GetRss - 64.116.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200
2005-09-13 16:41:23 GET /weblog/SyndicationService.asmx/GetRss - 80.58.*.* Mozilla/4.0+(compatible;+Google+Desktop) - 200

The *'s are there to protect the privacy of the people accessing my RSS feed. However, it is clear that not only is Google Desktop fetching my RSS feed every five minutes, it is also not using HTTP conditional GET requests. WTF?
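
For reference, here is what the missing behavior looks like. A well-behaved client remembers the ETag and Last-Modified values from its last successful fetch and sends them back as validators, letting the server answer 304 Not Modified with an empty body instead of re-sending the whole feed. The cache structure below is a made-up sketch, not any particular aggregator's implementation.

```python
# Sketch of a polite feed poller using HTTP conditional GET. A real
# client should also honor a sane polling interval (hourly, say, not
# every five minutes).

def build_conditional_headers(cache_entry):
    """Turn validators saved from the last 200 response into
    If-None-Match / If-Modified-Since request headers."""
    headers = {}
    if cache_entry.get("etag"):
        headers["If-None-Match"] = cache_entry["etag"]
    if cache_entry.get("last_modified"):
        headers["If-Modified-Since"] = cache_entry["last_modified"]
    return headers

def handle_response(cache_entry, status, response_headers, body):
    """On 304, reuse the cached copy; on 200, store the new validators."""
    if status == 304:  # Not Modified: no bandwidth wasted
        return cache_entry["body"]
    cache_entry["etag"] = response_headers.get("ETag")
    cache_entry["last_modified"] = response_headers.get("Last-Modified")
    cache_entry["body"] = body
    return body
```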

Since I couldn't find a place to send feedback about this product, I've posted it to my blog. I hope Google fixes this soon, I'd hate to have to ban their client because it is wasting my bandwidth.


September 13, 2005
@ 05:26 PM

I am proud to announce that we have launched the MSN Developer Center on MSDN. This is the culmination of some of the efforts I started driving shortly after writing the blog post MSN Developer Network?, in which I wrote

Yesterday, in a meeting to hash out some of the details of the MSN Spaces API, an interesting question came up. So far I've been focused on the technical details of the API (what methods should we have? what protocol should it use? etc.) as well as the scheduling impact, but completely overlooked a key aspect of building a developer platform. I hadn't really started thinking about how we planned to support developers using our API. Will we have a website? A mailing list? Or a newsgroup? How will people file bugs? Do we expect them to navigate to the site and use the feedback form?

Besides supporting developers, we will need a site to spread awareness of the existence of the API.

After writing that post I started talking to various folks at MSN who were interested in providing APIs for their services and realized there was an opportunity to build a unified developer portal rather than a number of disjoint efforts. My argument was that instead of developers having to figure out that they need to go to one site to learn about extending MSN Toolbar and another to learn about building MSN Virtual Earth mash-ups, we should have a single developer center for all of MSN's Web 2.0 efforts. Everyone I talked to thought this made sense, and now here we are.

Currently you can find information on extending or building applications with the APIs from Windows Desktop Search, MSN Toolbar, Start.com, MSN Virtual Earth, MSN Search, and MSN Messenger. In the near future I will be adding information about the APIs for interacting with MSN Spaces.

In addition to the developer center, we also have MSN Developer Forums where developers can discuss the various MSN APIs and interact with some of the people who work on the technologies.

Of course, this is just the beginning. Over the long term we have a bunch of stuff planned for the dev center including more APIs and more MSN properties joining the Web 2.0 party. This is going to be lots of fun.

PS: Shout outs go out to Jim Gordon, Chris Butler, Seth Demsey, Scott Swanson, Josh Ledgard and a host of others who helped make this a reality.


Categories: MSN

Surprise, surprise. Check out the ASP.NET "Atlas" site to try out a preview of Microsoft's AJAX framework. From the website

ASP.NET "Atlas" is a package of new Web development technologies that integrates an extensive set of client script libraries with the rich, server-based development platform of ASP.NET 2.0. "Atlas" enables you to develop Web applications that can update data on a Web page by making direct calls to a Web server — without needing to round trip the page. With "Atlas", you can take advantage of the best of ASP.NET and server-side code while doing much of the work in the browser, enabling a richer user experience.

ASP.NET "Atlas" includes:

  • Client script libraries that provide a complete solution for creating client-based Web applications. The client script libraries support object-oriented development, cross-browser compatibility, asynchronous calls to Web services, and behaviors and components for creating a full-featured UI.
  • Web server controls that provide a declarative way to emit markup and client script for "Atlas" features.
  • Web services, such as ASP.NET profiles, that can add useful server-side features to an "Atlas" application.

It's great to see this getting out to developers so quickly. Kudos to Scott Isaacs, Walter Hsueh and the rest of the folks at MSN who worked with the ASP.NET team on this project.


Categories: Web Development

September 13, 2005
@ 03:35 PM

In planning for our future releases, I came up against a design question that I'd like to see solved in future versions of the .NET Framework. The basic scenario is a website that wants to expose a web service using a number of different protocols (SOAP, XML-RPC, JSON, Plain Old XML (POX), etc) without having to write a lot of duplicate code. For example, the Flickr web services expose 3 interfaces for each method (SOAP, XML-RPC and POX).
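
One way to avoid the duplicate code is to write the service logic exactly once and treat each protocol as a serializer bolted on at the edge. This is a made-up minimal sketch (JSON and POX only; SOAP and XML-RPC endpoints would slot into the same table), not how any shipping stack actually does it.

```python
import json
from xml.etree import ElementTree as ET

def get_movie(movie_id):
    """The actual service logic, written once, protocol-neutral."""
    return {"id": movie_id, "title": "Serenity", "rating": "PG-13"}

def to_json(result):
    return json.dumps(result)

def to_pox(result):
    root = ET.Element("movie")
    for key, value in result.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

# SOAP and XML-RPC serializers would be added here the same way.
SERIALIZERS = {"json": to_json, "pox": to_pox}

def dispatch(fmt, movie_id):
    """One entry point; the wire format is just a lookup."""
    return SERIALIZERS[fmt](get_movie(movie_id))
```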

From my perspective, this is something that should be handled by a protocol-agnostic web services stack, which is what the Windows Communication Foundation (aka Indigo) is supposed to be. On the other hand, the ASP.NET team's recently announced Atlas project is targeted at server-side AJAX development.

What I want to know is basically this: if I want to create web services that utilize both SOAP and JSON, does that mean I have to adopt both Indigo and Atlas?

Can some helpful soul at PDC ask this at any sessions on Indigo or Atlas? I guess I can always wait until next week and get the info from the horse's mouth, but product teams tend to give better answers to external customers than to us internal MSFT folks. :)


Since my previous post on the various MSN sessions at PDC, two new ones have finally had their details announced. They are

PRSL02 - Case Study: How Hotmail Used Atlas and ASP.NET to Build a Great User Experience
September 14, 12:30 PM - 1:15 PM
152/153 (Hall F)
Walter Hsueh

Microsoft's Hotmail web application team is developing the successor to Hotmail: a modern webmail experience focused on safety, simplicity, and speed. We will walk you through the scale and performance requirements of the Internet's largest distributed webmail application and show you how building on ASP.NET and "Atlas" technologies provides the right solution for the problem space. Learn from our experience and the design patterns we used to leverage the "Atlas" programming model and "Atlas" components to build rich, interactive Web applications.

PRSL04 - MSN: Extending Start.com Using Startlets
September 15, 1:00 PM - 1:45 PM
408 AB
Scott Isaacs / Sanaz Ahari

MSN's Web incubation team is creating a new AJAX-based personalized Web experience at Start.com; here is your opportunity to get under the covers and see how we are building this site. Start.com allows consumers to personalize their web experience around the things that matter the most to them. See how we build modules (Startlets) such as the Stock Picker, Weather Picker, and Blogger Map. Learn how to create your own custom Startlets for Start.com. We will also present how your RSS feed can define a unique experience when viewed within Start.com. The team and architect will give you a tour of code you can use today. Join us for lunch and catch the latest wave of innovation from MSN's Web incubation team.

If you are a Web developer attending PDC who is interested in server-side or client-side AJAX development, then you should attend both talks.

Categories: MSN

September 13, 2005
@ 02:18 AM

Shelley Powers has a post entitled Change starts at home where she points out the speaker list of the Our Social World conference is pretty homogenous (white males) and consists of the usual suspects when it comes to geeking about social software. I was quite surprised to see a comment in response to her post which stated

Shelley, you’re totally off the mark here. Firstly, there simply are not that many women working professionally on social software/blogging in the UK...Secondly, the speakers were self-selecting. Geoff who organised it put up a wiki and anyone could put their name down to speak. No women other than myself went to the effort of putting their name down and turning up... Finally, regarding ethnic minorities, you have to remember that the UK is not as ethnically diverse (and that that diversity is not as widely spread out) as the US .

I don't know about the UK but I do know that in the US, there are a lot of women in Social Software yet I keep seeing the same set of [white male] names on the speaker lists of various conferences on the topic. Given that this is the second post I've read today that points out the incongruities in the choices of geeks typically chosen as spokespeople for the social software world (the first was Phil Haack's Where are the Sociologists of Social Software) I decided to write something about it.

Just like with my Women in XML post last year, Shelley's post made me start thinking about the women I know working in social software whose work I'd rather see presented than at least one of the presentations currently on the roster for the Our Social World conference. Here is my list:





These women either are heavily involved in research around the sociological impact of technology and human interaction or actually work on building social software applications used by millions of people. Quite frankly, I'd rather hear any one of them speak than the typical geek you see at the average O'Reilly conference yakking about Social Software.

Unfortunately the people who really do the work that changes the world often get less publicity than the ones who just talk about it.


Categories: Social Software

A couple more details of MSN's upcoming announcements of the various APIs we'll be opening up during PDC have come out. The write-up with the best overview I've seen so far is the article Microsoft Web plan takes aim at Google, which states

Microsoft will detail its "Web platform" strategy at its Professional Developers Conference in Los Angeles next week, company executives told CNET News.com.
At the developers conference next week, Microsoft plans to publish the API to its MSN Search service, which can be used by developers through the Simple Object Access Protocol, or SOAP. The noncommercial license will let people produce 10,000 search results per day per Internet address, said Seth Demsey, group program manager for MSN Search. Microsoft will release an API for its desktop search as well.

Also next week, the company will announce a free commercial license to use a JavaScript "control" to display data from its Virtual Earth mapping service. The MSN Messenger group, meanwhile, will allow developers to write Windows applications that make use of the "Activity" window. This would allow a customer service representative, for example, to display customer information in a chat session.
Next Thursday, Microsoft executives will discuss a developer program for Start.com, an MSN incubator Web site that consolidates information from RSS feeds and other Web sites onto a single customizable page.

That's right, during the PDC we'll be announcing developer programs for four MSN properties: Start.com, MSN Virtual Earth, MSN Messenger, and MSN Search. Unfortunately, MSN Spaces isn't on that list, but this is mainly due to logistical reasons and not because we won't be opening it up as a platform for developers.

Of course, there are more details from the horse's mouth.

MSN Messenger
From Leah Pearlmann's post about the MSN Messenger Activity API we learn

The release of the API will be announced at the upcoming Microsoft PDC next week in LA. Along with this announcement will be another for a contest.
Starting next week, you, yes you, can download the Activity API, build an Activity using the competition guidelines, and submit it. Your Activity will be posted in the App Gallery where people around the globe can try it out and vote for it (which they will, because YOURS will be the best).

Activities will be judged based on creativity, usability, inclusion of MSN services/features and number of popular votes. Besides the most valuable reward (unlimited bragging rights), you can also win:

  • Alienware Area 51 laptop with armored case (grand prize)
  • Aurora Desktops (1st runner-up)
  • Oakley Thump sun glasses with built in MP3 player (2nd runner-up)

For more information, visit this forum or wait until September 12th and then check the contest site.

I've been involved in the discussions on opening up different parts of our IM client and it's great to see some of these efforts begin to bear fruit. Once the contest is live, don't hesitate to post questions to the developer forum. I'll be watching as will several folks on the MSN Messenger team.

By the way, the MSN Messenger PM team is looking for someone with experience to drive their audio and video efforts. If this sounds like your cup of tea, then check out the job description and maybe even respond.

MSN Virtual Earth
The big news here was posted by Chandu Thota in his blog posting Virtual Earth APIs available for commercial use (and they are FREE!) where he wrote

Here is the good news folks: we are now offering the MSN Virtual Earth API for commercial applications free of charge to developers. These APIs include the JavaScript map control and the local search service (exposed via the What/Where search boxes on the Virtual Earth site today).

Here are some important notes about this release:

1. Maps are available in the U.S. only and do not include any routing capabilities. In the future we will add European and Asian geographies.
2. In order to use the API for commercial applications at no charge, your application must use the local search service on the map (the What/Where search boxes).
3. There is no SLA for Virtual Earth-enabled applications until January 1, 2006. Also, if you choose to use Virtual Earth in your production environment before the end of 2005, you must notify the MSN Virtual Earth team first in order to ensure capacity.
4. MapPoint Web Service and Virtual Earth platform are not integrated (yet!)

Now what does the future look like? On January 1, 2006, you will have two additional options to choose from to fit your commercial application needs:

1. You can use the Virtual Earth APIs for free as long as you use the What/Where search boxes on your map. This also makes sense from a revenue standpoint, since you will have the opportunity to make money by placing advertisements on your site in a revenue-sharing model (more details to be announced at a later date).
2. If you do not want to utilize the What/Where search boxes on your site or advertising, you can use the Virtual Earth API under your current MapPoint Web Service contract. In this scenario you will be charged for transactions through the Virtual Earth API. Note that this option comes with SLAs. 

There you have it: now you have what you've been waiting for, the opportunity to integrate maps into your applications free of cost (as long as you use the What/Where search boxes). I will be writing more about how to integrate maps and the What/Where search boxes into your web applications on this blog in future posts. In the meantime, don't forget to check out ViaVirtualEarth to learn how to build applications using the Virtual Earth map control (and to win $1000!)

That's right, they have a developer contest going on as well. I'd hoped to have my article on building my Seattle Movie Finder page up by PDC to give folks a place to start when building their first mapping mash-up, but I never got enough spare time at work. The article should be done by next week and should hopefully show up online the week after PDC.


Categories: MSN

Dion Hinchcliffe wrote a blog entry entitled State of Ajax: Progress, Challenges, and Implications for SOAs  which did a good job of pointing out the [somewhat obvious yet not so obvious] link between web services and AJAX. For those who don't have time to wade through his entire post, the key bit is

Lightweight SOAP and REST services are in and WS-* services may be on the rocks. Since Ajax applications can and will frequently call backend XML web services as the user interacts with the front-end, having lightweight, easy-to-consume, high performance/availability SOAP and REST web services will only become more important. Of course, the implication is that since Ajax applications live in a fairly austere JavaScript sandbox, this means heavy-duty WS-*-style SOAP stacks probably won't play well with the lightweight XML/JSON service interactions that Ajax prefers. This doesn't mean WS-* is broken for Ajax, but at this time there are no Ajax frameworks yet that support more than basic SOAP service interaction.

There's some stuff I agree with here but a lot I don't. Personally I think even lightweight SOAP services are out in the AJAX world. As it stands, lots of AJAX developers are trying to eschew the overhead of XML for the simplicity of JSON let alone the overhead and complexity of SOAP. The fact is that most AJAX applications talk to lightweight REST services with either XML or JSON being the wire format.
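
The attraction of JSON for AJAX developers is easy to demonstrate: the response deserializes directly into the data structure the script wants, while XML requires a parse-and-walk step. The stock quote payloads below are invented for illustration (and in 2005-era browsers the JSON would typically just be eval()'d, which makes the gap even wider).

```python
import json
from xml.etree import ElementTree as ET

# Two made-up wire representations of the same stock quote.
json_payload = '{"symbol": "MSFT", "price": 25.61}'
xml_payload = '<quote><symbol>MSFT</symbol><price>25.61</price></quote>'

# JSON: one call and the response *is* the data structure.
quote_from_json = json.loads(json_payload)

# XML: parse the document, then walk it and coerce the types yourself.
node = ET.fromstring(xml_payload)
quote_from_xml = {"symbol": node.findtext("symbol"),
                  "price": float(node.findtext("price"))}
```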

This is yet another example of the dichotomy between providing services for the Web and building services for use within an enterprise's intranet.

Surprisingly, it seems some people fail to acknowledge this dichotomy. One of these people is Nick Malik who in his recent post On Atlas/Ajax and SOA stated

I ran across a blog entry that attempts to link Atlas/Ajax to SOA.  What absolute nonsense!
So what's wrong with having a browser consume enterprise web services?  The point of providing SOA services is to be able to combine them and use them in a manner that is consistent and abstracted from the source application(s).  SOA operates at the integration level... between apps.  To assume that services should be tied together at the browser assumes that well formed architecturally significant web services are so fine-grained that they would be useful for driving a user interface.  That is nonsense.

For an Atlas/Ajax user interface to use the data made available by a good SOA, the U/I will need to have a series of fine-grained services that access cached or stored data that may be generated from, or will be fed to, an SOA.  This is perfectly appropriate and expected.  However, you cannot pretend that this layer doesn't exist... it is the application itself!

In a nutshell, the distinction is in the kinds of services provided.  An SOA provides coarse-grained services that are self-describing and fully encapsulated.  In this environment, the WS-* standards are absolutely essential.  On the other hand, the kinds of data services that a web application would need in an Atlas/Ajax environment would be optimized to provide displayable information for specific user interactions.  These uses are totally different. 

This is probably one of the most bogus posts I've ever seen written by a Microsoft employee. As Nick points out, the point of providing services is to be able to combine them and use them in a manner that is consistent and abstracted from the source application.

For example, my Seattle Movie Finder web page is powered by a RESTful web service which gives it information about movies currently playing in the Seattle area. The service URL gives me an XML list of all the movies currently playing in my neighborhood. The web page is an AJAX application that consumes this service. This information could also be consumed by a smart client on a desktop or by another service which augments the data (e.g., merges movie critic ratings into the listings before sending them to an end user). Claiming that because this service doesn't use the various WS-* technologies and is being accessed from a web browser it is somehow illegitimate is just plain ridiculous.
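
The augmentation scenario is worth spelling out because it shows why the representation, not the protocol stack, is what matters. The movie and rating data below are invented; an intermediary service could apply exactly this kind of merge to the XML list before passing it downstream.

```python
def merge_ratings(movies, ratings):
    """Hypothetical intermediary service: join critic ratings onto a
    movie list fetched from the RESTful service, leaving the original
    records untouched."""
    return [dict(movie, rating=ratings.get(movie["title"]))
            for movie in movies]

movies = [{"title": "Serenity", "theater": "Loews Factoria"},
          {"title": "Red Eye", "theater": "Pacific Place"}]
ratings = {"Serenity": 4.0, "Red Eye": 3.0}
augmented = merge_ratings(movies, ratings)
```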

Furthermore, it is quite likely that the various services that are used to gather this information aren't RESTful. However what works within the intranet isn't necessarily what works on the Web.

An interesting challenge I've faced at work is convincing some of the developers on my team that just because we use SOAP for our internal services doesn't mean we won't use alternate approaches for Web-facing services. This issue first came up when we decided to go with the MetaWeblog API as the blog editing API for MSN Spaces, and I'm sure it will keep coming up.
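
For context, the MetaWeblog API is an XML-RPC interface, so a blog editor's call to create a post is just a small XML payload over HTTP. The endpoint and credentials below are placeholders; the method name and argument order (blogid, username, password, struct, publish) come from the MetaWeblog spec.

```python
import xmlrpc.client

def new_post_request(blogid, username, password, title, description):
    """Build the XML-RPC request body a blog editor would POST to a
    server's MetaWeblog endpoint. Placeholder credentials only; no
    network traffic happens here."""
    struct = {"title": title, "description": description}
    return xmlrpc.client.dumps(
        (blogid, username, password, struct, True),  # True = publish now
        methodname="metaWeblog.newPost")

body = new_post_request("1234", "example_user", "hunter2",
                        "Hello", "First post!")
```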

When you have a hammer, everything looks like a nail. SOAP/WS-* are not the answer to every problem, and just because they can't be used in a particular problem space doesn't make that problem space any less valid than others. The sooner people understand the dichotomy between intranet and Internet service development, the better.


It looks like I'm going to regret not going to PDC since we'll be unveiling some stuff I've been working on with folks over the past couple of months. Here's a hint. ;)

If you'll be at PDC, you definitely should check out some of the info first hand by attending some of our conference sessions. Such as

COM301  MSN Messenger: Extending MSN Messenger with Multi-Person Instant Messaging Applications

Day/Time: Wednesday, September 14 1:45 PM- 3:00 PM Room: 406 AB
Speaker(s): Scott Swanson
Session Type(s): Breakout
Session Level(s): 300
Track(s): Communications
This session covers the architecture and design of multi-person IM applications within MSN Messenger using the Messenger Activity API. We show how to use the peer-to-peer capabilities of the Activity API to build multi-user IM applications that can send files, instant messages, data, and integrate with other services. Build your IM applications to work with MSN Messenger, the world's largest instant messaging service with more than 165 million customers worldwide.

DAT322  MSN Search: Building Web and Desktop Search into Your Applications
Day/Time: Thursday, September 15 5:15 PM- 6:30 PM Room: 409 AB
Speaker(s): Seth Demsey, Chris McConnell
Session Type(s): Breakout
Session Level(s): 300
Track(s): Data & Systems
This session shows you how to harness the power of Web and Desktop Search within your applications. We provide an overview of the MSN Search APIs for both searching the Web and your desktop. We then demonstrate how to use these APIs to create applications that harness the power of searching your local data and Web data.

DATL03  Tips, Tricks & Hacks to MSN Search and Desktop Search Platforms
Day/Time: Friday, September 16 12:00 PM- 12:45 PM Room: 403 AB
Speaker(s): Andy Edmonds
Session Type(s): Lunch Session
Track(s): Data & Systems
This session will show you how to get the most of MSN Search and Windows Desktop Search. From advanced syntax to API usage and RSS, you will be equipped to get exactly what you want from these search tools. Learn about the ranking sliders which allow you to emphasize freshness or popularity in the results. We will also be distributing a hack which customizes MSN Search to the needs of a PDC attendee. For Windows Desktop Search you'll learn how to make yourself more productive with advanced query syntax, Deskbar shortcuts, additional locations and customized previews.

DATL05  Case Study: Extending Virtual Earth for Windows Mobile Devices
Day/Time: Thursday, September 15 1:00 PM- 1:45 PM Room: 404 AB
Speaker(s): Steve Lombardi
Session Type(s): Lunch Session
Track(s): Data & Systems
MSN Virtual Earth is an evolution of local search technology that gives consumers a deeply immersive search experience where they can easily find, discover, plan, and share what is important to them. Microsoft is now harnessing its extensive search and mapping assets to create an entirely new local search experience for consumers and businesses. In this session, learn how you can programmatically tap into the power of Virtual Earth to location-enable your device applications. Learn how the powerful managed code features of Windows Mobile 5.0 enable you to integrate your device's contacts and calendar with Virtual Earth's powerful search and mapping engines.

PNL09  APIs in the Sky: Developing Public Web Services
Day/Time: Friday, September 16 10:30 AM- 12:00 PM Room: 152/153 (Hall F)
Speaker(s): Don Box, Seth Demsey, Omri Gazitt, Alan Geller, David Nielsen, Doug Purdy
Session Type(s): Panel
Track(s): Communications
Want to know how Amazon, Paypal and MSN design, build, and maintain their Web services? Interested in learning best practices and key insights from the architects of Windows Communication Foundation ("Indigo")? If so, this is the panel for you. Join Don Box and other industry luminaries as they discuss the practical trade-offs involved when building Web services today and what you can expect to see in the near and long-term future.
I was supposed to be one of the speakers on the panel on developing public web services; however, since I bowed out, Seth will be bringing the MSN perspective to the panel. The combination of my workload and the fact that I'll be attending the Web 2.0 conference next month made me decline. I'll definitely be doing some blogging about the announcements next week.


Categories: MSN

As promised in the RSS Bandit roadmap, the preview of the next version of RSS Bandit is now available for general download. You can now download it at


  • NNTP Newsgroups support: Users can specify a public NNTP server such as and subscribe to newsgroups on that server. Permalinks in a newsgroup post point to the post on Google Groups.

  • Item Manipulation from Newspaper Views: Items can be marked as read or unread and flagged or unflagged directly from the newspaper view. This improves end user workflow, as one no longer has to leave the newspaper view and right-click on the list view to either flag or mark a post as unread.

  • Subscription Wizard: The process for subscribing to newsgroups, search results and web feeds has been greatly simplified. For example, users no longer need to know the web feed of a particular web site to subscribe to it but can instead specify the web page URL and discovery of its web feed is done automatically. 

  • Synchronization with Newsgator Online: Users can synchronize the state of their subscribed feeds (read/unread posts, new/deleted feeds, etc) between RSS Bandit and their account on Newsgator Online. This allows the best of both worlds where one can use both a rich desktop client (RSS Bandit) and a web-based RSS reader (Newsgator Online) without having to worry about marking things as read in both places.

  • Atom 1.0 support: The Atom 1.0 syndication format is now supported. 

  • Threaded Posts Now Optional: The feature where items that link to each other are shown as connected items reminiscent of threaded discussions can now be disabled and is off by default. This feature is very processor intensive and can slow down the average computer to the point that it is unusable if one is subscribed to a large number of feeds.

  • UI Improvements: Icons in the tree view have been improved to make them more distinctive and also visually separate newsgroups from web feeds.
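The subscription wizard's "give us a web page URL, we'll find the feed" behavior is usually implemented with feed autodiscovery: scanning the page's HTML for <link rel="alternate"> tags whose MIME type marks them as RSS or Atom. Here is a minimal sketch of that technique in Python, purely for illustration — RSS Bandit itself is a .NET application, and the names `discover_feeds` and `FeedLinkFinder` are my own:

```python
# Sketch of RSS/Atom feed autodiscovery: find <link rel="alternate">
# tags on a page that advertise a feed, and resolve their hrefs
# against the page URL.
from html.parser import HTMLParser
from urllib.parse import urljoin

# MIME types conventionally used to advertise feeds.
FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if ((a.get("rel") or "").lower() == "alternate"
                and (a.get("type") or "").lower() in FEED_TYPES
                and a.get("href")):
            # Resolve relative hrefs (e.g. "/feed.xml") against the page URL.
            self.feeds.append(urljoin(self.base_url, a["href"]))

def discover_feeds(page_url, html):
    """Return the feed URLs advertised by the given page's HTML."""
    finder = FeedLinkFinder(page_url)
    finder.feed(html)
    return finder.feeds
```

A wizard built on this only needs the first result: fetch the page the user typed in, run discovery, and subscribe to the first advertised feed.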


  • Downloading of Enclosures/Podcasts
  • Comment Watch - Notifications of new comments on "watched" posts
  • Tip of the Day on Startup
  • Extensibility Framework to Enable Richer Plugins


Categories: RSS Bandit

For all you OneNote freaks out there, Torsten has written a plugin for sending an item from RSS Bandit to OneNote. From his post "Send to OneNote..." plugin for RSS Bandit 

You can download it here. Just expand the zip to your RSS Bandit installation sub-folder named plugins. Within the config file you can change the default note page used and some templates to format the posted item link and content.

The plugin uses the IBlogExtension interface so it should also work with other RSS readers written using the .NET Framework such as SharpReader.


Categories: RSS Bandit

September 6, 2005
@ 03:13 PM

I picked up The Incredible Hulk: Ultimate Destruction for the Xbox this weekend and I've been hooked on it. The IGN review says it all:

the development team has axed the structured, linear levels of its last Hulk adventure and opened things up into an expansive free-roaming world. While not based on any real-life city like Grand Theft Auto: San Andreas or Spider-Man 2, Ultimate Destruction still maintains an authentic urban feel to it -- with bridges, hospitals, moving traffic, and pedestrians that all have one thing in common: they're certain unavoidable targets for the basketball-sized fists of Ol' Greenskin.

What makes attacking these targets so fun, though, is that just about everything you encounter is completely deformable: Various types of autos break and smash into pieces, trees and lampposts are uprooted from their bases, explosions go off in just about every direction, and sometimes even buildings themselves crack and crumble to the ground. "Ultimate Destruction" is exactly what this experience offers and as the Incredible Hulk himself, there's little you can't do to make the game live up to its title.

But if all you could do was run around the city smashing things up with a small repertoire of moves, then the Incredible Hulk would grow old pretty quickly...There are a number of combos, weapon strikes, grabs, throws, and chains that you can perform because of this, as well, and just about every single one of them are incredibly cool.

Want to smash a bus into flattened metal and then use it as a shield? Go right ahead! Would you prefer to shatter a radio tower and use the leftover antenna as a javelin? You can do that too! You can even crush boulders into near-perfect circles and play oversized bowling or clang two cars together to form makeshift boxing gloves that inflict additional damage. And just when you thought you've seen it all, a new mission comes along that requires you to hop onto a harrier jet and wrestle it to the ground or bash a cargo truck over your head for use as a Metal Gear Solid-inspired cardboard box disguise for sneaking into military bases. Needless to say, the move progression system here is perfect (using a traditional purchase system not unlike Onimusha or Devil May Cry) and steadily transforms you from a mindless oversized pugilist into an unstoppable engine of destruction. I love it.

This game is fantastic and I'm surprised that it hasn't been rated higher in the various reviews I've seen. It's definitely the best game I've seen for the XBox this year.


Categories: Video Games

Like everyone else I have been stunned by what I've seen on the various news channels about the aftermath of Hurricane Katrina in New Orleans. I don't really have the words to express myself, so I'll point to the words of others that express how I feel:

  1. From Shelley Powers's post Stopping the World

    However, I’m finding that the contention and anger surrounding this event is becoming increasingly difficult to absorb. I can’t seem to maintain enough detachment to keep from being pulled completely in, and by the end of the day, I’m feeling emotionally drained and physically sick. Some of this is coming from the worries, frustrations, and the sense of loss–of people, of history–because of Katrina. But not all.

    Debate should energize, not drain. When it doesn’t, you need to step away. When I read the headline, Condi returns to DC after Bloggers expose vacation about how wrong it was for Rice to buy expensive shoes while people are suffering in New Orleans, it was enough. And I find I don’t have the words to explain why.

    While I’m taking a breather, some folks with good thoughts:

    Joseph Duemer: Small Town Accountability

    Jeneane Sessum: President Bush Declares War on Weather

    Dave Rogers: What can I say and Unbelievable

    A question and answer that Dave Winer had about the future impact of Katrina–beyond the South. In particular, check out the comments associated with the question.

    Loren Webster: Two Worlds Apart

    Frank Paynter: Down on our Luck

    Scott Reynen: Fear Kills

    Sheila Lennon provides a continuously updated round of news.

    Norm Jenson: Incompetence

    Charles Eicher: Outrage Overload

    Karl: We would have fought or died

    Lauren points to Culture of Life

    There are others, but this is a good start.

  2. From Doc Searls's post Prophecies 

    This event won't have ripple effects. The consequences will be tidal: on transportation, on agriculture, on lumber and other supplies, on retailing, on churches and on citizens across the country who will need to take on the burden of caring for refugees and helping others start new lives.

    Katrina also forces us to face a subject even Demoncrats[sic] have stopped talking about, although it lurks beneath everything: class. When the dead are counted, most of them will have been poor. Count on it.

  3. From Paul Graham's essay Inequality and Risk

    Like many startup founders, I did it to get rich. But not because I wanted to buy expensive things. What I wanted was security.


Categories: Current Affairs

A number of people at work have asked me why in a previous post I stated that I have concluded that MSFT isn't the kind of place I see myself working at in 5 years. The main reason for this is that I don't think the Microsoft culture and the direction from its executive leadership lends itself to building great consumer-centric software.

A good example of this is taking a look at Windows from the consumer perspective. The decisions that Microsoft has made over the past couple of years, from abandoning feature work in Internet Explorer until Firefox became popular to a lot of the original intentions around the 3 pillars of Longhorn (Avalon, WinFS & Indigo), are the actions of a company that is more interested in protecting its market share than one that is trying to improve the lives of its customers by building great software. Of course, it's not only customers that get the short end of the stick; employees also have to deal with the consequences of this kind of thinking. The primary way this manifests itself is integrated innovation, a buzzword that translates to more dependencies among shipping products, less control of one's product destiny and longer ship cycles. A lot of the frustration you see in the comments in places like the Mini-Microsoft blog is a direct consequence of this focus by our executive leadership.

For now, MSN doesn't suffer from the same kind of culture I've described, but I can see signs that this is just temporary while we face off against competitors like Google [and Yahoo! to a lesser extent]. Since the members of our executive leadership who are pushing this kind of thinking are well entrenched, I don't see any reason why the corporate culture will change; it's just a matter of time before we start thinking that way at MSN as well.

I give it 5 years, tops. In the meantime, I get to work with really cool people building really cool software that is changing people's lives. I can't ask for much more than that.


Categories: Life in the B0rg Cube

September 1, 2005
@ 07:49 PM

In his post The saga of RSS (dis)continuity Jon Udell writes

It's been almost three years since I first wrote about the problem of RSS feed redirection. From time to time I'm reminded that it's still a problem, and today I noticed that two of the blogs I read were affected by it. I was subscribed to John Ludwig at, and today's entry says "Feed moved -- pls check out" In fact he's got an index.xml and an atom.xml, and the latter seems to correspond to what's actually published on the blog, but either way the issue is that we've still yet to agree on a standard way for newsreaders to follow relocated feeds.

Jon Udell is incorrect. There is a standard way to redirect feeds that is supported by a large number of RSS readers and it is called "just use HTTP". Many RSS readers including RSS Bandit support the various status codes for indicating that the location of a resource has changed temporarily or permanently as well as when the resource is no longer available.

Instead of constantly reinventing the wheel and looking for solutions to problems that have already been solved, a better use of our energy should be evangelizing how to properly use the existing technology.
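To make the "just use HTTP" point concrete, here is a minimal sketch of the decision logic a well-behaved feed reader applies to the relevant status codes: 301 Moved Permanently (update the stored subscription), 302/307 temporary redirects (fetch the new location but keep polling the old URL), and 410 Gone (unsubscribe). The Python function, its name, and its return convention are my own illustration, not code from any actual reader:

```python
# Decision logic for honoring HTTP redirect semantics when polling a feed.
# Returns (fetch_url, store_url): fetch_url is where to get the content on
# this poll (None if unavailable) and store_url is the URL to persist for
# future polls (None means the feed is gone and we should unsubscribe).
def handle_feed_response(status, subscribed_url, location=None):
    if status == 200:
        # Feed is where we expected it.
        return subscribed_url, subscribed_url
    if status == 301:
        # Moved Permanently: fetch from, and remember, the new location.
        return location, location
    if status in (302, 307):
        # Temporary redirect: fetch from the new location this time,
        # but keep polling the original URL in the future.
        return location, subscribed_url
    if status == 410:
        # Gone: the publisher has retired the feed; stop polling it.
        return None, None
    # Anything else (e.g. a transient 5xx): skip this poll, retry later.
    return None, subscribed_url
```

This is the entire protocol; no feed-format extension is needed, because HTTP already distinguishes "moved for now", "moved for good", and "gone".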

Jon Udell does point out

So far as I know, that's where things stand today. If you control your server, you can of course do an HTTP-level redirect. But if your blog is hosted, you probably can't, in which case you need to use the feed itself to signal the redirect.

This part just boggles my mind. If the user's blog is hosted (e.g. they are a LiveJournal, MSN Spaces or BlogSpot user) then not only can't they control the HTTP headers emitted by the server but they don't control their web feed either. So what exactly is the alternate solution that works in that case? If anything, this points to the fact that blog hosting services should give users the ability to redirect their RSS feed when they leave the service. This is a feature request for the various blog hosting services not an indication that a new technical solution is needed.



September 1, 2005
@ 06:34 PM

I have a LazyWeb request. I plan to write an article next week about my Seattle Movie Finder hack, which is built on the MSN Virtual Earth APIs. Currently the application works in Internet Explorer but not in Firefox. This isn't due to any issues with VE; it's because I don't know what the Firefox alternatives to innerHTML and some other IE-specific DOM properties actually are, nor do I have time to brush up on writing Firefox-specific JavaScript.

If any kind soul can do a view source on the Seattle Movie Finder and modify it to also work in Firefox, I'd appreciate it. Thanks in advance.

Update:  Julien Couvreur just swung by my office and fixed the page so it works in Firefox & IE. The new version of the site should be up in a few hours.


Categories: Web Development

September 1, 2005
@ 05:37 PM

It looks like is now an actual site as opposed to a 'coming soon' graphic. However, the disclaimer at the bottom still states, "this site is not an officially supported site. it is an incubation experiment and doesn't represent any particular strategy or policy."

Curiouser and curiouser.


Categories: MSN

I've known about this for a couple of days but was planning to wait until it was mentioned on the MSN Search blog. However since they've been scooped it looks like the cat is out of the bag. So here goes

Searching for Web (RSS, Atom) Feeds

Need to find an RSS or Atom feed? No problem. Use the feed: operator to do so. This searches for all documents that are RSS or Atom feeds.

Searching for documents that contain Web (RSS, Atom) Feeds

Need to find a document that contains an RSS or Atom feed?  Use the hasfeed: operator.

Folder Level Site Search

You can now use site search (site: operator) to restrict your search to a particular folder hierarchy in the URL up to two levels deep.  For example, (searching the Windows site on (searching a blog on MSN Spaces)

Note:  There is a known issue around this feature. You cannot include a / after the second directory. This will be fixed in the near future. 

So it looks like MSN Search is the first of the big three search engines to provide RSS search and the improvements to the site: operator are also quite cool. They definitely get mad props from me for getting these features out there. 


Categories: MSN