March 31, 2005
@ 03:45 PM

Bloglines just published a new press release entitled Bloglines is First to Go Beyond the Blog with Unique-to-Me Info Updates, which is excerpted below:

Oakland, CA -- March 30, 2005 -- Ask Jeeves®, Inc. (Nasdaq: ASKJ), today announced that Bloglines™, the world’s most popular free online service for searching, subscribing, publishing and sharing news feeds, blogs and rich web content, has released the first of a wave of new capabilities that help consumers monitor customized kinds of dynamic web information. With these new capabilities, Bloglines is the first web service to move beyond aggregating general-audience blogs and RSS news feeds to enable individuals to receive updates that are personal to their daily lives.

Starting today, people can track the shipping progress of package deliveries from some of the world’s largest parcel shipping companies—FedEx, UPS, and the United States Postal Service—within their Bloglines MyFeeds page. Package tracking in Bloglines encompasses international shipments, in English. Bloglines readers can look forward to collecting more kinds of unique-to-me information on Bloglines in the near future, such as neighborhood weather updates and stock portfolio tracking.

“Bloglines is a Universal Inbox that captures all kinds of dynamic information that helps busy individuals be more productive throughout the day—at the office, at school, or on the go,” said Mark Fletcher, vice president and general manager of Bloglines at Ask Jeeves. “With an index of more than 370 million blog and news feed articles in seven languages, we’re already one of the largest wells of dynamic web information. With unique-to-me news updates we’re aiming to be the most comprehensive and useful personalized information resource on the web.”

So it looks like Bloglines is evolving into MyYahoo! or MyMSN, which already provide a way to get customized personal information, from local news and weather reports to RSS feeds and email inboxes.

I've been pitching the concept of the digital information hub to folks at work, but I think the term "universal inbox" is a more attractive one. The more time a user spends in front of an information consumption tool, be it an email reader, RSS reader or online portal, the more data sources the user wants supported by the tool. Online portals are now supporting RSS. Web-based RSS readers are now supporting content that would traditionally show up in a personalized view at an online portal.

At MSN we are exploring what would happen if you completely blurred the lines between a web-based RSS reader and the traditional personalized dashboard provided by an online portal. It is inevitable that both mechanisms of consuming information online will eventually be merged in some way. I suspect the result will look more like what Steve Rider's team is building than what MyYahoo! or Bloglines offer today.

As I mentioned before, we'd love feedback about all the stuff we are doing. Don't be shy; send us your feedback.


Categories: MSN | Syndication Technology

I recently read Joe Beda's post on Google 20% Time and it actually left me with a lot of questions as well as a better understanding of why there is a steady flow of ex-B0rg to Google's Kirkland office. I found some of his points interesting, specifically

Switching teams at Google is a very fluid process.  An engineer can be 40% on one project, 40% on something completely different and 20% on his or her own thing.  That mix can be adjusted as project requirements change.  Switching groups should also not have an affect on your annual review score because of arbitrary team politics.   Joining a new group is more about find a good mutual fit then going through HR and a formal interview loop.  When there is an important project that needs to be staffed the groups and execs will evangelize that need and someone who is interested is bound to step up.  If it is critical to the business and no one is stepping up (I don't know if this has occurred or not) then I imagine it would go come down to a personal appeal by the management team
There is a big difference between pet projects being permitted and being encouraged. At Google it is actively encouraged for engineers to do a 20% project. This isn't a matter of doing something in your spare time, but more of actively making time for it. Heck, I don't have a good 20% project yet and I need one. If I don't come up with something I'm sure it could negatively impact my review.

He also mentions the openness of the Google intranet, which I find less interesting because I expect that as the company grows it will have to start dealing with the kind of leaks that Apple and Microsoft have. Given the company's history of tightly controlling its message (the extremely boring press-release-style posts on the Google blog, the firing of Mark Jen, etc.) I expect that once leak sites in the style of Neowin and Microsoft-Watch spring up around Google's actions, there will be tighter control over access to internal content.

The first set of questions I have are around his comments on switching teams. At Microsoft it is common knowledge that switching teams in the middle of the annual review cycle is guaranteed to screw up your score. Due to stack ranking, managers are obligated to give some people low scores, and it is always easy to tell the new guy "Everyone's been here for a year, you were only here for a few months, so you didn't contribute as much". My first manager at Microsoft was actually pretty good about this, which I'm thankful for. This is the main reason I left the XML team without waiting for the release [or even the second beta] of version 2.0 of the .NET Framework, which I spent about two years working on. Once I had decided I was going to leave, it made the most sense to do so at the beginning of the next review cycle, independent of where we were in the ship cycle of the product. The first question swirling through my mind about Joe's post is: how does Google allow people to fluidly move between teams yet not get screwed during annual reviews?

The other thing I find interesting are his comments on 20% projects. I was recently part of a leadership training class where we were supposed to come up with a presentation as a team, and the topic we chose was launching a 20% project at Microsoft. One of the things we got hung up on was deciding what exactly constituted a valid 20% project. If the company mandates (sorry, strongly encourages) me to have a 'pet' project, then what guidelines is the project expected to follow? If my pet project isn't software related, such as a woodworking project, can it be considered a valid use of my 20% time? What if my pet project benefits a competitor, such as Mark Pilgrim's Butler which adds links to competing services to Google pages?

I personally haven't needed a formal 20% program while at Microsoft. I've written lots of articles and worked on RSS Bandit while still being able to deliver on my day job. Having flex time and reasonable autonomy from management is all I've needed.

My suspicion is that 20% pet projects are a cheap way for Google to do new product development and prototyping in a lightweight manner, as opposed to being a way to encourage people to spend time working on their hobbies. This is a 'code first' approach to launching new products. Developers actually write cool apps and then someone decides whether there is a market or business model for it or not. This is in contrast to the 'VP first' or 'talk first' approach of new product development at places like Microsoft, where new projects need to be pitched to a wide array of middle and upper management folks to get the green light, unless the person creating the project is in upper management. At the end of the day, with either approach one has to pitch the product to some management person to actually ship it, but at least with Google's "code first" approach there already is a working app, not just a dollar and a dream.


Categories: Life in the B0rg Cube

MSN services such as MSN Spaces and MSN Messenger require one to create a Passport account to use them. Passport requires the use of an email address, which is used as the login name of the user. However, as time progresses people often have to change email addresses for one reason or another. On such occasions I've seen a couple of requests internally asking for the ability to change the Passport account an MSN Spaces or MSN Messenger account is associated with.

Doing this is actually quite straightforward in the general case. All one has to do is go to the Passport website and click on the 'View or edit your profile' link, which should have an option for changing the email address associated with the Passport. And that's it.

Thanks to recent changes made by some of the devs on our team, the renaming process now works seamlessly in MSN Messenger. You don't have to import your existing contact list to the new account, nor do your contacts have to add the new email address to their contact list. Instead the change to your email address propagates across the MSN network in about an hour or so.

There are some caveats. The first is that renaming a Passport to a Hotmail account isn't supported at the current time. Another is that you may be prevented from changing your display name from the new email address in MSN Messenger for some time. This means that your friends will see you in their contact list as (if that is your email address).

The above information also applies if you've been asked to change your MSN Messenger email address because your employer is deploying Microsoft Live Communications Server 2005 with Public IM Connectivity (PIC).  


Categories: MSN

Recently I was chatting with Steve Rider on the team about the various gotchas awaiting them as they continue to improve their RSS aggregator. I mentioned issues like feeds which don't use titles, like Dave Winer's, and HTML showing up in titles.

I thought I knew all the major RSS gotchas and RSS Bandit handled them pretty well. However I recently got two separate bug reports from users of WordPress about RSS Bandit's inability to handle extension elements in their feed. The first complaint was about Kevin Devin's RSS feed which couldn't be read at all. A similar complaint was made by Jason Bock who was helpful enough to debug the problem himself and provide an answer in his post RSS Bandit problem fixed where he wrote

I can't believe what it took to fix my feed such that Rss Bandit could process it.

I'm just dumbfounded.

Basically, this is the way it looked:

<rss xmlns:blog="urn:blog">
   <blog:info directory="JB\Blog" />
   <!-- Channel stuff goes here... -->
</rss>


This is what I did to fix it:

<rss xmlns:blog="urn:blog">
   <!-- Channel stuff goes here... -->
   <blog:info directory="JB\Blog" />
</rss>


After debugging the Rss Bandit code base, I found out what the problem was. Rss Bandit reads the file using an XmlReader. Basically, it goes through the elements sequentially, and since the next node after <rss> wasn't <channel>, it couldn't find any information in the feed, and that's what was causing the choke. Moving <blog:info> to the end of the document solved it.

The assumption I made when developing the RSS parser in RSS Bandit was that the top-level rss element would have a channel element as its first child element. I handle extension elements if they appear as children of the channel or item elements, since those seem like logical places, but I never thought anyone would apply an extension to the rss element itself. I took a look at what the RSS 2.0 specification says about where extension elements can appear, and it seems my assumption was wrong, since it states

RSS originated in 1999, and has strived to be a simple, easy to understand format, with relatively modest goals. After it became a popular format, developers wanted to extend it using modules defined in namespaces, as specified by the W3C.

RSS 2.0 adds that capability, following a simple rule. A RSS feed may contain elements not described on this page, only if those elements are defined in a namespace.

Since there is no explicit restriction on where extension elements can appear, it looks like I'll have to make changes so the parser can expect extension elements anywhere in the feed.
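RSS Bandit itself is written in C#, but the shape of the fix can be sketched in Python with the standard library's ElementTree: instead of assuming channel is the first child of rss, walk all the children and skip over namespace-qualified extension elements (the sample feed below mimics Jason's):

```python
import xml.etree.ElementTree as ET

def find_channel(rss_xml):
    """Return the <channel> element wherever it appears among the
    children of <rss>, skipping any extension elements."""
    root = ET.fromstring(rss_xml)
    for child in root:
        # Extension elements carry a namespace (e.g. {urn:blog}info);
        # core RSS 2.0 elements like <channel> are unqualified.
        if child.tag == "channel":
            return child
    return None

feed = """<rss version="2.0" xmlns:blog="urn:blog">
   <blog:info directory="JB\\Blog" />
   <channel><title>Example</title></channel>
</rss>"""

print(find_channel(feed).findtext("title"))  # prints: Example
```

The same loop works whether blog:info appears before or after channel, which is exactly the flexibility the spec allows.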

My apologies to the folks who've had problems reading feeds because of this oversight on my part. I'll fix the issue today and refresh the installer later this week.



I've seen a number of responses to my recent post, SOA, AJAX and REST: The Software Industry Devolves into the Fashion Industry, about the rebranding of existing techniques for building websites which used to be called DHTML or Remote Scripting to AJAX. I've found the most interesting responses to be the ones that point out why this technique isn't seeing mainstream usage in designing websites.

Scott Isaacs has a post entitled AJAX or as I like to call it, DHTML where he writes

As Adam Bosworth explained on Dare's blog, we built Ajax-like web applications back in 1997 during and after shipping IE4 (I drove the design of DHTML and worked closely with the W3C during this period).  At that time xmlhttp (or xml for that matter) did not exist. Instead, structured data was passed back and forth encapsulated in Javascript via hidden IFrames.  This experimental work helped prove the need for a structured, standardized approach for managing and transporting data.

Over the years, quite a few web applications have been built using similar approaches - many are Intranet applications and one of the best was Outlook Web Access. However, very few main-stream web-sites appeared. I believe this was due to a number of factors - the largest being that the typical web user-experience falls apart when you start building web-based applications.  The user-interface issues revolve mostly around history and refresh. (For example - In GMail, navigate around your inbox and either refresh the page or use the back button. In Spaces (IE Users), expand and view comments and hit the back button.  I am willing to wager that what happens is not what you really want).  The problem lies in we are building rich applications in an immature application environment. We are essentially attempting to morph the current state of the browser from a dynamic form package to a rich application platform.

I have noticed these problems in various Javascript based websites and it definitely is true that complex Javascript navigation is incompatible with the traditional navigation paradigm of the Web browser. Shelley Powers looks at the problems with Javascript based websites from another angle in her post The Importance of Degrading Gracefully where she writes

Compatibility issues aside, other problems started to pop up in regards to DHTML. Screen readers for the blind disabled JavaScript, and still do as far as I know (I haven’t tried a screen reader lately). In addition, security problems, as well as pop-up ads, have forced many people to turn off JavaScript–and keep it off.

(Search engines also have problems with DHTML based linking systems.)

The end result of all these issues–compatibility, accessibility, and security–is a fundamental rule of web page design: whatever technology you use to build a web page has to degrade, gracefully. What does degrade, gracefully, mean? It means that a technology such as Javascript or DHTML cannot be used for critical functionality; not unless there’s an easy to access alternative.
Whatever holds for DHTML also holds for Ajax. Some of the applications that have been identified as Ajax-enabled are flickr and Google’s suite of project apps. To test how well both degrade, I turned off my JavaScript to see how they do.
Google’s gmail, on the other hand, did degrade, but did not do so gracefully. If you turn off JavaScript and access the gmail page, you’ll get a plain, rather ugly page that makes a statement that the primary gmail page requires JavaScript, either turn this on, get a JS enabled browser, or go to the plain HTML version.

Even when you’re in the plain HTML version, a prominent line at the top keeps stating how much better gmail is with a Javascript enabled browser. In short, Google’s gmail degrades, by providing a plain HTML alternative, but it didn’t do so gracefully; not unless you call rubbing the customer’s nose in their lack of JS capability “graceful”.

You don’t even get this message with Google Suggest; it just doesn’t work (but you can still use it like a regular search page). As for Google Maps? Not a chance–it is a pure DHTML page, completely dependent on JavaScript. However, Mapquest still works, and works just as well with JS as without.

(Bloglines also doesn’t degrade gracefully — the subscription is based on a JavaScript enabled tree. Wordpress, and hence Wordform, does degrade gracefully.)

If we’re going to get excited about new uses of existing technology, such as those that make up the concept of Ajax, then we should keep in mind the rule of degrading gracefully: Flickr is an example of a company that understands the principle of degrading gracefully; Google is an example of a company, that doesn’t.

Update: As Doug mentions in comments, flickr is dependent on Flash. If Flash is not installed, it does not degrade gracefully.

I do remember being surprised that I had to add "http://*" to my trusted sites list to get Google Maps to work. Of course, none of the above observations is new, but given that we are seeing a renaissance of interest in using Javascript for building websites, it seems like a good time for the arguments about the cons of this approach to make a return as well.


Categories: Technology

While at ETech I got to spend about half an hour chatting with Steve Gillmor about what he's called "the attention problem", which isn't the same thing as the attention.xml specification. The attention problem is the problem that faces every power user of XML syndication clients such as RSS Bandit or Bloglines. It is so easy to subscribe to various feeds that eventually readers get overwhelmed by the flood of information hitting their aggregator's inbox. Some have used the analogy "drinking from a firehose" to describe this phenomenon.

This problem affects me as well, which is the impetus for a number of features in the most recent release of RSS Bandit, such as newspaper views which allow one to view all the unread posts in a feed in a single pane, adding more sortable columns such as author and comment count to the list view, and skim mode ('mark all items as read on exiting a feed or category'). However, the core assumption behind all these features is that the user is reading every entry.

Ideally a user should be able to tell a client, "Here are the sites I'm interested in, here are the topics I'm interested in, and now only show me stuff I'd find interesting or important". This is the next frontier of features for RSS/ATOM aggregators and an area I plan to invest a significant amount of time in for the next version of RSS Bandit.

In my post Some Opinions on the Attention.xml Specification I faulted the attention.xml specification because it doesn't seem to solve the problems it sets out to tackle and some of the data in the format is unrealistic for applications to collect. After talking to Steve Gillmor I realize another reason I didn't like the attention.xml spec; it ignores all the hard problems and assumes they've been solved. Figuring out what data or what algorithms are useful for determining what items are relevant to a user is hard. Using said data to suggest new items to the user is hard. Coming up with an XML format for describing an arbitrary set of data that could be collected by an RSS aggregator is easy.

There are a number of different approaches I plan to explore over the next few months in various alphas of the Nightcrawler release of RSS Bandit.  My ideas have run the gamut from using Bayesian filtering to using the Technorati link cosmos feature for weighting posts [in which case I'd need batch methods which is something I briefly discussed with Kevin Marks at Etech last week]. There is also weighting by author that needs to be considered, for example I read everything written by Sam Ruby and Don Box. Another example is a topic that may be mundane (e.g. what I had for lunch) and something I'd never read if published by a stranger but would be of interest to me if posted by a close friend or family member.
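As a rough illustration of the Bayesian filtering option, here is a toy naive Bayes scorer over feed-item text. The training data, whitespace tokenization and add-one smoothing are all illustrative assumptions for the sketch, not the actual RSS Bandit design:

```python
import math
from collections import Counter

def train(items):
    """items: list of (text, was_read) pairs -> per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    for text, was_read in items:
        counts[was_read].update(text.lower().split())
    return counts

def score(counts, text):
    """Log-odds that the user would read this item
    (naive Bayes with add-one smoothing, uniform prior)."""
    vocab = set(counts[True]) | set(counts[False])
    total_read = sum(counts[True].values())
    total_skip = sum(counts[False].values())
    result = 0.0
    for word in text.lower().split():
        p_read = (counts[True][word] + 1) / (total_read + len(vocab) + 1)
        p_skip = (counts[False][word] + 1) / (total_skip + len(vocab) + 1)
        result += math.log(p_read / p_skip)
    return result

# Hypothetical reading history: items the user did and didn't read.
history = [("xml web services rest", True),
           ("celebrity gossip today", False),
           ("rest apis and http", True)]
model = train(history)
print(score(model, "new rest article") > score(model, "gossip roundup"))  # True
```

A real aggregator would combine a score like this with the per-author and per-topic weighting discussed above before deciding what to surface.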

We will definitely need a richer extensibility model so I can try out different approaches [and perhaps others can as well] before the final release. Looks like I have yet another spring and summer spent indoors hacking on RSS Bandit to look forward to. :)


I woke up this morning to find an interesting bit of Web history posted by Adam Bosworth in a comment in response to my post SOA, AJAX and REST: The Software Industry Devolves into the Fashion Industry. He wrote

I actually agree with the post and I could care less what it is called (Ajax or DHTML or ..) but I thought I'd correct a couple of historical points. We (I led the IE MSFT 4.0 team which shipped in the fall of 97) called it DHTML because we introduced the read and writable DOM so that Ecmascript could dynamically modify the page to react to fine grained user actions and asynchronous events. That really was new and inventing this and the term occured simultaneously. Scott Isaac drove this work and worked tirelessly to get it into the W3C. We (MSFT) had a sneak preview for many developers in the fall of 96 actually showing things like pages expanding and collapsing in the gmail style and Tetris running using Javascript and DHTML in the page. Before then, javascript could hide/unhide items and react to some coarse events, but that was about it. We added the XMLHTTPRequest object (Chris Lovett actually did this) in IE 5.0 when we wrote the auction demo to show how XML could be used with IE to build a new interactive client. If there was a pioneer for Ajax, I'd argue that the auction demo was it.

I am surprised by how familiar I am with some of the people he mentions. Chris Lovett is currently an architect on the XML team at Microsoft and was the person who gave me the Extreme XML column on MSDN when I was fresh out of college, in my first few months at Microsoft. Scott Isaacs is an architect on the MSN Spaces team who I've been in a few design meetings with so far. Cool.

I also see that Adam is back posting to his blog with his post Tensions on the Web. He mentions our conversation at ETech, specifically

I haven't posted for quite a while because my last posts caused unfair attacks on Google by twisting the words I'd used in my posts and attributing my posts to Google. I want to be really clear about something. The opinions I express in this Blog are my own. They have nothing to do with Google's opinions. Google only asks that I not leak information about future products. Period. But despite that, recent blog posts of mine were used to attack Google and this upset me deeply. Much to my surprise, Dare Obasanjo came up to me and told me, after some fairly vitriolic complaining from me to him about this earlier state of affairs, that he wished I'd continue to post. I thought about this over the weekend and decided that to some degree, you have to take your chances in this environment rather than just hide when you don't like the behavior and that perhaps I was being over sensitive anyway. There are too many interesting things going on right now anyway.

Adam's blog postings have been somewhat inspirational to me and part of the reason I decided to move to MSN (in fact, I'd considered leaving Microsoft). They also led to the most popular entry in my blog, Social Software is the Platform of the Future. It's good to see that he's back to sharing his ideas with us all.

Welcome back to the blogosphere, Adam.


Categories: Ramblings

Ever since the article Ajax: A New Approach to Web Applications unleashed itself on the Web, I've seen the cacophony of hype surrounding Asynchronous JavaScript + XML (aka AJAX) reach thunderous levels. The introduction to the essay should already make one wary of the article; it begins

Ajax isn't a technology. It's really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:

So AJAX is using Javascript and XML, with the "new" twist being that one communicates with a server using Microsoft's proprietary XmlHttpRequest object. AJAX joins SOA in ignominy as yet another buzzword created by renaming existing technologies, which becomes a way for some vendors to sell more products without doing anything new. I agree with Ian Hixie's rant Call an apple an apple where he wrote

Several years ago, HTML was invented, and a few years later, JavaScript (then LiveScript, later officially named ECMAScript) and the DOM were invented, and later CSS. After people had been happily using those technologies for a while, people decided to call the combination of HTML, scripting and CSS by a new name: DHTML. DHTML wasn't a new technology, it was just a new label for what people were already doing.

Several years ago, HTTP was invented, and the Web came to be. HTTP was designed so that it could be used for several related tasks, including:

  • Obtaining a representation of a resource from a remote host using that resource's identifier (GET requests).
  • Executing a procedure call on a remote host using a structured set of arguments (POST requests).
  • Uploading a resource to a remote host (PUT requests).
  • Deleting a resource from a remote host (DELETE requests).

People used this for many years, and then suddenly XML-RPC and SOAP were invented. XML-RPC and SOAP are complicated ways of executing remote procedure calls on remote hosts using a structured set of arguments, all performed over HTTP.

Of course you'll notice HTTP can already do that on its own, it didn't need a new language. Other people noticed this too, but instead of saying "hey everyone, HTTP already does all this, just use HTTP", they said, "hey everyone, you should use REST!". REST is just a name that was coined for the kind of architecture on which HTTP is based, and, on the Web, simply refers to using HTTP requests.

Several years ago, Microsoft invented XMLHttpRequest. People used it, along with JavaScript and XML. Google famously used it in some of their Web pages, for instance GMail. All was well, another day saved... then someone invented a new name for it: Ajax.
So I have a request: could people please stop making up new names for existing technologies? Just call things by their real name! If the real name is too long (the name Ajax was apparently coined because "HTTP+XML+HTML+XMLHttpRequest+JavaScript+CSS" was too long) then just mention the important bits. For example, instead of REST, just "HTTP"; instead of DHTML just "HTML and script", and instead of Ajax, "XML and script".
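Hixie's list boils down to the observation that each of those operations is just a different method on an ordinary HTTP request. A minimal Python sketch that formats the raw requests (the host and path here are made up for illustration):

```python
def build_request(method, path, host, body=""):
    """Format a raw HTTP/1.1 request string for the given method."""
    head = f"{method} {path} HTTP/1.1\r\nHost: {host}\r\n"
    if body:
        head += f"Content-Length: {len(body.encode())}\r\n"
    return head + "\r\n" + body

# The four tasks from the list above, as plain requests:
for method in ("GET", "POST", "PUT", "DELETE"):
    print(build_request(method, "/resource/42", "example.com").splitlines()[0])
```

Nothing beyond the request line changes between them, which is the point: REST, like DHTML, names an existing practice rather than a new technology.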

What I find particularly disappointing about the AJAX hype is that it has little to do with the technology and more to do with the quality of developers building apps at Google. If Google builds their next UI without the use of XML but only Javascript and HTML, will we be inundated with hype about the new JUDO approach (Javascript and Unspecified DOm methods) because it uses proprietary DOM extensions not in the W3C standard?

The software industry perplexes me. One minute people are complaining about standards compliance in various websites and browsers but the next minute Google ships websites built on proprietary Microsoft APIs and it births a new buzzword. I doubt that even the fashion industry is this fickle and inconsistent.

Postscript: I wasn't motivated to post about this topic until I saw the comments to the post Outlook Web Access should be noted as AJAX pioneer by Robert Scoble. It seems some people felt that Outlook Web Access did not live up to the spirit of AJAX. Considering that the distinguishing characteristic of the AJAX buzzword is using XmlHttpRequest, and Outlook Web Access is the reason it exists (the first version was written by the Exchange team), I find this highly disingenuous. Others have pointed this out as well, such as Robert Sayre in his post Ever Wonder Why It's Called "XMLHTTPRequest"?


Categories: XML

Last night, we put the finishing touches on an upgrade to the server-side of MSN Messenger. The maximum size of a buddy list has been increased from 150 to 300. Enjoy.

Categories: MSN

March 21, 2005
@ 12:02 PM

TDavid has a post critical of MSN Spaces where he writes:

I think it's about time for another MSN Spaces update.

I'm starting to lose interest in MSN Spaces as a blogging tool.

Some of the future features that have been hinted at like the ability to host a spaces at our own domain like blogger does need to come to fruition to revitalize my interest in this service. I purchased a domain for this very purpose months ago and they sit, collectin dust.

Not just me though, I don't hear many others talking about MSN Spaces any more either. I know the target audience isn't regular bloggers, but just people who want to have a journal, but if Microsoft really wants MSN Spaces to compete with blogger then they need to add the ability to run off their own domain and open up the template customization a lot more.

Otherwise this will become -- maybe it already is -- just a glorified diary for sharing notes, pictures and music playlists with family and friends.

I'll let Mike talk about upcoming features, site updates and the like. There is one thing I would like to address, though. If people view MSN Spaces as a place to share their words, music and images with their friends and family, then it would be fulfilling our goals for the site and I'd view it as an unbridled success. In my post entitled Why MSN Spaces and not MSN Blogs? I wrote

As to why they are called "spaces" as opposed to "blogs" it is because we believe there is more to sharing one's experiences online than your online journal. People want to share their thoughts, their favorite music, their favorite books, pictures of their loved ones and so on. It isn't just  posting your thoughts and experiences in the reverse chronological order which is what typically defines a "weblog". It's supposed to be a person's personal space online which was the original vision of personal publishing on the Web which spawned the personal homepage revolution a couple of years ago. Weblogs are the next iteration of that vision and we expect MSN Spaces to be the refinement of that iteration.

MSN Spaces is designed to allow people to create textual and visual diaries of their lives to share with others. We aren't the only ones that believe this, as is evidenced by competing services such as Yahoo! 360°, which has borrowed a leaf or two from our book. I'm curious as to what other goals a weblogging service could have?


Categories: MSN

A few days ago I wrote a post entitled Bringing Affirmative Action to Blogging where I jokingly asked whether we would need a blaggercon conference for black bloggers. Less than a week later I found out there was a blogging while black panel at the SXSW conference via Nancy White's blog. The blogged transcript of the panel is interesting.

Another thing I found interesting was the ratio of men to women at the panel, which Nancy White put at 28:80 (35%), which is quite impressive for a technology conference. This compares favorably with the O'Reilly Emerging Technology Conference, which a number of women in technology have criticized for being heavily male dominated. These posts include SXSW, why i attended and marginalized populations by Danah Boyd, why sxsw by Liz Lawley and Number 9 Number 9 Number 9 by Shelley Powers. These posts mainly point out that given that both ETech and SXSW were being held at the same time, it seems many women chose the latter over the former. Funny enough, while I did get the feeling that there were way too many white guys at ETech, even for a technology conference, I wasn't thinking "where are the women?" but instead "where are all the Indian men?". I guess that reveals something about me.

Speaking of conferences,  I did find Dave Winer's post on two-level communities to be quite interesting. Specifically

Last week there were two conferences that I didn't go to but followed through the Web. I could have gone to either of them in person, if I had been willing to pay their fees, and been willing to be in the audience or the hallways, at all times. In other words, I would have to accept my place as a second-level person, an outsider, in the presence of insiders...I remember well what it was like going to Esther's conferences in the 80s, when the insiders all had someone to eat with, and I was paying thousands of dollars for the privilege of eating by myself because I didn't know anyone.

There's also a post in a similar vein from a participant at SXSW entitled How it feels to be an outsider which goes into more detail about what it feels like to be an outsider at one of these conferences.

Personally I got a lot of value out of ETech. From a technical perspective, I got first hand proof from a diverse and knowledgeable set of folks that REST is displacing SOAP+WSDL as the technology of choice for building Web services. From a personal networking perspective I got to chat with Sam Ruby, Steve Gillmor, Anil Dash, Brad Fitzpatrick, Ben Trott, Erik Benson, Adam Bosworth (the conversation was very interesting, expect more about this in a later post), Marc Canter, Kevin Marks, Jeremy Zawodny, Nelson Minar and a bunch of other people. Then there's the fact that I got to spend time hanging out with folks from work outside of meetings and email discussions.

I find it surprising that there are people who go to conferences to attend talks and 'eat by themselves'. However, thinking about it now, there definitely is a certain clique-like feel to the entire technology conference scene, which I'm sure extends to academic and professional conferences as a whole.


Categories: Ramblings

Steve Rider, one of the great folks behind the site, has started a category on his blog devoted to it. His first post discusses some of the changes they've made to the site in the past week. He writes

As soon as I finish this post I'll be digging in my heels for the afternoon and working on OPML import support and increasing the number of headlines per feed.  Hey, what are rainy Sunday afternoons for?

Here are some of the improvements we've made since we were "discovered" a week and a half ago:

  • Full Firefox support
  • Migrated from cookie-based solution to back-end store for feeds and preferences
  • Removed the restriction on the number of feeds that can be added
  • Added ability to delete items from My Feeds and Recent Searches
  • Title of module is now hyperlinked (oops) and also gets updated if the title in the RSS feed is different
  • Show search history in correct order
  • Lots of fit and finish and minor cosmetic changes

  • Fixed a few problems with the ActiveX control that were causing bookmarks not to be imported (there are still a couple of issues affecting some users)
  • Added OPML import support
  • Increased performance when fetching from server by making more async calls

One of the features I asked Steve for was OPML import so it's good to see that it's already being added to the site. I didn't realize how fast they'd be turning around on feature requests. Looks like I should dust off my list of feature requests for online aggregators and swing by Steve's office sometime this week. Sweet.


Categories: MSN | Syndication Technology

March 20, 2005
@ 07:40 PM

This is the final release of the version formerly codenamed "Wolverine". This is the most significant release to date and has a ton of cool features. Enjoy.

Download the installer from here. Differences between v1.2.0.117 and v1.3.0.26 below.


Newspaper styles: Ability to view all unread posts in feeds and categories or all posts in a search folder in a Newspaper view. This view uses templates that are in the same format as those used by FeedDemon so one can use RSS Bandit newspaper styles in FeedDemon and vice versa. One can choose to specify a particular stylesheet for a given feed or category. For example, one could use the slashdot.fdxsl stylesheet for viewing Slashdot, headlines.fdxsl for viewing news sites in the News category and outlook-express.fdxsl for viewing blog posts.

Column Chooser: Ability to specify which columns are shown in the list view from a choice of Headline, Topic, Date, Author, Comment Count, Enclosures, Feed Title and Flag. One can choose to specify a column layout for a particular feed or category. For example, one could use the layout {Headline, Author, Topic, Date, Comment Count} for a site like Slashdot but use one like {Headline, Topic, Date} for a traditional blog that doesn't have comments.

Item Deletion: News items can be deleted by either using the [Del] key or using the context menu. Deleted items go to the "Deleted Items" special folder and can be deleted permanently by emptying the folder or restored to the original feed at a later date.

Category Properties: It is now possible to specify certain properties for all feeds within a category, such as how often they should be updated or how long posts should be kept before being removed.

Integration: Users who have accounts on the service can upload links to items of interest directly from RSS Bandit.

Skim Mode: Added option to 'Mark All Items As Read on Exiting a Feed'

Scoped Search: Searches can be restricted to specific feeds or categories. So now one can create searches like finding all unread messages that are a week old which contain 'Microsoft' in the title from the Slashdot, InfoWorld or Microsoft-Watch feed.

Search Folder Improvements: Made the following additions to the context menu for search folders; 'New Search Folder', 'Refresh Search', 'Mark All Items As Read' and 'Rename Search Folder'. Also deletion of a search folder now prompts the user to prevent accidental deletion

Subscribing to Web Search Results: Previous versions allowed users to search Web search engines from RSS Bandit, add search engines of their choice as well as specify whether the results were RSS or not. In this version, users can now subscribe to RSS search results after they are returned. By default, MSN Search and Feedster are installed as web search engines with RSS results.

UI Improvements: Tabbed browsers now use a multicolored border reminiscent of Microsoft OneNote.

Identities: One can create multiple personas with associated information (homepage, name, email address, signature, etc) for use when posting comments to weblogs that support the CommentAPI.

Proxy Exclusion List: Users can specify domains which should be accessed directly instead of via the proxy server when making HTTP requests

HTTP Cookie Support: We now support HTTP cookies when requesting feeds.

Delta Encoding in HTTP Support: We support the RFC3229+feed technique when downloading feeds as described at
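The RFC 3229+feed exchange comes down to two request headers and one new status code. A minimal sketch in Python (the helper names are mine, not RSS Bandit's, which is written in C#):

```python
# Sketch of the RFC 3229 "feed" delta-encoding handshake. A client that kept
# the ETag from its last fetch asks the server for only the new items.

def build_delta_request_headers(last_etag):
    """Headers for a conditional GET that requests just the items we lack."""
    return {
        "A-IM": "feed",              # advertise the "feed" instance manipulation
        "If-None-Match": last_etag,  # the ETag of the copy we already have
    }

def interpret_response(status_code):
    """226 IM Used: the body holds only the delta (the new items).
    200: the server ignored A-IM and sent the full feed.
    304: nothing has changed since our last fetch."""
    return {226: "delta", 200: "full", 304: "unchanged"}.get(status_code, "error")
```

Servers that don't implement RFC 3229 simply return the full feed with a 200, so a client coded this way degrades gracefully.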

Better SSL Certificate Error Handling: If there is a problem with the SSL certificate of a site RSS Bandit now provides a dialog with the error information so users can make an informed decision instead of just erroring.


* Fixed issue where a feed containing an invalid XML character fills the Feed Errors folder with repeated messages whose title begins with "Refresh feed '' failed with error:[], hexadecimal value [] is an invalid character..."

* Comments now sorted from oldest to newest

* Comments now visually differentiated from posts that link to an entry

* Fixed issue where we can't get title from feed that requires username/password in the New Feeds dialog even if they are specified

* Automatic feed detection now ignores feeds a user has already subscribed to

* Fixed issue with ObjectDisposedException sometimes thrown when notification windows pop up

* Fixed issue where if the URL for a feed is changed using the Properties dialog then all old posts are deleted

* Fixed issue where WordPress comment feeds don't show up in RSS Bandit because they use wfw:commentRSS instead of wfw:commentRss

* Clicking on a category node now shows all items from feeds in nested categories as well as child feeds instead of showing items from child feeds only

* Flagged items no longer marked as unread when placed in a flagged item folder

* Fixed issue where updating a search folder's properties leads to duplicate search folders being created

* Fixed issue where errors on loading a cached feed file prevent the feed from being updated from the Web

* Fixed issue where if RSS Bandit is not yet running and you select "Subscribe in default aggregator" within the web browser's context menu or provide URLs at the command line, it displays an empty category dropdown.

* URLs containing Cr/Lf no longer cause an error on startup due to a data format XML schema exception

* Locate Feed feature now recognizes Atom autodiscovery links on Blogspot blogs

* Fixed issue where startup position of main window not correct on multi-screen systems

* Fixed issue where we get a feed error when an item in the feed contains an empty slash:comments element


Categories: RSS Bandit

March 20, 2005
@ 05:05 PM

I went to see the movie Robots based on the positive reviews of it on Yahoo! Movies both by critics and regular Yahoo! users. The only positive thing I can say about this movie is that the animation was quite good. Everything else about the movie sucked. The songs were annoying, the characters irritating, the jokes were unfunny and the story plodded along in an uninteresting fashion.

Sitting through the movie until the end was an ordeal for me and everyone I saw it with. Avoid this movie like the plague.

Rating: ** out of *****


Categories: Movie Review

These are my notes from the Odeo -- Podcasting for Everyone session by Evan Williams.

Evan Williams was the founder of Blogger and Odeo is his new venture. Just as in his post How Odeo Happened Evan likens podcasting to audioblogging and jokingly states that he and Noah Glass invented podcasting with AudioBlogger. Of course, the audience knew he was joking and laughed accordingly. I do wonder though, how many people think that podcasting is simply audioblogging instead of realizing that the true innovation is the time shifting of digital media to the user's player of choice.

The Odeo interface has three buttons across the top; Listen, Sync and Create. Users can choose to listen to a podcast from a directory of podcasts on the site directly from the Web page. They can choose to sync podcasts from the directory down to their iPod using a Web download tool which also creates Odeo specific playlists in iTunes. 

The Odeo directory also contains podcasts that were not created on the site so they can be streamed to users. If third parties would rather not have their podcasts hosted on Odeo they can ask for them to be taken down.  

The Create feature was most interesting. The website allows users to record audio directly on the website without needing any desktop software. This functionality seems to be built with Flash. Users can also save audio or upload MP3s from their hard drive which can then be spliced into their audio recordings. However one cannot mix multiple audio tracks at once (i.e. I can't create an audio post then add in background music later, I can only append new audio).

The revenue model for the site will most likely be providing hosting and creating services that allow people to charge for access to their podcasts. There was some discussion of hosting music but Evan pointed out that there were already several music sites on the Web and they didn't want to be yet another one.

Odeo will likely be launching in a few weeks but will be invitation-only at first.


This was a late breaking session that was announced shortly after The Long Tail: Conversation with Chris Anderson and Joe Kraus. Unfortunately, I didn't take my tablet PC with me to the long tail session so I don't have any notes from it. Anyway, back to Google Code.

The Google Code session was hosted by Chris DiBona. The Google Code homepage is similar to YSDN in that it tries to put all the Google APIs under a single roof. The site consists of three main parts: information on Google APIs, links to projects Open Sourced by Google that are hosted on SourceForge, and highlighted projects created by third parties that use Google's APIs.

The projects Open Sourced by Google are primarily test tools and data structures used internally. They are hosted on SourceForge although there seemed to be some dislike for the features of the site both from Chris and members of the audience. Chris did feel that among the various Open Source project hosting sites existing today, SourceForge was the one most likely to be around in 10 years. He mentioned that Google was ready to devote some resources to helping the SourceForge team improve their service.


Categories: Technology | Trip Report

These are my notes on Introduction to Yahoo! Search Web Services session by Jeremy D. Zawodny

The Yahoo! Search web services are available on the Yahoo! Search Developer Network(YSDN) site. YSDN launched by providing web services that allow applications to interact with local, news, Web, image and video search. Web services for interacting with Y!Q contextual search was launched during ETech.

Jeremy stated that the design goal for their web services was for them to be simple and have a low barrier to entry. It was hoped that this would help foster a community and create two-way communication between the Yahoo! Search team and developers.  To help foster this communication with developers YSDN provides documentation, an SDK, a blog, mailing lists and a wiki.

Requests coming in from client applications are processed by an XML proxy which then sends the queries to the internal Yahoo! servers and returns the results to developers. The XML proxy is written in PHP and is indicative of the trend to move all new development at Yahoo! to PHP.

Some of the challenges in building YSDN were deciding which communication features to offer (wiki vs. mailing list), figuring out licensing issues, and setting quotas on method calls (currently 5,000 calls per day per IP address). In talking to Jeremy after the talk I pointed out that rate limiting by IP penalizes applications used behind a proxy server that make several requests a day, such as RSS Bandit being used by Microsoft employees at work. There was a brief discussion about alternate approaches to identifying applications, such as cookies or using a machine's MAC address, but these all seemed to have issues.
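A per-IP daily quota like the one described can be sketched in a few lines (purely illustrative; Yahoo!'s actual implementation isn't public). Note that every client behind a corporate proxy shares a single counter, which is exactly the problem with this approach:

```python
from collections import defaultdict

class DailyQuota:
    """Allow at most `limit` calls per IP address per calendar day."""

    def __init__(self, limit=5000):
        self.limit = limit
        self.counts = defaultdict(int)  # (ip, date) -> calls so far that day

    def allow(self, ip, date):
        key = (ip, date)                # the count resets when the date rolls over
        if self.counts[key] >= self.limit:
            return False                # over quota; reject this call
        self.counts[key] += 1
        return True
```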

Speaking of RSS, Jeremy mentioned that after they had implemented their Web services which returned a custom document format he realized that many people would want to be able to transform those results to RSS and subscribe to them. So he spoke to the developer responsible and he had RSS output working within 30 minutes. When asked why they didn't just use RSS as their output format instead of coming up with a new format, he responded that they didn't want to extend RSS but instead came up with their own format. Adam Bosworth mentioned afterwards that he thought that it was more disruptive to create a new format instead of reusing RSS and adding one or two extensions to meet their needs.
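A thirty-minute transform of the kind Jeremy described might look something like this (the source element names Result/Title/Url/Summary are invented for illustration; Yahoo!'s custom format isn't reproduced here):

```python
import xml.etree.ElementTree as ET

def results_to_rss(results_xml, feed_title):
    """Map a custom search-results document onto a minimal RSS 2.0 feed."""
    src = ET.fromstring(results_xml)
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = feed_title
    for result in src.findall("Result"):
        # each search hit becomes one RSS item
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = result.findtext("Title")
        ET.SubElement(item, "link").text = result.findtext("Url")
        ET.SubElement(item, "description").text = result.findtext("Summary")
    return ET.tostring(rss, encoding="unicode")
```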

Then there was the inevitable REST vs. SOAP discussion. Yahoo! picked REST for their APIs because of its simplicity and low barrier to entry for developers on any platform. Jeremy said that the tipping point for him was when he attended a previous O'Reilly conference and Jeff Barr from Amazon stated that 20% of their API traffic was from SOAP requests but they accounted for 80% of their support calls.

Jeremy ended the talk by showing some sample applications that had been built on the Yahoo! Search web services and suggesting some ideas for members of the audience to try out on their own.


Categories: Trip Report | XML Web Services

These are my notes from the "Just" Use HTTP session by Sam Ruby

The slides for this presentation are available. No summary can do proper justice to this presentation so I'd suggest viewing the slides.

Sam's talk focused on the various gotchas facing developers building applications using REST or Plain Old XML over HTTP (POX). The top issues include Unicode (both in URIs and XML), escaped HTML in XML content and QNames in XML content. A lot of these gotchas are due to specs containing inconsistencies with other specs or, in some cases, flat-out contradictions. Sam felt that spec writers need to accept that they are responsible for interop and act accordingly.

At the end of the talk Sam suggested that people doing REST/POX would probably be better off using SOAP since toolkits take care of such issues for them. I found this amusing given that the previous talk was by Nelson Minar saying the exact opposite and suggesting that some people using SOAP should probably look at REST.

The one thing I did get out of both talks is that there currently isn't any good guidance on when to use SOAP+WSDL vs. when to use REST or POX in the industry. I see that Joshua Allen has a post entitled The War is Over (WS-* vs. POX/HTTP) which is a good start but needs fleshing out. I'll probably look at putting pen to paper about this in a few months.



Categories: Trip Report | XML Web Services

These are my notes on the Building a New Web Service at Google session by Nelson Minar

Nelson Minar gave a talk about the issues he encountered while shipping the Adwords API. The slides for the talk are available online. I found this talk the most useful of the conference given that within the next year or so I'll be going through the same thing at MSN.

The purpose of the Adwords API is to enable Google customers to manage their adwords campaigns programmatically. In cases where users have large numbers of keywords, it becomes difficult to manage an ad campaign using the Web interface that Google provides. The API exposes endpoints for campaign management, traffic estimation and reporting. Users have a monthly quota on API calls which is related to the size of their campaign. There are also some complex authentication requirements since certain customers give permission to third parties to manage their ad campaigns for them. Although the API has only been available for a couple of weeks there are already developers selling tools built on the API.

The technologies used are SOAP, WSDL and SSL. The reason for using SOAP+WSDL was so that XML Web Service toolkits which perform object<->XML data binding could be used by client developers. Ideally developers would write code like

adwords = adwordsSvc.MakeProxy(...)
adwords.setMaxKeywordCPC(53843, "flowers", "$8.43")

without needing to know or understand a lick of XML. Another benefit of SOAP is that it has standard mechanisms for sending metadata (SOAP headers) and errors (SOAP faults).

The two primary ways of using SOAP are rpc/encoded and document/literal. The former treats SOAP as a protocol for transporting programming language method calls just like XML-RPC while the latter treats SOAP as a protocol for sending typed XML documents. According to Nelson, the industry had finally gotten around to figuring out how to interop using rpc/encoded due to the efforts of the SOAP Builders list only for rpc/encoded to fall out of favor and document/literal to become the new hotness.

The problem with document/literal uses of SOAP is that it encourages using the full expressivity of W3C XML Schema Definition (XSD) language. This is in direct contradiction with trying to use SOAP+WSDL for object<->XML mapping since XSD has a large number of concepts that have no analog in object oriented programming.

Languages like Python and PHP have poor to non-existent support for either WSDL or document/literal encoding in SOAP. His scorecard for various toolkits in this regard was

  • Good: .NET, Java (Axis)
  • OK: C++ (gSOAP), Perl (SOAP::Lite)
  • Poor: Python (SOAPpy, ZSI), PHP (many options)

He also gave an example that showed how even something that seems fundamentally simple, such as specifying that an integer element has no value, can cause interoperability problems in various SOAP toolkits. Consider the following choices

<foo xsi:nil="true"/>
<foo></foo>
(omitting the element entirely)
<foo>-1</foo>

The first fails in the current version of the .NET Framework since it maps ints to System.Int32, which is a value type and therefore can't be null. The second is invalid according to the rules of XSD since an integer cannot be an empty string. The third works in general. The fourth is ghetto but is the least likely to cause problems if your application is coded to treat -1 as meaning the value is non-existent.

There are a number of other issues Nelson encountered with SOAP toolkits including

  • Nested complex types cause problems
  • Polymorphic objects  cause problems
  • Optional fields cause problems
  • Overloaded methods are forbidden.
  • xsi:type can cause breakage. Favor sending untyped documents instead.
  • WS-* is all over the map.
  • Document/literal support is weak in many languages.

Then came the REST vs. SOAP part of the discussion. To begin he defined what he called 'low REST' (i.e. Plain Old XML over HTTP or POX) and 'high REST'. Low REST implies using HTTP GETs for all API accesses but remembering that GET requests should not have side effects. High REST involves using the four main HTTP verbs (GET, POST, PUT, and DELETE) to manipulate resource representations, using XML documents as message payloads, putting metadata in HTTP headers and using URLs meaningfully.
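A 'high REST' interface to something like an ad campaign resource might map the four verbs as follows (a sketch only; the URL is hypothetical and the requests are built but never sent):

```python
from urllib.request import Request

BASE = "https://api.example.com/campaigns"  # hypothetical resource collection

def crud_requests(campaign_id, payload=None):
    """Build the four verb-to-operation mappings for one resource."""
    url = "%s/%d" % (BASE, campaign_id)
    return {
        "read":   Request(url, method="GET"),                  # fetch representation
        "create": Request(BASE, data=payload, method="POST"),  # add to the collection
        "update": Request(url, data=payload, method="PUT"),    # replace the resource
        "delete": Request(url, method="DELETE"),               # remove the resource
    }
```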

Nelson also pointed out some limitations of REST from his perspective.

  • Development becomes burdensome if there is a lot of interactivity in the application (no one wants to write lots of XML parsing code)
  • PUT and DELETE are not implemented uniformly on all clients/web servers.
  • No standard application error mechanism (most REST and POX apps cook up their own XML error document format)
  • URLs have practical length limitations so one can't pass too much data in a GET
  • No WSDL, hence no data binding tools

He noted that for complex data, the XML is what really matters, which is the same whether you are using REST or SOAP's document/literal model. In addition he felt that for read-only APIs, REST was a good choice. After the talk I asked if he thought the Google search APIs should have been REST instead of SOAP and he responded that in hindsight that would have been a better decision. However he doesn't think there have been many complaints about the SOAP API for Google search. He definitely felt that there was a need for more REST tools as well as best practices.

He also mentioned things that went right including :

  • Switch to document/literal
  • Stateless design
  • Having a developer reference guide
  • Developer tokens
  • Thorough interop testing
  • Beta period
  • Batch methods (every method worked with a single item or an array, which led to a 25x speed-up for some customers in certain cases). Dealing with errors in the middle of a batch operation became problematic though. 
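The batch pattern Nelson described, with per-item statuses so a failure in the middle of a batch doesn't obscure the outcome of the other items, might be sketched like this (the method and field names are invented):

```python
def set_max_cpc(keywords, max_cpc):
    """Accept a single keyword or a list of them; one call, many operations."""
    if not isinstance(keywords, list):
        keywords = [keywords]          # normalize the single-item case
    results = []
    for kw in keywords:                # one request body would carry all of these
        if not kw:                     # stand-in for a per-item validation failure
            results.append({"keyword": kw, "status": "error"})
        else:
            results.append({"keyword": kw, "maxCpc": max_cpc, "status": "ok"})
    return results
```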

There was also a list of things that went wrong 

  • The switch to document/literal cost a lot of time
  • Lack of a common data model
  • Dates and timezones (they allowed users to specify a date to perform operations on, but since dates don't have time zones, depending on when the user sent the request the results might look like they came from the previous or next day) 
  • No gzip encoding
  • Having quotas caused customer confusion and anxiety  
  • No developer sandbox which meant developers had to test against real data
  • Using SSL meant that debugging SOAP is difficult since XML on the wire is encrypted (perhaps WS-Security is the answer but so far implementations are spotty)
  • HTTP+SSL is much slower than just HTTP.
  • Using plain text passwords in methods meant that users couldn't just cut & paste SOAP traces to the forum which led to inadvertent password leaks.

This was an awesome talk and I definitely took home some lessons which I plan to share with others at work. 


Categories: Trip Report | XML Web Services

These are my notes on the From the Labs: Google Labs session by Peter Norvig, Ph.D.

Peter started off by pointing out that since Google hires Ph.D.s into regular developer positions, they often end up treating their developers as researchers while treating their researchers as developers.

Google's goal is to organize the world's data. Google researchers aid in this goal by helping Google add more data sources to their search engines. They have grown from just searching HTML pages on the Web to searching video files and even desktop search. They are also pushing the envelope when it comes to improving user interfaces for searching such as with Google Maps.

Google Suggest, which provides autocomplete for the Google search box, was written by a Google developer (not a researcher) using his 20% time. The Google Personalized site allows users to edit a profile which is used to weight search results when displaying them to the user. The example he showed was searching for the term 'vector' and then moving the slider on the page to show more personalized results. Since his profile showed an interest in programming, results related to vector classes in C++ and Java were re-sorted to the top of the search results. I've heard Robert Scoble mention that he'd like to see search engines open up APIs that allow users to tweak search parameters in this way. I'm sure he'd like to give this a whirl. Finally he showed Google Sets which was the first project to show up on the Google Labs site. I remember trying this out when it first showed up and thinking it was magic. The coolest thing to try out is to give it three movies starring the same actor and watch it fill out the rest.


Categories: Technology | Trip Report

These are my notes on the From the Labs: Yahoo! Research Labs session by
Gary William Flake.

Yahoo! has two research sites, Yahoo! Next and Yahoo! Research Labs.  The Yahoo! Next site has links to betas of projects that will eventually become products, such as Y!Q contextual search, a Firefox version of the Yahoo! toolbar and Yahoo! movie recommendations.

The Yahoo! research team focuses on a number of core research areas including machine learning, collective intelligence, and text mining. They frequently publish papers related to these topics.

Their current primary research project is the Tech Buzz Game, which is a fantasy prediction market for high-tech products, concepts, and trends. This is in the same vein as other fantasy prediction markets such as the Hollywood Stock Exchange and the Iowa Electronic Markets. The project is being worked on in collaboration with O'Reilly Publishing. A product's buzz is a function of the volume of search queries for that term. People who consistently predict correctly win more virtual money which they can use to place bigger bets. The name for this kind of market is a dynamic pari-mutuel market.

The speaker felt Tech Buzz would revolutionize the way auctions were done. This seemed to be a very bold claim given that I'd never heard of it. Then again, it isn't like I'm an auction geek.


Categories: Technology | Trip Report

These are my notes on the From the Labs: Microsoft Research session by Richard F. Rashid, Ph.D.

Rick decided that in the spirit of ETech, he would focus on Microsoft Research projects that were unlikely to be productized in the conventional sense.

The first project he talked about was SenseCam. This could be considered by some to be the ultimate blogging tool. It records the user's experiences during the day by taking pictures, recording audio and even monitoring the temperature. It has to do some fancy tricks with internal motion detectors to determine when it is appropriate to take a picture so the picture doesn't end up blurry because the user was moving. Twenty have been built so far and clinical trials have begun to see if the SenseCam would be useful as an aid to people with severe memory loss.

The second project he discussed was the surface computing project. The core idea around surface computing is turning everyday surfaces such as tabletops or walls into interactive input and/or display devices for computers. Projectors project displays on the surface and cameras detect when objects are placed on the surface which makes the display change accordingly. One video showed a bouncing ball projected on a table which recognized physical barriers such as the human hand when they were placed on the table. Physical objects placed on the table could also become digital objects. For example, placing a magazine on the table would make the computer copy it and when the magazine was removed a projected image of it would remain. This projected image of the magazine could then be interacted with such as by rotating and magnifying the image.

Finally he discussed how Microsoft Research was working with medical researchers looking for a cure for HIV infection. The primary problem with HIV is that it constantly mutates so the immune system and drugs cannot recognize all its forms to neutralize them in the body. This is similar to the spam problem where the rules for determining whether a piece of mail is junk mail keeps changing as spammers change their tactics. Anti-spam techniques have to use a number of pattern matching heuristics to figure out whether a piece of mail is spam or not. MSR is working with AIDS/HIV researchers to see whether such techniques couldn't be used to attack HIV in the human body.


Categories: Technology | Trip Report

These are my notes on the Vertical Search and A9 session by Jeff Bezos.

The core idea behind this talk was powerful yet simple.

Jeff Bezos started off by talking about vertical search. In certain cases, specialized search engines can provide better results than generic search engines. One example is searching Google for Vioxx and performing the same search on a medical search engine such as PubMed. The former returns results that are mainly about class action lawsuits while the latter returns links to various medical publications about Vioxx. For certain users, the Google results are what they are looking for and for others the PubMed results would be considered more relevant.

Currently, A9 gives users the ability to search both generic search engines like Google as well as vertical search engines. The choice of search engines is currently small but they'd like to see users have the choice of building a search homepage that could pull results from thousands of search engines. Users should be able to add any search engine they want to their A9 page and have those results display in A9 alongside Google or Amazon search results. To facilitate this, they now support displaying search results from any search engine that can provide search results as RSS. A number of search engines already do this such as MSN Search and Feedster. There are some extensions they have made to RSS to support providing search results in RSS feeds. From where I was standing some of the extension elements I saw include startIndex, resultsPerPage and totalResults.

Amazon is calling this initiative OpenSearch.
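A consumer of such a feed would read those extension elements roughly like this (the namespace URI below is a placeholder since I didn't note the real one, and I'm using the element names as I jotted them down):

```python
import xml.etree.ElementTree as ET

# Placeholder namespace; A9's actual OpenSearch namespace may differ.
NS = {"os": "http://example.org/opensearchrss/1.0/"}

def paging_info(rss_xml):
    """Pull the search paging metadata out of an extended RSS channel."""
    channel = ET.fromstring(rss_xml).find("channel")
    return {
        "totalResults":   int(channel.findtext("os:totalResults", namespaces=NS)),
        "startIndex":     int(channel.findtext("os:startIndex", namespaces=NS)),
        "resultsPerPage": int(channel.findtext("os:resultsPerPage", namespaces=NS)),
    }
```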

I was totally blown away by this talk when I attended it yesterday. This technology has lots of potential especially since it doesn't seem tied to Amazon in any way so MSN, Yahoo or Google could implement it as well. However there are a number of practical issues to consider. Most search engines make money from ads on their site so creating a mechanism where other sites can repurpose their results would run counter to their business model especially if this was being done by a commercial interest like Amazon.


Categories: Technology | Trip Report

These are my notes on the Remixing Technology at Applied Minds session by W. Daniel Hillis.

This was a presentation by one of the key folks at Applied Minds. It seems they dabble in everything from software to robots. There was an initial demo showing a small crawling robot where he explained that they discovered that six legged robots were more stable than those with four legs. Since this wasn't about software I lost interest for several minutes but did hear the audience clap once or twice for the gadgets he showed.

Towards the end the speaker started talking about an open marketplace of ideas. The specific scenario he described was the ability to pull up a map and have people's opinions of various places show up overlaid on the map. Given that people are already providing these opinions on the Web today for free, there is no need to go through some licensed database of reviews to get this information. The ability to harness the collective consciousness of the World Wide Web in this manner was the promise of the Semantic Web, which the speaker felt was going to be delivered. His talk reminded me a lot of the Committee of Gossips vision of the Semantic Web that Joshua Allen continues to evangelize.

It seems lots of smart people are getting the same ideas about what the Semantic Web should be. Unfortunately, they'll probably have to route around the W3C crowd if they ever want to realize this vision.


Categories: Technology | Trip Report

These are my notes on the The App is the API: Building and Surviving Remixable Applications session by Mike Shaver. I believe I heard it announced that the original speaker couldn't make it and the person who gave the talk was a stand-in.

This was one of the 15 minute keynotes (aka high order bits). The talk was about Firefox and its extensibility model. Firefox has 3 main extensibility points: components, RDF data sources and XUL overlays.

Firefox components are similar to Microsoft's COM components. A component has a contract id, which is analogous to a GUID in the COM world. Components can be MIME type handlers, URL scheme handlers, XUL application extensions (e.g. mouse gestures) or inline plugins (similar to ActiveX). The Firefox team is championing a new plugin model similar to ActiveX, which is expected to be supported by Opera and Safari as well. User-defined components can override built-in components by claiming their contract id, a process which seemed fragile but which the speaker said has worked well so far.

Although RDF is predominantly used as a storage format by both Thunderbird and Firefox, the speaker gave the impression that this decision was a mistake. He repeatedly stated that the graph-based data model was hard for developers to wrap their minds around and that it was too complex for their needs. He also pointed out that whenever they criticized RDF, advocates of the technology [and the Semantic Web] would claim that there were future benefits to be reaped from using RDF.

XUL overlays can be used to add toolbar buttons, tree widget columns and context menus to the Firefox user interface. They can also be used to apply style changes to viewed pages. A popular XUL overlay is GreaseMonkey, which the speaker showed could be used to add features to web sites, such as persistent searches in GMail, all using client-side script. The speaker did warn that overlays which apply style changes are inherently fragile, since they depend on processing the site's HTML, which could change without warning if the site is redesigned. He also mentioned that it was unclear what the versioning model would be for such scripts once new versions of Firefox showed up.


Categories: Technology | Trip Report

These are my notes on the Web Services as a Strategy for Startups: Opening Up and Letting Go by Stewart Butterfield and Cal Henderson.

This was one of the 15 minute keynotes (aka high order bits). The talk was about Flickr and its API. I came in towards the end so I missed the meat of the talk, but it seemed the focus was showing the interesting applications people have built using the Flickr API. The speakers pointed out that having an API means cool features are being added to the site by third parties, thus increasing the value and popularity of the site.

There were some interesting statistics, such as the fact that their normal traffic over the API is 2.93 calls per second but can be up to 50-100 calls per second at its peak. They also estimate that about 5% of the website's traffic consists of calls to the Flickr API.


Categories: Trip Report | XML Web Services

These are my notes on the Build Content-centric Applications on RSS, Atom, and the Atom API session by Ben Hammersley.

This was a 3.5 hour tutorial session [which actually only lasted 2.5 hours].

At the beginning, Ben warned the audience that the Atom family of specifications is still being worked on but should begin to enter the finalization stages this month. The specs have been stable for about the last 6 months; however, anything based on work older than that (e.g. anything based on the Atom 0.3 syndication format spec) may be significantly outdated.

He indicated that there have been many versions of syndication formats named RSS, mainly due to acrimony and politics in the online syndication space. However, there are basically 3 major flavors of syndication format: RSS 2.0, RSS 1.0 and Atom.

One thing that sets Atom apart from the other formats is that a number of items that are optional in RSS 1.0 and RSS 2.0 are mandatory in Atom. For example, in RSS 2.0 an item can contain only a <description> and be considered valid, while in RSS 1.0 an item with a blank title and an rdf:about (i.e. link) can be considered valid. This is a big problem for consumers of feeds, when basic information like the date of an item isn't guaranteed to show up.

There then was a slide attempting to show when to use each syndication format. Ben contended that RSS 2.0 is good for machine-readable lists but not useful for much outside of displaying information in an aggregator; that RSS 1.0 is useful for complex data mining but not for small ad-hoc web feeds; and that Atom is the best of both worlds, a simple format yet strictly defined data.

I was skeptical of this breakdown, especially since the fact that people are using RSS 2.0 for podcasting flies in the face of his contentions about what RSS 2.0 is good for. When I later discussed this part of the talk with some members of the IE team who attended it with me, they agreed that Ben didn't present any good examples of use cases that the Atom syndication format satisfies and RSS 2.0 doesn't.

Atom has a feed document and an entry document, the latter being a new concept in syndication. Atom also has reusable syntax for generic constructs (person, link, text, etc.). At this point Marc Canter raised the point that there weren't constructs in Atom for certain popular kinds of data on the Web. Some examples Marc gave were that there are no explicit constructs to handle tags (i.e. folksonomy tags) or digital media. Ben responded that the former could be represented with category elements, while the latter could be binary payloads that are either included inline or linked from an entry in the feed.

Trying a different tack, I asked how one represents the metadata for digital content within an entry. For example, I asked about doing album reviews in Atom: how would I provide the metadata for my album review (name, title, review content, album rating) as well as the metadata for the album I was reviewing (artist, album, URL, music sample(s), etc.)? His response was that I should use RSS 1.0, since it is more oriented to resources talking about other resources.

The next part of the talk was about the Atom API, which is now called the Atom publishing protocol. He gave a brief history of weblog APIs, starting with the Blogger API and ending with the MetaWeblog API. He stated that XML-RPC is inelegant while SOAP is "horrible overkill" for solving the problem of posting to a weblog, whereas REST is elegant. The core principles of REST are using HTTP verbs like GET, PUT, POST and DELETE to manipulate representations of resources; in the case of Atom, these representations are Atom entry and feed documents. There are four main URI endpoints: the PostUri, EditUri, FeedUri, and the ResourcePostUri. In a technique reminiscent of RSD, websites that support Atom can provide pointers to the API endpoints by using <link> tags with appropriate values for the rel attribute.
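To make the verb-per-operation idea concrete, here is a sketch that builds (but does not send) requests against hypothetical endpoint URIs; the entry markup below uses the Atom 0.3 namespace and placeholder content, since the final spec is still in flux:

```python
import urllib.request

ATOM_TYPE = "application/atom+xml"

# A placeholder entry; element names follow the Atom 0.3 draft format.
ENTRY = """<entry xmlns="http://purl.org/atom/ns#">
  <title>Hello, Atom</title>
  <content type="text/plain">First post via the publishing protocol.</content>
</entry>"""

def make_request(uri, verb, body=None):
    """Build an HTTP request using the REST verb mapping: POST to the
    PostUri creates an entry, PUT to an EditUri updates it, DELETE
    removes it, GET on the FeedUri retrieves the feed."""
    data = body.encode("utf-8") if body is not None else None
    req = urllib.request.Request(uri, data=data, method=verb)
    if body is not None:
        req.add_header("Content-Type", ATOM_TYPE)
    return req

create = make_request("http://example.com/atom/posts", "POST", ENTRY)
delete = make_request("http://example.com/atom/posts/42", "DELETE")
print(create.get_method(), delete.get_method())
```

The appeal over XML-RPC is clear from the sketch: the payload is just the same entry document that appears in the feed, and the operation is carried by the HTTP verb rather than a method name.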

At the end of the talk I asked what the story was for versioning both the Atom syndication format and the publishing protocol. Ben floundered somewhat in answering, but eventually pointed to the version attribute in an Atom feed. I asked how an application would tell from the version attribute whether it had encountered a newer but backwards-compatible version of the spec, or whether the intention was that clients should only be coded against one version of Atom. His response was that I was 'looking for a technological solution to social problem' and, more importantly, that there was little chance the Atom specifications would change anyway.

Yeah, right.

During the break, Marc Canter and I talked about the fact that both the Atom syndication format and the Atom publishing protocol are simply not rich enough to support existing blogging tools, let alone future advances in blogging technologies. For example, in MSN Spaces we already have data types such as music lists and photo albums which don't fit in the traditional blog entry syndication paradigm that Atom is based upon, and it is unclear how one would even extend the format to handle them in an acceptable way. Similar issues exist with the API, which already has less functionality than existing APIs such as the MetaWeblog API. It is unclear how one would perform the basic act of querying one's blog for a list of categories to populate the drop-down list used by a rich client, a feature commonly used by such tools, let alone doing things like managing one's music list or photo album, which is what I'd eventually like us to do in MSN Spaces.

The conclusion that Marc and I drew was that just to support existing concepts in popular blogging tools, both the Atom syndication format and the Atom API would need to be extended.

There was a break, after which there was a code sample walkthrough which I zoned out on.


March 14, 2005
@ 01:57 AM

Recently I saw a post by Ed Oswald entitled Has Spaces Changed the Way You Blog? where he wrote

In the coming weeks I will be writing a commentary on the success of MSN Spaces for BetaNews.. I have made it no secret through several of my posts as well as comments to my friends that I truly think MSN has really struck gold with Spaces, and could change the way people think about blogs. Blogging before Spaces was more unidirectional -- where the author posted to an group which he likely did not know -- and were usually somewhat impersonal. However, with Spaces it's more omnidirectional -- yes, these can be your old fashioned blog -- however, through integration with MSN Messenger and the like, Spaces becomes an extension of your online self. You match it to your interests -- and people can learn more about you than a simple blog can provide. What music interests you -- photos of your recent trip to Australia -- and what not. Plus -- when you have something to say, all your friends will known in seconds with the "gleam".

Many people [especially in the mainstream media] view blogging as amateur punditry. However, the truth is that for most people blogging and related activities are about communicating their thoughts and sharing their experiences with others [mainly friends and family]. This is a key aspect of the vision behind MSN Spaces. We aren't the first service provider to design a blogging service based on this premise, LiveJournal being one of the best examples, but I believe we have done one of the best jobs so far in truly focusing on building a platform for sharing experiences with friends, family & strangers.

At ETech, I am supposed to demo how the integration of MSN Messenger, MSN Spaces and Hotmail improves our users' ability to communicate with their friends and family beyond what each product provides in isolation. It is clear from posts such as Ed's that this provides enormous value to our users; I just hope that I end up presenting it in a way that clearly shows why what we've built is so cool.


Categories: MSN

March 13, 2005
@ 07:48 PM

This time tomorrow I'll be at the O'Reilly Emerging Technology Conference. Checking out the conference program, I saw that Evan Williams will be hosting a session entitled Odeo -- Podcasting for Everyone. I've noticed the enthusiasm around podcasting among certain bloggers and the media but I am somewhat skeptical of the vision folks like Evan Williams have espoused in posts such as How Odeo Happened.

In thinking about podcasting, it is a good thing to remember the power law and the long tail. In his post Weblogs, Power Laws and Inequality, Clay Shirky wrote

The basic shape is simple - in any system sorted by rank, the value for the Nth position will be 1/N. For whatever is being ranked -- income, links, traffic -- the value of second place will be half that of first place, and tenth place will be one-tenth of first place. (There are other, more complex formulae that make the slope more or less extreme, but they all relate to this curve.) We've seen this shape in many systems. What've we've been lacking, until recently, is a theory to go with these observed patterns.
A second counter-intuitive aspect of power laws is that most elements in a power law system are below average, because the curve is so heavily weighted towards the top performers. In Figure #1, the average number of inbound links (cumulative links divided by the number of blogs) is 31. The first blog below 31 links is 142nd on the list, meaning two-thirds of the listed blogs have a below average number of inbound links. We are so used to the evenness of the bell curve, where the median position has the average value, that the idea of two-thirds of a population being below average sounds strange. (The actual median, 217th of 433, has only 15 inbound links.)
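Shirky's claims are easy to check numerically. The sketch below builds an idealized 1/N ranking for a population the same size as the one in his figure and confirms that well over half of the elements fall below the average:

```python
def power_law(n, top_value=1000.0):
    """Idealized power-law ranking: the value at rank k is 1/k of the
    value at rank 1. The top value is an arbitrary scale factor."""
    return [top_value / k for k in range(1, n + 1)]

values = power_law(433)  # same population size as Shirky's figure
average = sum(values) / len(values)

# Second place is half of first place; tenth place is a tenth of it.
ratios = (values[0] / values[1], values[0] / values[9])

below_average = sum(1 for v in values if v < average)
fraction = below_average / len(values)
print(ratios, round(fraction, 2))
```

On this idealized curve roughly 85% of the population sits below the average, which is even more lopsided than the two-thirds Shirky measured on real inbound-link data.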

The bottom line here is that the majority of weblogs will have small to minuscule readerships. However, the focus of the media, and the generalizations made about blogging, will be on popular blogs with large readerships. But the wants and needs of popular bloggers often do not mirror those of the average blogger. There is a lot of opportunity, and room for error, when trying to figure out where to invest in features for personal publishing tools such as weblog creation tools or RSS reading software. Clay Shirky also mentioned this in his post where he wrote

Meanwhile, the long tail of weblogs with few readers will become conversational. In a world where most bloggers get below average traffic, audience size can't be the only metric for success. LiveJournal had this figured out years ago, by assuming that people would be writing for their friends, rather than some impersonal audience. Publishing an essay and having 3 random people read it is a recipe for disappointment, but publishing an account of your Saturday night and having your 3 closest friends read it feels like a conversation, especially if they follow up with their own accounts. LiveJournal has an edge on most other blogging platforms because it can keep far better track of friend and group relationships, but the rise of general blog tools like Trackback may enable this conversational mode for most blogs.

The value of weblogging to most bloggers (i.e. the millions of people using services like LiveJournal, MSN Spaces and Blogger) is that it allows them to share their experiences with friends, family & strangers on the Web, and it reduces the friction of getting content on the Web compared to managing a personal homepage, which was the state of the art in personal publishing on the Web last decade. In addition, there are the readers of weblogs to consider. The existence of RSS syndication and aggregators such as RSS Bandit & Bloglines has made it easy for people to read multiple weblogs. According to Bloglines, their average user reads just over 20 feeds.

Before going into my list of issues with podcasting, I will point out that I think the current definition of podcasting, which restricts it to subscribing to feeds of audio files, is fairly limiting. One could just as easily subscribe to other digital content, such as video files, using RSS. To me podcasting is about time-shifting digital content, not just audio files.

With this setup out of the way, here are the top three reasons I am not as enthusiastic about podcasting as folks like Evan Williams:

  1. Creating digital content and getting it on the Web isn't easy enough: The lowest-friction way I've seen thus far for personal publishing of audio content on the Web is the phone posting feature of LiveJournal, but it is still a suboptimal solution. It gets worse when one considers how to create and share richer digital content such as videos. I suspect mobile phones will have a big part to play in podcast creation if it becomes mainstream. On the other hand, sharing your words with the world doesn't get much easier than using the average blogging tool.
  2. Viewing digital content is more time consuming than reading text content: I believe it takes the average person less time to read an average blog posting than to listen to an average audio podcast. This automatically reduces the size of the podcast market compared to plain old text blogging. As mentioned earlier, the average Bloglines user subscribes to 20 feeds. Over the past two years, I've gone from subscribing to about 20 feeds to around 160. However, it would be impossible for me to find the time to listen to 20 podcast feeds a week, let alone scale up to 160.
  3. Digital content tends to be opaque and lack metadata: Another problem with podcasting is that there are no explicit or implicit metadata standards forming around syndicating digital media content. The fact that an RSS feed is structured data that provides a title, author name, blog name, a permalink and so on allows one to build rich applications for processing RSS feeds, both globally like Technorati & Feedster and locally like RSS Bandit. As long as digital media content is just an opaque blob of data hanging off an item in a feed, the ecosystem of tools for processing and consuming it will remain limited.
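To make the opacity point in item 3 concrete, consider what a generic tool can learn about a podcast episode from a standard RSS enclosure (the item below is made up): just a URL, a MIME type and a byte count.

```python
import xml.etree.ElementTree as ET

# A made-up podcast item using the standard RSS 2.0 enclosure element.
ITEM = """<item>
  <title>Show #42</title>
  <enclosure url="http://example.com/show42.mp3"
             length="10485760" type="audio/mpeg"/>
</item>"""

enclosure = ET.fromstring(ITEM).find("enclosure")

# This is everything the feed tells us about the audio itself; the
# artist, the guests, chapter marks and any transcript are all locked
# inside the opaque MP3 blob.
metadata = {
    "url": enclosure.get("url"),
    "type": enclosure.get("type"),
    "bytes": int(enclosure.get("length")),
}
print(metadata)
```

Compare that to the title, author, date and permalink a plain text entry carries, and it is clear why search and filtering tools have so much less to work with for media feeds.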

This is not to say that podcasting won't go a long way toward making it easier for popular publishers to syndicate media content to users. It will; however, it will not be the revolution in personal publishing that the combination of RSS and weblogging has been.

I'll need to remember to bring some of these up during Evan Williams' talk. I'm sure he'll have some interesting answers.


Charlene Li has a post entitled Bloghercon conference proposed where she writes

Quick – name me five woman bloggers. You probably came up with Wonkette, and if you’re reading this post, you’ve got me on your list. Can you come up with three more?

This is why Lisa Stone’s suggestion to develop Bloghercon is such a great idea. (Elisa Camahort has a follow-up post with more details here .)

It’s not that there are no women bloggers out there – it’s that we haven’t built up a network comparable to the “blog-boy’s club” that dominates the Technorati 100 . This is not to presume that there’s a conspiracy – just the reality that for a number of reasons, woman bloggers have had difficulty gaining visibility.


Interestingly enough, I actually counted 10 women bloggers I know off the top of my head, without needing to count Charlene or knowing who this Wonkette person is. My list was Shelley Powers, Julia Lerman, Liz Lawley, Danah Boyd, Rebecca Dias, KC Lemson, Anita Rowland, Megan Anderson, Eve Maler and Lauren Wood. As I finished the list lots more came to mind; in fact, I probably could have hit ten just counting women at MSN I know who blog, but that would have been too easy.


I am constantly surprised that the people who read the closed circle of white-male-dominated blogs commonly called the A-list think that it somehow constitutes the entire blogosphere (I do dislike that word) or even a significant part of it.


I wonder when the NAACP or Jesse Jackson are going to get in on the act and hold a blaggercon conference for black bloggers. Speaking of which, it's my turn to ask "Quick – name me five black bloggers". Post your answers in the comments.


Categories: Ramblings

A bunch of folks at work have been prototyping a server-side RSS reader at This isn't a final product but instead is intended to show people some of the ideas we at MSN are exploring around providing a rich experience around Web-based RSS/Atom aggregation.  

The Read/Write Web blog has a post entitled Microsoft's Web-based RSS Aggregator? which has a number of screenshots showing the functionality of the site. The site has been around for a few weeks and I'm pretty surprised it took this long to get widespread attention.

We definitely would love to get feedback from folks about the site. I'm personally interested in where people would like to see this sort of functionality integrated into the existing MSN family of sites and products, if at all.

PS: You may also want to check out to test drive a prototype of a Web-based bookmarks manager.


Categories: MSN

March 8, 2005
@ 03:39 PM

A couple of days ago I was contacted about writing the foreword for the book Beginning RSS and Atom Programming by Danny Ayers and Andrew Watt. After reading a few chapters I agreed to do so.

When I started writing I wasn't familiar with the format of the typical foreword for a technical book. Looking through my library I ended up with two forewords that gave me some idea of how to proceed. They were Michael Rys's introduction to XQuery: The XML Query Language by Michael Brundage and Jim Miller's introduction to Essential .NET, Volume I: The Common Language Runtime by Don Box. I suspect I selected them because I've worked directly or indirectly with both authors and the folks who wrote the forewords to their books, so I felt familiar with both the subjects and the people involved.

From the various forewords I read it seemed the goal of a foreword is twofold

  1. Explain why the subject matter is important/relevant to the reader
  2. Explain why the author(s) should be considered an authority in this subject area

I believe I achieved both of these goals with the foreword I wrote. The book is definitely a good attempt to distill the important things a programmer should consider when deciding to work with XML syndication formats.

Even though I have written an academic paper, magazine articles and conference presentations, this was a new experience. I keep getting closer and closer to the process of writing a book. Too bad I never will, though.


Categories: Ramblings

Shelley Powers has written an amusing post about the Google AutoLink saga entitled Guys Dont Link which like all good satire is funny because it is true. Usually I'd provide an excerpt of the linked post but this post has to be read in its entirety to get the full effect.



The Wolverine release of RSS Bandit has entered its final stretch; the bug count is under 15 from a high of over 50, the codebase is frozen except for critical fixes, and translations have started to trickle in. It is looking like the final version number for Wolverine will be v1.3.0.25 but don't quote me on that just yet.

Torsten and I have started talking about what we'd like to see in the following release, currently codenamed Nightcrawler. Over the next few weeks I'll be sharing some of our thoughts on where we'd like to see RSS Bandit go and eliciting feedback from our users. The first topic I have in mind is building a richer extensibility model. Torsten and Phil have discussed this issue in their blogs in the posts Fighting Ads and  Building a Better Extensibility Model For RSS Bandit respectively. As Phil wrote

Currently, the only plug-in model supported by RSS Bandit is the IBlogExtension interface. This is a very limited interface that allows developers to write a plug-in that adds a menu option to allow the user to manipulate a single feed item.

The ability to interact with the application from such a plug-in is very limited as the interface doesn't define an interface to the application other than a handle. (For info on how to write an IBlogExtension plug-in, see this article.)

Despite the limitations of IBlogExtension, it has led to some interesting plugins such as my plugin for posting links to from RSS Bandit. This was actually a feature request which I fulfilled without the user having to wait for the next version of RSS Bandit, and I'd like to be able to fulfill more complex feature requests the same way. Given the small number of IBlogExtension plugins I've seen come from our user base, it is quite likely that a richer plugin model would mostly end up being used by Phil, Torsten and myself. However, I'd still like feedback from our users about where they'd like to see more extensibility. Below are some of the extensibility points we've discussed, along with usage scenarios and possible risks. I'd like to know which ones our users are interested in and would consider writing plugins with.

Feed Preprocessing

Plugins will be able to process RSS items just after they have been downloaded but before they are stored in RSS Bandit's in-memory and on-disk caches. This is basically what Torsten describes in his post Fighting Ads.

Use Case/Scenario: A user can write a plugin that assigns scores to news items according to the user's interests (e.g. a Bayesian filter) then annotates each news item with its score. For example, for me posts containing 'XML' or 'MSN Spaces' would be assigned a score of 5 while every other post could be assigned a score of 3. Then I could create a newspaper style that either grouped posts by their ranking or even filtered out posts that didn't have a certain score.
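RSS Bandit's actual plugin interfaces are .NET, so purely as an illustration of the scoring idea, here is a sketch in Python with made-up names and a trivial keyword scorer standing in for a real Bayesian filter:

```python
KEYWORD_SCORES = {"XML": 5, "MSN Spaces": 5}
DEFAULT_SCORE = 3

def preprocess(items):
    """Hypothetical feed-preprocessing hook: annotate each freshly
    downloaded item with a score before it reaches the cache, so a
    newspaper style can later group or filter items by score."""
    for item in items:
        text = item["title"] + " " + item["description"]
        matches = [score for kw, score in KEYWORD_SCORES.items() if kw in text]
        item["score"] = max(matches, default=DEFAULT_SCORE)
    return items

scored = preprocess([
    {"title": "XML 1.1 ships", "description": "a new spec revision"},
    {"title": "Lunch", "description": "had a sandwich"},
])
print([item["score"] for item in scored])
```

The key design point is that the hook runs between download and caching, so everything downstream (newspaper styles, search, filters) sees the annotated items for free.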

Another potential use case is pre-processing each news item to filter out ads as Torsten did in his example.  

Risks: My main concern with this approach is that badly written plugins could cause problems with the normal functioning of RSS Bandit. For example, if a plugin got stuck in an infinite loop it could hang the entire application, since we'd never get news items back from the pre-processing step. Given that detecting this is an instance of the halting problem, I know we can't solve it in a general way, so I may just have to accept the risks.

Pluggable protocols

Every once in a while, users ask for RSS Bandit to support other data formats and protocols than just RSS & Atom over HTTP. For example, I'd like us to support USENET newsgroups while I've seen a couple of requests that we should be able to support subscribing to POP3 mail boxes.

Ideally here we'd have a plugin infrastructure that allowed one to plug in both the parser and the protocol handler for a given format. The plugin would also specify the URI scheme used by the newly supported format so that RSS Bandit would know to dispatch requests using that plugin.

Use Case/Scenario: In the case of USENET support the user would provide a plugin that knew how to parse messages in the RFC 822 format and how to fetch messages using NNTP. The USENET plugin would also register itself as the handler for the nntp and news URI schemes. The user could then subscribe to newsgroups by specifying a URL such as news:// in the new feed dialog.  
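A minimal sketch of the scheme-based dispatch described above; every name here is hypothetical rather than RSS Bandit's real API:

```python
from urllib.parse import urlparse

HANDLERS = {}

def register(schemes, handler):
    """A plugin registers itself for the URI schemes it supports."""
    for scheme in schemes:
        HANDLERS[scheme] = handler

def subscribe(url):
    """Dispatch a new subscription to whichever plugin claimed the
    URL's scheme, as the new-feed dialog would."""
    scheme = urlparse(url).scheme
    if scheme not in HANDLERS:
        raise ValueError(f"no plugin registered for scheme {scheme!r}")
    return HANDLERS[scheme](url)

# Stand-in for a USENET plugin that would speak NNTP and parse RFC 822.
def usenet_plugin(url):
    return f"subscribed via NNTP: {url}"

register(["nntp", "news"], usenet_plugin)
print(subscribe("news://example.com/comp.lang.python"))
```

Unregistered schemes fall through to an error, which is where the default RSS/Atom-over-HTTP handling would sit in the real application.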

Risks: Same as with Feed Preprocessing.

Hosted Winforms Applications

A user could add a .NET Winforms application to RSS Bandit. This application would appear as a tab within the main window and all its functionality could be used from RSS Bandit. There would also be some hooks for the application to register itself within the RSS Bandit main menu, as well as mechanisms to pass information back and forth between the hosted application and RSS Bandit.

Use Case/Scenario: One would be able to host blog posting clients such as IMHO in RSS Bandit. The blog client would be distributed and updated independently of RSS Bandit.

Risks: This would be a great deal of work for questionable pay off.

Pluggable Storage

RSS Bandit currently caches feeds as RSS files on disk, specifically at the location C:\Documents and Settings\[username]\Application Data\RssBandit\Cache. It should be possible to specify other data storage sources and formats such as a relational database or Atom feeds.

Use Case/Scenario: A user can write a plugin that stores all RSS Bandit's feeds in a local Access database so that data mining can be done on the data.

Another use case is writing them to disk but in a different format than RSS. For example, one could write them to disk using the format used by another application so that the user could use both applications but have them share a single feed cache.
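The storage extensibility point boils down to an interface like the one sketched below; the names are hypothetical, and a real plugin might target an Access database or another aggregator's on-disk format instead of the in-memory dictionary used here:

```python
import json

class FeedStore:
    """Minimal storage-backend interface a pluggable cache might expose."""
    def save(self, feed_url, items):
        raise NotImplementedError

    def load(self, feed_url):
        raise NotImplementedError

class MemoryStore(FeedStore):
    """Toy backend keeping serialized feeds in a dictionary; swapping
    in a database-backed class would change nothing for the caller."""
    def __init__(self):
        self._feeds = {}

    def save(self, feed_url, items):
        self._feeds[feed_url] = json.dumps(items)

    def load(self, feed_url):
        return json.loads(self._feeds[feed_url])

store = MemoryStore()
store.save("http://example.com/feed.xml", [{"title": "hello"}])
print(store.load("http://example.com/feed.xml"))
```

As long as the rest of the application only talks to the interface, the cache location and format stop being hard-coded decisions.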

Risks: The same as with Feed Preprocessing.


Categories: RSS Bandit

March 5, 2005
@ 04:49 PM

Yesterday, in a meeting to hash out some of the details of the MSN Spaces API, an interesting question came up. So far I've been focused on the technical details of the API (what methods should we have? what protocol should it use? etc.) as well as the scheduling impact, but completely overlooked a key aspect of building a developer platform: I hadn't really started thinking about how we planned to support developers using our API. Will we have a website? A mailing list? Or a newsgroup? How will people file bugs? Do we expect them to navigate to and use the feedback form?

Besides supporting developers, we will need a site to spread awareness of the existence of the API. After noticing the difference in the media response to the ability to get search results as RSS feeds from MSN Search versus the announcement of the Yahoo! Search Developer Network, it is clear to me that simply having great functionality and blogging about it isn't enough. To me, getting MSN Search results as RSS feeds has at least two things going for it over Yahoo's approach. The first is that developers don't have to register with MSN, as they have to with Yahoo! in order to get application IDs. The second is that since the search results are an RSS feed, they not only can be consumed programmatically but can also be consumed by regular users with off-the-shelf RSS readers. However, I saw more buzz about YSDN than about the MSN Search feeds from various corners. I suspect that the lack of "oomph" in the announcement is the cause of this occurrence.

Anyway, getting back to how we should support developers who want to use the MSN Spaces APIs, I'd be very interested to hear from developers of blogging tools as to what they'd like to see us do here.

Update: Jeroen van den Bos reminds me that MSN Search RSS feeds are only licensed for personal use. I need to ping the MSN Search folks about that.

Update: Mike Torres points out that both Yahoo (see the YSDN FAQ) and Google (see the Google API FAQ) have similar restrictions in their terms of use. It would be good to see MSN leading the way here. We've already gone one step better by not requiring developers to register and get application IDs; we should be able to loosen the terms of use as well.


Categories: MSN

In his post Maybe a better posting api is needed  James Robertson writes

I've had harsh words to say about Atom in the past, but that was mostly over the feed format. I haven't looked at the posting API yet - maybe I should. The Blogger API and the MetaqWebLog API are simply nightmares. There doesn't seem to be any standard way for client tools to interact with a server - I was debugging the interaction between a client and my server last night via IRC. Even better - the client was set to use the MetaWebLog api, but was sending requests to blogger.apiNameHere names. Sheesh. There was also an interesting difference in api points - I had implemented 'getUserBlogs', and the client was sending 'getUsersBlogs'. A quick Google search turned up references to both. Sigh.

I implemented both names, pointing to the same method. I had to map blogger names over to MetaWebLog entry points, at least for the tool being tested last night - who knows what oddness will turn up next. What a complete mess...

I've been similarly stunned by the complete and utter mess that is the state of weblogging APIs. As I mentioned in my post What Blog Posting APIs are supported by MSN Spaces? one of my duties at work is to investigate the options and design the blogging API story for MSN Spaces. In doing this, I have discovered all the issues James Robertson brought up and more. Mark Pilgrim has an ApacheCon presentation entitled The Atom API which highlights some of the various issues. One of the lowlights from his presentation is the fact that the MetaWeblog API spec significantly contradicts itself: it states that the data model of the structs passed between client and server is based on RSS 2.0, then includes examples of requests and responses that show it clearly isn't.

My personal favorite bit of information that can only be discovered by trial and error is the existence of the blogger.deletePost method, which isn't listed in the Blogger API documentation but is supported by a number of blog posting clients and weblog servers.
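For the curious, invoking the undocumented method looks like any other XML-RPC call. The endpoint URL below is hypothetical, and the five-argument shape (appkey, postid, username, password, publish) is the de facto convention most implementations seem to accept rather than anything from a published spec.

```python
import xmlrpc.client

# Hypothetical endpoint; substitute your server's XML-RPC URL.
proxy = xmlrpc.client.ServerProxy("http://example.com/xmlrpc")

# The actual call is left commented since it needs a live server:
# proxy.blogger.deletePost("0123456789ABCDEF",  # appkey (often ignored)
#                          "42",                # post ID to delete
#                          "username", "password",
#                          True)                # "publish" flag
```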

I can't believe that anyone who wants to write a client or server that uses the standard weblogging APIs has to go through this crap. It almost makes me want to go join in the atom-protocol discussions. Almost.


March 2, 2005
@ 12:40 PM

In the post Another totally obvious factoid Dave Winer writes

Okay, we don't know for a fact that Google is working on an operating system, but the tea leaves are pretty damned clear. Why else would they have hired Microsoft's operating system architect, Mark Lucovsky? Surely not to write a spreadsheet or word processor.

Considering that after working on operating systems Mark Lucovsky went on to become the central mind behind Hailstorm, it isn't clear cut that Google is interested in him for his operating systems knowledge. It will be interesting to see if, after pulling off what Microsoft couldn't by getting the public to accept AutoLink when it had rejected SmartTags, Google will also get the public to accept its version of Hailstorm.

I can already hear the arguments about how Google's Hailstorm would be just like a beloved butler whose job it was to keep an eye on your wallet, Rolodex and schedule so you don't have to worry about them. Positive perception in the marketplace is a wonderful thing.


Categories: Technology

In the article entitled Wonder Why MSN Didn't Think of This?  Mary Jo Foley writes

Or, maybe it has but just has yet to announce it…. On Monday, AOL announced a beta of AIM Sync, a tool that effectively turns Microsoft's Outlook e-mail client into a massive AOL Instant Messaging buddy list.

The implication here is that similar integration of instant messaging presence information does not exist between Outlook and MSN Messenger. This is actually incorrect. This feature exists today in Outlook 2003 and is called the Person Names Smart Tag. Below is a screenshot of my email inbox showing the feature in action.

[Screenshot: email inbox showing Mike Champion's online status]

In fact, this feature is the cause of a common annoyance among users of MSN Messenger and Outlook. Many people have complained that they can't close MSN Messenger while Outlook is running. This is the feature responsible for that behavior, and disabling it removes the dependency between the two applications.

It's good to see yet another of our competitors learning from our innovations.


Categories: MSN

March 1, 2005
@ 04:01 PM

The past two months were hectic for me at work, but things have started to calm down. It's been great learning about the MSN communications infrastructure and working on the design for our next generation of communications services for end users. Of course, now my extracurricular activities have begun to pile up. Below is a brief list of things I plan to begin and/or complete before the end of the month

I'm sure there is something I've forgotten from this list. Anyway, I am pretty excited about everything on the above list and even more so about work, especially since most of the extracurricular stuff is related to my day job. I guess that turns the Sex & Cash Theory on its head.


Categories: Ramblings