March 31, 2005
@ 03:45 PM

Bloglines just published a new press release entitled Bloglines is First to Go Beyond the Blog with Unique-to-Me Info Updates, which is excerpted below.

Oakland, CA -- March 30, 2005 -- Ask Jeeves®, Inc. (Nasdaq: ASKJ), today announced that Bloglines™ (www.bloglines.com), the world’s most popular free online service for searching, subscribing, publishing and sharing news feeds, blogs and rich web content, has released the first of a wave of new capabilities that help consumers monitor customized kinds of dynamic web information. With these new capabilities, Bloglines is the first web service to move beyond aggregating general-audience blogs and RSS news feeds to enable individuals to receive updates that are personal to their daily lives.

Starting today, people can track the shipping progress of package deliveries from some of the world’s largest parcel shipping companies—FedEx, UPS, and the United States Postal Service—within their Bloglines MyFeeds page. Package tracking in Bloglines encompasses international shipments, in English. Bloglines readers can look forward to collecting more kinds of unique-to-me information on Bloglines in the near future, such as neighborhood weather updates and stock portfolio tracking.

“Bloglines is a Universal Inbox that captures all kinds of dynamic information that helps busy individuals be more productive throughout the day—at the office, at school, or on the go,” said Mark Fletcher, vice president and general manager of Bloglines at Ask Jeeves. “With an index of more than 370 million blog and news feed articles in seven languages, we’re already one of the largest wells of dynamic web information. With unique-to-me news updates we’re aiming to be the most comprehensive and useful personalized information resource on the web.”

So it looks like Bloglines is evolving into MyYahoo! or MyMSN, which already provide a way to get customized personal information, from local news and weather reports to RSS feeds and email inboxes.

I've been pitching the concept of the digital information hub to folks at work, but I think the term 'universal inbox' is a more attractive one. As a user spends more and more time in front of an information consumption tool, be it an email reader, RSS reader, or online portal, the more data sources the user wants the tool to support. Online portals are now supporting RSS. Web-based RSS readers are now supporting content that would traditionally show up in a personalized view at an online portal.

At MSN, specifically with http://www.start.com/2/, we are exploring what would happen if you completely blurred the lines between a web-based RSS reader and the traditional personalized dashboard provided by an online portal. It is inevitable that both mechanisms of consuming information online will eventually be merged in some way. I suspect the result will look more like what Steve Rider's team is building than MyYahoo! or Bloglines do today.

As I mentioned before, we'd love feedback about all the stuff we are doing at start.com. Don't be shy; send us your feedback.


 

Categories: MSN | Syndication Technology

I recently read Joe Beda's post on Google 20% Time and it actually left me with a lot of questions as well as a better understanding of why there is a steady flow of ex-B0rg to Google's Kirkland office. I found some of his points interesting, specifically

Switching teams at Google is a very fluid process.  An engineer can be 40% on one project, 40% on something completely different and 20% on his or her own thing.  That mix can be adjusted as project requirements change.  Switching groups should also not have an effect on your annual review score because of arbitrary team politics.   Joining a new group is more about finding a good mutual fit than going through HR and a formal interview loop.  When there is an important project that needs to be staffed the groups and execs will evangelize that need and someone who is interested is bound to step up.  If it is critical to the business and no one is stepping up (I don't know if this has occurred or not) then I imagine it would come down to a personal appeal by the management team
...
There is a big difference between pet projects being permitted and being encouraged. At Google it is actively encouraged for engineers to do a 20% project. This isn't a matter of doing something in your spare time, but more of actively making time for it. Heck, I don't have a good 20% project yet and I need one. If I don't come up with something I'm sure it could negatively impact my review.

He also mentions the openness of the Google intranet, which I find less interesting because I expect that as the company grows it will have to start dealing with the kind of leaks that Apple and Microsoft have. Given the company's history of tightly controlling its message (the extremely boring press-release-style posts on the Google blog, the firing of Mark Jen, etc.) I expect that once leak sites in the style of Neowin and Microsoft-Watch spring up around Google's actions, there will be tighter control over access to internal content.

The first set of questions I have are around his comments on switching teams. At Microsoft it is common knowledge that switching teams in the middle of the annual review cycle is guaranteed to screw up your score. Due to stack ranking, managers are obligated to give some people low scores, and it is always easy to tell the new guy, "Everyone's been here for a year; you were only here for a few months, so you didn't contribute as much." My first manager at Microsoft was actually pretty good about this, which I'm thankful for. This is the main reason I left the XML team without waiting for the release [or even the second beta] of version 2.0 of the .NET Framework, which I spent about two years working on. Once I had decided I was going to leave, it made the most sense to do so at the beginning of the next review cycle, independent of where we were in the ship cycle of the product. The first question swirling through my mind about Joe's post is: how does Google allow people to move fluidly between teams yet not get screwed during annual reviews?

The other thing I find interesting is his comments on 20% projects. I was recently part of a leadership training class where we were supposed to come up with a presentation as a team, and the topic we chose was launching a 20% project at Microsoft. One of the things we got hung up on was deciding what exactly constituted a valid 20% project. If the company mandates (or strongly encourages) me to have a 'pet' project, then what guidelines is the project expected to follow? If my pet project isn't software related, such as a woodworking project, can it be considered a valid use of my 20% time? What if my pet project benefits a competitor, such as Mark Pilgrim's Butler, which adds links to competing services to Google pages?

I personally haven't needed a formal 20% program while at Microsoft. I've written lots of articles and worked on RSS Bandit while still being able to deliver on my day job. Having flex time and reasonable autonomy from management is all I've needed.

My suspicion is that 20% pet projects are a cheap way for Google to do new product development and prototyping in a lightweight manner, as opposed to being a way to encourage people to spend time working on their hobbies. This is a 'code first' approach to launching new products. Developers actually write cool apps, and then someone decides whether or not there is a market or business model for them. This is in contrast to the 'VP first' or 'talk first' approach to new product development at places like Microsoft, where new projects need to be pitched to a wide array of middle and upper management folks to get the green light, unless the person creating the project is in upper management. At the end of the day, with either approach, one has to pitch the product to some management person to actually ship it, but at least with Google's 'code first' approach there already is a working app, not just a dollar and a dream.


 

Categories: Life in the B0rg Cube

MSN services such as MSN Spaces and MSN Messenger require one to create a Passport account to use them. Passport requires the use of an email address, which is used as the login name of the user. However, as time progresses people often have to change email addresses for one reason or another. On such occasions I've seen a couple of requests internally asking for the ability to change the Passport account an MSN Spaces or MSN Messenger account is associated with.

Doing this is actually quite straightforward in the general case. All one has to do is go to the Passport.net website and click on the 'View or edit your profile' link which should have an option for changing the email address associated with the Passport. And that's it.

Thanks to recent changes made by some of the devs on our team, the renaming process now works seamlessly in MSN Messenger. You don't have to import your existing contact list to the new account, nor do your contacts have to add the new email address to their contact list. Instead the change to your email address propagates across the MSN network in about an hour or so.

There are some caveats. The first is that renaming a Passport account to a Hotmail account isn't supported at the current time. Another is that you may be prevented from changing your display name from the new email address in MSN Messenger for some time. This means that your friends will see you in their contact list as yourname@example.com (if that is your email address).

The above information also applies if you've been asked to change your MSN Messenger email address because your employer is deploying Microsoft Live Communications Server 2005 with Public IM Connectivity (PIC).  


 

Categories: MSN

Recently I was chatting with Steve Rider on the Start.com team about the various gotchas awaiting them as they continue to improve the RSS aggregator at http://www.start.com/1/. I mentioned issues like feeds which don't use titles, like Dave Winer's, and HTML showing up in titles.

I thought I knew all the major RSS gotchas and RSS Bandit handled them pretty well. However I recently got two separate bug reports from users of WordPress about RSS Bandit's inability to handle extension elements in their feed. The first complaint was about Kevin Devin's RSS feed which couldn't be read at all. A similar complaint was made by Jason Bock who was helpful enough to debug the problem himself and provide an answer in his post RSS Bandit problem fixed where he wrote

I can't believe what it took to fix my feed such that Rss Bandit could process it.

I'm just dumbfounded.

Basically, this is the way it looked:

<rss xmlns:blog="urn:blog">
   <blog:info directory="JB\Blog" />
   <channel>
      <!-- Channel stuff goes here... -->
   </channel>
</rss>

This is what I did to fix it:

<rss xmlns:blog="urn:blog">
   <channel>
      <!-- Channel stuff goes here... -->
   </channel>
   <blog:info directory="JB\Blog" />
</rss>

After debugging the Rss Bandit code base, I found out what the problem was. Rss Bandit reads the file using an XmlReader. Basically, it goes through the elements sequentially, and since the next node after <rss> wasn't <channel>, it couldn't find any information in the feed, and that's what was causing the choke. Moving <blog:info> to the end of the document solved it.

The assumption I made when developing the RSS parser in RSS Bandit was that the top-level rss element would have a channel element as its first child element. I handle extension elements if they appear as children of the channel or item elements, since these seem like logical places for them, but I never thought anyone would apply an extension to the rss element itself. I took a look at what the RSS 2.0 specification says about where extension elements can appear, and it seems my assumption was wrong since it states

RSS originated in 1999, and has strived to be a simple, easy to understand format, with relatively modest goals. After it became a popular format, developers wanted to extend it using modules defined in namespaces, as specified by the W3C.

RSS 2.0 adds that capability, following a simple rule. A RSS feed may contain elements not described on this page, only if those elements are defined in a namespace.

Since there is no explicit restriction on where extension elements can appear, it looks like I'll have to make changes to expect extension elements anywhere in the feed.
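RSS Bandit itself is written in C#, but the lenient approach can be sketched with the Python standard library (the feed contents here are made up for illustration): instead of assuming channel is the first child of rss, scan all the children and skip over any namespaced extension elements.

```python
import xml.etree.ElementTree as ET

# A made-up feed with an extension element before <channel>,
# mirroring the structure that tripped up the parser.
FEED = """\
<rss version="2.0" xmlns:blog="urn:blog">
   <blog:info directory="JB\\Blog" />
   <channel>
      <title>Example feed</title>
   </channel>
</rss>"""

def find_channel(feed_xml):
    """Return the <channel> element no matter where extension
    elements (namespaced children of <rss>) appear."""
    root = ET.fromstring(feed_xml)
    for child in root:
        # Extension elements carry a namespace, so their tags look like
        # '{urn:blog}info'; the channel element is in no namespace.
        if child.tag == "channel":
            return child
    return None

print(find_channel(FEED).findtext("title"))  # → Example feed
```

The same idea applies to a forward-only XmlReader: keep reading siblings until you hit the channel element rather than giving up when the first child isn't it.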

My apologies to the folks who've had problems reading feeds because of this oversight on my part. I'll fix the issue today and refresh the installer later this week.

 


 

I've seen a number of responses to my recent post, SOA, AJAX and REST: The Software Industry Devolves into the Fashion Industry, about the rebranding as AJAX of existing techniques for building websites that used to be called DHTML or Remote Scripting. I've found the most interesting responses to be the ones that point out why this technique isn't seeing mainstream usage in designing websites.

Scott Isaacs has a post entitled AJAX or as I like to call it, DHTML where he writes

As Adam Bosworth explained on Dare's blog, we built Ajax-like web applications back in 1997 during and after shipping IE4 (I drove the design of DHTML and worked closely with the W3C during this period).  At that time xmlhttp (or xml for that matter) did not exist. Instead, structured data was passed back and forth encapsulated in Javascript via hidden IFrames.  This experimental work helped prove the need for a structured, standardized approach for managing and transporting data.

Over the years, quite a few web applications have been built using similar approaches - many are Intranet applications and one of the best was Outlook Web Access. However, very few main-stream web-sites appeared. I believe this was due to a number of factors - the largest being that the typical web user-experience falls apart when you start building web-based applications.  The user-interface issues revolve mostly around history and refresh. (For example - In GMail, navigate around your inbox and either refresh the page or use the back button. In Spaces (IE Users), expand and view comments and hit the back button.  I am willing to wager that what happens is not what you really want).  The problem lies in we are building rich applications in an immature application environment. We are essentially attempting to morph the current state of the browser from a dynamic form package to a rich application platform.

I have noticed these problems in various Javascript based websites and it definitely is true that complex Javascript navigation is incompatible with the traditional navigation paradigm of the Web browser. Shelley Powers looks at the problems with Javascript based websites from another angle in her post The Importance of Degrading Gracefully where she writes

Compatibility issues aside, other problems started to pop up in regards to DHTML. Screen readers for the blind disabled JavaScript, and still do as far as I know (I haven’t tried a screen reader lately). In addition, security problems, as well as pop-up ads, have forced many people to turn off JavaScript–and keep it off.

(Search engines also have problems with DHTML based linking systems.)

The end result of all these issues–compatibility, accessibility, and security–is a fundamental rule of web page design: whatever technology you use to build a web page has to degrade, gracefully. What does degrade, gracefully, mean? It means that a technology such as Javascript or DHTML cannot be used for critical functionality; not unless there’s an easy to access alternative.
...
Whatever holds for DHTML also holds for Ajax. Some of the applications that have been identified as Ajax-enabled are flickr and Google’s suite of project apps. To test how well both degrade, I turned off my JavaScript to see how they do.
...
Google’s gmail, on the other hand, did degrade, but did not do so gracefully. If you turn off JavaScript and access the gmail page, you’ll get a plain, rather ugly page that makes a statement that the primary gmail page requires JavaScript, either turn this on, get a JS enabled browser, or go to the plain HTML version.

Even when you’re in the plain HTML version, a prominent line at the top keeps stating how much better gmail is with a Javascript enabled browser. In short, Google’s gmail degrades, by providing a plain HTML alternative, but it didn’t do so gracefully; not unless you call rubbing the customer’s nose in their lack of JS capability “graceful”.

You don’t even get this message with Google Suggest; it just doesn’t work (but you can still use it like a regular search page). As for Google Maps? Not a chance–it is a pure DHTML page, completely dependent on JavaScript. However, Mapquest still works, and works just as well with JS as without.

(Bloglines also doesn’t degrade gracefully — the subscription is based on a JavaScript enabled tree. Wordpress, and hence Wordform, does degrade gracefully.)

If we’re going to get excited about new uses of existing technology, such as those that make up the concept of Ajax, then we should keep in mind the rule of degrading gracefully: Flickr is an example of a company that understands the principle of degrading gracefully; Google is an example of a company, that doesn’t.

Update: As Doug mentions in comments, flickr is dependent on Flash. If Flash is not installed, it does not degrade gracefully.

I do remember being surprised that I had to add "http://*.google.com" to my trusted sites list to get Google Maps to work. Of course, none of the above observations is new, but given that we are seeing a renaissance of interest in using Javascript for building websites, it seems like a good time for the arguments discussing the cons of this approach to make a return as well.


 

Categories: Technology

While at ETech I got to spend about half an hour chatting with Steve Gillmor about what he's called "the attention problem", which isn't the same thing as the attention.xml specification. The attention problem is the problem that faces every power user of XML syndication clients such as RSS Bandit or Bloglines. It is so easy to subscribe to various feeds that eventually readers get overwhelmed by the flood of information hitting their aggregator's inbox. Some have used the analogy "drinking from a firehose" to describe this phenomenon.

This problem affects me as well, which is the impetus for a number of features in the most recent release of RSS Bandit, such as newspaper views which allow one to view all the unread posts in a feed in a single pane, more sortable columns such as author and comment count in the list view, and skim mode ('mark all items as read on exiting a feed or category'). However the core assumption behind all these features is that the user is reading every entry.

Ideally a user should be able to tell a client, "Here are the sites I'm interested in, here are the topics I'm interested in, and now only show me stuff I'd find interesting or important". This is the next frontier of features for RSS/ATOM aggregators and an area I plan to invest a significant amount of time in for the next version of RSS Bandit.

In my post Some Opinions on the Attention.xml Specification I faulted the attention.xml specification because it doesn't seem to solve the problems it sets out to tackle and some of the data in the format is unrealistic for applications to collect. After talking to Steve Gillmor I realize another reason I didn't like the attention.xml spec; it ignores all the hard problems and assumes they've been solved. Figuring out what data or what algorithms are useful for determining what items are relevant to a user is hard. Using said data to suggest new items to the user is hard. Coming up with an XML format for describing an arbitrary set of data that could be collected by an RSS aggregator is easy.

There are a number of different approaches I plan to explore over the next few months in various alphas of the Nightcrawler release of RSS Bandit.  My ideas have run the gamut from using Bayesian filtering to using the Technorati link cosmos feature for weighting posts [in which case I'd need batch methods which is something I briefly discussed with Kevin Marks at Etech last week]. There is also weighting by author that needs to be considered, for example I read everything written by Sam Ruby and Don Box. Another example is a topic that may be mundane (e.g. what I had for lunch) and something I'd never read if published by a stranger but would be of interest to me if posted by a close friend or family member.
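As a sketch of the Bayesian filtering idea (entirely hypothetical code, not part of RSS Bandit), an aggregator could learn word statistics from items the user read versus skipped and then rank incoming items by log-odds:

```python
import math
from collections import defaultdict

class NaiveBayesRelevance:
    """Toy Bayesian filter: learn from items the user read or skipped,
    then score new items by the log-odds that they are interesting."""

    def __init__(self):
        self.counts = {"read": defaultdict(int), "skipped": defaultdict(int)}
        self.totals = {"read": 0, "skipped": 0}

    def train(self, text, label):
        for word in text.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def score(self, text):
        # Sum per-word log-odds, with add-one smoothing so words
        # never seen in one class don't zero out the estimate.
        log_odds = 0.0
        for word in text.lower().split():
            p_read = (self.counts["read"][word] + 1) / (self.totals["read"] + 2)
            p_skip = (self.counts["skipped"][word] + 1) / (self.totals["skipped"] + 2)
            log_odds += math.log(p_read / p_skip)
        return log_odds

nb = NaiveBayesRelevance()
nb.train("xml syndication atom feed parsing", "read")
nb.train("lunch sandwich weather commute", "skipped")
print(nb.score("new atom feed format") > nb.score("my lunch today"))  # → True
```

Author weighting could be layered on top of something like this by adding a per-author prior to the score, which is one way to capture the "mundane topic, but from a close friend" case.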

We will definitely need a richer extensibility model so I can try out different approaches [and perhaps others can as well] before the final release. Looks like I have yet another spring and summer spent indoors hacking on RSS Bandit to look forward to. :)


 

I woke up this morning to find an interesting bit of Web history posted by Adam Bosworth in a comment in response to my post SOA, AJAX and REST: The Software Industry Devolves into the Fashion Industry. He wrote

I actually agree with the post and I could care less what it is called (Ajax or DHTML or ..) but I thought I'd correct a couple of historical points. We (I led the IE MSFT 4.0 team which shipped in the fall of 97) called it DHTML because we introduced the read and writable DOM so that Ecmascript could dynamically modify the page to react to fine grained user actions and asynchronous events. That really was new and inventing this and the term occurred simultaneously. Scott Isaac drove this work and worked tirelessly to get it into the W3C. We (MSFT) had a sneak preview for many developers in the fall of 96 actually showing things like pages expanding and collapsing in the gmail style and Tetris running using Javascript and DHTML in the page. Before then, javascript could hide/unhide items and react to some coarse events, but that was about it. We added the XMLHTTPRequest object (Chris Lovett actually did this) in IE 5.0 when we wrote the auction demo to show how XML could be used with IE to build a new interactive client. If there was a pioneer for Ajax, I'd argue that the auction demo was it.

I am surprised by how familiar I am with some of the people he mentions. Chris Lovett is currently an architect on the XML team at Microsoft and was the person who gave me the Extreme XML column on MSDN when I was still fresh out of college, in my first few months at Microsoft. Scott Isaacs is an architect on the MSN Spaces team whom I've been in a few design meetings with so far. Cool.

I also see that Adam is back posting to his blog with his post Tensions on the Web. He mentions our conversation at ETech, specifically

I haven't posted for quite a while because my last posts caused unfair attacks on Google by twisting the words I'd used in my posts and attributing my posts to Google. I want to be really clear about something. The opinions I express in this Blog are my own. They have nothing to do with Google's opinions. Google only asks that I not leak information about future products. Period. But despite that, recent blog posts of mine were used to attack Google and this upset me deeply. Much to my surprise, Dare Obasanjo came up to me and told me, after some fairly vitriolic complaining from me to him about this earlier state of affairs, that he wished I'd continue to post. I thought about this over the weekend and decided that to some degree, you have to take your chances in this environment rather than just hide when you don't like the behavior and that perhaps I was being over sensitive anyway. There are too many interesting things going on right now anyway.

Adam's blog postings have been somewhat inspirational to me and part of the reason I decided to move to MSN (in fact, I'd considered leaving Microsoft). They also led to the most popular entry in my blog, Social Software is the Platform of the Future. It's good to see that he's back to sharing his ideas with us all.

Welcome back to the blogosphere, Adam.


 

Categories: Ramblings

Ever since the article Ajax: A New Approach to Web Applications unleashed itself on the Web, I've seen the cacophony of hype surrounding Asynchronous JavaScript + XML (aka AJAX) reach thunderous levels. The introduction to the essay should already make one wary about the article; it begins

Ajax isn't a technology. It's really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:

So AJAX is using Javascript and XML, with the 'new' twist being that one communicates with a server using Microsoft's proprietary XmlHttpRequest object. AJAX joins SOA in ignominy as yet another buzzword created by renaming existing technologies, one that becomes a way for some vendors to sell more products without doing anything new. I agree with Ian Hixie's rant Call an apple an apple where he wrote

Several years ago, HTML was invented, and a few years later, JavaScript (then LiveScript, later officially named ECMAScript) and the DOM were invented, and later CSS. After people had been happily using those technologies for a while, people decided to call the combination of HTML, scripting and CSS by a new name: DHTML. DHTML wasn't a new technology it was just a new label for what people were already doing.

Several years ago, HTTP was invented, and the Web came to be. HTTP was designed so that it could be used for several related tasks, including:

  • Obtaining a representation of a resource from a remote host using that resource's identifier (GET requests).
  • Executing a procedure call on a remote host using a structured set of arguments (POST requests).
  • Uploading a resource to a remote host (PUT requests).
  • Deleting a resource from a remote host (DELETE requests).

People used this for many years, and then suddenly XML-RPC and SOAP were invented. XML-RPC and SOAP are complicated ways of executing remote procedure calls on remote hosts using a structured set of arguments, all performed over HTTP.

Of course you'll notice HTTP can already do that on its own, it didn't need a new language. Other people noticed this too, but instead of saying "hey everyone, HTTP already does all this, just use HTTP", they said, "hey everyone, you should use REST!". REST is just a name that was coined for the kind of architecture on which HTTP is based, and, on the Web, simply refers to using HTTP requests.

Several years ago, Microsoft invented XMLHttpRequest. People used it, along with JavaScript and XML. Google famously used it in some of their Web pages, for instance GMail. All was well, another day saved... then someone invented a new name for it: Ajax.
...
So I have a request: could people please stop making up new names for existing technologies? Just call things by their real name! If the real name is too long (the name Ajax was apparently coined because "HTTP+XML+HTML+XMLHttpRequest+JavaScript+CSS" was too long) then just mention the important bits. For example, instead of REST, just "HTTP"; instead of DHTML just "HTML and script", and instead of Ajax, "XML and script".
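Hixie's point, that the HTTP verbs already express these operations without any RPC envelope, can be illustrated with a throwaway in-process server (hypothetical resource names, Python standard library only):

```python
import http.server
import threading
import urllib.request

class Store(http.server.BaseHTTPRequestHandler):
    """Tiny resource store where each HTTP verb maps directly onto an
    operation: GET fetches, PUT uploads, DELETE removes."""
    resources = {"/greeting": b"hello"}

    def _reply(self, code, body=b""):
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        body = self.resources.get(self.path)
        if body is not None:
            self._reply(200, body)
        else:
            self._reply(404)

    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        self.resources[self.path] = self.rfile.read(length)
        self._reply(201)

    def do_DELETE(self):
        self.resources.pop(self.path, None)
        self._reply(204)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Store)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

got = urllib.request.urlopen(f"{base}/greeting").read()
put = urllib.request.urlopen(
    urllib.request.Request(f"{base}/note", data=b"todo", method="PUT"))
print(got, put.status)
server.shutdown()
```

No XML-RPC or SOAP envelope appears anywhere; the method name and the URL are the entire "protocol", which is exactly what the rant is arguing for.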

What I find particularly disappointing about the AJAX hype is that it has little to do with the technology and more to do with the quality of developers building apps at Google. If Google builds their next UI without the use of XML but only Javascript and HTML, will we be inundated with hype about the new JUDO approach (Javascript and Unspecified DOm methods) because it uses proprietary DOM extensions not in the W3C standard?

The software industry perplexes me. One minute people are complaining about standards compliance in various websites and browsers but the next minute Google ships websites built on proprietary Microsoft APIs and it births a new buzzword. I doubt that even the fashion industry is this fickle and inconsistent.

Postscript: I wasn't motivated to post about this topic until I saw the comments to the post Outlook Web Access should be noted as AJAX pioneer by Robert Scoble. It seems some people felt that Outlook Web Access did not live up to the spirit of AJAX. Considering that the distinguishing characteristic of the AJAX buzzword is using XmlHttpRequest, and Outlook Web Access is the reason that object exists (the first version was written by the Exchange team), I find this highly disingenuous. Others have pointed this out as well, such as Robert Sayre in his post Ever Wonder Why It's Called "XMLHTTPRequest"?


 

Categories: XML

Last night, we put the finishing touches on an upgrade to the server-side of MSN Messenger. The maximum size of a buddy list has been increased from 150 to 300. Enjoy.
 

Categories: MSN

March 21, 2005
@ 12:02 PM

TDavid has a post critical of MSN Spaces where he writes

I think it's about time for another MSN Spaces update.

I'm starting to lose interest in MSN Spaces as a blogging tool.

Some of the future features that have been hinted at, like the ability to host a space at our own domain like blogger does, need to come to fruition to revitalize my interest in this service. I purchased a domain for this very purpose months ago and it sits, collecting dust.

Not just me though, I don't hear many others talking about MSN Spaces any more either. I know the target audience isn't regular bloggers, but just people who want to have a journal, but if Microsoft really wants MSN Spaces to compete with blogger then they need to add the ability to run off their own domain and open up the template customization a lot more.

Otherwise this will become -- maybe it already is -- just a glorified diary for sharing notes, pictures and music playlists with family and friends.

I'll let Mike talk about upcoming features, site updates and the like. There is one thing I would like to address, though. If people view MSN Spaces as a place for people to share their words, music and images with their friends and family, then it would be fulfilling our goals for the site and I'd view it as an unbridled success. In my post entitled Why MSN Spaces and not MSN Blogs? I wrote

As to why they are called "spaces" as opposed to "blogs" it is because we believe there is more to sharing one's experiences online than your online journal. People want to share their thoughts, their favorite music, their favorite books, pictures of their loved ones and so on. It isn't just  posting your thoughts and experiences in the reverse chronological order which is what typically defines a "weblog". It's supposed to be a person's personal space online which was the original vision of personal publishing on the Web which spawned the personal homepage revolution a couple of years ago. Weblogs are the next iteration of that vision and we expect MSN Spaces to be the refinement of that iteration.

MSN Spaces is designed to allow people to create textual and visual diaries of their lives to share with others. We aren't the only ones who believe this, as is evidenced by competing services such as Yahoo! 360°, which has borrowed a leaf or two from our book. I'm curious as to what other goals a weblogging service could have?


 

Categories: MSN