It seems both Google and Yahoo! provided interesting news on the personalized search front recently.

Yahoo! MyWeb 2.0 seems to merge the functionality of del.icio.us with the power of Yahoo! search. I can now add tags to my Yahoo! bookmarks, view cached versions of my bookmarked pages and perform searches restricted to the sites in my bookmark list. Even cooler is that I can share my bookmarks with members of my contact list or just make them public. The search feature also lets me restrict searches to the sites others have shared. All they need to do is provide an API and add RSS feeds for this to be a del.icio.us killer.

Google Personalized Search takes a different tack on personalizing search results. Google's approach tracks your search history and which search results you clicked on. The next time you perform a search, Google Personalized Search moves certain results closer to the top based on that history and click behavior.
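To make the idea concrete, here's a toy sketch of that kind of history-based re-ranking. This is purely illustrative and not Google's actual algorithm; the types and the boost weight are made up.

    using System;
    using System.Collections.Generic;

    // Toy illustration of history-based re-ranking; not Google's algorithm.
    class SearchResult
    {
        public string Url;
        public double BaseScore; // relevance score from the normal ranker
    }

    class PersonalizedRanker
    {
        // How often the user previously clicked results from each host.
        readonly Dictionary<string, int> clickCounts;

        public PersonalizedRanker(Dictionary<string, int> clickCounts)
        {
            this.clickCounts = clickCounts;
        }

        public void Rerank(List<SearchResult> results)
        {
            // Boost results from hosts the user has clicked before (the weight is
            // arbitrary), then sort so boosted results float toward the top.
            results.Sort((a, b) => Score(b).CompareTo(Score(a)));
        }

        double Score(SearchResult r)
        {
            int clicks;
            clickCounts.TryGetValue(new Uri(r.Url).Host, out clicks);
            return r.BaseScore + 0.1 * clicks;
        }
    }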

As for this week's news about MSN Search? Well, you can catch what our CEO had to say about us in the ZDNet article Ballmer confident, but admits failings. We definitely have a lot of catching up to do, but I don't think the race is over yet.


 

Categories: Social Software | Technology

Just when you think we've missed the boat on software development trends on the Web, Microsoft surprises folks. First came the announcement that we will be baking RSS support into the next version of Windows. Now we've announced that we will be shipping a toolkit for building AJAX-style Web applications. In his post about the Atlas Project, Scott Guthrie writes

All of the pieces of AJAX – DHTML, JScript, and XMLHTTP – have been available in Internet Explorer for some time, and Outlook Web Access has used these techniques to deliver a great browser experience since 1998. In ASP.NET 2.0, we have also made it easier to write AJAX-style applications for any browser using asynchronous callbacks, and we use them in several of our built-in controls.

 

Recently, however, the technologies used by AJAX have become broadly available in all browsers, and use of this model for rich web applications has really taken flight. There are a number of high-profile new AJAX-style websites out there today, including a number by Google, as well as sites like A9 and Flickr. Microsoft will also have more sites that use this technology out there soon – check out Start.com and the MSN Virtual Earth project for examples.

 

The popularity of AJAX shows the growing demand for richer user experiences over the web. However, developing and debugging AJAX-style web applications is a very difficult task today. To write a rich web UI, you have to know a great deal of DHTML and JavaScript, and have a strong understanding of all the differences and design details of various browsers. There are very few tools to help you design or build these applications easily. Finally, debugging and testing these applications can be very tricky.

...

For this work, we’ve been working on a new project on our team, codenamed “Atlas”. Our goal is to produce a developer preview release on top of ASP.NET 2.0 for the PDC this September, and then have a website where we can keep updating the core bits, publishing samples, and building an active community around it.

 

Here are some of the pieces of Atlas that we are going to be delivering over time:

 

 

Atlas Client Script Framework

 

The Atlas Client Script Framework is an extensible, object-oriented 100% JavaScript client framework that allows you to easily build AJAX-style browser applications with rich UI and connectivity to web services. With Atlas, you can write web applications that use a lot of DHTML, JavaScript, and XMLHTTP, without having to be an expert in any of these technologies.

 

The Atlas Client Script Framework will work on all modern browsers, and with any web server. It also won’t require any client installation at all – to use it, you can simply include references to the right script files in your page.

 

The Atlas Client Script Framework will include the following components:

  • An extensible core framework that adds features to JavaScript such as lifetime management, inheritance, multicast event handlers, and interfaces
  • A base class library for common features such as rich string manipulation, timers, and running tasks
  • A UI framework for attaching dynamic behaviors to HTML in a cross-browser way
  • A network stack to simplify server connectivity and access to web services
  • A set of controls for rich UI, such as auto-complete textboxes, popup panels, animation, and drag and drop
  • A browser compatibility layer to address scripting behavior differences between browsers.
This is excellent news which I know a lot of our UX developers at MSN will be glad to hear. Scott Isaacs, who's been a key part of the AJAX development we've been doing at MSN, has already posted his opinions about Atlas in his blog entry entitled My personal thoughts on an AJAX (DHTML) framework.... His post highlights some of the history of AJAX as well as the issues a toolkit like Atlas could solve.

First RSS, now AJAX. All that's left is to see some announcement that we will be shipping a REST toolkit to make this a trifecta of utmost excellence. More nagging I must do...

 

Today I learned that Apple brings podcasts into iTunes, which is excellent news. This will definitely push subscribing to music and videos via RSS feeds into the mainstream. I wonder how long it'll take MTV to start providing podcast feeds.

One interesting aspect of the announcement which I didn't see in any of the mainstream media coverage was pointed out to me in Danny Ayers's post Apple - iTunes - Podcasting, where he wrote

Apple - iTunes - Podcasting and another RSS 2.0 extension (PDF). There are about a dozen new elements (or “tags” as they quaintly describe them) but they don’t seem to add anything new. I think virtually everything here is either already covered by RSS 2.0 itself, except maybe tweaked to apply to the podcast rather than the item.
They’ve got their own little category taxonomy and this delightful thing:

<itunes:explicit>
This tag should be used to note whether or not your Podcast contains explicit material.
There are 2 possible values for this tag: Yes or No

I wondered at first glance whether this was so you could tell when you were dealing with good data or pure tag soup. However, the word has developed a new meaning:

If you populate this tag with “Yes”, a parental advisory tag will appear next to your Podcast cover art on the iTunes Music Store
This tag is applicable to both Channel & Item elements.

So, in summary it’s a bit of a proprietary thing, released as a fait accompli. Ok if you’re targetting for iTunes, for anything else use Yahoo! Media RSS. I wonder where interop went.

This sounds interesting. So now developers of RSS readers that want to consume podcasts have to know how to handle the RSS 2.0 <enclosure> element, Yahoo!'s extensions to RSS and Apple's extensions to RSS to make sure they cover all the bases. Similarly, publishers of podcasts have to figure out which of these they want to publish.
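To make the overlap concrete, here is roughly what a single podcast item looks like if a publisher tries to cover all three at once. The URLs and values are made up, the namespace declarations are omitted, and the exact element names for the Yahoo! and Apple extensions should be checked against their respective specs:

    <item>
      <title>Episode 12</title>
      <!-- Plain RSS 2.0: the enclosure element most podcast clients already understand -->
      <enclosure url="http://example.org/episode12.mp3" length="12345678" type="audio/mpeg" />
      <!-- Yahoo! Media RSS: largely the same information restated -->
      <media:content url="http://example.org/episode12.mp3" type="audio/mpeg" />
      <!-- Apple iTunes extension: the parental advisory flag quoted above -->
      <itunes:explicit>No</itunes:explicit>
    </item>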

I guess all that's left is for Real Networks and Microsoft to publish their own extensions to RSS for providing audio and video metadata in feeds, and the picture will be complete. This definitely complicates my plans for adding podcasting support to RSS Bandit. And I thought the RSS 1.0 vs. RSS 2.0 vs. Atom discussions were exciting. Welcome to the world of syndication.

PS: The title of this post is somewhat tongue in cheek. It was inspired by Slashdot's headline over the weekend titled Microsoft To Extend RSS about Microsoft's creation of an RSS module for making syndicating lists work better in RSS. Similar headlines haven't been run about Yahoo! or Apple's extensions to RSS but that's to be expected since we're Microsoft. ;)


 

Categories: Syndication Technology | XML

As the developer of an RSS aggregator I'm glad to see Microsoft's Simple List Extensions for RSS. Many of the aggregator developers I spoke to at Gnomedex this weekend felt the same way. The reason for being happy about these extensions is that they provide a way to fix a number of key feeds that are broken in RSS aggregators today. This includes feeds such as the MSN Music Top 100 Songs feed, iTunes Top 25 Songs feed and Netflix Top 100 Movies feed.

The reasons these feeds appear broken in every aggregator in which I have tried them are covered in a previous post of mine entitled The Netflix Problem: Syndicating Ordered Lists in RSS. For those who don't have time to go back and read the post, the following list summarizes the problems with the feeds:

  1. When the list changes, some items change position, new ones enter the list and old ones leave. An RSS reader doesn't know to remove items that have left the list from the display and in some cases may not know to eliminate duplicates. Eventually you have a garbled list with last week's #25 song, this week's #25 song and last month's #25 song all in the same view.

  2. There is no way to know how to sort the list. Traditionally, RSS aggregators sort entries by date, which doesn't make sense for an ordered list.

The RSS extensions provided by Microsoft are meant to solve these problems and improve the current negative user experience of people who subscribe to ordered lists using RSS today.  

To solve the first problem Microsoft has provided the cf:treatAs element with the value "list", which signals to aggregators that whenever the feed is updated the previous contents should be dumped or archived and replaced by the new contents of the list. That way we no longer have last week's Top 25 song list commingled with this week's list. The interesting question for me is whether RSS Bandit should always refresh the contents of the list view when a list feed is updated (i.e. the feed always contains the current list) or whether to keep the old version of the list, perhaps grouped by date. My instinct is to go with the first option. I know Nick Bradbury also had some concerns about what the right behavior should be for treating lists in FeedDemon.
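Here is a sketch of what such a feed looks like; the cf prefix and namespace URI are my reading of the Simple List Extensions spec, so check the published specification for the authoritative details:

    <rss version="2.0" xmlns:cf="http://www.microsoft.com/schemas/rss/core/2005">
      <channel>
        <title>Top 25 Songs</title>
        <!-- Marks the feed as a complete list: on each update an aggregator should
             replace (or archive) the previous items instead of merging them. -->
        <cf:treatAs>list</cf:treatAs>
        <item><title>#1: Some Song</title></item>
        <item><title>#2: Another Song</title></item>
        <!-- ...remaining items in list order... -->
      </channel>
    </rss>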

To solve the second problem Microsoft has provided the cf:sort element, which can be used to specify which elements on an item should be used for sorting, whether the field is numeric or textual (so we know how to sort it), and what the human-readable name of the field should be when displayed to the user. I'm not really sure how to support this in RSS Bandit. Having every feed be able to specify what columns to show in the list view complicates the user interface somewhat and requires a degree of flexibility in the code. Changing the code to handle this should be straightforward, although it may add some complexity.

On the other hand there are some user interface problems. For one, I'm not sure what the default sort field for lists should be. My gut instinct is to add a "Rank" column to the list of columns RSS Bandit supports by default and have it be a numeric field that is numbered using the document order of the feed. So the first item has rank 1, the second has rank 2, etc. This handles the case where a feed has a cf:treatAs element but no cf:sort values, which will be needed for feeds such as the Netflix Top 100 feed that don't have a field suitable for sorting. The second problem is how to show the user which columns can be added to a feed. Luckily we already have a column chooser that is configurable per feed in RSS Bandit; however, we now have to make the set of columns offered by that chooser configurable per feed as well. This might be confusing to users but I'm not sure what other options we can try.
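A minimal sketch of that fallback is below. The FeedItem type and its Columns property are made-up stand-ins, not actual RSS Bandit classes; the point is just that document order supplies the default rank.

    using System.Collections.Generic;

    // Sketch: give every item in a list feed (cf:treatAs = "list") a default "Rank"
    // when the feed supplies no cf:sort fields. FeedItem is a hypothetical stand-in.
    class FeedItem
    {
        public string Title;
        public readonly Dictionary<string, object> Columns = new Dictionary<string, object>();
    }

    static class ListFeedHelper
    {
        public static void AssignDefaultRank(IList<FeedItem> itemsInDocumentOrder)
        {
            // Document order defines the ranking: the first item is rank 1, and so on.
            for (int i = 0; i < itemsInDocumentOrder.Count; i++)
            {
                itemsInDocumentOrder[i].Columns["Rank"] = i + 1;
            }
        }
    }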


 

I missed the first few minutes of this talk.

Bob Wyman of PubSub stated he believed Atom was the future of syndication. Other formats would eventually be legacy formats that would be analogous to RTF in the word processing world. They will be supported but rarely chosen for new efforts in the future.

Mark Fletcher of Bloglines then interjected and pleaded with the audience to stop the practice of providing the same feed in multiple formats. Bob Wyman agreed with his plea and also encouraged members of the audience to pick one format and stick to it. Having the same feed in multiple syndication formats confuses end users who are trying to subscribe to the feed and leads to duplicate items showing up in services that specialize in searching syndicated content, such as PubSub, Feedster or the Bloglines search features.

A member of the audience responded that he used multiple formats because different aggregators support some formats better than others. Bob Wyman replied that bugs in aggregators should result in pressure on RSS aggregator developers to fix them, instead of causing confusion for end users by publishing multiple versions of the same feed. Bob then advocated picking Atom, since a lot of lessons had been learned via the IETF process to improve the format. Another audience member mentioned that 95% of his syndication traffic was for his RSS feed, not his Atom feed, so he knows which format is winning in the marketplace.

A question was raised about whether the admonition to avoid multiple versions of a feed also applied to sites that have multiple feeds for separate categories of content; the specific example was having a regular feed and a podcast feed. Bob Wyman thought that this was not a problem. The problem was the same content served in different formats.

The discussion then switched to ads in feeds. Scott Rafer of Feedster said that he agreed with Microsoft's presentation from the previous day that subscribing is a new paradigm that has come after browsing and searching for content. Although we have figured out how to provide ads to support the browse and search scenarios, we are still experimenting with how to provide ads to support the subscribe scenarios. Some sites like the New York Times use RSS to draw people to their websites by providing excerpts in their feeds. Certain consultants have full-text feeds which they view as advertising their services, while others put ads in their feeds. Bob Wyman mentioned that PubSub is waiting to see which mechanism the market settles on for advertising in feeds before deciding on an approach. He added that finding a model for advertising and syndication was imperative so that intermediary services like PubSub, Feedster and Bloglines can continue to exist.

An audience member then followed up and asked why these services couldn't survive by providing free services to the general public and charging corporate users instead of resorting to advertising. The response was that both PubSub and Feedster already have corporate customers whom they charge for their services, but this revenue is not enough for them to continue providing services to the general public. The Bloglines team considered having fee-based services but discarded the idea because they felt it would be a death knell for the service, given that most service providers on the Web are free rather than fee-based.

An audience member asked if any of the services would have done anything different two years ago when they started given the knowledge they had now. The answers were that Feedster would have chosen a different back-end architecture, Bloglines would have picked a better name and PubSub would have started a few months to a year sooner.

I asked the speakers what features they felt were either missing in RSS or not being exploited. Mark Fletcher said that he would like to see more usage of the various comment related extensions to RSS which currently aren't supported by Bloglines because they aren't in widespread use. The other speakers mentioned that they will support whatever the market decides is of value.


 

Scott Gatz of Yahoo! started by pointing out that there are myriad uses for RSS. For this reason he felt that we need more flexible user experiences for RSS that map to these various uses. For example, a filmstrip view is more appropriate for reading a feed of photos than the traditional blog- and news-oriented user interface typically favored by RSS readers. Yahoo! is definitely thinking about RSS beyond just blogs and news, which is why they've been working on Yahoo! Media RSS, an extension to RSS that makes it better at syndicating digital media content like music and videos. Another aspect of syndication Yahoo! believes is key is the ability to keep people informed about updates independent of where they are or what device they are using. This is one of the reasons Yahoo! purchased the blo.gs service.

Dave Sifry of Technorati stated that he believed the library model of the Web where we talk about documents, directories and so on is outdated. The Web is more like a river or stream of millions of state changes. He then mentioned that some trends to watch that emphasized the changing model of the Web were microformats and tagging.

BEGIN "Talking About Myself in the Third Person"

Steve Gillmor of ZDNet began by pointing out Dare Obasanjo in the audience and saying that Dare was his hero and someone he admired for the work he'd done in the syndication space. Steve then asked why, in a recent blog posting, Dare had mentioned that he would not support Bloglines' proprietary API for synchronizing a user's subscriptions with a desktop RSS reader but went on to mention that he would support Newsgator Online's proprietary API. Specifically, he wondered why Dare wouldn't work towards a standard instead of supporting proprietary APIs.

At this point Dare joined the three speakers on stage. 

Dare mentioned that from his perspective there were two major problems confronting users of an RSS reader. The first was that users eventually need to be able to read their subscriptions from multiple computers, because many people have multiple computers (e.g. home & work or home & school) from which they read news and blogs. The second was that, due to the ease of subscribing to feeds, people eventually succumb to information overload and need a way to see only the most important or interesting content in the feeds to which they are subscribed. This is the "attention problem" that Steve Gillmor is a strong advocate of solving. The issue discussed in Dare's blog post is the former, not the latter. The reason for working with the proprietary APIs provided by online RSS readers instead of advocating a standard is that the online RSS readers are the ones in control. At the end of the day, they are the ones that provide the API, so they are the ones that have to decide whether they will create a standard or not.

Dare rejoined the audience after speaking.  

END "Talking About Myself in the Third Person"

Dave Sifry followed up by encouraging cooperation between vendors to solve the various problems facing users, giving Yahoo!'s work with Marc Canter on digital media as an example.

Steve Gillmor then asked audience members to raise their hand if they felt that the ability to read their subscriptions from multiple computers was a problem they wanted solved. Most of the audience raised their hands in response.

A member of the audience responded to the show of hands by advocating that people use web-based RSS readers like Bloglines. Scott Gatz agreed that using a web-based aggregator was the best way to access one's subscriptions from multiple computers. There was some disagreement between members of the audience and the speakers about whether problems using Bloglines from mobile devices prevent it from being the solution to this problem.

From the audience, Dave Winer asked Dave Sifry why Technorati invented Attention.Xml instead of reusing OPML. The response was that the problem goes beyond just synchronizing the list of feeds the user is subscribed to.

Steve Gillmor ended the session by pointing out that once RSS usage becomes widespread someone will have to solve the problem once and for all.  


 

This was a keynote talk given by Dean Hachamovitch and Amar Gandhi that revealed the RSS platform that will exist in Longhorn and the level of RSS support in Internet Explorer 7, as well as showing some RSS extensions that Microsoft is proposing.

Dean started by talking about Microsoft's history with syndication. In 1997, there was Active Desktop and channels in IE 4.0 & IE 5.0, which weren't really successful. We retreated from the world of syndication for a while after that. Then in 2002, Don Box started blogging on GotDotNet. In 2003, we hired Robert Scoble. In 2004, Channel 9 was launched. Today we have RSS feeds coming out of lots of places at Microsoft. This includes the various feeds on the 15 million blogs on MSN Spaces, the 1500 employee blogs on http://blogs.msdn.com and http://blogs.technet.com, hundreds of feeds on the Microsoft website and even MSN Search, which provides RSS feeds for search results.

Using XML syndication is an evolution in the way people interact with content on the Web. The first phase was browsing the Web for content using a web browser. Then came searching the Web for content using search engines. And now we have subscribing to content using aggregators. Each step hasn't replaced the previous one but has instead enhanced the experience of using the Web. In Longhorn, Microsoft is betting big on RSS both for end users and for developers in three key ways:

  1. Throughout Windows various experiences will be RSS-enabled and will be easy for end users to consume
  2. An RSS platform will be provided that makes it easy for developers to RSS-enable various scenarios and applications
  3. The set of scenarios that RSS handles will be expanded by proposing extensions to the format

Amar then demoed the RSS integration in Internet Explorer 7. Whenever Internet Explorer encounters an RSS feed, a button in the browser chrome lights up to indicate that a feed is available. Clicking on the button shows a user-friendly version of the feed that provides rich search, filtering and sorting capabilities. The user can then hit a '+' button and subscribe to the feed. Amar then navigated to http://search.msn.com and searched for "Gnomedex 5.0". Once he got to the search results, the RSS button lit up and he subscribed to the search results. This showed one possible workflow for keeping abreast of news of interest using the RSS capabilities of Internet Explorer 7 and MSN Search.

At this point Amar segued to talk about the Common RSS Feed List. This is a central list of feeds that a user is subscribed to that is accessible to all applications not just Internet Explorer 7. Amar then showed a demo of an altered version of RSS Bandit which used the Common RSS Feed List and could pick up both feeds he'd subscribed to during the previous demo in Internet Explorer 7. I got a shout out from Amar at this point and some applause from the audience for helping with the demo. :)

Dean then started to talk about the power of the enclosure element in RSS 2.0. What is great about it is that it enables one to syndicate all sorts of digital content: video, music, calendar events, contacts, photos and so on can all be syndicated via RSS due to the flexibility of enclosures.

Amar then showed a demo using Outlook 2003 and an RSS feed of the Gnomedex schedule he had created. The RSS feed had an item for each event on the schedule and each item had an iCalendar file as an enclosure. Amar had written a 200 line C# program that subscribed to this feed then inserted the events into his Outlook calendar so he could overlay his personal schedule with the Gnomedex schedule. The point of this demo was to show that RSS isn't just for aggregators subscribing to blogs and news sites.
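Something along these lines is all it really takes. The following is a rough sketch of the idea, not Amar's actual program; the feed URL is a placeholder and importing the downloaded .ics files into Outlook is left out for brevity.

    using System;
    using System.IO;
    using System.Net;
    using System.Xml;

    // Rough sketch: download every iCalendar (.ics) enclosure from an RSS feed.
    // The feed URL is a placeholder, not the real Gnomedex schedule feed.
    class EnclosureDownloader
    {
        static void Main()
        {
            string feedUrl = "http://example.org/gnomedex-schedule.xml"; // placeholder

            XmlDocument feed = new XmlDocument();
            feed.Load(feedUrl);

            using (WebClient client = new WebClient())
            {
                foreach (XmlElement enclosure in feed.SelectNodes("/rss/channel/item/enclosure"))
                {
                    string url = enclosure.GetAttribute("url");
                    string type = enclosure.GetAttribute("type");
                    if (type == "text/calendar") // only grab the iCalendar enclosures
                    {
                        string fileName = Path.GetFileName(new Uri(url).AbsolutePath);
                        client.DownloadFile(url, fileName);
                        Console.WriteLine("Saved " + fileName);
                    }
                }
            }
        }
    }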

Finally, Dean talked about syndicating lists of content. Today lots of people syndicate Top 10 lists, ToDo lists, music playlists and so on. However, RSS is limited in how it can describe the semantics of a rotating list. Specifically, the user experience when the list changes, such as when a song in a top 10 list leaves the list or moves to another position, is pretty bad. I discussed this very issue in a blog post from a few months ago entitled The Netflix Problem: Syndicating Ordered Lists in RSS.

Microsoft has proposed some extensions to RSS 2.0 that allow RSS readers to deal with ordered lists better. A demo was shown that used data from Amazon Web Services to create an RSS feed of an Amazon wish list (the data was converted to RSS feeds with the help of Jeff Barr). The RSS extensions provided information that enabled the demo application to know which fields to use for sorting and/or grouping the items in the wish list feed.

The Microsoft Simple List Extensions Specification is available on MSDN. In the spirit of the RSS 2.0 specification, it is available under the Creative Commons Attribution-ShareAlike license (version 2.5).

A video was then shown of Lawrence Lessig where he commended Microsoft for using a Creative Commons license.

The following is a paraphrase of the question & answer session after their talk:

Q: What syndication formats are supported?
A: The primary flavors of RSS such as RSS 0.91, RSS 1.0 and RSS 2.0 as well as the most recent version of Atom.

Q: How granular is the common feed list?
A: The Longhorn RSS object model models all the data within the RSS specification including some additional metadata. However it is fairly simple with 3 primary classes.

Q: Will Internet Explorer 7 support podcasting?
A: The RSS platform in Longhorn will support downloading enclosures.

Q: What is the community process for working on the specifications?
A: An email address for providing feedback will be posted on the IE Blog. Robert Scoble also plans to create a wiki page on Channel 9.  

Q: What parts of the presentation are in IE 7 (and thus will show up in Windows XP) and what parts are in Longhorn?
A: The RSS features of Internet Explorer 7 such as autodiscovery and the Common RSS Feed List will work in Windows XP. It is unclear whether other pieces such as the RSS Platform Sync Engine will make it to Windows XP.

Q: What are other Microsoft products such as Windows Media Player doing to take advantage of the RSS platform?
A: The RSS platform team is having conversations with other teams at Microsoft to see how they can take advantage of the platform.


 

I've been doing a lot of thinking and writing (at work) about Web platforms and Web APIs recently. Just yesterday I was talking to some folks about Web APIs and an interesting question came up about the Google Web APIs. Basically we couldn't come up with good reasons for Google to have exposed them or for people to use them in their current form. I left the conversation wondering whether I just wasn't smart enough to see the brilliance of their strategy. Thanks to a post by Alex Bosworth entitled Search Engine APIs, it seems I may not be so dumb after all. He wrote

Recently, the search engines have been trying to get into the mix, with new pushes to gain developer followers. Google recently launched a new subdomain emphasizing their commitment to not being a technology black hole. Less recently, Yahoo launched their own developer site, in an attempt to build 3rd party value around their product.

However so far these 3rd party applications haven't found widespread appeal and had breakout success. This is in large part because of crippling artificial restrictions placed on Google/Yahoo/etc APIs.

It's not to say that applications made on platforms haven't been fairly cool ie: Yahoo News In Pictures, or clustering Yahoo search results.

But these will never progress beyond cool toys, and I'm not sure that Yahoo/Google/etc realize that. Talking to Google engineers, I was informed that Google thinks of their 3rd party API program as a complete failure and only a couple people have made anything vaguely useful from it. The reason why is they have no real commitment to making these programs work. For example their terms and conditions are so draconian and laden with legalese there is no motive for developers to work with them.

This confirms some of the suspicions I had about their search engine API. Of course, this isn't to say I think providing programmatic access to search engine results is a bad idea. I just think Google's approach is very weird.

For example, I like the fact that I can get the results of a query on recent news items for terms such as "G-Unit Game beef" as an RSS feed from MSN Search.  Or that I can subscribe to the results of searching for my name on Feedster in my favorite RSS reader. As an end user I get a lot of use out of them and as a developer they are a cinch to work with.
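Consuming one of these feeds takes only a few lines of code. The sketch below treats the search results like any other RSS feed; the URL shape is illustrative, so in practice you'd copy whatever RSS link the search engine actually exposes on its results page.

    using System;
    using System.Xml;

    // Sketch: read a search-results RSS feed like any other feed and print the headlines.
    // The URL below is illustrative; use the RSS link the search engine actually provides.
    class SearchFeedReader
    {
        static void Main()
        {
            string query = Uri.EscapeDataString("G-Unit Game beef");
            string feedUrl = "http://search.msn.com/news/results.aspx?q=" + query + "&format=rss"; // illustrative

            XmlDocument feed = new XmlDocument();
            feed.Load(feedUrl);

            foreach (XmlElement item in feed.SelectNodes("/rss/channel/item"))
            {
                Console.WriteLine(item["title"].InnerText + " - " + item["link"].InnerText);
            }
        }
    }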

There's a difference between cool apps and useful apps. The people at Google sometimes don't get the difference but the guys at Yahoo! definitely do. All you have to do is compare Yahoo! Maps to Google Maps to figure this out. Similarly, I think the APIs provided by Google don't really enable people to build useful apps, just cool little toys. MSN Search and Yahoo! seem to be on a better track because they enable more useful scenarios with the breadth of services they expose as APIs* as well as the fact that they provide these results as RSS.

What I've actually found surprising is that neither service puts sponsored search results in the results returned. I assume this is because we are still in the early days of search syndication, but once it catches on and regular people are subscribing to search results so they can do things like build their own newspaper/dashboard, I'm sure we'll see the ads creep in.

* RSS is an API. Deal with it.


 

Categories: MSN | XML Web Services

Stephen O'Grady has a blog post entitled Gmail Fighting Off Greasemonkey? where he writes

I'm not sure of the reasoning behind it, but it appears that Google may have made some behind the scenes changes to Gmail that disrupted the scripts and extensions I rely on to get the most out of my account. One of the Apache devs noted here that his Greasemonkey enabled persistent searches were no longer functioning, and in the same timeframe I've lost my delete key. It's not just Greasemonkey scripts, however, as my Gmail Notifier extension for Firefox has apparently been disabled as well. While it's Google's prerogative to make changes to the service as necessary for maintenance or other reasons, I'm hoping this is not a deliberate play at preventing would-be participants from enhancing the value of Google's service. It's remarkable how much less useful Gmail is to me when I have to log in to see if I have mail, or can't easily delete the many frivolous emails I receive each day (yes, I'm aware that I can use a POP client for this, but I'd rather not).
...
Update: As one reader and a few posters to the GM listserv have noted, one thing that's disrupted a variety of user scripts has been the fact that the domain to Gmail has changed from gmail.google.com to mail.google.com. While simply adding the domains into the GM interface had no effect on my Gmail, a reinstallation of a version of the script with updated domain returned my beloved Delete button. What do developers think of this change with Google's service? Here's one take from the GM list: "I noticed [the domain change] too. Why they can't just leave it alone, I can't understand." To be a bit less harsh, while Google probably had good reasons for making the change, it would have been great to see them be proactive and notify people of the change via their blog or some other mechanism.

I find this hilarious. Greasemonkey scripts work by effectively screen scraping the website and inserting changes into the HTML. Stephen and others who are upset by Google's change are basically saying that Google should never change the HTML or URL structure of the website ever again because it breaks their scripts. Yeah, right.

Repeat after me, a web page is not an API or a platform.  

Speaking of apps breaking because of GMail domain changes it seems RSS Bandit users were also affected by the GMail domain name change. It looks like we have a problem with forwarding the username and password after being redirected from https://gmail.google.com/gmail/feed/atom to https://mail.google.com/gmail/feed/atom. I'll fix this bug in the next version but in the meantime RSS Bandit users should be able to work around this by changing the URL manually to the new one. My bad.
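For anyone curious what such a fix tends to look like, one common technique is to turn off automatic redirects and re-issue the request against the new Location with the credentials re-applied. The sketch below shows that general pattern; it is simplified and is not the actual RSS Bandit patch.

    using System;
    using System.IO;
    using System.Net;

    // Simplified sketch of one way to handle a redirect while keeping credentials:
    // disable automatic redirection and re-issue the request against the new
    // Location with the same credentials. Not the actual RSS Bandit fix.
    class AuthenticatedFeedFetcher
    {
        static string Fetch(string url, string user, string password)
        {
            for (int hops = 0; hops < 5; hops++) // guard against redirect loops
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                request.Credentials = new NetworkCredential(user, password);
                request.AllowAutoRedirect = false;

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    int status = (int)response.StatusCode;
                    if (status >= 300 && status < 400)
                    {
                        // Follow the redirect ourselves so the credentials are re-applied.
                        url = new Uri(new Uri(url), response.Headers["Location"]).ToString();
                        continue;
                    }
                    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                    {
                        return reader.ReadToEnd();
                    }
                }
            }
            throw new WebException("Too many redirects for " + url);
        }
    }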


 

Categories: RSS Bandit | Technology

I've been really busy at work and haven't had free time left over to do significant work on RSS Bandit for the past few weeks. I should have some more free time starting this weekend and will begin making progress again. In the meantime, some decisions have been made which are probably worth sharing.

Synchronization with Bloglines and Newsgator Online
I've mentioned in the past that I'd like the next version of RSS Bandit to be able to synchronize its state with a user's account in Newsgator Online or Bloglines. However, I've also mentioned the various limitations of the Bloglines sync API which make it less than ideal as a way to synchronize the state of a desktop RSS reader with an online aggregator. On the flip side, I've had some discussions with folks at Newsgator which imply that I'll be able to get a full synchronization API from them by the time Nightcrawler ships.

My current decision is that I am dropping plans to synchronize RSS Bandit with Bloglines, due to the limitations of its API, which are guaranteed to leave users with a poor experience. Instead the only new synchronization target added in the next release will be Newsgator Online.

Newsgroup support
One interesting problem that came up when integrating newsgroups into the RSS Bandit user interface was what to use as permalinks for newsgroup posts. Thankfully, Google Groups comes to the rescue here. Each newsgroup post is given a permalink that maps to the Google Groups URL for that message ID. This works fine for public newsgroups that are archived by Google but leads to broken links for internal newsgroups. I think this is a decent trade-off.
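The mapping itself is essentially a one-liner. The sketch below shows the idea; the Google Groups URL format (the selm query parameter) is my recollection and may need checking, and the Message-ID cleanup is simplified.

    using System;

    // Sketch: build a permalink for a newsgroup post from its Message-ID header.
    // The Google Groups URL format here is an assumption and may need verifying.
    static class NewsgroupPermalink
    {
        public static string FromMessageId(string messageId)
        {
            // Message-IDs usually arrive wrapped in angle brackets, e.g. "<abc123@host>".
            string id = messageId.Trim('<', '>');
            return "http://groups.google.com/groups?selm=" + Uri.EscapeDataString(id);
        }
    }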

The screenshot below shows this feature in action

Subscription Wizard
Given that we now provide the ability to subscribe to both newsgroups and RSS/Atom feeds we've decided to revamp the feed subscription mechanism. We'll be adding an "Add Subscription" wizard which gives people the option of adding an RSS or Atom feed, a newsgroup, a web page (which we then scour for feeds) or a search term (which we map to an MSN Search or Feedster search results feed). My initial instinct with the last option was to just go with PubSub but it doesn't seem to have a straightforward mechanism for people to create search feeds programmatically. REST wins again.

Screenshot below 


 

Categories: RSS Bandit