November 27, 2005
@ 06:23 PM

The Nightcrawler release of RSS Bandit is now done and available for all. Besides the new features there are a number of performance improvements, especially with regards to the responsiveness of the application. This release is available in the following languages: German, English, Brazilian Portuguese, Traditional Chinese, Polish, and Serbian.

Download the installer from here. Differences between v1.3.0.29 and v1.3.0.38 below.

NEW FEATURES

  • NNTP Newsgroups support: Users can specify a public NNTP server such as news.microsoft.com and subscribe to newsgroups on that server. Users can also respond to newsgroup posts as well as create new posts. Permalinks in a newsgroup post point to the post on Google Groups.

  • Item Manipulation from Newspaper Views: Items can be marked as read or unread and flagged or unflagged directly from the newspaper view. This improves end user work flow as one no longer has to leave the newspaper view and right-click on the list view to either flag or mark a post as unread.

  • Subscription Wizard: The process for subscribing to newsgroups, search results and web feeds has been greatly simplified. For example, users no longer need to know the web feed of a particular web site to subscribe to it but can instead specify the web page URL and discovery of its web feed is done automatically.

  • Synchronization with Newsgator Online: Users can synchronize the state of their subscribed feeds (read/unread posts, new/deleted feeds, etc) between RSS Bandit and their account on Newsgator Online. This allows the best of both worlds where one can use both a rich desktop client (RSS Bandit) and a web-based RSS reader (Newsgator Online) without having to worry about marking things as read in both places.

  • Using back and forward arrows to view last post seen in reading pane: When navigating various feeds in the RSS Bandit tree view it is very useful to be able to go back to the last feed or item viewed especially when using the [spacebar] button to read unread items. RSS Bandit now provides a way to navigate back and forth between items read in the reading pane using the back and forward buttons in the toolbar. 

  • Atom 1.0 support: The Atom 1.0 syndication format is now supported.

  • Threaded Posts Now Optional: The feature where items that link to each other are shown as connected items reminiscent of threaded discussions can now be disabled and is off by default. This feature is very processor intensive and can slow down the average computer to the point that it is unusable if one is subscribed to a large number of feeds.

  • Launching Browsers in the Background: A new Web browser can now be opened from a newspaper view without it stealing the focus of the application by holding down Ctrl when clicking a link in the reading pane.

  • UI Improvements: Icons in the tree view have been improved to make them more distinctive and also visually separate newsgroups from web feeds.
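The feed autodiscovery the Subscription Wizard performs is conventionally done by scanning a page's head for alternate links pointing at RSS or Atom feeds. A minimal sketch in Python (RSS Bandit itself is written in C#; the class and function names here are illustrative, not the actual implementation):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    """Collect hrefs from <link rel="alternate"> tags whose type is a feed."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if ((a.get("rel") or "").lower() == "alternate"
                and (a.get("type") or "").lower() in FEED_TYPES
                and a.get("href")):
            # Resolve relative hrefs against the page URL
            self.feeds.append(urljoin(self.base_url, a["href"]))

def discover_feeds(html, base_url):
    """Return the feed URLs advertised by an HTML page."""
    finder = FeedLinkFinder(base_url)
    finder.feed(html)
    return finder.feeds
```

Given a page URL, a client fetches the HTML, runs discovery, and subscribes to the first (or lets the user pick among the) returned feed URLs.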

BUG FIXES

  • Scroll wheel manipulates the main window instead of the drop down list on a dialog box when the dialog box has focus.

  • Setting the default refresh rate to 0 (zero) did not disable automatic refresh of all feeds

  • No results displayed when subscribed to URLs with escaped characters such as http://blogsearch.google.com/blogsearch_feeds?hl=en&q=C%23&btnG=Search+Blogs&num=100&output=rss

  • ThreadAbortException error sometimes occurs when multiple feeds clicked on in rapid succession

  • Mark all as read in a search folder does not update the treeview state of the subscriptions

  • Refresh button for web browser refresh does not work

  • Treeview font issue. Font height may be rendered far too small at high screen resolutions or with custom font definitions that use large fonts.

  • Keyboard shortcut for "Mark All Items As Read" changed to Ctrl+Q from Ctrl+M

  • Deleting a feed changes the selected tree node to the "My Feeds" node instead of the next node in the tree.

  • Proxy bypass server list damaged after reload of the Options dialog if more than one bypass address/server was specified.

  • Pushing [spacebar] or "Next Unread Item" when positioned in comments goes to newest unread instead of oldest unread comment.

  • The links in the Options dialog didn't work

  • Ctrl-Enter does not expand the URLs as expected

  • When uploading a feed list, the dialog box that appears on successful upload has been removed since it is redundant.

  • The "Email This" menu option didn't work correctly

  • Crash caused if an item has already been added: "AppUserDataPath"

  • Tab controls on the Options dialog are rendered incorrectly on wide screen displays (16:9 aspect ratio)

  • Category settings don't stick for nested categories

  • Crash occurs after setting properties for a category if some feeds are not in a category

  • Toolbar state and window maximized state are not saved if the application was closed via the system tray icon context menu

  • Modifying the feed list (by subscribing to a new feed or deleting one) while feeds are loading from disk or refreshing from the Web stops the loading/downloading progress.

  • On failed authentication with credentials, we don't ignore cookies and retry one more time before giving up and reporting the failure

  • Some HTML entities are not decoded correctly in UI widgets

  • NullReferenceException error on "Update category" command

  • Access requests to comment feeds on a password-protected feed do not use the credentials used to access the main feed

  • Search criteria on search folders not reloaded correctly on restart
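A plausible illustration of the escaped-character bug above (a sketch in Python; the client itself is C#, and the exact failure inside RSS Bandit isn't documented here): if a URL like the Google Blog Search one is percent-decoded before the request is made, the `%23` becomes a literal `#`, which the URL parser then treats as a fragment delimiter and silently truncates the query.

```python
from urllib.parse import unquote, urlsplit

url = "http://blogsearch.google.com/blogsearch_feeds?q=C%23&output=rss"

# Correct handling: pass the URL through exactly as the user entered it.
parts = urlsplit(url)
assert parts.query == "q=C%23&output=rss"

# Buggy handling: percent-decoding before the request turns %23 into a
# literal '#', which then reads as a fragment delimiter and silently
# truncates the query string to just "q=C".
broken = urlsplit(unquote(url))
assert broken.query == "q=C"
assert broken.fragment == "&output=rss"
```

The general lesson is to treat an already-encoded URL as opaque rather than decoding and re-encoding it on the way to the wire.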


 

Categories: RSS Bandit

Jeff Jarvis has a post entitled A principle: I have a right to know when I am read which is somewhat charming in its naïveté. He writes

How about this as a fundamental principle of content and conversation on the internet:

I have a right to know when what I create is read, heard, viewed, or used if I wish to know that.

That is my followup to the whine about RSS — and content — caching below.

If this simple principle were built into applications — not the internet, per se, but in how readers and viewers work — then caching and P2P, which both serve creators by reducing bandwidth demand, would not be issues. This also would help those who want to make use of advertising (though actually serving ads is a different matter).

I’d like to see this as a technical add-on to Creative Commons: Distribute my content freely, please, on the condition that you allow applications to report traffic back to me. And applications designers should build such reporting in. The creator is still free not to require this and the end user is still free not to consume those things that require ping-backs. But simple traffic reporting is at least common courtesy.

I can understand where Jeff is coming from with this post. However, that doesn't change the fact that it betrays a fundamental misunderstanding of how the Web has worked for over a decade. Caching of Web requests by intermediaries between the user and the target web server is a fundamental aspect of the design of the World Wide Web. From proxy servers at your ISP or on your corporate network right down to your Web browser, caching Web requests is a fundamental feature. It reduces the load on target Web servers and leads to a better user experience thanks to faster page loads. A consequence of this is that web site owners usually have an inaccurate view of how many people are actually reading their site. All of this is explained in several writings from the last decade, such as Why web usage statistics are (worse than) meaningless and Understanding web log statistics and metrics.
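The freshness decision these intermediaries make is driven by response headers, not by anything the publisher gets to observe. A simplified sketch of the HTTP/1.1 logic (the header names are real; the function itself is illustrative, not any particular proxy's code):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def is_fresh(headers, age_seconds):
    """Return True if a cached response may be served without revalidating
    against the origin server. Simplified from the HTTP/1.1 caching rules:
    Cache-Control: max-age wins, then Expires, else revalidate."""
    for directive in headers.get("Cache-Control", "").split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            try:
                return age_seconds < int(directive.split("=", 1)[1])
            except ValueError:
                return False
    expires = headers.get("Expires")
    if expires:
        try:
            # Expires uses an RFC 822-style date, as elsewhere in HTTP
            return datetime.now(timezone.utc) < parsedate_to_datetime(expires)
        except (TypeError, ValueError):
            return False
    return False  # no freshness information: revalidate with the origin
```

Every request answered from a fresh cache entry never reaches the origin server, which is exactly why Jeff's server logs undercount his readers.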

Not being able to tell how many people are really reading your web site is a consequence of how the Web works. The only difference now is that instead of HTML, the discussion is about RSS feeds. It's cool that some Web-based RSS readers provide readership numbers to website owners as part of their HTTP requests. However, this is a courtesy that they provide. Secondly, even if all Web-based RSS readers provided readership stats, there is still the fact that traditional HTTP proxy servers don't. Is my ISP's proxy server telling Jeff how many times it has served cached requests for his feed? I doubt it. I'm also pretty sure that the proxy servers at my employer don't either.

As for a technical add-on to the Creative Commons license? I'd be interested to see what kind of lawyering would produce a license that gives Jeff what he wants without requiring changes to every proxy server on the planet.


 

Due to the Thanksgiving holiday, I've spent the past day and a half with a large variety of young people whose ages range from 11 to 22 years old. The various conversations I've participated in and overheard have cemented some thoughts I've had about competition in the consumer software game.

This series of thoughts started with a conversation I had with someone who works on MSN Windows Live Search. We talked about the current focus we have on 'relevance' when it comes to our search engine. I agree that it's great to have goals around providing users with more relevant results, but I think this is just one [small] part of the problem. Google rose to prominence by providing a much better search experience than anyone else around. I think it's possible to build a search engine that is as good as Google's. I also think it's possible to build one that is a little better at providing relevant search results. However, I strongly doubt that we'll see a search engine much better than Google's in the near future. I think that in the near future, what we'll see is the equivalent of Coke vs. Pepsi. Eventually, will we see the equivalent of the Pepsi Challenge for Web search engines? Supposedly, the Pepsi Challenge shows that people prefer Pepsi to Coke in a blind taste test. However, the fact is that Coca-Cola is the world's #1 soft drink, not Pepsi. A lot of this is due to Coke's branding and pervasive high-quality advertising, not the taste of their soft drink.

Google's search engine brand has gotten to the point where it is synonymous with Web search in many markets. I've seen a 7-year-old girl who was told she was being taken to the zoo by her parents rush to the PC to 'Google' the zoo and find out what animals she'd see that day. That's how pervasive the brand is. It's like the iPod and portable MP3 players. People ask for iPods for Xmas, not MP3 players. When I get my next portable MP3 player, I'll likely just get a video iPod without even bothering to research the competition. Portable audio used to be synonymous with the Sony Walkman until the game changed and they got left behind. Now that portable audio is synonymous with MP3 players, it's the Apple iPod. I don't see them being knocked off their perch anytime soon unless another game-changing transition occurs.

So what does this mean for search engine competition and Google? Well, I think increasing a search engine's relevance to become competitive with Google's is a good goal but it is a route that seems guaranteed to make you the Pepsi to their Coke or the Burger King to their McDonalds. What you really need is to change the rules of the game, the way the Apple iPod did.

The same thing applies to stuff I work on in my day job. Watching an 11-year-old spend hours on MySpace and listening to college sorority girls talk about how much they use The Facebook, I realize we aren't just competing with other software tools and trying to build more features. We are competing with cultural phenomena. The MSN Windows Live Messenger folks have been telling me this about their competition with AOL Instant Messenger in the U.S. market, and I'm beginning to see where they are coming from.


 

Jeremy Epling responds to my recent post entitled Office Live: Evolve or Die with some disagreement. In his post Web Versions of Office Apps, Jeremy writes

In his post Office Live: Evolve or Die Dare Obasanjo writes

I can understand the Office guys aren’t keen on building Web-based versions of their flagship apps but they are going to have get over it. They will eventually have to do it. The only question is whether they will lead, follow or get the hell out of the way.

I agree and disagree with Dare on this one. I agree because I think OWA should have been built and has two compelling reasons to be Web based.

  1. Our increasing mobile population needs quick and easy anywhere access to communication. This is satisfied by a Web based app because a PC only needs a Web browser to “open” your mail app.
  2. Email is already stored on a server.

I disagree with Dare because the two OWA advantages I listed above don’t equally apply to the other Office apps.

I don’t think anywhere access to document creation/editing is one of Office’s customers’ biggest pain points. Since it is not a major pain point it does not warrant investment, because the cost of replicating all the Office flagship apps as AJAX Web apps is too high.

There's a lot to disagree with in Jeremy's short post. In the comments to my original post, Jeremy argued that VPN software makes AJAX web apps redundant. However, it seems he has conceded that this isn't true given the existence of Outlook Web Access. Considering that our increasingly mobile customers can use the main Outlook client either through their VPN or even over plain HTTP/HTTPS using the RPC over HTTP feature of Exchange, it is telling that many instead choose to use OWA.

Let's ignore that contradiction and just stick strictly to the rules Jeremy provides for deciding that OWA is worth doing but Web-based versions of Excel or Word are not. Jeremy's first point is that increasingly mobile users need access to their communication tools. I agree. However, I also believe that people need 'anywhere' access to their business documents. As a program manager, most of my output is product specifications and email (sad but true). I don't see why I need 'anywhere' access to the latter but not the former. Jeremy's second point is that corporate email is already stored on a server. First of all, in my scenario our team's documents are stored on a SharePoint server. Secondly, even if all my documents were on my local machine, that doesn't change the fact that ideally I should be able to access them from another machine without needing a VPN and the same version of Office on the machine I'm using. In fact, Orb.com provides exactly this 'anywhere' access to the digital media on my PC, and it works great. Why couldn't this notion be extended to my presentations, spreadsheets and documents as well?

Somebody is eventually going to solve this problem. As a b0rg employee, I hope it's Microsoft. However, if we keep resisting the rising tide that is the Web, maybe it'll be SalesForce.com or even Google that eats our lunch in this space.


 

Categories: Technology

November 23, 2005
@ 05:55 PM

The folks behind domains.live.com have a team blog which already has a bunch of informative posts answering the major questions people have had about the service. If you are interested in the service, you should definitely read their post entitled Let's Answer Some of Your Questions where they do exactly that.


 

Categories: Windows Live

November 23, 2005
@ 05:44 PM

I recently got a demo of Orb.com from Mike Torres and I was quite impressed. Mike talks about the capabilities of the service in his post Orb.com & "placeshifting" where he wrote

I now have unlimited storage for videos, television, music, and photos in my pocket for $399 less than an iPod video.

How is that possible?  www.Orb.com.

A free service designed for cell phones, PCs, Macs, and PDAs that gives me access to my computer at home to get at all of my digital media.  All of it.  I don’t have to worry about cradling or synchronizing anything every day... and I don’t have to carry a 250GB hard drive in my coat pocket either.  Beautiful.
...
From my phone or from my laptop at work (or anywhere else with a net connection):

I can listen to all of my WMA Lossless music on my PC at home; over 700 albums collected over the last 15 years.  I can play by genre, playlist, album, artist, or even do a search across all my music.  Of course, it doesn’t stream at lossless quality, but for walking around downtown Seattle, the quality is fine (128kbps – same as iTunes actually).

I can watch LIVE television – including cable – from wherever I am.  I can change the channel remotely – if I’m watching MSNBC and want to switch over to MTV, it takes about 5-7 seconds to do so.  If I like what I’m watching, I can click the record button and catch up with the program when I get home.  This blows people away when they see it ;)

I can view (and even download) my digital photos.  Whenever someone asks about some random event we attended together, I can pull up photos from that event within a few seconds.  With over 10,000 of my digital photos available to me from anywhere, I’ll never again say “Argh.  I have that on my computer at home.”

I can watch recorded television.  Meaning I can setup my Media Center PC to record shows to my hard drive and can click Play from anywhere to catch up with the show.  Before now, I didn’t have any time to watch Sportscenter... now I can listen to it on the way to work.

I can turn on my webcam and watch my cats.  Remember, I’m doing this from my freakin' PHONE.

I am so amazed by this service that I considered not even blogging about it.  I didn’t think I could do it justice.  I still don’t.
...
Overall this is the perfect example of combining software with online services to enable great scenarios for everyday people. It just works for me.

These guys have built a killer service. I'm quite surprised that they haven't been snapped up by one of the big web companies already.


 

November 23, 2005
@ 02:13 PM

I got the feedback from Bill Gates and his TA about my Thinkweek paper this morning.

They liked it. :)


 

Categories: Ramblings

In his post Really Simple Sharing, Ray Ozzie announced Simple Sharing Extensions for RSS and OPML. He writes

As an industry, we have simply not designed our calendaring and directory software and services for this “mesh” model. The websites, services and servers we build seem to all want to be the “owner” and “publisher”; it’s really inconsistent with the model that made email so successful, and the loosely-coupled nature of the web.

Shortly after I started at Microsoft, I had the opportunity to meet with the people behind Exchange, Outlook, MSN, Windows Mobile, Messenger, Communicator, and more. We brainstormed about this “meshed world” and how we might best serve it - a world where each of these products and others’ products could both manage these objects and synchronize each others’ changes. We thought about how we might prototype such a thing as rapidly as possible – to get the underpinnings of data synchronization working so that we could spend time working on the user experience aspects of the problem – a much better place to spend time than doing plumbing.

There are many great item synchronization mechanisms out there (and at Microsoft), but we decided we’d never get short term network effects among products if we selected something complicated – even if it were powerful. What we really longed for was "the RSS of synchronization" ... something simple that would catch on very quickly.

Using RSS itself as-is for synchronization wasn't really an option. That is, RSS is primarily about syndication - unidirectional publishing - while in order to accomplish the “mesh” sharing scenarios, we'd need bi-directional (actually, multi-directional) synchronization of items. But RSS is compelling because of the power inherent in its simplicity.
...
And so we created an RSS extension that we refer to as Simple Sharing Extensions or SSE. In just a few weeks time, several Microsoft product groups and my own 'concept development group' built prototypes and demos, and found that it works and interoperates quite nicely.

We’re pretty excited about the extension - well beyond the uses that catalyzed its creation. It’s designed in such a way that the minimum implementation is incredibly easy, and so that higher-level capabilities such as conflict handling can be implemented in those applications that want to do such things.

The model behind SSE is pretty straightforward: to synchronize data across multiple sources, each endpoint provides a feed and then subscribes to the feeds provided by the other endpoint(s). I hate to sound like a fanboy, but SSE is an example of how Ray Ozzie showed up at Microsoft and just started kicking butt. I've been on the periphery of some of the discussions of SSE and reviewed early drafts of the spec. It's been impressive seeing how quickly Ray got this idea polished and evangelized internally.
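The merge rule at the heart of this model can be sketched in a few lines. This assumes, per the SSE draft, that each item carries a sync id and a version number that gets bumped on every modification; the data structures here are illustrative, and real SSE also keeps a per-item update history so concurrent edits can be flagged as conflicts rather than silently resolved.

```python
def merge_endpoints(local, remote):
    """Merge two endpoints' item sets, SSE-style. Each set maps a sync
    id to a (version, payload) pair; the higher version wins for any id
    seen by both sides, ties keep the local copy, and ids unknown to one
    side are simply adopted from the other."""
    merged = dict(local)
    for item_id, (version, payload) in remote.items():
        if item_id not in merged or version > merged[item_id][0]:
            merged[item_id] = (version, payload)
    return merged
```

Because the rule is symmetric, each endpoint can apply it independently to the feed it pulls from the other side, and both converge on the same item set.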

The spec looks good, modulo the issues that tend to dog Microsoft when it ships specs like this. For example, there is a lack of detail around data types (e.g. nowhere is the date format used by the spec documented, although you can assume it uses RFC 822 dates based on the examples), and there are no test sites with feeds that use the format, so enterprising hackers can't quickly write some code to prototype implementations and try out ideas.
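If my RFC 822 assumption holds, implementers would parse the spec's timestamps the same way they already parse RSS pubDate values. In Python, for instance (the sample timestamp is mine, not from the spec):

```python
from email.utils import parsedate_to_datetime

# An RFC 822-style timestamp of the kind the SSE examples appear to use;
# which attribute it would live in is my assumption, since the spec
# itself doesn't document its date format.
stamp = "Sat, 12 Nov 2005 09:43:33 GMT"
when = parsedate_to_datetime(stamp)  # timezone-aware datetime in UTC
```

This is exactly the kind of detail a spec needs to pin down explicitly: two implementations that guess differently (say, RFC 822 vs. ISO 8601) will fail to interoperate on the most basic field.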

Sam Ruby has posted a blog entry critical of Microsoft's practices when it publishes RSS extension specifications in his post This is Sharing? where he writes

The first attribute that the Simple Sharing Extensions for RSS and OPML specifies is to “treat the item list as an ordered set”. This sounds like something from the Simple List Extensions Specification that was also hatched in private and then unleashed with great fanfare about five months ago. Sure, a wiki was set up, but any questions posted there were promptly ignored. The cone of silence has been so impenetrable that even invoking the name Scoble turns out to be ineffective. 

Now the Simple List Extensions Specification URI redirects to an ad for vaporware.  Some things never change.

Should we wait for version 3.0?

I agree with all of Sam's feedback. Hopefully Microsoft will do better this time around.


 

I just chatted with Torsten this morning and we've decided to ship the next release of RSS Bandit this weekend, on November 27th to be exact. As part of the final stretch, we've reached the point where we have to put out a request for translators.

As RSS Bandit is a hobbyist application worked on in our free time, we rely on the generosity of our users when it comes to providing translations of our application. Currently we have completed translations for Brazilian Portuguese, German and Traditional Chinese. We also have folks working on Hindi and Dutch translations. If you look at the supported language matrix for RSS Bandit, you'll note that this means we don't have translators for previously supported languages like Polish, French, Russian, Japanese, Turkish or Serbian. If you'd like to lend your skills as a translator to the next release of RSS Bandit and believe you can get this done by this weekend, then please send mail to . We'd appreciate your help.


 

Categories: RSS Bandit

Nick Bradbury has a post entitled An Attention Namespace for OPML where he writes

In a recent post I said that OPML would be a great format for sharing attention data, but I wasn't sure whether this would be possible due to uncertainty over OPML's support for namespaces.
...
As I mentioned previously, FeedDemon already stores attention data in OPML, but it uses a proprietary fd: namespace which relies on attributes that make little sense outside of FeedDemon. What I propose is that aggregator users and developers have an open discussion about what specific attention data could (and should) be collected by aggregators.

Although there's a lot of attention data that could be stored in OPML, my recommendation is that we keep it simple - otherwise, we risk seeing each aggregator support a different subset of attention data. So rather than come up with a huge list of attributes, I'll start by recommending a single piece of attention data: rank.

We need a way to rank feeds that makes sense across aggregators, so that when you export OPML from one aggregator, the aggregator you import into would know which feeds you're paying the most attention to. This could be used for any number of things - recommending related feeds, giving higher ranked feeds higher priority in feed listings, etc.

Although user interface and workflow differences require each aggregator to have its own algorithm for ranking feeds, we should be able to define a ranking attribute that makes sense to every aggregator. In FeedDemon's case, a simple scale (say, 0-100) would work: feeds you rarely read would be ranked closer to zero, while feeds you read all the time would be ranked closer to 100. Whether this makes sense outside of FeedDemon remains to be seen, so I'd love to hear from developers of other aggregators about this.
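To make Nick's proposal concrete, here is what a ranked OPML outline might look like and how an aggregator might read it. The rank attribute, its placement, and the 0-100 scale are all his suggestion plus my guess at a serialization, not any standard:

```python
import xml.etree.ElementTree as ET

# A hypothetical OPML fragment carrying the proposed rank attribute on
# each feed outline. Attribute name and scale are illustrative only.
opml = """<opml version="1.1">
  <body>
    <outline type="rss" text="Example Feed"
             xmlUrl="http://example.org/feed.xml" rank="87"/>
  </body>
</opml>"""

root = ET.fromstring(opml)
# Map each subscribed feed URL to its rank, defaulting to 0 when absent
ranks = {o.get("xmlUrl"): int(o.get("rank", "0"))
         for o in root.iter("outline") if o.get("type") == "rss"}
```

An importing aggregator could then sort or weight subscriptions by these values; whether the numbers mean anything across different readers is exactly the open question.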

I used to be the program manager responsible for a number of XML technologies in the .NET Framework while I was on the XML team at Microsoft. The technology I spent the most time working with was the XML Schema Definition Language (XSD). After working with XSD for about three years, I came to the conclusion that XSD has held back the proliferation and advancement of XML technologies by two or three years. The lack of adoption of web services technologies like SOAP and WSDL on the World Wide Web is primarily due to the complexity of XSD. The fact that XQuery has spent over five years in standards committees and has evolved into a technology too complex for the average XML developer is also primarily the fault of XSD. This is because XSD is extremely complex yet rather inflexible, with minimal functionality. This state of affairs is primarily due to its nature as a one-size-fits-all technology with too many contradictory design objectives. In my opinion, the W3C XML Schema Definition language is a victim of premature standardization. The XML world needed to experiment more with various XML schema languages like XDR and RELAX NG before settling down and coming up with a standard.

So what does this have to do with attention data and XML? Lots. We are a long way from standardization. We aren't even well into the experimentation stage yet. How many feed readers do a good job of giving you an idea of which among the various new items in your RSS inbox are worth reading? How many of them do a good job suggesting new feeds for you to read based on your reading habits? Until we get to a point where such features are common place in feed readers, it seems like putting the cart way before the horse to start talking about standardizing the XML representation of these features.

Let's look at the one field Nick talks about standardizing: rank. He wants all readers to track 'rank' using a numeric scale of 0-100. This seems pretty arbitrary. In RSS Bandit, users can flag posts as Follow Up, Review, Read, Reply or Forward. How does that map to a numeric scale? It doesn't. If I allowed users to prioritize feeds, it wouldn't be in a way that maps cleanly to a numeric scale. 

My advice to Nick and others who are entertaining ideas around standardizing attention data in OPML: go build some features first and see which ones work for end users and which ones don't. Once we've figured that out amongst multiple readers with diverse user bases, then we can start thinking about standardization.