As the developer of an RSS aggregator I'm glad to see Microsoft's Simple List Extensions for RSS. Many of the aggregator developers I spoke to at Gnomedex this weekend felt the same way. The reason to be happy about these extensions is that they provide a way to fix a number of key feeds that appear broken in RSS aggregators today, such as the MSN Music Top 100 Songs feed, the iTunes Top 25 Songs feed and the Netflix Top 100 Movies feed.

The reasons these feeds appear broken in every aggregator in which I have tried them are covered in a previous post of mine entitled The Netflix Problem: Syndicating Ordered Lists in RSS. For those who don't have time to go back and read that post, the following list summarizes the problems with these feeds:

  1. When the list changes, some items change position, new ones enter the list and old ones leave. An RSS reader doesn't know to remove items that have left the list from the display and in some cases may not know to eliminate duplicates. Eventually you have a garbled list with last week's #25 song, this week's #25 song and last month's #25 song all in the same view.

  2. There is no way to know how to sort the list. Traditionally RSS aggregators sort entries by date, which doesn't make sense for an ordered list.

The RSS extensions provided by Microsoft are meant to solve these problems and improve the currently poor experience of people who subscribe to ordered lists using RSS.

To solve the first problem, Microsoft has provided the cf:treatAs element with the value "list", which acts as a signal to aggregators that whenever the feed is updated the previous contents should be discarded or archived and replaced by the new contents of the list. That way we no longer have last week's Top 25 song list commingled with this week's list. The interesting question for me is whether RSS Bandit should always refresh the contents of the list view when a list feed is updated (i.e. the feed always contains the current list) or whether to keep the old version of the list, perhaps grouped by date. My instinct is to go with the first option. I know Nick Bradbury also had some concerns about what the right behavior should be for treating lists in FeedDemon.
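Here's a minimal C# sketch (not RSS Bandit's actual code) of the refresh behavior a cf:treatAs value of "list" implies; the namespace URI and the surrounding types are placeholders of my own.

    using System;
    using System.Collections.Generic;
    using System.Xml;

    class ListFeedRefresher
    {
        // Placeholder namespace URI for the Simple List Extensions; check the spec for the real value.
        const string CfNamespace = "http://www.microsoft.com/schemas/rss/core/2005";

        // Replaces the cached items wholesale when the feed declares itself a list,
        // otherwise merges the new items with the ones we already have.
        public static List<XmlElement> Refresh(XmlDocument feed, List<XmlElement> cachedItems)
        {
            XmlNamespaceManager nsmgr = new XmlNamespaceManager(feed.NameTable);
            nsmgr.AddNamespace("cf", CfNamespace);

            XmlNode treatAs = feed.SelectSingleNode("/rss/channel/cf:treatAs", nsmgr);
            bool isList = treatAs != null && treatAs.InnerText.Trim() == "list";

            List<XmlElement> newItems = new List<XmlElement>();
            foreach (XmlElement item in feed.SelectNodes("/rss/channel/item"))
                newItems.Add(item);

            if (isList)
                return newItems;            // dump the old contents; the feed *is* the list

            cachedItems.AddRange(newItems); // normal feed: accumulate (deduplication omitted here)
            return cachedItems;
        }
    }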

To solve the second problem, Microsoft has provided the cf:sort element, which can be used to specify which elements of an item should be used for sorting, whether each field is numeric or textual (so we know how to sort it), and what human-readable name should be shown to the user for the field. I'm not really sure how to support this in RSS Bandit. Having every feed be able to specify what columns to show in the list view complicates the user interface somewhat and requires a degree of flexibility in the code. Changing the code to handle this should be straightforward, although it may add some complexity.
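To make that description concrete, here's a rough C# sketch of reading the sort metadata out of a feed. The namespace URI, the cf:listinfo container and the attribute names (element, data-type, label) are my best guesses at the spec's vocabulary rather than something I've verified against it.

    using System;
    using System.Xml;

    // Describes one sortable column advertised by a list feed.
    class SortField
    {
        public string ElementName;   // which child element of <item> to sort on
        public bool IsNumeric;       // numeric vs. textual comparison
        public string Label;         // human-readable column name to show the user
    }

    class SortFieldReader
    {
        const string CfNamespace = "http://www.microsoft.com/schemas/rss/core/2005"; // assumed

        // Pulls the cf:sort declarations out of the channel, if any.
        public static SortField[] Read(XmlDocument feed)
        {
            XmlNamespaceManager nsmgr = new XmlNamespaceManager(feed.NameTable);
            nsmgr.AddNamespace("cf", CfNamespace);

            XmlNodeList nodes = feed.SelectNodes("/rss/channel/cf:listinfo/cf:sort", nsmgr);
            SortField[] fields = new SortField[nodes.Count];
            for (int i = 0; i < nodes.Count; i++)
            {
                XmlElement sort = (XmlElement)nodes[i];
                fields[i] = new SortField();
                fields[i].ElementName = sort.GetAttribute("element");            // attribute names assumed
                fields[i].IsNumeric = sort.GetAttribute("data-type") == "number";
                fields[i].Label = sort.GetAttribute("label");
            }
            return fields;
        }
    }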

On the other hand there are some user interface problems. For one, I'm not sure what the default sort field for lists should be. My gut instinct is to add a "Rank" column to the list of columns RSS Bandit supports by default and have it be a numeric field that is numbered using the document order of the feed, so the first item has rank 1, the second has rank 2, and so on. This handles the case where a feed has a cf:treatAs element but no cf:sort values, which will be needed for feeds such as the Netflix Top 100 feed that don't have a field usable for sorting. The second problem is how to show the user which columns can be added for a feed. Luckily we already have a column chooser that is configurable per feed in RSS Bandit. However, we now have to make the set of columns offered in that chooser configurable per feed as well. This might be confusing to users, but I'm not sure what other options we can try.
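The "Rank" fallback itself is simple enough to sketch: when a feed says it's a list but declares no sort fields, just number the items in document order. This is a sketch against hypothetical item objects, not RSS Bandit's actual classes.

    using System.Collections.Generic;

    class RankedItem
    {
        public string Title;
        public int Rank;   // synthetic "Rank" column, 1-based document order
    }

    class RankAssigner
    {
        // Assigns ranks in the order the items appeared in the feed, so feeds like the
        // Netflix Top 100 (which carry no sortable field) still get a sensible default sort.
        public static void AssignDocumentOrderRanks(List<RankedItem> itemsInDocumentOrder)
        {
            for (int i = 0; i < itemsInDocumentOrder.Count; i++)
                itemsInDocumentOrder[i].Rank = i + 1;
        }
    }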


 

I missed the first few minutes of this talk.

Bob Wyman of PubSub stated that he believed Atom was the future of syndication. Other formats would eventually become legacy formats analogous to RTF in the word processing world: supported, but rarely chosen for new efforts.

Mark Fletcher of Bloglines then interjected and pleaded with the audience to stop the practice of providing the same feed in multiple formats. Bob Wyman agreed with his plea and also encouraged members of the audience to pick one format and stick to it. Having the same feed in multiple syndication formats confuses end users who are trying to subscribe to the feed and leads to duplicate items showing up in services that search syndicated content, such as PubSub, Feedster or the Bloglines search features.

A member of the audience responded that he used multiple formats because different aggregators support some formats better than others. Bob Wyman replied that bugs in aggregators should result in pressure on RSS aggregator developers to fix them instead of confusing end users by serving multiple versions of the same feed. Bob then advocated picking Atom, since a lot of lessons had been learned via the IETF process to improve the format. Another audience member mentioned that 95% of his syndication traffic was for his RSS feed, not his Atom feed, so he knows which format is winning in the marketplace.

A question was raised about whether the admonition to avoid multiple versions of a feed also applied to sites that have multiple feeds for separate categories of content, the specific example being a regular feed and a podcast feed. Bob Wyman thought that this was not a problem; the problem was the same content served in different formats.

The discussion then switched to ads in feeds. Scott Rafer of Feedster said he agreed with Microsoft's presentation from the previous day that Subscribing is a new paradigm that has come after Browsing and Searching for content. Although we have figured out how to provide ads to support the Browse & Search scenarios, we are still experimenting with how to provide ads to support the Subscribe scenarios. Some sites like the New York Times use RSS to draw people to their websites by providing excerpts in their feeds, certain consultants have full-text feeds which they view as advertising their services, while others put ads in their feeds. Bob Wyman mentioned that PubSub is waiting to see which mechanism the market settles on for advertising in feeds before deciding on an approach. He added that finding a model for advertising and syndication was imperative so that intermediary services like PubSub, Feedster and Bloglines can continue to exist.

An audience member then followed up and asked why these services couldn't survive by providing free services to the general public and charging corporate users instead of resorting to advertising. The response was that both PubSub and Feedster already have corporate customers who they charge for their services, but this revenue is not enough for them to continue providing services to the general public. The Bloglines team considered fee-based services but discarded the idea because they felt it would be a death knell for the service, given that most service providers on the Web are free, not fee-based.

An audience member asked if any of the services would have done anything differently two years ago when they started, given the knowledge they have now. The answers were that Feedster would have chosen a different back-end architecture, Bloglines would have picked a better name and PubSub would have started a few months to a year sooner.

I asked the speakers what features they felt were either missing in RSS or not being exploited. Mark Fletcher said that he would like to see more usage of the various comment related extensions to RSS which currently aren't supported by Bloglines because they aren't in widespread use. The other speakers mentioned that they will support whatever the market decides is of value.


 

Scott Gatz of Yahoo! started by pointing out that there are myriad uses for RSS. For this reason he felt that we need more flexible user experiences for RSS that map to these various uses. For example, a filmstrip view is more appropriate for reading a feed of photos than the traditional blog- and news-oriented user interface typically favored by RSS readers. Yahoo! is definitely thinking about RSS beyond just blogs and news, which is why they've been working on Yahoo! Media RSS, an extension to RSS that makes it better at syndicating digital media content like music and video. Another aspect of syndication Yahoo! believes is key is the ability to keep people informed about updates independent of where they are or what device they are using. This is one of the reasons Yahoo! purchased the blo.gs service.

Dave Sifry of Technorati stated that he believed the library model of the Web where we talk about documents, directories and so on is outdated. The Web is more like a river or stream of millions of state changes. He then mentioned that some trends to watch that emphasized the changing model of the Web were microformats and tagging.

BEGIN "Talking About Myself in the Third Person"

Steve Gillmor of ZDNet began by pointing out Dare Obasanjo in the audience and saying that Dare was his hero and someone he admired for the work he'd done in the syndication space. Steve then asked why, in a recent blog posting, Dare had mentioned that he would not support Bloglines' proprietary API for synchronizing a user's subscriptions with a desktop RSS reader but had gone on to mention that he would support Newsgator Online's proprietary API. Specifically he wondered why Dare wouldn't work towards a standard instead of supporting proprietary APIs.

At this point Dare joined the three speakers on stage. 

Dare mentioned that from his perspective there were two major problems that confront users of an RSS reader. The first is that users eventually need to be able to read their subscriptions from multiple computers, because many people have multiple computers (e.g. home & work or home & school) from which they read news and blogs. The second problem is that, due to the ease of subscribing to feeds, people eventually succumb to information overload and need a way to see only the most important or interesting content in the feeds to which they are subscribed. This is the "attention problem" that Steve Gillmor is a strong advocate of solving. The issue discussed in Dare's blog post is the former, not the latter. The reason for working with the proprietary APIs provided by online RSS readers instead of advocating a standard is that the online RSS readers are the ones in control. At the end of the day, they are the ones that provide the API, so they are the ones that have to decide whether they will create a standard or not.

Dare rejoined the audience after speaking.  

END "Talking About Myself in the Third Person"

Dave Sifry followed up by encouraging cooperation between vendors to solve the various problems facing users, citing Yahoo! working with Marc Canter on digital media as an example.

Steve Gillmor then asked audience members to raise their hand if they felt that the ability to read their subscriptions from multiple computers was a problem they wanted solved. Most of the audience raised their hands in response.

A member of the audience responded to the show of hands by advocating that people use web-based RSS readers like Bloglines. Scott Gatz agreed that using a web-based aggregator was the best way to access one's subscriptions from multiple computers. There was some disagreement between members of the audience and the speakers over whether problems using Bloglines from mobile devices prevent it from being the solution to this problem.

From the audience, Dave Winer asked Dave Sifry why Technorati invented Attention.Xml instead of reusing OPML. The response was that the problem goes beyond just synchronizing the list of feeds the user is subscribed to.

Steve Gillmor ended the session by pointing out that once RSS usage becomes widespread someone will have to solve the problem once and for all.  


 

This was a keynote talk given by Dean Hachamovitch and Amar Gandhi that revealed the RSS platform that will exist in Longhorn and the level of RSS support in Internet Explorer 7, and showed some RSS extensions that Microsoft is proposing.

Dean started by talking about Microsoft's history with syndication. In 1997, there was Active Desktop and channels in IE 4.0 & IE 5.0, which weren't really successful. We retreated from the world of syndication for a while after that. Then in 2002, Don Box started blogging on GotDotNet. In 2003, we hired Robert Scoble. In 2004, Channel 9 was launched. Today we have RSS feeds coming out of lots of places at Microsoft. This includes the various feeds on the 14-15 million blogs on MSN Spaces, the 1,500 employee blogs on http://blogs.msdn.com and http://blogs.technet.com, hundreds of feeds on the Microsoft website and even MSN Search, which provides RSS feeds for search results.

Using XML syndication is an evolution in the way people interact with content on the web. The first phase was browsing the Web for content using a web browser. Then came searching the Web for content using search engines. And now we have subscribing to content using aggregators. Each step hasn't replaced the one before it but instead has enhanced the user experience of the Web. In Longhorn, Microsoft is betting big on RSS both for end users and for developers in three key ways:

  1. Throughout Windows various experiences will be RSS-enabled and will be easy for end users to consume
  2. An RSS platform will be provided that makes it easy for developers to RSS-enable various scenarios and applications
  3. The number of scenarios that RSS handles will be increased by proposing extensions

Amar then demoed the RSS integration in Internet Explorer 7. Whenever Internet Explorer encounters an RSS feed, a button in the browser chrome lights up to indicate that a feed is available. Clicking on the button shows a user-friendly version of the feed that provides rich search, filtering and sorting capabilities. The user can then hit a '+' button and subscribe to the feed. Amar then navigated to http://search.msn.com and searched for "Gnomedex 5.0". Once he got to the search results, the RSS button lit up and he subscribed to the search results. This shows one possible workflow for keeping abreast of news of interest using the RSS capabilities of Internet Explorer 7 and MSN Search.

At this point Amar segued into talking about the Common RSS Feed List. This is a central list of the feeds a user is subscribed to that is accessible to all applications, not just Internet Explorer 7. Amar then showed a demo of an altered version of RSS Bandit which used the Common RSS Feed List and could pick up both feeds he'd subscribed to during the previous demo in Internet Explorer 7. I got a shout out from Amar at this point and some applause from the audience for helping with the demo. :)

Dean then started to talk about the power of the enclosure element in RSS 2.0. What is great about it is that it enables one to syndicate all sorts of digital content: video, music, calendar events, contacts, photos and so on, all thanks to the flexibility of enclosures.

Amar then showed a demo using Outlook 2003 and an RSS feed of the Gnomedex schedule he had created. The RSS feed had an item for each event on the schedule and each item had an iCalendar file as an enclosure. Amar had written a 200 line C# program that subscribed to this feed then inserted the events into his Outlook calendar so he could overlay his personal schedule with the Gnomedex schedule. The point of this demo was to show that RSS isn't just for aggregators subscribing to blogs and news sites.
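I don't have Amar's program, but the feed-reading half of a demo like that is straightforward to sketch in C#: walk the items, pick out the iCalendar enclosures and download them. Pushing the events into an Outlook calendar via the Outlook object model is the part I've left out here.

    using System;
    using System.Net;
    using System.Xml;

    class EnclosureDownloader
    {
        // Downloads every iCalendar enclosure in an RSS 2.0 feed to the given folder.
        public static void DownloadCalendarEnclosures(string feedUrl, string targetFolder)
        {
            XmlDocument feed = new XmlDocument();
            feed.Load(feedUrl);

            int count = 0;
            foreach (XmlElement enclosure in feed.SelectNodes("/rss/channel/item/enclosure"))
            {
                string url = enclosure.GetAttribute("url");
                string type = enclosure.GetAttribute("type");
                if (type != "text/calendar")   // iCalendar MIME type
                    continue;

                using (WebClient client = new WebClient())
                {
                    string target = System.IO.Path.Combine(targetFolder, "event" + (++count) + ".ics");
                    client.DownloadFile(url, target);
                }
            }
        }
    }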

Finally, Dean talked about syndicating lists of content. Today lots of people syndicate Top 10 lists, ToDo lists, music playlists and so on. However, RSS is limited in how it can describe the semantics of a rotating list. Specifically, the user experience when the list changes, such as when a song in a Top 10 list leaves the list or moves to another position, is pretty bad. I discussed this very issue in a blog post from a few months ago entitled The Netflix Problem: Syndicating Ordered Lists in RSS.

Microsoft has proposed some extensions to RSS 2.0 that allow RSS readers to deal with ordered lists better. A demo was shown that used data from Amazon Web Services to create an RSS feed of an Amazon wish list (the data was converted to RSS feeds with the help of Jeff Barr). The RSS extensions provided information that enabled the demo application to know which fields to use for sorting and/or grouping the items in the wish list feed.

The Microsoft Simple List Extensions Specification is available on MSDN. In the spirit of the RSS 2.0 specification, it is available under the Creative Commons Attribution-ShareAlike License (version 2.5).

A video was then shown of Lawrence Lessig where he commended Microsoft for using a Creative Commons license.

The following is a paraphrasing of the question & answer session after their talk:

Q: What syndication formats are supported?
A: The primary flavors of RSS such as RSS 0.91, RSS 1.0 and RSS 2.0 as well as the most recent version of Atom.

Q: How granular is the common feed list?
A: The Longhorn RSS object model models all the data within the RSS specification including some additional metadata. However it is fairly simple with 3 primary classes.

Q: Will Internet Explorer 7 support podcasting?
A: The RSS platform in Longhorn will support downloading enclosures.

Q: What is the community process for working on the specifications?
A: An email address for providing feedback will be posted on the IE Blog. Robert Scoble also plans to create a wiki page on Channel 9.  

Q: What parts of the presentation are in IE 7 (and thus will show up in Windows XP) and what parts are in Longhorn?
A: The RSS features of Internet Explorer 7 such as autodiscovery and the Common RSS Feed List will work in Windows XP. It is unclear whether other pieces such as the RSS Platform Sync Engine will make it to Windows XP.

Q: What are other Microsoft products such as Windows Media Player doing to take advantage of the RSS platform?
A: The RSS platform team is having conversations with other teams at Microsoft to see how they can take advantage of the platform.


 

I've been doing a lot of thinking and writing (at work) about Web platforms and Web APIs recently. Just yesterday I was talking to some folks about Web APIs and an interesting question came up about the Google Web APIs. Basically we couldn't come up with good reasons for Google to have exposed them or for people to use them in their current form. I left the conversation wondering whether I just wasn't smart enough to see the brilliance of their strategy. Thanks to a post by Alex Bosworth entitled Search Engine APIs it seems I may not be so dumb after all. He wrote:

Recently, the search engines have been trying to get into the mix, with new pushes to gain developer followers. Google recently launched a new subdomain emphasizing their commitment to not being a technology black hole. Less recently, Yahoo launched their own developer site, in an attempt to build 3rd party value around their product.

However so far these 3rd party applications haven't found widespread appeal and had breakout success. This is in large part because of crippling artificial restrictions placed on Google/Yahoo/etc APIs.

It's not to say that applications made on platforms haven't been fairly cool ie: Yahoo News In Pictures, or clustering Yahoo search results.

But these will never progress beyond cool toys, and I'm not sure that Yahoo/Google/etc realize that. Talking to Google engineers, I was informed that Google thinks of their 3rd party API program as a complete failure and only a couple people have made anything vaguely useful from it. The reason why is they have no real commitment to making these programs work. For example their terms and conditions are so draconian and laden with legalese there is no motive for developers to work with them.

This confirms some of the suspicions I had about their search engine API. Of course, this isn't to say I think providing programmatic access to search engine results is a bad idea. I just think Google's approach is very weird.

For example, I like the fact that I can get the results of a query on recent news items for terms such as "G-Unit Game beef" as an RSS feed from MSN Search.  Or that I can subscribe to the results of searching for my name on Feedster in my favorite RSS reader. As an end user I get a lot of use out of them and as a developer they are a cinch to work with.

There's a difference between cool apps and useful apps. The people at Google sometimes don't get the difference but the guys at Yahoo! definitely do. All you have to do is compare Yahoo! Maps to Google Maps to figure this out. Similarly I think the APIs provided by Google don't really enable people to actually build useful apps just cool little toys. MSN Search and Yahoo! seem to be on a better track because they enable more useful scenarios with their breadth of services exposed as APIs* as well as the fact that they provide these results as RSS.

What I've actually found surprising is that neither service puts sponsored search results in the results returned. I assume this is because we are still in the early days of search syndication, but once it catches on and regular people are subscribing to search results so they can do things like build their own newspaper/dashboard, then I'm sure we'll see the ads creep in.

* RSS is an API. Deal with it.


 

Categories: MSN | XML Web Services

Stephen O'Grady has a blog post entitled Gmail Fighting Off Greasemonkey? where he writes:

I'm not sure of the reasoning behind it, but it appears that Google may have made some behind the scenes changes to Gmail that disrupted the scripts and extensions I rely on to get the most out of my account. One of the Apache devs noted here that his Greasemonkey enabled persistent searches were no longer functioning, and in the same timeframe I've lost my delete key. It's not just Greasemonkey scripts, however, as my Gmail Notifier extension for Firefox has apparently been disabled as well. While it's Google's prerogative to make changes to the service as necessary for maintenance or other reasons, I'm hoping this is not a deliberate play at preventing would-be participants from enhancing the value of Google's service. It's remarkable how much less useful Gmail is to me when I have to log in to see if I have mail, or can't easily delete the many frivolous emails I receive each day (yes, I'm aware that I can use a POP client for this, but I'd rather not).
...
Update: As one reader and a few posters to the GM listserv have noted, one thing that's disrupted a variety of user scripts has been the fact that the domain to Gmail has changed from gmail.google.com to mail.google.com. While simply adding the domains into the GM interface had no effect on my Gmail, a reinstallation of a version of the script with updated domain returned my beloved Delete button. What do developers think of this change with Google's service? Here's one take from the GM list: "I noticed [the domain change] too. Why they can't just leave it alone, I can't understand." To be a bit less harsh, while Google probably had good reasons for making the change, it would have been great to see them be proactive and notify people of the change via their blog or some other mechanism.

I find this hilarious. Greasemonkey scripts work by effectively screen scraping the website and inserting changes into the HTML. Stephen and others who are upset by Google's change are basically saying that Google should never change the HTML or URL structure of the website ever again because it breaks their scripts. Yeah, right.

Repeat after me: a web page is not an API or a platform.

Speaking of apps breaking because of GMail domain changes it seems RSS Bandit users were also affected by the GMail domain name change. It looks like we have a problem with forwarding the username and password after being redirected from https://gmail.google.com/gmail/feed/atom to https://mail.google.com/gmail/feed/atom. I'll fix this bug in the next version but in the meantime RSS Bandit users should be able to work around this by changing the URL manually to the new one. My bad.
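For the curious, the likely shape of the fix is to stop depending on the automatic redirect carrying credentials along and instead follow the redirect manually, re-attaching the credentials to the new URL. A simplified sketch of that idea (not the actual RSS Bandit patch):

    using System;
    using System.IO;
    using System.Net;

    class AuthenticatedFeedFetcher
    {
        // Fetches a feed that requires Basic auth, re-sending credentials if the server
        // redirects us to a different host (as GMail did when it moved from
        // gmail.google.com to mail.google.com).
        public static string Fetch(string url, string userName, string password)
        {
            for (int hop = 0; hop < 5; hop++)   // guard against redirect loops
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                request.Credentials = new NetworkCredential(userName, password);
                request.AllowAutoRedirect = false;   // handle redirects ourselves

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    int status = (int)response.StatusCode;
                    if (status >= 300 && status < 400)
                    {
                        url = new Uri(new Uri(url), response.Headers["Location"]).ToString();
                        continue;   // retry against the new location with the same credentials
                    }

                    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                        return reader.ReadToEnd();
                }
            }
            throw new WebException("Too many redirects for " + url);
        }
    }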


 

Categories: RSS Bandit | Technology

I've been really busy at work and haven't had free time left over to do significant work on RSS Bandit for the past few weeks. I should have some more free time starting this weekend and will start making progress again. In the meantime some decisions have been made which are probably worth sharing.

Synchronization with Bloglines and Newsgator Online
I've mentioned in the past that I'd like the next version of RSS Bandit to be able to synchronize its state with a user's account in Newsgator Online or Bloglines. However I've also mentioned the various limitations of the Bloglines sync API which make it less than ideal as a way to synchronize the state of a desktop RSS reader with an online aggregator. On the flip side, I've had some discussions with folks at Newsgator which imply that I'll be able to get a full synchronization API from them by the time Nightcrawler ships.

My current decision is that I am dropping plans for synchronizing RSS Bandit with Bloglines due to the limitations of its API, which are guaranteed to leave users with a poor experience. Instead the only new synchronization target added in the next release will be Newsgator Online.

Newsgroup support
One interesting problem that came up when integrating newsgroups into the RSS Bandit user interface was what to use as permalinks for newsgroup posts. Thankfully, Google Groups comes to the rescue here. Each newsgroup post is given a permalink that maps to the Google Groups URL for that message ID. This works fine with public newsgroups that are archived by Google but leads to broken links for internal newsgroups. I think this is a decent trade-off.
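The mapping itself is trivial: take the NNTP Message-ID header, strip the angle brackets and plug it into a Google Groups lookup URL. The query-string format below is from memory and should be treated as an assumption rather than the documented Google Groups URL scheme.

    using System;

    class NewsgroupPermalinks
    {
        // Builds a Google Groups permalink from an NNTP Message-ID header such as
        // "<abc123@example.com>". The query-string format is an assumption.
        public static string FromMessageId(string messageId)
        {
            string trimmed = messageId.Trim().Trim('<', '>');
            return "http://groups.google.com/groups?selm=" + Uri.EscapeDataString(trimmed);
        }
    }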

The screenshot below shows this feature in action.

Subscription Wizard
Given that we now provide the ability to subscribe to both newsgroups and RSS/Atom feeds, we've decided to revamp the feed subscription mechanism. We'll be adding an "Add Subscription" wizard which gives people the option of adding an RSS or Atom feed, a newsgroup, a web page (which we then scour for feeds) or a search term (which we map to an MSN Search or Feedster search results feed, as sketched below). My initial instinct with the last option was to just go with PubSub, but it doesn't seem to have a straightforward mechanism for people to create search feeds programmatically. REST wins again.
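For the search-term option, the wizard mostly boils down to turning a query string into a search-results feed URL. The sketch below shows the idea; both URL formats are illustrative guesses, not verified endpoints.

    using System;

    enum SearchFeedProvider { MsnSearch, Feedster }

    class SearchFeedUrlBuilder
    {
        // Maps a search term to an RSS search-results feed URL.
        // Both URL formats below are illustrative guesses, not verified endpoints.
        public static string Build(string searchTerm, SearchFeedProvider provider)
        {
            string query = Uri.EscapeDataString(searchTerm);
            switch (provider)
            {
                case SearchFeedProvider.MsnSearch:
                    return "http://search.msn.com/results.aspx?q=" + query + "&format=rss";
                case SearchFeedProvider.Feedster:
                    return "http://www.feedster.com/search/" + query + "?media=rss";
                default:
                    throw new ArgumentException("Unknown provider");
            }
        }
    }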

Screenshot below 


 

Categories: RSS Bandit

June 22, 2005
@ 04:59 PM
One of the reasons I came to work at MSN is that I believe strongly in the power of software to connect people. I recently found a post entitled Spaces Friends which shows exactly how strong these connections can become. The post is excerpted below:

The Importance of Spaces

 One should never expect the good fortune that has come my way since I was first aware of the MSN Spaces wide beta release last fall.  I first started writing on December 4, 2004, as I recall on a Spaces blog with a different title and editorial theme.  My experience since has been both personally rewarding and life-expanding.

I published (posted) the first poetry I ever showed to anyone in this life here on Crackers.  I’ve made some attempts at humor in some entries.  I have written some serious stuff as well.  All of it is pretty strange coming from an old Electrical Engineer/Physicist/high-tech manager/pilot/geek.  But other guys I’ve known with similar histories have done even more bizarre stuff.

More important than publishing fledging writing has been some remarkable friendships that have developed by magic from initial contacts left in the comments to entries.  The first of these came in late December, 2004, and remains one of my strongest adult friendships, and there is every reason to believe that it is a life-long association even though we may never see each other face to face.  Geographical distance can be a real barrier at times, but one that is breaking down rapidly because of Internet-enabled communication.  Who knows?  We might meet in person someday down the road.  Along the way a few other friendships have developed which span great distances.  Each has been an asset to my life.
...
Now, as I find myself in a situation I simply can’t manage without help, my friends have made a selfless commitment getting me through the initial phases of my cancer treatment.

As my first Spaces friend said the other day, “Isn’t it interesting how God places people in our path at important points?”  It’s more than interesting, B.  It’s going to help save my life.  MSN Spaces has been of the utmost importance to me.  Maybe it will continue to be.

A number of us working on Spaces read Bill's blog and our thoughts definitely go out to him in this time of need. It is really humbling to see how the work we do is changing people's lives.



 

Categories: Social Software

From the press release MSN Continues Investments in Search With the Launch of Local Search we learn:

Starting today, consumers in the United States will see a new Local category added to the MSN Search options on MSN.com. When consumers search for local information, they will receive results from city- and region-specific White Pages and Yellow Pages directory information. For example, a local search on “auto mechanics” could bring up listings of nearby mechanics, repair shops and towing companies.

Each local search result is shown as a numbered pin on a corresponding map provided through Microsoft® MapPoint® Web Service, and digital aerial images are supplied by TerraServer-USA when available for a given search result. The TerraServer-USA Web site is one of the world’s largest online databases, providing access to a vast data store of maps and aerial photographs of the United States. Originating at the Microsoft Bay Area Research Center, TerraServer is operated by Microsoft as a research project for developing advanced database technology.

The new MSN Local Search functionality is an evolution of the Near Me search feature that debuted on MSN Search in February of this year and allowed consumers to receive search results tailored to a geographic location. Those interested in trying the beta of the new offering should visit http://www.msn.com.

Kudos to the MapPoint/VirtualEarth and MSN Search folks for getting this out the door. My first test was a search for "pizza" in my area. I then decided to figure out how to specify my location with more accuracy. Come to think of it, I'm not sure how it even figured out to look in Seattle. Anyway, I followed the link from the phrase How do I change my location? and saw that I can specify my city, state and zip code on the settings page, and that this location is stored in a cookie.

There are two minuses here. The first is that I can't store my exact address. When I'm looking for a pizza place I want to find one that delivers in my area, not just one in the same city as me. The second is that I can only store one location, which is one better than Google Maps but quite poor compared to Yahoo! Maps. It looks like I'll be sticking with Yahoo! for now, but I will send the MSN Search folks some feedback about needing to add these features.

By the way, a more direct link to the service is http://search.msn.com/local.


 

Categories: MSN

Joe Wilcox has a post that has me scratching my head today. In his post Even More on New Office File Formats, he writes

Friday's eWeek story about Microsoft XML-based formats certainly raises some questions about how open they really are. Assuming reporter Peter Galli has his facts straight, Microsoft's formats license "is incompatible with the GNU General Public License and will thus prevent many free and open-source software projects from using the formats." Earlier this month, I raised different concerns about the new formats' openness.

To reiterate a point I made a few weeks ago: Microsoft's new Office formats are not XML. The company may call them "Microsoft Office Open XML Formats," but they are XML-based, which is nowhere near the same as being XML or open, as has been widely misreported by many blogsites and news outlets.

There are two points I'd like to make here. The first is that "being GPL compatible" isn't a definition of 'open' that I've ever heard anyone make. It isn't even the definition of Open Source or Free Software (as in speech). Heck, even the GNU website has a long list of Open Source licenses that are incompatible with the GPL. You'll notice that this list includes the original BSD license, the Apache license, the Zope license, and the Mozilla Public License. I doubt that eWeek will be writing articles about how Apache and Mozilla are not 'open' because they aren't GPL compatible.

Secondly, it's completely unclear to me what distinction Joe Wilcox is making between being XML and being XML-based. The Microsoft Office Open XML formats are XML formats. They are stored on the hard drive as compressed XML files using standard compression techniques that are widely available on most platforms. Compressing an XML file doesn't change the fact that it is XML. Reading his linked posts doesn't provide any insight into whether this is the distinction Joe Wilcox is making or whether there is another. Anyone have any ideas about this?
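One way to make the "compressed XML" point concrete: an Office Open XML document is a ZIP container whose parts are plain XML that any XML parser can load once you crack open the archive. The sketch below uses the modern .NET ZIP API (which obviously post-dates this post), and the part name word/document.xml reflects the format as it eventually shipped, so treat the details as illustrative.

    using System;
    using System.IO.Compression;   // requires a reference to System.IO.Compression.FileSystem on older frameworks
    using System.Xml;

    class OpenXmlPeek
    {
        // Opens a .docx as the ZIP archive it is and parses the main document part as XML.
        public static void DumpMainPart(string docxPath)
        {
            using (ZipArchive archive = ZipFile.OpenRead(docxPath))
            {
                ZipArchiveEntry entry = archive.GetEntry("word/document.xml");
                if (entry == null)
                    throw new InvalidOperationException("No main document part found");

                XmlDocument xml = new XmlDocument();
                using (var stream = entry.Open())
                    xml.Load(stream);

                Console.WriteLine(xml.DocumentElement.Name);   // e.g. w:document
            }
        }
    }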

 


 

Categories: XML