January 23, 2004
@ 05:05 PM

There's been some surprise in the blogosphere about the recent Democratic party caucus in Iowa, in which John Kerry won 38 percent of the state convention delegates, with 32 percent for John Edwards, 18 percent for Howard Dean and 11 percent for Gephardt. Many had assumed that Howard Dean's highly successful Internet campaign, with its adoption of blogging technologies and support by many bloggers, was an indication of strong grass roots support. Yeah, right.

Robert Scoble wrote

Along these lines, by tomorrow, I'm sure there'll be more than one person that will gnash their teeth and write "weblogs failed Dean."

Well, the weblog hype did get overboard the past few weeks. Weblogs do matter. Why? The influentials read weblogs. The press. The insiders. The passionate ones.

But, the average Joe doesn't read these. Come on, be real. Instapundit gets, what, 100,000 to 200,000 visitors a day? I get 2,000. That's a small little dinky number in a country of 290 million.

Weblogs and online technologies have helped Dean and others collect a lot of money, but you still gotta have a TV persona that hits home. Just reality in 2004. I'm not bitter about that.

The lessons for big-company evangelism (or small company, for that matter) are the same. If your product isn't something that average people like, it doesn't matter how good the weblogs are.

Considering that Robert Scoble is one of the weblog hypesters who may have gone "overboard" as he puts it, I find his post particularly telling. Folks like Robert Scoble have trumpeted that weblogs would be triumphant over traditional marketing, and in many posts he's berated product teams at the company he works for [Microsoft] that don't consider weblogging as part of their marketing message. Weblogs are currently a fairly low-cost way of communicating with a certain class of internet-savvy people. However nothing beats traditional communication channels such as television, billboards and the print media for spreading a message amongst all and sundry.

Don't be blinded by the hype.

There's one other response to the recent events in Iowa that made me smile. Doc Searls wrote


I see that my positive spin yesterday on Howard Dean's "barbaric yawp" speech got approximately no traction at all. Worse, the speech was (predictably) mocked by everybody in the major media from Stern in the morning to Letterman and Leno in the evening. 

  Clearly, its effects were regretable. It hurt the campaign. But it was also honest and authentic, and in the long run that can only help, for the simple reason that it was real.
  So. What to do? 
  Here's my suggestion... Look at media coverage as nothing more than transient conditions, like weather. And navigate by the stars of your own constituency.
  The main lesson from Cluetrain is "smart markets get smarter faster than most companies." The same goes for constituencies and candidates. Your best advice will come from the people who know you best, who hear your voice, who understand the missions of your campaign and write about it clearly, thoughtfully and with great insight. They're out there. Your staff can help you find them. Navigate by their stars, not the ones on television. 

I've always found people who espouse the Cluetrain Manifesto to be particularly naive about the realities of markets and marketing. Telling someone to ignore media coverage and keep it real is not how elections are won in America. Any student of recent American history knows the increased significance of the media in presidential elections ever since the televised Kennedy-Nixon debates of 1960.

Those who do not learn from history are doomed to repeat it.


 

Categories: Ramblings

Being a hobbyist developer interested in syndication technologies, I'm always on the lookout for articles that provide useful critiques of the current state of the art. I recently stumbled on an article entitled 10 reasons why RSS is not ready for prime time by Dylan Greene which fails to hit the mark in this regard. Of the ten complaints, about three seem like real criticisms grounded in fact while the others seem like contrived issues which ignore reality. Below is the litany of issues the author brings up and my comments on each.

1) RSS feeds do not have a history. This means that when you request the data from an RSS feed, you always get the newest 10 or 20 entries. If you go on vacation for a week and your computer is not constantly requesting your RSS feeds, when you get back you will only download the newest 10 or 20 entries. This means that even if more entires were added than that while you were gone, you will never see them.

Practically every information medium has this problem. When I travel out of town for a week or more I miss the newspaper, my favorite TV news shows and talk radio shows, which once gone I'll likely never get to enjoy. Blogs are a bit different since most provide an online archive, but then again most news sites archive their content and require paid access once it's no longer current.

In general, most individual blogs aren't updated frequently enough that being gone for a week or two means entries are missed; most news sites, on the other hand, are. In such cases you could leave your aggregator of choice connected, reduce its refresh rate to something like once a day, and let your fingers do the walking. That's exactly what you'd have to do with a TiVo (i.e. leave your cable box on).

2) RSS wastes bandwidth. When you "subscribe" to an RSS feed, you are telling your RSS reader to automatically download the RSS file on a set interval to check for changes. Lets say it checks for news every hour, which is typical. Even if just one item is changed the RSS reader must still download the entire file with all of the entries.

The existing Web architecture provides a couple of ways for polling-based applications to save bandwidth, including HTTP conditional GET and gzip compression over HTTP. Very few web sites actually support both well-known bandwidth-saving techniques, including Dylan Greene's own site based on a quick check with Rex Swain's HTTP Viewer. Using both techniques can cut bandwidth costs by an order of magnitude (by a factor of 10 for the mathematically challenged). It'd be nice if website administrators actually used these existing best practices before trying to reinvent the wheel in more complex ways to solve perceived problems.
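
To make this concrete, here's a rough sketch (in Python, purely for illustration; the feed URL is made up) of a polling client that uses both techniques, sending conditional GET headers and asking for a gzip-compressed response:

import gzip
import urllib.error
import urllib.request

FEED_URL = "http://example.com/rss.xml"  # hypothetical feed URL

def fetch_feed(url, last_modified=None, etag=None):
    """Poll a feed using HTTP conditional GET and gzip compression."""
    headers = {"Accept-Encoding": "gzip"}
    if last_modified:
        headers["If-Modified-Since"] = last_modified  # values remembered from the last poll
    if etag:
        headers["If-None-Match"] = etag
    request = urllib.request.Request(url, headers=headers)
    try:
        with urllib.request.urlopen(request) as response:
            body = response.read()
            if response.headers.get("Content-Encoding") == "gzip":
                body = gzip.decompress(body)
            return body, response.headers.get("Last-Modified"), response.headers.get("ETag")
    except urllib.error.HTTPError as error:
        if error.code == 304:  # Not Modified: nothing new to download
            return None, last_modified, etag
        raise

A 304 Not Modified response costs a few hundred bytes instead of the whole feed, which is where the order-of-magnitude savings comes from.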

That said, it would be a nice additional optimization for web sites to provide only the items that haven't been read by a particular client on each request for the RSS feed. However I'd like to see us learn to crawl before we try to walk.

3) Reading RSS requires too much work. Today, in 2004, we call it "browsing the Web" - not "viewing HTML files". That is because the format that Web pages happen to be in is not important. I can just type in "msn.com" and it works. RSS requires much more than that: We need to find the RSS feed location, which is always labeled differently, and then give that URL to my RSS reader.

Yup, there isn't a standard way to find the feed for a website. RSS Bandit tries to make this easier by supporting feed lookup via Syndic8 and one-click subscription to RSS feeds. However aggregator authors can't do this alone; the blogging tools and major websites that use RSS need to get in on the act as well.
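
One convention that helps a bit, though it is far from universal, is the feed autodiscovery <link rel="alternate"> element that some blogging tools emit in their page headers. A rough sketch of how an aggregator might look for it (the page URL is made up):

from html.parser import HTMLParser
import urllib.request

class FeedLinkFinder(HTMLParser):
    """Collects <link rel="alternate" type="application/rss+xml"> href values."""
    def __init__(self):
        super().__init__()
        self.feed_urls = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        rel = (attrs.get("rel") or "").lower()
        if rel == "alternate" and attrs.get("type") in ("application/rss+xml", "application/atom+xml"):
            self.feed_urls.append(attrs.get("href"))

page = urllib.request.urlopen("http://example.com/").read().decode("utf-8", "replace")
finder = FeedLinkFinder()
finder.feed(page)
print(finder.feed_urls)  # candidate feed URLs, if the site advertises any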

4) An RSS Reader must come with Windows. Until this happens too, RSS reading will only be for a certain class of computer users that are willing to try this new technology. The web became mainstream when Microsoft started including Internet Explorer with Windows. MP3's became mainstream when Windows Media Player added MP3 support.

I guess my memory is flawed, but I always thought Netscape Navigator and Winamp/Napster were the applications that brought the Web and MP3s to the mainstream respectively. I'm always amused by folks who think that unless Microsoft supports some particular technology it is going to fail. It'd be nice if an RSS aggregator shipped with Windows, but that doesn't mean a technology cannot become popular until it ships in Windows. Being a big company, Microsoft is generally slow to react to trends until they've proven themselves in the market, which means that if an aggregator ever ships in Windows it will happen when news aggregators are mainstream, not before.

5) RSS content is not User-Friendly. It has taken about 10 years for the Web to get to the point where it is today that most web pages we visit render in our browser the way that the designer intended. It's also taken about that long for web designers to figure out how to lay out a web page such that most users will understand how to use it. RSS takes all of that usability work and throws it away. Most RSS feeds have no formatting, no images, no tables, no interactive elements, and nothing else that we have come to rely on for optimal content readability. Instead we are kicked back to the pre-web days of simple text.

I find it hard to connect tables, interactive elements and images with "optimal content readability" but maybe that's just me. Either way, there's nothing stopping folks from using HTML markup in RSS feeds. Most of the major aggregators are either browser-based or embed a web browser, so viewing HTML content is not a problem. Quite frankly, I like the fact that I don't have to deal with cluttered websites when reading content in my aggregator of choice.

6) RSS content is not machine-friendly. There are search engines that search RSS feeds but none of them are intelligent about the content they are searching because RSS doesn't describe the properties of the content well enough. For example, many bloggers quote other blogs in their blog. Search engines cannot tell the difference between new content and quoted content, so they'll show both in the search results.

I'm curious as to which search engine he's used that doesn't have this problem. Is there an "ignore items that are parts of a quote" option on Google or MSN Search? As search engines go I've found Feedster to be quite good, and better than Google for a certain class of searches. It would be cool to be able to execute ad-hoc, structured queries against RSS feeds, but this would be icing on the cake, and it is far more likely to happen in the next few years than it is that we will ever be able to perform such queries against [X]HTML web sites.
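
To illustrate the kind of ad-hoc, structured query I mean, here's a small sketch that filters a feed's items on a structured field (dc:creator) rather than doing a plain text search; the feed URL and the author value are made up:

import urllib.request
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace, commonly used for dc:creator

feed = ET.parse(urllib.request.urlopen("http://example.com/rss.xml"))
matches = [
    item for item in feed.findall(".//item")
    if (item.findtext(DC + "creator") or "") == "Jane Example"
]
for item in matches:
    print(item.findtext("title"), "-", item.findtext("link"))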

7) Many RSS Feeds show only an abridged version of the content. Many RSS feeds do not include the full text. Slashdot.org, one of the most popular geek news sites, has an RSS feed but they only put the first 30 words of each 100+ word entry in their feed. This means that RSS search engines do not see the full content. This also means that users who syndicate their feed only see the first few words and must click to open a web browser to read the full content.

This is annoying but understandable. Such sites are primarily using an RSS feed as a way to lure you to the site, not as a way to provide users with content. I don't see this as a problem with RSS any more than the fact that some news sites require you to register or pay to access their content is a problem with HTML and the Web.

8) Comments are not integrated with RSS feeds. One of the best features of many blogs is the ability to reply to posts by posting comments. Many sites are noteworthy and popular because of their comments and not just the content of the blogs.

Looks like he is reading the wrong blogs and using the wrong aggregators. There are a number of ways to expose comments in RSS feeds, and several aggregators support them, including RSS Bandit which supports them all.
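
For the curious, the mechanisms I have in mind include the RSS 2.0 <comments> element plus the commonly used slash:comments and wfw:commentRss extensions; here's a sketch of reading them from a local copy of a feed (the file name is a placeholder):

import xml.etree.ElementTree as ET

WFW = "{http://wellformedweb.org/CommentAPI/}"      # wfw:commentRss
SLASH = "{http://purl.org/rss/1.0/modules/slash/}"  # slash:comments

feed = ET.parse("feed.xml")
for item in feed.findall(".//item"):
    comments_page = item.findtext("comments")         # RSS 2.0: link to the comments page
    comment_feed = item.findtext(WFW + "commentRss")  # a feed of the comments themselves
    comment_count = item.findtext(SLASH + "comments") # number of comments, if advertised
    print(item.findtext("title"), comments_page, comment_feed, comment_count)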

9) Multiple Versions of RSS cause more confusion. There's several different versions of RSS, such as RSS 0.9, RSS 1.0, RSS 2.0, and RSS 3.0, all controlled by different groups and all claiming to be the standard. RSS Readers must support all of these versions because many sites only support one of them. New features can be added to RSS 1.0 and 2.0 can by adding new XML namespaces, which means that anybody can add new features to RSS, but this does mean that any RSS Readers will support those new features.

I assume he has RSS 3.0 in there as a joke. Anyway, the existence of multiple versions of RSS is not much more confusing to end users than the existence of multiple versions of [X]HTML, HTTP, Flash and Javascript, not all of which are supported by every web browser.

That said, a general plugin mechanism to deal with items from new namespaces would be an interesting problem to try to solve, though it sounds too hard to provide a truly general solution for.
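
To sketch what such a plugin mechanism might look like anyway, here's a toy registry keyed on namespace URI; the namespace and handler below are invented purely for illustration:

import xml.etree.ElementTree as ET

# Map namespace URIs to handlers for elements the aggregator doesn't natively understand.
extension_handlers = {}

def register_handler(namespace_uri, handler):
    extension_handlers[namespace_uri] = handler

def process_item(item):
    for child in item:
        if isinstance(child.tag, str) and child.tag.startswith("{"):
            namespace, _, local_name = child.tag[1:].partition("}")
            handler = extension_handlers.get(namespace)
            if handler:
                handler(local_name, child)
            # Elements from unregistered namespaces are simply ignored.

# A hypothetical plugin that understands an imaginary rating extension.
register_handler("http://example.org/rating", lambda name, element: print("rating:", element.text))

for item in ET.parse("feed.xml").findall(".//item"):
    process_item(item)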

10) RSS is Insecure. Lets say a site wants to charge for access to their RSS feed. RSS has no standard way for inputing a User Name and Password. Some RSS readers support HTTP Basic Authentication, but this is not a secure method because your password is sent as plain text. A few RSS readers support HTTPS, which is a start, but it is not good enough. Once somebody has access to the "secure" RSS file, that user can share the RSS file with anybody.

Two points. (A) RSS is a Web technology, so the standard mechanisms for providing restricted yet secure access to content on the Web apply to RSS, and (B) there is no way known to man short of magic to provide someone with digital content on a machine they control and restrict them from copying it in some way, shape or form.
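
As a concrete example of point (A), a feed can be served over HTTPS and protected with ordinary HTTP authentication like any other Web resource; a minimal sketch (the URL and credentials are placeholders):

import base64
import urllib.request

url = "https://example.com/premium/rss.xml"  # hypothetical subscriber-only feed
credentials = base64.b64encode(b"username:password").decode("ascii")

request = urllib.request.Request(url, headers={"Authorization": "Basic " + credentials})
with urllib.request.urlopen(request) as response:
    feed_xml = response.read()
# Over HTTPS the credentials and the feed travel encrypted; what the subscriber does
# with the bytes afterwards is, as noted above, out of the publisher's hands.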

Aight, that's all folks. I'm off to watch TiVoed episodes of the Dave Chappelle show.


 

Categories: XML

January 21, 2004
@ 08:56 PM

Thanks to Technorati Beta, I found Aaron Skonnard's blog. Aaron is the author of the XML Files column in MSDN Magazine and an all-around XML geek.


 

Categories: XML

Mark Pilgrim has a post entitled The history of draconian error handling in XML where he excerpts a couple of the discussions on the draconian error handling rules of XML, which state that if an XML processor encounters a syntax error in an XML document it should stop parsing and indicate a fatal error, as opposed to muddling along or trying to fix up the error in some way. According to Tim Bray,

What happened was, we had a really big, really long, really passionate argument on the subject; the camps came to be called “Draconians” and “Tolerants.” After this had gone on for some weeks and some hundreds of emails, we took a vote and the Draconians won 7-4.

Reading some of the posts from six years ago on Mark Pilgrim's blog, it is interesting to note that most of the arguments on the side of the Tolerants are simply no longer relevant today, while the Draconians turned out to be the reason for XML's current widespread success in the software marketplace.
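
For anyone who hasn't run into the rule in practice, this is all it amounts to: hand a conformant parser ill-formed markup and you get a fatal error rather than a guess. A quick illustration using a typical XML API:

import xml.etree.ElementTree as ET

ill_formed = "<item><title>Unclosed tag</item>"  # the <title> element is never closed

try:
    ET.fromstring(ill_formed)
except ET.ParseError as error:
    # A conformant processor must report the error and stop; it may not guess at a fix.
    print("fatal error:", error)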

The original goal of XML was to create a replacement for HTML which allowed you to create your own tags yet have them work in some fashion on the Web (i.e. SGML on the Web). Time has shown that placing XML documents directly on the Web for human consumption just isn't that interesting to the general Web development community. Most content on the Web for human consumption is still HTML tag soup. Even when Web content claims to be XHTML it often is really HTML tag soup, either because it isn't well-formed or is invalid according to the XHTML DTD. Even applications that represent data internally as XML tend to use XSLT to transform the content to HTML as opposed to putting the XML directly on the Web and styling it with CSS. As I've mentioned before, the original XML working group's dream of replacing HTML by inventing "SGML on the Web" is a failed dream. Looking back in hindsight, it doesn't seem that choosing tolerant over draconian error handling would have made a difference to the lack of adoption of XML as a format for representing content targeted for human consumption on the Web today.

On the other hand, XML has flourished as a general data interchange format for machine-to-machine interactions in wide-ranging areas from distributed computing and database applications to being a format for describing configuration files and business documents. There are a number of reasons for XML's rise to popularity:

  1. The ease with which XML technologies and APIs enabled developers to process documents and data in an easier and more flexible manner than with previous formats and technologies.
  2. The ubiquity of XML implementations and the consistency of the behavior of implementations across platforms.
  3. The fact that XML documents were fairly human-readable and seemed familiar to Web developers since it was HTML-like markup.

Considering the above points, does it seem likely that XML would be as popular outside of its original [failed] design goal of being a replacement for HTML if the specification allowed parsers to pick and choose which parts of the spec to honor with regards to error recovery? Would XML Web Services be as useful for interoperability between platforms if different parser implementations could recover from syntax errors at will in a non-deterministic manner? Looking at some of the comments linked from Mark Pilgrim's blog it does seem to me that a lot of the arguments on the side of the Tolerants came from the perspective of “XML as an HTML replacement” and don't stand up under scrutiny in today's world.

April 19, 1997. Sean McGrath: Re: Error Handling in XML

Programming languages that barf on a syntax error do so because a partial executable image is a useless thing. A partial document is *not* a useless thing. One of the cool things about XML as a document format is that some of the content can be recovered even in the face of error. Compare this to our binary document friends where a blown byte can render the entire content inaccessible.

Given that today XML is used for building documents that are effectively programs, such as XSLT, XAML and SVG, it does seem like the same rules that apply for partial programs should apply as well.

May 7, 1997. Paul Prescod: Re: Final words, I think, on error handling

Browsers do not just need a well-formed XML document. They need a well-formed XML document with a stylesheet in a known location that is syntactically correct and *semantically correct* (actually applies reasonable styles to the elements so that the document can be read). They need valid hyperlinks to valid targets and pretty soon they may need some kind of valid SGML catalog. There is still so much room for a document author to screw up that well-formedness is a very minor step down the path.

I have to agree here with the spirit of the post [not the content, since it assumed that XML was going to primarily be a browser-based format]. It is far more likely, and more serious, that there are logic errors in an XML document than syntax errors. For example, there are more RSS feeds out there with dates that are invalid according to the RSS spec they support than there are ill-formed feeds. And in a number of these cases it is a lot easier to fix the common well-formedness errors than it is to fix violations of the spec (HTML in descriptions or titles, incorrect date formats, data other than email addresses in the <author> element, etc.).
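
To make the date example concrete, here's a sketch of the kind of check that flags such logic errors even though the feed is perfectly well-formed XML (RSS 2.0 dates are supposed to be RFC 822 dates; the sample values are invented):

from email.utils import parsedate  # parses RFC 822 date strings, returns None on failure

samples = [
    "Fri, 23 Jan 2004 17:05:00 GMT",  # fine
    "2004-01-23T17:05:00Z",           # well-formed XML, but not a valid RSS 2.0 date
]
for value in samples:
    status = "ok" if parsedate(value) else "not a valid RSS 2.0 date"
    print(value, "->", status)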

May 7, 1997. Arjun Ray: Re: Final words, I think, on error handling

The basic point against the Draconian case is that a single (monolithic?) policy towards error handling is a recipe for failure. ...

XML is many things but I doubt that one could call it a failure except when it comes to its original [flawed] intent of replacing HTML. As a mechanism for describing structured and semi-structured content in a robust, platform-independent manner IT IS KING.

So why do I say everyone lost yet everyone won? Today most XML on the Web targeted at human consumption [i.e. XHTML] isn't well-formed, so in this case the Tolerants were right and the Draconians lost, since well-formed XML has been a failure on the human Web. However in the places where XML is getting the most traction today, the draconian error handling rules promote interoperability and predictability, which is the opposite of what a number of the Tolerants expected would happen with XML in the wild.


 

Categories: XML

RSS Bandit isn't registered as the handler for the "feed" URI scheme by default; the feature has to be enabled. Once it is, clicking on URIs such as feed:http://www.25hoursaday.com/weblog/SyndicationService.asmx/GetRss in your browser of choice should launch RSS Bandit's new feed dialog. The screenshot below shows how to do this from the Options dialog.
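
For those wondering what handling the scheme actually involves, a handler more or less just peels off the feed: prefix to recover the URL to subscribe to; a simplified sketch (real aggregators differ in the exact rules they apply):

def feed_uri_to_url(uri):
    """Roughly map a feed: URI to the underlying feed URL."""
    if not uri.startswith("feed:"):
        raise ValueError("not a feed: URI")
    rest = uri[len("feed:"):]
    # Both feed:http://example.com/rss.xml and feed://example.com/rss.xml appear in the wild.
    if rest.startswith("//"):
        rest = "http:" + rest
    return rest

print(feed_uri_to_url("feed:http://www.25hoursaday.com/weblog/SyndicationService.asmx/GetRss"))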

Future releases of RSS Bandit will have this enabled by default.


 

Categories: RSS Bandit

January 21, 2004
@ 03:27 AM

One of the hats I wear as part of my day job is that I'm the Community Lead for the WebData XML team. This means I'm responsible for a lot of our interactions with our developer community. One of the things I'm proudest of is that I got the Microsoft Most Valuable Professional (MVP) Program to create an award category for XML; it was a tough sell at first but they eventually buckled.

I'm glad to note that a number of folks I nominated to become MVPs this year have been awarded including Daniel Cazzulino, Oleg Tkachenko, DonXML Demsak and Dimitre Novatchev among others. These folks have gone above and beyond in helping fellow developers working with XML technologies on Microsoft platforms. You guys rock...

My next major task as community lead is to launch an XML Developer Center on MSDN. I've been working at this for over a year but it looks like there is now a light at the end of the tunnel. If you are interested in writing about XML technologies on Microsoft platforms on MSDN you should give me a holler via my work email address.

This is going to be fun. :)


 

Categories: Life in the B0rg Cube

January 21, 2004
@ 03:15 AM

The core of the UI for search folders is done. Now all we have to do is test it and make sure the feature is fully baked. I'm beginning to have reservations about XPath search, but time will tell whether it'll actually be problematic or not. Screenshot below.

All that's left now is for me to bang out an implementation of Synchronization of Information Aggregators using Markup (SIAM) and then we should be ready for the next RSS Bandit release.


 

Categories: RSS Bandit

I just read Jon Udell's post on What RSS users want: consistent one-click subscription where he wrote

Saturday's Scripting News asked an important question: What do users want from RSS? The context of the question is the upcoming RSS Winterfest... Over the weekend I received a draft of the RSS Winterfest agenda along with a request for feedback. Here's mine: focus on users. In an October posting from BloggerCon I present video testimony from several of them who make it painfully clear that the most basic publishing and subscribing tasks aren't yet nearly simple enough.

Here's more testimony from the comments attached to Dave's posting:

One message: MAKE IT SIMPLE. I've given up on trying to get RSS. My latest attempt was with Friendster: I pasted in the "coffee cup" and ended up with string of text in my sidebar. I was lost and gave up. I'm fed up with trying to get RSS. I don't want to understand RSS. I'm not interested in learning it. I just want ONE button to press that gives me RSS.... [Ingrid Jones]

Like others, I'd say one-click subscription is a must-have. Not only does this make it easier for users, it makes it easier to sell RSS to web site owners as a replacement/enhancement for email newsletters... [Derek Scruggs]

For average users RSS is just too cumbersome. What is needed to make is simpler to subscribe is something analog to the mailto tag. The user would just click on the XML or RSS icon, the RSS reader would pop up and would ask the user if he wants to add this feed to his subscription list. A simple click on OK would add the feed and the reader would confirm it and quit. The user would be back on the web site right where he was before. [Christoph Jaggi]

Considering that the most popular news aggregators for both the Mac and Windows platforms support the "feed" URI scheme, including SharpReader, RSS Bandit, NewsGator, FeedDemon (in its next release), NetNewsWire, Shrook, WinRSS and Vox Lite, I wonder how long it'll take the various vendors of blogging tools to wake up and smell the coffee. Hopefully by the end of the year, complaints like those listed above will be a thing of the past.


 

Categories: RSS Bandit | Technology

January 20, 2004
@ 03:33 PM

One of the biggest problems facing designers of XML vocabularies is how to make them extensible and design them in a way that applications which process said vocabularies do not break in the face of changes to versions of the vocabulary. One of the primary benefits of using XML for building data interchange formats is that the APIs and technologies for processing XML are quite resistant to additions to vocabularies. If I write an application which loads RSS feeds looking for item elements and then processes their link and title elements, using any one of the various technologies and APIs for processing XML such as SAX, the DOM or XSLT, it is quite straightforward to make that application resistant to changes in the RSS spec or extensions to it, since the link and title elements always appear in a feed.
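
A quick sketch of what I mean: code along these lines keys off only the elements it cares about and keeps working when a feed grows new elements or extension namespaces it has never heard of (the file name is a placeholder):

import xml.etree.ElementTree as ET

feed = ET.parse("feed.xml")  # any RSS feed; unrecognized elements and namespaces are simply ignored

for item in feed.findall(".//item"):
    title = item.findtext("title", default="(no title)")
    link = item.findtext("link", default="")
    print(title, "-", link)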

On the other hand, actually describing such extensibility using the most popular XML schema language, W3C XML Schema, is difficult because of several limitations in its design which make it very difficult to describe extension points in a vocabulary in a way that is idiomatic to how XML vocabularies are typically processed by applications. Recently, David Orchard, a standards architect at BEA Systems wrote an article entitled Versioning XML Vocabularies which does a good job of describing the types of extensibility XML vocabularies should allow and points out a number of the limitations of W3C XML Schema that make it difficult to express these constraints in an XML schema for a vocabulary. David Orchard has written a followup to this article entitled Providing Compatible Schema Evolution which contains a lot of assertions and suggestions for improving extensibility in W3C XML Schema that mostly jibe with my experiences working as the Program Manager responsible for W3C XML Schema technologies at Microsoft. 

The scenario outlined in his post is

We start with a simple use case of a name with a first and last name, and it's schema. We will then evolve the language and instances to add a middle name. The base schema is:

<xs:complexType name="nameType">
 <xs:sequence>
  <xs:element name="first" type="xs:string" />
  <xs:element name="last" type="xs:string" minOccurs="0"/>
 </xs:sequence>
</xs:complexType>


Which validates the following document:

<name>
 <first>Dave</first>
 <last>Orchard</last>
</name>


And the scenarios asks how to validate documents such as the following where the new schema with the extension is available or not available to the receiver.:

<name>
 <first>Dave</first>
 <last>Orchard</last>
 <middle>B</middle>
</name>

<name>
 <first>Dave</first>
 <middle>B</middle>
 <last>Orchard</last>
</name>

At this point I'd like to note that this is a versioning problem, which is a special instance of the extensibility problem. The extensibility problem is how one describes an XML vocabulary in a way that allows producers to add elements and attributes to the core vocabulary without causing problems for consumers that may not know about them. The versioning problem is specific to the case where the added elements and attributes actually come from a subsequent version of the vocabulary (i.e. a version 2.0 server talking to a version 1.0 client). The additional wrinkle in the specific scenario outlined by David Orchard is that elements from newer versions of the vocabulary have the same namespace as elements from the old version.

A strategy for simplifying the problem statement would be to put additions in subsequent versions of the vocabulary in a different namespace (i.e. a version 2.0 document would have elements from the version 1.0 namespace and the version 2.0 namespace), which would then make the versioning problem the same as the extensibility problem. However most designers of XML vocabularies would balk at creating a vocabulary whose core uses elements from multiple namespaces [once past version 2.0] and often cite that this makes it more cumbersome for applications that process said vocabularies because they have to deal with multiple namespaces. This is a tradeoff which every XML vocabulary designer should consider during the design and schema authoring process.

David Orchard takes a look at various options for solving the extensibility problem outlined above using current XML Schema design practices. 

Type extension

Use type extension or substitution groups for extensibility. A sample schema is:

<xs:complexType name="NameExtendedType"> <xs:complexContent> <xs:extension base="tns:nameType"> <xs:sequence> <xs:element name="middle" type="xs:string" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType>

This requires that both sides simultaneously update their schemas and breaks backwards compatibility. It only allows the extension after the last element 

There is a [convoluted] way to ensure that both sides do not have to update their schemas. The producer can send a <name> element that contains an xsi:type attribute whose value is NameExtendedType. The problem is then how the client learns about the definition of the NameExtendedType type, which is solved by having the root element of the document carry an xsi:schemaLocation attribute pointing to a schema for that namespace which includes the schema from the previous version. There are at least two caveats to this approach: (i) the client has to trust the server since it is using a schema defined by the server, not the client's, and (ii) since the xsi:schemaLocation attribute is only a hint it is likely the validator will ignore it because the client has already provided a schema for that namespace.

Change the namespace name or element name

The author simply updates the schema with the new type. A sample is:

<xs:complexType name="nameType"> <xs:sequence> <xs:element name="first" type="xs:string" /> <xs:element name="middle" type="xs:string" minOccurs="0"/> <xs:element name="last" type="xs:string" minOccurs="0"/> </xs:sequence> </xs:complexType>

This does not allow extension without changing the schema, and thus requires that both sides simultaneously update their schemas. If a receiver has only the old schema and receives an instance with middle, this will not be valid under the old schema

Most people would state that this isn't really extensibility since [to XML-namespace-aware technologies and APIs] the names of all elements in the vocabulary have changed. However for applications that key off the local name of the element or are unsavvy about XML namespaces, this is a valid approach that doesn't cause breakage. Ignoring namespaces, this approach is simply adding more stuff in a later revision of the spec, which is generally how XML vocabularies evolve in practice.

Use wildcard with ##other

This is a very common technique. A sample is:

<xs:complexType name="nameType"> <xs:sequence> <xs:element name="first" type="xs:string" /> <xs:any namespace="##other" minOccurs="0" maxOccurs="unbounded"/> <xs:element name="last" type="xs:string" minOccurs="0"/> </xs:sequence> </xs:complexType>

The problems with this approach are summarized in Examining elements and wildcards as siblings. A summary of the problem is that the namespace author cannot extend their schema with extensions and correctly validate them because a wildcard cannot be constrained to exclude some extensions.

I'm not sure I agree with David Orchard's summary of the problem here. The problem described in the article he linked to is that a schema author cannot refine the schema in subsequent versions to contain optional elements and still preserve the wildcard. This is due to the Unique Particle Attribution constraint, which states that a validator MUST always have only one choice of which schema particle it validates an element against. Given an element declaration and a wildcard in sequence, the schema validator has a CHOICE of two particles it could validate an element against if its name matches that of the element declaration. There are a number of disambiguating rules the W3C XML Schema working group could have come up with to allow greater flexibility for this specific case, such as (i) using a first-match rule or (ii) allowing exclusions in wildcards.

Use wildcard with ##any or ##targetnamespace

This is not possible with optional elements. This is not possible due to XML Schema's Unique Particle Attribution rule and the rationale is described in the Versioning XML Languages article. An invalid schema sample is:

<xs:complexType name="nameType"> <xs:sequence> <xs:element name="first" type="xs:string" /> <xs:any namespace="##any" minOccurs="0" maxOccurs="unbounded"/> <xs:element name="last" type="xs:string" minOccurs="0"/> </xs:sequence> </xs:complexType>

The Unique Particle Attribution rule does not allow a wildcard adjacent to optional elements or before elements in the same namespace.

Agreed. This is invalid.

Extension elements

This is the solution proposed in the versioning article. A sample of the pre-extended schema is:

<xs:complexType name="nameType"> <xs:sequence> <xs:element name="first" type="xs:string" /> <xs:element name="extension" type="tns:ExtensionType" minOccurs="0" maxOccurs="1"/> <xs:element name="last" type="xs:string" minOccurs="0"/> </xs:sequence> </xs:complexType> <xs:complexType name="ExtensionType"> <xs:sequence> <xs:any processContents="lax" minOccurs="1" maxOccurs="unbounded" namespace="##targetnamespace"/> </xs:sequence> </xs:complexType>

An extended instance is

<name>
 <first>Dave</first>
 <extension>
  <middle>B</middle>
 </extension>
 <last>Orchard</last>
</name>

This is the only solution that allows backwards and forwards compatibility, and correct validation using the original or the extended schema. This articles shows a number of the difficulties remaining, particularly the cumbersome syntax and the potential for some documents to be inappropriately valid. This solution also has the problem of each subsequent version will increase the nesting by 1 level. Personally, I think that the difficulties, including potentially deep nesting levels, are not major compared to the ability to do backwards and forwards compatible evolution with validation.

The primary problem I have with this approach is that it is a very unidiomatic way to process XML, especially when combined with the nesting problem across successive versions. For example, take a look at

<name>
 <first>Dave</first>
 <extension>
  <middle>B</middle>
  <extension>
   <prefix>Mr.</prefix>
  </extension>
 </extension>
 <last>Orchard</last>
</name>

Imagine if this were the versioning strategy that had been used with HTML, RSS or DocBook. That gets real ugly, real fast. Unfortunately this is probably the best you can do if you want to use W3C XML Schema to strictly define an XML vocabulary with extensibility yet allow backwards and forwards compatibility.

David Orchard goes on to suggest a number of potential additions to future versions of W3C XML Schema which would make it easier to use in defining extensible XML vocabularies. However, given that my personal opinion is that adding features to W3C XML Schema is not only trying to put lipstick on a pig but also trying to build a castle on a foundation of sand, I won't go over each of his suggestions. My recent suggestion to some schema authors at Microsoft about solving this problem is that they should have two validation phases in their architecture. The first phase does validation according to W3C XML Schema rules while the other performs validation of "business rules" specific to their scenarios. Most non-trivial vocabularies end up having such an architecture anyway, since there are a number of document validation capabilities missing from W3C XML Schema, so schema authors shouldn't be too focused on trying to force-fit their vocabulary into the various quirks of W3C XML Schema.

For example, one could handle the original scenario with a type definition such as

 <xsd:complexType name="nameType">
  <xsd:choice minOccurs="1" maxOccurs="unbounded">
   <xsd:element name="first" type="xsd:string" />
   <xsd:element name="last" type="xsd:string" minOccurs="0"/>
   <xsd:any namespace="##other" processContents="lax" />
  </xsd:choice>
 </xsd:complexType> 

where the validation layer above the W3C XML Schema layer ensures that an element doesn't occur twice (i.e. there can't be two <first> elements in a <name>). It adds more code to the clients & servers but it doesn't result in butchering the vocabulary either.
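
Here's a rough sketch of that two-phase arrangement, assuming the lxml library for the W3C XML Schema pass; the file names mirror the example above and the duplicate-element check stands in for the kind of "business rule" the second phase would enforce:

from collections import Counter
from lxml import etree

schema = etree.XMLSchema(etree.parse("name.xsd"))  # the schema with the open choice model above
doc = etree.parse("name.xml")

# Phase 1: structural validation according to W3C XML Schema rules.
if not schema.validate(doc):
    raise ValueError("schema validation failed: %s" % schema.error_log)

# Phase 2: business rules the schema can't express, e.g. no repeated children in <name>.
for name in doc.getroot().iter("name"):
    counts = Counter(child.tag for child in name)
    duplicates = [tag for tag, count in counts.items() if count > 1]
    if duplicates:
        raise ValueError("repeated elements in <name>: %s" % duplicates)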


 

Categories: XML

 You know what we should do? Send up a Mars mission and once they're up in space, call them and say, "You guys can't reenter the atmosphere until you develop a cure for AIDS. Get crackin;

C'mon I bet if you asked people in Africa if they wanted us to go to Mars, they'd say yes--because it's important for humanity to reach ever upward. It's inspirational. We're at our best when we dare to dream a...GAAAH! I just puked in my space helmet.

I read somewhere that the cost of going to Mars may eventually total up to $170 billion, which is nowhere close to the $12 billion the US President has stated will flow into NASA's coffers over the next 5 years to help finance the Mars dream. I don't want to knock the US government's spending on AIDS (supposedly $1 billion this year), but aren't there significant, higher-priority problems on Earth that need tackling before one starts dabbling in interplanetary conquest?

Gil Scott-Heron's poem Whitey's on the Moon is still quite relevant today. I guess the more things change, the more they stay the same.