January 21, 2004
@ 08:56 PM

Thanks to Technorati Beta, I found Aaron Skonnard's blog. Aaron is the author of the XML Files column in MSDN Magazine and an all-around XML geek.


 

Categories: XML

Mark Pilgrim has a post entitled The history of draconian error handling in XML where he excerpts a couple of the discussions on the draconian error handling rules of XML, which state that if an XML processor encounters a syntax error in an XML document it should stop parsing and indicate a fatal error, as opposed to muddling along or trying to fix up the error in some way. According to Tim Bray:

What happened was, we had a really big, really long, really passionate argument on the subject; the camps came to be called “Draconians” and “Tolerants.” After this had gone on for some weeks and some hundreds of emails, we took a vote and the Draconians won 7-4.

Reading some of the posts from 6 years ago on Mark Pilgrim's blog, it is interesting to note that most of the arguments on the side of the Tolerants are simply no longer relevant today, while the Draconian approach turned out to be a key reason for XML's current widespread success in the software marketplace.
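The rule the Draconians won is easy to observe with any conformant parser today. A minimal sketch in Python, using the standard library's ElementTree purely as an illustration:

```python
import xml.etree.ElementTree as ET

# A well-formed document parses fine...
doc = ET.fromstring("<feed><title>Example</title></feed>")
print(doc.find("title").text)  # Example

# ...but a single syntax error is a fatal error: per the XML 1.0
# rules the parser must halt rather than guess at a fix.
try:
    ET.fromstring("<feed><title>Example</feed>")
except ET.ParseError as err:
    print("fatal error:", err)
```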

The original goal of XML was to create a replacement for HTML which allowed you to create your own tags yet have them work in some fashion on the Web (i.e. SGML on the Web). Time has shown that placing XML documents directly on the Web for human consumption just isn't that interesting to the general Web development community. Most content on the Web for human consumption is still HTML tag soup. Even when Web content claims to be XHTML it often is really HTML tag soup, either because it isn't well-formed or is invalid according to the XHTML DTD. Even applications that represent data internally as XML tend to use XSLT to transform the content to HTML as opposed to putting the XML directly on the Web and styling it with CSS. As I've mentioned before, the dream of the original XML working group of replacing HTML by inventing “SGML on the Web” is a failed dream. In hindsight it doesn't seem that the choice of tolerant over draconian error handling would have made a difference to the lack of adoption of XML as a format for representing content targeted for human consumption on the Web today.

On the other hand, XML has flourished as a general data interchange format for machine-to-machine interactions in areas ranging from distributed computing and database applications to configuration files and business documents. There are a number of reasons for XML's rise to popularity:

  1. The ease and flexibility with which XML technologies and APIs enable developers to process documents and data, compared to previous formats and technologies.
  2. The ubiquity of XML implementations and the consistency of the behavior of implementations across platforms.
  3. The fact that XML documents were fairly human-readable and seemed familiar to Web developers since they resembled HTML markup.

Considering the above points, does it seem likely that XML would be as popular outside of its original [failed] design goal of being a replacement for HTML if the specification allowed parsers to pick and choose which parts of the spec to honor with regards to error recovery? Would XML Web Services be as useful for interoperability between platforms if different parser implementations could recover from syntax errors at will in a non-deterministic manner? Looking at some of the comments linked from Mark Pilgrim's blog it does seem to me that a lot of the arguments on the side of the Tolerants came from the perspective of “XML as an HTML replacement” and don't stand up under scrutiny in today's world.

April 19, 1997. Sean McGrath: Re: Error Handling in XML

Programming languages that barf on a syntax error do so because a partial executable image is a useless thing. A partial document is *not* a useless thing. One of the cool things about XML as a document format is that some of the content can be recovered even in the face of error. Compare this to our binary document friends where a blown byte can render the entire content inaccessible.

Given that today XML is used for building documents that are effectively programs, such as XSLT, XAML and SVG, it does seem like the same rules that apply to partial programs should apply as well.

May 7, 1997. Paul Prescod: Re: Final words, I think, on error handling

Browsers do not just need a well-formed XML document. They need a well-formed XML document with a stylesheet in a known location that is syntactically correct and *semantically correct* (actually applies reasonable styles to the elements so that the document can be read). They need valid hyperlinks to valid targets and pretty soon they may need some kind of valid SGML catalog. There is still so much room for a document author to screw up that well-formedness is a very minor step down the path.

I have to agree here with the spirit of the post [not the content, since it assumed that XML was going to primarily be a browser based format]. It is far more likely, and more serious, that there are logic errors in an XML document than syntax errors. For example, there are more RSS feeds out there with dates that are invalid according to the RSS spec they support than there are ill-formed feeds. And in a number of these it is a lot easier to fix the common well-formedness errors than it is to fix violations of the spec (HTML in descriptions or titles, incorrect date formats, data other than email addresses in the <author> element, etc).
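To make the distinction concrete, here's a sketch in Python (the feed snippet is invented for illustration): the item below is perfectly well-formed, so a draconian parser accepts it without complaint, yet its pubDate is in ISO 8601 style rather than the RFC 822 format RSS 2.0 requires — a logic error no amount of well-formedness checking can catch:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_tz

item = ET.fromstring(
    "<item><title>Test post</title>"
    "<pubDate>2004-01-21</pubDate></item>")

# The parser is perfectly happy: the XML is well-formed.
title = item.findtext("title")

# But an RFC 822 date parser rejects the value, so a spec-aware
# consumer still has to cope with the bad data.
print(parsedate_tz(item.findtext("pubDate")))  # None
print(parsedate_tz("Wed, 21 Jan 2004 20:56:00 GMT") is not None)  # True
```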

May 7, 1997. Arjun Ray: Re: Final words, I think, on error handling

The basic point against the Draconian case is that a single (monolithic?) policy towards error handling is a recipe for failure. ...

XML is many things but I doubt that one could call it a failure except when it comes to its original [flawed] intent of replacing HTML. As a mechanism for describing structured and semi-structured content in a robust, platform-independent manner IT IS KING.

So why do I say everyone lost yet everyone won? Today most XML on the Web targeted at human consumption [i.e. XHTML] isn't well-formed, so in this case the Tolerants were right and the Draconians lost, since well-formed XML has been a failure on the human Web. However in the places where XML is getting the most traction today, the draconian error handling rules promote interoperability and predictability, which is the opposite of what a number of the Tolerants expected would happen with XML in the wild.


 

Categories: XML

By default RSS Bandit isn't registered as the handler for the "feed" URI scheme until this feature is enabled. Once this feature is enabled, clicking on URIs such as feed:http://www.25hoursaday.com/weblog/SyndicationService.asmx/GetRss in your browser of choice should launch RSS Bandit's new feed dialog. The screenshot below shows how to do this from the Options dialog.
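A sketch of what a handler for the scheme does with such a URI. The helper name is mine, and this is just one common interpretation of the scheme — some clients also accept the feed://example.com/rss.xml form:

```python
def feed_uri_to_http(uri):
    """Turn a 'feed' URI into the underlying HTTP URL to fetch."""
    if not uri.startswith("feed:"):
        raise ValueError("not a feed URI: " + uri)
    rest = uri[len("feed:"):]
    # feed://example.com/rss.xml form: re-add the http scheme.
    if rest.startswith("//"):
        return "http:" + rest
    # feed:http://example.com/rss.xml form: the URL is embedded whole.
    return rest

print(feed_uri_to_http(
    "feed:http://www.25hoursaday.com/weblog/SyndicationService.asmx/GetRss"))
```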

Future releases of RSS Bandit will have this enabled by default.


 

Categories: RSS Bandit

January 21, 2004
@ 03:27 AM

One of the hats I wear as part of my day job is that I'm the Community Lead for the WebData XML team. This means I'm responsible for a lot of our interactions with our developer community. One of the things I'm proudest of is that I got the Microsoft Most Valuable Professional (MVP) Program to create an award category for XML which was tough at first but they eventually buckled.

I'm glad to note that a number of folks I nominated to become MVPs this year have been awarded including Daniel Cazzulino, Oleg Tkachenko, DonXML Demsak and Dimitre Novatchev among others. These folks have gone above and beyond in helping fellow developers working with XML technologies on Microsoft platforms. You guys rock...

My next major task as community lead is to launch an XML Developer Center on MSDN. I've been working at this for over a year but it looks like there is now a light at the end of the tunnel. If you are interested in writing about XML technologies on Microsoft platforms on MSDN you should give me a holler via my work email address.

This is going to be fun. :)


 

Categories: Life in the B0rg Cube

January 21, 2004
@ 03:15 AM

The core of the UI for search folders is done. Now all we have to do is test it and make sure the feature is fully baked. I'm beginning to have reservations about XPath search but time will tell if it'll actually be problematic or not. Screenshot below.

All that's left now is for me to bang out an implementation of Synchronization of Information Aggregators using Markup (SIAM) and then we should be ready for the next RSS Bandit release.


 

Categories: RSS Bandit

I just read Jon Udell's post on What RSS users want: consistent one-click subscription where he wrote

Saturday's Scripting News asked an important question: What do users want from RSS? The context of the question is the upcoming RSS Winterfest... Over the weekend I received a draft of the RSS Winterfest agenda along with a request for feedback. Here's mine: focus on users. In an October posting from BloggerCon I present video testimony from several of them who make it painfully clear that the most basic publishing and subscribing tasks aren't yet nearly simple enough.

Here's more testimony from the comments attached to Dave's posting:

One message: MAKE IT SIMPLE. I've given up on trying to get RSS. My latest attempt was with Friendster: I pasted in the "coffee cup" and ended up with string of text in my sidebar. I was lost and gave up. I'm fed up with trying to get RSS. I don't want to understand RSS. I'm not interested in learning it. I just want ONE button to press that gives me RSS.... [Ingrid Jones]

Like others, I'd say one-click subscription is a must-have. Not only does this make it easier for users, it makes it easier to sell RSS to web site owners as a replacement/enhancement for email newsletters... [Derek Scruggs]

For average users RSS is just too cumbersome. What is needed to make is simpler to subscribe is something analog to the mailto tag. The user would just click on the XML or RSS icon, the RSS reader would pop up and would ask the user if he wants to add this feed to his subscription list. A simple click on OK would add the feed and the reader would confirm it and quit. The user would be back on the web site right where he was before. [Christoph Jaggi]

Considering that the most popular news aggregators for both the Mac and Windows platforms support the "feed" URI scheme, including SharpReader, RSS Bandit, NewsGator, FeedDemon (in next release), NetNewsWire, Shrook, WinRSS and Vox Lite, I wonder how long it'll take the various vendors of blogging tools to wake up and smell the coffee. Hopefully by the end of the year, complaints like those listed above will be a thing of the past.


 

Categories: RSS Bandit | Technology

January 20, 2004
@ 03:33 PM

One of the biggest problems facing designers of XML vocabularies is how to make them extensible, and how to design them in such a way that applications which process said vocabularies do not break in the face of changes across versions of the vocabulary. One of the primary benefits of using XML for building data interchange formats is that the APIs and technologies for processing XML are quite resistant to additions to vocabularies. If I write an application which loads RSS feeds looking for item elements, then processes their link and title elements using any one of the various technologies and APIs for processing XML such as SAX, the DOM or XSLT, it is quite straightforward to build it so that it is resistant to changes in the RSS spec or extensions to the RSS spec, as the link and title elements always appear in a feed.
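A sketch in Python of why this style of processing is robust: the consumer below only asks for the elements it knows about, so an extension element it has never seen (the made-up <popularity> below) is silently ignored rather than causing a failure:

```python
import xml.etree.ElementTree as ET

feed = ET.fromstring("""
<rss><channel>
  <item>
    <title>An entry</title>
    <link>http://example.org/1</link>
    <popularity>42</popularity>
  </item>
</channel></rss>""")

# Pull out only the elements we understand; anything else is ignored.
for item in feed.iter("item"):
    print(item.findtext("title"), item.findtext("link"))
```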

On the other hand, actually describing such extensibility using the most popular XML schema language, W3C XML Schema, is difficult because of several limitations in its design which make it very difficult to describe extension points in a vocabulary in a way that is idiomatic to how XML vocabularies are typically processed by applications. Recently, David Orchard, a standards architect at BEA Systems wrote an article entitled Versioning XML Vocabularies which does a good job of describing the types of extensibility XML vocabularies should allow and points out a number of the limitations of W3C XML Schema that make it difficult to express these constraints in an XML schema for a vocabulary. David Orchard has written a followup to this article entitled Providing Compatible Schema Evolution which contains a lot of assertions and suggestions for improving extensibility in W3C XML Schema that mostly jibe with my experiences working as the Program Manager responsible for W3C XML Schema technologies at Microsoft. 

The scenario outlined in his post is

We start with a simple use case of a name with a first and last name, and it's schema. We will then evolve the language and instances to add a middle name. The base schema is:

<xs:complexType name="nameType">
 <xs:sequence>
  <xs:element name="first" type="xs:string" />
  <xs:element name="last" type="xs:string" minOccurs="0"/>
 </xs:sequence>
</xs:complexType>


Which validates the following document:

<name>
 <first>Dave</first>
 <last>Orchard</last>
</name>


And the scenario asks how to validate documents such as the following, whether or not the new schema with the extension is available to the receiver:

<name>
 <first>Dave</first>
 <last>Orchard</last>
 <middle>B</middle>
</name>

<name>
 <first>Dave</first>
 <middle>B</middle>
 <last>Orchard</last>
</name>

At this point I'd like to note that this is a versioning problem, which is a special instance of the extensibility problem. The extensibility problem is how one describes an XML vocabulary in a way that allows producers to add elements and attributes to the core vocabulary without causing problems for consumers that may not know about them. The versioning problem is specific to when the added elements and attributes actually are from a subsequent version of the vocabulary (i.e. a version 2.0 server talking to a version 1.0 client). The additional wrinkle in the specific scenario outlined by David Orchard is that elements from newer versions of the vocabulary have the same namespace as elements from the old version.

A strategy for simplifying the problem statement would be if additions in subsequent versions of the vocabulary were in a different namespace (i.e. a version 2.0 document would have elements from both the version 1.0 namespace and the version 2.0 namespace), which would then make the versioning problem the same as the extensibility problem. However most designers of XML vocabularies would balk at creating a vocabulary which used elements from multiple namespaces for its core [once past version 2.0], and often cite that this makes it more cumbersome for applications that process said vocabularies because they have to deal with multiple namespaces. This is a tradeoff which every XML vocabulary designer should consider during the design and schema authoring process.
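A sketch of the namespace-per-version approach (the namespace URIs are invented): a version 1.0 consumer that addresses elements by their fully qualified names keeps working against a version 2.0 document, because the additions live in a namespace it never asks about:

```python
import xml.etree.ElementTree as ET

V1 = "urn:example:name:1.0"  # hypothetical version 1.0 namespace
V2 = "urn:example:name:2.0"  # hypothetical version 2.0 namespace

doc = ET.fromstring(
    '<n:name xmlns:n="urn:example:name:1.0" xmlns:n2="urn:example:name:2.0">'
    '<n:first>Dave</n:first><n2:middle>B</n2:middle><n:last>Orchard</n:last>'
    '</n:name>')

# A v1.0 client only asks for v1.0 names; the v2.0 addition is invisible to it.
print(doc.findtext("{%s}first" % V1), doc.findtext("{%s}last" % V1))
```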

David Orchard takes a look at various options for solving the extensibility problem outlined above using current XML Schema design practices. 

Type extension

Use type extension or substitution groups for extensibility. A sample schema is:

<xs:complexType name="NameExtendedType">
 <xs:complexContent>
  <xs:extension base="tns:nameType">
   <xs:sequence>
    <xs:element name="middle" type="xs:string" minOccurs="0"/>
   </xs:sequence>
  </xs:extension>
 </xs:complexContent>
</xs:complexType>

This requires that both sides simultaneously update their schemas and breaks backwards compatibility. It only allows the extension after the last element 

There is a [convoluted] way to ensure that both sides do not have to update their schemas. The producer can send a <name> element that contains xsi:type attribute which has the NameExtendedType as its value. The problem is then how the client knows about the definition for the NameExtendedType type which is solved by the root element of the document containing an xsi:schemaLocation attribute which points to a schema for that namespace which includes the schema from the previous version. There are at least two caveats to this approach (i) the client has to trust the server since it is using a schema defined by the server not the client's and (ii) since the xsi:schemaLocation attribute is only a hint it is likely the validator may ignore it since the client would already have provided a schema for that namespace. 

Change the namespace name or element name

The author simply updates the schema with the new type. A sample is:

<xs:complexType name="nameType">
 <xs:sequence>
  <xs:element name="first" type="xs:string" />
  <xs:element name="middle" type="xs:string" minOccurs="0"/>
  <xs:element name="last" type="xs:string" minOccurs="0"/>
 </xs:sequence>
</xs:complexType>

This does not allow extension without changing the schema, and thus requires that both sides simultaneously update their schemas. If a receiver has only the old schema and receives an instance with middle, this will not be valid under the old schema

Most people would state that this isn't really extensibility since [to XML namespace aware technologies and APIs] the names of all elements in the vocabulary have changed. However for applications that key off the local-name of the element, or are unsavvy about XML namespaces, this is a valid approach that doesn't cause breakage. Ignoring namespaces, this approach is simply adding more stuff in a later revision of the spec, which is generally how XML vocabularies evolve in practice.
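A sketch of what such namespace-unsavvy processing looks like in practice. ElementTree reports qualified names as {namespace-uri}local-name, so stripping the braced prefix yields the local name:

```python
import xml.etree.ElementTree as ET

def local_name(tag):
    """Strip the {namespace-uri} prefix ElementTree puts on qualified tags."""
    return tag.split("}")[-1]

doc = ET.fromstring(
    '<name xmlns="urn:example:old">'
    '<first>Dave</first><last>Orchard</last></name>')

# Keying off local names alone keeps working even if the namespace changes.
fields = {local_name(child.tag): child.text for child in doc}
print(fields)  # {'first': 'Dave', 'last': 'Orchard'}
```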

Use wildcard with ##other

This is a very common technique. A sample is:

<xs:complexType name="nameType">
 <xs:sequence>
  <xs:element name="first" type="xs:string" />
  <xs:any namespace="##other" minOccurs="0" maxOccurs="unbounded"/>
  <xs:element name="last" type="xs:string" minOccurs="0"/>
 </xs:sequence>
</xs:complexType>

The problems with this approach are summarized in Examining elements and wildcards as siblings. A summary of the problem is that the namespace author cannot extend their schema with extensions and correctly validate them because a wildcard cannot be constrained to exclude some extensions.

I'm not sure I agree with David Orchard's summary of the problem here. The problem described in the article he linked to is that a schema author cannot refine the schema in subsequent versions to contain optional elements and still preserve the wildcard. This is due to the Unique Particle Attribution constraint, which states that a validator MUST always have only one choice of which schema particle it validates an element against. Given an element declaration and a wildcard in sequence, the schema validator has a CHOICE of two particles it could validate an element against if its name matches that of the element declaration. There are a number of disambiguating rules the W3C XML Schema working group could have come up with to allow greater flexibility for this specific case, such as (i) using a first match rule or (ii) allowing exclusions in wildcards.

Use wildcard with ##any or ##targetnamespace

This is not possible with optional elements, due to XML Schema's Unique Particle Attribution rule; the rationale is described in the Versioning XML Languages article. An invalid schema sample is:

<xs:complexType name="nameType">
 <xs:sequence>
  <xs:element name="first" type="xs:string" />
  <xs:any namespace="##any" minOccurs="0" maxOccurs="unbounded"/>
  <xs:element name="last" type="xs:string" minOccurs="0"/>
 </xs:sequence>
</xs:complexType>

The Unique Particle Attribution rule does not allow a wildcard adjacent to optional elements or before elements in the same namespace.

Agreed. This is invalid.

Extension elements

This is the solution proposed in the versioning article. A sample of the pre-extended schema is:

<xs:complexType name="nameType">
 <xs:sequence>
  <xs:element name="first" type="xs:string" />
  <xs:element name="extension" type="tns:ExtensionType" minOccurs="0" maxOccurs="1"/>
  <xs:element name="last" type="xs:string" minOccurs="0"/>
 </xs:sequence>
</xs:complexType>

<xs:complexType name="ExtensionType">
 <xs:sequence>
  <xs:any processContents="lax" minOccurs="1" maxOccurs="unbounded" namespace="##targetnamespace"/>
 </xs:sequence>
</xs:complexType>

An extended instance is

<name>
 <first>Dave</first>
 <extension>
  <middle>B</middle>
 </extension>
 <last>Orchard</last>
</name>

This is the only solution that allows backwards and forwards compatibility, and correct validation using the original or the extended schema. This article shows a number of the difficulties remaining, particularly the cumbersome syntax and the potential for some documents to be inappropriately valid. This solution also has the problem that each subsequent version increases the nesting by one level. Personally, I think that the difficulties, including potentially deep nesting levels, are not major compared to the ability to do backwards and forwards compatible evolution with validation.

The primary problem I have with this approach is that it is a very unidiomatic way to process XML especially when combined with the problem with nesting in concurrent versions. For example, take a look at

<name>
 <first>Dave</first>
 <extension>
  <middle>B</middle>
  <extension>
   <prefix>Mr.</prefix>
  </extension>
 </extension>
 <last>Orchard</last>
</name>

Imagine if this is the versioning strategy that had been used with HTML, RSS or DocBook. That gets real ugly, real fast. Unfortunately this is probably the best you can do if you want to use W3C XML Schema to strictly define an XML vocabulary with extensibility yet allow backwards & forwards compatibility.
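Consuming such a document means recursively unwrapping the extension wrappers at every level. A sketch of what that chore looks like (the helper name is mine):

```python
import xml.etree.ElementTree as ET

def unwrap(elem):
    """Yield an element's children, descending into nested <extension> wrappers."""
    for child in elem:
        if child.tag == "extension":
            yield from unwrap(child)
        else:
            yield child

name = ET.fromstring(
    "<name><first>Dave</first>"
    "<extension><middle>B</middle>"
    "<extension><prefix>Mr.</prefix></extension></extension>"
    "<last>Orchard</last></name>")

print([child.tag for child in unwrap(name)])
# ['first', 'middle', 'prefix', 'last']
```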

David Orchard goes on to suggest a number of potential additions to future versions of W3C XML Schema which would make it easier to use it in defining extensible XML vocabularies. However given that my personal opinion is that adding features to W3C XML Schema is not only trying to put lipstick on a pig but also trying to build a castle on a foundation of sand, I won't go over each of his suggestions. My recent suggestion to some schema authors at Microsoft about solving this problem is that they should have two validation phases in their architecture. The first phase does validation according to W3C XML Schema rules while the other performs validation of “business rules“ specific to their scenarios. Most non-trivial vocabularies end up having such an architecture anyway since there are a number of document validation capabilities missing from W3C XML Schema so schema authors shouldn't be too focused on trying to force fit their vocabulary into the various quirks of W3C XML Schema.   

For example, one could handle the original scenario with a type definition such as

 <xsd:complexType name="nameType">
  <xsd:choice minOccurs="1" maxOccurs="unbounded">
   <xsd:element name="first" type="xsd:string" />
   <xsd:element name="last" type="xsd:string" minOccurs="0"/>
   <xsd:any namespace="##other" processContents="lax" />
  </xsd:choice>
 </xsd:complexType> 

where the validation layer above the W3C XML Schema layer ensures that an element doesn't occur twice (i.e. there can't be two <first> elements in a <name>). It adds more code to the clients & servers but it doesn't result in butchering the vocabulary either.
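A sketch of what that second, business-rule validation phase might look like (the function name is mine):

```python
import xml.etree.ElementTree as ET
from collections import Counter

def check_no_repeats(elem):
    """Business rule the schema's choice group can't express:
    no child element may appear more than once."""
    counts = Counter(child.tag for child in elem)
    return [tag for tag, n in counts.items() if n > 1]

ok = ET.fromstring("<name><first>Dave</first><last>Orchard</last></name>")
bad = ET.fromstring("<name><first>Dave</first><first>Bob</first></name>")
print(check_no_repeats(ok))   # []
print(check_no_repeats(bad))  # ['first']
```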


 

Categories: XML

 You know what we should do? Send up a Mars mission and once they're up in space, call them and say, "You guys can't reenter the atmosphere until you develop a cure for AIDS. Get crackin'."

C'mon I bet if you asked people in Africa if they wanted us to go to Mars, they'd say yes--because it's important for humanity to reach ever upward. It's inspirational. We're at our best when we dare to dream a...GAAAH! I just puked in my space helmet.

I read somewhere that the cost of going to Mars may eventually total up to $170 billion, which is nowhere close to the $12 billion the US President has stated will flow into NASA's coffers over the next 5 years to help finance the Mars dream. I don't want to knock the US government's spending on AIDS (supposedly $1 billion this year) but aren't there significant, higher-priority problems on Earth that need tackling before one starts dabbling in interplanetary conquest?

Gil Scott-Heron's poem Whitey's on the Moon is still quite relevant today. I guess the more things change, the more they stay the same.


 

In a recent post entitled XML For You and Me, Your Mama and Your Cousin Too I wrote

The main problem is that there are a number of websites which have the same information but do not provide a uniform way to access this information and when access mechanisms to information are provided do not allow ad-hoc queries. So the first thing that is needed is a shared view (or schema) of what this information looks like which is the shared information model Adam talks about...

Once an XML representation of the relevant information users are interested in has been designed (i.e. the XML schema for books, reviews and wishlists that could be exposed by sites like Amazon or Barnes & Nobles) the next technical problem to be solved is uniform access mechanisms... Then there's deployment, adoption and evangelism...

We still need a way to process the data exposed by these web services in arbitrary ways. How does one express a query such as "Find all the CDs released between 1990 and 1999 that Dare Obasanjo rated higher than 3 stars"?.. 

At this point, if you are like me, you might suspect that having the web service endpoints return the results of canned queries which can then be post-processed by the client may be more practical than expecting to be able to ship arbitrary SQL/XML, XQuery or XPath queries to web service endpoints.
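A sketch of the canned-query-plus-post-processing approach: the service returns, say, all of a user's rated CDs as XML, and the client applies the interesting predicate itself (the document shape below is invented for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical response from a canned "all rated CDs" query.
results = ET.fromstring("""
<cds>
  <cd><title>A</title><released>1995</released><rating>4</rating></cd>
  <cd><title>B</title><released>2001</released><rating>5</rating></cd>
  <cd><title>C</title><released>1992</released><rating>2</rating></cd>
</cds>""")

# "CDs released between 1990 and 1999 rated higher than 3 stars"
hits = [cd.findtext("title") for cd in results.iter("cd")
        if 1990 <= int(cd.findtext("released")) <= 1999
        and int(cd.findtext("rating")) > 3]
print(hits)  # ['A']
```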

The main problem with what I've described is that it takes a lot of effort. Coming up with standardized schema(s) and distributed computing architecture for a particular industry then driving adoption is hard even when there's lots of cooperation let alone in highly competitive markets.

A few days ago I got a response to this post from Michael Brundage, author of XQuery : The XML Query Language and a lead developer of the XML<->relational database technologies the WebData XML team at Microsoft produces, on a possible solution to this problem that doesn't require lots of disparate parties to agree on schemas, data model or web service endpoints. Michael wrote

Dare, there's already a solution to this (which Adam created at MS five years ago) -- virtual XML views to unify different data sources. So Amazon and BN and every other bookseller comes up with their own XML format. Somebody else comes along and creates a universal "bookstore" schema and maps each of them to it using an XML view. No loss of performance in smart XML Query implementations.

And if that universal schema becomes widely adopted, then eventually all the booksellers adopt it and the virtual XML views can go away. I think eventually you'll get this for documents, where instead of translating WordML to XHTML (as Don is doing), you create a universal document schema and map both WordML and XHTML into it. (And if the mappings are reversible, then you get your translators for free.)

This is basically putting an XML Web Service front end that supports some degree of XML query on aggregator sites such as AddALL or MySimon. I agree with Michael that this would be a more bootstrappable approach to the problem than trying to get a large number of sites to support a unified data model, query interface and web service architecture.

Come to think of it, we're already halfway to creating something similar for querying information in RSS feeds, thanks to sites such as Feedster and Technorati. All that is left is for either site, or others like them, to provide richer APIs for querying, and one would have the equivalent of an XML View of the blogosphere (God, that is such a pretentious word) which you could query to your heart's delight.

Interesting...


 

Categories: XML

January 19, 2004
@ 06:10 AM

I just checked my Outlook Calendar for tomorrow and it looks like I have about six hours of meetings.

Suckage...


 

Categories: Life in the B0rg Cube