Jon Udell writes

Reading the Longhorn SDK docs is a disorienting experience. Everything's familiar but different.

Example 3: The new XSD:

"WinFS" Schema Definition Language "WinFS" introduces a schema definition language to describe "WinFS" types. This language is an XML vocabulary. "WinFS" includes a set of schemas that define a set of Item types and NestedElement types. These are called Windows types.

Yeah, "embrace and extend" was so much fun, I can hardly wait for "replace and defend." Seriously, if the suite of standards now targeted for elimination from Microsoft's actively-developed portfolio were a technological dead end, ripe for disruption, then we should all thank Microsoft for pulling the trigger. If, on the other hand, these standards are fundamentally sound, then it's a time for what Clayton Christensen calls sustaining rather than disruptive advances. I believe the ecosystem needs sustaining more than disruption. Like Joe, I hope Microsoft's bold move will mobilize the sustainers.

I can't speak for the other technologies, Jon, but as the program manager responsible for XML schema technologies in the .NET Framework [and with the offices of a number of WinFS folks a few doors away from mine] I can speak to the example above.

The first thing I'd like to note about Jon's example is that the W3C XML Schema definition language (XSD) is far from being targeted for elimination from Microsoft's actively-developed portfolio. We have almost a dozen technologies and products that utilize XSD in some way, shape or form: SQL Server [Yukon], Visual Studio.NET, Indigo, Word, Excel, InfoPath, SQLXML, the .NET Framework's DataSet, BizTalk, FoxPro, the .NET Framework's XmlSerializer and a couple of others. The fact that one technology or product decides it makes more sense to create an XML vocabulary that meets its specific needs, instead of shoehorning an inappropriate technology into its use case and thus causing pain for itself and its customers, is a wise decision that should be lauded (in fact, this is the entire point XML was invented in the first place; see my "SGML on the Web" post) instead of being heralded as another evil conspiracy by Microsoft.

Now to go into specifics. The WinFS schema language and W3C XML Schema do two fundamentally different things; W3C XML Schema is a way for describing the structure and contents of an XML document while a WinFS schema describes types and relationships of items stored in WinFS. At first glance, the only thing that connects both schema languages is that they are both written in XML. Of course, this is just syntax so this similarity isn't any more significant than the fact that both SQL and Java use English keywords.

Where it gets interesting is if one asks whether the WinFS data model can be mapped to XML and then XSD used to describe the structure of this XML view of WinFS. This is possible but leads to impedance mismatches due to the differences between the WinFS model and that used by W3C XML Schema. Specifically there are constructs in W3C XML Schema that don't map to concepts in WinFS and concepts in WinFS that don't map to constructs in W3C XML Schema. So for WinFS to use W3C XML Schema as the syntax for its schema language it would have to do two things:

  1. Support a subset of W3C XML Schema
  2. Extend W3C XML Schema to add support for WinFS concepts

The problem with this approach is that it leads to complaints of a different kind. The first is user confusion, because not every valid W3C XML Schema construct is usable in this context; the other is that W3C XML Schema ends up being extended in a proprietary manner, which eventually leads to yells of "embrace and extend". By the way, this isn't guesswork on my part; this is a description of what happened when Microsoft took this approach with the .NET Framework's DataSet class and SQLXML. History has taught us that these approaches were unwise and used W3C XML Schema outside of its original goals and usage scenarios. Instead, we have moved back to the original goals of XML where, instead of relying on one uber-vocabulary for all one's needs, vocabularies specific to various situations are built but can be translated or transformed as needed when moving from one domain to another.

Ideally, even though WinFS has its own schema language, it should be able to import or export WinFS items as XML described using a W3C XML Schema, since this is the most popular way to transfer structured and semi-structured data in our highly connected world. This functionality is something I've brought up with the WinFS architects, who have stated it will be investigated.


Categories: Life in the B0rg Cube

Drew Marsh blogged about the talk given by my boss at this year's Microsoft Professional Developer's Conference (PDC) entitled "What's New In System.Xml For Whidbey?". Since I'm directly responsible for some of the stuff mentioned in the talk I thought it would make sense to make some clarifications and add details where some were lacking from his coverage.

Usability Improvements (Beta 1)

  • CLR type accessors on XmlReader, XmlWriter and XPathNavigator: Double unitPrice = reader.ValueAsDouble


A big gripe from folks about v1.0 of the .NET Framework was that they couldn't access the XML in a validated document as typed values; this is no longer the case in Whidbey. However people who want this functionality will have to move to the XPathDocument instead of the XmlDocument. People will be able to get typed values from an XmlDocument (actually from anything that implements IXPathNavigable), but actually storing the data in the in-memory representation as a typed value will only be available on the XPathDocument.
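The Whidbey accessor names above come straight from the talk; as a rough illustration of what typed access buys you, here's a hedged Python analogy (not the .NET API — the schema-to-type map and element names are invented for the example):

```python
import xml.etree.ElementTree as ET

# Hypothetical "schema" knowledge: element name -> declared simple type.
SCHEMA_TYPES = {"unitPrice": float, "quantity": int, "name": str}

def typed_value(elem):
    """Convert the element's text per its declared type, the way a
    typed accessor returns a native value instead of a string."""
    return SCHEMA_TYPES.get(elem.tag, str)(elem.text)

doc = ET.fromstring(
    "<item><name>Widget</name><unitPrice>3.99</unitPrice>"
    "<quantity>4</quantity></item>")
values = {child.tag: typed_value(child) for child in doc}
# values["unitPrice"] is now a float rather than the string "3.99"
```

The point of storing the typed value in the in-memory representation is that this conversion happens once, up front, instead of on every access.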


XPathDocument: A Better XML DOM

"XmlDocument is dead."


  • XPathDocument replaces the XmlDocument as the primary XML store.
  • Feature Set
    • 20%-40% more performant for XSLT and XQuery
    • Editing capabilities through the XPathEditor (derives from XPathDocument) using an XmlWriter (the mythical XmlNodeWriter we've all been searching for).
    • XML schema validation
    • Strongly typed store. Integers stored as int internally (per schema) (Beta 1)
    • Change tracking at node level
    • UI databinding support to WinForms and ASP.NET controls (Beta 1)

Yup, in v1.0 of the .NET Framework we moved away from a push-based parser (SAX) in MSXML to a pull-based parser (XmlReader) in the .NET Framework. In v2.0 of the .NET Framework there's been a similar shift, from the DOM data model & tree based APIs for accessing XML to the XPath data model & cursor based APIs for accessing XML. If you are curious about some of the thinking that went into this decision you should take a look at my article in XML Journal entitled Can One Size Fit All? 
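The push-versus-pull distinction is easy to demonstrate outside the .NET Framework too: in a push model the parser drives your callbacks, while in a pull model you ask for the next event when you're ready for it. A small Python sketch of both styles over the same invented document:

```python
import io
import xml.sax
import xml.etree.ElementTree as ET

DOC = "<feed><item>one</item><item>two</item></feed>"

# Push (SAX-style): the parser calls you back for every event.
class ItemCounter(xml.sax.ContentHandler):
    def __init__(self):
        self.count = 0
    def startElement(self, name, attrs):
        if name == "item":
            self.count += 1

handler = ItemCounter()
xml.sax.parseString(DOC.encode(), handler)

# Pull (XmlReader-style): you ask for the next event and can stop any time.
pulled = []
for event, elem in ET.iterparse(io.StringIO(DOC), events=("end",)):
    if elem.tag == "item":
        pulled.append(elem.text)
```

The cursor model mentioned above is a third step again: instead of consuming events you move a cursor over a data model, which is what makes random-access queries like XPath cheap.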


Note: XPathDocument2 in PDC bits will be XPathDocument once again by Beta 1. "We were at an unfortunate design stage at the point where the PDC bits were created."

Yeah, things were in flux for a while during our development process. The features of the class called XPathDocument2 in the PDC builds will be integrated back into the XPathDocument class that was in v1.0 of the .NET Framework.


The rest of the stuff in the talk (XQuery, new XML editor in Visual Studio.NET, ADO.NET with SQLXML, etc) isn't stuff I'm directly responsible for so I hesitate to comment further, however Drew has taken excellent notes about them so it is clear which direction we're going in for Whidbey.





Categories: Life in the B0rg Cube | XML

The third in my semi-regular series of guidelines for working with W3C XML Schema is now up. The article is entitled XML Schema Design Patterns: Is Complex Type Derivation Unnecessary? and is excerpted below for those who may not have the time to read it in full.


W3C XML Schema (WXS) possesses a number of features that mimic object oriented concepts, including type derivation and polymorphism. However, real world experience has shown that these features tend to complicate schemas, may have subtle interactions that lead to tricky problems, and can often be replaced by other features of WXS. In this article I explore both derivation by restriction and derivation by extension of complex types, showing the pros and cons of both techniques, as well as alternatives for achieving the same results.


As usage of XML and XML schema languages has become more widespread, two primary usage scenarios have developed around XML document validation and XML schemas.

  1. Describing and enforcing the contract between producers and consumers of XML documents: ...
  2. Creating the basis for processing and storing typed data represented as XML documents: ...


Based on the current technological landscape, the complex type derivation features of WXS may add more problems than they solve in the two most common schema use cases. For validation scenarios, derivation by restriction is of marginal value, while derivation by extension is a good way to create modularity as well as encourage reuse. Care must however be taken to consider the ramifications of the various type substitutability features of WXS (xsi:type and substitution groups) when using derivation by extension in scenarios revolving around document validation.

Currently, processing and storage of strongly typed XML data are primarily the province of conventional OOP languages and relational databases respectively. This means that certain features of WXS, such as derivation by restriction (and to a lesser extent derivation by extension), cause an impedance mismatch between the type system used to describe strongly typed XML and the mechanisms used for processing and storing said XML. Eventually, when technologies like XQuery become widespread for processing typed XML and support for XML and W3C XML Schema is integrated into mainstream database products, this impedance mismatch will matter less. Until then, complex type derivation should be carefully evaluated before being used in situations where W3C XML Schema is primarily a mechanism for creating type-annotated XML infosets.
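To see why restriction is the awkward half of the mismatch, compare with mainstream OO type systems: derivation by extension maps naturally onto subclassing, while a derived type that removes or narrows base content has no class-based counterpart. A hedged Python analogy (the types are invented for illustration):

```python
from dataclasses import dataclass

# Extension: the derived type adds content -- ordinary subclassing.
@dataclass
class Address:
    street: str
    city: str

@dataclass
class USAddress(Address):  # adds a field, like extension in WXS
    zip_code: str

# Restriction would mean a derived type where, say, `city` is forbidden,
# but a subclass cannot remove an inherited member: every instance still
# satisfies the full base contract. That gap is the impedance mismatch.
addr = USAddress("1 Main St", "Springfield", "12345")
still_has_city = hasattr(addr, "city")
```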


Categories: XML

October 29, 2003
@ 01:57 PM

A recent article by Phil Howard of Bloor Research talks about the Demise of the XML Database. Excerpts below:

While you can still buy an XML database purely because it provides faster storage capability and greater functionality than a conventional database, all the erstwhile XML database vendors are increasingly turning to other sources of use for their products.

These other markets basically consist of two different sectors: the use of XML databases as a part of an integration strategy, where the database is used to provide on-the-fly translation for XML documents, and for content management...

The reason why there is this trend away from pure XML storage is because advanced XML capabilities are being introduced by all the leading relational vendors. 

This has been taken as "fighting words" by some in the XML database camp, such as Mike Champion (who works on the Tamino XML database) and Kimbro Staken (one of the originators of Apache Xindice). Mike Champion comes up with a number of counter-arguments to the claims in the article that I found interesting and felt compelled to comment on. According to Mike:

  • It is widely believed that less than a quarter of enterprise data is currently stored in RDBMS systems. This suggests that the market is not "making do" with what the relational database products offer today, but using a wide variety of technologies.

This is actually the mantra of the team I work for at Microsoft. We are responsible for data access technologies (relational, object and XML) and our GM is fond of trotting out the quote about "less than a quarter of enterprise data is currently stored in a relational database". A lot of data important to businesses is just sitting around on file systems in various Microsoft Office documents and other file formats. The bet across the software industry is that moving all these semi-structured business documents to XML is the right way to go, and the first step has been achieved given that modern business productivity software (including the Open Source offerings) is moving to fully supporting XML for document formats. Step one is definitely to get all those memos, contracts and spreadsheets into XML.

  • The main reason OODBMS didn't hit the sweet spot, AFAIK, is that they created a tight coupling between application code and the DBMS. Potential performance gains this allows can outweigh the maintenance challenges in extremely business critical, high transaction volume environments...XML DBMS, on the other hand, inherit XML's suitability for loosely coupling systems, applications, and tools across a wide range of environments.

Totally agree here about the weakness of OODBMSs in creating a tight coupling between applications and the data they accessed. For a more in-depth description of the disadvantages of object oriented databases in comparison to their relational counterparts you can read my article An Exploration of Object Oriented Database Management Systems.

  • Again AFAIK (having only played with OODBMS personally), there is relatively little portability across OODBMS systems; code written for one would be very expensive to adapt to another. Investing in the technology required one to make a risky bet on the vendor who supplied it. This created an environment where the object-relational vendors could prosper by offering only a subset of the features but the absolute assurance that they would be in business for years to come. In the XML DBMS world, on the other hand, all support roughly the same schema, query language, and API standards;

There are two points Mike is making here:

  1. There is very little portability across OODBMS systems.
  2. In the XML DBMS world, on the other hand, all support roughly the same schema, query language, and API standards

Based on my experiences with OODBMSs, the first claim is entirely accurate; moving data from one OODBMS to another was a pain and there was a definite lack of standardization of APIs and query languages across the various products. The second claim is rather suspect to me. I am unaware of any schema, query or API standards that are supported uniformly across XML database products. This isn't to say there aren't standardized W3C-branded XML schema or query languages, nor that there haven't been moves to come up with standard XML database APIs, but when last I looked these weren't uniformly supported across many of the XML database products, and where they were there was a distinct lack of maturity in the offerings. Granted, it's been almost a year since I last looked.

However there is an obvious point about portability that Mike doesn't mention (perhaps because it is so obvious). The entire point of XML is portability and interoperability; moving data from one XML database to another should simply be a case of "export database as XML" from one and "import XML into database" on the other.
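The export/import round trip really is that mechanical, since the interchange format is just text. A trivial Python sketch of the round trip over an invented document:

```python
import xml.etree.ElementTree as ET

original = ET.fromstring(
    '<items><item id="1">first</item><item id="2">second</item></items>')

# "Export database as XML": serialize the in-memory tree to text.
exported = ET.tostring(original, encoding="unicode")

# "Import XML into database": parse that text back into a tree.
imported = ET.fromstring(exported)

# Nothing structural was lost in transit.
round_tripped = all(
    a.tag == b.tag and a.attrib == b.attrib and a.text == b.text
    for a, b in zip(original.iter(), imported.iter()))
```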

  • The standards of the XML world provide a clearly defined and fairly high bar for those who would seek to take away the market pioneered by the XML DBMS vendors. For better or worse, the XML family of specs is complex and quite challenging to support efficiently in a DBMS system. It's one thing to support, as the RDBMS vendors now do quite well, XML views of structured, typed, relatively "flat" data such as are typically found in RDBMS applications. It is quite another to efficiently and scalably support queries and updates on "document-like" XML with relatively open content models, lots of recursion, mixed content, and where wildcard text comparisons are more frequent than typed value comparisons. The dominant DBMS vendors obviously have talent and money to throw at the problem, but analysts should not assume that they will surpass these capabilities of the XML DBMS systems anytime soon.

OK, this one sounds like FUD. Basically Mike seems to be saying that the family of XML specs is so complex (thanks to the W3C, but that's another story) that companies like Oracle, IBM and Microsoft won't be able to come up with ways to query semi-structured data efficiently or perform text comparison searches well, so you are best off sticking to a separate database for your XML data instead of having all your data stored in a single unified store.

So what is my position on the death of native XML databases? Like Phil Howard, I suspect that once XML support becomes [further] integrated into mainstream relational databases (which it already has to some degree), native XML databases will be hard pressed to come up with reasons why one would want to buy a separate product for storing XML data distinct from the rest of a business's data when a traditional relational database can store it all. It's all about integration. Businesses prefer buying a single office productivity suite to mixing and matching word processors, spreadsheets and presentation programs from different vendors. I suspect the same is true when it comes to their data storage needs.


Categories: XML

October 29, 2003
@ 12:49 PM
Beware the Hoover Dustette

Categories: Ramblings

October 29, 2003
@ 04:33 AM

Get it here

Differences between v1.2.0.43 and v1.2.0.42 below

  • This primarily fixes complaints by Roy Osherove about the responsiveness of RSS Bandit. Eliminated a number of places where the GUI locked up when performing tasks, by using background threads. Also reduced the number of threads used by the thread pool when downloading feeds.

  • Added missing source files to installer package, enabling one to compile RSS Bandit from source if so desired.

  • Reverted to behavior where links are opened in a new tabbed browser pane as opposed to the feed display pane

  • Fixed: ArgumentNullException when attempt made to "Refresh Feed" when the top most 'My Feeds' node is selected in the tree view.



Categories: RSS Bandit

Randy Holloway writes

This session was set up as an open conversation, with only one concrete agenda item.  That being RSS versus Atom.

Interesting: a bunch of developers get together to discuss weblogging technologies and they discuss the most irrelevant piece of the puzzle. For those not keeping track, there are two primary weblog syndication formats in popular usage: the RDF-based RSS 1.0 and Dave Winer's RSS 0.91/RSS 2.0. Developers tend to prefer Dave Winer's specs to the RSS 1.0 branch, but due to various interpersonal issues with Dave Winer (unsurprising since he can be quite trying) a bunch of people decided to create a third syndication format (Atom) which duplicates the functionality of the other two, primarily to get around the fact that Dave Winer controlled the spec for the most popular feed syndication format. This third format adds little to the table besides fragmenting the feed syndication world, which already has to deal with RSS 1.0 vs. RSS 0.91/RSS 2.0 issues. In fact, this redundancy is currently being debated on the atom-syntax list.

There are three major technologies (i.e. XML formats) in the blogging world: feed syndication (RSS), blog editing (Blogger API & MetaWeblog API) and feed list information (OPML). Dave Winer's specs are dominant in all three areas given that he is the author of both the MetaWeblog API and the OPML spec. Of the three of them, RSS is probably the best of the specs and meets the needs of most users, except for the Semantic Web folks who want an RDF-based format (i.e. RSS 1.0). On the other hand, there are significant deficiencies in both the MetaWeblog API and OPML. I have blogged about What is wrong with the MetaWeblog API as well as mentioned some of the problems with OPML as a format for storing information about subscribed feeds.

Given that RSS is the best of the weblog-related technologies while the blog editing and feed list formats are the ones with actual problems, one might wonder why so much energy is invested in fixing what isn't that broken instead of tackling the problems that actually affect developers and users of blogging tools. One of the answers to this question comes from a comment that Randy Holloway says someone made during the Weblogging BOF:

"We don't solve problems, we just talk about them." 

Most of the people engaged in the discussions don't actually write any code, or at least not any weblogging-related code, so they are unaware of the real problems and instead focus on simple yet irrelevant issues that are easy to grok. This is definitely a case of bike shedding. [Hmmm, I love the term "bike shedding" so much I dug up the original source of the phrase]

Speaking of Atom, I'm curious how all the XML Web Services folks at the Weblogging BOF felt about the fact that the current drafts of the Atom API use just HTTP and XML instead of the XML Web Services buzzword soup (SOAP, WSDL, etc), meaning they won't be using Indigo to code against it, just plain old System.Xml and System.Net. If "XML-RPC is a fantastic solution... from a while ago", I wonder what they think of using just HTTP with no fancy object<->XML mappings. Positively prehistoric :)
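For the record, "just HTTP and XML" really is all a client needs: once the entry is fetched (the HTTP fetch is elided here), any XML parser will do. A minimal Python sketch over a hand-written entry; the element names follow the Atom drafts, but the namespace URI and the content are my own placeholders since the drafts were still churning on such details:

```python
import xml.etree.ElementTree as ET

# Placeholder namespace; the Atom drafts had not settled on a final one.
NS = "{}"

ENTRY = """<entry xmlns="">
  <title>Hello, world</title>
  <id>tag:example.org,2003:1</id>
  <content type="text">Fetched with plain HTTP, parsed with plain XML.</content>
</entry>"""

entry = ET.fromstring(ENTRY)
title = entry.findtext(NS + "title")
content = entry.findtext(NS + "content")
```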


Categories: Ramblings

October 28, 2003
@ 06:58 AM

I finally got around to providing a list of the major features of RSS Bandit on the RSS Bandit wiki this evening. I was finally motivated to do this after reading a blog post by Luke Huttemann, the author of SharpReader, and the ensuing comments. Of the ten comments in response to his post about an upcoming release of SharpReader, four were requests for features already in RSS Bandit (two of them actually mentioned RSS Bandit by name). I find it amusing that the requestors would rather wait for Luke to add the features to SharpReader than use RSS Bandit. That's definitely some brand loyalty at work. :)

Anyway, in the spirit of democracy there's another vote going on in the RSS Bandit workspace. This time it's to decide what features should be in the next release of RSS Bandit. It seems like structured search (mentioned in a previous blog entry) is high on everyone's priority list and will definitely make it into the next release. Actually the release after the next one since I plan to ship a bugfix release in the next couple of days to clean up some of the sloppiness in the last release since it was rushed to be in time for PDC.

One of the things that warms the cockles of my heart is seeing RSS Bandit mentioned in posts such as The Magic of Blogging. Having some of the stuff you work on mentioned in the same breath as the word "magic" is pretty fucking cool. It's great to see regular people actually getting some use out of software I helped build, instead of just building plumbing for developers, which is my day job.



Categories: RSS Bandit

October 28, 2003
@ 05:36 AM

Clemens Vasters writes

Indigo is the successor technology and the consolidation of DCOM, COM+, Enterprise Services, Remoting, ASP.NET Web Services (ASMX), WSE, and the Microsoft Message Queue. It provides services for building distributed systems all the way from simplistic cross-appdomain message passing and ORPC to cross-platform, cross-organization, vastly distributed, service-oriented architectures providing reliable, secure, transactional, scalable and fast, online or offline, synchronous and asynchronous XML messaging.

I think this is truly awesome; they (folks like Don Box, Doug Purdy, Steve Swartz, Scott Gellock, Omri Gazitt, Mike Vernal, John Lambert et al) have not just cooked up a brand new distributed computing platform but have built it on open standards and open technologies, meaning that probably for the first time in decades there won't be artificial, politics-induced divisions limiting a distributed computing technology to particular platforms or operating systems (i.e. like CORBA, DCOM & Java RMI). The extra goodness is that these open standards are all XML-based, so crazy XML geeks like me can do stuff like this, or people like Sam Ruby can do stuff like that.

The next generation of DCOM, except that this time it interoperates with everyone regardless of what programming language or operating system they are running.

Fucking sweet.


Categories: Life in the B0rg Cube | XML

Since I'm not at the Microsoft Professional Developer's Conference, I decided to answer questions by attendees about stuff that I am directly or indirectly responsible for right here. So let's roll out the questions.

Alan Dean writes

    • You can leverage the XML support in .NET against a data source of your choice (for example, the Registry) by implementing a new XmlReader. We were pointed to work by Mark Fussell on MSDN to do this (Writing XML Providers for Microsoft .NET).

      In Whidbey we will be encouraging people to implement custom XPathNavigator instances instead of custom XmlReaders unless their situation specifically calls for forward-only processing. The ObjectXPathNavigator is an example of a custom navigator.

    • Tim indicated that the whitespace handling had been particularly useful in the field. He mentioned a gotcha with empty elements; namely that an empty element written as <foo></foo> is actually the same as one pretty-printed with a CRLF and some whitespace indentation between the tags. The only way to handle this correctly in your XmlReader, however, is to use _reader.WhitespaceHandling = WhitespaceHandling.None

      This is a legacy from what I like to call our "unconformant by default" era, which is how we shipped the XML parser in v1.0 and v1.1. There were a couple of nice features, like not erroring on invalid characters and the above behavior, that people needed in some cases, but they shouldn't have been the defaults since it was extremely difficult to figure out how to turn off all these features and get a conformant XML 1.0 parser. In v2.0 we're going to a "conformant by default" mode where people have to go out of their way to read in unconformant XML, not the other way around.
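The pretty-printing gotcha is reproducible in any conformant parser: the pretty-printed empty element carries a whitespace-only text node that the compact one lacks. A Python illustration (the stripping helper is my own stand-in for WhitespaceHandling.None, not a real API):

```python
import xml.etree.ElementTree as ET

compact = ET.fromstring("<item></item>")
pretty = ET.fromstring("<item>\r\n  </item>")

# A conformant parser reports the whitespace inside the pretty version.
differs = pretty.text != compact.text

def strip_ws(elem):
    """Drop whitespace-only text nodes, mimicking WhitespaceHandling.None."""
    if elem.text is not None and not elem.text.strip():
        elem.text = None
    for child in elem:
        if child.tail is not None and not child.tail.strip():
            child.tail = None
        strip_ws(child)

strip_ws(pretty)
now_same = pretty.text == compact.text
```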

    • The current pain suffered by not being able to CreateElement independent of an XmlDocument which led to the ImportNode hack to allow movement between documents. They were sufficiently noisy on this to lead me to think that this is resolved in Whidbey.

      To my knowledge this behavior will remain the same in Whidbey.

Kirk Allen Evans writes

"There will be another, more difficult, XML parser, and you will hear Mark Fussell talk about that later this week." - Don Box

I would hedge bets that this revolves around XQuery or something, I am looking forward to hearing what this is.

I am extremely curious about what this means myself, but I don't think Don was talking about XQuery. If anything he probably was talking about APIs, but even then we aren't changing much from what we provided in v1 in the area of the XML parser (i.e. the XmlReader class), although the implementation has been rewritten to be faster and more conformant (much props to Helena). I am puzzled about what Don meant by that statement, since I've seen Mark's slides and there really isn't anything about another XML parser that users have to learn. Perhaps he meant another XML API, which is valid given that we did a ton of work on the XPathDocument for v2.0, which means there may be a lot of people moving from using XmlDocument to using XPathDocument for a number of reasons.



Categories: Life in the B0rg Cube

October 27, 2003
@ 03:39 PM

So it looks like my boss, his boss, his boss's boss, and his boss's boss's boss are all out at the Microsoft Professional Developer's Conference 2003 (aka PDC) where folks will get a sneak peek at the next versions of Windows, SQL Server and Visual Studio. Thus it looks like there won't be much whip cracking going on this week, so I can spend time working on my pet projects for work.

  1. XML Developer Center on MSDN: Mark Fussell recently posted complaints about the quality of some articles on XML he'd recently read. I generally feel the same way about websites dedicated to articles about XML. Of all the developer sites devoted to XML there are only two I've seen that aren't utter crap; and IBM's XML developerWorks site. Even these are kind of hit or miss, usually publishes about 3 articles a week of which one is excellent, one is good and one is crap. Which is fine except that the excellent article is typically about something that isn't directly applicable to what I work on. The problem with IBM's developerWorks is that all the code is Java-centric, which doesn't help me since I work with the .NET Framework.

    After seeing some of what Tim Ewald did with producing content around Microsoft technologies and XML Web Services via the Web Services Developer Center on MSDN, I talked to some of the folks at MSDN about creating something similar for XML content. This was green lighted a while ago but preparations for PDC have kept it from taking off until next month. In the meantime, I'll be creating my content plan and coming up with a list of authors (both Microsoft employees and non-Microsoft folks) for the new dev center.

    So far I've gotten a couple of folks lined up internally as well as some excellent non-Microsoft folks like Daniel Cazzulino, Christoph Schittko and Oleg Tkachenko. Definitely expect some additions to the XML Home Page on MSDN in the next few months.

  2. Sequential XPath and Pull Based XML Parsing: In 2001, Arpan Desai presented on Sequential XPath at XML 2001. Relevant bits from the paper

    This paper will provide an explanation of and the subset of XPath which we will tentatively dub: Sequential XPath, or SXPath for ease of use. SXPath allows a event-based XML parser, such as a typical SAX-compliant XML parser, to execute XPath-like expressions without the need of more memory consumption than is normally used within a sequential pull-based parser.
    By creating a streaming XML parser which utilizes Sequential XPath, one is able to reap the inherent benefits of a streaming parser with the querying power of XPath. By defining this proper subset of XPath, we enable developers and users to utilize XML in a wide array of applications thought to be too performance sensitive for traditional XML processing.
    The code for the technology outlined above has actually been gathering dust on some hard drives at work for a while. I'm currently in the process of liberating this code so that everyone can get access to the combined benefits of pull-based parsing and XPath-based matching of nodes. Hopefully folks will be able to download classes similar to the ones outlined in Arpan's presentation in the next few weeks. With luck, by Christmas everyone will be able to write code similar to the following snippet taken from Tim Bray's XML is too Hard for Programmers
while (<STDIN>) {
  next if (X<meta>X);
  if    (X<h1>|<h2>|<h3>|<h4>X)
  { $divert = 'head'; }
  elsif (X<img src="/^(.*\.jpg)$/i>X)
  { &proc_jpeg($1); }
  # and so on...
Of course you'll have to substitute C#, VB.NET or any one of the various languages targeted at the .NET Framework for the Perl code above.
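The pull-parser-plus-path-matching idea can be sketched in a few lines: track the element path as events arrive and fire on simple absolute paths. Here's a toy Python version (the matching "language" handles only absolute name paths, nowhere near full SXPath, and the document is invented):

```python
import io
import xml.etree.ElementTree as ET

def match_path(xml_text, path):
    """Stream over the document, yielding the text of elements whose
    absolute path (e.g. '/html/body/h1') matches -- no full tree kept."""
    target = path.strip("/").split("/")
    stack = []
    for event, elem in ET.iterparse(io.StringIO(xml_text),
                                    events=("start", "end")):
        if event == "start":
            stack.append(elem.tag)
        else:
            if stack == target:
                yield elem.text
            stack.pop()
            elem.clear()  # discard the finished subtree; memory stays flat

DOC = "<html><body><h1>Title</h1><p>text</p><h1>Another</h1></body></html>"
headings = list(match_path(DOC, "/html/body/h1"))
```

The appeal is exactly what Arpan's paper describes: streaming memory behavior with a declarative, XPath-flavored way of saying which nodes you care about.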


Categories: XML

October 25, 2003
@ 05:47 PM

Get it here

Differences between v1.1.0.36 and v1.2.0.42 below.

  • Support for password protected feeds using either HTTPS/SSL or HTTP Authentication. This feature can be tested using Steven Garrity's test feeds.
  • The ability to store and retrieve feed list from remote locations such as a dasBlog blog, an FTP server or a network file share. This enables users utilizing RSS aggregators on multiple machines to synchronize their feed list from a single point. This feature has been called a subscription harmonizer by some.
  • Multiple feeds downloaded simultaneously instead of one at a time thus reducing download time.
  • When saving as OPML, the hierarchy of the feed list is preserved instead of writing out a flat structure.
  • Default theme for viewing items changed to resemble that of a mail reader like Outlook Express. 
  • Added support for <dc:author> and <author> elements to a number of templates including the default theme.
  • FIXED: Feed list corruption when importing an OPML file where xmlUrl="" for some feeds
  • FIXED: NullReferenceException involving streams when accessing feeds after RSS Bandit has been running for a long time.


Categories: RSS Bandit

"This paper proposes extending popular object-oriented programming languages such as C#, VB or Java with native support for XML. In our approach XML documents or document fragments become first class citizens. This means that XML values can be constructed, loaded, passed, transformed and updated in a type-safe manner. The type system extensions, however, are not based on XML Schemas. We show that XSDs and the XML data model do not fit well with the class-based nominal type system and object graph representation of our target languages. Instead we propose to extend the C# type system with new structural types that model XSD sequences, choices, and all-groups. We also propose a number of extensions to the language itself that incorporate a simple but expressive query language that is influenced by XPath and SQL. We demonstrate our language and type system by translating a selection of the XQuery use cases."

From Programming with Rectangles, Triangles, and Circles by Erik Meijer and Wolfram Schulte

I talk to Erik about this stuff all the time, so it's great to finally see some of the thoughts and discussions around this topic actually written down in a research paper. According to Erik's blog post from a few weeks ago, he'll actually be presenting on this at XML 2003


Categories: XML

According to C|Net News, Amazon on Thursday unveiled a new service that lets bookworms search through pages of thousands of books available on its online store.

The service, dubbed "Search Inside the Book," lets people type in any keyword and receive results for all the pages and titles of various books that contain that term. In the past, Amazon customers could search only by author name, title or keyword.

I am impressed by how in one move Amazon made their search feature utterly useless. I just tried to search for "open source xml" and "java xml" books on Amazon and it was a fucking disaster. Even the top 10 hits that were returned were polluted with books that simply had the words "Java" or "XML" somewhere in the book. In fact, almost every search I tried returned Oracle9i JDeveloper Handbook in the top 10. If ever a feature needed to be turned off by default it is this one.


Categories: Ramblings

October 23, 2003
@ 06:42 PM

Every once in a while I notice links from educational institutions that use my writings for their classes in my referrer logs. It gives me the warm fuzzies to know that I'm actually [indirectly] teaching a generation of CS geeks. In the past month I've seen the following referrers/references to my writings

My corrupting influence spreads...


Categories: Ramblings

October 23, 2003
@ 05:56 PM

I picked up a Belkin Mobile iPod FM Transmitter on a whim last night. At first, I had issues with the amount of static and feedback being emitted from the speakers, but once I figured out that I was supposed to turn down the volume on my iPod and turn it up on the car stereo it was heaven. Since this was an impulse buy I didn't shop around, but if I had I might have decided on an iTrip instead since there are no dangling wires and batteries are not required. I'll see how I feel about the Belkin device in a week or so.

According to Slashdot, B0rg Central didn't have anything nice to say about the launch of iTunes on Windows. Looking beyond what seem like obvious sour grapes it is a bummer that iPods don't support the WMA format.

My favorite B0rg hater, Russell Beattie, has this to say about the iPod

So here's my thoughts: 1) The current iPod needs a successor and soon because consumers will start to balk at the B&W interface. 2) With the color screen and all that storage, it'd be dumb not to show multimedia like Photos and Video. 3) If Apple's going to show multimedia, they'll probably want to use Quicktime to do it... 4) If they're going that route, they'd need a Mobile OS to run it on. (Not to mention for other needs like supporting Wireless access to the iPod via WiFi or Bluetooth).

I guess I'm about to reveal myself as a Luddite, but I have no problem with the B & W iPod interface, nor am I interested in taking pictures or playing videos on my music player. This annoying convergence of features has not interested me in my cell phone (which happens to have lost useful features over time, like password-protected address books, for frivolous shit like games, web browsing and taking pictures) and I definitely don't want it in my music player, especially if it keeps the price high instead of allowing it to drop to a more reasonable amount so I can pick up a few as Xmas gifts.


Categories: Ramblings

Many have complained that one of the major problems with RSS aggregators is that if one uses an aggregator on multiple machines (such as at home and at work) then there is no easy way to synchronize the readers on both machines. This was one of the problems I set out to solve when I first started working on RSS Bandit; now, thanks to some prodding from some of the co-developers on the RSS Bandit workspace, multiple solutions have been implemented. Click below for details.

Categories: RSS Bandit

October 21, 2003
@ 07:35 AM

Dave Winer writes

Just had a phone talk with Scoble, and finally I have a clue why people use aggregators integrated with email clients. He had a couple of compelling reasons. 1. Since it's integrated with email he can easily forward an item to people he works with via email. 2. He has a folder where he drags items he wants to write about later. BTW he uses NewsGator. I still prefer the blog-style interface of Radio's aggregator.

Both of which are features RSS Bandit supports. There is one feature requested by Jeff Sandquist which Newsgator has and RSS Bandit does not; the ability to specify a username/password combo when accessing a particular feed. Torsten and I will see about getting this in by the weekend so Jeff can use it next week.

My bed beckons but so do my recent purchases that just arrived in the mail; Chinese Super Ninja and Shaolin Challenges Ninja.

Bah, sleep is for the weak.


Categories: RSS Bandit

October 19, 2003
@ 07:56 PM
The original impetus for designing XML was to create "SGML on the Web". Six years later, although XML has found widespread applicability in the software industry, it seems to have failed at its original goal. Some thoughts about this follow.

Categories: XML

October 19, 2003
@ 05:52 PM

Rob Volk writes

Is XML Evil?

About a month ago I was asked by a contractor I work with who needed to import some very plain, fixed-width, ASCII text file data into SQL Server. In fact, this SQL Team post is very much like his situation, in that he also was going to convert PLAIN, FIXED-WIDTH, ASCII TEXT (did I mention that already?) into XML and THEN import it into SQL Server...  <snip />

Fortunately (!) we use SQL Server 7.0 so none of the XML extensions were available for him to use. As it turned out I already had a bcp format file that could read the text format he needed to import. So, with ONE LINE OF SQL, I was able to do something he would have had to write over 100 lines of C# to parse the file, XML-ize it, and then save out to ANOTHER FILE so that he could import it (using about 12-20 lines of SQL, or more) Using bcp also would've entailed one DOS prompt command. Even DTS would've been harder to use to accomplish the same thing.

So, exactly how is XML making this process easier? Where is the ease of use and interoperability it's supposed to provide? I'm completely astounded that so many people have been so thoroughly brainwashed by the XML hype that they not only see it as the best way to do something, but as the ONLY WAY TO DO IT.

Situations like the above were my motivation for writing the article Understanding XML on MSDN. Using XML for a software development project buys you two things: (a) the ability to interoperate better with others and (b) a number of off-the-shelf tools for dealing with the format. If neither of these things applies to a given situation then it doesn't make much sense to use XML.

Applying the interoperability litmus test: unless the data in the text file in the situation described above is going to be shared with partners, there really isn't any reason to convert it to XML to gain interoperability. Even then, one could argue that it may make more sense to just pull the relevant data out of the database and convert it to XML as needed when data needs to be exchanged with partners. As for the gains from off-the-shelf tools, given that tools already existed for the format used by the text file and they performed the required task, there really wasn't anything to be gained by converting it to XML.

Applying this litmus test makes it fairly easy to figure out when to use XML and when using it isn't such a good idea. This is one of the reasons I consider articles such as Parsing RSS At All Costs as setting a bad example because they encourage the notion that it is OK to produce and consume ill-formed XML. Of course, once you do that you can't really interoperate with others and traditional XML tools cannot be used on the ill-formed documents so you might as well not be using XML.


Categories: XML

I found an article entitled The Great Walmart Wars via a link on Robert Scoble's blog. The main thesis of the article is that although consumers like Walmart it is actually bad for the overall economy of a region: they don't pay their employees that well and they drive smaller competitors out of business, leading to homogenization.

I wasn't really interested in the veracity of the claims or otherwise. Instead, what I found interesting was the overlap the article had with similar screeds I'd seen against large booksellers like Barnes & Noble and Borders, as well as a number of the rants on "Why Microsoft is Evil" style websites such as What's So Bad About Microsoft? on the No Pity For Microsoft website. The complaint about driving smaller competitors out of business seemed to be an underlying theme in such diatribes.

It seems that at a higher level the problem people have is with the inherent competitiveness of the capitalist system. Few would dispute that consumers have voted with their feet and prefer the goods and services of Walmart, Barnes & Noble and Microsoft to those of their competitors, being happier with their prices and convenience compared to the alternatives. However, the inherent nature of competition is that there will be winners and there will be losers. In a way, competition amongst producers in capitalist systems is a Zero Sum Game.

I am curious about examples of companies that have grown dominant in their particular markets without parties complaining of similar damage to the ecosystems of those markets. Such examples would prove enlightening.


Categories: Ramblings

October 18, 2003
@ 06:25 PM
Tarantino serves up a homage to the chop socky movies of yesteryear, including cameos from greats like Sonny Chiba and Gordon Liu as well as beats from the RZA. Although it kicked off kind of slowly, the movie was a feast to behold and laugh-out-loud funny in a bunch of places as well. Score: **** out of *****

Categories: Movie Review

The author of the "Reflecting on Microsoft" blog writes

Why Doesn't Microsoft have a Blogging Tool?

Have you ever wondered why Microsoft doesn't produce a blogging tool? To me, it seems a surprising omission.

AOL seems to see a commercial opportunity to interest a significant proportion of its subscribers. However, so far as I am aware Microsoft has no plans in this direction.

Microsoft already offers Office to students at a discount. Why not offer a blogging tool too?

The odd thing ... at least it seems odd to me ... is that Microsoft has many of the parts of a great blogging tool already in place but the parts have not been brought together yet.

To me this has been the most interesting aspect of working at Microsoft. Before I was assimilated I'd hear complaints about lock-in and how Microsoft was unfairly competing by building software products that competed with offerings from other companies or by making its products work better together. Since I used to hang out on Slashdot a lot, I usually got the "Microsoft is a greedy, evil company that is trying to rule the software world" perspective.

Working in the B0rg Cube I've found that in many cases there is significant customer demand for the very actions Microsoft takes that its competitors rail against. Customers do want everything and the kitchen sink built into the products they buy so they don't have to buy multiple products or deal with multiple vendors. I suspect the main problem people have with Microsoft isn't that it tries to satisfy customer demands for more functionality and software that works better together, but that in some cases it builds the software itself instead of licensing it from others and thus contributing to the software ecosystem. From what I've seen this isn't the general case; in general this extra value is added by Microsoft partners, but in the cases where partners become competitors there are always hard feelings.

Disclaimer: The above comments do not represent the thoughts, intentions, plans or strategies of my employer. They are solely my opinion.


Categories: Life in the B0rg Cube

October 18, 2003
@ 03:41 AM
Recently I've noticed the replacement in the buzzword lexicon of the phrase "XML Web Services" with "Service Oriented Architecture". I speculate on why I think the shift occurred and what it means for XML and the Web.

Categories: XML

October 18, 2003
@ 02:33 AM

Elizabeth Spiers writes

If Markoff thinks all (or even most) bloggers are keeping diaries online, then Jarvis is probably right: he doesn't read blogs—which seems ironic, given that he's the technology reporter for the Times

I find her quote puzzling. From what I've seen the average weblog is an online diary. In general weblogs take the form of diaries, commentaries, link collections or some combination thereof. The most common form is the online diary which can be confirmed by selecting any dozen blogs at random from the hundreds of thousands at Blogger, LiveJournal or Xanga and examining the writings. The fact of the matter is that for every blog that is enlightened commentary about technology or politics there are a dozen blogs by some high school girl complaining about pimples and boys.

I find it amusing that the "blogs will change everything" hype crowd try to deny this. That's like denying that for a long time the only way to make money on the World Wide Web was with pr0n or that the driving incentive for broadband is copyright infringement and pr0n. Let's not forget that lots of innovation on the Web was driven by pr0n sites.

The blogerati need to accept the fact that their medium of communication is also the favored way for teenage girls to carry on in the grand tradition of "Dear Diary". Remember, just because IRC is mainly the haven of script kiddies and w4r3z d00ds doesn't make it any less useful, nor does the predominance of email forwards and spam make email a pointless technology. Blogs are the same way.


Categories: Ramblings

October 17, 2003
@ 04:30 AM

After Joel Spolsky's recent post on exceptions which generated lots of dissenting opinions he has attempted to recant in a way that leads one to believe he is bad at accepting critical feedback.

In his most recent posting Joel writes

And Back To Exceptions

There's no perfect way to write code to handle errors. Arguments about whether exception handling is "good" or "bad" quickly devolve into disjointed pros and cons which never balance each other out, the hallmark of a religious debate. There are lots of good reasons to use exceptions, and lots of good reasons not to. All design is about tradeoffs. There is no perfect design and there is certainly never any perfect code.

His response is a cop-out and not something I'd expect from a professional software developer who has been shown the errors in his proposed solution. This non-recant actually lowers my opinion of Joel in ways his original post did not. At least the original post could be based on past experience with bad uses of exceptions, or usage of exceptions in programming languages where they were bolted on, such as C++. His newest post just sounds like he is trying to save face instead of critically analyzing the dissenting opinions and showing the pros and cons of his approach and those of others. The best part is that he picks perhaps the worst defense of using exceptions to argue against

With status returns:

STATUS DoSomething(int a, int b) {
    STATUS st;
    st = DoThing1(a);
    if (st != SGOOD) return st;
    st = DoThing2(b);
    if (st != SGOOD) return st;
    return SGOOD;
}

And then with exceptions:

void DoSomething(int a, int b) {
    DoThing1(a);
    DoThing2(b);
}

Ned, for the sake of argument, could you do me a huge favor, let's use a real example. Change the name of DoSomething() to InstallSoftware(), rename DoThing1() to CopyFiles() and DoThing2() to MakeRegistryEntries().

Since I'm always one to take a challenge, here's how I'd probably write the code

void DoSomething(string name, string location){
  try{
    CopyFiles(name, location);
    MakeRegistryEntries(name, location);
  }catch(SoftwareInstallationException e){
    // the rollback/logging helper names below are illustrative
    RollbackInstallation(name, location);
    LogError(e);
    return;
  }catch(FileIOException fioe){
    RollbackFileCopy(name, location);
    LogError(fioe);
    return;
  }catch(RegistryException re){
    RollbackRegistryEntries(name, location);
    LogError(re);
    return;
  }
  DisplayInstallationCompleteMessage(name, location);
}


or even better

void DoSomething(string name, string location){
  try{
    CopyFiles(name, location);
    MakeRegistryEntries(name, location);
  }catch(Exception e){
    // single catch-all: wrap the original error and rethrow
    throw new InstallationException(e, name, location);
  }
  DisplayInstallationCompleteMessage(name, location);
}


Of course, you could ask what happens if a rollback attempt fails, in which case the rollback should either catch the exceptions itself or throw an exception of its own. Either way, I prefer the above approaches to littering the code with if...else blocks.


Categories: Ramblings

October 14, 2003
@ 04:04 PM

In the transition from Visual Studio 2002 (which uses v1.0 of the .NET Framework) to Visual Studio 2003 (which uses v1.1 of the .NET Framework) the Microsoft Visual Studio team decided to change the project file formats so that they'd be incompatible. This means that development teams can't build .NET Framework projects together if developers use different versions of Visual Studio that were released merely one year apart.

Since I use Visual Studio 2002 at home, this has led to some complaints from various members of the RSS Bandit workspace who, like most developers, are using the latest and greatest (Visual Studio 2003), because it means the project files under source control are not compatible with their version of Visual Studio. I have been hesitant to move to Visual Studio 2003 and v1.1 of the .NET Framework because I don't want RSS Bandit to become targeted at v1.1 of the .NET Framework instead of being able to run on both v1.0 and v1.1. Based on my web server logs, users with v1.0 of the .NET Framework installed still outnumber those with v1.1.

However it seems this is out of my hands. Last night Torsten initiated a vote in the RSS Bandit workspace and the numbers overwhelmingly indicate that the members of the workspace want to move to v1.1 of the .NET Framework. So it looks like I need to go buy a copy of Visual Studio 2003.

I now have to figure out the best process to put in place to ensure that RSS Bandit always runs on both v1.0 and v1.1 of the .NET Framework. Already there is one issue I'm sure will cause breakages if I don't keep an eye on it: in v1.1, the System.Xml.Xsl.XslTransform class had a number of overloads of its Transform() method obsoleted and replaced by new methods. When compiled against v1.1 of the .NET Framework the compiler complains that RSS Bandit uses obsoleted methods, which is fine in our case since the obsoletions were the result of some overreactions during our month-long security push from last year. However, people keep thinking that they should change the method calls to the new ones, which would break RSS Bandit when it runs on v1.0 of the .NET Framework since those methods don't exist there.
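To make the pitfall concrete, here's a sketch of my own (not actual RSS Bandit code; the file names are placeholders) of the kind of call that compiles against both versions of the framework:

```csharp
// Compiles on both v1.0 and v1.1 of the .NET Framework; on v1.1 the
// compiler merely emits an 'obsolete' warning for this Transform overload.
// "Fixing" the warning by calling the new overload that takes an XmlResolver
// would break the v1.0 build, since that overload only exists in v1.1.
XslTransform xslt = new XslTransform();
xslt.Load("stylesheet.xsl");
xslt.Transform(new XPathDocument("feed.xml"), null, Console.Out);
```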

Last week Torsten asked when I think we'll start to make the transition to v2.0 of the .NET Framework (no doubt inspired by all the PDC hype). From what I've seen that's going to be a bitch of a move. I suspect we won't do that till a few months after Whidbey (the next version of the .NET Framework) goes gold, which gives us about a year and a half.


Categories: RSS Bandit

October 13, 2003
@ 11:23 PM

Joel Spolsky writes

The reasoning is that I consider exceptions to be no better than "goto's", considered harmful since the 1960s, in that they create an abrupt jump from one point of code to another. In fact they are significantly worse than goto's:

  1. They are invisible in the source code. Looking at a block of code, including functions which may or may not throw exceptions...
  2. They create too many possible exit points for a function. To write correct code, you really have to think about every possible code path through your function...

A better alternative is to have your functions return error values when things go wrong, and to deal with these explicitly

Whoa, this is probably the second post from Joel that I've completely disagreed with. Exceptions may suck (unchecked exceptions especially) but using return values with error codes sucks a lot more.

Categories: Ramblings

RSS Bandit is now the most kickass news aggregator for the .NET Framework. However, there are features people keep asking for, which I haven't seen others working on, that I'll try to get into RSS Bandit before the end of the year. In this post I describe the top 3 as I see them.

Categories: RSS Bandit

October 12, 2003
@ 10:08 PM

From the Reuters article Suspected Penis Snatcher Beaten to Death.

BANJUL (Reuters) - A 28-year-old man accused of stealing a man's penis through sorcery was beaten to death in the West African country of Gambia on Thursday, police said. A police spokesman told Reuters that Baba Jallow was killed by about 10 people in the town of Serekunda, nine miles from the capital Banjul.

Reports of penis snatching are not uncommon in West Africa, with purported victims claiming that alleged sorcerers simply touched them to make their genitals shrink or disappear in order to extort cash in the promise of a cure.

The police spokesman said many men in Serekunda were now afraid to shake hands, and he urged people not to believe reports of "vanishing" genitals. Belief in sorcery is widespread in West Africa.

This particular urban legend was quite common back home in Nigeria, and every couple of years there'd be "seasons" of mass hysteria where people got beaten up for allegedly snatching penises. There was always some story of "a friend of a friend" who got his penis snatched by some lady/gentleman/witchdoctor they bumped into on a crowded street but managed to confront the person in time and request their penis back.

I never realized this urban legend/mass hysteria was widespread across West Africa. You learn a new thing every day.


Categories: Ramblings

Aaron Swartz has an interesting post entitled Shades of Gray where he asks the following questions
  1. What do you do when someone says something obviously false? Do you correct them? Do you ignore them?

  2. What if they continue to repeat it? Are they malicious? Misguided? Simply taking another, but still reasonable, point of view?

  3. What if they get people to agree with them? Are they a conspiracy? Biased? Driven by other motivations? Amoral? Immoral?

  4. What if everyone starts to say it? Do you question your belief? Your sanity? Your life?

Below are some thoughts that surfaced when I read his entry and the followup post

Categories: Ramblings

October 11, 2003
@ 07:29 PM

I just made available a download containing a signed assembly (i.e. a DLL, for the non-.NET savvy) for the EXSLT.NET project. You can download it from here. Here's the elevator speech description of the project:

The EXSLT.NET library is an implementation of the EXSLT extensions to XSLT for the .NET platform. EXSLT.NET implements the following EXSLT modules: Dates and Times, Common, Math, Regular Expressions, Sets and Strings. In addition, EXSLT.NET provides its own set of useful extension functions. See the full list of supported extension functions and elements in the "Extension Functions and Elements" section.

The project is primarily a merger of the code from my article EXSLT: Enhancing the Power of XSLT and Oleg Tkachenko's article Producing Multiple Outputs from an XSL Transformation with a number of enhancements from folks like Dimitre Novatchev and Paul Reid.

I'll probably write a followup article about this for my Extreme XML column on MSDN. In the meantime, I assume Oleg will send out an announcement to xsl-list & xml-dev about the project in the next few days.
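For anyone unfamiliar with EXSLT, here's a minimal illustration of what calling one of the extension functions looks like in a stylesheet; date:date-time() comes from the Dates and Times module mentioned above:

```xml
<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:date="http://exslt.org/dates-and-times"
  exclude-result-prefixes="date">
  <xsl:template match="/">
    <!-- writes out the current date and time -->
    <generated><xsl:value-of select="date:date-time()"/></generated>
  </xsl:template>
</xsl:stylesheet>
```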


Categories: XML

October 10, 2003
@ 04:40 PM
The one where yet another person requests a feature that's been in RSS Bandit for months.

Categories: RSS Bandit

October 10, 2003
@ 10:52 AM
An exploration of the term Almost Standard Compliant

Categories: Life in the B0rg Cube

October 10, 2003
@ 10:27 AM

Since I moved websites I'd like people to access my dasBlog RSS feed as opposed to my old RSS feeds. However, whereas getting a web server to send a 301 (Moved Permanently) result to a client is a piece of cake in Apache (simply edit some config file), it seems to be a bitch and a half in IIS.

If anyone has any tips or tricks as to how to setup IIS 5.1 running ASP.NET to send 301s I'm all ears.
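That said, one approach I'd expect to work is to map the old feed URL to an ASPX page (or an IHttpHandler) and write out the 301 by hand. A minimal, untested sketch, with a placeholder destination URL:

```csharp
// In the ASPX page or handler that the old feed URL maps to.
// The destination URL below is a placeholder for the new dasBlog feed.
Response.StatusCode = 301;
Response.AddHeader("Location", "http://www.example.com/weblog/rss.aspx");
Response.End();
```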


Categories: Ramblings

In which I go over the first three features I plan to checkin to the dasBlog work space. Click below for details.

Categories: Das Blog

October 8, 2003
@ 06:02 PM

I was at Sam Goody again this past weekend, where I picked up the Tenth Anniversary Edition of Ninja Scroll and where I'll probably soon be returning once they get the Ninja Scroll Series. I couldn't help but notice the disparity between the way entertainment media like console games and movies are sold and the way music CDs are. Sam Goody had an "under $10" rack where one could buy big budget movies from a few years ago for less than half their original price at the time they were first released. Similarly, various video games were being sold at half price because they were "platinum sellers". On the other hand, I still have to fork over $20 after taxes if I want to pick up a CD over a decade old like NWA's second album. What seemed quite absurd was that it was possible to buy a DVD for half the price one would have paid for its accompanying soundtrack on CD. There is clearly a problem here, yet the RIAA continues to act like the problem is with music fans and not with them. Being blinded by greed is an unfortunate thing.

Phil Greenspun has a post entitled RIAA, friendship and prostitution where he states

In the bad old days of Napster you kept your MP3 collection on your desktop.  Today, however, an MP3 jukebox with enormous capacity can be purchased for $200.  It won't be long now before average people carry around their entire music collections on their cell phones.

Consider this scenario.  You are sitting at Starbucks and see a friend.  He is not inside your Starbucks but across the street in the other Starbucks.  You walk across the street.  Both of you happen to have your MP3 jukeboxes in your pockets.  He says "Have you heard the latest Britney Spears song?  It reminds me so much of the late Beethoven Quartets with some of Stravinsky's innovative tonality."  You haven't?  Just click your MP3 jukeboxes together and sync them up.  Any tracks that he had and you didn't you now have.  You're using a digital audio recorder; the device won't do anything except record music.  You're not paying each other so it is noncommercial.  Under Section 1008 what you're doing is perfectly legal in the United States.

<snip />

What is the point of Internet file sharing when people can, perfectly legally, copy as much music from each other as they could reasonably want?  Only a person with zero friends would want to bother with file sharing.

This is an interesting point and one I've heard expressed before by one of my friends who owns an iPod. It takes about 10-15 minutes to push an album's worth of songs to an iPod. Nowadays when someone tells him about a good album they just bought, he doesn't even have to borrow it for more than 15 minutes to have all the music on his iPod. The problem for the RIAA is that unlike listening to music on your PC, which could be considered "try before you buy" since the PC is not the main music device for a large number of the population, an iPod is likely to be the main music player for a lot of people, and once music is on it there is little incentive to go out and buy it.

Once the next generation of iPods (and other hard drive based digital music players) show up with wireless file sharing (just beam that song Scotty) this trend will be significantly accelerated.

Personally, I hope the music industry adapts but I definitely hope that during the adaptation process we lose the RIAA.


Categories: Ramblings

October 8, 2003
@ 04:15 PM
I finally decided to take the plunge and install dasBlog. It definitely wasn't a straightforward experience. Click below to read about the issues I faced.

Categories: Das Blog

Recently on the atom-syntax list someone posted a link to Jeremy Allaire's RSS-Data Proposal, which to me, Tim Bray, and Bill De Hora looked like an idea without much merit. The proposal is yet another iteration of the argument about how to embed extra information within an RSS feed besides the traditional elements representing the publication date, author and description of a news item. Jeremy Allaire's proposal not only attempts to solve the problem in a way that is less flexible and less useful than the way the problem is solved in RSS feeds today (via namespaced vocabularies) but also does not take into account current industry practices for indicating datatype information in an XML document. I had originally planned to ignore the proposal along with the ensuing interest in the format that sprang up in a few weblogs, but after seeing an article about RSS-Data in eWeek which attempts to legitimize what is basically a bad idea I decided to go ahead and post a critique of the proposal.

Below is a detailed look at the problems with the RSS-Data proposal and how some of its idiosyncrasies can be improved.



Categories: XML

Jeremy Cowles has produced some kickass XSLT themes for RSS Bandit such as DOS (my new favorite), Halloween and Unwise Terminal. A new version of RSS Bandit is now available and contains six new themes; download it from here. The details of the changes since the previous version are contained below.

As part of my ongoing effort to document the thinking process behind the work we've been doing on the XML APIs for the next version of the .NET Framework, I wrote an article entitled Can One Size Fit All: Exploring the Possibility of One API for XML Processing which appears in the current issue of XML Journal. The article can be considered a followup to my previous article, A Survey of APIs and Techniques for Processing XML

More below on the trojan I mentioned yesterday, why I decided to specialize in computer systems instead of software engineering while in school, hip hop feuds, and release notes for RSS Bandit.




My machine was acting weird this morning; I seemed to have a network connection but couldn't actually access any sites on the Internet. Suspecting a problem with my ISP's DNS servers, I called Comcast, where I learned that my machine was infected with the Qhosts trojan. It seems I browsed some malicious website (or received some malicious HTML spam) which allowed my machine to get 0wn3d. It seems a similar exploit and trojan are behind the Half Life 2 source code leak. I hate this shit.

More below on the W3C's recent Workshop on Binary Interchange of XML Information Item Sets (aka the binary XML workshop), developer and user communities, the future of BlogX, and really cool software for the iPod.