October 28, 2004
@ 06:00 PM

Most people looking for a free RSS reader on Windows tend to gravitate towards SharpReader since it is the most popular one and the first they hear about. Recently I've begun to see more posts from people who've grown tired of its limited feature set and plain user interface and have switched to RSS Bandit.

In his post Vote With Your Download Dave writes

For the longest time I’ve used Sharpreader as my RSS reader. I had had some problems about which I contacted the SharpReader people. I think I sent two e-mails but never got a response. So I started looking.

I found RSS Bandit and I’m all about it now. I like the interface a little better and it’s got lots of nifty features that SharpReader doesn’t. (screenshots are here). So since both of these are free, and RSS Bandit is Open Source even, I can Vote with my download.

In his post RssBandit is awesome!  Imran Koradia writes

I've been using SharpReader for a while since that's the first one that turned up in google's search results on searching for an rss reader :) ! And considering I'm really lazy switching utility software even when it would seem the most logical thing to do, I've stayed on SharpReader for a while until yesterday. I finally switched to RssBandit. Not that SharpReader is bad, but RssBandit is more feature rich and looks way cooler than SharpReader :) I love the tabs that they've got since IE doesn't have them (which is the only reason why I switched to Netscape - I have no problems with IE as such otherwise..). Ofcourse, I could also use the local MSDN help to browse with multiple tabs but that somehow doesn't seem like a good idea to me..

We will be adding a lot more features and eye candy in the next release. There are one or two things SharpReader does that RSS Bandit doesn't, such as the ability to delete posts, which I'll be adding to RSS Bandit this weekend. If you liked the last release then the next one will rock your world. I can't wait. ;) 


 

Categories: RSS Bandit

Almost two years ago I wrote a blog entry entitled Useful vs. Useless Abstractions which stated that the invention of URIs by the IETF/W3C crowd to replace the combination of URLs and URNs was a step backwards. I wrote

URIs are a merger of the syntax of URLs and URNs which seem to have been repurposed from their original task of identifying and locating network retrievable documents to being more readable versions of UUIDs which can be used to identify any person, place or thing regardless of whether it is a file on the Internet or a feeling in your heart.

This addition to the URN/URL abstraction seemed to address some of the bits which may have been considered leaky (if I enter http://www.yahoo.com in my browser and it loads the page from its cache then the URL isn't acting as a location but as an identifier). Others saw URIs as a source of user-friendly UUIDs for people who needed to identify things on the Web. I've so far come into contact with URIs in two aspects of my professional experience and they have both left a bad taste in my mouth.

URIs and the Semantic Web: Ambiguity²

One problem with URIs is that they don't uniquely identify a single thing. Consider the following hyperlinked statements

Dare is a Georgia Tech alumnus.

Dare's website is valid XHTML.

In the above statements I use the URI "http://www.25hoursaday.com" to identify both myself and my web page. This is a bad thing for the Semantic Web. If you read Aaron Swartz's excellent primer on the Semantic Web you will notice where he talks about RDF and its dependence on URIs
...
Now consider...

<http://aaronsw.com/> <http://love.example.org/terms/reallyLikes> <http://www.25hoursaday.com/> .

Can you tell whether Aaron really likes my website or me personally from the above RDF statement? Neither can I. This inherent ambiguity is yet another issue with the vision of the Semantic Web and the current crop of Semantic Web technologies that are overly dependent on URIs.

Over the past few years that I've been on the W3C Technical Architecture Group mailing list I've seen this inherent ambiguity of URIs result in many lengthy, seemingly never-ending discussions about how to work around this problem or whether it is even a problem at all. The discussion thread entitled Information resources? which morphed into referendum on httpRange-14 is the latest incarnation of this permathread on the WWW-TAG mailing list.

I was much heartened to see that Tim Berners-Lee is beginning to see some of the problems caused by the inherent ambiguity of URIs. In his most recent response to the "referendum on httpRange-14" thread he writes 

> It is a best practice that there be some degree of consistency
> in the representations provided via a given URI.

Absolutely.

> That applies *both* when a URI identifies a picture of
> a dog *and* when a URI identifies the dog itself.
>
> *All* URIs which offer consistent, predictable representations will be
> *equally* beneficial to users, no matter what they identify.

Now here seems to be the crunch.
The web architecture relies, we agree I think, on this consistency
or predictability of representations of a given URI.

The use of the URI in the web is precisely that it is associated
with that class of representations which could be returned for it.

Because the "class of representations which could be returned"
is a rather clumsy notion, we define a conceptual thing
which is related to any valid representation associated with the URI,
and as the essential property of the class is a similarity in
information content, we call the thing an Information Resource.

So a URI is a string whose sole use in the web architecture
is to denote that information resource.

Now if you say in the semantic web architecture that the same  will
identify a dog, you have a conflict.


>
>> The current web relies on people getting the same information from
>> reuse of the same URI.
>
> I agree. And there is a best practice to reinforce and promote this.
>
> And nothing pertaining to the practice that I and others employ, by
> using http: URIs to identify non-information resources, in any way
> conflicts with that.

Well, it does if the semantic web can talk about the web, as the
semantic web can't be ambiguous about what an identifier identifies in the way that
one can in english.

I want my agent to be able to access a web page, and then use the URI
to refer to the information resource without having to go and find some
RDF somewhere to tell it whether in fact it would be mistaken.

I want to be able to model lots and lots of uses of URIs in existing
technology in RDF. This means importing them wholesale,
it needs the ability to use a URI as a URI for the web page without
asking anyone else.

The saga continues. The ambiguity of URIs has also been a problem with XML namespaces since users of XML often assume a namespace URI should lead to a network retrievable document when accessed. Since namespace names are URIs, this isn't necessarily true. If they were URLs it would be, and if they were URNs it would not.
 

Categories: Technology

At Chris Sells' XML DevCon conference* Don Box gave a talk called WS-Why which is described below

"Why? This talk will make sense of why various WS-* specs came to life and which ones every developer should ignore. Naturally, the size of this set is non-zero, however, it is not the entire universe. Hopefully, the audience will be left with a mental model for what to ignore going forward as the WS-* machine continues to move forward."

I got to hang out with Don before the conference as well as read the slides for his talk, and although I liked the direction of the talk I wish it had been more prescriptive. Before continuing to read it's a good idea to read a summary of Don's talk such as the one at Jeff Barr's blog post AXDC - Don Box and WS-Why?.

In his talk Don breaks XML Web Services specs into

  • WS-DesertIsland - specs that are a must have that form the core XML Web Service specs. These include XML, SOAP, WS-Addressing, WS-MetadataExchange &  XSD+WSDL
  • WS-IslandResort - the next layer of important specs after the core. These include WS-Security, WS-Trust, WS-ReliableMessaging & WS-Policy
  • WS-NewZealand - specs you'd probably need once in a lifetime. These include WS-Eventing, WS-Enumeration & WS-AtomicTransaction
  • WS-IslandOfDoctorMoreau - the ugly stepchildren of the WS-* spec family. These include UDDI, WS-Transfer, WS-BusinessActivity, MTOM and BPEL4WS
  • WS-FantasyIsland - specs Don would love to see. These include WS-Data (XPath/SQL-like query for web services), SOAP over TCP, XSD with better support for versioning, binary XML & WSDL based on RELAX NG.

As can be expected there have been folks who've done a deeper analysis of Don's talk than what I've done above. The most significant so far has been Steve Maine's post The Web Services Kernel which gives an overview of the 5 specs in Don's WS-DesertIsland.

In general I agree with the direction Don took with the talk. However, although it was an appropriate talk for the audience of the XML DevCon, a bunch of implementers and industry experts, I don't see it as significant guidance for developers trying to make sense of the mess of WS-* specs. Don's talk is best seen through the lens of "If I were an implementer, which specs should I implement in my XML Web Services toolkit?", not "If I were a developer, which specs should I use from my XML Web Services toolkit?". This is an important distinction. It is why specs like WS-Addressing are in Don's WS kernel even though they only matter if you aren't using HTTP as your transport, which most developers will be. 

The talk I'd most love to see next, from Don or whoever else on the Indigo team does the conference circuit next year, is WS-Which: How to decide what XML Web Services specs are right for your application. As someone who now has the responsibility of designing XML Web Service endpoints within an intranet (aren't job switches grand?) and perhaps beyond, I'm interested in guidance that explains when I should use WS-Security versus SOAP + HTTPS, or whether there are any scenarios where using MTOM or WS-Transfer actually makes sense.

Don's talk was a good start by the Indigo team at providing guidance for future users of the XML Web Services family of specs, but there's still a lot of guidance the industry needs that is currently missing. More importantly, whether or not a spec is in Don's WS kernel doesn't by itself indicate how significant it will be to a particular class of application developers. Perhaps I can get Doug to give this talk next year.

* Someone really needs to explain to Chris Sells how the Web works. Constantly changing the content of the page at http://www.sellsbrothers.com/conference breaks links to the site.


 

Categories: XML

October 24, 2004
@ 09:45 PM

Friday was my last day on the XML team at Microsoft. On Monday I start work as a Program Manager on the MSN Communication Services Platform team. This team is responsible for the server-side implementation of several aspects of MSN including MSN Messenger, some of Hotmail, and MSN Groups.

My immediate responsibilities will involve working on the back end of MSN Spaces which is a hosted blogging service in the same vein as Blogger, LiveJournal and TypePad. The site is currently in beta for the Japanese audience.

This means I'll no longer be responsible for the XML Developer Center on MSDN which now will be managed by Irwin Dolobowsky. My work-related blog will shortly no longer appear on that page. However I will continue to write the Extreme XML column on a bi-monthly basis. Arpan Desai is the contact person for the aspects of the XML programming model in the .NET Framework I was responsible for, specifically the System.Xml.XPath namespace, System.Xml.Schema namespace and the System.Xml.XmlNode hierarchy.

If you are curious as to why I decided to make this move, some of the thinking behind my decision can be gleaned from my weblog post Social Software is the Platform of the Future

Update: This move does not change my status as the project lead on RSS Bandit.


 

Categories: Life in the B0rg Cube

October 24, 2004
@ 02:53 AM

A couple of people at work use the term strategy tax when referring to compromises or similar decisions a product team makes that are less than optimal because they have to satisfy the corporate strategy. More and more, I see this term coming up with regards to what Microsoft has termed "integrated innovation". If you are unfamiliar with this term, a good description of it can be found in the Microsoft press release 'Integrated Innovation' Provides Partners with Roadmap to Success where it states

The pursuit of Integrated Innovation pertains both to product and process. Integrated innovation is manifested in the Microsoft platform but it extends beyond the technology stack. It's Microsoft's approach to building software: simultaneously innovating in product design and development and tightly integrating products, each one with all the others.

Before pointing out the obvious cons of such an approach, its benefits should be highlighted. Many would argue that the reason Microsoft Office became the multi-billion dollar business it is today is that it applied the doctrine of integrated innovation. Microsoft took what used to be considered disparate applications (word processing, presentation software, spreadsheets, etc.) and turned them into a cohesive productivity suite whose parts worked seamlessly together. The fact that one can design a chart in Excel then cut and paste it into Word with no problems was a big deal when it was first implemented and it still is today. As the saying goes, a lot of effort went into making this look effortless. This, combined with the fact that you could purchase these tools in a single box, made it the killer application for businesses all over the world.

With that out of the way, let's discuss the cons of this approach. Many students of software engineering will tell you that the goal of any systems architect is to increase cohesion and reduce the coupling in the software architecture. The problems with tight coupling (or tight integration) are well known. As mentioned in Object Oriented Software Engineering Knowledge Base, tight coupling causes a system to be hard to understand and change because changes in one place will require changes somewhere else. Requiring changes to be made in more than one place is problematic since it is time-consuming to find the different places that need changing, and it is likely that errors will be made. 

Those are developer-centric reasons why tight integration is disadvantageous, but what do they mean in practical terms? The first thing it means is that projects take longer. This is basically the Mythical Man Month effect. If you have two teams of 10 people working on separate projects that will each take six months to complete and tell them to combine their efforts, it will take them more time to complete their joint tasks, not less. As implied above, this also means a bug in one component is a bug in all the components, a security flaw in one component is a security flaw in all the components, and a slip in one component's schedule is a slip in the schedule for all the components. Finally, it makes the systems more complex to manage and understand, which means they are harder to maintain.

In today's world, where the majority of Microsoft's software competes with products of the Open Source community and its mantra of "release early, release often", and where increased threats from malicious hackers make security a primary concern, the doctrine of integrated innovation seems almost anachronistic.

The effects of integrated innovation on product schedules shouldn't be underestimated. Although Microsoft Office is an example of integrated innovation when considering its constituent parts, it is also an example of eschewing this practice when taken as a whole. The Office team releases a cohesive product with minimal dependencies on external products. The Office team has also consistently released software every 2 to 3 years over the past decade: Office 95, Office 97, Office 2000, Office XP, and Office 2003. Certain other products at Microsoft which are at the forefront of integrated innovation do not seem to have such consistent release cycles. These products are paying the strategy tax.

The main reason for writing this blog entry is so that next time someone asks me what I mean by "strategy tax" I can just give them a link to this entry instead of repeating myself.


 

Categories: Life in the B0rg Cube

October 22, 2004
@ 05:01 AM

It's hard to tell that the headlines Google misses earnings expectations and Google's 3Q Profit More Than Doubles are about the same occurrence. I guess it's all about perspective. The market seems to be taking the half-full approach, seeing as GOOG jumped by $8.89 to close at $149.38.

Even more impressive is that it is trading at $161.79 in after hours trading. And I thought the original IPO price of $135 was excessive. Dang.


 

Karl Waclaweck has released version 1.0 of the SAX for .NET project. In the announcement on XML-DEV Karl writes

This is the first production release of the C#/.NET port of the SAX API.
It should be compatible with MS.NET 1.1 and Mono 1.0.2.

Since the API alone is not enough, a SAX parser implementation has been
released as well: SAXExpat.NET 1.0. It is based on the Expat parser, and
will work on MS.NET 1.1. Currently Mono 1.0.2 is not able to run it,
but this will hopefully change with future Mono releases.

Another implementation based on a port of the AElfred parser will
be available soon. It should work under both, MS.NET and Mono.

The project page is here: http://saxdotnet.sourceforge.net/

It's good to see more Open Source XML projects showing up for the .NET Framework. I haven't missed using SAX that much but I imagine people coming to the .NET Framework from the Java world would love to be able to keep using their favorite push model XML parsing API.


 

Categories: XML

It seems about half the feeds in my aggregator are buzzing with news of the new Google Desktop Search. Although I don't really have a need for a desktop search product, I was going to download it and try it out anyway until I found it uses a web browser interface accessed via a local web server. Not being a fan of browser-based user interfaces I decided to pass. Since then I've seen a couple of posts from people like Joe Gregorio and Julia Lerman who've claimed that Google Desktop Search delivers the promise of WinFS today.

Full text search is really orthogonal to what WinFS is supposed to enable on the Windows platform. I've written about such misconceptions in the past, most recently in my post Killing the "WinFS is About Making Search Better" Myth where I wrote

At its core, WinFS was about storing strongly typed objects in the file system instead of opaque blobs of bits. The purpose of doing this was to make accessing and manipulating the content and metadata of these files simpler and more consistent. For example, instead of having to know how to manipulate JPEG, TIFF, GIF and BMP files there would just be a Photo item type that applications would have to deal with. Similarly one could imagine just interacting with a built in Music item instead of programming against MP3, WMA, OGG, AAC, and WAV files. In talking to Mike Deem a few months ago and recently seeing Bill Gates discuss his vision for WinFS to folks in our building a few weeks ago it is clear to me that the major benefits of WinFS to end users is the possibilities it creates in user interfaces for data organization.

Recently I switched from using WinAmp to iTunes on the strength of the music organizational capabilities of the iTunes library and "smart playlists". The strength of iTunes is that it provides a consistent interface to interacting with music files regardless of their underlying type (AAC, MP3, etc) and provides ways to add metadata about these music files (ratings, number of times played) then organize these files according to this metadata. Another application that shows the power of the data organization based on rich, structured metadata is Search Folders in Outlook 2003. When I used to think of WinFS I got excited about being able to perform SQL-like queries over items in the file system. Then I heard Bill Gates and Mike Deem speak about WinFS then saw them getting excited about the ability to take the data organizational capabilities of features like the My Pictures and My Music folders in Windows to the next level it all clicked.

Now this isn't to say that there aren't some searches made better by coming up with a consistent way to interact with certain file types and providing structured metadata about these files. For example a search like

Get me all the songs [regardless of file type] either featuring or created by G-Unit or any of its members (Young Buck, 50 Cent, Tony Yayo or Lloyd Banks) between 2002 and 2004 on my hard drive

is made possible with this system. However it is more likely that I want to navigate this in a UI like the iTunes media library than I want to type the equivalent of SQL queries over my file system.

Technologies like Google Desktop Search solve a problem a few people have while WinFS is aimed at solving a problem most computer users have. The problem Google Desktop Search mainly addresses is how to locate a single file in your file system that may not be easy to navigate to via the traditional file system explorer. However most computer users put files in a few locations on their file system so they usually know where to find the file they need. Most of the time I put files in one of four folders on my hard drive 

  1. My Documents
  2. My Music
  3. Visual Studio Projects (subfolder of My Documents)
  4. My Download Files 

For some of my friends you can substitute the "Visual Studio Projects" folder for the "My Pictures" folder. I also know a number of people who just drop everything on their Windows desktop. However the point is still the same, lots of computer users store a large amount of their content in a single location where it eventually becomes hard to manipulate, organize and visualize the hundreds of files contained therein. The main reason I stopped using WinAmp was that the data organization features of Windows Explorer are so poor. Basically all I have when dealing with music files is 'sort by type' or some variation of 'sort by name' and a list view. iTunes changed the way I listened to music because it made it extremely easy to visualize and navigate my music library. The ability to also perform rich ad-hoc queries via Smart Playlists is also powerful but a feature I rarely use.

Tools like Lookout and Google Desktop Search are a crutch to get around the fact that the file navigation metaphor on most desktop systems is past its prime and in dire need of improvement. This isn't to say fast full text search isn't important; even with all the data organization capabilities of Microsoft Outlook I still tend to use Lookout when looking for emails sent more than a few weeks ago. However it is not the high order bit in solving the problems most computer users have with locating and interacting with the files on their hard drives.

The promise of WinFS is that it aims to turn every application [including file navigation applications like Windows explorer] into the equivalent of Outlook and iTunes when it comes to data visualization and navigation by baking such functionality into the file system APIs and data model. Trying to reduce that to "full text search plus indexing" is missing the forest for the trees. Sure that may get you part of the way but in the end it's like driving a car with your feet. There is a better way and it is much closer than most people think.


 

Categories: Technology

In his post The gender profile of Wikipedia Joi Ito writes

I haven't conducted any scientific analysis or anything, but Wikipedia seems much more gender balanced than the blogging community. I know many people point out that ratio of men at conferences on blogging and ratio of men who have loud blog voices seems to be quite high.

The core mistake Joi Ito makes here is assuming that participation is equivalent to talking about participation. I've seen several statistics and surveys on blogosphere (God, I hate that word) participation which all seem to point to the same conclusion: female bloggers tend to outnumber male bloggers.

For example, according to the LiveJournal statistics page there are twice as many females blogging as males in that community. Given that LiveJournal is one of the oldest and largest blogging communities, with almost 2 million active blogs (and almost 5 million user accounts), I think this counts for a lot more than the observation that not many women are seen at geeky conferences like Web 2.0 or Tim O'Reilly's FooCamp.   

One thing I've found interesting to ponder is why, if there are more female bloggers than male bloggers, lists such as the Technorati Top 100 are dominated by male bloggers. On Friday I attended a talk by Susan Herring entitled Conversations in the blogosphere: An analysis "from the bottom up" which shed some light on the issue; she discussed some research she and her colleagues had undertaken to discover the nature of conversations in the blogosphere. Interesting data points from her presentation include

  • Blogs with more outgoing links tend to be more linked to
  • Women tend to link less than men (even in women-centric blog circles such as homeschooling the rate of linking is comparatively less than male-centric blog circles like warblogging)
  • Linking doesn't tend to be reciprocal
  • Small percentage of blogosphere is interlinked.
  • 67% - 75% of blogs surveyed don't link outwards to other blogs
  • 95% of blogs surveyed have less than 10 inbound links from other blogs

Some of the data points from Susan's presentation gave me some ideas as to why the various lists of popular weblogs always seem to be male dominated.

The methodology and results of Susan's research can be obtained from her paper, Conversations in the blogosphere: An analysis "from the bottom up."


 

Categories: Ramblings

Derek Denny-Brown, the dev lead for both MSXML and System.Xml, who was involved with XML before it even had a name, has finally started a blog. Derek's first XML-related post is Where XML goes astray... which points out three features of XML that have caused significant problems for users and implementers of XML technologies. He writes

First, some background: XML was originally designed as an evolution of SGML, a simplification that mostly matched a lot of then existing common usage patterns. Most of its creators saw XML and evolving and expanding the role of SGML, namely text markup. XML was primarily intended to support taking a stream of text intended to be interpreted as a human readable document, and delineate portions according to some role. This sequence of characters is a paragraph. That sequence should be displayed with a link to some other information. Et cetera, et cetera. Much of the process in defining XML based on the assumption that the text in an XML document would eventually be exposed for human consumption. You can see this in the rules for what characters are allowed in XML content, what are valid characters in Names, and even in "</tagname>" being required rather than just "</>".
...
Allowed Characters
The logic went something like this: XML is all about marking up text documents, so the characters in an XML document should conform to what Unicode says are reasonable for a text document. That rules out most control characters, and means that surrogate pairs should be checked. All sounds good until you see some of the consequences. For example, most databases allow any character in a text column. What happens when you publish your database as XML? What do you do about values that include characters which are control characters that the XML specification disallowed? XML did not provide any escaping mechanism, and if you ask many XML experts they will tell you to base64 encode your data if it may include invalid characters. It gets worse.

The characters allowed in an XML name are far more limited. Basically, when designing XML, they allowed everything that Unicode (as defined then) considered a ‘letter’ or a ‘number’. Only 2 problems with that: (1) It turns out many characters common in Asian texts were left out of that category by the then-current Unicode specification. (2) The list of characters is sparse and random, making implementation slow and error prone.
...
Whitespace
When we were first coding up MSXML, whitespace was one of our perpetual nightmares. In hand-authored XML documents (the most common form of documents back then), there tended to be a great deal of whitespace. Humans have a hard time reading XML if everything is jammed on one line. We like a tag per line and indenting. All those extra characters, just there so that our feeble minds could make sense of this awkward jumble of characters, ended up contributing significantly to our memory footprint, and caused many problems to our users. Consider this example:
 <customer>  
           <name>Joe Schmoe</name>  
           <addr>123 Seattle Ave</addr> 
  </customer>
A customer coming to XML from a database back ground would normally expect that the first child of the <customer> element would be the <name> element. I can’t explain how many times I had to explain that it was actually a text node with the value newline+tab.
...
XML Namespaces
Namespaces is still, years after its release, a source of problems and disagreement. The XML Namespaces specification is simple and gets the job done with minimum fuss. The problem? It pushes an immense burden of complexity onto the APIs and XML reader/writer implementations. Supporting XML Namespaces introduces significant complexity in the parsers, because it forces parsers to parse the entire start-tag before returning any text information. It complicates XML stores, such as DOM implementations, because the XML Namespace specification only discusses parsing XML, and introduces a number of serious complications to edit scenarios. It complicates XML writers, because it introduces new constraints and ambiguities.

Then there is the issue of the 'default namespace’. I still see regular emails from people confused about why their XPath doesn’t work because of namespace issues. Namespaces is possibly the single largest obstacle for people new to XML.

My experiences as the program manager for the majority of the XML programming model in the .NET Framework agree with this list. It hits the 3 most common areas people seem to have problems with when working with XML in the .NET Framework. His blog post makes a nice companion piece to my MSDN article The XML Litmus Test: Understanding When and Why to Use XML.
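Derek's whitespace gotcha is easy to reproduce with System.Xml today. Here's a minimal sketch (the XmlDocument class discards insignificant whitespace by default, so the sketch turns on PreserveWhitespace to show the mystery text node Derek describes):

using System;
using System.Xml;

class WhitespaceSketch
{
    static void Main()
    {
        string xml = "<customer>\r\n\t<name>Joe Schmoe</name>\r\n\t<addr>123 Seattle Ave</addr>\r\n</customer>";

        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = true; // keep the formatting characters as nodes
        doc.LoadXml(xml);

        // Prints "Whitespace", not "Element" -- the first child of <customer>
        // is the newline+tab text node, not <name>.
        Console.WriteLine(doc.DocumentElement.FirstChild.NodeType);
    }
}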


 

Categories: XML

As has been pointed out by others you can read your GMail inbox in any Atom-enabled aggregator that supports secure feeds. I just subscribed to my GMail inbox in RSS Bandit and it worked like a charm. Screenshot below.


 

Categories: RSS Bandit

We are in the process of locking down System.Xml for Beta 2 of the .NET Framework 2.0 and Visual Studio 2005. In the past few months we have received customer feedback about our feature set previewed in the Whidbey Alpha & Whidbey Beta 1 and this has guided our decision making process as to where to focus our energies to ensure a comprehensive feature set.

Below is the list of changes to System.Xml and subsidiary namespaces that have occurred between Beta 1 and Beta 2 of the .NET Framework 2.0 release.

ADDITIONS

XmlSchemaValidator

The XmlSchemaValidator class provides a push model API for W3C XML Schema validation. The primary scenario for using XmlSchemaValidator is validating an XML infoset in place without having to serialize it as an XML document and then reparse it using a validating XML reader.
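To make the scenario concrete, here is a minimal sketch of driving the push model by hand. The schema file name and namespace are made up for the example and the exact member names could still shift before Beta 2 ships, but the shape of the API should look something like this:

using System;
using System.Xml;
using System.Xml.Schema;

class PushValidationSketch
{
    static void Main()
    {
        // Hypothetical schema with target namespace urn:example
        XmlSchemaSet schemas = new XmlSchemaSet();
        schemas.Add("urn:example", "example.xsd");

        NameTable nameTable = new NameTable();
        XmlNamespaceManager nsManager = new XmlNamespaceManager(nameTable);

        XmlSchemaValidator validator = new XmlSchemaValidator(
            nameTable, schemas, nsManager, XmlSchemaValidationFlags.None);
        validator.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
        {
            Console.WriteLine("{0}: {1}", e.Severity, e.Message);
        };

        // Push the infoset equivalent of <item xmlns="urn:example">hello</item>
        // at the validator without ever serializing it as angle brackets.
        validator.Initialize();
        validator.ValidateElement("item", "urn:example", null);
        validator.ValidateEndOfAttributes(null);
        validator.ValidateText("hello");
        validator.ValidateEndElement(null);
        validator.EndValidation();
    }
}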

CHANGES

XmlReader

  • Added overloads to the static Create() method that take XmlParserContext
  • ReadValueAsXXX() methods renamed to ReadContentAsXXX(). Also reduced the number of ReadContentAsXXX() methods relative to the number of ReadValueAsXXX() methods in Whidbey beta 1.
  • Added ReadElementContentAsXXX() methods which are specific to obtaining the value of element nodes
  • Added methods for reading large streams of text or binary data embedded in an XML document in a streaming fashion (a sketch follows this list)

public virtual bool CanReadValueChunk { get; }

public virtual int ReadValueChunk (byte[] buffer, int startIndex, int count);

public virtual bool CanReadBinaryContent { get; }

public virtual int ReadContentAsBase64 (byte[] buffer, int startIndex, int count);

public virtual int ReadContentAsBinHex (byte[] buffer, int startIndex, int count);

public virtual int ReadElementContentAsBase64(byte[] buffer, int startIndex, int count);

public virtual int ReadElementContentAsBinHex(byte[] buffer, int startIndex, int count);

  • Added ReadToFollowing(string localname, string namespaceURI) which moves to the next occurrence of the named element in document order.
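Here is a minimal sketch of the chunked reading methods listed above being used to pull base64-encoded content out of a large document without buffering the whole value as one huge string. The element name and namespace are invented for the example:

using System.IO;
using System.Xml;

class ChunkedReadSketch
{
    static void ExtractImage(string inputUri, string outputPath)
    {
        using (XmlReader reader = XmlReader.Create(inputUri))
        using (FileStream output = File.Create(outputPath))
        {
            // Jump to the (hypothetical) <image> element and decode its
            // base64 content in 4KB chunks.
            if (reader.ReadToFollowing("image", "urn:example"))
            {
                byte[] buffer = new byte[4096];
                int bytesRead;
                while ((bytesRead = reader.ReadElementContentAsBase64(buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, bytesRead);
                }
            }
        }
    }
}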

XmlReaderSettings

  • Added the XmlSchemaValidationFlags enumeration to replace the following flags: IgnoreInlineSchema, IgnoreSchemaLocation, IgnoreValidationWarnings and IgnoreIdentityConstraints
  • Added the existing ValidationType enumeration to replace the DtdValidate and XsdValidate flags (a sketch follows this list)
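As a rough sketch, creating a validating reader with the new settings object should look something like the following. The flag member names are my best guess at the final naming and the schema and document files are made up:

using System.Xml;
using System.Xml.Schema;

class ValidatingReaderSketch
{
    static void Main()
    {
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        settings.ValidationFlags |= XmlSchemaValidationFlags.ProcessInlineSchema |
                                    XmlSchemaValidationFlags.ReportValidationWarnings;
        settings.Schemas.Add("urn:example", "example.xsd"); // hypothetical schema

        using (XmlReader reader = XmlReader.Create("books.xml", settings))
        {
            while (reader.Read()) { /* validation happens as the document is read */ }
        }
    }
}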

XmlWriter

  • Reduced number of WriteValue() methods
  • Removed overloads of WriteStartElement and WriteStartAttribute that took an IXmlSchemaInfo parameter

XPathDocument

XPathNavigator and XPathEditableNavigator

  • The XPathEditableNavigator has been merged into the XPathNavigator, making it an editable XML cursor model API.
  • The XPathNavigator is the preferred API for exposing data as XML. This has been incorporated into the design guidelines for using XML in the .NET Framework

XmlDocument

  • The XPathNavigator returned by the CreateNavigator() method now allows one to edit the XmlDocument through the cursor model API.
  • The XmlDocument now supports XML schema validation of the entire subtree or partial validation of nodes in the document using the Validate() method (a sketch appears below)
  • The following property was added to XmlDocument

public XmlSchemaSet Schemas { get; set; }

  • The following property added to XmlNode

public virtual IXmlSchemaInfo SchemaInfo { get; }
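Putting the two changes together, here's a minimal sketch of editing an XmlDocument through the XPathNavigator cursor and then validating it in place. The schema file name is made up and the editing method names assume the current naming holds:

using System;
using System.Xml;
using System.Xml.Schema;
using System.Xml.XPath;

class EditAndValidateSketch
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<customer><name>Joe Schmoe</name></customer>");

        // The navigator returned by CreateNavigator() is now an editable cursor.
        XPathNavigator nav = doc.CreateNavigator();
        nav.MoveToChild("customer", "");
        nav.AppendChildElement("", "addr", "", "123 Seattle Ave");

        // Validate the whole document against a (hypothetical) schema.
        doc.Schemas.Add("", "customer.xsd");
        doc.Validate(delegate(object sender, ValidationEventArgs e)
        {
            Console.WriteLine("{0}: {1}", e.Severity, e.Message);
        });
    }
}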

XsltCommand

  • The XslTransform class was obsoleted in Whidbey Beta 1 and replaced by the System.Xml.Query.XsltCommand class. In Beta 2, we decided to revamp the XsltCommand API in order to make migration from XslTransform simpler. This effort also resulted in the renaming of the XsltCommand class to System.Xml.Xsl.XslCompiledTransform.
  • XslCompiledTransform compiles XSLT to MSIL for significantly improved performance at the cost of increased (yet still small) compile times (a usage sketch follows this list).
  • Supports the MSXML XSLT extension functions such as format-date, format-time etc.
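For code that is migrating, usage stays pleasantly boring. A minimal sketch (the stylesheet and document file names are invented):

using System.Xml.Xsl;

class CompiledTransformSketch
{
    static void Main()
    {
        // Compilation to MSIL costs a little up front but the transform
        // itself runs considerably faster than the old XslTransform.
        XslCompiledTransform transform = new XslCompiledTransform();
        transform.Load("report.xsl");
        transform.Transform("data.xml", "report.html");
    }
}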

Inference

  • This class has been renamed to XmlSchemaInference

XPathExpression

  • Added a static Compile() method which enables one to compile an input string containing an XPath query into an XPathExpression object (a sketch follows below)
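A quick sketch of the precompiled XPath usage (the document and query are invented for the example):

using System;
using System.Xml.XPath;

class CompiledXPathSketch
{
    static void Main()
    {
        XPathDocument doc = new XPathDocument("books.xml");
        XPathNavigator nav = doc.CreateNavigator();

        // Compile the query once and reuse the XPathExpression across evaluations.
        XPathExpression query = XPathExpression.Compile("/books/book[@price > 20]/title");
        XPathNodeIterator titles = nav.Select(query);
        while (titles.MoveNext())
        {
            Console.WriteLine(titles.Current.Value);
        }
    }
}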

REMOVALS

XmlArgumentList

To reduce the cost of churn caused by the obsoletion of XslTransform this class has been removed. In its place the XsltArgumentList from v1.1 can be used

XQueryCommand

Microsoft has decided not to ship a client side XQuery implementation in .NET Framework 2.0 as our customers expect us to ship an implementation that meets the following criteria:

  • Compliant with the W3C standards
  • Functionally addresses key scenarios

As a core platform component in Windows, they also expect us to ship a product that meets the high bar of not breaking their applications when future updates are released.  After talking to key customers and partners, we have determined it is important that we cross this high bar before shipping a full implementation of XQuery in the platform. 

The best estimates tell us that the ETA for XQuery to become a W3C recommendation is the end of 2005, which does not fit with the .NET Framework 2.0 product release cycle.

In the meantime, we are shipping a well-defined small subset of XQuery in SQL Server 2005 to query information stored natively as XML data type.  This will enable new customer scenarios in SQL Server for storing and retrieving semi-structured data.

In the .NET Framework 2.0 RTM timeframe, we recommend that our customers continue to use XSLT and XPath on the client side to solve their key client side filtering and transformation scenarios.  With this in mind, we have made significant improvements to our client side story including:

  • Performance improvements - making the .NET Framework XSLT processor the best performing processor.
  • Functional improvements - improving the usability and feature set of the existing .NET Framework processor

Note: As a result of not shipping XQuery, the XML Views feature (which used mapping and XQuery to query SQL Server 2005) and the XmlAdapter used to perform updates, both originally previewed in the PDC Alpha release of .NET v2.0, have also been removed. These were removed in the Beta 1 release.


 

Categories: Life in the B0rg Cube | XML

I just finished writing last month's Extreme XML column* entitled The XML Litmus Test: Understanding When and Why to Use XML. The article is a more formal write-up of my weblog post The XML Litmus Test, expanded to contain examples of appropriate and inappropriate uses of XML as well as to flesh out some of the criteria for choosing XML. Below is an excerpt from the article which contains the core bits that I hope everyone who reads it remembers  

XML is the appropriate tool for the job if the following criteria are satisfied by choosing XML as the data representation format for a given application.

  1. there is a need to interoperate across multiple software platforms
  2. one or more of the off-the-shelf tools for dealing with XML can be leveraged when producing or consuming the data
  3. parsing performance is not critical
  4. the content is not primarily binary content such as a music or image file
  5. the content does not contain control characters or any other characters that are illegal in XML

If the expected usage scenario does not satisfy most or all of the above criteria then it doesn't make much sense to use XML as the data representation format for the situation in question.

As the program manager responsible for XML programming models and schema validation in the .NET Framework I've seen lots and lots of inappropriate usage of XML both from internal teams and our customers. Hopefully once this article is published I can stop repeating myself and just send people links to it next time I see someone asking how to escape control characters in XML or see another online discussion of "binary" XML.

* Yes, it's late


 

Categories: XML

A few days ago I saw the article Xamlon looks to beat Microsoft to the punch  on C|Net which begins

On Monday, Colton's company Xamlon released its first product, a software development kit designed to speed development of user interface software for Web applications. Xamlon built the program from the published technical specifications of Microsoft's own user interface development software, which Microsoft itself doesn't plan to release until 2006.

I've been having difficulty processing this news over the past few days. Reading the Xamlon homepage gives more cause for pause. The site proclaims

XAML is a revolution in Windows application development. Xamlon is XAML today.

  • Rapidly build Windows user interfaces with HTML-like markup
  • Easily draw user interfaces and convert directly to XAML
  • Deploy to the Windows desktop and to Internet Explorer with absolutely no changes to your application.
  • Run XAML applications on versions of Windows from ’98 to Longhorn, and via the Web with Internet Explorer
  • Write applications that port easily to Avalon

What I find interesting about this are the claims that involve unreleased products that aren't expected to beta until next year and ship the year afterwards. I can understand that it is cool to claim to have scooped Microsoft, but considering that XAML and Longhorn are still being worked on it seems strange to claim to have built a product that is compatible with unreleased Microsoft products.

While writing this blog entry I decided to take a quick glance at the various Avalon folks' blogs and I stumbled on a post entitled Attribute grammar for xaml attributes from Rob Relyea, a program manager on the Avalon team.  Rob writes

As part of this change, the flexibility that people have with compact syntax will be reduced.  Today, they can use *Bind(), *Button(), *AnyClass() in any attribute.  We'd like to restict this to a small set of classes that are explicitly in need of being set in an attribute.

I'm not going into great detail in the description of our fix because I'd prefer to be the first company to ship our design.  :-)

Considering that XAML isn't even in beta yet, one can expect a lot more changes to the language before it ships in 2006. Yet Xamlon claims to be compatible with XAML. I have no idea how the Xamlon team plans to make good on their promise to be compatible with XAML and Longhorn before those products have even shipped, but I'd love to see what developers out there think about this topic.

I totally empathize with the Avalon team right now. I'm in the process of drafting a blog post about the changes to System.Xml in the .NET Framework that have occurred between Whidbey beta 1 and Whidbey beta 2. Even though we don't have companies building products based on interim versions of System.Xml, we do have book authors who'll have to rewrite [or maybe even eliminate] chapters about our stuff based on changes, and then of course there are the Mono folks implementing System.Xml who seem to be tracking our Whidbey betas.  


 

I've been trying to come up with a list of the most disappointing movie sequels or prequels of all time. So far I've come up with three then got stuck.

  1. Phantom Menace, prequel to the Star Wars Trilogy
  2. Matrix Revolutions, sequel to Matrix Reloaded
  3. Escape from LA, sequel to Escape from NY

I'm curious as to what suggestions others have for filling out this list. There is one rule that has to be obeyed when submitting entries to this list: sequels that went straight to video do not count. So the various Disney sequels to their major hits like Aladdin or Beauty & the Beast don't count, nor do movies like Cruel Intentions 3 or Children of the Corn 7.

So which are your nominees for most disappointing movie sequel?


 

Categories: Ramblings

October 8, 2004
@ 05:58 PM

In his post Debating WS-* Geoff Arnold writes

Tim Bray continues to discuss the relevance of the so-called WS-* stack: the collection of specifications related to XML-based web services. I'm not going to dive into the technology or business issues here; however Tim referred to a piece by Dare Obasanjo which argues that WS-* Specs are like JSRs. I tried to add a comment to this, but Dare's blog engine collapsed in a mess of XML, so I'll just post it here. Hopefully you'll be able to get back to read the original piece if you're interested. [Update: It looks as if my comment made it into Dare's blog after all.]

Just out of curiosity... if WS-* are like JSRs, what's the equivalent of the JCP? Where's the process documented, and what's the governance model? The statement "A JSR is basically a way for various Java vendors to standardize on a mechanism for solving a particular customer problem" ignores the fact that it's not just any old "way"; it's a particular "way" that has been publically codified, ratified by the community, and evolved to meet the needs of participants.

Microsoft isn't trying to compete with standards organizations. The JCP process falls out of the fact that Sun decided not to submit Java to a standards body but got pushback from customers and other Java vendors who wanted something similar. So Sun manufactured an organization and process quite similar to a standards body with itself at the head. Microsoft isn't trying to get into this game.

The WS-* strategy that Microsoft is pursuing is informed by a lot of experience in the world of XML and standards. In the early days of XML, the approach to designing XML standards [especially at the W3C] was to throw together a bunch of preliminary ideas and competing draft specs without implementation experience and then try to merge them into a coherent whole. This has been problematic, as I wrote a few months ago

In recent times the way the W3C produces a spec is to either hold a workshop where different entities can submit proposals and then form a working group based on coming up with a unification of the various proposals, or to form a working group to come up with a unification of various W3C Notes submitted by member companies. Either way the primary mechanism the W3C uses to produce technology specs is to take a bunch of contradictory and conflicting proposals then have a bunch of career bureaucrats try to find some compromise that is a union of all the submitted specs. There are two things that fall out of this process. The first is that the process takes a long time; for example the XML Query workshop was in 1998 and six years later the XQuery spec is still a working draft. Also the XInclude proposal was originally submitted to the W3C in 1999 but five years later it is just a candidate recommendation. Secondly, the specs that are produced tend to be too complex yet minimally functional since they compromise between too many wildly differing proposals. For example, W3C XML Schema was created by unifying the ideas behind DCD, DDML, SOX, and XDR. This has led to a dysfunctional specification that is too complex for the simple scenarios and nigh impossible to use in defining complex XML vocabularies.

The WS-* process Microsoft has engaged the industry in aims to prevent these problems from crippling the world of XML Web Services as they have the XML world. Initial specs are written by the vendors who'll primarily be implementing the functionality, then they are revised based on the results of various feedback and interoperability workshops. As a result of these workshops some specs are updated while others turn out to be infeasible and are deprecated. Some people, such as Simon Fell in his post WS-Gone, have complained that this leads to a situation where things are too much in flux, but I think this is a lot better than publishing standards which turn out to contain features that are either infeasible to implement or are just plain wrong. Working in the world of XML technologies over the past three years I've seen both.

The intention is that eventually the specs that show they are the fittest will end up in the standards process. This is exactly what has happened with WS-Security (OASIS) and WS-Addressing (W3C). I expect more to follow in the future.


 

Categories: Technology | XML

In his post What is the platform? Adam Bosworth writes

When I was at Microsoft, the prevailing internal assumption was that:
1) Platforms were great because they were "black holes" meaning that the more functionality they had, the more they sucked in users and the more users they had the more functionality they sucked in and so, it was a virtuous cycle.
...
The real value in my opinion has moved from the software to the information and the community. Amazon connects you to books, movies, and so on. eBay connects you to goodness knows how many willing sellers of specific goods. Google connects you to information and dispensers of goods and services. In every case, the push is for better and more timely access both to information and to people. I cannot, for the life of me, see how Longhorn or Avalon or even Indigo help one little bit in this value chain.

My mother never complains that she needs a better client for Amazon. Instead, her interest is in better community tools, better book lists, easier ways to see the book lists, more trust in the reviewers, librarian discussions since she is a librarian, and so on.

The platform of this decade isn't going to be around controlling hardware resources and rich UI. Nor do I think you're going to be able to charge for the platform per se. Instead, it is going to be around access to community, collaboration, and content. And it is going to be mass market in the way that the web is mass market, in the way that the iPod is mass market, in the way that a TV is mass market. Which means I think that it is going to be around services, not around boxes.

Last week while hanging out with Mike Vernal and a couple of smart folks from around Microsoft I had an epiphany about how the core of the consumer computing experience of the future would be tied to Web-based social software not operating systems and development platforms. When I read Adam Bosworth's post this weekend, it became clear to me that folks at Google have come to the same conclusion or soon will once Adam is done with them.

So where do we begin? It seems prudent to provide my definition of social software so we are all on the same page. Social software is any software that enables people to interact with one another. To me there are five broad classes of social software. There is software that enables 

  1. Communication (IM, Email, SMS, etc)
  2. Experience Sharing (Blogs, Photo albums, shared link libraries such as del.icio.us)
  3. Discovery of Old and New Contacts (Classmates.com, online personals such as Match.com, social networking sites such as Friendster, etc)
  4. Relationship Management (Orkut, Friendster, etc)
  5. Collaborative or Competitive Gaming (MMORPGs, online versions of traditional games such as Chess & Checkers, team-based or free-for-all First Person Shooters, etc)

Interacting with the aforementioned forms of software is the bulk of the computing experience for a large number of computer users, especially the younger generation (teens and people in their early twenties). The major opportunity in this space is that no one has yet created a cohesive experience that ties together the five major classes of social software. Instead the space is currently fragmented. Google definitely realizes this opportunity and is aggressively pursuing these areas, as is evidenced by its forays into GMail, Blogger, Orkut, Picasa, and most recently Google Groups 2. However Google has so far shown an inability to tie these together into a cohesive and thus "sticky" experience. On the other hand Yahoo! has been better at creating a more integrated experience and thus a better online one-stop-shop (aka portal) but has been cautious in venturing into the newer avenues in social software such as blogs or social networking. And then there's MSN and AOL.

One thing Adam fails to mention in his post is that the stickiness of a platform is directly related to how tightly it holds on to a user's data. Some people refer to this as lock-in. Many people will admit that the reason they cannot migrate from a platform is that they have data tied to that platform they do not want to give up. For the most part on Windows, this has been local documents in the various Microsoft Office formats. The same goes for database products; nine times out of ten, data tends to outlive the application that was originally designed to process it. This is one of the reasons Object Oriented Databases failed: they were too tightly coupled to applications as well as programming languages and development platforms. The recent push for DRM in music formats is another way people are beginning to get locked in. I know at least one person who's decided he won't change his iPod because he doesn't want to lose his library of AAC encoded music purchased via the iTunes Music Store.

The interesting thing about the rise of social software is that this data lock-in is migrating from local machines to various servers on the World Wide Web. At first the battle for the dominant social software platform will seem like a battle amongst online portals. However this has an interesting side effect on popular operating system platforms. If the bulk of a computer user's computing experience is tied to the World Wide Web then the kind of computer or operating system the browser is running on tends to be irrelevant.

Of course, there are other activities that one performs on a computer, such as creating business documents like spreadsheets or presentations and listening to music. However most of these are not consumer activities, and even then a lot of them are becoming commodified. Music already has MP3s, which are supported on every platform. Lock-in based on office document formats can't last forever and I suspect that within the next five years it will cease to be relevant. This is not to say that a web browser is all people need for their computing needs, but considering how much of most people's computer interaction is tied to the Internet, it seems likely that owning the user's online experience will one day be as valuable as owning the operating system the user's Web browser is running on. Maybe more so if operating systems become commodified thanks to the efforts of people like Linus Torvalds.

This foray by Google into building the social software platform is definitely an interesting challenge to Microsoft both in the short term (MSN) and in the long term (Windows). This should be fun to watch.


 

Categories: Technology

Looking at the monthly download statistics for RSS Bandit I see that there were over 20,000 downloads for the month of September and we've hit over 100,000 total downloads [across all versions] since the project moved to SourceForge last December.

Thanks to everyone out there using RSS Bandit especially those who've been providing us feedback on how to make it an even better aggregator. You guys rock.

If you are a new user don't forget to read the RSS Bandit Product Roadmap and tell us what you think.


 

Categories: RSS Bandit

In recent times I've been pitching the concept of a digital information hub to various folks at work. Currently people have multiple applications for viewing and authoring messages. There are instant messengers, email clients, USENET news readers and RSS/Atom aggregators. All of these applications basically do the same thing: provide a user interface for authoring and viewing messages sent by one or more people to the user.

Currently the split has been based on what wire protocol is used to send and receive the messages. This is a fairly arbitrary distinction which means little to non-technical users. The more interesting distinction is usage patterns. For all of the aforementioned application types messages really fall into two groups: messages I definitely will read and messages I might want to read. In Outlook, I have messages sent directly to me which I'll definitely read and messages on various discussion lists I am on [such as XML-DEV] which I might want to read if the titles seem interesting or relevant to me. In Outlook Express, there are newsgroups where I read every message and others where I skim content looking for titles that are of interest or are relevant to me. In RSS Bandit, there are feeds where I read every single post (such as Don's or Joshua's blogs) and those where I skim them looking for headlines (e.g. Blogs @ MSDN). The list goes on...

The plan I've had for RSS Bandit for a while has been to see if I can evolve it into the single application where I manage all messages sent to me. Adding NNTP support is a first step in this direction. Recently I noticed that some other folks have realized the power of the digital information hub: Google.

However Google has decided to bring the mountain to Mohammed. Instead of building a single application that manages messages sent via all the different protocols, they've decided to expose the major classes of messages as Atom feeds. They already provide Atom feeds for weblogs hosted on Blogger. Recently they've experimented with Atom feeds for USENET groups as well as Atom feeds for your GMail account. This means instead of one application being your digital information hub, any Atom savvy client (such as RSS Bandit) can now hold this honor if you use Google as your online content provider.

This is very, very interesting. I'm beginning to really like Google.


 

Categories: RSS Bandit | Technology

In a recent post on his blog entitled Re: This is interesting, James Robertson wrote

In this post, I was stunned by the notion that an air traffic control system might be on win 95/98. Commenters pointed out this link, which indicates that the more likely explanation is this:

The elapsed time is stored as a DWORD value. Therefore, the time will wrap around to zero if the system is run continuously for 49.7 days.

This is just too amusing for words. A multi-hour shutdown caused by static typing, and the fact that many typing decisions in languages that require it end up being essentially random. Note to static typing advocates - had they used Smalltalk, this kind of problem would be impossible....

As several commenters on his post pointed out, the problem has nothing to do with static typing versus dynamic typing. The problem is that when the counter overflows, it silently wraps around to zero instead of raising an error. Whether the type of a value is known at compile time (i.e. static typing) doesn't come into the question at all.
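
To make the arithmetic concrete, here is a small C# sketch (using uint to stand in for a Win32 DWORD) showing why a 32-bit millisecond counter rolls over after roughly 49.7 days, how a naive comparison breaks at the rollover, and how unsigned modular subtraction still yields the correct elapsed interval. This is just an illustration of the overflow behavior, not the air traffic control system's actual code.

using System;

class TickWraparound
{
    static void Main()
    {
        // A DWORD is an unsigned 32-bit value, so a millisecond counter
        // can only reach 2^32 - 1 ms before wrapping back to zero:
        double days = (double)uint.MaxValue / (1000.0 * 60 * 60 * 24);
        Console.WriteLine("32-bit ms counter wraps after {0:F1} days", days);   // ~49.7

        // Naive elapsed-time logic breaks once the counter has wrapped:
        uint start = uint.MaxValue - 999;   // sampled just before the rollover
        uint now   = 1500;                  // sampled just after the rollover
        Console.WriteLine("now > start? {0}", now > start);   // False -- time appears to run backwards

        // Wraparound-safe version: unsigned subtraction is modulo 2^32,
        // so the difference is still the true elapsed interval (2500 ms here).
        uint elapsed = unchecked(now - start);
        Console.WriteLine("elapsed ms = {0}", elapsed);
    }
}

Neither static nor dynamic typing saves you here; what matters is whether the code treats the counter as a modular quantity or assumes it grows forever.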

James Robertson is a Smalltalk advocate, and like every other advocate of a niche programming language (e.g. Lisp advocates) he spends lots of time ranting about how every problem that exists in programming languages today was solved 20 years ago in his language of choice. The main problem with James's post isn't that he misjudged the root cause of the issue.

The main problem with his post is that even though many people corrected his error, he has stubbornly stuck to his guns. Instead of just seeming like someone who made a mistake that his readers pointed out, he now looks like he either (a) twists issues to fit his agenda regardless of the truth or (b) doesn't really know the topics he is arguing about.

If you are trying to convince people to come over to your side in a technical discussion, refusing to admit flaws in your arguments is more likely to lose you supporters than gain them. Stubbornly sticking to your guns after being shown the error of your ways may work for the Bush administration when it comes to the war in Iraq, but it doesn't win you many technical debates.


 

Categories: Ramblings

October 3, 2004
@ 06:38 PM

As the author of a news reader that supports RSS and Atom, I often have to deal with feeds that are technically valid RSS/Atom but that, for one or more reasons, cause unnecessary inconvenience to authors and users of news aggregators. This is the second in a series of posts highlighting such feeds as examples to others of how not to design syndication feeds for a website.

This week's gem is the Sun Bloggers RSS feed, a combined feed for all the blogs hosted at http://blogs.sun.com. This means that at any given time the feed most likely contains posts by multiple authors.

To highlight the problem with the feed, here are two item elements taken from it a few minutes ago.

 <item>
    <title>Something fishy...</title>
    <description>A king was very fond of fish products. He went fishing in the only river of his kingdom. While fishing he accidently dropped his diamond ring presented by his wife - The Queen. A fish in the river mistook the sparkling ring for an insect and swallowed it. The fisherman caught the fish and sold it to a chef. The King on the other side was very sad and apologistic. Took the Queen to a restaurant for a dinner and ordered a fried fish. The chef presented the same which had the diamond ring inside. King was happy to find the ring back and rewarded the restaurant. The restaurant rewarded the chef and the Chef rewarded the fisherman. The fisherman then went back to the river, killed all the fishes in search of another diamond ring. I never understood the motto of the story but there is certainly something fishy about it!</description>
    <category>General</category>
    <guid isPermaLink="true">http://blogs.sun.com/roller/page/ashish/20041002#something_fishy</guid>
    <pubDate>Sat, 2 Oct 2004 08:53:15 PDT</pubDate>
  </item>
  <item>
    <title>Another one bytes the dust...</title>
    <description>Well, more like another one got bitten. Accoring to &lt;a href="http://www.heise.de/newsticker/meldung/51749"&gt;this&lt;/a&gt; (german) article from &lt;a href="http://www.heise.de"&gt;Heise&lt;/a&gt; Mr. Gates got himself some Spyware on his personal/private systems, and has now decided to take things into his own hand (or at least into those of his many and skilled engineers). Bravo!&lt;p&gt; Spyware or other unwanted executables like e.g. &lt;a href="http://securityresponse.symantec.com/avcenter/expanded_threats/dialers/"&gt;dialers&lt;/a&gt; are puzzeling me for some time now, since I simply don't understand how those thinks can be kept legal at all. No one needs dialers. There are enough good ways for online payment. No one in their right mind can honestly belive, that anyone with a serious business would need any of that crap. It's a plain ripoff scheme.&lt;p&gt;</description>
    <category>General</category>
    <guid isPermaLink="true">http://blogs.sun.com/roller/page/lars/20041002#another_one_bytes_the_dust</guid>
    <pubDate>Sat, 2 Oct 2004 07:32:18 PDT</pubDate>
  </item>

The problem with the feed is that it eschews directly identifying the author of each post, even though the RSS 2.0 specification provides an author element and the Dublin Core RSS module provides a dc:creator element that can be used in its stead.

The obvious benefit of identifying authors in a collaborative feed is that it lets readers judge whether the writer is an authority on the topic at hand, or begin ascribing authority to an author they were previously unaware of. Then there are aggregator-specific benefits, such as letting readers group or filter items in the feed by author, which improves the reading experience.

A straightforward fix is for the webmaster of the Sun Bloggers site to add author or dc:creator elements identifying who wrote each post in the feed.
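
For example, the first item above could identify its author with a dc:creator element, as shown below. The value is my guess based on the permalink, and the Dublin Core namespace would need to be declared on the feed's rss element as xmlns:dc="http://purl.org/dc/elements/1.1/".

 <item>
    <title>Something fishy...</title>
    <dc:creator>Ashish</dc:creator>
    <description>A king was very fond of fish products. ...</description>
    <category>General</category>
    <guid isPermaLink="true">http://blogs.sun.com/roller/page/ashish/20041002#something_fishy</guid>
    <pubDate>Sat, 2 Oct 2004 08:53:15 PDT</pubDate>
  </item>

With that single element in place, an aggregator can group or filter the combined feed by author and readers can tell at a glance which Sun blogger they are reading.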


 

October 3, 2004
@ 06:10 PM

I came into some invites for Wallop a couple of days ago but have been too busy with work to explore it so far. I have about half a dozen invites to give away but don't have any friends who I think would use the system enough to give Lili and co. the sort of feedback they want for their research project. If you are a friend of mine who would be interested in exploring and using Wallop, ping me over email or respond to this blog posting.

By the way, if you are curious about what Wallop is, the quick description of the project is that it

is a research project of the Social Computing Group at Microsoft Research, exploring how people share media and build conversations in the context of social networks. 

With any luck I'll get a chance to explore Wallop over the next few days and will perhaps post my thoughts on how the experience compares to other online communities targeted at the same niche.


 

October 2, 2004
@ 05:57 AM

From Len Bullard

>What's the silver bullet?

It's a bar in Phoenix.

From Tim Bray

I disagree with virtually every technical argument Ted Nelson has ever made and (in most cases) the implementations are on my side, but it doesn't matter; Ted's place in history is secure because he asked more important questions than just about anybody. I think he usually offered the wrong answers, but questions are more important.

The thread that produced these gems is Ted Nelson's "XML is Evil", which revisits his classic rant Embedded Markup Considered Harmful.

 


 

Categories: XML

October 2, 2004
@ 05:46 AM

It's always interesting to me how the same event can be reported completely differently depending on who's reporting the news. For example, compare the headline US army massacres over 100 civilians in Iraq from Granma International, which begins

BAGHDAD, October 1 (PL).—, More than 100 Iraqi civilians have been killed and some 200 injured in Samarra and Sadr City today during the cruelest retaliatory operations that the US occupation forces have launched do date.

According to medical sources quoted by the Arab TV network Al Arabiya, 94 people died and another 180 were injured when soldiers from the US 1st Infantry Division attacked a civilian area of the city of Samarra with heavy weaponry. 

In the Sadr City district, located in west Baghdad, US soldiers massacred nine civilians during an operation to eliminate militia forces loyal to the wanted Islamic Shiite cleric Moqtada Al Sadr. Another three people were seriously injured.

to the description of the same events reported by the Telegraph under the headline '100 rebels dead' after US troops storm Samarra, which begins

American forces have stormed the rebel-held town of Samarra, claiming more than 100 insurgents killed, as coalition forces try to establish control in the Sunni triangle.

The US military said 109 fighters and one US soldier were killed in the offensive. Doctors at Samarra's hospital, said 47 bodies were taken in, including 11 women and five children.

An Iraqi spokesman said 37 insurgents were captured. During the push, soldiers of the US 1st Infantry Division rescued Yahlin Kaya, a Turkish building worker being held hostage in the city.

The operation came after "repeated attacks" on government and coalition forces had made the town a no-go zone, the US military said. Samarra lies at the heart of the Sunni Arab belt north and west of Baghdad where many towns are under the control of insurgents.

So was it 100 civilians killed or 100 insurgents? The truth is probably somewhere in the middle. Iraq is becoming even more of a giant Messopotamia. So far I can see only two choices for the US in Iraq over the next year: pull out or attempt to retake the country by force. Either way there is even more significant and unnecessary loss of life to come.

All this because of some chicken hawks in the Bush administration...


 

Categories: Ramblings