Yesterday I found out that the shortage of styrofoam cups we'd been experiencing in our building's kitchen was actually occurring all over the Redmond campus. Some of us joked last night that this was the latest in a string of penny-wise, pound-foolish cost-cutting measures, in the same vein as only having office supplies on one floor of the building.

This morning I realized that moving all of a particular resource to one floor to "cut costs" was actually an example used in The Dilbert Principle in a section entitled Companies That Turn On Themselves.

I wonder what other Dilbert-esque cost-cutting moves folks out there have experienced. Post your favorites in the comments to this post.


 

Categories: Life in the B0rg Cube

I've been watching the hype about podcasting with some wariness, but it looks like it is here to stay. I just noticed that Greg Reinacker (NewsGator) and Nick Bradbury (FeedDemon) have announced that they will have better support for RSS 2.0 enclosures and thus podcasting. This weekend I also started laying the groundwork for podcasting support in RSS Bandit; Torsten will likely finish this work once he is done with the GUI for NNTP newsgroup support.
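For anyone who hasn't looked at the mechanics, podcasting boils down to an <enclosure> element on an RSS 2.0 item. The feed below is made up, but the three attributes shown (the URL, the length in bytes and the MIME type) are all an aggregator needs to find and download the audio file:

  <item>
    <title>Show #42</title>
    <link>http://www.example.com/shows/42</link>
    <enclosure url="http://www.example.com/shows/42.mp3" length="4123456" type="audio/mpeg" />
  </item>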

Speaking of podcasting and RSS 2.0 enclosures, I agree 100% with Joshua Allen's points in his post, History of Podcasting. He wrote

Dave Winer doesn't want to end up like Eric Bina, written out of the history of a creation he helped usher into reality.  Adam steps up to make sure Dave gets credit.  This time, there is less reason to worry.  First, the WWW (which Eric helped enable) is now an independent and democratic public record which can triangulate the major media.  And blogs, which Dave helped enable, are one source of that public record.  The public record shows that Dave was planning “Radio” via RSS for a very long time.  Dave has talked about these ideas for a long time, but I have to admit that I wasn't quite prepared for how fast it would actually happen.  I believe credit goes to Adam for such a fast and effective bootstrap, but it also proves that all of the work on RSS laid a good foundation for quick incremental innovation.  

I also think that one of the major success factors was that the nattering nabobs ignored podcasting and dismissed it until it was too late to inject their stop energy.  Many of the nabobs were so convinced of their own stories about “RSS is broken”,  that it never occured to them that something like podcasting could be successful.  They were so busy trying to reinvent RSS that they ignored an idea that Dave has been giving away for free for years. 

There's a lot of innovation, and many interesting end-user applications can be built on RSS today. However, many XML syndication geeks are prideful and would rather reinvent the wheel than use existing technology to solve real-world problems.


 

November 1, 2004
@ 03:02 PM

Yesterday I found out my car had been broken into the previous night. I can't get over the fact that I had my car parked on the street at Pioneer Square until almost 2 AM on Sunday, yet it got broken into in the supposedly secure underground parking garage of my apartment complex.

The wave of emotions washing over me these past 12 hours has been interesting. The ones that have stuck so far are the sense of violation and the anger. I've learned the hard way never to leave anything important in my car thinking that because it's in the trunk it'll be "safe".

Bah, I need to get ready for work.

 


 

Categories: Ramblings

October 28, 2004
@ 06:00 PM

Most people looking for a free RSS reader on Windows tend to gravitate towards SharpReader since it is the most popular one on Windows and the first one they hear about. Recently I've begun to see more posts from people who've grown tired of its limited feature set and plain user interface and have switched to RSS Bandit.

In his post Vote With Your Download Dave writes

For the longest time I’ve used Sharpreader as my RSS reader. I had had some problems about which I contacted the SharpReader people. I think I sent two e-mails but never got a response. So I started looking.

I found RSS Bandit and I’m all about it now. I like the interface a little better and it’s got lots of nifty features that SharpReader doesn’t. (screenshots are here). So since both of these are free, and RSS Bandit is Open Source even, I can Vote with my download.

In his post RssBandit is awesome!  Imran Koradia writes

I've been using SharpReader for a while since that's the first one that turned up in google's search results on searching for an rss reader :) ! And considering I'm really lazy switching utility software even when it would seem the most logical thing to do, I've stayed on SharpReader for a while until yesterday. I finally switched to RssBandit. Not that SharpReader is bad, but RssBandit is more feature rich and looks way cooler than SharpReader :) I love the tabs that they've got since IE doesn't have them (which is the only reason why I switched to Netscape - I have no problems with IE as such otherwise..). Ofcourse, I could also use the local MSDN help to browse with multiple tabs but that somehow doesn't seem like a good idea to me..

We will be adding a lot more features and eye candy in the next release. There are one or two things SharpReader does that RSS Bandit doesn't, such as the ability to delete posts, which I'll be adding to RSS Bandit this weekend. If you liked the last release then the next one will rock your world. I can't wait. ;)


 

Categories: RSS Bandit

Almost two years ago I wrote a blog entry entitled Useful vs. Useless Abstractions which stated that the invention of URIs by the IETF/W3C crowd to replace the combination of URLs and URNs was a step backwards. I wrote

URIs are a merger of the syntax of URLs and URNs which seem to have been repurposed from their original task of identifying and locating network retrievable documents to being more readable versions of UUIDs which can be used to identify any person, place or thing regardless of whether it is a file on the Internet or a feeling in your heart.

This addition to the URN/URL abstraction seemed to address some of the bits which may have been considered leaky (if I enter http://www.yahoo.com in my browser and it loads the page from its cache, then the URL isn't acting as a location but as an identifier). Others also saw URIs as a way to give people user-friendly UUIDs for use on the Web. I've so far come into contact with URIs in two aspects of my professional experience, and both have left a bad taste in my mouth.

URIs and the Semantic Web: Ambiguity

One problem with URIs is that they don't uniquely identify a single thing. Consider the following hyperlinked statements

Dare is a Georgia Tech alumni.

Dare's website is valid XHTML.

In the above statements I use the URI "http://www.25hoursaday.com" to identify both myself and my web page. This is a bad thing for the Semantic Web. If you read Aaron Swartz's excellent primer on the Semantic Web you will notice where he talks about RDF and its dependence on URIs
...
Now consider...

<http://aaronsw.com/> <http://love.example.org/terms/reallyLikes> <http://www.25hoursaday.com/> .

Can you tell whether Aaron really likes my website or me personally from the above RDF statement? Neither can I. This inherent ambiguity is yet another issue with the vision of the Semantic Web and the current crop of Semantic Web technologies that are overly dependent on URIs.
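One way to sidestep the ambiguity, and this is just a sketch rather than anything the RDF specs mandate, is to mint distinct URIs for the page and for me, for example by hanging a fragment identifier off the page URI the way FOAF files typically do:

  # likes the web page
  <http://aaronsw.com/> <http://love.example.org/terms/reallyLikes> <http://www.25hoursaday.com/> .
  # likes the person (a hypothetical URI I'd have to mint and document somewhere)
  <http://aaronsw.com/> <http://love.example.org/terms/reallyLikes> <http://www.25hoursaday.com/#dare> .

Of course, that just moves the problem of saying which URI means what into documentation or yet more RDF, which is exactly what the interminable TAG discussions keep going around on.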

Over the past few years that I've been on the W3C Technical Architecture Group mailing list, I've seen this inherent ambiguity of URIs result in many lengthy, seemingly never-ending discussions about how to work around this problem or whether it is even a problem at all. The discussion thread entitled Information resources?, which morphed into referendum on httpRange-14, is the latest incarnation of this permathread on the WWW-TAG mailing list.

I was much heartened to see that Tim Berners-Lee is beginning to see some of the problems caused by the inherent ambiguity of URIs. In his most recent response to the "referendum on httpRange-14" thread he writes

> It is a best practice that there be some degree of consistency
> in the representations provided via a given URI.

Absolutely.

> That applies *both* when a URI identifies a picture of
> a dog *and* when a URI identifies the dog itself.
>
> *All* URIs which offer consistent, predictable representations will be
> *equally* beneficial to users, no matter what they identify.

Now here seems to be the crunch.
The web architecture relies, we agree I think, on this consistency
or predictability of representations of a given URI.

The use of the URI in the web is precisely that it is associated
with that class of representations which could be returned for it.

Because the "class of representations which could be returned"
is a rather clumsy notion, we define a conceptual thing
which is related to any valid representation associated with the URI,
and as the essential property of the class is a similarity in
information content, we call the thing an Information Resource.

So a URI is a string whose sole use in the web architecture
is to denote that information resource.

Now if you say in the semantic web architecture that the same  will
identify a dog, you have a conflict.


>
>> The current web relies on people getting the same information from
>> reuse of the same URI.
>
> I agree. And there is a best practice to reinforce and promote this.
>
> And nothing pertaining to the practice that I and others employ, by
> using http: URIs to identify non-information resources, in any way
> conflicts with that.

Well, it does if the semantic web can talk about the web, as the
semantic web can't be ambiguous about what an identifier identifies in the way that
one can in english.

I want my agent to be able to access a web page, and then use the URI
to refer to the information resource without having to go and find some
RDF somewhere to tell it whether in fact it would be mistaken.

I want to be able to model lots and lots of uses of URIs in existing
technology in RDF. This means importing them wholesale,
it needs the ability to use a URI as a URI for the web page without
asking anyone else.

The saga continues. The ambiguity of URIs has also been a problem in XML namespaces, since users of XML often assume that a namespace URI should lead to a network-retrievable document when accessed. Since namespace names are URIs, this isn't necessarily true; if they were URLs it would be, and if they were URNs it would not be.
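To make that concrete, both of the following namespace declarations are equally legal. Neither namespace name is required to resolve to anything when dereferenced; the names below are made up for illustration:

  <catalog xmlns="urn:example:bookstore">
    ...
  </catalog>

  <catalog xmlns="http://www.example.com/schemas/bookstore">
    ...
  </catalog>

The second one looks like it should fetch a document, which is exactly why people keep getting confused.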
 

Categories: Technology

At Chris Sells' XML DevCon conference* Don Box gave a talk called WS-Why which is described below

"Why? This talk will make sense of why various WS-* specs came to life and which ones every developer should ignore. Naturally, the size of this set is non-zero, however, it is not the entire universe. Hopefully, the audience will be left with a mental model for what to ignore going forward as the WS-* machine continues to move forward."

I got to hang out with Don before the conference as well as read the slides for his talk, and although I liked the direction of the talk I wished it could have been more prescriptive. Before continuing to read, it's a good idea to look at a summary of Don's talk such as the one at Jeff Barr's blog post AXDC - Don Box and WS-Why?.

In his talk Don breaks XML Web Services specs into

  • WS-DesertIsland - specs that are a must have that form the core XML Web Service specs. These include XML, SOAP, WS-Addressing, WS-MetadataExchange &  XSD+WSDL
  • WS-IslandResort - the next layer of important specs after the core. These include WS-Security, WS-Trust, WS-ReliableMessaging & WS-Policy
  • WS-NewZealand - specs you'd probably need once in a lifetime. These include WS-Eventing, WS-Enumeration & WS-AtomicTransaction
  • WS-IslandOfDoctorMoreau - the ugly step children of the WS-* spec family. These include UDDI, WS-Transfer, WS-BusinessActivity, MTOM and BPEL4WS
  • WS-FantasyIsland - specs Don would love to see. These include WS-Data (XPath/SQL-like query for web services), SOAP over TCP, XSD with better support for versioning, binary XML & WSDL based on RELAX NG.

As can be expected, there have been folks who've done a deeper analysis of Don's talk than what I've done above. The most significant so far has been Steve Maine's post The Web Services Kernel, which gives an overview of the five specs in Don's WS-DesertIsland.

In general I agree with the direction Don took with the talk. However, although it was an appropriate talk for the audience of the XML DevCon, a bunch of implementers and industry experts, I don't see it as significant guidance for developers trying to make sense of the mess of WS-* specs. Don's talk is best seen through the lens of "If I were an implementer, which specs should I implement in my XML Web Services toolkit", not "If I were a developer, which specs should I use from my XML Web Services toolkit". This is an important distinction. This is why specs like WS-Addressing are in Don's WS kernel even though it is only important if you aren't using HTTP as your transport, which most developers will be.
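To give a flavor of what WS-Addressing actually adds, here is a rough sketch of the message addressing headers it layers onto a SOAP envelope. The endpoint URIs are made up and the namespace is the August 2004 submission version, so treat this as illustrative rather than normative:

  <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
                 xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing">
    <soap:Header>
      <wsa:To>http://example.com/purchasing</wsa:To>
      <wsa:Action>http://example.com/SubmitPO</wsa:Action>
      <wsa:MessageID>urn:uuid:6b29fc40-ca47-1067-b31d-00dd010662da</wsa:MessageID>
      <wsa:ReplyTo>
        <wsa:Address>http://example.com/buyer</wsa:Address>
      </wsa:ReplyTo>
    </soap:Header>
    <soap:Body>
      <!-- application payload goes here -->
    </soap:Body>
  </soap:Envelope>

If your messages only ever travel over a single HTTP request/response hop, the transport already tells you where the message is going and where the reply belongs, which is why these headers matter mostly in multi-hop and non-HTTP scenarios.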

The talk I'd most love to see from Don, or whoever else on the Indigo team is going to be doing the conference circuit next, is WS-Which: how to decide which XML Web Services specs are right for your application. As someone who now has the responsibility of designing XML Web Service endpoints within an intranet (aren't job switches grand?) and perhaps beyond, I'm interested in guidance that explains when I should use WS-Security versus SOAP over HTTPS, or whether there are any scenarios where using MTOM or WS-Transfer actually makes sense.

Don's talk was a good start by the Indigo team at providing guidance for future users of the XML Web Services family of specs, but there's still a lot of guidance the industry is missing. More importantly, whether or not a given spec is in Don's WS kernel doesn't indicate how significant it will be to a particular class of application developers. Perhaps I can get Doug to give this talk next year.

* Someone really needs to explain to Chris Sells how the Web works. Constantly changing the content of the page at http://www.sellsbrothers.com/conference logically breaks links to the site.


 

Categories: XML

October 24, 2004
@ 09:45 PM

Friday was my last day on the XML team at Microsoft. On Monday I start work as a Program Manager on the MSN Communication Services Platform team. This team is responsible for the server-side implementation of several aspects of MSN, including MSN Messenger, some of Hotmail, and MSN Groups.

My immediate responsibilities will involve working on the back end of MSN Spaces which is a hosted blogging service in the same vein as Blogger, LiveJournal and TypePad. The site is currently in beta for the Japanese audience.

This means I'll no longer be responsible for the XML Developer Center on MSDN, which will now be managed by Irwin Dolobowsky. My work-related blog will shortly no longer appear on that page; however, I will continue to write the Extreme XML column on a bi-monthly basis. Arpan Desai is the contact person for the aspects of the XML programming model in the .NET Framework I was responsible for, specifically the System.Xml.XPath namespace, the System.Xml.Schema namespace and the System.Xml.XmlNode hierarchy.

If you are curious as to why I decided to make this move, some of the thinking behind my decision can be gleaned from my weblog post Social Software is the Platform of the Future.

Update: This move does not change my status as the project lead on RSS Bandit.


 

Categories: Life in the B0rg Cube

October 24, 2004
@ 02:53 AM

A couple of people at work use the term strategy tax when referring to compromises or similar decisions a product team makes that are less than optimal because they have to satisfy the corporate strategy. More and more, I see this term coming up with regard to what Microsoft has termed "integrated innovation". If you are unfamiliar with this term, a good description of it can be found in the Microsoft press release 'Integrated Innovation' Provides Partners with Roadmap to Success, where it states

The pursuit of Integrated Innovation pertains both to product and process. Integrated innovation is manifested in the Microsoft platform but it extends beyond the technology stack. It's Microsoft's approach to building software: simultaneously innovating in product design and development and tightly integrating products, each one with all the others.

Before pointing out the obvious cons of such an approach, its benefits should be highlighted. Many would argue that the reason Microsoft Office became the multi-billion dollar business it is today is that it applied the doctrine of integrated innovation. Microsoft took what used to be considered disparate applications (word processing, presentation software, spreadsheets, etc.) and turned them into a cohesive productivity suite whose components work seamlessly together. The fact that one can design a chart in Excel then cut and paste it into Word with no problems was a big deal when it was first implemented and it still is today. As the saying goes, a lot of effort went into making this look effortless. This, combined with the fact that you could purchase these tools in a single box, made it the killer application for businesses all over the world.

With that out of the way, let's discuss the cons of this approach. Many students of software engineering will tell you that the goal of any systems architect is to increase cohesion and reduce the coupling in the software architecture. The problems with tight coupling (or tight integration) are well known. As mentioned in Object Oriented Software Engineering Knowledge Base, tight coupling causes a system to be hard to understand and change because changes in one place will require changes somewhere else. Requiring changes to be made in more than one place is problematic since it is time-consuming to find the different places that need changing, and it is likely that errors will be made. 
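Here's a trivial C# sketch, mine rather than the book's, of what that means. In the tightly coupled version any change to the concrete formatter ripples into the class that uses it; in the loosely coupled version the dependency sits behind an interface, so the formatter can change or be swapped out without touching the calling code:

  // Tightly coupled: Billing constructs the concrete formatter itself, so any
  // change to HtmlReportFormatter forces a change (and a re-test) here as well.
  public class HtmlReportFormatter
  {
      public string Format(string title, decimal total)
      {
          return "<h1>" + title + "</h1><p>" + total + "</p>";
      }
  }

  public class TightlyCoupledBilling
  {
      private HtmlReportFormatter formatter = new HtmlReportFormatter();

      public string MonthlyStatement(decimal total)
      {
          return formatter.Format("Monthly statement", total);
      }
  }

  // Loosely coupled: the dependency is an interface handed in from outside, so a
  // plain-text or PDF formatter can be substituted without changing this class.
  public interface IReportFormatter
  {
      string Format(string title, decimal total);
  }

  public class LooselyCoupledBilling
  {
      private readonly IReportFormatter formatter;

      public LooselyCoupledBilling(IReportFormatter formatter)
      {
          this.formatter = formatter;
      }

      public string MonthlyStatement(decimal total)
      {
          return formatter.Format("Monthly statement", total);
      }
  }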

Those are developer-centric reasons why tight integration is disadvantageous, but what do they mean in practical terms? The first thing it means is that projects take longer. This is basically the Mythical Man Month effect: if you have two teams of 10 people working on separate projects that will each take six months to complete and tell them to combine their efforts, it will take them more time to complete their joint tasks, not less. As implied above, this also means a bug in one component is a bug in all the components, a security flaw in one component is a security flaw in all the components, and a slip in one component's schedule is a slip in the schedule for all the components. Finally, it makes the systems more complex to manage and understand, which means it makes them harder to maintain.

In today's world, where the majority of Microsoft's software is competing with the products of an Open Source community whose mantra is "Release early, release often", and where increased threats from malicious hackers make security a primary concern, the doctrine of integrated innovation seems almost anachronistic.

The effects of integrated innovation on product schedules shouldn't be underestimated. Although Microsoft Office is an example of integrated innovation when considering its constituent parts, it is also an example of eschewing this practice when taken as a whole. The Office team releases a cohesive product with minimal dependencies on external products. The Office team has also consistently released software every 2 to 3 years over the past decade: Office 95, Office 97, Office 2000, Office XP, and Office 2003. Certain other products at Microsoft which are at the forefront of integrated innovation do not seem to have such consistent release cycles. These products are paying the strategy tax.

The main reason for writing this blog entry is so that next time someone asks me what I mean by "strategy tax" I can just give them a link to this entry instead of repeating myself.


 

Categories: Life in the B0rg Cube

October 22, 2004
@ 05:01 AM

It's hard to tell that the headlines Google misses earnings expectations and Google's 3Q Profit More Than Doubles are about the same occurrence. I guess it's all about perspective. The market seems to be taking the glass-half-full approach, seeing as GOOG jumped $8.89 to close at $149.38.

Even more impressive is that it is trading at $161.79 in after hours trading. And I thought the original IPO price of $135 was excessive. Dang.


 

Karl Waclaweck has released version 1.0 of the SAX for .NET project. In the announcement on XML-DEV Karl writes

This is the first production release of the C#/.NET port of the SAX API.
It should be compatible with MS.NET 1.1 and Mono 1.0.2.

Since the API alone is not enough, a SAX parser implementation has been
released as well: SAXExpat.NET 1.0. It is based on the Expat parser, and
will work on MS.NET 1.1. Currently Mono 1.0.2 is not able to run it,
but this will hopefully change with future Mono releases.

Another implementation based on a port of the AElfred parser will
be available soon. It should work under both, MS.NET and Mono.

The project page is here: http://saxdotnet.sourceforge.net/

It's good to see more Open Source XML projects showing up for the .NET Framework. I haven't missed using SAX that much, but I imagine people coming to the .NET Framework from the Java world would love to be able to keep using their favorite push-model XML parsing API.
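For contrast, here is roughly what the pull model that already ships in the .NET Framework looks like; this is System.Xml's XmlTextReader rather than the SAX for .NET API, and the file name is made up:

  using System;
  using System.Xml;

  class PullModelExample
  {
      static void Main()
      {
          // Pull model: the application asks the parser for the next node when it
          // wants one, instead of registering handlers the parser calls back into.
          XmlTextReader reader = new XmlTextReader("feed.xml");
          while (reader.Read())
          {
              if (reader.NodeType == XmlNodeType.Element && reader.Name == "title")
              {
                  Console.WriteLine(reader.ReadString());
              }
          }
          reader.Close();
      }
  }

The difference is mostly about who owns the main loop; with SAX the parser drives and your handlers react, which is exactly the style Java developers will be used to.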


 

Categories: XML