I recently read an InfoWorld article entitled Gartner: Ignore Longhorn and stick with XP, which states

Microsoft Corp. may choose never to release its vaunted and long-overdue project WinFS, following its removal from the next version of Windows, according to analysts Gartner Inc.
...
Microsoft has said Longhorn will still include local desktop searching as a taste of the power of WinFS' relational database capabilities, but Gartner sees this as a hint that WinFS may never arrive. "Because Microsoft has committed to improving search without WinFS, it may choose never to deliver the delayed WinFS," Gartner said.

The fundamental premise of the above statements is that the purpose of WinFS is to make local desktop search better or, to use a cruder term, to create "Google for the Desktop". It may be true that when WinFS first started getting pitched, one of the scenarios people described was making search better. However, as WinFS progressed, the primary scenarios its designers focused on enabling didn't have much to do with search. If you read Longhorn evangelist Jeremy Mazner's blog posting entitled What happened to WinFS?, posted after the Longhorn changes were announced, you'll find the following excerpt

The WinFS team spent a solid couple weeks going through this evaluation.  There are of course plenty of things you could do to increase the confidence level on a project the size of WinFS, since it has so many features, including:

  • Built-in schemas for calendar, contacts, documents, media, etc
  • Extensibility for adding custom schema or business logic
  • File system integration, like promotion/demotion and valid win32 file paths
  • A synchronization runtime for keeping content up to date
  • Rich API support for app scenarios like grouping and filtering
  • A self-tuning management service to keep the system running well
  • Tools for deploying schema, data and applications

The above feature list is missing the recent decision to incorporate the features of the object relational mapping technology codenamed ObjectSpaces into WinFS. Taking all these features together, none of them is really focused on making it easier for me to find things on my desktop.

At its core, WinFS was about storing strongly typed objects in the file system instead of opaque blobs of bits. The purpose of doing this was to make accessing and manipulating the content and metadata of these files simpler and more consistent. For example, instead of having to know how to manipulate JPEG, TIFF, GIF and BMP files there would just be a Photo item type that applications would have to deal with. Similarly one could imagine just interacting with a built-in Music item instead of programming against MP3, WMA, OGG, AAC, and WAV files. In talking to Mike Deem a few months ago, and seeing Bill Gates discuss his vision for WinFS with folks in our building a few weeks ago, it became clear to me that the major benefit of WinFS to end users is the possibilities it creates in user interfaces for data organization.
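To make the typed-item idea concrete, here is a rough C# sketch of the difference between the two worlds. This is purely illustrative code: WinFS never shipped a public API, so the Photo and PhotoStore types below are hypothetical stand-ins I made up, not actual WinFS classes.

using System;

// Hypothetical stand-ins for a typed file system item and its store.
class Photo
{
    public string FileName;
    public string CameraModel;   // promoted from EXIF/TIFF tags by the store
    public DateTime DateTaken;
}

class PhotoStore
{
    // In a typed store, format-specific parsing (JPEG EXIF, TIFF tags, etc.)
    // happens once, below this API; callers never see the encoding.
    public Photo GetPhoto(string fileName)
    {
        Photo p = new Photo();
        p.FileName = fileName;
        p.CameraModel = "Canon PowerShot S45";   // stubbed metadata
        p.DateTaken = new DateTime(2004, 8, 15);
        return p;
    }
}

class Demo
{
    static void Main()
    {
        PhotoStore store = new PhotoStore();
        // One code path whether the bits on disk are JPEG, TIFF, GIF or BMP.
        Photo photo = store.GetPhoto("vacation.jpg");
        Console.WriteLine("{0}: {1}, taken {2}", photo.FileName,
            photo.CameraModel, photo.DateTaken);
    }
}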

Recently I switched from WinAmp to iTunes on the strength of the music organizational capabilities of the iTunes library and "smart playlists". The strength of iTunes is that it provides a consistent interface for interacting with music files regardless of their underlying type (AAC, MP3, etc), provides ways to add metadata about these music files (ratings, number of times played), and then organizes these files according to this metadata. Another application that shows the power of data organization based on rich, structured metadata is Search Folders in Outlook 2003. When I used to think of WinFS I got excited about being able to perform SQL-like queries over items in the file system. Then I heard Bill Gates and Mike Deem speak about WinFS, saw them getting excited about taking the data organizational capabilities of features like the My Pictures and My Music folders in Windows to the next level, and it all clicked.

Now this isn't to say that there aren't some searches made better by coming up with a consistent way to interact with certain file types and by providing structured metadata about these files. For example, a search like

Get me all the songs [regardless of file type] either featuring or created by G-Unit or any of its members (Young Buck, 50 Cent, Tony Yayo or Lloyd Banks) between 2002 and 2004 on my hard drive

is made possible by this system. However, it is more likely that I'd want to navigate to this music in a UI like the iTunes media library than type the equivalent of a SQL query over my file system.
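Still, to give a flavor of what such a query looks like as code, here is a small C# sketch. Again, the Music item type and its properties are hypothetical stand-ins, since WinFS's schemas were never published, and the song list is just sample data.

using System;
using System.Collections;

// Hypothetical typed Music item; the real WinFS schema was never published.
class Music
{
    public string Title;
    public string Artist;
    public int Year;

    public Music(string title, string artist, int year)
    {
        Title = title; Artist = artist; Year = year;
    }
}

class MusicQuery
{
    static void Main()
    {
        string[] gUnit = { "G-Unit", "Young Buck", "50 Cent",
                           "Tony Yayo", "Lloyd Banks" };

        // Stand-in for "all Music items on my hard drive", any file type.
        ArrayList library = new ArrayList();
        library.Add(new Music("Stunt 101", "G-Unit", 2003));
        library.Add(new Music("In Da Club", "50 Cent", 2003));
        library.Add(new Music("Hey Ya!", "OutKast", 2003));

        foreach (Music song in library)
        {
            bool byGUnit = Array.IndexOf(gUnit, song.Artist) >= 0;
            if (byGUnit && song.Year >= 2002 && song.Year <= 2004)
                Console.WriteLine("{0} ({1}, {2})",
                    song.Title, song.Artist, song.Year);
        }
    }
}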

More importantly, this system doesn't make it much easier to find stuff I've lost on my file system, like Java code I wrote in college or drafts of articles I started several years ago but never finished. When I think "Google for the Desktop", that's the problem I want to see solved. However, MSN just bought LookOut, so I have faith that we will be solving this problem in the near future as well.


 

Categories: Technology

September 14, 2004
@ 08:45 AM

Oleg has just announced a new release of EXSLT.NET; his post is excerpted below

Here we go again - I'm pleased to announce EXSLT.NET 1.1 release. It's ready for download. The blurb goes here:

The EXSLT.NET library is a community-developed, free, open-source implementation of the EXSLT extensions to XSLT for the .NET platform. EXSLT.NET fully implements the following EXSLT modules: Dates and Times, Common, Math, Random, Regular Expressions, Sets and Strings. In addition, the EXSLT.NET library provides a proprietary set of useful extension functions.

Download EXSLT.NET 1.1 at the EXSLT.NET Workspace home - http://workspaces.gotdotnet.com/exslt
EXSLT.NET online documentation - http://www.xmland.net/exslt

EXSLT.NET Features:

  • 65 supported EXSLT extension functions
  • 13 proprietary extension functions
  • Support for XSLT multiple output via exsl:document extension element
  • Can be used not only in XSLT, but also in XPath-only environments
  • Implementation of the set functions thoroughly optimized for speed

Here is what's new in this release:

  • New EXSLT extension functions have been implemented: str:encode-uri(), str:decode-uri(), random:random-sequence().
  • New EXSLT.NET extension functions have been implemented: dyn2:evaluate(), which allows evaluating a string as an XPath expression, and date2:day-name(), date2:day-abbreviation(), date2:month-name() and date2:month-abbreviation(), which are culture-aware versions of the corresponding EXSLT functions.
  • Support for time zones in date-time functions has been implemented.
  • A multithreading issue with the ExsltTransform class has been fixed. The ExsltTransform class is now thread-safe for Transform() method calls, just like the System.Xml.Xsl.XslTransform class.
  • Lots of minor bugs have been fixed. See the EXSLT.NET bug tracker for more info.
  • We switched to Visual Studio .NET 2003, so building the project has been greatly simplified.
  • A complete suite of NUnit tests for each extension function has been implemented (ExsltTest project).

The EXSLT.NET project has come quite some way since I started it last year. Oleg has done excellent work with this release. It's always great to see the .NET Open Source community come together this way.
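If you want to kick the tires, here's a minimal usage sketch. I'm assuming the GotDotNet.Exslt namespace from the workspace releases and an ExsltTransform API that mirrors System.Xml.Xsl.XslTransform, as the announcement above implies; the file names are placeholders.

using GotDotNet.Exslt;

class ExsltDemo
{
    static void Main()
    {
        // Drop-in replacement for System.Xml.Xsl.XslTransform. The stylesheet
        // can use EXSLT functions once their namespaces are declared, e.g.
        // xmlns:str="http://exslt.org/strings" on the xsl:stylesheet element.
        ExsltTransform xslt = new ExsltTransform();
        xslt.Load("stylesheet.xsl");
        xslt.Transform("input.xml", "output.html");
    }
}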


 

Categories: XML

September 14, 2004
@ 08:35 AM

I installed Windows Server 2003 this morning on the machine this weblog runs on. This should get rid of those pesky "Too Many Users" errors caused by the connection limits in Windows XP Professional, which was the machine's previous OS. It took me all day to figure out how to give ASP.NET write permissions for my weblog, so if you attempted to post a comment in the past 24 hours and got an error message, I apologize. Things should be fine now.

 


 

Categories: Ramblings | RSS Bandit

September 12, 2004
@ 02:17 AM

In his post Full text RSS on MSDN gets turned off, Robert Scoble writes

Steve Maine: what the hell happened to blogs.msdn.com?

RSS is broken, is what happened. It's not scalable when 10s of thousands of people start subscribing to thousands of separate RSS feeds and start pulling down those feeds every few minutes (default aggregator behavior is to pull down a feed every hour).

Bandwidth usage was growing faster than MSDN's ability to pay for, or keep up with, the bandwidth. Terabytes of bandwidth were being used up by RSS.

So, they are trying to attack the problem by making the feeds lighter weight. I don't like the solution (I've unsubscribed from almost all weblogs.asp.net feeds because they no longer provide full text) but I understand the issues.

This is becoming a broken record. Every couple of months, some web site that hasn't properly prepared for the bandwidth consumed by a popular RSS feed complains loudly, and the usual suspects declare that RSS is broken. This time the culprit is Weblogs @ ASP.NET, and their mistake was not providing HTTP compression to clients speaking HTTP 1.0. This meant that they couldn't get the benefits of HTTP compression when talking to popular aggregators like Straw, FeedDemon, SharpReader, NewsGator and RSS Bandit. No wonder their bandwidth usage was so high.
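For aggregator authors, the client half of the equation is cheap to support. Below is a minimal sketch of a bandwidth-friendly feed fetch in C#; the feed URL and timestamp are placeholders, and since the .NET Framework 1.x has no built-in gzip stream, actually decompressing the response body needs a library such as SharpZipLib.

using System;
using System.Net;

class FeedFetcher
{
    static void Main()
    {
        HttpWebRequest request =
            (HttpWebRequest) WebRequest.Create("http://example.com/rss.xml");

        // Advertise compression support; a properly configured server
        // responds with Content-Encoding: gzip and a much smaller body.
        request.Headers["Accept-Encoding"] = "gzip, deflate";

        // Conditional GET: if the feed hasn't changed, the server answers
        // 304 Not Modified with an empty body instead of the full feed.
        request.IfModifiedSince = new DateTime(2004, 9, 12); // last fetch time

        try
        {
            using (HttpWebResponse response =
                       (HttpWebResponse) request.GetResponse())
            {
                Console.WriteLine("Content-Encoding: {0}",
                    response.ContentEncoding);
                // If gzipped, wrap response.GetResponseStream() in a
                // decompressing stream (e.g. SharpZipLib's GZipInputStream).
            }
        }
        catch (WebException e)
        {
            HttpWebResponse r = e.Response as HttpWebResponse;
            if (r != null && r.StatusCode == HttpStatusCode.NotModified)
                Console.WriteLine("Feed unchanged, nothing downloaded.");
            else
                throw;
        }
    }
}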

But let's ignore the fact that the site wasn't properly configured to utilize all the bandwidth saving capabilities of HTTP. Instead, let's assume Weblogs @ ASP.NET had done all the right things but still saw too much bandwidth being consumed. Mark Nottingham covered this ground in his post The Syndication Sky is Falling!

A few people got together in NYC to talk about Atom going to the W3C this morning. One part of the minutes of this discussion raised my eyebrows a fair amount;

sr: […] Lots of people are saying RSS won’t scale. Somebody is going to say I told you so.
bw: Werner Vogels at Cornell has charted it out. We're at the knee of the curve. I don’t think we have 2 years.
sr: I have had major media people who say, until you solve this, I’m not in.
bw: However good the spec is, unless we deal with the bag issues, it won’t matter. There are fundamental flaws in the current architecture.

Fundamental flaws? Wow, I guess I should remind the folks at Google, Yahoo, CNN and my old colleagues at Akamai that what they’re doing is fundamentally flawed; the Web doesn’t scale, sorry. I guess I’ll also have to tell the people at the Web caching workshops that what they do is futile, and those folks doing Web metrics are wasting their time. What a shame...

Bad Reasons to Change the Web Architecture

But wait, there’s more. "Media people" want to have their cake and eat it too. It’s not good enough that they’re getting an exciting, new and viable (as compared to e-mail) channel to eyeballs; they also have to throw their weight around to reduce their costs with a magic wand. What a horrible reason to foist new protocols, new software, and added complexity upon the world.

The amusing new wrinkle is that everybody's favorite leader of the "RSS is broken, let's start all over" crowd, Sam Ruby, has decided it is time to replace both blogs pinging weblogs.com when they update and the use of HTTP to fetch RSS feeds. Hopefully this will be more successful than his previous attempts to replace RSS and the various blogging APIs with Atom. It's been over a year, and all we have to show from the creation of Atom is yet another crufty syndication format with the promise of one more incompatible one on the way.

Anyway, the point is that RSS isn't broken. After all, it is just an XML file format. If anything is broken, it is using HTTP for fetching RSS feeds. But then again, do you see people complaining that HTTP is broken and needs to be replaced every time some poor web site suffers the Slashdot effect? If you are running a popular web site, you will need to spend money to afford the traffic. AOL.com, Ebay.com and Microsoft.com each serve terabytes of content every month. If they were serving that content on the budget I have for my website, these sites would roll over and die. Does this mean we should stop using web browsers and HTTP to browse the Web and resort to BitTorrent for fetching HTML pages? It definitely would reduce the bandwidth costs of sites like AOL.com, Ebay.com and Microsoft.com.

The folks paying for the bandwidth that hosts Weblogs @ ASP.NET (the ASP.NET team, not MSDN as Scoble incorrectly surmises) decided they had reached their limits and reduced the content of the feeds. It's basically a non-story. The only point of interest is that if they had announced this internally with enough warning, folks would have advised them to turn on HTTP compression for HTTP 1.0 clients before resorting to crippling the RSS feeds. Act in haste, repent at leisure.


 

September 10, 2004
@ 05:29 PM

In a post entitled Report From the Intel Community, Tim Bray writes

This has nothing to do with a California chip maker. Rather, it's about a trip I recently took to a conference called Intelink, where the people gather who run one of the world's biggest and most interesting intranets; the one that serves the community of U.S. Intelligence professionals
...
I was amused to note that on one of the sub-intranets distinguished by being loaded with particularly ultra-secret stuff, they were offering RSS Bandit for the people to download and use.

That's an awesome endorsement. I'm always surprised by the people I find using RSS Bandit, whether it's a bunch of U.S. intelligence professionals or high school girls from Singapore. There's a lot that still needs to be done to make consuming information from syndication feeds a truly optimal experience, but RSS Bandit gets closer to what I see as the ideal each day. The big focus for the next release will be making it easier to organize, locate and manage information within the aggregator.

Speaking of positive endorsements, here's one on the memory usage characteristics of RSS Bandit from Wesner Moise in his post .NET vs. Native Performance

The working set for SharpReader is 30Mb, FeedDemon is 23 Mb, and RSS Bandit is 4 Mb in their initial configuration on my machine. (In comparison, the working set for MS Word and MS Excel are about 18 Mbs.) So, actually in their bare configuration, RSS Bandit is the tightest of them all, even considering that RSS Bandit also uses the .NET runtime. However, the working set of .NET applications have a significantly higher variance than native applications. While RSS Bandit was idle, I watch the working set figures initially progress to 13 MBs, then in an instant fall down to 6.5MB, as it appears a collection has occurred. The working set oscillated in an ever narrowing range (down to a range between <3Mb to 6Mb) that apparently reflected dynamic tuning by CLR. Native applications, in contrast, normally have zero variance in working set during idle.

The contrast between SharpReader and FeedDemon is more a reflection of the difference between a free application written as a hobby and a professionally written commercial application, and less as a indicator of Delphi's inherent performance advantage over C#. Performance issues with NewsGator, an Outlook-based reader, which I believed is managed, are likely due to the very high overhead and poor performance of OLE automation in general.

The biggest performance issues with RSS Bandit are memory usage and slowdowns during I/O intensive operations like loading feed files from disk on startup or downloading lots of feeds for the first time. There are many approaches we've considered for resolving the memory issues. The first thing we will do is the easiest: make it possible for people to delete posts from feeds they are subscribed to. This would lead to fewer news items being held in memory while the application is running, thus reducing memory consumption.

I've also considered creating a 'memory lite' mode where some memory intensive features are disabled to reduce the application's memory usage, but the few people I've talked this over with have said that memory usage hasn't been enough of a problem to forgo features.


 

Categories: RSS Bandit

September 10, 2004
@ 04:17 PM

I've been a loyal user of WinAmp for several years. I am a big fan of skins and my current favorite is MMD3. However, I recently got tired of using the file system to navigate my music collection and sought a change. I'd tried iTunes in the past but was underwhelmed by its lack of skinning functionality.

In the past few weeks I've given iTunes another shot, and it is now my favorite player. It has a few elegant features that make it a killer app for organizing your music. The UI for navigating your music collection is straightforward and reminiscent of the iPod's. I also like the built-in playlists like 'Recently Played', 'My Top Rated' and 'Top 25 Most Played'.

The only downsides have been that I've had to update the ID3 tags on my MP3s to get the most out of the music library UI, and I miss some of the killer visualizations in WinAmp skins like MMD3. Neither will stop me from relegating WinAmp to the back burner and making iTunes my music player of choice.


 

Categories: Ramblings

September 8, 2004
@ 03:23 PM

Roger Costello recently started a discussion thread on the XML-DEV mailing list about the common misconceptions people have about XML document validation and schemas. He has since summarized the discussion in his post Fallacies of Validation, version #3. His post begins

The purpose of documenting the below "fallacies" is to identify erroneous common thought that many people have with regards to validation and its role in a system architecture.  Perhaps "assumptions" would be a better term to use than "fallacies".  In any case, the desire of this writeup (which is a compilation of discussions on the xml-dev list) is to provoke new ways of thinking about validation, and reject limiting and static views on validation. 

Fallacies of Validation

1. Fallacy of "THE Schema"

2. Fallacy of Schema Locality

3. Fallacy of Requisite Validation

4. Fallacy of Validation as a Pass/Fail Operation

5. Fallacy of a Universal Validation Language

6. Fallacy of Closed System Validation

7. Fallacy that Validation is Exclusively for Constraint Checking

I mostly agree with the fallacies as described in his post.

Fallacy #1 has been a favorite topic of Tim Ewald over the past year. It isn't necessarily true that there is one canonical schema for an XML vocabulary. Instead, the schema for the vocabulary may depend on the context the XML document is being used in. A classic example of this is XHTML, which has three schemas (the strict, transitional and frameset DTDs) for a single format.

I consider Fallacy #2 to be more of a common mistake than a fallacy. Many people create validation rules that work in a local environment, such as specific patterns or structures for addresses or telephone numbers, which break down when used in a global environment like the World Wide Web. This common mistake isn't limited to XML validation but applies to all arenas where user input is validated before being stored or processed.

Fallacy #3 is interesting to me because I wonder how often it occurs in the wild. Are there really that many people who believe they have to validate XML documents against a schema?

Fallacy #4 is definitely a good one. However, I disagree with the quotes he uses to buttress the main point of this fallacy. I especially don't like the fact that he uses a generalization from Rick Jelliffe about bugs in a few schema validators as a core part of his argument. The important point is that schema validation should not always be viewed as a PASS/FAIL operation; in fact, schema languages like W3C XML Schema go out of their way to define how one can view an XML document as being part valid and part invalid.
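The .NET Framework's validation APIs actually reflect this view: register an event handler and validation errors become annotations you can collect while the read continues, rather than a reason to abort. Here's a minimal sketch (the file name is a placeholder, and the schema is assumed to be referenced from the document via xsi:schemaLocation):

using System;
using System.Xml;
using System.Xml.Schema;

class PartialValidation
{
    static int errors = 0;

    static void Main()
    {
        XmlTextReader reader = new XmlTextReader("books.xml");
        XmlValidatingReader validator = new XmlValidatingReader(reader);
        validator.ValidationType = ValidationType.Schema;

        // With a handler registered, validation errors are reported here
        // instead of throwing; reading continues, so the document ends up
        // "part valid, part invalid" rather than simply rejected.
        validator.ValidationEventHandler +=
            new ValidationEventHandler(OnValidationEvent);

        while (validator.Read()) { /* consume the document */ }

        Console.WriteLine("Done; {0} validation error(s) noted.", errors);
    }

    static void OnValidationEvent(object sender, ValidationEventArgs e)
    {
        errors++;
        Console.WriteLine("{0}: {1}", e.Severity, e.Message);
    }
}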

"One size doesn't fit all" is the message of Fallacy #5, to which I heartily cheer "Hear! Hear!". I agree 100%. There is no one XML schema language that satisfies every validation scenario.

I don't really understand Fallacy #6 without seeing some examples, so I won't comment on it. I'll see if I can dig up the relevant discussion threads on XML-DEV later.

Fallacy #7 is another one where I agree with the message but mostly disagree with how he argues the point. His examples are all variations of using schemas for constraint checking; they just differ in what happens to the document after constraint checking is done. To me, the prime evidence that schema validation is not just for constraint checking is that many technologies actually use schemas for creating typed XML documents or for translating XML from one domain to another (e.g. Object<->XML, Relational<->XML).

Everything said, this was a good list. Excellent work from Roger as usual.


 

Categories: XML

Recently I found out that we no longer have office supplies on the floor of the building I work in. Now if you need to grab a pen, or get a marker after your last one runs out in the middle of a meeting, you need to go upstairs. Folks have given me the impression that this is due to the recent cost cutting drive across the company. At first, I couldn't figure out why disrupting people by making them go to another floor for office supplies would cut costs.

Then it hit me. When faced with having to go to another floor to find office supplies, the average geek desk jockey will probably say "forget it" and do without. The immediate saving is fewer office supplies used. But I suspect this is only phase one of the plan. Most people at MSFT believe that on average 50% - 75% of the projects and features an employee works on in his career in the b0rg cube never ship. This is all just wasted cash. The best way to nip this in the bud is to prevent people from writing down their ideas or whiteboarding them with coworkers, thus spreading the meme about new projects or features. The amount of money saved by not investing in new money losing ventures like *** and **** would be immense. It all makes a weird kind of sense now.

Seriously though, I've been reading with skepticism blog posts like Dangerous Transitions and Dangerous Thoughts, which call for Microsoft to start performing targeted layoffs instead of cost cutting. When I think of the ways Microsoft spends immense amounts of cash for little return, I don't worry about John Doe the tester who files fewer bugs on average than the other members of his team or Jane Doe the developer who writes buggier code than the rest of her team. I think about things like MSN, Xbox, the uncertainty around MBF after purchasing Great Plains for billions, the overambitious attempt to rewrite most of the APIs in Windows in an effort that spans 3 product units, spending years working on ObjectSpaces and then canning it because of potential overlap with WinFS, and various other white elephant projects.

All of the above cost from millions to billions of dollars, and they are the result of decisions by folks in middle and upper management. I'm glad that Microsoft has decided not to punish rank and file employees for what are basically missteps by upper management, in contravention of the typical way corporate America does business.

Ideally, we'd see our upper management address how they plan to avoid such missteps in the future instead of chasing minor bumps in the stock price and our paychecks by sacrificing low level employees and coworkers.


 

Categories: Life in the B0rg Cube

September 6, 2004
@ 01:02 AM

According to HipHopGame.com, Young Buck To 'Stomp' Out Luda/T.I. Beef On Debut Album

If you have a mixtape featuring "Stomp," Young Buck's posse cut with T.I. and Ludacris, hold onto it. It's a collector's item. The track as we know it, with Cris and Tip battling each other, isn't going to be included on Buck's upcoming Straight Outta Cashville LP. Instead, a remix is going on the album, with newcomer D-Tay replacing T.I.
...
Buck says he asked 50 Cent to reach out to T.I. for a collaboration for Straight Outta Cashville. 50 obliged, and the track was sent to Atlanta for T.I. to rhyme on. Buck said he was surprised when the song came back with the line "And me getting beat down, that's ludicrous," because he didn't know if it was a dis or not.

"I was hearing on the streets that [T.I.] and Luda be having problems with each other, and I know I just did a song with Luda's group about a week or two before," Buck elaborated. "Me and Luda are cool. To be all the way honest, I'd known Luda before I knew T.I., so I couldn't just jump on this record and have them having differences with each other, and then [have Luda] be like, 'Yo, Buck, what's up?' "

Staying diplomatic, Buck talked the situation over with Cris and even played T.I.'s verse for him. Ludacris confirmed that the two had been going back and forth, and he wanted to get on the song and speak his piece.

"I even got at T.I. like, 'Yo, Luda heard this record. He wanna jump on the record,' " Buck explained, "just to make sure all the feelings and everything would stay the same way. And he was like, 'Oh, I'm cool. I'm cool with it.' "

So Ludacris laced "Stomp" with his own battle raps, and the streets have been talking ever since.

T.I. and Cris have apparently now squashed their beef, Buck said, but controversy still surrounds the song. According to Buck, T.I.'s camp requested that Ludacris change his verse before they clear Tip to be on the album. (A T.I. spokesperson had no comment on that.) The G-Unit soldier said Cris has refused.

"Even throughout the song, you don't hear either one talking about killing each other," Buck lamented.

I'm not surprised Ludacris didn't want his rhymes removed. He totally schooled T.I. on that track. It's also a statement as to who is the bigger star that, given the standoff between the two rappers, Luda's verses stay while T.I.'s get removed. The track is hot; too bad it won't be making it onto the album.

By the way, Young Buck is wrong about them not talking about killing each other. T.I.'s verse ends with "When the choppers hit you bitch, you'll wish you got your ass stomped."


 

If you use RSS Bandit and recently installed .NET Framework 1.1 SP1, you may have noticed that you started getting errors of the form

Refresh feed 'SomeCategory\SomeFeed' failed with error: The underlying connection was closed: The server committed an HTTP protocol violation.

This is due to changes made to the System.Net.HttpWebRequest class to make it more compliant to the HTTP specification. For example, it now errors when fetching the Microsoft Research feeds because the web server returns the Content-Location header as "Content Location" with a space. The fix is straightforward and involves placing the following element as a child of the configuration element within the rssbandit.exe.config file in the C:\Program Files\RssBandit folder.

<system.net>
 <settings>
  <httpWebRequest useUnsafeHeaderParsing="true" />
 </settings>
</system.net>

This is also taken care of by v1.2.0.117 of RSS Bandit. When run, it detects whether this option is available and enables it automatically, so you don't have to mess around with XML configuration files.


 

Categories: RSS Bandit