I collect about half a dozen comic book titles and I've noticed a growing trend of blurring the line between a super hero's secret identity and his or her super hero identity. In the old days, a super hero had a regular day job with coworkers, girlfriends and bills to pay, and put on his or her tights at night to fight crime. Nowadays I read comics about the same characters I used to read about as a child who now either no longer keep their non-super hero identity a secret or have had their secret identity revealed to so many people that it might as well not be a secret.

This trend is particularly true in Marvel's Ultimate universe. If you are unfamiliar with the Marvel Ultimate universe, here is a brief description of it from the Wikipedia entry for Marvel Universe:

A greater attempt has been made with the Ultimate titles; this series of titles is in a universe unrelated to the main Marvel continuity, and essentially is starting the entire Marvel Universe over again, from scratch. Ultimate comics now exist for the X-Men, the Avengers, Spider-Man, and the Fantastic Four. Sales of these titles are strong, and indications are that Marvel will continue to expand the line, effectively creating two Marvel Universes existing concurrently. (Some rumors exist that if sales continue to increase and more titles are added, Marvel may consider making the Ultimate universe its main universe.)

In the Marvel Ultimate universe the Avengers (now known as the Ultimates) are government agents who are treated as celebrities by the tabloids and whose non-super hero identities are known to the public. The Ultimate X-Men appear on the cover of Time magazine and have met with the president several times. The anti-mutant hysteria that is a mainstay of the regular Marvel universe is much more muted in the Ultimate Marvel universe (thank God for that; they had gone overboard with it, although classics like God Loves, Man Kills will always have a special place in my heart). The identity of Ultimate Spider-Man isn't known to the general public, but it is known by his girlfriend (Mary Jane), an orphan adopted by his aunt (Gwen Stacy), the Ultimates, all of the major villains Spidey has met (Doc Ock, Green Goblin, Kraven the Hunter, Sandman & Electro) as well as most of the staff of S.H.I.E.L.D.

This trend has also spread to the regular Marvel universe, most notably with Daredevil. His secret identity was known by the Kingpin for a long time and eventually became an open secret to most of the Kingpin's criminal organization. In recent issues, Daredevil has been outed as Matt Murdock in the tabloids and has to deal with assassination attempts in his regular life as well as when he is Daredevil.

DC Comics is also playing along somewhat with Batman. Although it isn't common knowledge that Batman is Bruce Wayne, there are now so many heroes (the entire Justice League, Robin, Nightwing, Spoiler, Batgirl, Huntress, Oracle) and villains (the Riddler, Hush, Bane, Ra's al Ghul) who know his identity that it might as well be public.

I suspect that one of the reasons for this trend is a point that the character Bill makes in Kill Bill Vol. 2 towards the end of the movie. He points out that most super heroes are regular people with regular lives who have a secret identity as a super hero, while Superman was actually a super hero who had a secret identity as a regular person. Getting rid of the artificial division between super hero and alter ego makes sense because we tend to look at them as different people (Bruce Wayne is nothing like Batman) when in truth they are different facets of the same character. The increased connectedness of society as a whole has also made it easier to blur the lines between various aspects of one's character that used to be kept separate. I think comic book authors are just reflecting this trend.

Speaking of reflecting current trends in comics, I was recently disappointed and then impressed by statements made by the Ultimate version of Captain America. In Ultimates #12, Cap is fighting the apparently indestructible leader of the alien invasion army, who's just survived getting half his head blown off by an assault rifle, when this exchange takes place:

Alien Leader: Now let's get back to business, eh, Captain? The world was about to go up and you were about to surrender in these few brief moments we've got left. Let me hear you say it. “I surrender Herr Kleiser! Make it quick!”

Captain America: *head butts and then starts to beat up the alien leader while saying* - Surrender? Surrender? You think the letter A on my head stands for France?

This issue came out when the "freedom fries" nonsense was still somewhat fresh in people's minds, and I was very disappointed to read this in a comic book coming from a character I liked. However, he recently redeemed himself with his line from a conversation with Nick Fury in Ultimate Six #7:

Captain America: You know, being a veteran of war it occurred to me that really it's men of influence and power that decide what these wars will be about. They decide who we are going to fight and how we will fight them. And then they go about planning the fight. In a sense, really, these people will the war into existence.

I remember thinking the same thoughts as a preteen in military school trying to decide whether to follow in my dad's footsteps and join the military or not. I fucking love comics.


 

Categories: Ramblings

Ryan Farley has a blog post entitled In Search of the Perfect RSS Reader where he compares a number of the most popular desktop aggregators for Windows in search of the best-of-breed application. Ryan compared RSS Bandit, SharpReader, Newsgator, FeedDemon, SauceReader and Gush. The application that Ryan rated as the best was RSS Bandit. He writes:

 RSSBandit
RSSBandit has the best of everything. One of the things that I was wanting in an aggregator was support for the CommentAPI so I could read and post comments from it. RSSBandit has a nice interface and has a really clean and professional look to it. I like nice looking software. For me, that was one of the biggest things in the favor of RSSBandit. I love the “auto-discover” feeds, where you can scan a given URI for feeds. Search folders and some cool searching features. Written in .NET (I love to support the cause). Also, when a post is updated, it just updates the content of the post (seems to pull it each time you view it instead of caching it?). I like that it does not pull down a second copy of the post, however I do wish it would somehow indicate that the contents of the post have changed. The only gripes I had about RSSBandit are very small (and they're not really gripes, just small things I'd change if it were mine). I hate the splash screen. It is ugly and does not match the rest of the clean and XP/Office/VS/etc look of the application. Also, I don't like the icon. The smiley-face with the eye-patch. Give me a break. I don't really care for silly looking software (at least since it is open source I can change that myself if I really want to). But overall, a completely awesome job Dare (and other sourceforge team members)

Mad props go to Torsten and Phil Haack who contributed a great deal to the most recent release. We've tried to address a lot of user pain points in dealing with RSS Bandit and there's still a lot of stuff we can do to make information management with an aggregator even better.

Thanks to all the users who gave us feedback and helped us improve the application. Expect more.


 

Categories: RSS Bandit

I just saw this in Wired and have been surprised that it hasn't been circulating around the blogs I read. It looks like the Mexican Air Force captured a UFO encounter on videotape. According to Wired:

Mexican Air Force pilots filmed 11 unidentified flying objects in the skies over southern Campeche state, a spokesman for Mexico's Defense Department confirmed Tuesday. A videotape made widely available to the news media on Tuesday shows the bright objects, some sharp points of light and others like large headlights, moving rapidly in what appears to be a late-evening sky.

The lights were filmed on March 5 by pilots using infrared equipment. They appeared to be flying at an altitude of about 11,500 feet, and reportedly surrounded the jet as it conducted routine anti-drug trafficking vigilance in Campeche. Only three of the objects showed up on the plane's radar.

That is pretty amazing. I haven't been able to locate the entire video online. That definitely would be interesting to watch.


 

One of the problems I have at work is that I have 3 applications open for participating in online communities: Outlook for email, RSS Bandit for blogs & news sites and Outlook Express for USENET newsgroups. My plan was to collapse this into two applications by adding the ability to read USENET newsgroups to RSS Bandit. However, I recently discovered, via a post by Nick Bradbury, that:

Hey, just noticed that the Google Groups 2 BETA offers Atom feeds for each group. To see feeds for a specific group, use this format:

http://groups-beta.google.com/group/NAME-OF-GROUP/feeds

Here are a few examples:

So I'm now at a crossroads. On the one hand I could abandon my plans for implementing USENET support in RSS Bandit, but there is functionality not currently exposed by the Google Groups ATOM feeds: I can't view newsgroups that aren't public, nor can I actually post to a newsgroup from the information in the feed. I see two options:

  1. Implement a mechanism for posting replies from RSS Bandit to newsgroup posts viewed via the Google Groups ATOM feeds. Don't worry about adding USENET support for the case of people who want to read newsgroups not indexed by Google.
  2. Implement the ability to view and post to any USENET newsgroups using RSS Bandit. This may also include figuring out how to deal with password protected newsgroups. 

The Program Manager in me says to do (1) because it leverages the work Google has done, requires minimal effort on my part and doesn't require significantly complicating the RSS Bandit code base. The hacker in me says to do (2) since it provides maximum functionality and would be fun to code as well. What do RSS Bandit users think?
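As a rough illustration of what option (1) builds on, here's a sketch (in Python rather than RSS Bandit's actual C# code) of pulling entries out of an Atom 0.3 feed like the ones Google Groups exposes. The sample feed below is made up for demonstration; a real Google Groups feed would have more elements per entry.

```python
# Illustrative sketch: listing entries from an Atom 0.3 feed.
# The sample XML is invented; only the namespace and basic
# feed/entry/title/link structure follow Atom 0.3.
import xml.etree.ElementTree as ET

ATOM_NS = "{http://purl.org/atom/ns#}"  # Atom 0.3 namespace

sample_feed = """<?xml version="1.0"?>
<feed version="0.3" xmlns="http://purl.org/atom/ns#">
  <title>microsoft.public.xml</title>
  <entry>
    <title>Question about XPath axes</title>
    <link rel="alternate" type="text/html"
          href="http://groups-beta.google.com/group/microsoft.public.xml/msg/abc123"/>
  </entry>
</feed>"""

def list_entries(feed_xml):
    """Return (title, link) pairs for each entry in an Atom 0.3 feed."""
    root = ET.fromstring(feed_xml)
    entries = []
    for entry in root.findall(ATOM_NS + "entry"):
        title = entry.findtext(ATOM_NS + "title")
        link = entry.find(ATOM_NS + "link[@rel='alternate']")
        entries.append((title, link.get("href") if link is not None else None))
    return entries

for title, href in list_entries(sample_feed):
    print(title, "->", href)
```

Reading is the easy half; the point of option (1) is that posting would still need a separate mechanism (e.g. NNTP or a web form), since nothing in the feed itself supports it.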

PS: Does anyone know how Google plans to deal with the deployment nightmare of moving people off of the ATOM 0.3 syndication format? Given the recent political wranglings between the W3C and IETF over ATOM, it is clear that we won't see a spec rubber stamped by a standards body this year. There'll be thousands of users subscribed to Blogger & Google Groups feeds who may be broken whenever Google moves to the finalized version of ATOM. Will Google abandon the feeds, thus forcing thousands of people to either change their aggregator or upgrade? Will they support two potentially incompatible versions of ATOM?


 

Categories: RSS Bandit

Charles Cook has a blog post on XML and Performance where he writes:

XML-based Web Services look great in theory but I had one nagging thought last week while on the WSA course: what about performance? From my experience with VoiceXML over the last year it is obvious that processing XML can soak up a lot of CPU and I was therefore interested to see this blog post by Jon Udell in which he describes how Groove had problems with XML:

Sayonara, top-to-bottom XML I don't believe that I pay a performance penalty for using XML, and depending on how you use XML, you may not believe that you do either. But don't tell that to Jack Ozzie. The original architectural pillars of Groove were COM, for software extensibility, and XML, for data extensibility. In V3 the internal XML datastore switches over to a binary record-oriented database.

You can't argue with results: after beating his brains out for a couple of years, Jack can finally point to a noticeable speedup in an app that has historically struggled even on modern hardware. The downside? Debugging. It was great to be able to look at an internal Groove transaction and simply be able to read it, Jack says, and now he can't. Hey, you've got to break some eggs to make an omelette.

Is a binary representation of the XML Infoset a useful way of improving performance when handling XML? Would it make a big enough difference?

For the specific case of Groove I'd be surprised if they used a binary representation of the XML infoset as opposed to a binary representation of their application object model. Many applications that use XML for data storage or configuration immediately populate this data into application objects. This is a layer of unnecessary processing, since one could skip the XML reading and writing step and directly read and write serialized binary objects. If performance is that important to your application and there are no interoperability requirements, it is a better choice to serialize binary objects than to go through the overhead of XML serialization/deserialization. The main benefit of using XML in such scenarios is that in many cases there is existing infrastructure for working with XML, such as parsers, XML serialization toolkits and configuration handlers. If your performance requirements are so high that the overhead of going from XML to application objects is too great, then getting rid of the step in the middle is a wise decision, although as Jon Udell points out, you lose the ease of debugging that comes with a text-based format.
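The tradeoff is easy to see in miniature. Here's a hedged sketch (the class and field names are invented for illustration; this has nothing to do with Groove's actual datastore) of persisting the same record two ways: round-tripping through XML, which is debuggable text, versus serializing the application object directly to bytes, which skips the parse step entirely.

```python
# Two ways to persist the same object: via XML text (readable,
# requires parsing) or via direct binary serialization (opaque,
# no parse step). All names here are illustrative placeholders.
import pickle
import xml.etree.ElementTree as ET

class Contact:
    def __init__(self, name, email):
        self.name = name
        self.email = email

def to_xml(contact):
    """Serialize a Contact to an XML byte string."""
    root = ET.Element("contact")
    ET.SubElement(root, "name").text = contact.name
    ET.SubElement(root, "email").text = contact.email
    return ET.tostring(root)

def from_xml(data):
    """Parse an XML byte string back into a Contact."""
    root = ET.fromstring(data)
    return Contact(root.findtext("name"), root.findtext("email"))

c = Contact("Dare", "dare@example.com")

# XML path: object -> text -> parse -> object (easy to eyeball in a debugger)
xml_copy = from_xml(to_xml(c))

# Binary path: object -> bytes -> object (no XML layer at all)
bin_copy = pickle.loads(pickle.dumps(c))

assert xml_copy.email == bin_copy.email == "dare@example.com"
```

Both copies come out identical; the binary path just spends no time tokenizing angle brackets, which is exactly the saving described above, at the cost of output you can no longer read.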

If you are considering using XML in your applications, always take the XML Litmus Test.


 

Categories: XML

The XML team at Microsoft has recently started getting questions about our position on XQuery 1.0, XPath 2.0 and XSLT 2.0. My boss, Mark Fussell, posted about why we have decided to implement XQuery 1.0 but not XSLT 2.0 in the next version of the .NET Framework. Some people misinterpreted his post to mean that we chose to implement XQuery 1.0 over XSLT 2.0 because we prefer the syntax of the former over that of the latter. However, decisions of that scale aren't made so lightly.

There are several reasons why we aren't implementing XSLT 2.0 and XPath 2.0:

It takes a lot of effort and resources to implement all three technologies (XQuery, XSLT 2.0 & XPath 2.0). Our guiding principle was that creating a proliferation of XML query technologies is confusing to end users. We'd rather implement one more language that we push people to learn than have to support and explain three more XML query and transformation languages, in addition to the XPath 1.0 & XSLT 1.0 that already exist in the .NET Framework. Having our customers and support people deal with the complexity of three sophisticated XML query languages, two of which (XPath 2.0 and XQuery) look similar but behave quite differently, seemed to us not to be that beneficial.

XPath 2.0 has different semantics from XQuery: XQuery is strongly and statically typed, while XPath 2.0 is weakly and dynamically typed. So it isn't simply the case that implementing XQuery means you can flip some flag and disable a feature or two to turn it into an XPath 2.0 implementation. However, all of the use cases satisfied by XPath 2.0 can be satisfied by XQuery. In the decision to go with XQuery over XSLT 2.0, Mark is right that we felt developers would prefer the familiar procedural model and syntax of XQuery to the template-based model and syntax of XSLT 2.0. Most developers working with XSLT try to use it as a procedural language anyway, and don't really harness the power of templates; there's always a steep learning curve until you get to the "Aha" moment and everything clicks. XQuery, with its FLWOR construct and user-defined functions, fits more naturally with how both programmers and database administrators access and manipulate data than XSLT 2.0 does. Thus we feel XQuery, and not XSLT, is the future of XML-based query and transformation.
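For readers who haven't seen one, a FLWOR (for-let-where-order by-return) expression reads much like the loops and filters developers already write. This is a generic textbook-style example against a hypothetical books.xml document, not anything tied to our implementation:

```xquery
(: Hypothetical sample document; the shape of this query, not its data,
   is the point: it reads like procedural code rather than templates. :)
for $book in doc("books.xml")//book
where $book/price > 20
order by $book/title
return <expensive>{ $book/title/text() }</expensive>
```

Compare that to achieving the same result in XSLT, where the logic is spread across template rules matched against the input tree.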

This doesn't mean that we will be removing XSLT 1.0 or XPath 1.0 support from the .NET Framework. It just means that our innovation and development efforts will be focused around XQuery going forward. 


 

Categories: Life in the B0rg Cube | XML

I'm sure many have already seen the Google blogs. Below are a few suggestions for the various Google folks on ways they can improve the blogging experience for themselves and their readers:

  1. The blogs currently don't have names attached to each post nor do they have a way to post comments in response to each entry. The power of blogs is that they allow you to have a conversation with your users. An anonymous weblog that doesn't provide a mechanism for readers to provide feedback is little more than a corporate PR website. Currently I don't see much difference between the Google Press Center and Google Blog besides the fact that the Press Center actually has the email addresses of real people at Google on the page while the blogs do not.

  2. Pick better URLs for your articles. An example of a bad URL is http://www.google.com/explanation.html which provides a link to Google's explanation of the Jew Watch fiasco. Why is this URL bad? Well, do you think this is the only explanation Google will ever give? Does the URI provide any hint as to what the content is actually about? I suggest the folks in charge of managing the Google URI namespace take a gander at the W3C's Choose URIs wisely and Tim Berners-Lee's excellent Cool URIs Don't Change.

  3. Either explain to your bloggers that they should use their common sense when blogging or come up with a policy where blog posts are reviewed before being posted. Google is about to become a multibillion dollar company whose every public utterance will be watched by thousands of customers and hundreds of journalists; it can't afford PR gaffes like the ones described in C|Net's article Google blog somewhat less than 'bloggy'.

  4. Provide an RSS feed. I understand that Evan and the rest of Blogger have had their beefs with Dave Winer but this is getting ridiculous. Dave Winer has publicly flamed me on more than one occasion but I don't think that means I shouldn't use RSS on the MSDN XML Developer Center or remove support for it from RSS Bandit. If an evil Microsoft employee can turn the other cheek and rise above holding grudges, I don't see why Google employees whose company motto is “Do No Evil” can't do the same.

  5. Let us know if working at Google is really as cool as we all think it is. :)


 

Categories: Ramblings

May 11, 2004
@ 06:19 PM

Jon Udell recently wrote

In a recent column on how we use and abuse email, I mentioned the idea of passing attachments "by reference" rather than "by value." Unfortunately I overlooked a product recently reviewed by InfoWorld that does exactly that. The Xythos WebFile Server has a companion WebFile Client that hooks File Attach (in Notes and Outlook) and replaces attachments with secure links to an access-controlled and versioned instance of the document. Cool!

The $50K price tag, as our reviewer noted, "may keep smaller companies away." But other implementations of the idea are clearly possible.

SharePoint provides shared workspaces for documents and even integrates nicely with the rest of Microsoft Office, such as Word. I know quite a few people who've gone from sending documents back and forth over email to sending links to documents in shared workspaces. Of course, old habits die hard, and even at Microsoft lots of people tend to send documents in email as attachments.


 

May 11, 2004
@ 03:21 PM

In Steve Burnap's blog post entitled Infamy he writes

I've been opposed to the Iraq war since the beginning, but until this, I didn't feel disgust.
...
And that makes me sad. There are only villains here. Villains who have made the US no better than Saddam Hussein. We got into this war with lies, but even if there were no WMDs and no Al Qaeda connection, we could say "well, at least we got rid of a tyrant". Now all we can say is that we've replaced a tyrant.

I share the exact same sentiments. The Iraq War has hit every worst case scenario I could have come up with: no weapons of mass destruction, no concrete ties to Al Qaeda shown, militant resistance from members of the local populace, and US troops actively violating the Geneva Convention. The funny thing is that polls show that 44 per cent of people polled in the US think the war is worthwhile. Sad.


 

The folks behind the InfoPath team blog have posted a short series on how to programmatically modify InfoPath form templates. The example in the series shows how to change the URL of an XML Web Service end point used by an InfoPath form using a script, instead of having to do it manually by launching InfoPath. The posts in the series are linked below:

  1. Modifying InfoPath manifest.xsf file from script (1/5)
  2. Modifying InfoPath manifest.xsf file from script (2/5)
  3. Modifying InfoPath manifest.xsf file from script (3/5)
  4. Modifying InfoPath manifest.xsf file from script (4/5)
  5. Modifying InfoPath manifest.xsf file from script (5/5) 

The posts highlight that the InfoPath XSN format is really a CAB file, and the files that make up the template are XML files which can easily be modified programmatically.
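Once the files are out of the CAB archive, rewriting an endpoint is ordinary XML manipulation. Here's a hedged sketch of that step in Python; the element and attribute names below are invented placeholders, not InfoPath's real manifest.xsf schema, so treat it as the shape of the technique rather than a working InfoPath tool.

```python
# Sketch: repointing a web service URL in an extracted manifest file.
# The <webServiceAdapter url="..."/> structure is a made-up stand-in
# for whatever the real manifest.xsf uses.
import xml.etree.ElementTree as ET

manifest = """<manifest>
  <webServiceAdapter url="http://test-server/service.asmx"/>
</manifest>"""

def repoint_service(manifest_xml, new_url):
    """Return the manifest text with its web service endpoint replaced."""
    root = ET.fromstring(manifest_xml)
    adapter = root.find("webServiceAdapter")
    adapter.set("url", new_url)
    return ET.tostring(root, encoding="unicode")

updated = repoint_service(manifest, "http://production-server/service.asmx")
print(updated)
```

In a deployment script you'd extract the .xsn, run a transformation like this over the manifest, and repackage the CAB, which is essentially what the five posts above walk through with InfoPath's actual file names.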


 

Categories: XML