After talking about it for the past few weeks, the XML Developer Center on MSDN is finally here. As mentioned in my previous post on the Dev Center, the most obvious changes from the previous incarnation of http://msdn.microsoft.com/xml are

  1. The XML Developer Center will provide an entry point to working with XML in Microsoft products such as Office and SQL Server.

  2. The XML Developer Center will have an RSS feed.

  3. The XML Developer Center will pull in content from my work weblog.

  4. The XML Developer Center will provide links to recommended books, mailing lists and weblogs.

  5. The XML Developer Center will have content focused on explaining the fundamentals of the core XML technologies such as XML Schema, XPath, XSLT and XQuery.

  6. The XML Developer Center will provide sneak peeks at advances in XML technologies at Microsoft that will be shipping in future releases of the .NET Framework, SQL Server and Windows.

As mentioned in my previous post, the first in a series of articles describing the changes to System.Xml in version 2.0 of the .NET Framework is now up. Mark Fussell has published What's New in System.Xml for Visual Studio 2005 and the .NET Framework 2.0 Release, which describes the top 10 changes to the core APIs in the System.Xml namespace.

There is one cool new addition that is missing from Mark's article, which I guess would be number 11 on his top 10 list. The XSD Inference API, which can be used to create an XML Schema definition language (XSD) schema from an XML instance document, will also be part of System.Xml in Whidbey. Given the enthusiasm we saw from various parties about XSD inference, we decided to promote it from being a freely downloadable tool to being part of the .NET Framework.
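Here's a minimal sketch of what using the inference API looks like (this assumes the XmlSchemaInference class shape from recent Whidbey builds; feed.xml is a placeholder instance document):

using System;
using System.Xml;
using System.Xml.Schema;

class InferSchemaFromInstance {
  static void Main(){
    XmlSchemaInference inference = new XmlSchemaInference();
    XmlReader reader = XmlReader.Create("feed.xml");
    //infer a schema from the structure and content of the instance document
    XmlSchemaSet schemas = inference.InferSchema(reader);
    foreach(XmlSchema schema in schemas.Schemas()){
      schema.Write(Console.Out); //dump the inferred XSD
    }
  }
}

Below are a couple of articles about XSD Inference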

If you have any thoughts about what you'd like to see on the Dev Center or any comments on the new design, please let me know.


 

Categories: XML

As a lead developer with an Open Source project on SourceForge and a program manager at a company that produces software commercially, I have used both private and public bug databases. Sometimes I've wondered what it would be like if the users of Microsoft products had access to our bug databases and could vote for bugs as in Bugzilla. There are two main things that decide if and when a bug will be fixed when bugs are being triaged at work. The first is the priority of the bug, but how this is determined varies from component team to component team and is more a holistic process than anything. The second is how much work it would take to fix the bug and whether it takes up too much of the schedule (a bug that takes a week to fix is less likely to be fixed than 4 bugs that take a day apiece). I've always thought it would be interesting if one of the factors we used to determine the priority of a bug included direct customer feedback, such as the number of folks who'd voted for getting the bug fixed.

There are downsides to every proposal, and I discovered this post from Matthew Thomas (who seems to have abandoned his Phrasewise.com blog) entitled discussions in unwanted places, where he lists some of the problems with public bug databases that the Mozilla project has faced. It should be noted that the Mozilla project is probably the largest and most significant instance of a software project with a public bug database. Matthew writes

While none of these channels may be intended as an avenue for discussion, humans have frequently demonstrated that they will try to converse even in areas where discussions are not wanted.
...

The saddest example I know of is Bugzilla, a Web-based system originally developed to track bugs and requests for enhancement for the Mozilla software, and now used for a variety of other projects as well.

By default in Bugzilla, when someone adds a comment or makes any other change to a bug report, everyone associated with the bug report will receive an e-mail message: the reporter, the assigned programmer, the QA contact, and anyone else who has registered their interest. This can result in a lot of e-mail being sent.

It’s not so much of a problem when Bugzilla is used by a small or professional team, because participants have social or disciplinary incentives (or both) to ensure everything they do in the system is productive. But when Bugzilla is used by a large mostly-volunteer team, as it is with the Mozilla Project, you get problems. People argue about whether something is a bug or not. They argue about its severity. They argue about its schedule. They plead for the bug to be fixed soon. They throw tantrums. They make long tedious comments no-one can understand. In short, they treat Bugzilla as a discussion forum.

As a result, over the past few years several of Mozilla’s best programmers have begun to ignore most or all of the e-mail they receive from Bugzilla, for the good reason that they’d rather be fixing bugs than wading through Bugzilla discussions. The correct response from Bugzilla’s maintainers would have been to make Bugzilla harder to use as a discussion forum, but instead they made it easier. They added linkable numbers for comments, making it easier to reply to them in new comments. They made the comment field larger, aiding long and rambling comments. They added a mechanism for quoting a previous comment when replying, aiding long and rambling conversations. And they could have turned off the quoting feature in the mozilla.org installation, but they left it turned on.

Each of these decisions appeared to be good and proper, as it improved the usability of Bugzilla for those writing comments. But the purpose of Bugzilla is not to collect comments, it is to track bugs. And the resulting blizzard of comments has made Bugzilla less useful for tracking bugs.

It seems obvious that having a public bug database leads to information overload, but what is surprising is that the problems don't come from too many spurious or duplicate bugs being entered [as I expected] but from people actually using the bug database as it is intended.

Well, it looks like another idea I had turned out not to be as good as I first thought.


 

Categories: Ramblings

I was reading a post by Rory Blyth where he points to Steve Maine's explanation of the benefits of Prothon (an object-oriented programming language without classes). He writes

One quote from Steve's post that has me thinking a bit, though, is the following:

The inherent extensibility and open content model of XML makes coming up with a statically typed representation that fully expresses all possible instance documents impossible. Thus, it would be cool if the object representation could expand itself to add new properties as it parsed the incoming stream.

I can see how this would be cool in a "Hey, that's cool" sense, but I don't see how it would help me at work. I fully admit that I might just be stupid, but I'm honestly having a hard time seeing the benefit. Right now, I'm grabbing XML in the traditional fashion of providing the name of the node that I want as a string key, and it seems to be working just fine.

The problem solved by being able to dynamically add properties to a class in the case of XML<->object mapping technologies is that it allows developers to program against aspects of the XML document in a strongly typed manner even if they are not explicitly described in the schema for the XML document.

This may not seem obvious, so I'll provide an example that illustrates the point. David Orchard of BEA wrote a schema for the ATOM 0.3 syndication format. Below is the fragment of the schema that describes ATOM entries

 <xs:complexType name="entryType">
  <xs:sequence>
   <xs:element name="title" type="xs:string"/>
   <xs:element name="link" type="atom:linkType"/>
   <xs:element name="author" type="atom:personType" minOccurs="0"/>
   <xs:element name="contributor" type="atom:personType" minOccurs="0" maxOccurs="unbounded"/>
   <xs:element name="id" type="xs:string"/>
   <xs:element name="issued" type="atom:iso8601dateTime"/>
   <xs:element name="modified" type="atom:iso8601dateTime"/>
   <xs:element name="created" type="atom:iso8601dateTime" minOccurs="0"/>
   <xs:element name="summary" type="atom:contentType" minOccurs="0"/>
   <xs:element name="content" type="atom:contentType" minOccurs="0" maxOccurs="unbounded"/>
   <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute ref="xml:lang" use="optional"/>
  <xs:anyAttribute/>
 </xs:complexType> 

The above schema fragment produces the following C# class when the .NET Framework's XSD.exe tool is run with the ATOM 0.3 schema as input.

/// <remarks/>
[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://purl.org/atom/ns#")]
public class entryType {
   
    /// <remarks/>
    public string title;
   
    /// <remarks/>
    public linkType link;
   
    /// <remarks/>
    public personType author;
   
    /// <remarks/>
    [System.Xml.Serialization.XmlElementAttribute("contributor")]
    public personType[] contributor;
   
    /// <remarks/>
    public string id;
   
    /// <remarks/>
    public string issued;
   
    /// <remarks/>
    public string modified;
   
    /// <remarks/>
    public string created;
   
    /// <remarks/>
    public contentType summary;
   
    /// <remarks/>
    [System.Xml.Serialization.XmlElementAttribute("content")]
    public contentType[] content;
   
    /// <remarks/>
    [System.Xml.Serialization.XmlAnyElementAttribute()]
    public System.Xml.XmlElement[] Any;
   
    /// <remarks/>
    [System.Xml.Serialization.XmlAttributeAttribute(Namespace="http://www.w3.org/XML/1998/namespace")]
    public string lang;
   
    /// <remarks/>
    [System.Xml.Serialization.XmlAnyAttributeAttribute()]
    public System.Xml.XmlAttribute[] AnyAttr;

}

As a side note, I should point out that David Orchard's ATOM 0.3 schema is invalid since it refers to an undefined authorType, so I had to remove the reference from the schema to get it to validate.

The generated Any and AnyAttr fields show the problem that the ability to dynamically add fields to a class would solve. When programming against an ATOM feed using the above entryType class, once one encounters an extension element, one has to fall back to XML processing instead of programming using strongly typed constructs. For example, consider Mark Pilgrim's RSS feed, which has dc:subject elements that are not described in the ATOM 0.3 schema but are allowed due to the existence of xs:any wildcards. Watch how this complicates the following code, which prints the title, issued date and subject of each entry.

foreach(entryType entry in feed.Entries){

  Console.WriteLine("Title: " + entry.title);
  Console.WriteLine("Issued: " + entry.issued);

  string subject = null;

  //find the dc:subject among the wildcard extension elements
  if(entry.Any != null){
    foreach(XmlElement elem in entry.Any){
      if(elem.LocalName.Equals("subject") &&
         elem.NamespaceURI.Equals("http://purl.org/dc/elements/1.1/")){
        subject = elem.InnerText;
        break;
      }
    }
  }

  Console.WriteLine("Subject: " + subject);

}

As you can see, one minute you are programming against statically and strongly typed C# constructs and the next you are back to checking the names of XML elements and programming against the DOM. If there were infrastructure that enabled one to dynamically add properties to classes, then it is conceivable that even though the ATOM 0.3 schema doesn't define the dc:subject element, one would still be able to program against it in a strongly typed manner in generated classes. So one could write code like

foreach(entryType entry in feed.Entries){

  Console.WriteLine("Title: " + entry.title);
  Console.WriteLine("Issued: " + entry.issued);
  Console.WriteLine("Subject: " + entry.subject);
}

Of course, there are still impedance mismatches to resolve, like how to reflect the namespace names of elements or make the distinction between attributes and elements in the model, but having the capabilities Steve Maine describes in his original post would improve the XML<->Object mapping technologies that exist today.


 

Categories: XML

I just found the following post in Andrew Watt's blog

Beta of RSS Bandit Available - but doesn't work, at least for me

Dare Obasanjo has announced, RSS Bandit 1.2.0.106 (beta) available, the availability for download of a beta of RSS Bandit. The download is available from Sourceforge.net.

Installation was smooth but the beta unfortunately seems not to be able to access any feeds, whether RDF or RSS. Sometimes it fails with an exception, sometimes silently. I don't know whether it is because the uninstall of the previous version didn't clean up fully after itself or not (it didn't) or for some other reason.

The end result, however, is that RSS Bandit is broken meantime failing to access any RSS or RDF feed I have pointed it at.

The point of betas is to gather feedback about possible issues with the software before a release. If you are a beta user and would like to provide feedback, you can file bugs in the bug database on SourceForge, send a message to the mailing list, or post to the discussion board.
 
It may seem like this is too much trouble, in which case you can blog about your woes and Torsten or I will eventually find your post. However, it would be much appreciated if you provided a way to follow up and get more details about your problem. The above post by Andrew Watt is an example of a post we can't do much with, since he has no contact information on his blog and it doesn't support comments.
 
Thanks to all the folks out there using RSS Bandit and helping us make it better. You all rock.

 

Categories: RSS Bandit

March 26, 2004
@ 06:08 PM

This month's issue of The Source magazine asks the following question on its cover: “Are Rappers the New Target of America's Criminal Justice System?” In an article entitled “Operation Lockdown” there is a spread on various rappers who've had trouble with the law. They include

  1. Jamal Barrow aka Shyne: Assault, Gun Possession and Reckless Endangerment, sentenced to 10 years at the Clinton Correctional Facility in Dannemora, New York.

  2. Corey Miller aka C-Murder: Second Degree Murder, awaiting sentencing but facing a mandatory life sentence.

  3. Michael Tyler aka Mystikal: Extortion and Sexual Battery, sentenced to 6 years in prison.

  4. Ricky Walters aka Slick Rick: Attempted Second Degree Murder, Self-Deportation and Illegal Reentry, served 5 years for the former and 17 months in an INS detention center for the latter.

  5. John Austin aka Ras Kass: Driving Under the Influence sentenced to 16 months at the California State Prison in Corcoran.

  6. Dwight Grant aka Beanie Sigel: Attempted Murder and Gun Possession, awaiting trial.

  7. Chi-Ali Griffith: Murder and Gun Possession, awaiting trial after being on the run for two years and being profiled on America's Most Wanted.

  8. John Fortè: Drug Possession with Intent to Distribute, sentenced to 14 years.

  9. Patrick Houston aka Project Pat: Aggravated Robbery and Parole Violation, sentenced to 51 months.

  10. Warren McGlone aka Steady B and Chris Roney aka Cool C: Murder and Armed Robbery, Cool C was sentenced to death by lethal injection while Steady B was sentenced to life without parole.

  11. Chad Butler aka Pimp C from the group UGK: Aggravated Assault with a Weapon, sentenced to 8 years.

  12. Marion Knight aka Suge Knight (CEO of Death Row Records): Firearm Trafficking, Assault, Parole Violation and Other Charges, currently serving 10 months for parole violation.

  13. Shyheim: Armed Robbery and Gun Possession, sentenced to two years.

  14. J-Dee of Da Lench Mob: Murder, sentenced to 25 years-to-life.

  15. Ronald Blackwell aka Spigg Nice from the group Lost Boyz: Conspiracy to Commit Bank Robbery, sentenced to 37 years.

  16. Marvin Bernard aka Tony Yayo from the group G-Unit: Gun Possession and Probation Violation, served a year.

  17. Peedi Crakk of the group State Property: Gun Possession, sentenced to 11 to 23 months.

  18. Big Lurch: Murder, sentenced to life.

  19. Styles P of the Lox: Assault, served 9 months.

  20. Ol' Dirty Bastard: Probation Violation and Drug Possession, served 20 months.

  21. Lil' Troy: Drug Possession, served 18 months.

  22. Flesh-N-Bone: Threats with a Deadly Weapon, sentenced to 12 years.

  23. Keith Murray: Assault, sentenced to 5 years.

  24. Capone from the group Capone-N-Noreaga: Gun Possession, served 27 months.

  25. Tupac Shakur aka 2Pac: Sexual Assault, sentenced to 18 months to 4.5 years.

The above list doesn't include various rappers who got probation or were put under house arrest for their crimes, such as Jay-Z, Trick Daddy and Coolio. Going back to the original question asked on the cover of The Source as to whether the criminal justice system is targeting rappers, it seems to me that, if anything, rappers are just going out of their way to tangle with the criminal justice system by committing crimes. Of course, there is the fact that young, black males are more likely to be harshly sentenced for a crime than their caucasian counterparts, but this is different from the criminal justice system going out of its way to target rappers.

I find it sad that a lot of these folks whose music I've bought in the past made it out of the hood just to screw their lives up by doing the wrong thing at the wrong time.  


 

Categories: Ramblings

March 26, 2004
@ 04:47 PM

The following are excerpts from the interview with 50 Cent (multi-platinum hip-hop artist with the highest album and single sales for 2003) in the April 2004 issue of Playboy.

Playboy: When you started dealing, at 12, where did you get the drugs?

50 Cent: I was uncomfortable asking my grandparents for certain things. They raised their kids at a time when ProKeds cost $10. When I was a kid the new Jordans were more than $100. The people I met while I was with my mother, they had jewelry and nice cars. They gave me three and a half grams -- an eight ball. That's the truth. The same money I would've paid for those Jordans. Sometimes when you ask for fish people give you a pole.

Playboy: You did buy-one-get-one-free promotions.

50 Cent: And I only called it “buy one get one free” because they were calling it “two for $5” on the next block. I was trying to make it different. I was marketing! Fiends want something free, so use the word free. It's better than “two for $5”.

Playboy: Did it work?

50 Cent: Hell, yes, it worked. And I made the pieces bigger. Some guys made small pieces and figured they'd make a huge profit. But it takes longer to sell the pieces. I made the pieces huge, and they started coming from down the block. All the pieces would sell the same day and I'd accumulate more money.

Playboy: This seems pretty heavy for a teenager.

50 Cent: Older dudes in our neighborhood were way worse. They were robbing banks; they would kidnap each other. They tried to rob me one night in front of my grandmother's house. I was 19 and had bought a 400 SE Mercedes-Benz. I got to the front door, and the sliding door of a cargo van opened. They had a shotgun. I jumped over the porch and ran for a gun in the backyard. Pow! I got away from them, though. There's a strong possibility they would've killed me.

Playboy: Did you ever use the gun you hid in your grandmother's yard?

50 Cent: The first time I ever shot somebody, I was in junior high school. I was coming out of a project building -- I ain't gonna tell you where. I was going to see this girl. I had my uncle's jewelry on, and two kids decided to rob me. This kid was like “Yo c'mere, let me holler at you”. As I turned they all started pouring out of the lobby. It had to be 15 people stepping to me to rob me. I had a little .380 six-shot pistol, and I didn't even look. I just spun around bangin'. Pop-pop-pop-pop-pop-pop! Shot and just kept running.

Playboy: Did you hit anybody?

50 Cent: Yeah, I hit one of 'em. And that encouraged the next situation. After that, you just get comfortable shooting. The first time, you're scared to death, as scared as the guy you're shooting at. Then it grows easier for you. Everybody has a conscience. You say to yourself, Man, he was gonna do something to me. Then it's like, I don't give a fuck, whatever. After a while the idea of shooting somebody doesn't bother you.

Playboy: When you were signed with Columbia, you decided to quit dealing. Then what happened?

50 Cent: I got a $65,000 advance; $50,000 went to Jam Master Jay, and $10,000 went to the lawyer to negotiate my contractual release from Jay and do my contract with Columbia. I had only $5,000 left. I had to be able to provide for myself so I took the $5,000 and turned it into 250 grams.

Playboy: You went back to dealing.

50 Cent: I had no choice.

Playboy: Do you think Jam Master Jay ripped you off?

50 Cent: He didn't. He took what he felt was his. I was never bitter at Jay, because what I learned from him is what allows me now to sell 10 million records. He groomed me. That's worth $50,000.

There were a bunch of other questions, but most of them focused either on his violent past or his publicized beef with Ja Rule and Murder Inc. Two things struck me as I read the interview. The first was how people can live in the same country, and sometimes the same city, yet exist in totally different worlds. The second is that America is truly the land of opportunity.


 

Categories: Ramblings

I recently read Al Franken's Lies and the Lying Liars Who Tell Them: A Fair and Balanced Look at the Right, which was a very interesting read. It was definitely partisan, but in this age of blatant lying by practically every highly visible member of the Bush cabinet and the Republican media boosters on Fox News, it's hard to be objective when describing some of the things they've done.

Al did a lot of research for the book, thanks to a team of 14 graduate and undergraduate student researchers he got from Harvard. There are extensive end notes and some thorough dissection of the lies of the usual suspects in the conservative media like Sean Hannity, Bill O'Reilly and Ann Coulter. There was also a personal account from behind the scenes of the political circus that was the memorial service of the late Senator Paul Wellstone. An interesting data point is comparing the coverage on CNN shortly after the memorial service happened, where they wrote that thousands pay tribute to Wellstone, to the coverage the day after, once Republican talk show hosts had put their negative spin on it. The story became tone of Wellstone memorial generates anger. Al gives a blow-by-blow of how this happened from behind the scenes and exposes a lot of the half-truths and exaggerations that led to the media reports.

Another thing I found interesting was chapter 16 of the book, entitled Operation Ignore, which described the Bush administration's attitude to terrorism, which was to consider it a lower priority than the previous administration did. A lot of the stuff I've read online about Richard Clarke's testimony to the independent 9-11 commission was stuff I'd already seen in Al Franken's book. I'm just glad it is getting wider coverage in the mainstream media instead of just being available to the few people who bought Al Franken's book.

I pray we don't get four more years of this shit...


 

Categories: Ramblings

Torsten and I are getting ready to ship the next version of RSS Bandit and have made a beta version available.

The beta version adds a couple of features over the last version of RSS Bandit such as support for the Atom 0.3 syndication format, the ability to import OPML lists into a specific category, translations to German & simplified Chinese, ability to display logos of feeds that provide them and auto-completion when typing URLs in the address bar.

The beta version also fixes a number of bugs such as the fact that the notification bubble pops up too frequently, right-clicking URLs in the address bar makes them disappear, inability to launch the application on Win98/WinME, sometimes closing a browser tab causes dozens of IE windows to be spawned, clicking mailto: or news: links opens a new browser tab and most importantly the fact that in certain cases feeds are not updated.

Any comments about the beta version should be brought up on the mailing list or discussion board. Bugs should be filed in the bug database on SourceForge and feature requests go to the feature request database on SourceForge.

Our current plan is for the beta to last 2 to 3 weeks after which we'll create an installer for the next release.

PS: Given that RSS Bandit now supports other formats besides RSS and will support more technologies in future (e.g. NNTP) it seems to make sense for us to rename the application. Torsten and I are interested in any suggestions for a new name for the project.


 

Categories: RSS Bandit

Miguel pointed me to an interesting discussion between Havoc Pennington of RedHat and Paolo Molaro, a lead developer of the Mono project. Although I exchanged mail with Miguel about this thread about a week ago, I've been watching the discussion as opposed to directly commenting on it, because I've been trying to figure out if this is just a discussion between a couple of Open Source developers or a larger discussion between RedHat and Novell being carried out by proxy.

Anyway, the root of the discussion is Havoc's entry entitled Java, Mono, or C++?, where he starts off by pointing out that a number of the large Linux desktop projects are interested in migrating from C/C++ to managed code. Specifically he writes

In the Linux desktop world, there's widespread sentiment that high-level language technologies such as garbage collection, sandboxed code, and so forth would be valuable to have and represent an improvement over C/C++.

Several desktop projects are actively interested in this kind of technology:

  • GNOME: many developers feel that this is the right direction
  • Mozilla: to take full advantage of XUL, it has to support more than just JavaScript
  • OpenOffice.org: has constantly flirted with Java, and is considering using Java throughout the codebase
  • Evolution: has considered writing new code and features in Mono, though they are waiting for a GNOME-wide decision

Just these four projects add up to probably 90% of the lines of code in a Linux desktop built around them

Havoc then makes the argument that the Open Source community will have to make a choice between Java/JVM or C#/CLI. He argues against choosing C#/CLI by saying

Microsoft has set a clever trap by standardizing the core of the CLI and C# language with ECMA, while keeping proprietary the class libraries such as ASP.NET and XAML. There's the appearance of an open managed runtime, but it's an incomplete platform, and no momentum or standards body exists to drive it to completion in an open manner...Even if we use some unencumbered ideas or designs from the .NET world, we should never define our open source managed runtime as a .NET clone.

and argues for Java/JVM by writing

Java has broad industry acceptance, historically driven by Sun and IBM; it's by far the most-used platform in embedded and on the UNIX/Linux enterprise server...One virtue of Java is that it's at least somewhat an open standard; the Java Community Process isn't ideal, but it does cover all the important APIs. The barest core of .NET is an ECMA standard, but the class libraries of note are Microsoft-specific...It's unclear that anyone but Microsoft could have significant influence over the ECMA spec in any case...

Also worth keeping in mind, OO.org is already using Java.

Combining Java and Linux is interesting from another standpoint: it merges the two major Microsoft-alternative platforms into a united front.

At this point it is clear that Havoc does agree with what Miguel and the rest of the Mono folks have been saying for years about needing a managed code environment to elevate the state of the art in desktop application development on UNIX-based Open Source platforms. I completely disagree with him that Sun's JCP process is somehow more of an open standard than ECMA's; that just seems absurd. He concludes the article with

What Next?

For some time, the gcj and Classpath teams have been working on an open source Java runtime. Perhaps it's time to ramp up this effort and start using it more widely in free software projects. How long do we wait for a proprietary JDK to become GPL compatible before we take the plunge with what we have?

The first approach I'd explore for GNOME would be Java, but supporting a choice of gcj or IKVM or the Sun/IBM JDKs. The requirement would be that only the least common denominator of these three can be used: only the subset of the Java standard completed in GNU Classpath, and avoiding features specific to one of the VMs. Over time, the least common denominator becomes larger; Classpath's goal is to complete the entire Java standard.

There is also some stuff about needing to come up with an alternative to XAML so that GNOME and co. stay competitive, but that just seems like the typical Open Source need to clone everything a proprietary vendor does without thinking it through. There was no real argument as to why he thought it would be a good idea, just a need to play catch-up with Microsoft.

Now on to the responses. Paolo has two responses to Havoc's call to action. Both posts argue that technically Mono is as mature as the Open Source Java/JVM projects and has niceties such as P/Invoke that make communication between native and managed code straightforward. Secondly, his major point is that there is no more reason to believe that Microsoft will eventually sue the Mono project for violating patents on .NET Framework technologies than there is to believe that Sun would do the same with Java technologies. Not only has Sun sued before when it felt Java was being threatened (the lengthy lawsuit with Microsoft), but unlike Microsoft it has never given any Java technology to a standards body to administer in a royalty-free manner, as Microsoft has done with C# and the CLI. Miguel also followed up with his post Java, Gtk and Mono, which shows that it is possible to write Java code against Mono and points out that language choice is separate from the choice of runtime (JVM vs. CLI). He also echoes Paolo's sentiments on Sun's and Microsoft's behavior with regards to software patents and their technologies in his post On Software Patents.

Havoc has a number of followup posts where he points out various other options people have mailed him and notes that his primary worry is that the current state of affairs will lead to fragmentation in the Open Source desktop world. Miguel responds in his post On Fragmentation, reply with the following opening

Havoc, you are skipping over the fact that a viable compromise for the community is not a viable compromise for some products, and hence why you see some companies picking a particular technology as I described at length below.

which I agree with completely. Even if the Open Source community agreed to go with C#/CLI, I doubt that Sun would choose anything besides Java for their “Java Desktop System”. If Havoc is saying that having companies like Sun on board with whatever decision he is trying to arrive at is a must, then he's already made the decision to go with Java and the JVM. Given that Longhorn will have managed APIs (aka WinFX), Miguel believes that the ability to migrate from Windows programming to Linux programming [based on Mono] would be huge. I agree; one of the reasons Java became so popular was the ease with which one could migrate from platform to platform and preserve one's knowledge, since Java was somewhat Write Once Run Anywhere (WORA). However, this never extended to building desktop applications, which Miguel is now trying to tap into by pushing Linux desktop development to be based on Mono.

I have no idea how Microsoft would react to the outcome that Miguel envisions but it should be an interesting ride.

 


 

Categories: Technology

Aaron Skonnard has a new MSDN magazine article entitled All About Blogs and RSS where he does a good job of summarizing the various XML technologies around weblogs and syndication. It is a very good FAQ and one I will definitely be pointing folks to in the future when asked about blogging technologies.


 

Categories: Mindless Link Propagation | XML

My recent Extreme XML column entitled Best Practices for Representing XML in the .NET Framework is up on MSDN. The article was motivated by Krzysztof Cwalina, who asked the XML team for design guidelines for working with XML in WinFX. There has been a bit of inconsistency in how APIs in the .NET Framework represent XML, and this is the first step in trying to introduce a set of best practices and guidelines.

As stated in the article there are three primary situations when developers need to consider what APIs to use for representing XML. The situations and guidelines are briefly described below:

  • Classes with fields or properties that hold XML: If a class has a field or property that is an XML document or fragment, it should provide mechanisms for manipulating the property as both a string and as an XmlReader (a minimal sketch of this guideline follows the list).

  • Methods that accept XML input or return XML as output: Methods that accept or return XML should favor XmlReader or XPathNavigator unless the user is expected to be able to edit the XML data, in which case XmlDocument should be used.

  • Converting an object to XML: If an object wants to provide an XML representation of itself for serialization purposes, then it should use the XmlWriter if it needs more control of the XML serialization process than what is provided by the XmlSerializer. If the object wants to provide an XML representation of itself that enables it to participate fully as a member of the XML world, such as allow XPath queries or XSLT transformations over the object, then it should implement the IXPathNavigable interface.
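To make the first guideline concrete, here is a minimal sketch (the Order class and its members are hypothetical, invented purely for illustration):

using System;
using System.IO;
using System.Xml;

public class Order {
  string notesXml = "<notes><note>Rush delivery</note></notes>";

  //string access for simple get/set scenarios
  public string Notes {
    get { return notesXml; }
    set { notesXml = value; }
  }

  //XmlReader access for streaming, XML-aware consumers
  public XmlReader GetNotesReader(){
    return new XmlTextReader(new StringReader(notesXml));
  }
}

class Demo {
  static void Main(){
    Order order = new Order();
    XmlReader reader = order.GetNotesReader();
    while(reader.Read()){
      if(reader.NodeType == XmlNodeType.Text)
        Console.WriteLine(reader.Value); //prints "Rush delivery"
    }
  }
}

The dual exposure means consumers that just want to stash the XML somewhere can use the string property, while XML-aware consumers can stream through the content without reparsing it themselves.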

A piece of criticism I got from Joshua Allen was that the guidelines seemed to endorse a number of approaches instead of defining the one true approach. The reason for this is that there isn't one XML API that satisfies the different scenarios described above. In Whidbey we will be attempting to collapse the matrix of choices by expanding the capabilities of XML cursors, so that there shouldn't be a distinction between situations where an API exposes a tree-based API like XmlDocument or a cursor-based one like XPathNavigator.

One of the interesting design questions we've gone back and forth on is whether we should have both a read-only XML cursor and a read-write XML cursor (i.e. XPathNavigator2 and XPathEditor) or a single XML cursor class with a flag that indicates whether it is read-only or not (i.e. the approach taken by the System.IO.Stream class, which has CanRead and CanWrite properties). In Whidbey beta 1 we've gone with the former approach, but there is discussion on whether we should go with the latter approach in beta 2. I'm curious as to which approach developers using System.Xml would favor.
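To make the trade-off concrete, here is a sketch of the two API shapes (these are illustrations of the design options, not the actual Whidbey classes):

//Option 1: separate read-only and editable cursor types
public abstract class XPathNavigator2 {
  public abstract string Value { get; }
  public abstract bool MoveToFirstChild();
}

public abstract class XPathEditor : XPathNavigator2 {
  public abstract void SetValue(string value);
}

//Option 2: a single cursor type with a capability flag, like System.IO.Stream
public abstract class XmlCursor {
  public abstract bool CanEdit { get; }
  public abstract string Value { get; }
  public abstract bool MoveToFirstChild();
  public abstract void SetValue(string value); //throws when CanEdit is false
}

With the first option the compiler stops you from calling edit methods on a read-only cursor; with the second you find out at run time, just as you do when writing to a read-only Stream.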


 

Categories: XML

In less than a week we'll be launching the XML Developer Center on MSDN and replacing the site at http://msdn.microsoft.com/xml. The main differences between the XML Developer Center and what exists now will be

  1. The XML Developer Center will provide an entry point to working with XML in Microsoft products such as Office and SQL Server.

  2. The XML Developer Center will have an RSS feed.

  3. The XML Developer Center will pull in content from my work weblog.

  4. The XML Developer Center will provide links to recommended books, mailing lists and weblogs.

  5. The XML Developer Center will have content focused on explaining the fundamentals of the core XML technologies such as XML Schema, XPath, XSLT and XQuery.

  6. The XML Developer Center will provide sneak peeks at advances in XML technologies at Microsoft that will be shipping in future releases of the .NET Framework, SQL Server and Windows.

During the launch the feature article will be the first in a series by Mark Fussell detailing the changes we've made to the System.Xml namespaces in Whidbey. His first article will focus on the core System.Xml classes like XmlReader and XPathNavigator. A follow-up article is scheduled that will talk about additions to System.Xml since the last version of the .NET Framework, such as XQuery. Finally, either Mark or Matt Tavis will write an article about the changes coming to System.Xml.Serialization, such as the various hooks for allowing custom code generation from XML schemas like IXmlSerializable (which is no longer an unsupported interface) and SchemaImporterExtensions.

I'll also be publishing our guidelines for exposing XML in .NET applications during the launch. If there is anything else you'd like to see on the XML Developer Center, let me know.


 

Categories: XML

Both Dave Walker and Tim Bray report that their aggregators of choice barfed when trying to read a post because they didn't know how to deal with tags in content. Weird. RSS Bandit dealt with it fine. Click below for the screenshot.
 

Categories: RSS Bandit

I just noticed that Arve Bersvendsen has written a post entitled 11 ways to valid RSS, in which he states he has seen 11 different ways of providing content in an RSS feed, namely

Content in the description element

I have so far identified five different variants of content in the <description> element:

  1. Plaintext as CDATA with HTML entities - Validate
  2. HTML within CDATA - Validate
  3. HTML escaped with entities - Validate
  4. Plain text in CDATA - Validate
  5. Plaintext with inline HTML using escaping - Validate

<content:encoded>

I have encountered and identified two different ways of using <content:encoded>:

  1. Using entities - Validate
  2. Using CDATA - Validate

XHTML content

Finally, I have encountered and identified four different ways in which people have specified XHTML content:

  1. Using <xhtml:body> - Validate
  2. Using <xhtml:div> - Validate
  3. Using <body> with default namespace - Validate
  4. Using <div> with default namespace - Validate

At first this seems like a lot, until you actually try to program against these feeds using an XML parser, in which case the first thing you notice is that there is no difference between programming against CDATA and programming against escaped entities, since they are both syntactic sugar. For example, the XML infoset and data models compatible with it, such as the XPath data model, do not differentiate character content that is written as character references, CDATA sections or entered directly. So the following

    <test><![CDATA[ ]]>2</test>
    <test>&#160;2</test>
    <test> 2</test>

are all equivalent. More directly, if you load all three into an instance of System.Xml.XmlDocument and check their InnerText property, they all return the same result.
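Here's a minimal sketch that demonstrates this (the three <test> elements above become string literals; the character reference &#160; becomes the \u00A0 escape in C#):

using System;
using System.Xml;

class InfosetEquivalence {
  static void Main(){
    string[] docs = {
      "<test><![CDATA[\u00A0]]>2</test>", //CDATA section
      "<test>&#160;2</test>",             //character reference
      "<test>\u00A02</test>"              //character entered directly
    };
    foreach(string xml in docs){
      XmlDocument doc = new XmlDocument();
      doc.LoadXml(xml);
      //prints the same two-character string all three times
      Console.WriteLine("'" + doc.DocumentElement.InnerText + "'");
    }
  }
}

So this reduces Arve's first two elements to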

Content in the description element

I have so far identified two (down from five) different variants of content in the <description> element:

  1. HTML
  2. Plain text

<content:encoded>

I have encountered and identified one way (down from two) of using <content:encoded>:

  1. Containing escaped HTML content

If your code makes any distinctions other than these, then it is a sign that you have (a) misunderstood how to process RSS or (b) are using a crappy XML parser. When I first started working on RSS Bandit I was also confused by these distinctions, but after a while things became clearer. The only problem here is the description element, since you can't tell whether it contains HTML or not without guessing. Since RSS Bandit always provides the content to an embedded web browser this isn't a problem, but I can see how it could be one for aggregators that don't know how to process HTML (although I've never seen one before).

Another misunderstanding by Arve seems to be about how namespaces work in XML. A few years ago I wrote an article entitled XML Namespaces and How They Affect XPath and XSLT where I wrote

A qualified name, also known as a QName, is an XML name called the local name optionally preceded by another XML name called the prefix and a colon (':') character...The prefix of a qualified name must have been mapped to a namespace URI through an in-scope namespace declaration mapping the prefix to the namespace URI. A qualified name can be used as either an attribute or element name.

Although QNames are important mnemonic guides to determining what namespace the elements and attributes within a document are derived from, they are rarely important to XML aware processors. For example, the following three XML documents would be treated identically by a range of XML technologies including, of course, XML schema validators.

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:complexType id="123" name="fooType"/>
</xs:schema>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
        <xsd:complexType id="123" name="fooType"/>
</xsd:schema>
<schema xmlns="http://www.w3.org/2001/XMLSchema">
        <complexType id="123" name="fooType"/>
</schema>
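Here is a minimal sketch (using System.Xml) showing that two of the documents above, with different prefixes, are indistinguishable to namespace-aware code:

using System;
using System.Xml;

class PrefixIrrelevance {
  static void Main(){
    XmlDocument a = new XmlDocument();
    a.LoadXml("<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'/>");
    XmlDocument b = new XmlDocument();
    b.LoadXml("<schema xmlns='http://www.w3.org/2001/XMLSchema'/>");

    //the prefixes differ, but the names XML-aware code compares are identical
    Console.WriteLine(a.DocumentElement.LocalName == b.DocumentElement.LocalName);       //True
    Console.WriteLine(a.DocumentElement.NamespaceURI == b.DocumentElement.NamespaceURI); //True
  }
}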

Bearing this information in mind, Arve's example reduces to

XHTML content

Finally, I have encountered and identified two (down from four) different ways in which people have specified XHTML content:

  1. Using <xhtml:body>
  2. Using <xhtml:div>

Thus with judicious use of an XML parser (which makes sense since RSS is an XML format), Arve's list of eleven ways of providing content in RSS is actually whittled down to five. I assume Arve is unfamiliar with XML processing, which led to his initial confusion.

NOTE: Before anyone bothers to start pointing out that Atom somehow frees aggregator authors from this myriad of options, I'll point out that Atom has more ways of encoding content than these. Even ignoring the inconsequential differences in syntactic sugar in XML (escaped tags vs. unescaped tags in CDATA sections), the various combinations of the <summary> and <content> elements, the mode attribute (escaped vs. xml) and MIME types (text/plain, text/html, application/xhtml+xml) more than double the number of variations possible in RSS.


 

Categories: XML

March 16, 2004
@ 05:10 PM

While hanging around TheServerSide.com I discovered a series on Naked Objects. It's an interesting idea that eschews separating application layers in GUIs (via MVC) or server applications (presentation/business logic/data access layers) and instead involves coding only domain model objects, which then have a standard GUI autogenerated for them. There are currently five articles in the series, which are listed below with my initial impressions of each.

Part 1: The Case for Naked Objects: Getting Back to the Object-Oriented Ideal
Part 2: Challenging the Dominant Design of the 4-Layer Architecture
Part 3: Write an application in Java and deploy it on .Net
Part 4: Modeling simultaneously in UML, Java, and User Perspectives
Part 5: Fat is the new Thin: Building Rich Internet Applications with Naked Objects

Part 1 points out that in many N-tier server-side applications there are four layers: persistence, the domain model, the controller and presentation. The author points out that object-relational mapping frameworks are now popular as a mechanism for collapsing the domain model and persistence layer. Naked objects comes from the other angle and attempts to collapse the domain model, control and presentation layers. The article also argues that current practices in application development efforts, such as web services and component-based architectures which separate data access from the domain model, reduce many of the benefits of object-oriented programming.

In a typical naked objects application, the framework uses reflection to determine the methods of an object and render them using a generic graphical user interface (screenshot). This encourages objects to be 'behaviourally complete': all significant actions that can be performed on the object must exist as methods on the object.
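Here is a minimal sketch of the reflection half of that idea in C# (the Customer class is a hypothetical domain object, not part of the Naked Objects framework):

using System;
using System.Reflection;

public class Customer {
  public void PlaceOrder() { /* domain logic */ }
  public void UpdateAddress() { /* domain logic */ }
}

class GenericActionRenderer {
  static void Main(){
    object domainObject = new Customer();
    Console.WriteLine("Actions for " + domainObject.GetType().Name + ":");
    //every public method declared on the object becomes a generic UI action
    foreach(MethodInfo m in domainObject.GetType().GetMethods(
        BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)){
      Console.WriteLine("  [" + m.Name + "]");
    }
  }
}

A real framework would go further, generating forms from properties and wiring the menu items to method invocations via MethodInfo.Invoke, but the mechanism is the same.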

The author states that there are six benefits of using naked objects

  • Higher development productivity through not having to write a user interface
  • More maintainable systems through the enforced use of behaviourally-complete objects.
  • Improved usability deriving from a pure object-oriented user interface
  • Easier capture of business requirements because the naked objects would constitute a common language between developers and users.
  • Improved support for test-driven development
  • It facilitates cross-platform development

What I found interesting about the first article in the series is that the author rails against separating the domain model from the data access layer, yet naked objects are more about blending the GUI layer with the domain model. There seem to be some missing pieces to the article. Perhaps the implication is that one should use object-relational mapping technologies in combination with naked objects to collapse an application from 4 layers to a single 'behaviorally complete' domain model?

Part 2 focuses on implementing the functionality of a 4-layer application using naked objects. One of the authors had written a tutorial application for a book, which was software for running an auto servicing shop that performed tasks like booking in cars for service and billing the customer. The conclusion after the rewrite was that the naked objects implementation took fewer lines of code and had fewer classes than the previous implementation, which had 4 layers. It also took less time to add new functionality, such as obtaining the customer's sales history, in the naked objects implementation than in the 4-layer implementation.

There are caveats. One was that the user interface was not as rich as the one where the developer had an explicit presentation layer, as opposed to relying on a generic autogenerated user interface. Also, complex operations such as 'undoing' a business action were not supported in the naked objects implementation.

Part 3 points out that if you write a naked objects implementation targeting Java 1.1 then you can compile it using J# without modification. Thus porting from Java to .NET should be a cinch as long as you use only Java 1.1 features. Nothing new here.

Part 4 points out that naked objects encourages “code first design”, which the authors claim is a good thing. They also point out that if one really wants to get UML diagrams out of a naked objects application, one can use tools like Together which can generate UML from source code.

I'm not sure I agree that banging out code first and writing use cases or design documents afterwards is a software development methodology worth encouraging.

Part 5 trots out the old saw about rich internet applications and how much better they are than limiting HTML-based browser applications. The author points out that writing a Java applet which uses the naked objects framework gives a richer user experience than an HTML-based application. However, as mentioned in previous articles, you could build an even richer client interface with an explicit presentation layer instead of relying on the generic user interface provided by the naked objects framework.

Interesting ideas. I'm not sure how well they'd scale up to building real-world applications but it is always good to challenge assumptions so developers don't get complacent. 


 

Categories: Technology

March 15, 2004
@ 04:35 PM

In what seems to be the strangest news story I've read this year, I find out that Sun snatches up XML guru. What I found particularly interesting in the story was the following excerpt

One of the areas Bray expects to work on is developing new applications for Web logs, or "blogs," and the RSS (Resource Description Framework Site Summary) technology that grew out of them. "I think that this is potentially a game-changer in some respects, and there are quite a few folks at Sun who share that opinion," he said.

Though RSS is traditionally thought of as a Web publishing tool, it could be used for much more than keeping track of the latest posts to blogs and Web sites, Bray said. "I would like to have an RSS feed to my bank account, my credit card, and my stock portfolio," he said.

Personally I think it's a waste of Tim Bray's talents having him work on RSS or its competitor du jour, Atom, but it should be fun seeing whether he can get Sun out of its XML funk as well as stop them from spreading poisonous ideas like replacing XML with ASN.1.

Update: Tim Bray has a post about his new job entitled Sunny Boy where he writes

That aside, I’m comfy being officially a direct competitor of Microsoft. On the technical side, I find the APIs inelegant, the UI aesthetics juvenile, and the neglect of the browser maddening.

Sounds like fighting words. This should be fun. :)


 

Categories: XML

My homegirl, Gretchen Ledgard (y'know, Josh's wife), has helped start the Technical Careers @ Microsoft weblog. According to her introductory post you'll find stuff like

  • Explanation of technical careers at Microsoft.  What do people really do at Microsoft?  What does a “typical” career path look like?  What can you do to prepare yourself for a career at Microsoft?
  • Sharing of our recruiting expertise.  Learn “trade secrets” from Microsoft recruiters!  What does a good resume look like?  How can you get noticed on the internet?  How should you best prepare for an interview?
  • Information on upcoming Microsoft Technical Recruiting events and programs. 
I hope Gretchen dishes up the dirt on how Microsoft recruiters deal with competition for a candidate, such as when a prospective hire also has an offer from another attractive company such as Google. Back in my college days, the company that was most competitive with Microsoft was Trilogy (what a difference a few years make).

I remember when I first got my internship offer and told my recruiter I also had an offer from i2 Technologies, she quickly whipped out a pen and did the math comparing the compensation I'd get at Microsoft to what I'd get from i2. I eventually picked Microsoft over i2 for that summer internship, which definitely turned out to be a life-altering decision. Ahhh, memories.


     

After lots of procrastination, we now have online documentation for RSS Bandit. As usual, the current table of contents is just a placeholder and the real content is Phil Haack's Getting Started with RSS Bandit. The table of contents for the documentation I plan to write [once the MSDN XML Developer Center launches in about a week or so] is laid out below.

  • Bandit Help
    • Getting Started
      • What is an RSS feed?
      • What is an Atom feed?
      • The RSS Bandit user interface
      • Subscribing to a feed
      • Locating new feeds
      • Displaying feeds
      • Changing the web browser security settings
      • Configuring proxy server settings
    • Using RSS Bandit from Multiple Computers
      • Synchronization using FTP
      • Synchronization using a dasBlog weblog
      • Synchronization using a local or network file
    • Advanced Topics
      • Customizing the Way Feeds Look using XSLT
      • Creating Search Folders
      • Adding Integration with your Favorite Search Engine
      • Building and Using RSS Bandit plugins
    • Frequently Asked Questions
    • How to Provide Feedback
    • Contributing to RSS Bandit

If you are an RSS Bandit user, I'd love to get your feedback.


     

Categories: RSS Bandit

I'm not sure which takes the cake for geekiest wedding proposal: popping the question on a customized PC case or on a custom Magic: The Gathering™ card.

Anyone else have some particularly geeky wedding proposals to share?


     

It is now possible to use RSS Bandit to read protected LiveJournal feeds, as they now support HTTP authentication. Brad Fitzpatrick wrote

Digest auth for RSS
Apparently this was never announced:

http://www.livejournal.com/users/USER/data/rss?auth=digest

Get RSS feeds (including protected entries) by authenticating with HTTP Digest Auth. Good for aggregators.

Good indeed. Given that I agitated for this in a previous post, I'd like to thank the LiveJournal folks for implementing this feature.
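For aggregator authors curious what consuming such a feed looks like from .NET, here's a minimal sketch (USER and the password are placeholders; System.Net answers the server's digest challenge automatically):

using System;
using System.IO;
using System.Net;

class FetchProtectedFeed {
  static void Main(){
    string url = "http://www.livejournal.com/users/USER/data/rss?auth=digest";
    HttpWebRequest request = (HttpWebRequest) WebRequest.Create(url);
    //credentials are sent in response to the server's digest challenge
    request.Credentials = new NetworkCredential("USER", "password");
    WebResponse response = request.GetResponse();
    StreamReader reader = new StreamReader(response.GetResponseStream());
    Console.WriteLine(reader.ReadToEnd()); //the feed, protected entries included
    reader.Close();
  }
}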


     

Categories: RSS Bandit

I've written about this before, but a recent mail from David Stutz and rumblings about slipped dates pushed this topic to the forefront of my mind today. If your competition's mantra is to ship "little components that can be combined together" and to "release early, release often", is it wise to counter with a strategy that involves integrating monolithic applications into even larger applications, thus multiplying the complexity and the integration issues you have to deal with?

On the one hand, no one can argue that the success of Microsoft Office isn't related to the fact that it is a suite of programs that work well together, but on the other hand, as David Stutz wrote in his farewell email

As the quality of this software improves, there will be less and less reason to pay for core software-only assets that have become stylized categories over the years: Microsoft sells OFFICE (the suite) while people may only need a small part of Word or a bit of Access. Microsoft sells WINDOWS (the platform) but a small org might just need a website, or a fileserver. It no longer fits Microsoft's business model to have many individual offerings and to innovate with new application software. Unfortunately, this is exactly where free software excels and is making inroads. One-size-fits-all, one-app-is-all-you-need, one-api-and-damn-the-torpedoes has turned out to be an imperfect strategy for the long haul.

Digging in against open source commoditization won't work - it would be like digging in against the Internet, which Microsoft tried for a while before getting wise. Any move towards cutting off alternatives by limiting interoperability or integration options would be fraught with danger, since it would enrage customers, accelerate the divergence of the open source platform, and have other undesirable results. Despite this, Microsoft is at risk of following this path, due to the corporate delusion that goes by many names: "better together," "unified platform," and "integrated software." There is false hope in Redmond that these outmoded approaches to software integration will attract and keep international markets, governments, academics, and most importantly, innovators, safely within the Microsoft sphere of influence. But they won't.

Exciting new networked applications are being written. Time is not standing still. Microsoft must survive and prosper by learning from the open source software movement and by borrowing from and improving its techniques. Open source software is as large and powerful a wave as the Internet was, and is rapidly accreting into a legitimate alternative to Windows. It can and should be harnessed. To avoid dire consequences, Microsoft should favor an approach that tolerates and embraces the diversity of the open source approach, especially when network-based integration is involved.

I don't agree with the general implication of David's comments, but I do believe there is a grain of truth in what he writes. The issues aren't as black and white as he paints them, but his opinions can't be written off either. The writing is definitely on the wall; I just wonder if anyone is reading it.


     

Categories: Life in the B0rg Cube

I hung out with Lili Cheng on Tuesday and we talked about various aspects of social software and weblogging technologies. She showed me Wallop while I showed her RSS Bandit, and we were both suitably impressed by each other. She liked the fact that in RSS Bandit you can view weblogs as conversations, while I liked the fact that, unlike other social software I've seen, Wallop embraced blogging and content syndication.

We both felt there was more to social software than projects like Orkut, and she had an interesting insight: Orkut and tools like it turn the dreary process of creating a contact list into something fun. And that's about it. I thought to myself that if a tool like Outlook allowed me to create contact lists in an Orkut-style manner instead of the tedious process that exists today, that would really be a killer feature. She also showed me a number of other research projects created by the Microsoft Research Social Computing Group. The one I liked best was MSR Connections, which can infer the relationships between people based on public information exposed via Active Directory. The visualizations produced were very interesting and quite accurate as well. She also showed me a project called Visual Summaries which implemented an idea similar to Don Park's Friendship Circle using automatically inferred information.

One of the things Lili pointed out to me about aggregators and blogging is that people like to share links and other information with friends. When she asked how RSS Bandit enables this, I mentioned the ability to export feed lists to OPML and email people blog posts, as well as the ability to invoke w.bloggar from RSS Bandit. This did get me thinking that we should do more in this space. One piece of low-hanging fruit is that users should be able to export a category in their feed list to OPML instead of being limited to exporting the whole feed list, since they may only want to share a subset of their subscriptions (a rough sketch of this follows the excerpt below). I have this problem because my boss has asked to try out my feed list but I've hesitated to send it to him since I know I subscribe to a bunch of stuff he won't find relevant. I also find some of the ideas in Joshua Allen's post about FOAF + De.licio.us to be very interesting if you substitute browser integration with RSS Bandit integration. He wrote

    To clarify, by better browser integration I meant mostly integration with the de.licio.us functionality.  For example, here are some of the things I would like to see:

• Access to my shared bookmarks from directly in the favorites menu of IE; and adding to favorites from IE automatically adds to de.licio.us (I’m aware of the bookmarklet)

• When browsing a page, any hyperlinks in the page which are also entries in your friends’ favorites lists would be emphasized with stronger or larger typeface

• When browsing a page, some subtle UI cue would be given for pages that are in your friends’ favorites lists

• When visiting a page that is in your favorites list, you could check a toolbar button to make the page visible to friends only, everyone, or nobody

• When visiting a page which appears in other people’s favorites, provide a way to get at recommendations “your friends who liked this page also liked these other pages”, or “your friends recommend these pages instead of the one you are visiting”

I honestly think the idea becomes much more compelling when you share all page visits, and not just favorites.  You can cluster, find people who have similar tastes or even people who have opposite tastes.  And the “trace paths” could be much more useful.
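Coming back to the category-export idea: below is a minimal sketch of what writing a single category out as OPML could look like. The Feed class and the way feeds are handed in are hypothetical placeholders for illustration, not RSS Bandit's actual object model.

using System.Text;
using System.Xml;

// Hypothetical feed type for illustration only.
public class Feed {
    public string Title;
    public string XmlUrl;
    public Feed(string title, string xmlUrl) { Title = title; XmlUrl = xmlUrl; }
}

public class CategoryExporter {
    // Writes only the feeds belonging to one category as an OPML document.
    public static void ExportCategory(string category, Feed[] feeds, string path) {
        XmlTextWriter writer = new XmlTextWriter(path, Encoding.UTF8);
        writer.Formatting = Formatting.Indented;
        writer.WriteStartElement("opml");
        writer.WriteStartElement("head");
        writer.WriteElementString("title", category);
        writer.WriteEndElement(); // </head>
        writer.WriteStartElement("body");
        foreach (Feed feed in feeds) {
            writer.WriteStartElement("outline");
            writer.WriteAttributeString("title", feed.Title);
            writer.WriteAttributeString("type", "rss");
            writer.WriteAttributeString("xmlUrl", feed.XmlUrl);
            writer.WriteEndElement(); // </outline>
        }
        writer.WriteEndElement(); // </body>
        writer.WriteEndElement(); // </opml>
        writer.Close();
    }
}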

    Ahhh, so many ideas yet so little free time. :)


     

    Categories: RSS Bandit | Technology

    March 11, 2004
    @ 04:54 PM

Dave Winer has a proposal for merging RSS and Atom. I'm stunned. It takes a bad idea (coming up with a redundant XML syndication format that is incompatible with existing ones) and merges it with a worse one (making all the people who dislike Dave Winer work with him).

After adding Atom support to RSS Bandit, a thought which had been forming for a while crystallized in my head: Atom really is just another flavor of RSS with different tag names. It looks like I'm not the only aggregator author to come to this conclusion; Luke Hutteman came to the same one when describing SharpReader's implementation of Atom. What this means in practice is that once you've written some code that handles one flavor of RSS, be it RSS 0.91, RSS 1.0, or RSS 2.0, adding support for other flavors isn't that hard since they all carry basically the same information, just hidden behind different tag names (pubDate vs. dc:date, admin:errorsReportsTo vs. webMaster, author vs. dc:creator, etc.). To the average user of any popular aggregator there isn't any noticeable difference when subscribed to an RSS 1.0 feed vs. an RSS 2.0 feed, or an RSS 2.0 feed vs. an Atom feed.
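To make this concrete, here is a rough sketch (not RSS Bandit's actual code) of what normalizing across flavors can look like once an item or entry has been loaded into an XmlElement; the synonym lists are illustrative rather than exhaustive.

using System.Xml;

public class SyndicationNormalizer {
    // Equivalent element names across RSS 0.91/2.0 and Atom 0.3 (illustrative list).
    static readonly string[] dateNames   = {"pubDate", "dc:date", "atom:modified", "atom:issued"};
    static readonly string[] authorNames = {"author", "dc:creator", "atom:author"};

    // Returns the text of the first child element matching any of the given names.
    public static string GetValue(XmlElement item, string[] names) {
        XmlNamespaceManager nsmgr = new XmlNamespaceManager(item.OwnerDocument.NameTable);
        nsmgr.AddNamespace("dc", "http://purl.org/dc/elements/1.1/");
        nsmgr.AddNamespace("atom", "http://purl.org/atom/ns#");
        foreach (string name in names) {
            XmlNode node = item.SelectSingleNode(name, nsmgr);
            if (node != null) return node.InnerText;
        }
        return null;
    }
}

A real implementation also has to account for quirks like RSS 1.0 putting its elements in a namespace of its own, but the basic shape is just a table of synonyms.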

And just like with RSS, aggregators will special-case popular Atom feeds that exhibit weird behavior which isn't described in any spec or that interpret the specs in an unconventional manner. As Phil Ringnalda points out, Blogger Atom feeds claim that the summary contains XHTML when in truth it contains plain text. This doesn't sound like a big deal until you realize that in XHTML whitespace isn't significant (e.g. newlines are treated as spaces), which leads to poorly formatted output when the aggregator renders as XHTML content that is really plain text. Sam Ruby's Atom feed contains relative links in the <url> and <link> elements but doesn't use xml:base. There is code in most aggregators to deal with weird but popular RSS feeds and it seems Atom is already gearing up to be the same way. Like I said, just another flavor of RSS. :)

As an aside, I find it interesting that Sam Ruby's RSS 2.0 feed currently provides a much better user experience for readers than his Atom feed. The following information is in Sam's RSS feed but not his Atom feed:

• The email address of the webmaster of the site [whom to send error reports to]
• The number of comments per entry
• An email address for sending a response to an entry
• A web service endpoint for posting comments to an entry from an aggregator
• An identifier for the tool that generated the feed
• The trackback URL of each entry

What this means is that if you subscribe to Sam's RSS feed with an aggregator such as SharpReader or RSS Bandit you'll get a better user experience than if you used his Atom feed. Of course, Sam could easily put all the namespace extensions from his RSS feed into his Atom feed as well, in which case the user experience of subscribing to either feed would be indistinguishable.

Arguing about XML syndication formats is meaningless because the current crop all pretty much do the same thing. On that note, I'd like to point out that websites which provide multiple syndication formats are quite silly. Besides confusing people trying to subscribe to the feed, there isn't any reason to provide an XML syndication feed in more than one format. Particularly silly are the ones that provide both RSS and Atom feeds (like mine).

    Blogger has it right here by providing only one feed format per blog (RSS or Atom). Where they screwed up is by forcing users to make the choice instead of making the choice for them. That's on par with asking whether they want the blog served up using HTTP 1.0 or HTTP 1.1. I'm sure there are some people that care but for the most part it is a pointless technical question to shove in the face of your users.


     

    Categories: XML

    I just read a post by Bruce Eckel entitled Generics Aren't where he writes

My experience with "parameterized types" comes from C++, which was based on Ada's generics... In those languages, when you use a type parameter, that parameter takes on a latent type: one that is implied by how it is used, but never explicitly specified.

    In C++ you can do the equivalent:

    class Dog {
    public:
      void talk() { }
      void reproduce() { }
    };
    
    class Robot {
    public:
      void talk() { }
      void oilChange() { }
    };
    
    template<class T> void speak(T speaker) {
      speaker.talk();
    }
    
    int main() {
      Dog d;
      Robot r;
      speak(d);
      speak(r);
    }
    

Again, speak() doesn't care about the type of its argument. But it still makes sure – at compile time – that it can actually send those messages. But in Java (and apparently C#), you can't seem to say "any type." The following won't compile with JDK 1.5 (note you must invoke the compiler with the -source 1.5 flag to compile Java Generics):

    public class Communicate  {
      public <T> void speak(T speaker) {
        speaker.talk();
      }
    }
    

    However, this will:

    public class Communicate  {
      public <T> void speak(T speaker) {
        speaker.toString(); // Object methods work!
      }
    }
    

Java Generics use "erasure," which drops everything back to Object if you try to say "any type." So when I say <T>, it doesn't really mean "anything" like C++/Ada/Python etc. does, it means "Object." Apparently the "correct Java Generic" way of doing this is to define an interface with the speak method in it, and specify that interface as a constraint. This compiles:

    interface Speaks { void speak(); }

    public class Communicate  {
      public <T extends Speaks> void speak(T speaker) {
        speaker.speak();
      }
    }

    What this says is that "T must be a subclass or implementation of Speaks." So my reaction is "If I have to specify a subclass, why not just use the normal extension mechanism and avoid the extra clutter and confusion?"

You want to call it "generics," fine, implement something that looks like C++ or Ada, that actually produces a latent typing mechanism like they do. But don't implement something whose sole purpose is to solve the casting problem in containers, and then insist on calling it "Generics."

Although Bruce didn't confirm whether the above limitation exists in the C# implementation of generics, it does: C# generics have the same limitations as Java generics. The article An Introduction to C# Generics on MSDN describes the same limitations Bruce encountered with Java generics and shows how to work around them using constraints, just as Bruce discovered. If you read the problem statement in the MSDN article it seems the main goal of C# generics is to solve the casting problem in containers.
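For the curious, here is roughly what that workaround looks like in C# 2.0 terms, applied to Bruce's example; the ISpeaks interface and the method name are mine, not taken from the MSDN article.

// A C# 2.0 analogue of Bruce's constrained Java example.
interface ISpeaks { void Speak(); }

class Communicate {
    // Without the "where T : ISpeaks" constraint, the compiler only lets
    // you call System.Object members (ToString, GetHashCode, etc.) on T,
    // which is exactly the limitation Bruce ran into with Java.
    public void MakeSpeak<T>(T speaker) where T : ISpeaks {
        speaker.Speak();
    }
}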

What I find interesting about Bruce's post is the implication that to properly implement generics one must provide duck typing. I've always thought the behavior of templates in C++ was weird in that one could pass in a parameter without the language enforcing constraints on the behavior of the type. Yet it isn't really dynamic or latent typing because there is compile-time checking to see if the type supports those methods or operations.

A few years ago I wrote an article entitled C++ in 2005: Can It Become A Java Beater? in which I gave some opinions on an interview with Bjarne Stroustrup where he discussed various language features he'd like to see in C++. One of those features was constraints on template arguments; below is an excerpt from my article on this topic.

    Constraints for template arguments

    Bjarne: This can be simply, generally, and elegantly expressed in C++ as is.

Templates are a C++ language facility that enables generic programming via parametric polymorphism. The principal idea behind generic programming is that many functions and procedures can be abstracted away from the particular data structures on which they operate and thus can operate on any type.

In practice, the fact that templates can work on any type of object can lead to unforeseen and hard-to-detect errors in a program. It turns out that although most people like the fact that template functions can work on many types without the data having to be related via inheritance (unlike Java), there is a clamor for a way to specialize these functions so that they only accept or deny a certain range of types.

    The most common practice for constraining template arguments is to have a constraints() function that tries to assign an object of the template argument class to a specified base class's pointer. If the compilation fails then the template argument did not meet the requirements. 

The point I'm trying to get at is that both C++ users and its inventor felt that being able to constrain the operations you can perform on a parameterized type, as opposed to relying on duck typing, was a desirable feature.

The next thing I want to point out is that Bruce mentions that generic programming in C++ was based on Ada's generics, so I decided to spend some time reading up on them to see if they also support duck typing. I read Chapter 12 of the book Ada 95: The Craft of Object-Oriented Programming, where we learn

    In the case of a linked list package, we want a linked list of any type. Linked lists of arrays, records, integers or any other type should be equally possible. The way to do this is to specify the item type in the package declaration as private, like this:

        generic
            type Item_Type is private;
        package JE.Lists is
            ...
        end JE.Lists;
    

    The only operations that will be allowed in the package body are those appropriate to private types, namely assignment (:=) and testing for equality and inequality (= and /=). When the package is instantiated, any type that meets these requirements can be supplied as the actual parameter. This includes records, arrays, integers and so on; the only types excluded are limited types...

    As you can see, the way you declare your generic type parameters puts restrictions on what operations you can perform on the type inside the package as well as what types you can supply as parameters. Specifying the parameter as ‘range <>’ allows the package to use all the standard operations on integer types but restricts you to supplying an integer type when you instantiate the package. Specifying the parameter as ‘private’ gives you greater freedom when you instantiate the package but reduces the range of operations that you can use inside the package itself.

So it looks like Ada gave you two options, neither of which looks like what you can do in C++. You could either pass in any type, in which case the only operations allowed on it were assignment and equality testing, or you could pass in a constrained type. Thus it doesn't look like Ada generics had the weird mix of static and duck typing that C++ templates have.

I am as disappointed as Bruce that neither C# nor Java supports dynamic typing like languages such as Python or Smalltalk, but I don't think parametric polymorphism via generics has ever been used to solve this problem. As I have pointed out, neither Ada nor C++ actually gives him the functionality he wants, so I wouldn't be too hard on Java or C# if I were in his shoes.


     

    Categories: Technology

    Sam Ruby writes

     Ted Leung: If I'm looking for thought leadership from the community, in the Java community, I'm looking towards the non Sun bloggers -- these are the folks doing AOP, Groovy, SGen, Prevalence, WebWork, etc. This shows the rich ecosystem that has grown up around Java. If I look at the .NET community, I pretty much look for the MS bloggers.

    Let's not confuse cause and effect here.  There used to be plenty of .Net bloggers who didn't work for Microsoft. 

It seems Sam and Ted have different ideas from mine about what thought leadership is. When I think of thought leadership I think of ideas that add to the pool of common practices or change the way developers work and think. Examples of thought leadership are the ideas in the GoF's Design Patterns or the writings of Joel Spolsky.

I read a lot of blogs from Microsoft and non-Microsoft people about .NET development and I see more thought leadership from the non-Microsoft people. What I see from Microsoft people is what I'll term accidental thought leadership. Basically, if I'm the developer or PM who designed or implemented component X then it stands to reason that I'm better placed to talk about it than others. Similarly, if I'm one of the folks designing or implementing future technology Y then it stands to reason I'd be best placed to talk about Longhorn/Indigo/Avalon/WinFS/Whidbey/Yukon/etc. It is also more interesting to read about upcoming technology than about how best to use existing technology, which is why people tend to flock to the blogs of the folks working on future stuff and ignore the Microsoft bloggers talking about existing technologies until they need a workaround for some bug.

Personally, the only real thought leadership I've seen from the 200 or so Microsoft blogs I read has come from folks like Erik Meijer and Don Box. I see a lot of Microsoft people blogging about SOA but to me most of it is warmed-over ideas that folks like Pat Helland have been talking about for years. When I think of thought leadership in the .NET world I'm more likely to think of Sam Gentile or Clemens Vasters than I am to think of some blue-badge-carrying employee at the Redmond campus.

What I do find interesting is that a Sun employee, Simon Phipps, is actually trying to use this to score points by claiming that the lack of insightful posts from Sun bloggers is due to a "wide community as you'd expect from the openness of the JCP". When Microsoft folks weren't blogging and directly interacting with our developer community, people railed because they felt the company was aloof and distant from its developers. Now we try to participate more and it is taken as a sign that “it's a closed-source dictatorship - no amount of pushing up-hill will fix that”. I guess you can't win them all. :)


     

    Categories: Ramblings

I recently started using RealPlayer again after a few years of not using it and it does seem a lot less user-hostile. It seems that this is the result of some internal turmoil at Real Networks. Below are links to some interesting reading about what went on behind the scenes at Real and how it ended up affecting their product.

    This seems to have been making the rounds in a couple of popular blogs.


     

MSDN has a number of Developer Centers for key developer topics such as XML Web Services and C#. There are also node home pages for less interesting [according to MSDN] topics such as Windows Scripting Host or SQLXML. Besides the fact that developer centers are highlighted more prominently on MSDN as key topics, the main differences between the developer centers and the node home pages are

    1. Developer Centers have a snazzier look and feel than node home pages.

    2. Developer Centers have an RSS feed.

    3. Developer Centers can pull in blog content (e.g. Duncan Mackenzie's blog on the C# Developer Center)

I've been working for a year or more on getting a Developer Center on MSDN that provides a single place for developers to find out about XML technologies and products at Microsoft. The Developer Center is now about two weeks from being launched. There are only two questions left to answer.

    The first question is what the tagline for the Developer Center should be. Examples of existing taglines are

    • Microsoft Visual C# Developer Center: An innovative language and tool for building .NET-connected solutions

    • Data Access and Storage Developer Center: Harnessing the power of data

    • Web Services Developer Center: Connecting systems and sharing information

    • .NET Architecture Developer Center: Blueprint for Success

I need something similar for the XML Developer Center but my mind's been drawing a blank. My two top choices are currently “The language of information interchange” or “Bridging gaps across platforms with the ubiquitous data format”. In my frivolous moments, I've also considered “Unicode + Angle Brackets = Interoperability”. Any comments on which of the three taglines sounds best, or suggestions for new ones, would be much appreciated.

The second issue is how much we should talk about unreleased technologies. I personally dislike talking about technologies before they ship because history has taught me that projects slip or get cut when you least expect them to. For example, when I was first hired full-time at Microsoft about two years ago we were working on XQuery, which was supposed to be in version 2.0 of the .NET Framework. At the time the assumption was that both XQuery and the next version of the .NET Framework would be done by the end of 2003. It is now 2004 and it is optimistic to expect that either will be done by the end of this year. If we had gone off our initial assumptions and started writing on MSDN about XQuery and the classes we were designing for the .NET Framework (e.g. XQueryProcessor) in 2002 and 2003, then we'd currently have a number of outdated and incorrect articles on MSDN. On the other hand, this does mean that while you won't find articles on XQuery on MSDN, you do find articles like An Introduction to XQuery, XML for Data: An early look at XQuery, X is for XQuery, and XQuery Tricks and Traps on the developer websites of our competitors like IBM and Oracle. All four of those articles contain information that is either outdated or will be outdated when the W3C is done with the XQuery recommendation. However, they do provide developers with a glimpse and an understanding of the fundamentals of XQuery.

The question I have is whether it would be valuable for our developers if we wrote articles about technologies that haven't shipped and whose content may differ from what we actually ship. Other developer centers on MSDN, such as the Longhorn Developer Center and the Web Services Developer Center, have decided to go this route and regularly feature content that is a year or more away from shipping. I personally think this is unwise, but I am interested in what the Microsoft developer community thinks of providing content about upcoming releases versus focusing on existing releases.


     

    Categories: XML

    ...Pimps at Sea.

    Thanks to Thaddeus Frogley for the link.


     

RSS Bandit provides users with the ability to perform searches from its toolbar and view the results in the same UI used for reading blogs, as long as the results can be viewed as an RSS feed. This integration is provided for Feedster searches and I was considering adding other blog search engines to the defaults. Torsten had given me an RSS link to some search results on PubSub and they seemed to be better than Feedster's in some cases. So this evening I decided to try out PubSub's weblog search and see if there was a way to provide similar integration with RSS Bandit. Unfortunately, it turns out that you have to provide them with an email address before you can perform searches.

    I guess they aren't in a rush to have any users.


     

    Categories: Technology

I always thought that dis records created by dueling MCs over conjured-up beefs designed to sell records were a rap phenomenon (e.g. the Nas vs. Jay-Z beef), so I was surprised to find out that the same thing now happens with R&B songs. From an article on MTV.com entitled That Eamon Dis Track? Ho-Wopper Now Claims He Was Behind It we read

    The beef between R&B singer Eamon and his so-called ex-girlfriend, Frankee, continues to heat up on radio, as stations across the country follow up his hit "F--- It (I Don't Want You Back)" with her dis track, "FU Right Back." Frankee's song, which uses the exact same music as "F--- It (I Don't Want You Back)," calls Eamon out as being lousy in bed, having pubic lice and generally sucking as a boyfriend. And Eamon loves every word. In fact, he claims he approved the song before the public even heard it.

    Not only does he say Frankee was never his girl, he said she was handpicked by his staff to record a response to "F--- It  (I Don't Want You Back)" in order to create the illusion of a feud (see "Eamon's Alleged Old Flame Burns Him With Dis Track"). "There was a big tryout, and I actually know some of the girls who wanted to do the song, but I never met Frankee in my life," Eamon said. "I think it's corny to death, but it's funny."

    I've listened to both songs, Eamon's is the better record although Frankee's version is kind of peppy.

Speaking of fakery, there's the article I read this afternoon on Yahoo! about how the finalists on the reality show 'Last Comic Standing' were pre-picked, which has caused some of the judges, such as Drew Carey and Brett Butler, to fire barbs at NBC. Brett Butler claimed "As panel judges, we can say that (a) we were both surprised and disappointed at the results and (b) we had NOTHING to do with them". It seems there was some fine print which indicated that the judges were just there for window dressing and the finalists were pre-picked. I guess this just goes to show that you should always read the fine print.


     

    March 7, 2004
    @ 06:26 PM

There's currently a semi-interesting discussion about software patents on the XML-DEV mailing list, sparked by a post from Dennis Sosnoski entitled W3C Suckered by Microsoft in which he rants angrily about Microsoft being evil for not instantly paying $521 million to Eolas and thereby starting a patent reform revolution. There are some interesting viewpoints voiced in the ensuing thread, including Tim Bray's suggestion that Microsoft pay Tim Berners-Lee $5 million for arguing against the Eolas patent.

The thread made me think about my position on filing software patents, given the vocal opposition to them in some online fora. I've recently gotten involved in patent discussions at work and I jotted down my thought processes as I was deciding whether filing for patents was a good idea or not. Below are the pros and cons of filing for patents from my perspective in the trenches (so to speak).

    PRO

1. Having a patent or two on your resume is a nice ego and career boost.
2. As a Microsoft shareholder it is in my best interest to file patents, which allow the company to defend itself from patent suits and reap revenue from patent licensing.
3. The modest financial incentive we get for filing patents is enough to buy a few rounds of drinks for friends.

    CON

1. Filing patents involves meetings with lawyers.
2. Patents are very political because you don't want to snub anyone who worked on the idea but also don't want to cheapen it by claiming that people who were peripherally involved were co-inventors. For example, is a tester who points out a fundamental design flaw in an idea now one of the co-inventors?
3. There's a very slight chance that Slashdot runs an article about a particular patent claiming it is another evil plot by Microsoft. The chance is slight because the ratio of Slashdot articles about patents to patents actually filed is quite small.

That was my thought process as I sat in on some patent meetings. Basically, there is a lot of incentive to file patents for software innovations if you work for a company that can afford to do so. However, the degree of innovation is in the eye of the beholder [and subject to prior art searches].

I've seen a number of calls for patent reform for software, but none with feasible or concrete proposals behind them. Most proponents of patent reform I've seen usually argue something akin to “Some patent that doesn't seem innovative to me got granted, so the system needs to be changed”. How the system should be changed and whether the new system would not have problems of its own are left as exercises for the reader.

There have been a number of provocative writings about patent reform, the most prominent in my memory being the FSF's Patent Reform Is Not Enough and An Open Letter From Jeff Bezos On The Subject Of Patents. I suspect that the changes suggested by Jeff Bezos in his open letter do a good job of straddling the line between those who want to do away with software and business method patents and those who want to protect their investment.

Disclaimer: The above statements are my personal opinions and do not represent my employer's views in any way.


     

As pointed out in a recent Slashdot article, some researchers at HP Labs have come up with what they term a Blog Epidemic Analyzer, which aims to “track how information propagates through networks. Specifically...how web based memes get passed on from one user to another in blog networks“. It sounds like an interesting idea; it would be cool to know who was the first person to send out links to All Your Base Are Belong To Us or I Kiss You. I can also think of more serious uses for being able to track the propagation of particular links across the World Wide Web.

    Unfortunately, it seems the researchers behind this are either being myopic or have to justify the cost of their research to their corporate masters by trying to compare what they've done to Google. From the  Blog Epidemic Analyzer FAQ

    2. What's the point?

    There has been a lot of discussion over the fairness of blogs, powerlaws, and A-list bloggers (You can look at the discussion on Many2Many for some of the highlights). The reality is that some blogs get all the attention. This means that with ranking algorithms like Technorati's and Google's Page Rank highly linked blogs end up at the top of search pages. Sometimes (maybe frequently) this is what you want. However, it is also possible that you don't want the most connected blog. Rather you would like to find the blog that discovers new information first.

The above answer makes it sound like these guys have no idea what they are talking about. Google and Technorati do vastly different things. The fact that Google's search engine lists highly linked blogs at the top of search results to which they are only tangentially related is a bug. For example, the fact that a random post by Russell Beattie about a company makes him the fifth result that comes up for a search for that company in Google isn't a feature, it's a bug. The goal of Google (and all search engines) is to provide the most relevant results for a particular search term. In the past, tying relevance to popularity was a good idea, but with the advent of weblogs and the noise they've added to the World Wide Web it is becoming less and less of one. Technorati, on the other hand, has one express purpose: measuring weblog popularity based on incoming links.

    The HP iRank algorithm would be a nice companion piece to things like Technorati and BlogPulse but comparing it to Google seems like a stretch.


     

    Categories: Technology

Torsten just finished creating a German version of RSS Bandit. This should make it into the next release, which should please our various German users. As Torsten mentioned, we are looking for volunteers to do other languages. We will need at least two volunteers per language so that there can be some degree of error checking and the like.

To get started with translating RSS Bandit, a number of files need translating, including RSSBanditText.resx, the resource files for the main GUI, and the resource files for the various dialog boxes. An example of a translated version of one of these files is Torsten's German translation, RSSBanditText.de.resx.


     

    Categories: RSS Bandit

Folks at work have been cracking up about the Rick James skit on a recent episode of Chappelle's Show. Linked below are two video clips from the show. Pure comic genius.

    Chappelle's Show: MORE of Charlie Murphy's True Hollywood Stories

    Rick James asks the philosophical question “What did the five fingers say to the face?” in this all-new Charlie Murphy-inspired clip.

    Eddie Murphy's brother Charlie tells the tale of Rick James:
    Habitual line-stepper.

You will need to download RealPlayer to view the video clips. Speaking of which, RealPlayer seems much improved since I last used it a couple of years ago.


     

    March 7, 2004
    @ 12:07 AM

For the past few years I've had my taxes done by H&R Block and each year their prices get steeper. This year the price tag would have been a bit shy of $200. Considering that all the tax preparer did was enter some values into some fields after being prompted by the tax preparation software, that seemed a bit steep. A couple of folks have recommended TurboTax so I've decided to give it a try this year. At a cost of about $30, it seems this year I won't have the ripped-off feeling I usually do after filing my taxes.

     


     

    Categories: Ramblings

    March 5, 2004
    @ 02:18 AM

I became addicted to reality TV after I watched a 2-hour block on Fox that ran between 10 PM and midnight consisting of Elimidate, 5th Wheel, Blind Date and Change of Heart. My experience mirrors that of Justin Berton, who wrote in his article Embracing the Idiot Box

    So far, in my month long experiment with the set, all the shows I expected to be good are bad, and all the bad ones are really good. In this peculiar calculus, nothing is worse than the reality dating show genre. And lowest of the low is Elimidate, which, of course, makes it the best thing on TV.

    There's no irony here. A dude goes on a date with four women. They drink lots of booze. As the date goes on, the dude eliminates one girl per round.

I'm now a reality dating show junkie, although the stuff on prime time (e.g. The Bachelor and Joe Millionaire) is a bit too sophisticated for my tastes.


     

Clay Bennett says it better than I ever could with his Vote Nader billboard cartoon. While you are at it you should also check out who really should have won the Oscar for Best Actor.


     

    Two recurring themes have shown up in my development of RSS Bandit and usage of news aggregators in general

1. There are feeds I'm subscribed to whose content I never end up reading because there is too much of it (e.g. Weblogs @ ASP.NET), thus missing the good stuff.

    2. There is no easy way to find content that I'd find interesting.

    I've noticed more and more people complaining about the information overload that comes with being subscribed to too many feeds and wanting some way to sift through the information. I spoke to someone at work yesterday who said he'd stopped using his aggregator to subscribe to individual feeds but instead just subscribed to RSS feeds of search results on Feedster. Similarly some RSS Bandit users subscribe to a lot of feeds and just use Search Folders to sift through them. Both approaches are slight variations of the same thing. The first person would rather read all information in blogs about a certain topic or keyword while the other would like to read all information about a certain topic or keyword from a select list of feeds.

The goal of RSS Bandit is to support both approaches. For the former we provide functionality for viewing Feedster [and other search engines that return RSS feeds] search results in the same manner one would view an RSS feed. The next version will also let you subscribe to such search results directly in two clicks (type a search term in the address bar, click the search button, results come back as an RSS feed, click to subscribe to the search results). For the latter scenario, where users subscribe to lots of feeds but only read the ones that match the searches in particular Search Folders, I am considering improving the search capabilities by supporting query-like functionality. Currently you can create a Search Folder that shows all items matching a particular keyword or key phrase. However, sometimes you want to search over multiple terms (e.g. “Microsoft AND Longhorn”) or fine-tune searches by ignoring posts that coincidentally match your keyword but are not of interest (e.g. “Java -coffee -indonesia”); a rough sketch of what such matching could look like follows.
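As a strawman, the matching logic for such query-like Search Folders could be as simple as the sketch below; the syntax (implicit AND, a leading - for exclusion) is illustrative, not a committed design.

using System;

public class SearchFolderQuery {
    // Returns true if the text contains every required term and none of
    // the excluded ones. Terms are separated by spaces and a leading '-'
    // marks a term as excluded; a literal "AND" is treated as implicit.
    public static bool Matches(string query, string text) {
        string lowered = text.ToLower();
        foreach (string term in query.ToLower().Split(' ')) {
            if (term.Length == 0 || term == "and") continue;
            if (term.StartsWith("-") && term.Length > 1) {
                if (lowered.IndexOf(term.Substring(1)) != -1) return false; // excluded term found
            } else if (lowered.IndexOf(term) == -1) {
                return false; // required term missing
            }
        }
        return true;
    }
}

With something like this, Matches("java -coffee -indonesia", item.Title + item.Content) would keep posts that mention Java but never mention coffee or Indonesia.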

As for finding new interesting content, RSS Bandit already provides a way to search for feeds by keyword on Syndic8 but there is a lot more that can be done. Other ideas I have for helping users manage the deluge of feeds on the Web and find new interesting content include

    1. Only show posts that have been linked to by other feeds you are subscribed to. This would work for news sites like Slashdot or high traffic feeds like Blogs @ MSDN.

    2. Add a way to integrate with Technorati's Interesting Blogs and Interesting Newcomers lists whenever they are implemented.  

3. Only show posts that have a certain threshold of incoming links (e.g. 5 or more) as measured by Technorati. This may be infeasible due to the load it would place on Technorati.

I'm supposed to be hanging out with Lili Cheng in the next couple of days; I wonder what she'll think of some of these ideas, and perhaps she can set me on the right path.


     

    Categories: RSS Bandit

    March 3, 2004
    @ 06:48 AM

I just checked in rudimentary support for Mark Nottingham's Atom Syndication Format 0.3 (PRE-DRAFT) and Mark Pilgrim's Atom Feed Autodiscovery (PRE-DRAFT) into the RSS Bandit CVS tree. I expect there to be bugs and incomplete feature support, but it is good enough to read the various Atom feeds being produced by Blogger, since I can now subscribe to the Atom feeds of Evan Williams and Steve Saxon.
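For those wondering what autodiscovery involves: the aggregator fetches a page's HTML and looks for a <link> element advertising the feed. The regex-based sketch below conveys the idea, though real-world markup is messy enough that actual code needs to be far more forgiving than this.

using System.Text.RegularExpressions;

public class AtomAutoDiscovery {
    // Finds the href of the first <link ... type="application/atom+xml" ...>
    // element in a page's HTML, or returns null if none is present.
    public static string FindFeedUrl(string html) {
        Match link = Regex.Match(html,
            @"<link[^>]+type=[""']application/atom\+xml[""'][^>]*>",
            RegexOptions.IgnoreCase);
        if (!link.Success) return null;
        Match href = Regex.Match(link.Value,
            @"href=[""']([^""']+)[""']", RegexOptions.IgnoreCase);
        return href.Success ? href.Groups[1].Value : null;
    }
}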

The screenshot below shows Mark Pilgrim's Atom feed.


     

    Categories: RSS Bandit

I just stumbled on a guide on Kuro5hin, How to Stop Receiving Credit Card Offers, which I definitely need given the aggravating amount of junk mail I get from credit card companies. It begins

Tired of annoying "pre-approved" credit card offers? I sure am. According to the Fair Credit Reporting Act (FCRA) of 1970 as amended in 1996, the four major credit bureaus have the right to sell your information to companies that want to offer you a credit card. Fortunately, the amendment also stipulated that credit bureaus must provide a way for consumers to have their names excluded from pre-approval lists. If you're a United States citizen sick of getting pre-screened credit card offers, this article will show you how to avoid receiving them.

    Lots of useful information here including a number you can call to opt out of credit card junk mail. According to the article, 1-888-5-OPTOUT is an automated service run jointly by the four main credit bureaus. With one phone call you can opt out of pre-screened mailings from all four bureaus.  Sweet.

A quick Googling confirms the information in the Kuro5hin article, which seems to be a summary of the following PDF on the FTC website: Where to Go To "Just Say No".


     

    This recently appeared on the rssbandit-users mailing list

    From: Dare Obasanjo <dareo@mi...>
    Synchronizing RSS Bandit Across Multiple Machines  
    2004-03-02 00:29
     Hello all, 
       One of the big features on TODO list is to implement a way to
     synchronize the state of RSS Bandit across multiple machines so that if
     you use it on one machine and start it up on another it remembers what
     you've read, what you're subscribed to, etc. I've gone as far as writing
     a spec[0] for a format that does this. The main problems I've had now is
     that there needs to be a server where the individual RSS Bandit
     instances can fetch feeds from in the same way your mail reader (e.g.
     Outlook) downloads mail from your mail server (e.g Exchange). Since it
     is unlikely that users will be able to setup servers for this purpose
     here are a couple of ideas I've thought of supporting. I'd love your
     feedback and added suggestions 
     
     1.) FTP support: This is straightforward, an RSS Bandit instance can
     upload and download a SIAM file containing synchronization information
     via FTP. 
     
     2.) Mail support: This is fairly crafty and some would call it a hack.
     RSS Bandit mails the subscription file either as a zipped attachment or
     as inline text content to the user's email address and can download it
     using POP3 if the user's mail server supports POP3. The appropriate mail
     to use for synchronization is identified by an extension header in the
     mail or some similar identifier. There are a large number of free POP3
 services[1] and users can create a throwaway account specifically for RSS
     Bandit synchronization. 
     
     [0] http://www.25hoursaday.com/draft-obasanjo-siam-01.html
     [1] http://www.emailaddresses.com/email_pop.htm
     
     --
     PITHY WORDS OF WISDOM 
     Please all and you please none. 

If you are an RSS Bandit user I would appreciate your thoughts on the issue. Most of the responses I've gotten so far are that I should just implement synchronization support via FTP, which I'm not sure is all that accessible to the average RSS Bandit user. A sketch of what the FTP approach would look like on the client side follows.
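To give a feel for how simple the FTP option would be, here is a minimal sketch assuming the FtpWebRequest class slated for version 2.0 of the .NET Framework; the server address, credentials, and remote file name are all placeholders.

using System.IO;
using System.Net;

public class SiamSynchronizer {
    // Uploads the local SIAM state file to the user's FTP server so that
    // another RSS Bandit instance can download it later and sync up.
    public static void Upload(string server, string user, string password, string localFile) {
        FtpWebRequest request = (FtpWebRequest) WebRequest.Create(
            "ftp://" + server + "/rssbandit.siam.xml");
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = new NetworkCredential(user, password);
        byte[] data = File.ReadAllBytes(localFile);
        using (Stream requestStream = request.GetRequestStream()) {
            requestStream.Write(data, 0, data.Length);
        }
        request.GetResponse().Close(); // complete the transfer
    }
}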


     

    Categories: RSS Bandit