January 5, 2005
@ 03:54 PM

I finished my first article since switching jobs this weekend. It's tentatively titled Integrating XML into Popular Programming Languages: An Overview of Cω and should show up on both XML.com and my Extreme XML column on MSDN at the end of the month. I had initially planned to do the overview of Cω (C-Omega) for MSDN and a combined article about ECMAScript for XML (E4X) & Cω for XML.com, but it turned out that an article on Cω alone was already fairly long. My plan is to follow up with an E4X piece in a couple of months. For the geeks in the audience who are a little curious as to exactly what the heck Cω is, here's an introduction to one of the sections of the article to whet your appetite.

The Cω Type System

The goal of the Cω type system is to bridge the gap between relational, object and XML data access by creating a type system that combines all three data models. Instead of adding built-in XML or relational types to the C# language, the approach favored by the Cω type system has been to make certain general changes to the C# type system that make it more conducive to programming against both structured relational data and semi-structured XML data.

A number of the changes to C# made in Cω make it more conducive to programming against strongly typed XML, specifically XML constrained using W3C XML Schema. Several concepts from XML and XML Schema have analogous features in Cω: document order, the distinction between elements and attributes, having multiple fields with the same name but different values, and content models that specify a choice of types for a given field all exist in Cω. A number of these concepts are handled by traditional Object<->XML mapping technologies, but often awkwardly. Cω aims to make programming against strongly typed XML as natural as programming against arrays or strings in traditional programming languages.
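To give a flavor of what this looks like, here is a rough sketch written from memory of Cω's research syntax; it is illustrative only, not a compilable excerpt from the article, and the member names are my own invention:

```cs
// Rough, hypothetical sketch of Cω-style declarations -- illustrative only.
public class contact {
    struct {
        string name;                        // ordered members, like element content
        attribute int id;                   // the element vs. attribute distinction
        choice { string tel; string fax; }  // a field whose type is a choice of types
        string* email;                      // zero or more fields with the same name
    } details;
}
```

The point of the sketch is simply that content-model ideas like choice and repetition become first-class parts of the type itself, rather than something bolted on by a separate mapping layer.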

I got a lot of good feedback on the article from a couple of excellent reviewers, including the father of X#/Xen himself, Erik Meijer. For those not in the know, X#/Xen was merged with Polyphonic C# to create Cω. Almost all of my article focuses on the aspects of Cω inherited from X#/Xen.


 

Categories: XML

January 2, 2005
@ 04:20 AM

In response to Krzysztof Kowalczyk's post entitled Google - we take it all, give nothing back and some of the responses to that post, Adam Bosworth has fired off a missive entitled We all stand on the shoulders of giants. He writes

Recently I pointed out that databases aren't evolving ideally for the needs of modern services and customers who which must support change and massive scale without downtime. This post was savaged by an odd alliance; the shrill invective of the Microsoft apparachiks perhaps sensing an opportunity to take the focus away from Ballmer's remorseless attack on all that is not Microsoft (but most especially on Open Source) and certain Open Source denizens themselves who see fit to attack Google for not "giving back" enough apparently unaware that all software benefits in almost infinite measure from that which comes before. As usual the extremes find common ground in a position that ignores common sense, reason, and civility.
...
It would seem that these cacophonous critics, yammering about giving back and sweepingly ignoring the 100's of billions of times people use and appreciate what Google gives them for free every day from Search to Scholar to Blogger to gMail to Picasa, do not understand this basic fact.

It seems Adam Bosworth's position is that Google gives back to the Open Source community by not charging for accessing Google or Blogger. This seems to imply that advertising supported services like MSN Search, Hotmail and MSN Spaces are some sort of charity as opposed to the businesses they actually are.

Mr. Bosworth's statements make a number of the observations in Krzysztof Kowalczyk's recent post Google - comments on comments all the more interesting. Krzysztof wrote

More importantly, Chris DiBona, formerly a Slashdot editor and contributor to a book on open source, now a Google employee, calls me ignorant and lazy for not knowing about Google’s open source contributions.

Maybe I am. However:

  • I do follow my share of open source projects (a bad addiction, really) and I’ve never seen a Google employee participating in them. Which, of course, proves nothing but one data point is better than zero.
  • I did ask on my weblog for pointers to Google’s contributions. Despite temporary popularity of my blog, no-one sent me any.
  • I’ve read all the weblog posts commenting on my piece and no-one else in blogosphere was any less ignorant or lazy.

All that leads me to believe that Google’s contribution, if not a mythical creature, is not that easy to find.

Chris promises a list of Google’s contributions in “coming months". I would rather have it now. The good thing about promising to do something months from today is that you don’t have to do it. You can just rely on the fact that everybody will forget that you’ve made such promise.

I think no additional commentary is necessary. Krzysztof's post and Adam's response speak for themselves.


 

Mike Vernal and I are supposed to be writing a Bill Gates Think Week paper about Social Software. However, given how busy both our schedules are, this may turn out to be easier said than done. For this reason I've decided that I'll continue blogging my thoughts around this class of software, which is what led me to switch job roles a few months ago.

Today's entry is inspired by a blog post by Stowe Boyd entitled Mark Ranford on Open Standards for Social Tools. Stowe writes

I would like to see -- as just one example -- a means to manage my personal social tools digital identity independently of the various services through which I apply and augment it. None of the social tools that I use today -- whether communication tools, coordinative tools, or community tools -- support anything like what should be in place. My eBay or Amazon reputation is not fungible; my slash dot karma cannot be tapped when I join the Always-On Network; and the degree of connectedness I have achieved through an explicit social networking solution like Spoke, LinkedIn, or ZeroDegrees or through a more implicit social media model as supported by blogging cannot interoperate in the other context in any productive way.

We are forced to live in a thousand separate walled gardens; a thousand, disconnected worlds, where each has to be managed and maintained as if the other don't exist at all.

As a result, I have gotten to the point where I am going to retreat from those worlds that are the least open, the least integrated to others, and the most self-centered. The costs of participating with dozens of tiny islands of socializing are just too high, and I have decided to extricate myself from them all.

This is the biggest problem with the world of Social Software today. I wrote about this in my previous post on the topic entitled Social Software is the Platform of the Future. In that post I wrote

So where do we begin? It seems prudent to provide my definition of social software so we are all on the same page. Social software is any software that enables people to interact with one another. To me there are five broad classes of social software. There is software that enables 

1. Communication (IM, Email, SMS, etc)
2. Experience Sharing (Blogs, Photo albums, shared link libraries such as del.icio.us)
3. Discovery of Old and New Contacts (Classmates.com, online personals such as Match.com, social networking sites such as Friendster, etc)
4. Relationship Management (Orkut, Friendster, etc)
5. Collaborative or Competitive Gaming (MMORPGs, online versions of traditional games such as Chess & Checkers, team-based or free-for-all First Person Shooters, etc)

Interacting with the aforementioned forms of software is the bulk of the computing experience for a large number of computer users especially the younger generation (teens and people in their early twenties). The major opportunity in this space is that no one has yet created a cohesive experience that ties together the five major classes of social software. Instead the space is currently fragmented. Google definitely realizes this opportunity and is aggressively pursuing entering these areas as is evidenced by their foray into GMail, Blogger, Orkut, Picasa, and most recently Google Groups 2. However Google has so far shown an inability to tie these together into a cohesive and thus "sticky" experience. On the other hand Yahoo! has been better at creating a more integrated experience and thus a better online one-stop-shop (aka portal) but has been cautious in venturing into the newer avenues in social software such as blogs or social networking. And then there's MSN and AOL.

Since posting that entry I've changed jobs and now work at MSN delivering social software applications such as MSN Messenger, Hotmail and MSN Spaces. My new job role has given me a more enlightened perspective on some of these problems. The issues Stowe has with the existing Social Software landscape will not be easily solved with industry standards, if at all. The reasons for this are both social and technical.

The social problems are straightforward: there is little incentive for competing social software applications to make it easy for people to migrate away from their service. There is no business incentive for Friendster to make it easy to export your social network to Orkut, or for eBay to make it easy to export your sales history and reputation to Yahoo! Auctions. Besides the obvious consequence of lock-in, another more subtle consequence is that the first mover advantage is very significant in the world of Social Software. New entrants into various social software markets need to either be extremely innovative (e.g. GMail) or bundle their offerings with other more popular services (e.g. Yahoo! Auctions) to gain any sort of popularity. Simply being cheaper or better [for some definition of better] does not cut it.

The value of a user's social network and social information is the currency of a lot of online services. This is one of the reasons efforts like Microsoft's Hailstorm were shunned by vendors. The biggest value users get out of services like eBay and Amazon is that they remember information about the user, such as how many successful sales they've made or their favorite kinds of music. Users return to such services because of the value of the social network around the service (Amazon reviews, eBay sales feedback, etc) and the accumulated information about the user that they hold. Hailstorm aimed to place a middleman between the user and the vendors, with Microsoft as the broker. Even though this might have turned out to be better for users, it was definitely bad for the various online vendors and they rejected the idea. The fact that Microsoft was untrusted within the software industry did not help. A similar course of events is playing itself out with Microsoft's identity service, Passport. The current problems with sharing identities across multiple services have been decried by many; even Microsoft critics feel that Passport may have been better than the various walled gardens we have today.

The technical problems are even more interesting. The fact of the matter is that we still don't know how to value social currency in any sort of objective way. Going back to Stowe's examples, exactly what should having high karma on Slashdot translate to besides the fact that you are a regular user of the site? Even the site administrators will tell you that your Slashdot karma is a meaningless value. How do you translate the fact that the various feeds for my weblog have 500 subscribers in Bloglines into some sort of reputation value when I review articles on Amazon? The fact is that there is no objective value for reputation; it is all context and situation specific. Even for similar applications, differences in how certain data is treated can make interoperability difficult.

Given the aforementioned problems I suspect that for the immediate future walled gardens will be the status quo in the world of social software.

As for MSN, we will continue to make the experience of sharing, connecting and interacting with friends and family as cohesive as possible across various MSN properties. One of the recent improvements we made in getting there was outlined by Mike Pacheloc in his post Your contacts and buddy lists are the same! where he wrote

Over the last couple of years we took the challenge of integrating the MSN Messenger buddy lists and your MSN Address Book Contacts into one centralized place in MSN.  Although they were called Contacts in both MSN Messenger, Hotmail, and other places in MSN, only until now are they actually one list!  Some benefits of doing this:

* You can now keep detailed information about your MSN Messenger buddies.  Not just the Display Name and Passport login, but all their email addresses, phone numbers, street addresses and other information.
* Creating a buddy in MSN Messenger means you immediately can send email to that buddy in Hotmail, because the information is already in the Hotmail Contacts list!
* If you define a Group in Messenger, that Group is available in Hotmail.  You can email the Group immediately.  If you rename the Group in Hotmail, the change is immediately made in Messenger.

These are a few of the benefits of the integration we did on the services platform.

The benefits listed above do not do justice to how fundamental the change has been. Basically, we've gotten rid of one of the major complaints about online services: maintaining too many separate lists of people you know. One of the benefits is that you can utilize this master contact list across a number of scenarios beyond a single local application like an email application or an IM client. For example, in MSN Spaces we allow users to use their MSN Messenger allow list (people you've granted permission to contact you via IM) as an access control list for who can view your Space (blog, photo album, etc). There are a lot more interesting things you can do once the applications you use can tell "here are the people I know, these are the ones I trust, etc". We may not get there as an industry anytime soon but MSN users will be reaping the benefits of this integration in more and more ways as time progresses.
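The idea of one master contact list serving several applications can be sketched as follows. This is a minimal illustration of the architecture, not MSN's actual API; every class and method name here is my own invention:

```python
# Hypothetical sketch: one master contact store shared by IM, mail and Spaces.
class ContactStore:
    def __init__(self):
        self.contacts = {}   # passport login -> contact details
        self.allow = set()   # passports allowed to contact you via IM

    def add_contact(self, passport, name, email=None):
        # Detailed info lives in one place, visible to every application.
        self.contacts[passport] = {"name": name, "email": email}

    def grant_im(self, passport):
        self.allow.add(passport)

    # The same allow list doubles as the ACL for viewing a Space.
    def can_view_space(self, passport):
        return passport in self.allow


store = ContactStore()
store.add_contact("bob@example.com", "Bob", email="bob@example.com")
store.grant_im("bob@example.com")

print(store.can_view_space("bob@example.com"))   # True
print(store.can_view_space("eve@example.com"))   # False
```

The design point is that the IM client, the mail client and the Space all query the same store, so granting someone IM permission immediately affects what they can see elsewhere.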

Well, I have to go eat some breakfast, I'm starved...

Happy New Year!!!


 

Categories: MSN | Technology

December 30, 2004
@ 08:38 PM

In Adam Bosworth's post Where have all the good databases gone he asks the Open Source community to target some problems with relational databases that the Big 3 vendors have seemingly been unable to solve.

Krzysztof Kowalczyk has an interesting response to Adam Bosworth's post entitled Google - we take it all, give nothing back where he writes

Open-source - not working as advertised.

The popular theory ("myth” would be a better name) is that open-source works because of this positive feedback loop:

  • source code for product foo is released
  • it’s free so it gets used
  • if it doesn’t fully meet someone’s needs, that someone can code the functionality (since the code is open) and submit the changes back to project (something not possible if you use closed products like Windows or Office or Google)

  • those contributions improve the product for everyone else, so more people use it so more people contribute the code and so on. Sky is hardly the limit.

The good thing in this theory is that it doesn’t rely on kindness of strangers but on englightened self-interest of those who benefit from free software. The bad thing about this theory is that in theory it works much better than in practice.

It’s all because of a weblog post by Google’s Adam Bosworth. Read it yourself, but the gist of it is that, according to Adam, commercial database vendors don’t understand the needs of companies like Google or Amazon or Federal Express. Relational database rely on static schemas and there are no good ways to dynamically reconfigure databases without the disruption in service. Adam ends with a plea to open-source fairy:

My message is to the Open Source community that has, so ably, built LAMP (Linux, Apache and Tomcat and MySQL and PHP and PERL and Python). Please finish the job. Do for databases what you did for web servers. Give us dynamism and robustness. Give us systems that scale linearly, are flexible and dynamically reconfigurable and load balanced and easy to use.

This is why the theory of open source doesn’t work in real world. A multi-billion company has a clear need for software that works well for them but instead of investing in existing open-source projects like PostgreSQL or MySQL to make them do what they need, all they do is ask some magic, undefined entity they call Open Source community to do the work for them. For free.

Google - we take it all, give nothing back. Come work for us.

Let’s estimate how much money did Google save by using open source software that they would otherwise have to purchase. The operating system for tens of thousands of their computers. Web servers they use. All the Unix utilities they use. Editors, compilers and debuggers they use to write their code. E-mail smtp server. E-mail pop servers. Languages like Perl and Python. Databases like MySQL and PostgreSQL. It’s safe to say that if Richard Stallman was never born, the licenses for those kinds of software would cost them tens of millions of dollars.

And what does Google contribute back? Where are their patches to gcc, gdb, python, postgresql, sendmail, emacs?

Google - we leave open-source to Microsoft. Come work for us.

It’s very ironic that I can find more open-source code created by Microsoft and its employees ( RSS Bandit, IronPython, Windows Installer XML (WiX), FlexWiki) than by Google employees. Not saying that there aren’t any but they are certainly not easy to find, even when I use mighty search engine trying to find google open-source.

Google - we like our hardware cheap and our software free. Come work for us.

If you’re into this stuff you know that Google is known for it’s highly tuned process of selecting hardware components (i.e. all those thousands of computers they need to index and store the web) to hit the best price/performance ratio. In a way, they use the cheapest thing, when you define the cost as the total cost of ownership (as opposed to simply the cost of buying the hardware). Thanks to Adam’s admision:

Indeed, in these days of open source, I wonder if the software itself, should cost at all?

we also know, that they like their software free.

As a side note, it’s a surprising statement coming from Adam who knows very well that writing software costs a lot. Open-source doesn’t eliminate this cost, it just shift the costs and allows unlimited number of free-riders, like Google.

I’m picking on Google, but they are not alone. Amazon, yahoo, ebay, aol. Any large business that uses web as means of providing services and making revenues is enjoying enormous savings by using open source stack on their back end. And what do they contribute back? A good approximation of zero compared to benefits they reap.
...
But Adam’s example shows that there’s a fat chance of this happening. Adam is not a rank Google employee. He was not hired to give free massage to stressed Google employees. Before Google Adam was a high-ranked executive at Microsoft and BEA. He led teams that created successful products (IE, Access among them). He’s in position to influence what Google does. He understands technology, he understand the cost and difficulty of making software. He has a weblog and deep thoughts. If only he understood the strategic value of open source.

If someone like Adam cannot see further than the tip of his own nose and his ideas are as bold as asking others to write the software he needs for free, then I don’t have much hope for anyone at aol to get it either.
...

Google - do no Evil. Do no Good. Just like everybody else. Come work for us.
...
In those days of focus on corporate profits (where there any other days?), Google’s motto “Do no Evil” is refreshing.

Or is it? It’s a nice soundbite, but when you think about it, it’s really a low requirement. There are very little things that deserve to be called Evil. If a senior citizen is taking a nap outside his house on a sunny day and you kick him in the groin - that’s Evil. Most other things are bad or neutral. Not doing Evil is easy. Doing Good is the hard thing.

I doubt that this is the kind of response Adam Bosworth was expecting when he posted his plea. The fun thing about corporate blogs is that they give people more places to read between the lines and learn how a company really thinks. I suspect this is why Google doesn't have many authentic bloggers and instead has favored the press release page masquerading as a group blog at http://www.google.com/googleblog/.


 

Thanks to Danny Ayers' post entitled Attention, Attention.xml I finally found a link to the attention.xml specification that was referenced in Robert Scoble's post Gillmor's report on Attention.xml is done where he wrote

One of my 2005 predictions is coming true. Steve Gillmor's report on Attention.xml is included in Esther Dyson's Release 1.0. Thanks to Mike Manuel for letting us know the report is now available for $80. I'll have to check our corporate library and see if it's available there (I believe it is).

Danny Ayers does a good job of taking a critical look at the syntax chosen for the attention.xml format. I, on the other hand, have fundamental questions about the purpose of the format and how it expects to solve the problems highlighted in its problem statement. As of the time I wrote this post, the attention.xml problem statement read

  • How many sources of information must you keep up with?

  • Tired of clicking the same link from a dozen different blogs?

  • RSS readers collect updates, but with so many unread items, how do you know which to read first?

Attention.XML is designed to solve these problems and enable a whole new class of blog and feed related applications.

These are rather lofty goals, and as the author of a moderately popular RSS reader I am interested in solutions to these problems. Looking at the attention.xml schema description, it seems the format is primarily a serialization of the internal state of an RSS reader, including information such as

  • what feeds the user reads
  • when feeds were added to or removed from the user's subscription list
  • the last time the user read a feed
  • the amount of time the user spent reading a post
  • which links in a post the user clicked on
  • the user's rating for a post or feed
  • etc.

This list of data seems suspiciously like a format for synchronizing state between multiple aggregators, or between an aggregator client and server. This makes it very similar to the Synchronization of Information Aggregators using Markup (SIAM) format which I authored with input from a number of aggregator authors including Luke Hutteman (author of SharpReader), Morbus Iff (author of AmphetaDesk) and Brent Simmons (author of NetNewsWire).

Before going into some of the details around the technical difficulties in recording the information that the attention.xml format requires, I want to go back and address the problem statement. I can't see how the internal state of an RSS reader serialized to some XML format solves problems like users seeing multiple blog posts from people linking to the same item, or determining the relative importance of various unread items in a user's queue. The former can be solved quite readily by aggregators today (I don't do it in RSS Bandit because I think the performance cost is too high and it is unclear that the feature is actually beneficial) while the latter borders on an AI problem which isn't going to be solved with the limited set of information contained in the attention.xml format. In short, I can't see how the information in an attention.xml document actually solves the problems described in the problem statement.

Now on to the technical and social difficulties of creating the attention.xml format. The first problem is that not every aggregator can record all the information the format requires. Some aggregators don't have post rating features, some won't or can't track how long a user spent reading an item [which will vary from user to user anyway due to people's different reading speeds], and others don't record the user's relationship to the author of the feed. So attention.xml requires a lot of new features from RSS readers. Assuming the spec gets some traction, I expect that different aggregators will add support for different features while ignoring others (e.g. I can see myself adding post rating features to RSS Bandit but I doubt I'll ever track reading times), as is the case with support for RSS itself within various RSS readers today. The fact that various RSS readers will most likely support different subsets of the attention.xml format is one problem. There is also the fact that logging all this information may be cumbersome in certain cases, which further reduces how likely it is that all the information described in the spec will be recorded. Then the problem is what to do when clients speak different dialects of attention.xml. Are they expected to round trip? If I send Bloglines an attention.xml file with rating information even though it doesn't have that feature, should it track that information for the next time it is asked for my attention.xml by Newsgator, which supports ratings?
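To make the round-tripping concern concrete, here is a minimal sketch. The element names are my own loose approximation of an attention.xml-style feed record, not the spec's actual vocabulary; the point is only to show the difference between a client that drops fields it doesn't understand and one that carries them through:

```python
import xml.etree.ElementTree as ET

# Hypothetical feed-state record, loosely modeled on attention.xml;
# element names here are illustrative, not the draft's vocabulary.
xml_in = """<feed>
  <title>Example Blog</title>
  <lastread>2004-12-30T20:38:00Z</lastread>
  <rating>4</rating>
</feed>"""

KNOWN = {"title", "lastread"}  # this client has no rating feature

def process(doc, keep_unknown):
    """Re-emit a feed record, optionally preserving unrecognized elements."""
    root = ET.fromstring(doc)
    out = ET.Element(root.tag)
    for child in root:
        if child.tag in KNOWN or keep_unknown:
            out.append(child)
    return [c.tag for c in out]

# A naive client silently loses the rating another aggregator recorded...
print(process(xml_in, keep_unknown=False))  # ['title', 'lastread']
# ...while a round-tripping client hands it back intact.
print(process(xml_in, keep_unknown=True))   # ['title', 'lastread', 'rating']
```

Whether clients are obligated to behave like the second case is exactly the kind of question the spec would need to answer.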

Don't take this post to mean that I think something like attention.xml is unnecessary. As it stands now I want to increase the number of synchronization sources supported by RSS Bandit to include the Bloglines sync API and Newsgator Online synchronization, but they use different web services. It looks like Technorati is proposing a third with attention.xml. I'd love for there to be some standardization in this area, which would make my life as an aggregator author much easier. Client<->server synchronization of user subscriptions is something that users of information aggregators really would like to see (I get requests for this feature all the time) and it would be good to see some of the major players in this area get together with aggregator authors to see how we can make the ecosystem healthier and provide a better story for users all around.

I don't believe that attention.xml is a realistic solution to the problems facing aggregator authors and users of RSS readers. I just hope that some solution shows up soon, as opposed to the current fragmentation that exists in the syndication marketplace.


 

December 27, 2004
@ 03:58 AM

I tend to think it takes a lot of insensitivity to stun me, but it seems I was incorrect. I was taken aback by Robert Scoble's post entitled Where's the blogosphere on first-hand earthquake reports? where he writes

By the way, PubSub really rocks (it lets you search blogs only and build an RSS feed so you can watch a specific search term over time -- something none of the big three search engines let you do). My posts only took a few minutes to start showing up in the earthquake feed I built. There's remarkably little blogging going on about the earthquake.

It's really disappointing. Citizen Journalism is really failing here. Almost no first-hand reports.

The mainstream press kicked the blogosphere's a##.

This is probably one of the most insensitive and unthinking posts I've seen in a while. A giant tidal wave kills over twelve thousand people, and Robert Scoble's first instinct is to complain because none of the survivors rushed to their blogs immediately afterwards to post about it.

Wow...


 

As promised in the RSS Bandit roadmap, the preview of the next version of RSS Bandit is now available for general download. You can download it at RssBandit.1.3.0.12.Wolverine.Alpha.zip.

NEW FEATURES THAT WORK

  • Newspaper styles: Ability to view all unread posts in a feed in a Newspaper view. This view uses templates that are in the same format as those used by FeedDemon so one can use RSS Bandit newspaper styles in FeedDemon and vice versa.

  • Per feed newspaper styles: Ability to specify a particular stylesheet for a given feed. For example, one could use the slashdot.fdxsl stylesheet for viewing Slashdot, headlines.fdxsl for viewing news sites and outlook-express.fdxsl for viewing blog posts.

  • Skim Mode: Added option to 'Mark All Items As Read on Exiting a Feed' 

  • Search Folder Improvements: Made the following additions to the context menu for search folders: 'New Search Folder', 'Refresh Search', 'Mark All Items As Read' and 'Rename Search Folder'. Also, deletion of a search folder now prompts the user to prevent accidental deletion.

  • Item Deletion: News items can be deleted by either using the [Del] key or using the context menu. Deleted items go to the "Deleted Items" special folder and can be deleted permanently by emptying the folder or restored to the original feed at a later date.

  • UI Improvements: Tabbed browsers now use a multicolored border reminiscent of Microsoft OneNote.

  • Limited NNTP Support: Ability to add a news server via the Tools->Newsgroups menu item. Once added, the available newsgroups on that news server can be queried.

NEW FEATURES THAT DON'T WORK

  • Subscribing to newsgroups
  • Ability to filter items in list view by some search parameters
  • Option of automatic upload/download of RSS Bandit state for synchronization purposes 
  • Column chooser to enable users pick what columns show up in the list view.

For a detailed log of the differences between the Wolverine alpha and v1.2.0.117, check out the RSS Bandit changelog.


 

Categories: RSS Bandit

Jim Hill comments on Michael Eisner's management methods as Disney CEO in his post What does a Yeti smell like? where he writes

Michael Eisner is a micro-manager. Now, I know that that's not exactly a late breaking story. But I think that it's important to understand how truly obsessive Disney's CEO can be when it comes to getting the details at the company's theme parks just right.

Take -- for example -- the Dolphin Resort Hotel. When Eisner wasn't entirely convinced that the giant banana leaves that architect Michael Graves wanted to paint on the sides of this resort would actually look good, Disney's CEO ordered that a huge sample leaf be painted on the backside of Epcot's Mexico pavilion. Just so he could see if this particular design element would look good when it was done to full scale.

But Michael's almost-insane attention to detail doesn't just stop at just the outside of Disney's resorts. Oh, no. After all, for years now, Eisner has insisted that -- before he signs off on the construction of any new WDW hotel -- that a sample room from this proposed resort be prepared. One that features all the furnishings that guests will actually be using in this hotel.

This sample room used to be located in a backstage area at the Caribbean Beach Resort. I've actually seen photos of this squat square structure back when the Imagineers were testing design elements for the All Star Music Resort. Which is why these pictures feature a giant sample maraca leaning against this tiny brightly painting building.

As for the other part of this story ... That Michael reportedly insists on sleeping in each of these sample rooms before he will actually allow construction of the proposed resort to go forward ... That part, I've never been able to prove.

Although Jim Hill's intention is to paint Michael Eisner in a negative light with these examples, I'm not sure I necessarily see them as bad practices. Personally, I'd love it if I heard that Steve Ballmer (the CEO of the company that employs me) wouldn't allow Microsoft to ship a copy of Microsoft Money until he'd used it to manage his finances with relative success, or that no MSN Direct SmartWatches would go out until he'd successfully used one as his personal timepiece for a month, or that no release of Internet Explorer would ship until he could browse any site on the Web without fear of his computer being taken over by spyware. Of course, I don't expect this to ever happen; even if Steve Ballmer wanted to do this, Microsoft ships too many products to introduce such a bottleneck into the product development process.

I do know people who've had VPs take personal interest in their products who have ended up disliking that level of scrutiny. One person even commented, "Whenever I see a VP asking some nitpicky questions about one of my features, I wonder to myself why he's trying to micromanage my features instead of trying to figure out how to get our stock price out of the flat funk it's been in for the past few years". Different strokes for different folks I guess. :)

December 24, 2004
@ 02:38 AM

Mark Fussell, my former boss, has a post entitled Smart Watch Frustration - A Christmas Tale where he talks about the various problems he's had with his MSN Direct SmartWatch. He writes:

  • I wander into the company store and excitedly purchase a Fossil FX-3001 at the end of Dec 2003. I have to wait 4 weeks before I receive it, so it was a belated Christmas present.
  • I receive watch one in Jan, the plastic/metal carton nearly kills me, but I activate it and proceed to continuously show my work mates how great it is and how I know exactly where to be for my next meeting.
  • End of Jan 2004 - watch one goes blank and stops working entirely. Not a hint of life. I send it to the Fossil repair center. 
  • Feb 2004 - Get new watch two, register it, continue to enjoy it and proudly show it off again, especially to Arpan, who desperately wants one.
  • Feb 2004 - Watch two starts to reset on an hourly basis to 12pm 1/1/2001, rendering it useless. I send it to the Fossil repair center again.
  • March 2004 - Get new watch three, register it and enjoy it, tentatively showing it off. By now everyone is uninterested in it.
  • May 2004 - Watch starts to reset on a 2-3 hourly basis. Worse still, I start to tell people the wrong time and cause confusion, including one old lady in the street who asked me the time and then argued that I was wrong. I should have agreed with her.
  • May 2004 - Nov 2004 - I suffer watch three. Whilst it is in a good reception area (i.e. around my home) it works OK. If I go anywhere out of reception range (i.e. the steel buildings at work, 20 miles north of my house or the UK) the watch becomes immediately useless, resetting to 12pm 1/1/2001 continuously. i.e. it is not even a watch.
  • Nov 2004 - I give up. I send watch three to the Fossil repair center having spent 30 minutes on the phone with a technician trying to "fix" it.
  • Dec 2004 - Get new watch four which is a new design, the FX-3005. The clasp must have been invented by someone from the Spanish Inquisition and it takes me about 10 minutes to figure out how to open and close it. I take watch four out of its brand new box and set it onto the charger. There is no comforting "beep" to indicate that it is charging. I spend 1 hour trying every combination and position on the charger. The next day I speak to the Fossil technical help desk and they determine, as I did the night before, that watch four is a lifeless heap of metal and plastic.  I sent it to the Fossil repair center.
  • Dec 22nd 2004 - I receive watch five which is also a new FX-3005. Curiously this one is not in a new box and is simply wrapped in bubble wrap. I take it out and note that it is already charged, but at least it is working, being careful not to slash my wrist with the dangerous metal clasp. I leave it to charge overnight and sync my personal settings.
  • Dec 23rd - In the morning I note that it still has not synced my settings, so I go to register the watch ID on the MSN Direct site. Worryingly it replies that this watch is already registered. No problem, I phone MSN Direct. To cut a 40 minute conversation short, I am told to 'reboot' the watch by continuously pressing three painful buttons (it takes 9 attempts) to generate a new 'dynamic' ID for the watch. It turns out that this is all in vain. And here is the crux. This watch was previously owned (by the ID) and the icing on the cake is that the ID cannot be reset and assigned to my account. I am told by the help desk that the only thing that I can do is send it back to the Fossil repair center. Aaaaargh. Aaaaargh. Aaaaargh

I witnessed a lot of this first hand and I was stunned at how problematic these watches were for Mark. The initial lure of the SmartWatch to Mark was having a useful, internet-connected device that would automatically track his schedule as well as keep up with news and sports scores. I also had a need for such a device but decided to go with a SmartPhone instead.

So far my AudioVox SMT 5600 has worked like a charm. It's a phone, a camera, it syncs with Outlook wirelessly so I always have an up to date email and calendar, it tracks traffic density, it can be used to catch up on news when I'm bored while I'm stuck waiting somewhere, and I've even used it to hit Google once or twice while on the go. Plus the form factor is all that and a bag of chips.

I should try to talk Mark into giving up on the SmartWatches and going for a SmartPhone instead.

December 22, 2004
@ 05:27 PM

It seems there's been some recent hubbub in the world of podcasting about how to attach multiple binary files to a single post in an RSS feed. In a post entitled Multiple-enclosures on RSS items?, Dave Winer weighs in on the issue. He writes:

This question comes up from time to time, and I've resisted answering it directly, thinking that anyone who really read the spec would come to the conclusion that RSS allows zero or one enclosures per item, and no more. The same is true for all other sub-elements of item, except category, where multiple elements are explicitly allowed. The spec refers to "the enclosure" in the singular. Regardless, some people persist in thinking that you may have more than one enclosure per item.

Okay, let's play it out. So if I have more than one enclosure per item, how do I specify the publication date for each enclosure? How do I specify the title, author, a link to comments, a description perhaps, or a guid? The people who want multiple enclosures suggest schemes that are so complicated that they're reduced to hand-waving before they get to the spec, which I would love to read, if it could be written. Sometimes some things are just too hard to do. This is one of them.

And there's a reason why it's too hard. Because you're throwing out the value of RSS and then trying to figure out how to bring it back. There's no need for items any more, so you might as well get rid of them. At the top level of channel would be a series of enclosures, and then underneath each enclosure, all the meta-data. Voila, problem solved. Only what have you actually solved? You've just re-created RSS, but instead of calling the main elements "item" we now call them "enclosure".
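Winer's one-enclosure-per-item reading is also what simple consumers assume in practice. Here's a quick sketch, using Python's standard library and a made-up sample feed, of how a consumer picks up that single enclosure per item:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed with at most one enclosure per item.
# The feed content here is invented for illustration.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="http://example.org/ep1.mp3"
                 length="12345678" type="audio/mpeg"/>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
for item in root.iter("item"):
    # find() returns the first matching child, which matches the
    # "zero or one enclosure per item" reading of the RSS 2.0 spec.
    enc = item.find("enclosure")
    if enc is not None:
        print(item.findtext("title"), enc.get("url"), enc.get("type"))
```

Anything fancier than "the first (and only) enclosure of this item" immediately runs into the metadata questions Winer raises above.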

The value of RSS is fairly self-evident to me, but given the number of people who keep wanting to reinvent the wheel, it may not be as clear to others. As someone who used to work on core XML technologies at Microsoft, the value of XML was obvious to me. It allowed developers to agree to use the same data format for information interchange, which led to a proliferation of a wide and uniform set of tools for working with data formats. XML is not an optimal format for most of the tasks it is used for, but it more than makes up for this with the plethora of tools and technologies that exist for processing XML.

My expectation about XML was always that the software industry would move on to agreeing on other higher level protocols built on XML for application information interchange. So I've always been frustrated to see many attempts by various parties, including the W3C with efforts such as XML 1.1 and binary XML, take us steps back by wanting to fragment the interoperability promise of XML.

RSS is a wonderful example of the higher level of interoperability that can be built upon XML formats. Instead of information sources using various incompatible mechanisms for providing information to end users, such as NOAA's SOAP web service and the Microsoft.com web services which each require a separate custom application to consume them, sites can all standardize on RSS. This standardization creates an ecosystem of applications that produce and consume RSS feeds which is a lot larger than would exist for site-specific web services or market-specific XML syndication formats. Specifically, it allows for the evolution of the digital information hub where users can view data from the various information sources they care about (blogs, news, weather reports, etc.) in their choice of applications.

Additionally, RSS is extensible. This means that even if the core elements and attributes do not satisfy all the requirements of a particular problem domain, domain-specific information can be added to the feed. This allows regular consumers of RSS to still be able to consume the content while domain-specific applications can give users a richer experience. This is a much better solution for both content producers and consumers than coming up with domain-specific applications.
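To make the extensibility point concrete, here's a small sketch in Python. The "weather" namespace and its elements are invented for illustration; the point is that a plain aggregator reads only the core RSS elements while a domain-aware application can also read the namespaced extension:

```python
import xml.etree.ElementTree as ET

# An RSS item carrying a namespaced, domain-specific extension.
# The weather namespace and <w:high> element are invented here.
FEED = """<?xml version="1.0"?>
<rss version="2.0" xmlns:w="http://example.org/weather">
  <channel>
    <title>City Forecast</title>
    <item>
      <title>Tuesday</title>
      <description>Partly cloudy</description>
      <w:high units="F">54</w:high>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
item = root.find(".//item")

# A plain aggregator reads only the core elements and simply
# ignores any element it doesn't recognize...
print(item.findtext("title"), "-", item.findtext("description"))

# ...while a weather-aware application can also use the extension.
high = item.find("{http://example.org/weather}high")
if high is not None:
    print("High:", high.text, high.get("units"))
```

Both kinds of consumers work from the same feed, which is exactly what makes extensions a better deal than a brand-new domain-specific format.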

As a user I want fewer formats, not more. I want my email to come into my RSS aggregator, I want my favorite newsgroups to show up in my RSS aggregator; I'm tired of having a separate application for what is essentially the same kind of data. In fact, it seems Google agrees with me, as evidenced by them exposing XML feeds for your GMail inbox and for USENET newsgroups via Google Groups. Unfortunately, if you have a plain old RSS reader, you can't view these feeds and instead have to find an aggregator that supports Atom 0.3. Two steps forward, one step back.

We need fewer data interchange formats, not more. That is better for content producers, better for end users and better for developers of applications that use these formats. Efforts in syndication should focus on how to make the existing formats work for us instead of inventing new ones.

Vive la RSS.

Categories: Syndication Technology | XML