I've been reading the Mini-Microsoft blog for a couple of months now and recently considered unsubscribing. The main reason I haven't is that the recent story in Business Week about the blog has attracted a lot of readers, which makes the comment threads interesting in a Slashdot kind of way. Since I couldn't locate an email address for the blog's author on its front page, I'll just post my comments here on why the blog jumped the shark for me.

  1. Complaints about Symptoms instead of the Root Problems: A lot of the things complained about by the author of the Mini-Microsoft blog are symptoms of a single problem. About six years ago, Motley Fool ran an article entitled The 12 Simple Secrets of Microsoft Management which listed a number of characteristics of the company that made it successful. The fourth item on the list is Require Failure and it states:

    In contrast, at Microsoft, failure is expected, and even required because risking failure is the only way to push the envelope. As a result, Microsofties relentlessly pursue success without fear of failure. And if they fail, they understand that the key is to fail quickly and not waste time.

    One of the unfortunate things about a culture that turns a blind eye to failure is that eventually there is no difference between requiring failure and a lack of accountability. A lot of the things Mini complains about point to an environment where a lack of accountability runs rampant. I'd rather see him ring the bell about these issues [which he does every once in a while] as opposed to meaningless distractions like complaining about vague ship dates or asking for mass firings because the company is "too large".

  2. Microsoft is Lots of Little Companies: One thing that isn't clear from Mini's posts is that a number of the complaints he raises are organization specific. The culture in the Office group is different from that at MSN, and the issues facing the people working in the Windows group are different from those facing the folks working on XBox. Mini's blog not only isn't representative, it doesn't seem like he pays much attention to what is happening outside of his group. For example, it is quite telling that he didn't know that the ship date for Visual Studio 2005 was announced a while ago, given how many people within the company work on and are impacted by the shipping of Whidbey & Yukon.

  3. Stack Ranking: This point is probably a repeat of my first one. First of all, when it comes to performance reviews, I tend to agree with Joel Spolsky's article Incentive Pay Considered Harmful. Joel wrote:

    And herein lies the rub. Most people think that they do pretty good work (even if they don't). It's just a little trick our minds play on us to keep life bearable. So if everybody thinks they do good work, and the reviews are merely correct (which is not very easy to achieve), then most people will be disappointed by their reviews. The cost of this in morale is hard to understate. On teams where performance reviews are done honestly, they tend to result in a week or so of depressed morale, moping, and some resignations. They tend to drive wedges between team members, often because the poorly-rated are jealous of the highly-rated, in a process that DeMarco and Lister call teamicide: the inadvertent destruction of jelled teams.

    In general, systems where you try to competitively rank people and reward them based on their rankings suck. A lot. When you combine this with the current rewards associated with positive rankings at Microsoft, then you have a system that doubly sucks. I think Mini gets this but he keeps talking about alternative performance review systems even though there is lots of evidence that incentive pay systems simply do not work.

Those are the top three reasons that I find myself losing interest in keeping up with the Mini-Microsoft blog. However I'll probably keep reading because the comments now have me gawking at them regularly, sometimes even more than I do at Slashdot.


 

Categories: Life in the B0rg Cube

October 2, 2005
@ 02:02 AM

Tim O'Reilly has posted What Is Web 2.0?: Design Patterns and Business Models for the Next Generation of Software, which further convinced me that the definition of Web 2.0 used by Tim O'Reilly and his ilk may be too wide to be useful. In the conclusion of his article he writes

Core Competencies of Web 2.0 Companies

In exploring the seven principles above, we've highlighted some of the principal features of Web 2.0. Each of the examples we've explored demonstrates one or more of those key principles, but may miss others. Let's close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:

  • Services, not packaged software, with cost-effective scalability
  • Control over unique, hard-to-recreate data sources that get richer as more people use them
  • Trusting users as co-developers
  • Harnessing collective intelligence
  • Leveraging the long tail through customer self-service
  • Software above the level of a single device
  • Lightweight user interfaces, development models, AND business models

The next time a company claims that it's "Web 2.0," test their features against the list above.

The list seems redundant in some places and could probably be reduced to three points. Half the bullet points seem to say that the company should expose Web services [in this context I mean services over the Web whether they be SOAP, REST, POX/HTTP, RSS, etc]. So that's point number one. The second key idea is harnessing collective intelligence, as with Amazon's recommendation engine, Wikipedia entries and folksonomies/tagging systems. The final key concept is that Web 2.0 companies leverage the long tail. One example of the difference between Web 1.0 and Web 2.0 when it comes to harnessing the long tail is the difference between http://www.msn.com, a portal with news and information of general interest that aims to appeal to broad audiences (one size fits all), and http://www.start.com, which encourages people to build their own portal that fits their needs (every niche is king).

So let's review. Tim O'Reilly's essay can be reduced to the following litmus test for whether an offering is Web 2.0 or not:

  • Exposes Web services that can be accessed on any device or platform by any developer or user. RSS feeds, RESTful APIs and SOAP APIs are all examples of Web services.
  • Harnesses the collective intelligence of its user base to benefit its users
  • Leverages the long tail through customer self-service

So using either Tim O'Reilly's list or mine, I'd be curious to see how many people think http://www.myspace.com is a Web 2.0 offering. If so, why? If not, please tell me in my comments why you think all the folks who've called MySpace a Web 2.0 offering are wrong. For the record, I think it isn't but would like to compare my reasons with those of other people out there.


 

Categories: Web Development

October 2, 2005
@ 12:47 AM

Brian Jones has a post entitled Native PDF support in Office "12" where he writes

Today's another exciting day as we move closer to Beta 1. We are just wrapping up the MVP summit here in Redmond and we've finally announced another piece of functionality I've wanted to talk about for a long time now. This afternoon Steven Sinofsky announced to our MVPs that we will build in native support for the PDF format in Office "12".  I constantly get asked by customers if we can build in this support for publishing documents as PDF files, and now I can thankfully say "yes!" It's something we've been hearing about for years, and earlier in this project we decided that while there were already existing third party tools for doing this, we should do the work to build the functionality natively into the product.

The PDF support will be built into Word, Excel, PowerPoint, Access, Publisher, OneNote, Visio, and InfoPath! I love how well this new functionality will work in combination with the new Open XML formats in Word, Excel, and PowerPoint. We've really heard the feedback that sharing documents across multiple platforms and long term archiving are really important. People now have a couple options here, with the existing support for HTML and RTF, and now the new support for Open XML formats and PDF!

This is a very welcome surprise. The Office team is one of the few groups on main campus who seem to consistently get it. Of course, the first thought that crossed my mind was the one asked in the second comment in response to Brian's post.


 

There's been a bunch of MSN Virtual Earth hacking going on in my building over the past couple of weeks. There was my Seattle Movie Finder hack. Recently two folks on the MSN Messenger team created a shared map browsing application using the recently released MSN Messenger Activity API. Chandu Thota has the details in his post Virtual Earth and MSN Messenger : Peer-2-Peer Mapping Experience

If you are running MSN Messenger 6.0 or higher, open a conversation with your contact and click on "Activities" menu item; it will display a list of activities that you can use which includes "Virtual Earth Shared Map" as shown below:

Once you and your contact accept this activity you both can find, pan and zoom on the Virtual Earth map all interactively, like the one shown below:

Okay, I don't want to waste your time anymore - this is one of the coolest things I have seen in this space - try it out! you won't be disappointed! :)

PS: Kudos to Steve Gordon and Shree Madhavapeddi from MSN for creating such a wonderful app!

I got a demo of this from Steve and Shree last week, but I didn't realize that it would show up in the MSN Messenger application so soon. That is some quick turnaround time.

I also got a demo of a cool Start.com gadget which uses MSN Virtual Earth from Matt this week. I wonder how long that will take to sneak out onto the Web.

PS: In his post about this, Robert Scoble states that the application was created by Scott Swanson. This isn't accurate; Scott wrote a similar application as a PDC demo, but the version you can get in the MSN Messenger activities menu isn't it.


 

Categories: MSN

September 30, 2005
@ 08:14 PM

There have been a number of amusing discussions in the recent back and forth between Robert Scoble and several others on whether OPML is a crappy XML format. In posts such as OPML "crappy" Robertson says and More on crappy formats, Robert defends OPML. I've seen some really poor arguments made as people rushed to bash Dave Winer and OPML, but none made me want to join the discussion until this morning.

In the post Some one has to say it again… brainwagon writes

Take for example Mark Pilgrim's comments:

I just tested the 59 RSS feeds I subscribe to in my news aggregator; 5 were not well-formed XML. 2 of these were due to unescaped ampersands; 2 were illegal high-bit characters; and then there's The Register (RSS), which publishes a feed with such a wide variety of problems that it's typically well-formed only two days each month. (I actually tracked it for a month once to test this. 28 days off; 2 days on.) I also just tested the 100 most recently updated RSS feeds listed on blo.gs (a weblog tracking site); 14 were not well-formed XML.

The reason just isn't that programmers are lazy (we are, but we also like stuff to work). The fact is that the specification itself is ambiguous and weak enough that nobody really knows what it means. As a result, there are all sorts of flavors of RSS out there, and parsing them is a big hassle.

The promise of XML was that you could ignore the format and manipulate data using standard off-the-shelf-tools. But that promise is largely negated by the ambiguity in the specification, which results in ill-formed RSS feeds, which cannot be parsed by standard XML feeds. Since Dave Winer himself managed to get it wrong as late as the date of the above article (probably due to an error that I myself have done, cutting and pasting unsafe text into Wordpress) we really can't say that it's because people don't understand the specification unless we are willing to state that Dave himself doesn't understand the specification.

As someone who has (i) written a moderately popular RSS reader and (ii) worked on the XML team at Microsoft for three years, I know a thing or two about XML-related specifications. Blaming malformed XML in RSS feeds on the RSS specification is silly. That's like blaming the large number of HTML pages that don't validate on the W3C's HTML specification instead of on the fact that web browsers try to render invalid web pages rather than rejecting them with an error. If web browsers didn't render invalid web pages then such pages wouldn't exist on the Web.

Similarly, if every aggregator rejected invalid feeds then they wouldn't exist. However, just like in the browser wars, aggregator authors consider it a competitive advantage to be able to handle malformed feeds. This has nothing to do with the quality of the RSS specification [or the HTML specification]; this is all about applications trying to gain market share.
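To make this concrete, here is a minimal C# sketch (hypothetical code, not from any actual aggregator) of what a conforming XML parser does when it hits one of the common errors from Mark Pilgrim's survey, an unescaped ampersand:

    using System;
    using System.IO;
    using System.Xml;

    class StrictFeedCheck
    {
        static void Main()
        {
            // A feed fragment with an unescaped ampersand -- one of the two
            // most common well-formedness errors in Mark Pilgrim's survey.
            string feed = "<rss><channel><title>News & Views</title></channel></rss>";

            XmlTextReader reader = new XmlTextReader(new StringReader(feed));
            try
            {
                while (reader.Read()) { }
                Console.WriteLine("well-formed");
            }
            catch (XmlException e)
            {
                // A conforming parser must stop dead here. A liberal
                // aggregator swallows this and falls back to tag soup.
                Console.WriteLine("not well-formed: " + e.Message);
            }
        }
    }

An aggregator that wants market share takes the catch branch and guesses at the author's intent; a strict one reports the error and loses a subscriber.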

As for whether OPML is a crappy spec? I've had to read a lot of technology specifications in my day, from W3C recommendations and IETF RFCs to API documentation and informal specs. They all suck in their own ways. However, experience has taught me that the bigger the spec, the more it sucks. Given that, I'd rather have a short, human readable spec that sucks a little (e.g. RSS, XML-RPC, OPML etc.) than a large, jargon-filled specification which sucks a whole lot more (e.g. WSDL, XML Schema, C++, etc). Then there's the issue of using the right tool for the job but I'll leave that rant for another day.


 

Categories: XML

While using Firefox this morning, I realized something was missing. There is a Google Toolbar for Firefox and there is a Yahoo! Toolbar for Firefox, so how come there isn't an MSN Toolbar for Firefox? Just yesterday, Ken Moss, who runs the MSN Search team, posted on their blog about MSN Search Plugins for Firefox where he wrote

However, some of our customers prefer using Firefox and we respect that choice.  Some developers in our user community have created Firefox plug-ins to make it easy to do searches on MSN from the Firefox search box.  Even though it’s currently buried in Firefox under “Add Engines… Find lots of other search engines…”, it seems that our customers have been finding it since we’re listed as one of the most popular search engine plugins.

I use Firefox sometimes in the course of my job – and when I do, I love having the MSN Search engine plugged-in up in the chrome.  If you’re currently a Firefox user – I hope you’ll enjoy this little nugget. For more MSN Search fun with Firefox (or IE!), try out the PDC version of MSN Search enabled by a Trixie / Greasemonkey script.

It's cool to see the MSN Search team giving a shout out to plugins built by the developer community, but I think it would be even cooler if we stepped up to the plate like Yahoo! and Google have done by providing an official, full-fledged toolbar for Firefox.


 

Categories: MSN

September 29, 2005
@ 07:30 PM

Kitty came by my office to remind me that the Web 2.0 conference is next week. As part of the lead-up to the conference, I can see the technology geek blogosphere is buzzing with the What is Web 2.0? discussion, which was sparked off by Tim O'Reilly's posting of the Web 2.0 meme map created during FooCamp. The meme map is below for the few folks who haven't seen it.

The meme map is a visual indication that "Web 2.0" has joined "SOA" as a buzzword that is too ill-defined to have a serious technical discussion about. It is now associated with every hip trend on the Web. Social Networking? That's Web 2.0. Websites with APIs? That's Web 2.0. The Long Tail? That's Web 2.0. AJAX? That's Web 2.0. Tagging and Folksonomies? That's Web 2.0 too. Even blogging? Yep, Web 2.0.

I think the idea and trend towards the 'Web as a platform' is an important one and I find it unfortunate that the discussion is being muddied by hypesters who are trying to fill seats in conference rooms and sell books.

I'm in the process of updating my Bill Gates Thinkweek paper on MSN and Web platforms to account for the fact that some of my recommendations are now a reality (I helped launch http://msdn.microsoft.com/msn) and, more importantly, given recent developments it needs to change in tone from a call to action to something more prescriptive. One of the things I'm considering is removing references to "Web 2.0" in the paper given that it may cause a bozo bit to be flipped. What do you think?


 

Categories: Web Development

We're almost ready to begin public beta testing of our implementation of the MetaWeblog API for MSN Spaces. As with most technology specifications, the devil has been in the details of figuring out how common practice differs from what is in the spec.

One place where we hit some gotchas is how dates and times are defined in the XML-RPC specification, which the MetaWeblog API uses. From the spec:

Scalar <value>s

<value>s can be scalars, type is indicated by nesting the value inside one of the tags listed in this table:

Tag                  Type        Example
<dateTime.iso8601>   date/time   19980717T14:08:55

The reason the above definition of a date/time type is a gotcha is that the date in the example is in the format YYYYMMDDTHH:MM:SS. Although this is a valid ISO 8601 date, most Web applications that support ISO 8601 dates usually support the subset defined in the W3C Note on Dates and Time Formats which is of the form YYYY-MM-DDTHH:MM:SS. Subtle but important difference.
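Out of curiosity, here is a minimal sketch of accepting both forms; the method name is mine and this isn't our actual implementation:

    using System;
    using System.Globalization;

    class Iso8601Gotcha
    {
        // Accepts both 19980717T14:08:55 (the XML-RPC spec's example) and
        // 1998-07-17T14:08:55 (the W3C Note subset of ISO 8601).
        static DateTime ParseXmlRpcDate(string value)
        {
            string[] formats = { "yyyyMMdd'T'HH:mm:ss", "yyyy-MM-dd'T'HH:mm:ss" };
            return DateTime.ParseExact(value, formats,
                CultureInfo.InvariantCulture, DateTimeStyles.None);
        }

        static void Main()
        {
            Console.WriteLine(ParseXmlRpcDate("19980717T14:08:55"));
            Console.WriteLine(ParseXmlRpcDate("1998-07-17T14:08:55"));
        }
    }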

Another thing that had me scratching my head was related to timezones in XML-RPC. The spec states

  • What timezone should be assumed for the dateTime.iso8601 type? UTC? localtime?

    Don't assume a timezone. It should be specified by the server in its documentation what assumptions it makes about timezones.

This just seems broken to me. What if you are a generic blog posting client like Blogjet or W.Bloggar which isn't tied to one particular server? It would seem that the only sane thing that can happen here is for dates & times from the server to always be used and dates & times from clients to be ignored, since they are useless without timezones. If I get a blog post creation date of September 29th at 4:30 PM from a client, I can't use it since without a timezone I'll likely date the entry incorrectly by anything from a few hours to an entire day.
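To show how bad the skew can get, here's a sketch with assumed offsets (not anything our server actually does) of the same timezone-less timestamp read under two different assumptions:

    using System;
    using System.Globalization;

    class TimezoneGuess
    {
        static void Main()
        {
            // A client sends a post creation date with no timezone attached.
            DateTime wallClock = DateTime.ParseExact("20050929T16:30:00",
                "yyyyMMdd'T'HH:mm:ss", CultureInfo.InvariantCulture);

            // If the server assumes UTC the entry is dated 16:30 UTC, but if
            // the client meant US Pacific (UTC-7 in September) the actual
            // instant is 23:30 UTC -- a seven hour error.
            Console.WriteLine("assuming UTC:        {0}", wallClock);
            Console.WriteLine("assuming US Pacific: {0}", wallClock.AddHours(7));
        }
    }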

It probably would have been better to retrofit timezones into the spec than just punting on the problem as is the case now. I wonder what other interesting gotchas are lurking out there for our beta testers to find. :)


 

I've been in the process of moving apartments so I haven't had free time to work on RSS Bandit. In the meantime, we've been getting lots of excellent bug reports from people using the alpha version of the Nightcrawler release. One of the bug reports we've gotten was that somewhere along the line we introduced a bug that caused significant memory consumption (hundreds of megabytes). Since I've been busy, Torsten tracked it down and wrote about it in his post More to Know About .NET Timers. He wrote

As you may know, .NET 1.1 supports three different Timer classes:

  1. Windows timers with the System.Windows.Forms.Timer class
  2. Periodical delegate calling with System.Threading.Timer class
  3. Exact timing with the System.Timers.Timer class

The timings are more or less accurate (see CodeProject: Multithreading in .NET), but that is not the point I want to highlight today. Two sentences from the mentioned codeproject article are important for this post:

"... Events raised from the windows forms timer go through the message pump (together with all mouse events and UI update messages)..."

and

"...the System.Timers.Timer class. It represents server-based timer ticks for maximum accuracy. Ticks are generated outside of our process..."

To report state and newly retrieved items from requested feeds we used a concept to serialize the asynchronous received results from background threads with the help of a timer. This was introduced in the NightCrawler Alpha Dare Obasanjo posted last week for external tests. Some users reported strange failures, memory hog up and bad UI behavior with this Alpha so I would suggest here to not use it anymore for testing if your subscribed feeds count is higher than 20 or 30 feeds.

The idea was not as bad as it seems to be (if you only look at the issues above). The real issue in our case was to use simply the wrong timer class! The UI state refresh includes an update of the unread counters that is reported to the user within the treeview as number postfixes and (more important here) a font refresh (as user decides, default is to mark the feed caption text bold).
...

So what happens exactly now if the timer fires? I used the CLR Profiler to get the following exiting results. The event is called in sync. with the SynchronizingObject, means Control::WndProc(m) calls calls into System.Windows.Forms.Control::InvokeMarshaledCallbacks void(), MulticastDelegate::DynamicInvokeImpl()... and then our event method OnProcessResultElapsed(). The allocation graph mentions 101 MB (44.78%) used here!
...

So what to do to fix the problem(s)? Simply use the Windows.Forms.Timer! Think about it: it is driven by the main window message pump and runs always in the right context of the main UI thread (no .InvokeRequired calls). Timing isn't an important point here, we just want to process one result each time we are called. Further: no cross-AppDomain security check should happen anymore! With that timer it is just a simple update control(s) with some fresh data!

So take care of the timer class(es) you may use in your projects! Check their implications!

Tracking down bugs is probably one of the most satisfying and yet most frustrating things about programming. I'm glad we got to the root of this problem.
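For anyone curious about the shape of the fix, here's a minimal sketch (class and member names are illustrative, not RSS Bandit's actual code) of draining queued background results with a System.Windows.Forms.Timer, whose Tick event is driven by the message pump and therefore always fires on the UI thread:

    using System;
    using System.Collections;
    using System.Windows.Forms;

    public class ResultPump
    {
        private Timer uiTimer = new Timer();   // System.Windows.Forms.Timer
        private Queue results = Queue.Synchronized(new Queue());

        public ResultPump()
        {
            // Driven by the UI message pump, so OnTick always runs on the
            // main UI thread: no InvokeRequired/Invoke marshaling, no
            // SynchronizingObject and no cross-AppDomain security checks.
            uiTimer.Interval = 250;
            uiTimer.Tick += new EventHandler(OnTick);
            uiTimer.Start();
        }

        // Background worker threads enqueue downloaded feed results here.
        public void Enqueue(object feedResult)
        {
            results.Enqueue(feedResult);
        }

        private void OnTick(object sender, EventArgs e)
        {
            if (results.Count == 0) return;
            object feedResult = results.Dequeue();
            // ...update tree view captions and unread counters with the result...
        }
    }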

By the way, don't forget to cast your vote in the RSS Bandit Logo Design contest. The time has come for us to update the imagery related to the application, and we thought it'd be great to have both the new logo and the decision on what it should be in the hands of our users.


 

Categories: RSS Bandit

Robert Scoble has a post entitled Search for Toshiba music player demonstrates search engine weakness where he complains about the relevance of search results returned by popular web search engines. He writes

Think search is done? OK, try this one. Search for:

Toshiba Gigabeat

Did you find a Toshiba site? All I see is a lot of intermediaries.

I interviewed a few members of the MSN Search team last week and I gave them hell about this. When I'm writing I want to link directly to the manufacturer's main Web site about their product. Why? Because that's the most authoritative place to go.

But I keep having trouble finding manufacturer's sites on Google, MSN, and Yahoo.

Relevancy ratings on search engines still suck. Let's just be honest about it as an industry.

Can the search researchers find a better algorithm? I sure hope so.

Here, compare for yourself. If you're looking for the Toshiba site, did you find what you're looking for when you do searches Google ? On Yahoo ? On MSN ?

Here's the right answer: http://www.gigabeat.toshiba.com . Did you find it with any of the above searches? I didn't.

The [incorrect] assumption in Robert Scoble's post is that the most relevant website for a person searching for information on a piece of electronic equipment is the manufacturer's website. Personally, if I'm considering buying an MP3 player or other electronic equipment I'm interested in (i) reviews of the product and (ii) places where I can buy it. In both cases, I'd be surprised if the manufacturer's website were the best place to get either.

Relevancy of search results often depends on context. This is one of the reasons why the talk on Vertical Search and A9.com at ETech 2005 resonated so strongly with me. The relevancy of search results sometimes depends on what I want to do with the results. A9.com tries to solve this by allowing users to customize the search engines they use when they come to the site. Google has attempted to solve this by mixing in both traditional web search results with vertical results inline. For example, searching for MSFT on Google returns traditional search results and a stock quote. Also searching for "Seattle, WA" on Google returns traditional web search results and a map. And finally, searching for "Toshiba Gigabeat" on Google returns traditional web search results and a list of places where you can buy one.

Even with these efforts, it is unlikely any of them would solve the problem Scoble had as well as if he just used less ambiguous queries. For example, a better test of relevance is which search engine returns the manufacturer's website for the query "toshiba gigabeat website".

I found the results interesting and somewhat surprising. There definitely is a ways to go in web search.


 

Categories: Technology