There are two videos about MSN's AJAX efforts on Channel 9 today.

  1. Omar Shahine and team - New Hotmail "Kahuna": Hundreds of millions of people use Hotmail. Here's the first look at the next-generation of Hotmail, code-named "Kahuna."

    You meet the team, which is located at Microsoft's Silicon Valley campus, hear their design philosophy, and get a first look.

  2. Scott Isaacs - MSN DHTML Foundation unveiled: Scott Isaacs is one of the inventors of DHTML. He is one of Microsoft's smartest Web developers and built a framework that's being used on Start.com, the future Hotmail, and other places like the new gadgets in Windows Vista. Hope you enjoy meeting Scott; sorry for the bad lighting at the beginning. If you've done any AJAX development, you'll find this one interesting and you'll get a look at some of the bleeding-edge Web development that MSN is doing.

I've been using the Hotmail beta and it is definitely hot. My girlfriend saw me using it, and when I told her it was the next version of Hotmail, she told me to send hugs and kisses to the Hotmail team. Kudos to Omar, Aditya, Steve Kafka, Imran, Walter, Reeves and all the other folks at Hotmail who are making Kahuna happen.

Additionally, it looks like I'll be working on Hotmail features in our next release. So maybe I'll get some of those hugs and kisses next time. ;)


 

Categories: MSN

From the press release Microsoft Realigns for Next Wave of Innovation and Growth:

REDMOND, Wash. Sept. 20, 2005 — In order to drive greater agility in the execution of its software and services strategy, Microsoft Corp. today announced a realignment of the company into three newly formed divisions, each of which will be led by its own president.  The Microsoft Platform Products & Services Division will be led by Kevin Johnson and Jim Allchin as co-presidents; Jeff Raikes has been named president of the Microsoft Business Division; and Robbie Bach has been named as president of Microsoft Entertainment & Devices Division. In addition, the company said Ray Ozzie will expand his role as chief technical officer by assuming responsibility for helping drive its software-based services strategy and execution across all three divisions.

The company also announced that Allchin plans to retire at the end of calendar year 2006 following the commercial availability of Windows Vista™, the next-generation Microsoft® Windows® operating system....

Microsoft Platform Products & Services Division

Johnson will succeed Allchin, taking ownership of the Microsoft Platform Products & Services Division, which comprises Windows Client, Server and Tools, and MSN®. To ensure a smooth transition, Johnson and Allchin will serve as co-presidents until Allchin’s retirement next year. The new division’s mission is to enable exciting user experiences and drive customer value through continued innovation in the software platform and software services delivered over the Internet.

"We are focused on creating exciting user experiences and enabling developers to build great applications with the combination of software and software-based services," Ballmer said. "Our MSN organization has great expertise in innovating quickly and delivering software-based services at scale. The platform groups have great expertise in creating a software platform and user experience that touches millions of people. By combining these areas of expertise, we will deliver greater value to our customers. David Cole, senior vice president, will continue to lead MSN, reporting to Johnson.

It seems I was wrong about how long it would take MSN to become more like Windows. The fun thing about Microsoft is that just when you think you have the org chart figured out, we have a reorg.  :)



 

Categories: Life in the B0rg Cube

September 20, 2005
@ 01:06 PM

In a recent interview with Business Week, Microsoft's CEO stated

We certainly have the best pipeline of new innovation [over the next 12 months] we've ever had in our history.

I was thinking about that line on my drive back from work yesterday and I think he has a point. Over the next year or so Microsoft is going to ship Windows Vista, Office 12, IE 7, Visual Studio 2005, BizTalk Server 2006, SQL Server 2005, .NET Framework v2.0, Windows Communication Foundation (Indigo), Windows Presentation Foundation (Avalon), and Xbox 360, as well as the next iterations of various offerings from the MSN division including Hotmail, MSN Spaces, MSN Messenger, MSN Virtual Earth, Start.com, etc. That's a lot of software, probably more than the company has ever shipped in any 12 to 18 month period in its history.

I suspect that one of the interesting consequences of this will be a significant diffusion of talent across the company and perhaps across the industry. A lot of people have been working on big pieces of software for several years and will be looking for something new. The most restless have already started moving around (e.g. I went from the XML team to MSN, Joshua went from the XML team to Internet Explorer via the Passport team, and Michael went from the XML team to Xbox). I've started having more conversations with folks interested in a change and I expect this will only increase over the next 12 months. Definitely interesting times ahead.

On an unrelated note, I have updated the track of the week from my high school days on my space. Just click on the play button on the Windows Media player module to hear this week's track.


 

Categories: Life in the B0rg Cube

Below is an excerpt from a transcript of an interview with Bill Gates by Jon Udell during last week's Microsoft Professional Developers Conference (PDC).

JU: So a few people in the audience spontaneously commented when they saw the light version of the presentation framework, I heard the words "Flash competitor" in the audience. Do you think that's a fair observation? And do you think that that's potentially a vehicle for getting Avalon interfaces onto not just devices but non-Windows desktops? To extend the reach of Avalon that way?

BG: From a technology point of view, what the Windows Presentation Foundation/Everywhere thing does -- I think it was called Jolt internally. It overlaps what Flash does a lot. Now, I don't think anybody makes money selling lightweight presentation capability onto phones and devices and stuff like that. We're making this thing free, we're making it pervasive. I don't think anybody's business model is you get a bunch of royalties for a little presentation runtime. So there'll certainly be lots of devices in the world that are going to have Flash and they're going to have this WPF/E -- which they say they're going to rename, but that's the best they could do for now -- there'll be lots of devices that have both of those, but they don't conflict with each other. It's not like a device maker say -- oh my god, do I pick Flash, do I pick WPF/E? You can have both of those things and they co-exist easily. They're not even that big.

JU: And it's a portable runtime at this point, so is it something that conceivably takes XAML apps to a Mac desktop or a Linux desktop? Is that a scenario?

BG: The Mac is one of the targets that we explicitly talked about, so yes. Now it's not 100 percent of XAML, we need to be clear on that. But the portion of XAML we've picked here will be everywhere. Absolutely everywhere. And it has to be. You've got to have, for at least reading, and even some level of animation, you've got to have pervasiveness. And will there be multiple things that fit into that niche? Probably. Because it's not that hard to carry multiple...you as a user don't even know when you're seeing something that's WPF/E versus Flash versus whatever. It just works.

One of my concerns when it comes to the adoption of the Windows Presentation Foundation (formerly Avalon) has been the lack of cross-platform and cross-browser support. A couple of months ago, I wrote about this concern in my post The Lessons of AJAX: Will History Repeat Itself When it Comes to Adoption of XAML/Avalon?. Thus it is great to see that the Avalon folks have had similar thoughts and are working on a cross-platform story for the Windows Presentation Foundation.

I spoke to one of the guys behind WPF/E yesterday and it looks like they have the right goals. This will definitely be a project to watch.


 

Categories: Technology | Web Development

September 20, 2005
@ 12:22 PM

I have to agree with Robert Scoble that Google's blog search is not as good at link searching.

The only thing I use the various blog search engines like Feedster, Technorati, IceRocket and Google Blog Search for is finding references to my posts that may not have shown up in my referrer logs. Therefore, the only feature I care about is link searching, and my main quality criterion is how fresh the index is. Here, Bloglines Citations Search is head and shoulders above everything else out there today. I've been using the various blog search engines every day for the past few weeks and Bloglines is definitely at the head of the pack.

Compare and contrast,

  1. http://www.bloglines.com/citations?url=http://www.25hoursaday.com/weblog 

  2. http://blogs.icerocket.com/search?q=http://www.25hoursaday.com/weblog

  3. http://blogsearch.google.com/blogsearch?hl=en&q=link:www.25hoursaday.com/weblog

  4. http://www.technorati.com/search/www.25hoursaday.com/weblog


 

I mentioned last week that with traditional portal sites like MyMSN or MyYahoo, I can customize my data sources by subscribing to RSS feeds but not how they look. Instead, all my RSS feeds always look like a list of headlines. Start.com turns this model on its head. I can create an RSS feed and specify how it should be rendered on Start.com using JavaScript in extension elements, which effectively makes it a Start.com gadget, no different from the default ones provided by the site. For example, I could create an RSS feed of weekly weather reports and specify that it be rendered as a rich weather gadget within Start.com rather than as a plain list of headlines.

Scott Isaacs describes how the RSS extensions used by Start.com work in his post Declaring Gadgets for Start.com using "RSS". He writes

Introduction to the Gadget Manifest

First, let's look at the Gadget manifest format. For defining manifests, we basically reused the RSS schema.  This format decision was driven by the fact we already have a parser in Start.com's application model for RSS, there is broad familiarity with the RSS format, and I personally did not want to invent yet another schema :-). While we reused the RSS schema, we do recognize that these are not typical RSS feeds as they are not intended to be consumed and directly rendered by an aggregator. Therefore, we are considering whether we should use a different file extension or root element (e.g., replace RSS with Manifest) but still leverage the familiar tags. For the sake of simplicity, we chose to ship reusing RSS as the format and then listen to the community on how to proceed. We are very open to suggestions.

Looking at the Gadget manifest, we extended the RSS schema with one custom tag, and one custom attribute. We defined those under the binding namespace. Below is a sample Gadget manifest:

<?xml version="1.0"?>
<rss version="2.0" xmlns:binding="
http://www.start.com ">
   <channel>
      <title>Derived Hello World</title>
      <link>http://yourhomepage.com</link>
      <description>A sample hello world binding.</description>
      <language>en-us</language>
      <pubDate>Wed, 27 Apr 2005 04:00:00 GMT</pubDate>
      <lastBuildDate>Wed, 27 Apr 2005 04:00:00 GMT</lastBuildDate>
      <binding:type>Demo.MyHelloWorld</binding:type>
      <item>
         <link>http://siteexperts.com/bindings/MyHello.js</link>
      </item>
      <item>
         <link binding:type="inherit">http://siteexperts.com/bindings/hello.xml</link>
      </item>
      <item>
         <link binding:type="css">http://siteexperts.com/bindings/myHelloWorld.css</link>
      </item>
   </channel>
</rss>

Looking at the Gadget manifest, until we reach an RSS item, the semantics of the existing RSS tags are maintained. The title serves as the Gadget title, link typically points to your home page or a page about your Gadgets, description is your Gadget's description, and so on. The added binding:type element serves as the Gadget class to instantiate from the associated resources.

Looking at each item, we know that we left off the required title and description since this file is not intended to be directly viewed. However, adding those tags could be useful to help describe the resources being used.

The last change is that we added a binding:type attribute to each resource. We currently support three types: script (the default), css, and inherit. Inherit points to another "RSS" manifest file that will be further consumed.

Associating a Manifest with a Feed

Start.com supports loading stand-alone Gadgets directly from a manifest. In addition, you can now define a Gadget that presents a custom experience for your feed. This is very useful for a number of scenarios...The custom experiences are defined using the "RSS" manifest format described above. However, since these Gadgets for RSS feeds are driven by the feed itself, we needed to extend traditional RSS with a single extension that associates a manifest with the feed. We created a new channel element, binding:manifest, that can be included in any RSS feed. This element specifies the Gadget manifest to use for the feed.

<binding:manifest environment="Start" version="1.0">
 
http://siteexperts.com/bindings/rumorcity.xml
</binding:manifest>

We created this element so as not to couple it to any single implementation; hence the required environment attribute. Aggregators that understand the manifest tag can examine the environment value. If they support the specified environment, they can choose to present the custom experience.
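To make the association concrete, here is a sketch of what a feed carrying the extension might look like, based on Scott's description. The feed contents are made up for illustration; the manifest URL is the rumorcity.xml one from his example above.

<?xml version="1.0"?>
<rss version="2.0" xmlns:binding="http://www.start.com">
   <channel>
      <title>Rumor City</title>
      <link>http://siteexperts.com</link>
      <description>A feed with a custom Start.com experience</description>
      <!-- Tells Start.com which Gadget manifest to use for this feed -->
      <binding:manifest environment="Start" version="1.0">
         http://siteexperts.com/bindings/rumorcity.xml
      </binding:manifest>
      <item>
         <title>An ordinary item</title>
         <link>http://siteexperts.com/rumors/1</link>
         <description>Aggregators that ignore the extension still see a normal feed</description>
      </item>
   </channel>
</rss>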

Despite the fact that I kicked off some of the initial discussions with Steve Rider for what are now Start.com gadgets, I haven't paid much attention to the design since Start.com is a work in progress. Based on the current design, I have two primary pieces of feedback.

  1. I'd suggest picking a different namespace URI. XML namespace URIs usually point to documentation about the format; when they don't, it is often a cause of consternation among developers. For example, most XML namespaces used by Microsoft are in the schemas.microsoft.com domain and often point to schemas for the various Microsoft XML vocabularies; in the cases where they don't, it is likely that they will in the future. See http://www.google.com/search?q=+site:schemas.microsoft.com+%22schemas.microsoft.com%22 for some examples. A hypothetical declaration in this style is sketched after this list.

  2. If Gadget manifests aren't supposed to be consumed by traditional RSS aggregators, then Start.com should not use RSS as its manifest format. The value of using RSS is that even if a client doesn't understand your extensions, the feed is still useful. Start.com currently breaks that assumption, which to me is an abuse of RSS.
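As an illustration of the first point, a namespace declaration in that style might look something like the following. The URI here is purely hypothetical; the actual choice is of course up to the Start.com team.

<!-- Hypothetical namespace URI that could eventually point to a schema -->
<rss version="2.0"
     xmlns:binding="http://schemas.microsoft.com/start/2005/binding">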

Scott is currently seeking feedback for the Start.com RSS extensions and I'd suggest that interested parties should post some comments about what they like or dislike so far in response to his blog post.

Update: Since writing this post I've exchanged some mail with the Start.com team and in addition to my feedback we've discussed feedback from folks like Phil Ringnalda and James Snell. The Start.com team used RSS as the gadget manifest file format as an experiment in the spirit of the continuous experiment that is http://www.start.com. Based on the feedback from the community, alternatives will be considered and fully documented when the choices have been made. Given my experience in XML syndication technologies I'll be working with the Start.com team on exploring alternatives to the current techniques used for creating gadget manifests as well as documenting them.

Keep the feedback coming.


 

Recently, Sam Ruby announced that the Atom 0.3 syndication format would be deprecated by the Feed Validator. When I first read his post I half wondered what would happen if someone complained about being told their previously valid feed was no longer valid simply because it was now using an "old" format. This afternoon I found an email from Donald Knuth (yes, that one) to the www-validator@w3.org mailing list complaining about just that. In his note, Prof. Knuth writes

Dear Validators,

I've been happily using your service for many years --- even before w3c
took it over. I've had a collection of web pages at Stanford since
1995 or so; it now amounts to hundreds of pages, dozens of which have
tens of thousands of hits, several of which have hits in the millions.

Every time I make a nontrivial change, I've been asking the validator
to approve it. And every time, I've won the right to display the
"HaL HTML Netscape checked" logo.

Until today. Alluva sudden you guys have jerked the rug out from
under my feet.

I protest! I feel like screaming! Unfair!

I'm not accustomed to flaming, but I have to warn you that I am just
now more than a little hot under the collar and trying not to explode.

For years and years, I have started each webpage with the formula
I found in the book from which I learned HTML many years ago, namely
  <!DOCTYPE HTML PUBLIC "-//Netscape Comm. Corp.//DTD HTML//EN">

Today when I tried to validate a simple edit of one page, I found
that your system no longer is happy --- indeed, it hates every
one of my webpages. (If you need a URL, Google for "don" and take
the topmost page, unless you are in France.)

For example, it now finds 19 errors on my home page, which was 100%
valid earlier this month. The first error is "unknown parse mode!".
Apparently Stanford's Apache server is sending the page out as text/html.
You are saying text/html is ambiguous, but that you are going to continue
as if it were SGML mode. Fine; but if I get the Stanford folks to
change the MIME type to SGML mode, I'll still have 18 more errors.

The next error is "no DOCTYPE found". But guys, it is there as
plain as day. Henceforth you default to HTML 4.01 Transitional.

Then you complain that I don't give "alt" specifications with
any of the images. But the Netscape DTD I have used for more
than 3000 days does not require it.

Then you don't allow align="absmiddle" in an image.

I went to your help page trying to find another DTD that might
suit. Version 2.0 seemed promising; but no, it failed in other
ways --- like it doesn't know the bgcolor and text color attributes
in the <body> of my page.

Look folks, I know that software rot (sometimes called "progress")
keeps growing, and backwards compatibility is not always possible.
At one point I changed my TeX78 system to TeX82 and refused to
support the older conventions.

But in this case I see absolutely no reason why system people who
are supposedly committed to helping the world's users from all
the various cultures are suddenly blasting me in the face and
telling me that you no longer support things that every decent
browser understands perfectly well.

To change all these pages will cost me a week's time. I don't
want to delay The Art of Computer Programming by an unnecessary week;
I've been working on it for 43 years and I have 20 more years of work
to do, and who knows what illnesses and other tragedies are in store.
Every week is precious, especially when it seems to me that there
is no valid validation reason for a competent computer system person
to be so fascistic. For all I know, you'll be making me spend
another week on this next year, and another the year after that.

So, my former friends, please tell me either (i) when you are
going to fix the problem, or (ii) who is your boss so that I
can complain at a higher level.

Excuse me, that was a bit flamey wasn't it, and certainly egocentric.
But I think you understand why I might be upset.

Sincerely, Don Knuth


 

September 18, 2005
@ 04:15 AM

I've been a long-time skeptic when it comes to RDF and the Semantic Web. Every once in a while I wonder whether what I actually have a problem with is the W3C's vision of the Semantic Web, as opposed to RDF itself. However, in previous attempts to explore RDF I've been surprised to find that its proponents seem to ignore some of the real-world problems facing developers trying to use RDF as a basis for information integration.

Recently I've come across blog posts by RDF proponents who've begun to question the technology. The first is the blog post entitled Crises by Ian Davis, where he writes

We were discussing the progress of the Dublin Core RDF task force and there were a number of agenda items under discussion. We didn’t get past the first item though - it was so hairy and ugly that no-one could agree on the right approach. The essence of the problem is best illustrated by the dc:creator term. The current definition says An entity primarily responsible for making the content of the resource. The associated comments states Typically, the name of a Creator should be used to indicate the entity and this is exactly the most common usage. Most people, most of the time use a person’s name as the value of this term. That’s the natural mode if you write it in an HTML meta tag and it’s the way tens or hundreds of thousands of records have been written over the past six years...Of course, us RDFers, with our penchant for precision and accuracy take issue with the notion of using a string to denote an “entity”. Is it an entity or the name of an entity. Most of us prefer to add some structure to dc:creator, perhaps using a foaf:Person as the value. It lets us make more assertions about the creator entity.

The problem, if it isn’t immediately obvious, is that in RDF and RDFS it’s impossible to specify that a property can have a literal value but not a resource or vice versa. When I ask “what is the email address of the creator of this resource?” what should the (non-OWL) query engine return when the value of creator is a literal? It isn’t a new issue, and is discussed in-depth on the FOAF wiki.

There are several proposals for dealing with this. The one that seemed to get the most support was to recommend the latter approach and make the first illegal. That means making hundreds of thousands of documents invalid. A second approach was to endorse current practice and change the semantics of the dc:creator term to explicitly mean the name of the creator and invent a new term (e.g. creatingEntity) to represent the structured approach.
...
That’s when my crisis struck. I was sitting at the world’s foremost metadata conference in a room full of people who cared deeply about the quality of metadata and we were discussing scraping data from descriptions! Scraping metadata from Dublin Core! I had to go check the dictionary entry for oxymoron just in case that sentence was there! If professional cataloguers are having these kinds of problems with RDF then we are fucked.

It says to me that the looseness of the model is introducing far too much complexity as evidenced by the difficulties being experienced by the Dublin Core community and the W3C HTML working group. A simpler RDF could take a lot of this pain away and hit a sweet spot of simplicity versus expressivity.
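To make the dc:creator ambiguity concrete, here is a sketch of the two competing forms Ian describes; the resource and person are made up for illustration.

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">

   <!-- Common practice: the value of dc:creator is a literal string -->
   <rdf:Description rdf:about="http://example.org/report">
      <dc:creator>John Smith</dc:creator>
   </rdf:Description>

   <!-- The structured alternative: the value is a resource about
        which further assertions can be made -->
   <rdf:Description rdf:about="http://example.org/report">
      <dc:creator>
         <foaf:Person>
            <foaf:name>John Smith</foaf:name>
            <foaf:mbox rdf:resource="mailto:john@example.org"/>
         </foaf:Person>
      </dc:creator>
   </rdf:Description>
</rdf:RDF>

A query like "what is the email address of the creator of this resource?" can traverse the second form but has nowhere to go in the first, which is exactly the problem the Dublin Core task force was wrestling with.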

Ian Davis isn't the only RDF head wondering whether there is too much complexity involved when trying to use RDF to get things done. Uche Ogbuji also has a post entitled Is RDF moving beyond the desperate hacker? And what of Microformats? where he writes

I've always taken a desperate hacker approach to RDF. I became a convert to the XML way of expressing documents right away, in 1997. As I started building systems that managed collections of XML documents I was missing a good, declarative means for binding such documents together. I came across RDF, and I was sold. I was never really a Semantic Web head. I used RDF more as a desperate hacker with problems in a fairly well-contained domain.
...
I've developed an overall impression of dismay at the latest RDF model semantics specs. I've always had a problem with Topic Maps because I think that they complicate things in search of an unnecessary level of ontological purity. Well, it seems to me that RDF has done the same thing. I get the feeling that in trying to achieve the ontological purity needed for the Semantic Web, it's starting to leave the desperate hacker behind. I used to be confident I could instruct people on almost all of RDF's core model in an hour. I'm no longer so confident, and the reality is that any technology that takes longer than that to encompass is doomed to failure on the Web. If they think that Web punters will be willing to make sense of the baroque thicket of lemmas (yes, "lemmas", mi amici docte) that now lie at the heart of RDF, or to get their heads around such bizarre concepts as assigning identity to literal values, they are sorely mistaken. Now I hear the argument that one does not need to know hedge automata to use RELAX NG, and all that, but I don't think it applies in the case of RDF. In RDF, the model semantics are the primary reason for coming to the party. I don't see it as an optional formalization. Maybe I'm wrong about that and it's the need to write a query language for RDF (hardly typical for the Web punter) that is causing me to gurgle in the muck. Assuming it were time for a desperate hacker such as me to move on (and I'm not necessarily saying that I am moving on), where would he go from here?

Uche is one of the few RDF heads whose opinions seem grounded in practicality (Joshua Allen is another) so it is definitely interesting to see him begin to question whether RDF is the right path.

I definitely think there is some merit to disconnecting RDF from the Semantic Web and seeing whether it can stand on its own from that perspective. For example, XML as a Web document format was mostly dead on arrival, but it has found a wide variety of uses as a general data interchange format instead. I've wondered if there is similar usefulness lurking within RDF once it loses its Semantic Web baggage.


 

Categories: Web Development | XML

September 16, 2005
@ 05:27 PM

The announcements about Microsoft's LINQ project just keep getting better and better. In his post XML, Dynamic Languages, and VB, Mike Champion writes

Thursday at PDC saw lots of details being put out about another big project our team has been working on -- the deep support for XML in Visual Basic 9...On the VB9 front, the big news is that two major features beyond and on top of LINQ will be supported in VB9:

"XML Literals" is  the ability to embed XML syntax directly into VB code. For example,

Dim ele as XElement = <Customer/>

Is translated by the compiler to

Dim ele as XElement = new XElement("Customer")

The syntax further allows "expression holes" much like those in ASP.NET where computed values can be inserted.

"Late Bound XML" is the ability to reference XML elements and attributes directly in VB syntax rather than having to call navigation functions.  For example

Dim books as IEnumerable(Of XElement) = bib.book

Is translated by the compiler to

Dim books as IEnumerable(Of XElement) = bib.Elements("book")

 We believe that these features will make XML even more accessible to Visual Basic's core audience. Erik Meijer, a hard core languages geek who helped devise the Haskell functional programming language and the experimental XML processing languages X#, Xen, and C-Omega, now touts VB9 as his favorite.
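The "expression holes" Mike mentions are worth a quick illustration. Based on what was demonstrated at the PDC, embedding computed values in an XML literal might look something like the following; this is pre-release syntax, so the details may well change before VB9 ships.

' A sketch of a VB9 XML literal with expression holes (pre-release syntax)
Dim name As String = "Northwind"
Dim cust As XElement = <Customer id=<%= 42 %>><%= name %></Customer>

' which the compiler would translate to roughly
Dim cust2 As XElement = New XElement("Customer", New XAttribute("id", 42), name)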

Erik Meijer and I used to spend a lot of time talking about XML integration into popular  programming languages back when I was on the XML team. In fact, all the patent cubes on my desk are related to work we did together in this area. I'm glad to see that some of the ideas we tossed around are going to make it out to developers in the near future. This is great news.


 

Categories: XML

A few months ago, in my post GMail Domain Change Exposes Bad Design and Poor Code, I wrote "Repeat after me, a web page is not an API or a platform." It seems some people are still learning this lesson the hard way. In the post The danger of running a remix service, Richard MacManus writes

Populicio.us was a service that used data from social bookmarking site del.icio.us, to create a site with enhanced statistics and a better variety of 'popular' links. However the Populicio.us service has just been taken off air, because its developer can no longer get the required information from del.icio.us. The developer of Populicio.us wrote:

"Del.icio.us doesn't serve its homepage as it did and I'm not able to get all needed data to continue Populicio.us. Right now Del.icio.us doesn't show all the bookmarked links in the homepage so there is no way I can generate real statistics."

This plainly illustrates the danger for remix or mash-up service providers who rely on third party sites for their data. del.icio.us can not only giveth, it can taketh away.

It seems Richard MacManus has missed the point. The issue isn't depending on a third-party site for data. The problem is depending on screen scraping its HTML web page. An API is a service contract, which is unlikely to be broken without warning. A web page can change at the whim of the webmaster or graphic designer behind the site.

Versioning APIs is hard enough, let alone trying to figure out how to version an HTML website so screen scrapers don't break. Web 2.0 isn't about screen scraping, and turning the Web into an online platform isn't about legitimizing bad practices from the early days of the Web. Screen scraping needs to die a horrible death. Web APIs and Web feeds are the way of the future.


 

Categories: Web Development