It looks like Windows Live Shopping is finally live. From the blog post entitled Ta Da! on the Windows Live Shopping team's blog we get the following excerpt:

Today we launch the brand new Windows Live Shopping site!

What is it? It is the beta launch of Microsoft’s Web 2.0 shopping experience, featuring one of the world’s largest product catalogs, user-created content and an easier-to-use interface built on 100% AJAX technology. It uses a unified shopping engine to search or browse almost 40 million products from 7,000 stores ranging from many of the country’s leading retailers to eBay. Results are displayed in an order that is not affected by advertising; merchants cannot pay to have their items show up closer to the top. Users will be able to drag-and-drop items to a shopping list and share lists with friends; see user reviews of products and sellers; and read and create public shopping guides on any subject.

You can get more of an inside perspective on the new service from Ian McAllister's blog post entitled Windows Live Shopping Beta Has Hatched, where he talks about some of the thinking that led to the creation of the service.

Unfortunately, as noted by Mike Arrington in his post Microsoft Live Shopping Launches - But No Firefox, the site doesn't support Firefox. This is a known issue and one the team will address in the future. I personally think they should have waited until Firefox support was working. As Mike Arrington points out, a lot of geeks and power users have switched from IE to Firefox. Mike states that 70% of TechCrunch's traffic is from Firefox users. In December 2005, Boing Boing stated that more of their readers use Firefox than IE.

Luckily some folks from the IE team helped me fix my IE 7 problems and I got to try out the service. The user interface is definitely snazzy in the way that all Windows Live services have become. Dragging and dropping items into a shopping list is a neat touch as is the slider that lets you control the amount of detail or images in the search results. It doesn't seem that the search index is quite populated yet. Below are search results for an item I've been wanting to buy for the past few weeks [and just purchased after running these searches] from eBay, Froogle, Windows Live Shopping and Yahoo! Shopping.

  1. Search for "transformers decal" on eBay
  2. Search for "transformers decal" on Froogle
  3. Search for "transformers decal" on Windows Live Shopping
  4. Search for "transformers decal" on Yahoo Shopping

How would you rank the quality and quantity of those results?


 

Categories: Windows Live

Somewhere along the line it seems like I downloaded one too many internal builds of Internet Explorer 7 and hosed my browser setup. Since I hate mucking around with re-installing, rebooting and registry tweaks, I decided to use Firefox as my full-time browser instead. Not only have I not missed IE 7, I've fallen in love with certain Firefox extensions like SessionSaver, which recovers all open browser tabs and form content in case you have a browser crash. This has proved to be a godsend when I come in during the morning and our IT department has rebooted my machine due to some security patch. All I need to do is fire up Firefox and have all my open tabs from the previous day show up in the browser.

The only problem I've had has been with constantly being prompted to enter my username and password on every Sharepoint page I browse to on our corporate network. I had almost resigned myself to wasting my morning trying to fix my hosed IE 7 install until it occurred to me to search the Web for a solution. That's how I found out about the network.automatic-ntlm-auth.trusted-uris preference. Once I tweaked this preference, it was all good.
...
Except for the fact that Sharepoint seems to use a bunch of Javascript that only works in Internet Explorer so I'm still going to have to muck with my Internet Explorer install anyway. 

*sigh*

At least I can browse different pages without that prompt showing up all the time.  I hope this information is useful to some other poor soul who's trying to use Firefox on an intranet that requires NTLM authentication.
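
For anyone else who hits this, here's a sketch of the tweak. The domain names below are hypothetical placeholders for your own intranet hosts. You can flip the preference via about:config or add a line like this to the user.js file in your Firefox profile directory:

// user.js: comma-separated list of sites Firefox is allowed to answer
// with automatic NTLM authentication (replace with your own intranet hosts)
user_pref("network.automatic-ntlm-auth.trusted-uris", "http://sharepoint.example.com,http://intranet.example.com");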


 

Categories: Web Development

April 27, 2006
@ 05:04 AM

From Niall Kennedy's blog post entitled Facebook enters the workplace we learn

Popular social networking site Facebook is moving beyond schools and into the workplace. A new version of the site went live this morning allowing new registrations on corporate e-mail addresses. I was able to sign up using my Microsoft address and completed my profile.

Facebook at work

From the press release Microsoft Spins Out a Wallop we learn

Microsoft Corp. today announced the spinout of a new social networking technology, developed by Microsoft Research, to create a new Silicon Valley startup, Wallop Inc. Wallop, whose aim is to deliver the next generation of social computing, is led by experienced entrepreneur and CEO Karl Jacob, with 30-year veteran Bay Partners providing Series A financing.
...
Wallop Advances Social Networking
Launching later this year, Wallop solves the problems plaguing current social networking technologies and will introduce an entirely new way for consumers to express their individuality online. For example, today’s social networks have difficulty enabling people to interact in a way similar to the way they would in the real world. Wallop tapped legendary Frog Design Inc. to conceive a next-generation user interface enabling people to express themselves like never before. In addition, Wallop departs from the friend-of-a-friend model common in all social networks today and the root of many of their problems. Instead, Wallop developed a unique set of algorithms that respond to social interactions to automatically build and maintain a person’s social network.

Interesting moves in the world of Social Networking. Facebook moving into competition with LinkedIn is unsurprising. Microsoft spinning off Wallop so that it may eventually become a competitor to MSN Spaces was not. I guess I'm not as plugged in as I thought at work.


 

Categories: Social Software

Last week there was an outage on NewsGator Online. This outage didn't just affect people who use NewsGator Online but also users of their desktop readers such as FeedDemon, which synchronize the user's feed state between the desktop and the web-based reader.

In his post Dealing with Connectivity Issues in Desktop Applications Nick Bradbury writes

One of the more frustrating challenges when designing a desktop application that connects to the Internet is figuring out how to deal with connectivity issues caused by firewalls, proxy servers and server outages.
...
And as we discovered last week, when your application relies on a server-side API, it has to be able to deal with the server being unavailable without significantly impacting the customer. This was something FeedDemon 2.0 failed to do, and I have to take the blame for this. Because of my poor design, synchronized feeds couldn't be updated while our server was down, and to make matters worse, FeedDemon kept displaying a "synchronization service unavailable" message every time it tried to connect - so not only could you not get new content, but you were also bombarded with error messages you could do nothing about.

A couple of months ago I wrote a blog post entitled The Newsgator API Continues to Frustrate Me where I complained about the fact that Newsgator Online assumes that clients that synchronize with it actually just fetch all their data from Newsgator Online, including feed content. This is a bad design decision because it means that they expected all desktop clients that synchronize with the web-based reader to have a single point of failure. As someone whose day job is working on the platforms that power a number of Windows Live services, I know from experience that service outages are a fact of life. In addition, I also know that you don't want clients making requests to your service unless they absolutely have to. This is not a big deal at first, but once you get enough clients you start wanting them to do as much data retrieval and processing as they can without hitting your service. Having a desktop feed reader rely on a web service for fetching feeds instead of having it fetch feeds itself needlessly increases the costs of running your online service and doesn't buy your customers a significantly improved user experience. A sketch of the alternative design I have in mind follows below.
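
To make this concrete, here's a minimal sketch of the division of labor I'm arguing for, where the desktop reader fetches feed content directly from each publisher and only exchanges lightweight state with the online service. The ISyncService interface and everything on it is hypothetical, a stand-in for whatever state API the web-based reader exposes, not the actual Newsgator API.

using System;
using System.Net;

// Hypothetical state-synchronization API exposed by the online reader.
interface ISyncService
{
    Uri[] GetSubscriptions();                        // in practice, cached locally
    void SynchronizeReadState(string[] readItemIds); // read/unread flags only
}

class FeedSynchronizer
{
    readonly ISyncService service;

    public FeedSynchronizer(ISyncService service)
    {
        this.service = service;
    }

    public void UpdateFeeds()
    {
        foreach (Uri feedUrl in service.GetSubscriptions())
        {
            try
            {
                // Fetch content straight from each publisher; the online
                // service being down doesn't block this step.
                WebRequest request = WebRequest.Create(feedUrl);
                using (WebResponse response = request.GetResponse())
                {
                    // ... parse the feed and merge new items locally ...
                }
            }
            catch (WebException)
            {
                // One publisher being down only affects that one feed.
            }
        }

        try
        {
            // State synchronization is a separate, optional step: if the
            // service is unavailable the user still gets fresh content,
            // just stale read/unread flags, and one quiet failure instead
            // of a bombardment of error dialogs.
            service.SynchronizeReadState(new string[0]);
        }
        catch (WebException)
        {
            // Retry on the next update cycle instead of alerting the user.
        }
    }
}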

I've bumped into Greg Reinacker since I complained about the Newsgator API and he's been adamant about the correctness of their design decisions. I hope the fallout from the recent outage makes them rethink some of the design of Newsgator's RSS platform.


 

In the blog post entitled Rapleaf to Challenge eBay Feedback, Mike Arrington talks about the newly formed Rapleaf, which aims to build a competitor to eBay's feedback system. This idea shows a lot of insight on the part of the founders. The value that eBay provides to sellers and buyers lies primarily in its reputation system, not in its role as a venue for auctions. The network effects inherent in eBay's reputation system make it the ultimate kind of lock-in. No power seller or buyer will look at alternatives, even free ones like Yahoo! Auctions, because they don't want to start from scratch with the reputation they've built or trust trading with people whose reputations haven't been built. However, it isn't a slam dunk that Rapleaf will be successful.

In his post entitled Rapleaf's Fatal Flaws Ian McAllister of Windows Live Shopping writes

Flaw #1 - Transaction Unaware
Rapleaf is not in the middle of transactions. They have no way to determine if a transaction between two parties actually took place. Co-founder Auren Hoffman claims that their sophisticated human and machine-based fraud detection will be able to detect fraud but to me this seems like complete hand-waving...The success of eBay's feedback system rests completely on the fact that they attach feedback only to completed transactions where eBay collects money via commission.
 
Flaw #2 - Cold Start
Every new startup has a cold start problem and must build users, customers, partners, etc. from the ground up but Rapleaf has the mother of all cold start problems. The post mentioned nothing about how they plan to build mindshare in the market and I think they'll be dead in the water if they expect users to start going to www.rapleaf.com in droves all of a sudden and being keen to trust one of the 342 Rapleaf trusted sellers based on 2 items of feedback not attached to any verified transaction.

Flaw #2 was something I'd considered but Flaw #1 didn't even occur to me. Now that I consider it, I can't see how they can be successful as a competitor of companies like eBay since they aren't part of the transaction. It would seem to make more sense for them to be a partner of eBay except that there is no incentive for eBay to partner with them and thus provide an avenue out of the lock-in of eBay's feedback system.

Does this mean Rapleaf is DOA?


 

April 24, 2006
@ 02:25 PM

I read a number of news stories last week about Microsoft hiring a former exec from Ask.com to run MSN. A number of these news sources and corresponding blog posts got the story wrong in one way or another.

In her news story entitled Former Ask.com president will join Microsoft Kim Peterson of the Seattle Times wrote

Microsoft has hired the former president of search rival Ask.com to run its online business group, overseeing the MSN and Windows Live units and playing a big role in the company's move to the Web.

In the Reuters news story entitled Microsoft hires CEO of Ask.com to head Web unit it states

Software giant Microsoft Corp. said on Friday it hired away Steve Berkowitz, the chief executive of rival Internet company Ask.com, to head Microsoft's own Internet business.

In her blog post entitled CEO of Ask.com moves to Microsoft Charlene Li of Forrester Research wrote

Most importantly, Microsoft is taking a very important step in putting ALL of the hot consumer products under one team. Live.com is at the core of Microsoft's turnaround -- it represents fast development cycles and a totally new approach to addressing the marketplace. At the same time, Microsoft can't turn its back on the advertising juggernaut of MSN.com. In the past year, there's been uncertainty about how MSN.com and Live.com will work together. Having them all come together under Steve will be a first step in addressing the concerns of the MSN.com group while maintaining Live.com's momentum.

The excerpts above contain statements that are at best misleading. I'm not singling out the above news publications and bloggers; almost every article or blog post I read about Steve Berkowitz being hired gave the same misleading impression.

Why are they misleading? That's easy. Let's go back to the Microsoft press release Microsoft Realigns Platforms & Services Division for Greater Growth and Agility, which breaks out Microsoft's internet business into the following three pieces:

Windows and Windows Live Group
With Sinofsky in charge, the Windows and Windows Live Group will have engineering teams focused on delivering Windows and engineering teams focused on delivering the Windows Live experiences. Sinofsky will work closely with Microsoft CTO Ray Ozzie and Blake Irving to support Microsoft’s services strategy across the division and company.

Windows Live Platform Group
Blake Irving will lead the newly formed Windows Live Platform Group, which unites a number of MSN teams that have been building platform services and capabilities for Microsoft’s online offerings. This group provides the back-end infrastructure services, platform capabilities and global operational support for services being created in Windows Live, Office Live, and other Microsoft and third-party applications that use the Live platform. This includes the advertising and monetization platforms that support all Live service offerings.

Online Business Group
The new Online Business Group includes advertising sales, business development and marketing for Live Platforms, Windows Live and MSN — including MSN.com, MSNTV and MSN Internet Access. David Cole, senior vice president, will lead this group until his successor is named before his leave of absence at the end of April.

That's right, three pieces, each with its own corporate vice president. So Charlene Li isn't quite right when she says that MSN.com and Live.com are now aligned under Steve Berkowitz. Instead what's being aligned under him is the business development and marketing for both sites. The platform that powers Live.com should be under Blake Irving while the actual website development is under Steven Sinofsky.

I'm sure that makes as much sense to you as it does to me. However according to the press release, this organizational structure will increase Microsoft's agility in delivering innovation to customers.

I can't wait.


 

April 24, 2006
@ 01:59 PM

It's MS Poll season. This is the time of year when our employer encourages us to fill out an opinion poll on how we feel about our day jobs and the company in general. Besides Mini-Microsoft, I've seen a couple of the introspective posts about working in the B0rg cube I expect to see during this season, such as Robert Scoble's How Microsoft can shut down Mini-Microsoft and Mike Torres's Playing to "not lose".

I don't really have anything introspective to add to what they've written. I probably won't fill out MS Poll this year since it's always felt to me like a pointless opinion poll. If my management can't tell what I like or dislike about working here then it's a screw up on both our parts which won't be fixed by a hastily filled out opinion poll.

I did find an interesting comment by Leah Pearlman on Mike Torres's post that I felt compelled to talk about. She wrote

Re: Innovation. Hmm. I agree and disagree. I agree from the standpoint of a Microsoft employee who wants to work on innovative things.  I agree with you that there’s been too much talk about “how to beat the competition.” Reinventing the wheel because Yahoo! and Google have wheels doesn't get me out of bed in the morning. But! (you knew it was coming) My opinion lately has been that there's too much emphasis put on innovation at Microsoft, and it comes at the expense of fundamentals, intuitiveness, simplicity.  Often times there are great reasons why our competitors have done certain things , and I see people carelessly disregard these things in the name of innovation.

Innovation is one of those words Microsoft has killed. What has begun to irritate me is when people describe what is basically a new feature in their product as an innovation. To start off, by definition, since you work at a big software company nothing you work on is innovative. Even Google, which used to be held up as the poster child of innovation in the software industry, has been reduced to copying Yahoo! services and liberally sprinkling them with AJAX as it has grown bigger.

Often when I hear people claiming that the new feature in their Microsoft product is an innovation it just makes them look ignorant. Most of their innovations are either (i) already shipping in products offered by competitors or startups that anyone who reads TechCrunch is aware of or (ii) also being worked on by some poor slobs at AOL/Google/Yahoo! who also think their feature is extremely innovative. As Jeremy Zawodny pointed out in his post Secrets of Product Development and What Journalists Write

Larger companies rarely can respond that quickly to each other. It almost never happens. Sure, they may talk a good game, but it's just talk. Building things on the scale that Microsoft, Google, AOL, or Yahoo do is a complex process. It takes time.

Journalists like to paint this as a rapidly moving chess game in which we're all waiting for the next move so that we can quickly respond. But the truth is that most product development goes on in parallel. Usually there are people at several companies who all have the same idea, or at least very similar ones. The real race is to see who can build it faster and better than the others.

The culture of bragging about dubious innovations likely springs from the need to distinguish yourself from the pack in a reward culture that takes dog-eat-dog to another level. Either way, do me a favor. Stop calling your new features innovations. They aren't.

Thanks for listening


 

Categories: Life in the B0rg Cube

April 23, 2006
@ 06:59 AM

I've posted a number of blog entries in the past about how popular various blogs on MSN Spaces are, especially the Asian ones. Unsurprisingly, it's taken some of the folks from the insular geek blog set a while to notice this trend. Recently, Scott Karp wrote about this in a blog post entitled Technorati Top 100 Is Changing Radically, which was followed up by a blog post entitled Get on MSN Spaces in Asia and watch the link-love pile up. Sort of by Chris Edwards.

Both blog posts are interesting because the authors refuse to believe that it is possible for blogs they haven't heard of from Asian countries like China and Japan to be more popular than A-list technology bloggers like Dave Winer. In his post, Chris Edwards points out that the incoming links for blogs like M¥$ŤěяĬǾũ§ ĢÎѓĻ contain blogs that only link to the Space via the Recently Updated Spaces module. At first glance this seems to be true. When Technorati first started tracking MSN Spaces, we realized this module would be a problem and added rel='nofollow' to all links to spaces from this module. This means that search engines and web crawlers should not consider these links as 'votes' for the site for page ranking purposes.
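
For those unfamiliar with the mechanism, a link from that module ends up looking something like the following sketch (the space URL is a made-up example):

  <a href="http://spaces.msn.com/members/example-user" rel="nofollow">Example User's Space</a>

The rel='nofollow' attribute tells search engines and web crawlers not to treat the link as an endorsement of the page it points to.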

Ignoring that particular space, there are still a number of spaces in the Technorati Top 100 whose most recent links don't come from the Recently Updated Spaces module. For example, check out the incoming links to http://spaces.msn.com/MSN-SA, http://spaces.msn.com/atiger and http://spaces.msn.com/members/thespacecraft (MSN Spaces team blog).

As much as it seems to bother some technology geeks, a number of blogs hosted on MSN Spaces are more popular than so-called A-list technology bloggers.


 

Categories: Windows Live

From the Reuters article Microsoft heads to college to pitch Windows Live we get the following excerpt

The decision to outsource the University of Texas-Pan American's 17,000 student e-mail accounts to Microsoft Corp. for free was a simple one for Gary Wiggins, the school's top IT administrator.

Students hated the existing system and its limited storage, lack of features -- like a calendar, for example -- and cumbersome user interface.

"The legacy system we were moving from was so bad that the new features were very well-accepted," said Wiggins, who is the school's vice president for information technology.

The school could still create e-mail addresses ending in utpa.edu and many students were already familiar with Microsoft's Hotmail e-mail service.

The University of Texas Pan-American is not alone in linking up with Microsoft. The world's largest software maker has clinched deals to host e-mail systems for 72 institutions around the world and is in active discussions to add almost 200 more schools.

Microsoft sees its push onto college campuses as a way to promote its new Windows Live platform, an advertising-funded one-stop shop for Microsoft's Web services from e-mail to news to instant messaging to blogs.

The Windows Live @edu folks have done quite a bit over the past few months. I totally dig what we are doing with projects like theirs and Windows Live Custom Domains. I've actually started factoring in their scenarios when thinking about the next generation of Windows Live communication services we'll be building. The more Windows Live services we get to participate in this the better. Being able to give people email and IM accounts using my own domain is a great first step but there are a bunch more things I'd like to see.


 

Categories: Windows Live

One of the devs on Windows Live Favorites just snuck me the following screenshot

Sweet, huh?


 

Categories: Windows Live

April 20, 2006
@ 06:10 PM

Tim Bray has a post entitled The Cost of AJAX where he writes

James Governor relays a question that sounds important but I think is actively dangerous: do AJAX apps present more of a server-side load? The question is dangerous because it’s meaningless and unanswerable. Your typical Web page will, in the process of loading, call back to the server for a bunch of stylesheets and graphics and scripts and so on: for example, this ongoing page calls out to three different graphics, one stylesheet, and one JavaScript file. It also has one “AJAXy” XMLHttpRequest call. From the server’s point of view, those are all just requests to dereference one URI or another. In the case of ongoing, the AJAX request is for a static file less than 200 bytes in size (i.e. cheap). On the other hand, it could have been for something that required a complex outer join on two ten-million-row tables (i.e. very expensive). And one of the virtues of the Web Architecture is that it hides those differences, the “U” in URI stands for “Uniform”, it’s a Uniform interface to a resource on the Web that could be, well, anything. So saying “AJAX is expensive” (or that it’s cheap) is like saying “A mountain bike is slower than a battle tank” (or that it’s faster). The truth depends on what you’re doing with it. 

In general I agree with Tim that the answer to questions like "is technology X slower than technology Y" depends on context. The classic example is that it makes little sense to argue about the speed of a programming language if an application is data-bound and has to read a lot of stuff off the disk. However when it comes to AJAX, I think that in general there is usually more load put on servers due to the user interface metaphors AJAX encourages. Specifically, AJAX enables developers to build user interfaces that allow the user to initiate multiple requests at once where only one request could have been made in the traditional HTML model. For example, if you have a UI that makes an asynchronous request every time a user performs a click to drill down on some data (e.g. view comments on a blog post, tags on a link, etc.) where it used to transition the user to another page, then it is more likely that you will have an increased number of requests to the server in the AJAX version of your site. Some of the guys working on Windows Live have figured that out the hard way. :)
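
Here's a sketch of the difference in request patterns I'm describing, with hypothetical URLs. In the traditional model the drill-down was a single page transition; in the AJAX version the page stays put and every click fires its own background request:

  Traditional model (one request serving a full page):
    GET /blog/post.aspx?id=1234&comments=true

  AJAX model (one request per click):
    GET /blog/comments.ashx?post=1234    (user clicks 'view comments')
    GET /blog/tags.ashx?post=1234        (user clicks 'show tags')
    GET /blog/related.ashx?post=1234     (user clicks 'related posts')

Each individual AJAX response is smaller, but the number of hits your servers take tends to go up.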


 

Categories: Web Development

April 20, 2006
@ 05:38 PM

For some reason, the following story on Slashdot had me cracking up

Microsoft Plans Gdrive Competitor

Posted by samzenpus on Wednesday April 19, @09:13PM
from the personal-virtual-bill dept.

gambit3 writes "From Microsoft Watch: The MSN team is working on a new Windows Live service, code-named Live Drive, that will provide users with a virtual hard drive for storing hosted personal data. From early accounts, it sounds an awful lot like Gdrive, the still-as-yet-publicly-unannounced storage service from Google."

I have to agree with Mike Torres, 2006 is really 1998 in disguise. With the release of Google Page Creator, Google Finance, Google Calendar and the upcoming GDrive (aka Yahoo! GeoCities, Yahoo! Finance, Yahoo! Calendar and Yahoo! Briefcase knockoffs) it is now clear to me that Google's master plan is to become Yahoo! 2.0. On the other hand, with some of the recent Windows Live announcements we seem to be giving the impression that we are chasing Google's tail lights, while Google chases Yahoo!'s tail lights, and Yahoo! in turn chases the tail lights of various 'Web 2.0' startups. Crazy.

I wonder what Google will do when they run out of Yahoo! services to copy?


 

April 20, 2006
@ 05:15 PM

Joe Gregorio has a post about the Google Data APIs Protocol where he points out that Google has released the Google Data APIs Protocol which, for all intents and purposes, is an Atom Store (i.e. an Atom Publishing Protocol service mixed with OpenSearch). I took a glance at the Google Data APIs overview documentation and it states the following:

GData is a new protocol based on Atom 1.0 and RSS 2.0.

To acquire information from a service that supports GData, you send an HTTP GET request; the service returns results as an Atom or RSS feed. You can update data (where supported by a particular GData service) by sending an HTTP PUT request, an approach based on the Atom Publishing Protocol.

All sorts of services can provide GData feeds, from public services like blog feeds or news syndication feeds to personalized data like email or calendar events or task-list items. The RSS and Atom models are extensible, so each feed provider can define its own extensions and semantics as desired. A feed provider can provide read-only feeds (such as a search-results feed) or read/write feeds (such as a calendar application).

On the surface, this looks like it aims to solve the same problems that Microsoft's Simple Sharing Extensions for RSS and OPML (championed by Ray Ozzie) aims to solve. At first glance, I think I prefer it to RSS-SSE because it is explicitly about two-way interaction as well as one-way interaction. RSS-SSE provides a good solution if I am a client application synchronizing information from a master source such as my online calendaring application, but it is still unclear to me how I use the mechanics of RSS-SSE to push my own updates to the server. Of course, RSS-SSE is better for pure synchronization, but GData looks like it would be a better fit for a generic read/write API for data stores on the Web, the same way RSS has become a generic read API for data stores on the Web.
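
To make the read/write model concrete, here's a minimal sketch of the kind of HTTP exchange the documentation describes. The URLs and payloads below are hypothetical, not taken from an actual GData service:

  Reading: an ordinary GET returns an Atom feed

    GET /feeds/calendar/someuser/private/full HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    Content-Type: application/atom+xml

    <feed xmlns="http://www.w3.org/2005/Atom"> ... entries ... </feed>

  Writing: an edited Atom entry is PUT back to the entry's own URI, following the Atom Publishing Protocol model

    PUT /feeds/calendar/someuser/private/full/entry123 HTTP/1.1
    Host: www.example.com
    Content-Type: application/atom+xml

    <entry xmlns="http://www.w3.org/2005/Atom"> ... updated entry ... </entry>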

This definitely has Adam Bosworth written all over it. About a year ago, it was obvious from certain posts on the atom-protocol mailing list that Adam had some folks at Google working on using the Atom Publishing Protocol as a generic API for Web stores. I'm surprised it's taken this long for the project to make its way out of the Googleplex. This would be a good time to familiarize yourself with Adam's paper, Learning from THE WEB.

PS: 'Google Data APIs Protocol' is a horrible name. You can tell that ex-Microsoft employees had a hand in this effort. ;)


 

Categories: XML Web Services

April 18, 2006
@ 06:25 PM

I was saddened when I found out that Gretchen Ledgard was leaving Microsoft. We go back a bit given that her husband and I interned at the same company back in 1999, which is when we first met. Once I found out what she was going to be doing next, I was quite happy to see her take this step.

From the blog post on the JobSyntax blog entitled And so it begins ...

Well, hey there. Gretchen here.  After a long hiatus :), I am back in the blogosphere. Did you miss me?

Let me be the first to officially welcome you to our new home, the JobSyntax Blog.  I’ve got so much to write, but I’ll need to save all the juicy news and advice for future blog entries. So I’ll use this first entry to tell you a little bit about how we got here. It’s been a long, crazy journey.

As long-time readers know, until a few days ago,  I was a Microsoft recruiter.  About 2 ½ years ago, I was hand-picked to join a start-up team within the company’s recruiting organization. Also chosen for that team was Zoe Goldring.  We were tasked with building a community and pipeline of qualified and interested software developers for the company’s openings. (Is this starting to sound like the pilot for a cheesy sitcom yet?) :)

Two years ago, Zoe and I founded the Technical Careers @ Microsoft weblog (JobsBlog) as a way to connect with the developers who were latching onto the blogging phenomenon. Quite honestly, we had no idea what we were doing and definitely not what we were getting ourselves into.

Being the public faces for Microsoft’s technical recruiting effort was both extremely exhilarating and challenging at the same time. Personally, I loved that I had such a positive impact on so many applicants. I joined the recruiting industry because I wanted to improve the candidate experience, and each day, I saw tangible results of my efforts. I impacted so many more candidates via our blog than I ever could as a regular recruiter
...
Time moved along, and many things changed in our lives – yet we still held onto our promise to each other. Finally, we decided the time was right … Personally, we were both ready for new and different challenges in our careers. Professionally, it seemed that our collective passion for technical recruiting, a positive customer and client experience, and strong jobseeker and employer education coupled with, well, the returned rise of the tech market meant it was time to strike out on our own.  The time was right. The time is now.

So here we are. Welcome to JobSyntax, and we look forward to all the good times ahead.  Let the games begin!

It's great to see Zoe and Gretchen convert their skills and blog cred into their own company. There are a number of folks toiling in the B0rg cube who I've wondered why they don't strike out on their own; Robert Scoble for one. Whenever I decide to hang my hat up at the B0rg cube and go wage slave for someone else, I'll be sure to give the Moongals at JobSyntax a call.

Good luck with your new venture, Zoe and Gretchen.


 

From the blog post Zillow Integrates Virtual Earth into their website on the Virtual Earth team's blog we learn

Zillow.com was already a very cool site for browsing property values, comps, and other home related information, and now they've added Birds Eye Imagery to their application making the experience even more powerful. This is a great example of how the Virtual Earth platform can be used by third party developers to include unique capabilities into their applications. Once the Virtual Earth map control is integrated into a website, the end user can rotate their view North, South, East, and West to examine a property, unlike other mapping platforms that only provide a single look straight down at the roof of a house.

Congrats to the VE folks for getting such a cool web site to use their API.


 

Categories: Windows Live

David Sifry has posted another one of his State of the Blogosphere blog entries. He writes

In summary:

  • Technorati now tracks over 35.3 Million blogs
  • The blogosphere is doubling in size every 6 months
  • It is now over 60 times bigger than it was 3 years ago
  • On average, a new weblog is created every second of every day
  • 19.4 million bloggers (55%) are still posting 3 months after their blogs are created
  • Technorati tracks about 1.2 Million new blog posts each day, about 50,000 per hour

As usual for this series of posts, Dave Sifry plays fast and loose with language by interchangeably using the blogosphere and the number of blogs Technorati is tracking. There is a big difference between the two but unfortunately many people seem to fail at critical thinking and repeat Technorati's numbers as gospel. It's now general knowledge that services like MySpace and MSN Spaces each have more blogs/users than Technorati tracks overall.

I find this irritating because I've seen lots of press reports underreport the estimated size of the blogosphere by quoting the Technorati numbers. I suspect that the number of blogs out there is closer to 100 million (you can get that just by adding up the number of blogs on the 3 or 4 most popular blogging services) and not hovering around 35 million. One interesting question for me is whether private blogs/journals/spaces count as part of the blogosphere or not. Then again, for most people the blogosphere is limited to their own set of interests (technology blogs, mommy blogs, politics blogs, etc.) so that is probably a moot question.

PS: For a good rant about another example of Technorati playing fast and loose with language, see Shelley Powers's Technology is neither good nor evil which riffs on how Technorati equates the number of links to a weblog with authority.


 

April 17, 2006
@ 04:03 PM

Robert Scoble has a blog post entitled Halfway through my blog vacation (change in comment policy)

But, mostly, this past week was about change.

Some things I've changed? 1) No more coffee. 2) No more soda. 3) Xercising. 4) No more unhappy people in my life. 5) Get balance back in my own life.
...
One of my most memorable conversations, though, was with Buzz Bruggeman, CEO of ActiveWords and a good friend. He told me to hang around people who are happy. And I realized I had been listening to too many people who were deeply unhappy and not bringing any value into my life. He told me to listen to this recording on NPR about "finding happiness in a Harvard Classroom." He also told me about the four agreements, which are Don Miguel Ruiz's code for life. Good stuff.

Over the past year I've been on a mission to simplify my life piece by piece. Along the line I've made some promises to myself which I've kept and others which have been more difficult to stick with.

Health: I enrolled in the 20/20 Lifestyles program at the health club near Microsoft about six months ago. Since then I've lost just over 60 pounds (27.5 kilos for my metric peeps). This week is my last week with my personal trainer and dietician before I'm on my own. I had hoped to lose more weight but last month was somewhat disruptive to my schedule with my mom being in town for two weeks and travelling for ETech 2006, SPARK and New York to see my dad. I am somewhat proud that I gained less than 2 pounds even though my schedule was a complete mess. I've kept two promises to myself about my health; I'll work out 5 days a week and will keep my daily caloric intake to within 2000 calories a day 5 days a week [but never over 3000 calories in one day]. The exercise promise has been easy to keep but the diet promise has been harder than I'd like. Eating out is the hard part. Giving up soda for water was easier than I thought.

Work/Life Balance: I also decided to be better at compartmentalizing my work and home life. I promised myself not to spend more than 10.5 hours a day at work [in by 9AM, out by 7-7.30 PM at the latest] and to stop using the VPN to connect to work when I'm at home. I've also tried to stop checking work email from home on weekday evenings and will check it only once a day on weekends. If I'm in crunch mode for a particular deadline then this may change temporarily. Last week I averaged about 14 hours a day at work because I had a deadline I wanted to hit for Friday. However I didn't want this to mean I got home late, since I value spending dinner time with my girlfriend, so I left for work much earlier in the day last week. This week I'm back to my regular schedule.

Professional Work Load: Last year, I worked on lots of things I was interested in simultaneously. I worked on the social networking platform for Windows Live, replacing MSN Member Directory with MSN Spaces Profiles, photo storage for Windows Live services from MSN Spaces to Windows Live Expo, and a bunch of other stuff which hasn't shipped so I can't mention it here. This was just the stuff my boss had me working on. There was also stuff I was interested in that I just worked on without being explicitly told to, such as organizing efforts around the MSN Windows Live developer platform (see http://msdn.microsoft.com/live) and keeping the spark alive on us getting an RSS platform built for Windows Live. This was a lot of stuff to try to fit into a workday besides all the other crap that fills your day (meetings, meetings, meetings). At my last review, I got some feedback that some folks on my team felt they weren't getting my full attention because I spent so much time on 'extracurricular' activities. Although I was initially taken aback by this feedback I realized there was some truth to it. Since then I've been working on handing off some of the stuff I was working on that wasn't part of my job requirements. Thanks in part to the positive response to my ThinkWeek paper there is now an entire team of people working on the stuff I was driving around the Windows Live developer platform last year. You should keep an eye on the blogs of folks like Ken Levy and Danny Thorpe to learn what we have planned in this arena. The RSS platform for Windows Live spark has now been fanned into a flame and I worked hard to get Niall Kennedy to join us to drive those efforts. Realizing I can't work on everything I am interested in has been liberating.

Geeking at Home: I've cut down on how much time I spend reading blogs and don't subscribe to any mailing lists. Even on the blogs I read, I try to cut down on reading comment sections that have more negative energy than I can stomach, which means skipping the comments section of the Mini-Microsoft blog most days of the week. Even at work, I subscribe to only two or three distribution lists that aren't for my team or specific projects I am working on. I don't plan to have concurrent side projects going on at home anymore. I'll keep working on RSS Bandit for the foreseeable future. Whenever there is a lull in development such as after a major release, I may work on an article or two. However I won't have two or three article drafts going at the same time while also being in bug-fixing mode, which used to be the norm for me a year or two ago.


I wish Robert luck in his plan to simplify his life and improve his health.


 

Categories: Personal

April 17, 2006
@ 03:05 PM

I'm still continuing my exploration of the philosophy behind building distributed applications following the principles of the REpresentational State Transfer (REST) architectural style and Web-style software. Recent comments in my blog have introduced a perspective that I hadn't considered much before.

Robert Sayre wrote

Reading over your last few posts, I think it's important to keep in mind there are really two kinds of HTTP. One is HTTP-For-Browsers, and one is HTTP-For-APIs.

API end-points encounter a much wider variety of clients that actually have a user expecting something coherent--as opposed to bots. Many of those clients will have less-than-robust HTTP stacks. So, it turns out your API end-points have to be much more compliant than whatever is serving your web pages.

Sam Ruby wrote

While the accept header is how you segued into this discussion, Ian's and Joe's posts were explicitly about the Content-Type header.

Relevant to both discussions, my weblog varies the Content-Type header it returns based on the Accept header it receives, as there is at least one popular browser that does not support application/xhtml+xml.

So... Content-Type AND charset are very relevant to IE7. But are completely ignored by RSSBandit. If you want to talk about “how the Web r-e-a-l-l-y works”, you need to first recognize that you are talking about two very different webs with different sets of rules. When you talk about how you would invest Don's $100, which web are you talking about?

This is an interesting distinction and one that makes me re-evaluate my reasons for being interested in RESTful web services. I see two main arguments for using RESTful approaches to building distributed applications on the Web.  The first is that it is simpler than other approaches to building distributed applications that the software industry has cooked up. The second is that it has been proven to scale on the Web.

The second reason is where it gets interesting. Once you start reading articles on building RESTful web services such as Joe Gregorio's How to Create a REST Protocol and Dispatching in a REST Protocol Application, you realize that the way REST advocates say one should build RESTful applications is actually different from how the Web works. Few web applications support HTTP methods other than GET and POST, few web applications send out the correct MIME types when sending data to clients, many Web applications use cookies for storing application state instead of allowing hypermedia to be the engine of application state (i.e. keeping the state in the URL) and in a surprisingly large number of cases the markup in documents being transmitted is invalid or malformed in some way. However the Web still works.

REST is an attempt to formalize the workings of the Web ex post facto. However it describes an ideal of how the Web works and in many cases the reality of the Web deviates significantly from what advocates of RESTful approaches preach. The question is whether this disconnect invalidates the teachings of REST. I think the answer is no. 

In almost every case I've described above, the behavior of client applications and the user experience would be improved if HTTP [and XML] were used correctly. This isn't supposition; as the developer of an RSS reader, my life and that of my users would be better if servers emitted the correct MIME types for their feeds, the feeds were always at least well-formed, and feeds always pointed to related metadata/content such as comment feeds (i.e. hypermedia as the engine of application state).
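
As a small illustration of the MIME type point, compare what I'd like my aggregator to get back when it fetches a feed with what servers all too often send; the URL is a made-up example:

  GET /blog/feed.xml HTTP/1.1
  Host: www.example.com

  What I'd like back:
    HTTP/1.1 200 OK
    Content-Type: application/rss+xml; charset=utf-8

  What I often get instead:
    HTTP/1.1 200 OK
    Content-Type: text/html

A client that trusts the second response has to resort to sniffing the content to figure out it is really dealing with a feed.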

Let's get back to the notion of the Two Webs. Right now, there is the primarily HTML-powered Web whose primary clients are Web browsers and search engine bots. For better or worse, over time Web browsers have had to deal with the fact that Web servers and Web masters ignore several rules of the Web, from using incorrect MIME types for files to serving malformed/invalid documents. This has cemented hacks and bad practices as the status quo on the HTML web. It is unlikely this is going to change anytime soon, if ever.

Where things get interesting is that we are now using the Web for more than serving Web documents to Web browsers. The primary clients for these documents aren't Web browsers written by Microsoft and Netscape/AOL/Mozilla and bots from a handful of search engines. For example, with RSS/Atom we have hundreds of clients, with more to come as the technology becomes more mainstream. Also, with Web APIs becoming more popular, more and more Web sites are exposing services to the world on the Web using RESTful approaches. In all of these examples, there is justification in being more rigorous in the way one uses HTTP than one would be when serving HTML documents for one's web site.

In conclusion, I completely agree with Robert Sayre's statement that there are really two kinds of HTTP. One is HTTP-For-Browsers, and one is HTTP-For-APIs.

When talking about REST and HTTP-For-APIs, we should be careful not to learn the wrong lessons from how HTTP-For-Browsers is used today.
 

Charlene Li of Forrester Research has a blog post entitled Google Calendar creates a platform for "time" applications where she writes

Having trialed a half dozen of them (including Airset, CalendarHub, 30Boxes, Planzo, and SpongeCell), Google Calendar is truly a best of breed in terms of ease of use and functionality. Here’s a quick overview of what’s different about the new product:

  • Manage multiple calendars. ....

  • Easy to use. ....

  • Sharing. ....

  • Open platform. I think this is the most interesting aspect of Google's calendar. The iCal standard along with RSS means that I will be able to synch my work calendar with my Google calendar. Although tie-ins with programs like Outlook aren't yet available, Carl Sjogreen, Google Calendar's product manager, said that such functionality will be coming "soon". Google is also partnering with Trumba to enable "one-click" addition of events to your calendar (Trumba already works with calendar products from Yahoo!, Outlook, MSN Hotmail, and Apple). Also promised are synching capabilities to mobile phones. Carl also said that an API was in the works, which would enable developers to create new products on top of Google Calendar.

I've always thought that Web-based calendaring and event products haven't hit the sweet spot with end users because they are too much work to use for too little benefit. The reason I use calendaring software at work is mainly to manage meetings. If I didn't have to attend meetings I'd never use the calendaring functionality of Outlook. In my personal life, the only time calendaring software would have been useful is in integrating invitation services like eVite into my calendars at work and/or at home (I use both Yahoo! Mail and Windows Live Mail). However either eVite doesn't provide this functionality or it's so unintuitive that I've never discovered it. So web-based calendaring software has been pretty much a bust for me. AJAXifying it doesn't change this in any way.

On the other hand, I could probably build the integration I want between my work calendar and my eVite calendar if they had an API [and I was invited to enough parties to make this a worthwhile exercise]. It seems there is now an awareness of this in the industry at the big three (Google, Yahoo! and Microsoft), which is going to turn online calendaring into an interesting space over the next few months. Google Calendar is a step in the right direction by providing RSS feeds and announcing a forthcoming API. Yahoo! is already thinking about the same thing and also announced an upcoming Calendar API last month. As for Windows Live, our CTO has been talking to folks at work about using RSS+SSE as a way to share events and I'm sure they are paying attention [or at least will now that both Yahoo! and Google have thrown down].

With the increased use of RSS by Web-based calendaring applications perhaps it is time for RSS readers to also become more calendar aware?


 

To follow up my post asking Is HTTP Content Negotiation Broken as Designed?, I found a post by Ian Hickson on a related topic. In his post entitled Content-Type is dead he writes

Browsers and other user agents largely ignore the HTTP Content-Type header, relying on undefined sniffing heuristics to determine what the content of a page really is.

  • RSS feeds are always sniffed, regardless of their MIME type, because, to quote a Safari engineer, "none of them have the right mime type".
  • The target of img elements is almost always assumed to be an image, regardless of the declared type.
  • IE in particular is well known for ignoring the Content-Type header, despite this having been the source of security bugs in the past.
  • Browsers have been forced to implement heuristics to handle text/plain files as binary because video files are widely served with the wrong MIME types.

Unfortunately, we're now at a stage where browsers are continuously having to reverse-engineer each other to determine why they are handling content differently. A browser can't afford to render any less content than a browser with more market share, because otherwise users won't switch, and the new browser will not be adopted.

I think it may be time to retire the Content-Type header, putting to sleep the myth that it is in any way authoritative, and instead have well-defined content-sniffing rules for Web content.

Ian is someone who's definitely been around the block when it comes to HTTP given that he's been involved in Web standards groups for several years and used to work on the Opera Web browser. On the other side of the argument is Joe Gregorio, whose post Content-Type is dead, for a short period of time, for new media-types, film at 11 does an excellent job of the kind of dogmatic arguing based on theory that I criticized in my previous post. In this case, Joe leans on the W3C Technical Architecture Group's (TAG) finding on Authoritative Metadata.

MIME types and HTTP content negotiation are good ideas in theory that have failed to take hold in practice on the Web. Arguing that this fact contravenes stuff written in specs from last decade or findings by some ivory tower group of folks from the W3C seems like religious dogmatism and not fodder for decent technical debate.

That said, I don't think MIME types should be retired. However I do think some Web/REST advocates need to look around and see what's actually happening on the Web instead of arguing from an "ideal" or "theoretical" perspective.


 

Categories: Web Development

While you were sleeping, Windows Live Academic Search was launched at http://academic.live.com. From the Web site we learn

Welcome to Windows Live Academic

Windows Live Academic is now in beta. We currently index content related to computer science, physics, electrical engineering, and related subject areas.

Academic search enables you to search for peer reviewed journal articles contained in journal publisher portals and on the web in locations like citeseer.

Academic search works with libraries and institutions to search and provide access to subscription content for their members. Access restricted resources include subscription services or premium peer-reviewed journals. You may be able to access restricted content through your library or institution.

We have built several features designed to help you rapidly find the content you are searching for including abstract previews via our preview pane, sort and group by capability, and citation export. We invite you to try us out - and share your feedback with us.

I tried a comparison of a search for my name on Windows Live Academic Search and Google Scholar.

  1. Search for "Dare Obasanjo" on Windows Live Academic Search

  2. Search for "Dare Obasanjo" on Google Scholar

Google Scholar finds almost 20 citations while Windows Live Academic Search only finds one. Google Scholar seems to use sources other than academic papers such as articles written on technology sites like XML.com. I like the user interface for Windows Live Academic Search but we need to expand the data sources we query for me to use it regularly.


 

Categories: Windows Live

Working on RSS Bandit is my hobby and sometimes I retreat to it when I need to unwind from the details of work or just need a distraction. This morning was one of those moments. I decided to look into the issue raised in the thread from our forums entitled MSN Spaces RSS Feeds Issues - More Info where some of our users complained about a cookie parsing error when subscribed to feeds from MSN Spaces.

Before I explain what the problem is, I'd like to show an example of what an HTTP cookie header looks like from the Wikipedia entry for HTTP cookie

Set-Cookie: RMID=732423sdfs73242; expires=Fri, 31-Dec-2010 23:59:59 GMT; path=/; domain=.usatoday.com

Note the use of a semicolon as a delimiter for separating cookies. So it turned out that the error was in the following line of code (the problem line is marked with a comment)


if (cookieHeaders.Length > 0) {
    // Problem line: replace the semicolon delimiters with commas
    // before handing the header to the .NET Framework's cookie parser
    container.SetCookies(url, cookieHeaders.Replace(";", ","));
}

You'll note that we replace the semicolon delimiters with commas. Why would we do such a strange thing when the example above shows that cookies can contain commas? It's because the CookieContainer.SetCookies method in the .NET Framework requires the delimiters to be commas. WTF?

This seems so fundamentally broken that I feel I must be mistaken. I've tried searching for possible solutions to the problem online but I couldn't find anyone else who has had this problem. Am I using the API incorrectly? Am I supposed to parse the cookie by hand before feeding it to the method? If so, why would anyone design the API in such a brain-damaged manner?

*sigh*

I was having more fun drafting my specs for work.

Update: Mike Dimmick has pointed out in a comment below that my understanding of cookie syntax is incorrect. The cookie shown in the Wikipedia example is one cookie, not four as I thought. It looks like simply grabbing sample code from blogs may not have been a good idea. :) This means that I may have been getting malformed cookies when fetching the MSN Spaces RSS feeds after all. Now if only I can repro the problem...


 

Categories: RSS Bandit | Web Development

In a recent mail on the ietf-types mailing list, Larry Masinter (one of the authors of the HTTP 1.1 specification) had the following to say about content negotiation in HTTP:

GET /models/SomeModel.xml HTTP/1.1
Host: www.example.org
Accept: application/cellml-1.0+xml; q=0.5, application/cellml-1.1+xml; q=1

HTTP content negotiation was one of those "nice in theory" protocol additions that, in practice, didn't work out. The original theory of content negotiation was worked out when the idea of the web was that browsers would support a handful of media types (text, html, a couple of image types), and so it might be reasonable to send an 'accept:' header listing all of the types supported. But in practice as the web evolved, browsers would support hundreds of types of all varieties, and even automatically locate readers for content-types, so it wasn't practical to send an 'accept:' header for all of the types.

So content negotiation in practice doesn't use accept: headers except in limited circumstances; for the most part, the sites send some kind of 'active content' or content that autoselects for itself what else to download; e.g., a HTML page which contains Javascript code to detect the client's capabilities and figure out which other URLs to load. The most common kind of content negotiation uses the 'user agent' identification header, or some other 'x-...' extension headers to detect browser versions, among other things, to identify buggy implementations or proprietary extensions.

I think we should deprecate HTTP content negotiation, if only to make it clear to people reading the spec that it doesn't really work that way in practice.

HTTP content negotiation has always seemed to me like a good idea in theory that didn't work out in practice. It's good to see one of the founding fathers of HTTP actually admit that it is an example of theory not matching reality. It's always good to remember that just because something is written in a specification from some standards body doesn't make it holy writ. I've seen people debate online who throw out quotes from Roy Fielding's dissertation and IETF RFCs as if they are evangelical preachers quoting chapter and verse from the Holy Bible.

Some of the things you find in specifications from the W3C and IETF are good ideas. However they are just that: ideas. Sometimes technological advances make these ideas outdated and sometimes the spec authors simply failed to consider other perspectives for solving the problem at hand. Expecting a modern browser to send an itemized list of every file type that can be read by the applications on your operating system on every single GET request, plus the priority in which these file types are preferred, is simply not feasible or really useful in practice. It may have been feasible a long time ago but not now.

Similar outdated and infeasible ideas litter practically every W3C and IETF specification out there. Remember that the next time you quote chapter and verse from some Ph.D. dissertation or IETF/W3C specification to justify a technology decision. Supporting standards is important, but applying critical thinking to the problem at hand is more important.

Thanks to Mark Baker for the link to Larry Masinter's post.


 

Categories: Web Development

I just noticed that last week the W3C published a working draft specification for The XMLHttpRequest Object. I found the end of the working draft somewhat interesting. Read through the list of references and authors of the specification below.

References

This section is normative

DOM3
Document Object Model (DOM) Level 3 Core Specification, Arnaud Le Hors (IBM), Philippe Le Hégaret (W3C), Lauren Wood (SoftQuad, Inc.), Gavin Nicol (Inso EPS), Jonathan Robie (Texcel Research and Software AG), Mike Champion (Arbortext and Software AG), and Steve Byrne (JavaSoft).
RFC2119
Key words for use in RFCs to Indicate Requirement Levels, S. Bradner.
RFC2616
Hypertext Transfer Protocol -- HTTP/1.1, R. Fielding (UC Irvine), J. Gettys (Compaq/W3C), J. Mogul (Compaq), H. Frystyk (W3C/MIT), L. Masinter (Xerox), P. Leach (Microsoft), and T. Berners-Lee (W3C/MIT).

B. Authors

This section is informative

The authors of this document are the members of the W3C Web APIs Working Group.

  • Robin Berjon, Expway (Working Group Chair)
  • Ian Davis, Talis Information Limited
  • Gorm Haug Eriksen, Opera Software
  • Marc Hadley, Sun Microsystems
  • Scott Hayman, Research In Motion
  • Ian Hickson, Google
  • Björn Höhrmann, Invited Expert
  • Dean Jackson, W3C
  • Christophe Jolif, ILOG
  • Luca Mascaro, HTML Writers Guild
  • Charles McCathieNevile, Opera Software
  • T.V. Raman, Google
  • Arun Ranganathan, AOL
  • John Robinson, AOL
  • Doug Schepers, Vectoreal
  • Michael Shenfield, Research In Motion
  • Jonas Sicking, Mozilla Foundation
  • Stéphane Sire, IntuiLab
  • Maciej Stachowiak, Apple Computer
  • Anne van Kesteren, Opera Software

Thanks to all those who have helped to improve this specification by sending suggestions and corrections. (Please, keep bugging us with your issues!)

Interesting. A W3C specification that documents a proprietary Microsoft API which not only does not include a Microsoft employee as a spec author but doesn't even reference any of the IXMLHttpRequest documentation on MSDN.

I'm sure there's a lesson in there somewhere. ;)


 

Categories: Web Development | XML

From the inaugural post on the Windows Live ID team's blog, entitled The beginning of Windows Live ID, we learn

Welcome to the Windows Live ID team blog!  This is our inaugural “Hello World!” post to introduce Windows Live ID.
 
Windows Live ID is the upgrade/replacement for the Microsoft Passport service and is the identity and authentication gateway service for cross-device access to Microsoft online services, such as Windows Live, MSN, Office Live and Xbox Live.  Is this the authentication service for the world?  No :-)  It's primarily designed for use with Microsoft online services and by Microsoft-affiliated close partners who integrate with Windows Live services to offer combined innovations to our mutual customers.  We will continue to support the Passport user base of 300+ Million accounts and seamlessly upgrade these accounts to Windows Live IDs.  Partners who have already implemented Passport are already compatible with Windows Live ID.
 
Windows Live ID is being designed to be an identity provider among many within the Identity Metasystem.  In the future, we will support Federated identity scenarios via WS-* and support InfoCards.  For developers we will be providing rich programmable interfaces via server and client SDKs to give third party application developers access to authenticated Microsoft Live services and APIs.
 
Over the next few weeks as we complete our deployment, you will see the Windows Live ID service come alive through our respective partners sites and services. 

I had a meeting with Trevin from the Passport (now Windows Live ID) team to talk about their plans for providing server-based and client SDKs to give application developers the ability to access Windows Live services and APIs. I've been nagging him for a while with a lengthy list of requirements, and it looks like they'll be delivering APIs that enable very interesting uses of Windows Live quite soon.

This is shaping up to be a good year.


 

Categories: Windows Live

Niall Kennedy has a blog post entitled Creating a feed syndication platform at Microsoft where he writes

Starting next week I will join Microsoft's Windows Live division to create a new product team around syndication technologies such as RSS and Atom. I will help build a feed syndication platform leveraged by Microsoft products and developers all around the world. I am excited to construct a team and product from scratch focused on scalability and connecting syndication clients and their users wherever they may exist: desktop, mobile, media center, gaming console, widget, gadget, and more.

Live.com is the new default home page for users of the Internet Explorer 7 and the Windows Vista operating system. Live.com will be the first feed syndication experience for hundreds of millions of users who would love to add more content to their page, connect with friends, and take control of the flow of information in ways geeks have for years. I do not believe we have even begun to tap into the power of feeds as a platform and the possibilities that exist if we mine this data, connect users, and add new layers of personalization and social sharing. These are just some of the reasons I am excited to build something new and continue to change how the world can access new information as it happens

I spoke to Niall on the phone last week and I'm glad to see that he accepted our offer. When I was first hired to work in MSN (now Windows Live) I was told I'd be working on three things: a blogging platform for MSN Spaces, a brand new social networking platform and an RSS platform. I've done the first two and was looking forward to working on the third, but something has come up that will consume my attention for the near future. I promised my management and the partner teams who were interested in this platform that I'd make sure we got the right person to work on this project. When I found out Niall was leaving Technorati it seemed like a match made in heaven. I recommended him for the job and talked to him on the phone about working at Microsoft. The people who will be working with him thought he was great, and the rest is history.

One of the questions Niall asked me last week was why I work at Microsoft given that I've written blog posts critical of the company. The answer came easily: I told him that Microsoft is the one place I know I can build the kind of software and end-to-end experience I'd like. Nowhere else is there the same breadth of software applications which can be brought together to give end users a unified experience. Where else can a punk like me build a social networking platform that is used not only by the most popular blogging platform in China but also by the world's most popular instant messaging application? And that's just the beginning. There is a lot of opportunity to build really impactful software at Windows Live. When I'm critical of Microsoft it's because I want us to be a better company for people like me, not because I don't like it here. Unfortunately, lots of people can't tell the difference. ;)

By the way, we are hiring. If you are interested in developer, test or program management positions building the biggest social computing platform on the planet then send your resume to dareo@msft.com (swap msft.com with microsoft.com).


 

Categories: Windows Live

Jeff Schneider has a blog post entitled You're so Enterprise... which is meant to be a response to a post I wrote entitled My Website is Bigger Than Your Enterprise. Since he neither linked to my post nor mentioned my full name, it's actually a coincidence that I found his post at all. Anyway, he writes

In regard to the comment that Dare had made, "If you are building distributed applications for your business, you really need to ask yourself what is so complex about the problems that you have to solve that makes it require more complex solutions than those that are working on a global scale on the World Wide Web today." I tried to have a conversation with several architects on this subject and we immediately ran into a problem. We were trying to compare and contrast a typical enterprise application with one like Microsoft Live. Not knowing the MS Live architecture we attempted to 'best guess' what it might look like:

  • An advanced presentation layer, probably with an advanced portal mechanism
  • Some kind of mechanism to facilitate internationalization
  • A highly scalable 'logic layer'
  • A responsive data store (cached, but probably not transactional)
  • A traditional row of web servers / maybe Akamai thing thrown in
  • Some sort of user authentication / access control mechanism
  • A load balancing mechanism
  • Some kind of federated token mechanism to other MS properties
  • An outward facing API
  • Some information was syndicated via RSS
  • The bulk of the code was done in some OO language like Java or C#
  • Modularity and encapsulation was encouraged; loose coupling when appropriate
  • Some kind of systems management and monitoring
  • Assuming that we are capturing any sensitive information, an on the wire encryption mechanism
  • We guessed that many of the technologies that the team used were dictated to them: Let's just say they didn't use Java and BEA AquaLogic.
  • We also guessed that some of the typical stuff didn't make their requirements list (regulatory & compliance issues, interfacing with CICS, TPF, etc., interfacing with batch systems, interfacing with CORBA or DCE, hot swapping business rules, guaranteed SLA's, ability to monitor state of a business process, etc.)
At the end of the day - we were scratching our heads. We DON'T know the MS Live architecture - but we've got a pretty good guess on what it looks like - and ya know what? According to our mocked up version, it looked like all of our 'Enterprise Crap'.

So, in response to Dare's question of what is so much more complex about 'enterprise' over 'web', our response was "not much, the usual compliance and legacy stuff". However, we now pose a new question to Dare:
What is so much more simple about your architecture than ours?

Actually, a lot of the stuff he talks about with regard to SLAs, monitoring business processes and regulatory issues are all things we face as part of building Windows Live. However, it seems Jeff missed my point, which is that folks building systems at places like Yahoo, Amazon and Windows Live have to solve problems that are at minimum just as complex as those of your average medium-sized to large business. From his post, Jeff seems to agree with this core assertion. Yet people at these companies are embracing approaches such as RESTful web services and scripting languages, both of which are often dissed as not being 'enterprise' by complexity-loving enterprise architects.

Just because a problem seems complex doesn't mean it needs a complex technology to solve it. For example, at its core RSS solves the same problem as WS-Eventing. I can describe all sorts of scenarios where RSS falls down and WS-Eventing does not, yet RSS is good enough for a large number of scenarios at a smidgen of the complexity cost of WS-Eventing. Then there are technologies like WS-ReliableMessaging that add complexity to the mix but often don't solve the real problems facing large scale services today. See my post More on Pragmatism and Web Services for my issues with WS-ReliableMessaging.
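To make the comparison concrete, here's a minimal sketch of RSS as a poor man's eventing system: poll the feed and treat any item with an unseen identifier as a new event. The feed URL is hypothetical, and a real client would also need conditional GETs and sensible polling intervals, which is exactly the overhead WS-Eventing's push model avoids.

    using System;
    using System.Collections.Generic;
    using System.Xml;

    class FeedPoller
    {
        static Dictionary<string, bool> seen = new Dictionary<string, bool>();

        // Call this on a timer; each unseen guid (or link) is a "new event".
        static void Poll(string feedUrl)
        {
            XmlDocument doc = new XmlDocument();
            doc.Load(feedUrl);
            foreach (XmlNode item in doc.SelectNodes("/rss/channel/item"))
            {
                XmlNode guid = item.SelectSingleNode("guid");
                XmlNode link = item.SelectSingleNode("link");
                if (guid == null && link == null)
                    continue;
                string id = (guid != null) ? guid.InnerText : link.InnerText;
                if (!seen.ContainsKey(id))
                {
                    seen[id] = true;
                    Console.WriteLine("New event: " + id);
                }
            }
        }

        static void Main()
        {
            Poll("http://example.org/feed.rss"); // hypothetical URL
        }
    }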

My point remains the same. Complex problems do not necessarily translate to requiring complex solutions.

Question everything.


 

Categories: Web Development

A few weeks ago I blogged about the current beta of Social Networking in MSN Spaces for our Australian users. What I didn't mention is that just as most features in MSN Spaces are integrated with Windows Live Messenger, so also is the Friends list feature. Australian users of Windows Live Messenger will have three integration points for interacting with the Friends List. The first is that one can right-click on Messenger contacts and select "View->Friends List" to browse their Friends List. Another integration point is that one can respond to pending requests from people to add them to your Friends List directly from Messenger client (this is also the case with other features like Live Contacts).  Finally, one can also browse the Friends List from their Contact Card. Below is a screenshot of what happens when an Australian user right-clicks on one of their Messenger contacts and selects "View->Friends List". I can't wait till we finally ship this feature to all our users.

NOTE: There is a known bug that stops the Friends list from showing up if you are using the Internet Explorer 7 beta.


 

Categories: Windows Live

April 10, 2006
@ 02:53 PM

Via Shelley Powers I found out that Mark Pilgrim has restarted his blog with a new post entitled After the Bath. Ironically, I didn't find this out from my favorite RSS reader, because it correctly supports the HTTP 410 (Gone) status code, which Mark's feed has been returning for over a year.

Mark Pilgrim's feed being resurrected from the dead is another example of why simply implementing support for Web specifications as written sometimes bites you on the butt. :)
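For the curious, honoring 410 in an aggregator amounts to something like the sketch below (this is illustrative C#, not RSS Bandit's actual code, and the URL is made up): once a feed answers 410, unsubscribe it for good instead of retrying.

    using System;
    using System.Net;

    class GoneAwareFetcher
    {
        static void FetchFeed(string url)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            try
            {
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("Got feed, status: " + response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                HttpWebResponse response = ex.Response as HttpWebResponse;
                if (response != null && response.StatusCode == HttpStatusCode.Gone)
                {
                    // 410 means "gone for good": stop polling this feed entirely,
                    // which is why a resurrected feed never shows up again.
                    Console.WriteLine("Feed is 410 Gone; unsubscribing permanently.");
                }
                else
                {
                    throw;
                }
            }
        }

        static void Main()
        {
            FetchFeed("http://example.org/dead-feed.xml"); // made-up URL
        }
    }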


 

From the blog post entitled Spaces and Messenger integration added on the Windows Live Mail team's blog, Steve Kafka lets us know

I just blogged about this on my own Space and someone asked why it wasn't mentioned here on the team blog. Because I forgot, that's why.
We just added the "contact control" to contacts in Windows Live Mail. If you frequent Spaces, this will be familiar to you. For contacts that are Messenger enabled this lets you see their profile picture, Messenger presence, gleams (the "something is new" indicator), their contact card and more. Check out my original post for a screen shot. To see it for yourself, go to your contacts and click on some of your friends from Messenger.
We're definitely working on more integration between Messenger, Spaces and Mail, so consider this just the beginning.

The contact information used for the contact control is another one of those core services our team provides as a platform piece for other Windows Live user experiences like Spaces, Messenger and Mail. Eventually I'd love to see the contact control integrated across all Windows Live experiences where users have to be identified. It's much better than just a username, don't you think? In the meantime you can check out my screenshot of the new integration in action.


 

Categories: Windows Live

From the blog post entitled Your 'wait' is over on the Office Live team blog we learn

Since we debuted our waitlist for the Microsoft Office Live Beta back in November 2005 we have had over 275,000 customers sign-up in our Beta waitlist, and we thank each and every one of them for signing up. But what we’ve heard over and over is: please don’t make me wait in a list; I want to try Office Live NOW!! So, good news to everyone who hasn’t already gotten a product key, your wait is over (maybe before it even started!)

We have dropped the requirement of a product key from our signup! The Beta is still only open to US residents for now, but ANY US resident with a valid credit card can sign up for the Beta and experience Office Live! All you need to do now is pick your product (Basics, Collaboration or Essentials) and pick your domain. So head over to www.OfficeLive.com now to get your Beta subscription started.

If you've been curious about Office Live and have balked at checking it out due to the waiting list, now's your chance. Let the product team know what you think.


 

Categories: Office Live

April 5, 2006
@ 06:11 PM

Recently, someone commented in a meeting that we were playing "schedule chicken". I hadn't heard the term before so I looked it up. That's where I found the excellent post Schedule Chicken by Jim Carson, which is excerpted below

Schedule Chicken
Given the above setup, it's difficult, if not impossible to accurately estimate project delivery dates. Even when you're brutally honest, spelling out all the things that must occur for you to meet a date, the dependencies get lost in the footnotes in the appendices at the end of the book. Management "pulls in the date" to something ridiculous that they can sell to their bosses. Their bosses do the same. And so on.

Since everyone is using largely fictitious dates as part of a mass delusion, you would think no one expects to make them, no one will make them, no harm. This is sorta true. Each technical lead assumes that the other leads are lying even more about how long it will take them to deliver.

The ruse continues past insignificant milestones until just before something is actually due. The more seasoned managers will delay admitting to the obvious for as long as humanly possible, waiting for someone else (more junior) to "turn" first. The one who does is the "chicken," and is subsequently eviscerated by their boss and made a public example of all the incompetencies in the universe.

After this "chicken" has been identified, and summarily punished, all the other teams update their schedules with slipped dates that are slightly less than the "chicken's." The process may repeat itself a few times. Key point: You don't want to slip first. Or last.

The question I have for my readers is simple, what do you do once you realize you're a player in a game of schedule chicken?


 

April 5, 2006
@ 06:01 PM

From the New York Times article Software Out There by John Markoff we get choice quotes like

The new economics of software development poses a fresh challenge to the dominant players in the industry. In 1995, when Microsoft realized that the Netscape Internet browser created a threat to its Windows operating system business, it responded by introducing its own free browser, Internet Explorer. By doing so, Microsoft, which already held a monopoly on desktop software, blunted Netscape's momentum.

Last November, Microsoft introduced a Web services portal called Windows Live and Office Live.
...
Mr. Ozzie, who used the Firefox browser (an open-source rival to Internet Explorer) during his demonstration, said, "I'm pretty pumped up with the potential for R.S.S. to be the DNA for wiring the Web."

He was referring to Really Simple Syndication, an increasingly popular, free standard used for Internet publishing. Mr. Ozzie's statement was remarkable for a chief technical officer whose company has just spent years and hundreds of millions of dollars investing in a proprietary alternative referred to as .Net.

I've heard it said that it's hard to take newspapers seriously because when they write about things you're knowledgeable about, they get it wrong. John Markoff does an excellent job of proving that old saw right.


 

April 4, 2006
@ 07:02 PM

Eric Gunnerson has a blog post entitled Mom and Apple Pie where he writes

What do following words all have in common:

  • Passion
  • Innovation
  • Synergy
  • Agility

They're what I call "Mom and Apple Pie" words, for two reasons.

First, they all have a positive connotation. Who wouldn't want to be more agile, more innovative? Who is going to argue against having a more synergistic approach? Shouldn't everybody have passion?

Combine that with the fact that these words are used in a content-free environment, and you get a nice-sounding platitude that means nothing, but makes it sound like you are for changing things.

You don't think we should have more apple pie? What's wrong with you? Why do you hate your mother?

People who want to make an organization more agile don't say, "We're going to improve agility". They say, "we're going to get rid of <x>, we're going to change <y>, we're going to release every <x> months". People who want to improve synergy say, "Our users are trying to do <x>, and it's way too hard. What do I need to do to help you fix this?"

I have a blog post in my head about the top three things I'd like to see from Microsoft executives. One of the three things I'd like to see our execs do addresses one of the words mentioned by Eric Gunnerson: agility.

I am currently part of the Platforms & Services Division, which encompasses both Windows and MSN (now Windows Live). There've been two reorganizations that have affected this division in the past eight months. Both times, the claim has been that these reorganizations are intended to boost our 'agility'. The first reorg involved adding a new layer of vice presidents between David Cole, who ran MSN at the time, and Steve Ballmer. This meant that at the time I had four people with the title 'Vice President' between me and our CEO [Brian Arbogast->Blake Irving->David Cole->Kevin Johnson]. How adding more management made us more agile was a mystery to me. Our most recent reorganization involved splitting the teams that build the Windows kernel from the teams that build the Windows user interface, and the teams that build the Windows Live services from the teams that build their user interfaces. In practical terms it means that when I work on features like Photo E-Mail and Social Networking, the folks I work with not only don't share the same boss as me, but our management chains don't meet until they get all the way up to Kevin Johnson and Jim Allchin (i.e. three vice presidents up the hierarchy). I don't see how this makes us more agile, but maybe I'm just dumb.

At Microsoft, agility has joined innovation and passion as words that now have negative connotations to me. The longer I am at Microsoft, the more words I end up excising from my vocabulary.

I agree with Eric that what I'd like to see is fewer buzzwords and more concrete talk about how things are being improved for our users. I'm still waiting for the mail from Kevin Johnson containing an FAQ that explains why adding layers of management and splitting up teams makes us more agile. I hope I'm not waiting in vain.


 

Categories: Life in the B0rg Cube

I stopped paying attention to the syndication wars a couple of months ago. I barely have time to stay on top of all the stuff I have to worry about as part of my day job, let alone keep track of the pointlessness that is the Atom vs. RSS debate. Unfortunately, every once in a while something happens that forces me to pay attention because I'm also the project lead for RSS Bandit.

One cool thing about XML syndication formats like RSS and Atom is that they are easily extensible. This means that anyone can come up with a new extension to the RSS/Atom formats which adds a new feature but is ignored by feed readers that don't understand the extension. Some of my favorite extensions are slash:comments, which provides a count of the number of comments on a post, and wfw:commentRss, which provides the URL of the feed for the comments to a blog post. One of my work items for the next version of RSS Bandit is to make it easier for people to 'watch' the comments for a blog post they are interested in if its feed supports these extensions. That way I can get a notification every time some comment thread I am interested in gets new posts, directly from my aggregator instead of using other tools like CoComment.
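Since these extensions are just extra namespaced elements on an item, reading them is straightforward. Here's a minimal sketch (C#, with a hypothetical feed URL; not RSS Bandit's actual code) that pulls both values out of a feed:

    using System;
    using System.Xml;

    class CommentExtensions
    {
        static void Main()
        {
            XmlDocument doc = new XmlDocument();
            doc.Load("http://example.org/feed.rss"); // hypothetical feed URL

            // The well-known namespace URIs for the two extension modules.
            XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
            nsmgr.AddNamespace("slash", "http://purl.org/rss/1.0/modules/slash/");
            nsmgr.AddNamespace("wfw", "http://wellformedweb.org/CommentAPI/");

            foreach (XmlNode item in doc.SelectNodes("/rss/channel/item"))
            {
                XmlNode count = item.SelectSingleNode("slash:comments", nsmgr);
                XmlNode commentFeed = item.SelectSingleNode("wfw:commentRss", nsmgr);
                if (count != null)
                    Console.WriteLine("Comment count: " + count.InnerText);
                if (commentFeed != null)
                    Console.WriteLine("Comment feed to watch: " + commentFeed.InnerText);
            }
        }
    }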

A few days ago Sam Ruby posted an entry entitled Rogers Switches! where he mentioned that he now redirects all requests for his RSS feeds to his Atom feed. This meant that in RSS Bandit I now no longer see the comment count for the blog posts in Sam's feed nor can I view the comments to his posts directly within my aggregator. Sam and I had the following exchange in his blog comments when I discovered the ramifications of his change

I was trying to figure out if I’d introduced a bug in RSS Bandit to make your comment count and inline comments disappear. Instead, it seems you have made your feed less useful as part of the fallout of yet another iteration of the eternal pissing match which is the XML syndication wars.

sigh

Posted by Dare Obasanjo at 20:25

Dare, is this your way of saying that you don’t intend to support the Feed Thread Extension?  I’d think that you would be on the watch for it.

Posted by Sam Ruby at 22:08

I looked at the draft of the Feed Thread Extension specification Sam linked to, and it seems like a reinvention of the functionality provided by the slash:comments, wfw:commentRss and annotate:reference extensions. Great, so not only do we have to deal with an increase in the number of competing XML syndication formats thanks to the Atom process (by the way, have you seen the Atom 0.3 vs. Atom 1.0 debate? I told you so), we now also have to deal with duplicates of all the popular RSS extensions as well? Give me a break!

That said, you can expect support for these new extensions in the next version of RSS Bandit. At the end of the day, what matters is building useful software for our users regardless of how many petty annoyances are thrown in our way on the road there.
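For anyone else facing the same duplication, normalizing the old and new comment-count extensions might look like the sketch below. I'm assuming the draft's thr:total element and the http://purl.org/syndication/thread/1.0 namespace it uses, either of which could change before the spec is final.

    using System;
    using System.Xml;

    class ThreadExtensionFallback
    {
        // Prefer the draft Feed Thread extension's thr:total if present,
        // falling back to slash:comments, so feeds using either extension
        // light up the same comment-count UI.
        static int GetCommentCount(XmlNode item, XmlNamespaceManager nsmgr)
        {
            XmlNode total = item.SelectSingleNode("thr:total", nsmgr);
            if (total != null)
                return int.Parse(total.InnerText);
            XmlNode slashCount = item.SelectSingleNode("slash:comments", nsmgr);
            if (slashCount != null)
                return int.Parse(slashCount.InnerText);
            return 0;
        }

        static void Main()
        {
            XmlDocument doc = new XmlDocument();
            doc.Load("http://example.org/feed.xml"); // hypothetical feed URL
            XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
            nsmgr.AddNamespace("thr", "http://purl.org/syndication/thread/1.0");
            nsmgr.AddNamespace("slash", "http://purl.org/rss/1.0/modules/slash/");
            foreach (XmlNode item in doc.SelectNodes("//item"))
                Console.WriteLine(GetCommentCount(item, nsmgr));
        }
    }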


 

I mentioned recently that my new kick is storage platforms for Internet-scale services. Sometimes it's hard to describe the problems that large scale services have with modern relational databases. Thankfully, I can now crib from Greg Linden's post I want a big, virtual database for a simple one-sentence description of the problem and its solution. Greg writes

In my previous story about Amazon, I talked about a problem with an Oracle database.

You know, eight years later, I still don't see what I really want from a database. Hot standbys and replication remain the state of the art.

What I want is a robust, high performance virtual relational database that runs transparently over a cluster, nodes dropping in an out of service at will, read-write replication and data migration all done automatically.

I want to be able to install a database on a server cloud and use it like it was all running on one machine.

It's interesting how your database problems change when you're thinking about hundreds of transactions a second over terabytes of data belonging to hundreds of millions of users. You might also be interested in Greg's post on Google's BigTable.
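In the absence of Greg's big virtual database, what large services end up hand-rolling is partitioning logic along the lines of this sketch (the connection strings are made up). The pain is exactly what the sketch glosses over: naive modulo routing means that adding a server reshuffles where everyone's data lives.

    using System;

    class ShardRouter
    {
        // Made-up connection strings; one per database server.
        static readonly string[] shards =
        {
            "Server=db01;Database=UserStore",
            "Server=db02;Database=UserStore",
            "Server=db03;Database=UserStore"
        };

        // Route each user to a shard by hashing the user id.
        static string GetConnectionStringForUser(string userId)
        {
            uint hash = (uint)userId.GetHashCode();
            int shard = (int)(hash % (uint)shards.Length);
            return shards[shard];
        }

        static void Main()
        {
            Console.WriteLine(GetConnectionStringForUser("someuser@example.com"));
        }
    }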

PS: If building large scale services relied on by hundreds of millions of people to communicate with their friends and family sounds interesting to you, we're still hiring for test, developer and PM positions.
 

Categories: Web Development

In his post Exploring Live Clipboard, Jon Udell posts a screencast he made about Live Clipboard. He writes

I've been experimenting with microformats since before they were called that, and I'm completely jazzed about Live Clipboard. In this screencast I'll walk you through examples of Live Clipboard in use, show how the hCalendar payload is wrapped, grab hCalendar data from Upcoming and Eventful, convert it to iCalendar format for insertion into a calendar program, inject it natively into Live Clipboard, and look at Upcoming and Eventful APIs side-by-side.

All this leads up to a question: How can I copy an event from one of these services and paste it into another? My conclusion is that adopting Live Clipboard and microformats will be necessary but not sufficient. We'll also need a way to agree that, for example, this venue is the same as that venue. At the end, I float an idea about how we might work toward such agreements.

The problem that Jon Udell describes is a classic problem when dealing with mapping data from different domains. I posted about this a few months ago in my post Metadata Quality and Mapping Between Domain Languages where I wrote

The problem Stefano has pointed out is that just being able to say that two items are semantically identical (i.e. an artist field in dataset A is the same as the 'band name' field in dataset B) doesn't mean you won't have to do some syntactic mapping as well (i.e. alter artist names of the form "ArtistName, The" to "The ArtistName") if you want an accurate mapping.

This is the big problem with data mapping. In Jon's example, the location is called Colonial Theater in Upcoming and Colonial Theater (New Hampshire) in Eventful. In Eventful it has a street address, while in Upcoming only the street name is provided. Little differences like these are what make data mapping a hard problem. Jon's solution is for the community to come up with global identifiers for venues as tags (e.g. Colonial_Theater_NH_03431) instead of waiting for technologists to come up with a solution. That's good advice, because there really isn't a good technological solution for this problem. Even RDF/Semantic Web junkies like Danny Ayers, in posts like Live clipboard and identifying things, start with assumptions like every venue having a unique identifier which is its URI. Of course, this ignores the fact that coming up with a global, unique identification scheme for the Web is the problem in the first place. The problem with Jon's approach is the same one that is pointed out in almost every critique of folksonomies: people won't use the same tags for the same concept. Jon might use Colonial_Theater_NH_03431 while I use Colonial_Theater_95_Maine_Street_NH_03431, which leaves us with the same problem of inconsistent identifiers being used for the same venue.
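The fixes therefore tend to be ad hoc string surgery. Here's a minimal sketch of the kind of syntactic mapping described above; the two rules it applies (stripping parenthetical qualifiers and rewriting "Name, The") are illustrative only, and every new data source adds more.

    using System;

    class NaiveNormalizer
    {
        // Illustrative rules only: strip parenthetical qualifiers like
        // "(New Hampshire)" and rewrite "Name, The" as "The Name" before
        // comparing names from different sources.
        static string Normalize(string name)
        {
            int paren = name.IndexOf('(');
            if (paren >= 0)
                name = name.Substring(0, paren);
            name = name.Trim();
            if (name.EndsWith(", The"))
                name = "The " + name.Substring(0, name.Length - ", The".Length);
            return name.ToLowerInvariant();
        }

        static void Main()
        {
            // Both print "colonial theater", so the two records can be matched.
            Console.WriteLine(Normalize("Colonial Theater (New Hampshire)"));
            Console.WriteLine(Normalize("Colonial Theater"));
            // Prints "the beatles"
            Console.WriteLine(Normalize("Beatles, The"));
        }
    }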

I assume that for the near future we'll continue to see custom code written to make data integration across domains work. Unfortunately, nothing on the horizon looks likely to make this problem go away.

PS: Ray Ozzie covers some of the recent developments in the world of Live Clipboard in his post Wiring Progress; check it out.


 

Categories: Technology | Web Development

One of the features I've been working on is soon going to see the light of day. In the post A picture is worth a thousand words, Vlada Breiburg talks about an upcoming feature in the Windows Live Mail Desktop Beta (formerly Outlook Express) which we worked on together. She writes

We are (re)introducing Photo E-mail—a super duper easy way to share photos (for those of you who use MSN Premium client this will be very familiar).


As soon as you insert a few pictures they show up in the message with an easy way to add some (funny) captions. We’ve also decided to give you a few fun and productive tools to make your pictures truly yours:

  • you’ll be able to add some borders
  • change pictures to black and white
  • change background color
  • and even auto-correct.

When designing this we debated a lot of what we should offer and decided to start with these tools until we hear more user feedback. We don’t ever want to be a full photo editing tool, but we do want to make things easier for our customers (Thank you Heather for making the tough calls; Heather was the original PM on the feature). So let us know what you think!

On sending these pictures, the photos will be uploaded to our servers and smaller versions will be placed inside the message (Thank you Dare, Richard, and Jura from the storage team on making this happen!). This will make sure that your friends and family don’t get huge messages that fill out their inboxes...

If your friends want to view bigger versions of the photos, all they have to do is hit “Play slideshow”. This is where our friends from the Spaces team come in. They’ve created an awesome viewer for your friends and family to enjoy your pictures (Thank you DeEtte, Greg, and James).

I worked with both Heather (the original PM for the feature) and Vlada on making the Photo E-mail feature come together. It was different working on a feature for one of our desktop applications instead of a web-based property. As usual it was fun to work on a feature that I not only would use but could recommend to friends and family as well. Working on consumer software definitely rocks in this regard.
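The resizing step the post describes boils down to something like the following sketch (illustrative C# using System.Drawing, not the actual Photo E-mail code; the file paths are made up): scale the photo down before embedding it so the message stays small.

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;

    class PhotoMailResize
    {
        static void Main()
        {
            // Scale to a fixed width, keeping the aspect ratio, then save
            // the smaller copy that would go inside the message body.
            using (Image original = Image.FromFile(@"C:\photos\vacation.jpg"))
            {
                int width = 320;
                int height = original.Height * width / original.Width;
                using (Bitmap small = new Bitmap(original, width, height))
                {
                    small.Save(@"C:\photos\vacation_small.jpg", ImageFormat.Jpeg);
                }
            }
        }
    }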

Working on the services for this feature clarified some of the thinking I've been doing around photo APIs for MSN (soon to be Windows Live) Spaces. I can't wait until we are ready to put some new stuff up on the Windows Live Developer Center. Exciting times indeed.


 

Categories: Windows Live