One of the devs on Windows Live Favorites just snuck me the following screenshot

Sweet, huh?


 

Categories: Windows Live

April 20, 2006
@ 06:10 PM

Tim Bray has a post entitled The Cost of AJAX where he writes

James Governor relays a question that sounds important but I think is actively dangerous: do AJAX apps present more of a server-side load? The question is dangerous because it’s meaningless and unanswerable. Your typical Web page will, in the process of loading, call back to the server for a bunch of stylesheets and graphics and scripts and so on: for example, this ongoing page calls out to three different graphics, one stylesheet, and one JavaScript file. It also has one “AJAXy” XMLHttpRequest call. From the server’s point of view, those are all just requests to dereference one URI or another. In the case of ongoing, the AJAX request is for a static file less than 200 bytes in size (i.e. cheap). On the other hand, it could have been for something that required a complex outer join on two ten-million-row tables (i.e. very expensive). And one of the virtues of the Web Architecture is that it hides those differences, the “U” in URI stands for “Uniform”, it’s a Uniform interface to a resource on the Web that could be, well, anything. So saying “AJAX is expensive” (or that it’s cheap) is like saying “A mountain bike is slower than a battle tank” (or that it’s faster). The truth depends on what you’re doing with it. 

In general I agree with Tim that the answers to questions like "is technology X slower than technology Y" depend on context. The classic example is that it makes little sense to argue about the speed of a programming language if an application is data bound and has to read a lot of stuff off the disk. However, when it comes to AJAX, I think there is usually more load put on servers due to the user interface metaphors AJAX encourages. Specifically, AJAX enables developers to build user interfaces that allow the user to initiate multiple requests at once where only one request could have been made in the traditional HTML model. For example, if you have a UI that makes an asynchronous request every time a user clicks to drill down on some data (e.g. view comments on a blog post, tags on a link, etc.) where it used to transition the user to another page, then it is likely that you will see an increased number of requests to the server in the AJAX version of your site. Some of the guys working on Windows Live have figured that out the hard way. :)
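To make the arithmetic concrete, here is a minimal browser-side sketch of that drill-down pattern; the /comments endpoint, element IDs and markup are hypothetical, not taken from any Windows Live code. Every expansion the user clicks costs its own round trip to the server, where the traditional model amortized everything into one page load.

```typescript
// Hypothetical drill-down handler: every "view comments" click fires
// its own request instead of navigating to a separate comments page.
function loadComments(postId: string): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", `/comments?post=${encodeURIComponent(postId)}`, true);
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      const target = document.getElementById(`comments-${postId}`);
      if (target) {
        target.innerHTML = xhr.responseText;
      }
    }
  };
  xhr.send();
}

// Ten expansions on one page means ten server round trips, versus a
// single full-page request in the old click-through model.
document.querySelectorAll<HTMLAnchorElement>("a.view-comments").forEach((link) => {
  link.addEventListener("click", (event) => {
    event.preventDefault();
    loadComments(link.dataset.postId ?? "");
  });
});
```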


 

Categories: Web Development

April 20, 2006
@ 05:38 PM

For some reason, the following story on Slashdot had me cracking up

Microsoft Plans Gdrive Competitor

Posted by samzenpus on Wednesday April 19, @09:13PM
from the personal-virtual-bill dept.

gambit3 writes "From Microsoft Watch: The MSN team is working on a new Windows Live service, code-named Live Drive, that will provide users with a virtual hard drive for storing hosted personal data. From early accounts, it sounds an awful lot like Gdrive, the still-as-yet-publicly-unannounced storage service from Google."

I have to agree with Mike Torres: 2006 is really 1998 in disguise. With the release of Google Page Creator, Google Finance, Google Calendar and the upcoming GDrive (aka Yahoo! GeoCities, Yahoo! Finance, Yahoo! Calendar and Yahoo! Briefcase knockoffs) it is now clear to me that Google's master plan is to become Yahoo! 2.0. On the other hand, with some of the recent Windows Live announcements we seem to be giving the impression that we are chasing Google's tail lights, while Google chases Yahoo!'s tail lights, and Yahoo! in turn chases the tail lights of various 'Web 2.0' startups. Crazy.

I wonder what Google will do when they run out of Yahoo! services to copy?


 

April 20, 2006
@ 05:15 PM

Joe Gregorio has a post about the Google Data APIs Protocol where he points out that Google has released a protocol which, for all intents and purposes, is an Atom Store (i.e. an Atom Publishing Protocol service mixed with OpenSearch). I took a glance at the Google Data APIs overview documentation and it states the following

GData is a new protocol based on Atom 1.0 and RSS 2.0.

To acquire information from a service that supports GData, you send an HTTP GET request; the service returns results as an Atom or RSS feed. You can update data (where supported by a particular GData service) by sending an HTTP PUT request, an approach based on the Atom Publishing Protocol.

All sorts of services can provide GData feeds, from public services like blog feeds or news syndication feeds to personalized data like email or calendar events or task-list items. The RSS and Atom models are extensible, so each feed provider can define its own extensions and semantics as desired. A feed provider can provide read-only feeds (such as a search-results feed) or read/write feeds (such as a calendar application).

On the surface, this looks like it aims to solve the same problems as Microsoft's Simple Sharing Extensions for RSS and OPML, Ray Ozzie's project. At first glance, I think I prefer it to RSS-SSE because it is explicitly about two-way interaction as well as one-way interaction. RSS-SSE provides a good solution if I am a client application synchronizing information from a master source such as my online calendaring application, but it is still unclear to me how I use the mechanics of RSS-SSE to push my own updates to the server. Of course, RSS-SSE is better for pure synchronization, but GData looks like it would be a better fit for a generic read/write API for data stores on the Web, the same way RSS has become a generic read API for data stores on the Web.
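To illustrate the read/write split described in the excerpt, here's a rough sketch of what talking to a GData-style endpoint might look like; the URLs and entry XML below are made up for illustration and aren't from Google's documentation.

```typescript
// Hypothetical GData-style interaction: GET returns an Atom feed,
// PUT sends a modified Atom entry back, per the Atom Publishing Protocol.
async function readThenUpdate(): Promise<void> {
  // Read: a plain HTTP GET, results come back as Atom.
  const feedRes = await fetch("https://example.com/calendar/feeds/private/full", {
    headers: { Accept: "application/atom+xml" },
  });
  console.log("GET status:", feedRes.status);
  const feedXml = await feedRes.text();
  console.log(feedXml.slice(0, 120));

  // Write: PUT the edited entry back to its own URI.
  const entry = `<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Lunch moved to 1pm</title>
</entry>`;
  const putRes = await fetch("https://example.com/calendar/feeds/private/full/entry1", {
    method: "PUT",
    headers: { "Content-Type": "application/atom+xml" },
    body: entry,
  });
  console.log("PUT status:", putRes.status);
}

readThenUpdate().catch(console.error);
```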

This definitely has Adam Bosworth written all over it. About a year ago, it was obvious from certain posts on the atom-protocol mailing list that Adam had some folks at Google working on using the Atom Publishing Protocol as a generic API for Web stores. I'm surprised it's taken this long for the project to make its way out of the Googleplex. This would be a good time to familiarize yourself with Adam's paper, Learning from THE WEB.

PS: 'Google Data APIs Protocol' is a horrible name. You can tell that ex-Microsoft employees had a hand in this effort. ;)


 

Categories: XML Web Services

April 18, 2006
@ 06:25 PM

I was saddened when I found out that Gretchen Ledgard was leaving Microsoft. We go back a bit; her husband and I interned at the same company back in 1999, which is when we first met. Once I found out what she was going to be doing next, though, I was quite happy to see her take this step.

From the blog post on the JobSyntax blog entitled And so it begins ...

Well, hey there. Gretchen here.  After a long hiatus :), I am back in the blogosphere. Did you miss me?

Let me be the first to officially welcome you to our new home, the JobSyntax Blog.  I’ve got so much to write, but I’ll need to save all the juicy news and advice for future blog entries. So I’ll use this first entry to tell you a little bit about how we got here. It’s been a long, crazy journey.

As long-time readers know, until a few days ago,  I was a Microsoft recruiter.  About 2 ½ years ago, I was hand-picked to join a start-up team within the company’s recruiting organization. Also chosen for that team was Zoe Goldring.  We were tasked with building a community and pipeline of qualified and interested software developers for the company’s openings. (Is this starting to sound like the pilot for a cheesy sitcom yet?) :)

Two years ago, Zoe and I founded the Technical Careers @ Microsoft weblog (JobsBlog) as a way to connect with the developers who were latching onto the blogging phenomenon. Quite honestly, we had no idea what we were doing and definitely not what we were getting ourselves into.

Being the public faces for Microsoft’s technical recruiting effort was both extremely exhilarating and challenging at the same time. Personally, I loved that I had such a positive impact on so many applicants. I joined the recruiting industry because I wanted to improve the candidate experience, and each day, I saw tangible results of my efforts. I impacted so many more candidates via our blog than I ever could as a regular recruiter.
...
Time moved along, and many things changed in our lives – yet we still held onto our promise to each other. Finally, we decided the time was right … Personally, we were both ready for new and different challenges in our careers. Professionally, it seemed that our collective passion for technical recruiting, a positive customer and client experience, and strong jobseeker and employer education coupled with, well, the returned rise of the tech market meant it was time to strike out on our own.  The time was right. The time is now.

So here we are. Welcome to JobSyntax, and we look forward to all the good times ahead.  Let the games begin!

It's great to see Zoe and Gretchen convert their skills and blog cred into their own company. There are a number of folks toiling in the B0rg cube who make me wonder why they don't strike out on their own; Robert Scoble for one. Whenever I decide to hang up my hat at the B0rg cube and go wage slave for someone else, I'll be sure to give the Moongals at JobSyntax a call.

Good luck with your new venture, Zoe and Gretchen!


 

From the Virtual Earth team's blog post Zillow Integrates Virtual Earth into their website, we learn

Zillow.com was already a very cool site for browsing property values, comps, and other home related information, and now they've added Birds Eye Imagery to their application making the experience even more powerful. This is a great example of how the Virtual Earth platform can be used by third party developers to include unique capabilities into their applications. Once the Virtual Earth map control is integrated into a website, the end user can rotate their view North, South, East, and West to examine a property, unlike other mapping platforms that only provide a single look straight down at the roof of a house.

Congrats to the VE folks on getting such a cool web site to build on their API.


 

Categories: Windows Live

David Sifry has posted another one of his State of the Blogosphere blog entries. He writes

In summary:

  • Technorati now tracks over 35.3 Million blogs
  • The blogosphere is doubling in size every 6 months
  • It is now over 60 times bigger than it was 3 years ago
  • On average, a new weblog is created every second of every day
  • 19.4 million bloggers (55%) are still posting 3 months after their blogs are created
  • Technorati tracks about 1.2 Million new blog posts each day, about 50,000 per hour

As usual for this series of posts, Dave Sifry plays fast and loose with language by interchangeably using "the blogosphere" and "the number of blogs Technorati is tracking". There is a big difference between the two, but unfortunately many people seem to fail at critical thinking and repeat Technorati's numbers as gospel. It's now general knowledge that services like MySpace and MSN Spaces have more blogs/users than Technorati tracks overall.

I find this irritating because I've seen lots of press reports underreport the estimated size of the blogosphere by quoting the Technorati numbers. I suspect that the number of blogs out there is closer to 100 million (you can get that just by adding up the number of blogs on the three or four most popular blogging services), not hovering around 35 million. One interesting question for me is whether private blogs/journals/spaces count as part of the blogosphere. Then again, for most people the blogosphere is limited to their particular interests (technology blogs, mommy blogs, politics blogs, etc.), so that is probably a moot question.

PS: For a good rant about another example of Technorati playing fast and loose with language, see Shelley Powers's Technology is neither good nor evil, which riffs on how Technorati equates the number of links to a weblog with authority.


 

April 17, 2006
@ 04:03 PM

Robert Scoble has a blog post entitled Halfway through my blog vacation (change in comment policy) where he writes

But, mostly, this past week was about change.

Some things I've changed? 1) No more coffee. 2) No more soda. 3) Xercising. 4) No more unhappy people in my life. 5) Get balance back in my own life.
...
One of my most memorable conversations, though, was with Buzz Bruggeman, CEO of ActiveWords and a good friend. He told me to hang around people who are happy. And I realized I had been listening to too many people who were deeply unhappy and not bringing any value into my life. He told me to listen to this recording on NPR about "finding happiness in a Harvard Classroom." He also told me about the four agreements, which are Don Miguel Ruiz's code for life. Good stuff.

Over the past year I've been on a mission to simplify my life piece by piece. Along the way I've made some promises to myself which I've kept and others which have been more difficult to stick with.

Health: I enrolled in the 20/20 Lifestyles program at the health club near Microsoft about six months ago. Since then I've lost just over 60 pounds (27.5 kilos for my metric peeps). This week is my last week with my personal trainer and dietician before I'm on my own. I had hoped to lose more weight, but last month was somewhat disruptive to my schedule with my mom being in town for two weeks and travelling for ETech 2006, SPARK and New York to see my dad. I am somewhat proud that I gained less than 2 pounds even though my schedule was a complete mess. I've kept two promises to myself about my health: I'll work out 5 days a week and will keep my daily caloric intake within 2000 calories a day 5 days a week [but never over 3000 calories in one day]. The exercise promise has been easy to keep but the diet promise has been harder than I'd like. Eating out is the hard part. Giving up soda for water was easier than I thought.

Work/Life Balance: I also decided to be better at compartmentalizing my work and home life. I promised myself not to spend more than 10.5 hours a day at work [in by 9AM, out by 7-7.30 PM at the latest] and to stop using the VPN to connect to work when I'm at home. I've also tried to stop checking work email from home on weekday evenings and will check it only once a day on weekends. If I'm in crunch mode for a particular deadline then this may change temporarily. Last week I averaged about 14 hours a day at work because I had a deadline I wanted to hit for Friday. However, I didn't want that to mean getting home late, since I value spending dinner time with my girlfriend, so I left for work much earlier in the day last week. This week I'm back to my regular schedule.

Professional Work Load: Last year, I worked on lots of things I was interested in simultaneously. I worked on the social networking platform for Windows Live, replacing MSN Member Directory with MSN Spaces Profiles, photo storage for Windows Live services from MSN Spaces to Windows Live Expo, and a bunch of other stuff which hasn't shipped so I can't mention it here. This was just the stuff my boss had me working on. There was also stuff I was interested in that I just worked on without being explicitly told to, such as organizing efforts around the MSN Windows Live developer platform (see http://msdn.microsoft.com/live) and keeping the spark alive on getting an RSS platform built for Windows Live. This was a lot of stuff to try to fit into a workday besides all the other crap that fills your day (meetings, meetings, meetings). At my last review, I got some feedback that some folks on my team felt they weren't getting my full attention because I spent so much time on 'extracurricular' activities. Although I was initially taken aback by this feedback, I realized there was some truth to it. Since then I've been working on handing off some of the stuff I was working on that wasn't part of my job requirements. Thanks in part to the positive response to my ThinkWeek paper, there is now an entire team of people working on the stuff I was driving around the Windows Live developer platform last year. You should keep an eye on the blogs of folks like Ken Levy and Danny Thorpe to learn what we have planned in this arena. The RSS platform for Windows Live spark has now been fanned into a flame and I worked hard to get Niall Kennedy to join us to drive those efforts. Realizing I can't work on everything I am interested in has been liberating.

Geeking at Home: I've cut down on how much time I spend reading blogs and don't subscribe to any mailing lists. Even on the blogs I read, I try to avoid comment sections that have more negative energy than I can stomach, which means skipping the comments on the Mini-Microsoft blog most days of the week. Even at work, I subscribe to only two or three distribution lists that aren't for my team or specific projects I am working on. I don't plan to have concurrent side projects going on at home anymore. I'll keep working on RSS Bandit for the foreseeable future. Whenever there is a lull in development, such as after a major release, I may work on an article or two. However, I won't have two or three article drafts going at the same time while also being in bug-fixing mode, which used to be the norm for me a year or two ago.


I wish Robert luck in his plan to simplify his life and improve his health.


 

Categories: Personal

April 17, 2006
@ 03:05 PM

I'm still continuing my exploration of the philosophy behind building distributed applications following the principles of the REpresentational State Transfer (REST) architectural style and Web-style software. Recent comments in my blog have introduced a perspective that I hadn't considered much before.

Robert Sayre wrote

Reading over your last few posts, I think it's important to keep in mind there are really two kinds of HTTP. One is HTTP-For-Browsers, and one is HTTP-For-APIs.

API end-points encounter a much wider variety of clients that actually have a user expecting something coherent--as opposed to bots. Many of those clients will have less-than robust HTTP stacks. So, it turns out your API end-points have to be much more compliant than whatever is serving your web pages.

Sam Ruby wrote

While the accept header is how you segued into this discussion, Ian's and Joe's posts were explicitly about the Content-Type header.

Relevant to both discussions, my weblog varies the Content-Type header it returns based on the Accept header it receives, as there is at least one popular browser that does not support application/xhtml+xml.

So... Content-Type AND charset are very relevant to IE7. But are completely ignored by RSSBandit. If you want to talk about “how the Web r-e-a-l-l-y works”, you need to first recognize that you are talking about two very different webs with different sets of rules. When you talk about how you would invest Don's $100, which web are you talking about?
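What Sam describes is garden-variety server-driven content negotiation. A minimal sketch of the idea in a Node-style handler (hypothetical, not Sam's actual setup):

```typescript
import * as http from "http";

// Send application/xhtml+xml only to clients that advertise support for
// it in their Accept header; everyone else (e.g. IE) gets text/html.
const server = http.createServer((req, res) => {
  const accept = req.headers.accept ?? "";
  const wantsXhtml = accept.includes("application/xhtml+xml");
  res.writeHead(200, {
    "Content-Type": wantsXhtml
      ? "application/xhtml+xml; charset=utf-8"
      : "text/html; charset=utf-8",
  });
  res.end("<html><body><p>Hello</p></body></html>");
});

server.listen(8080);
```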

The two-kinds-of-HTTP distinction is an interesting one and makes me re-evaluate my reasons for being interested in RESTful web services. I see two main arguments for using RESTful approaches to building distributed applications on the Web. The first is that it is simpler than other approaches to building distributed applications that the software industry has cooked up. The second is that it has been proven to scale on the Web.

The second reason is where it gets interesting. Once you start reading articles on building RESTful web services, such as Joe Gregorio's How to Create a REST Protocol and Dispatching in a REST Protocol Application, you realize that the way REST advocates say one should build RESTful applications differs from how the Web actually works. Few web applications support HTTP methods other than GET and POST, few send out the correct MIME types when returning data to clients, many use cookies for storing application state instead of allowing hypermedia to be the engine of application state (i.e. keeping the state in the URL), and in a surprisingly large number of cases the markup in documents being transmitted is invalid or malformed in some way. Yet the Web still works.
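The kind of dispatching Joe's articles describe amounts to routing on the HTTP method as well as the URI, rather than funnelling everything through GET and POST. A hedged sketch of that idea (the handlers are placeholders, not Joe's code):

```typescript
import * as http from "http";

// Route on the full set of HTTP methods instead of only GET and POST.
const handlers: Record<string, (res: http.ServerResponse) => void> = {
  GET: (res) => res.end("a representation of the resource"),
  PUT: (res) => res.end("resource replaced"),
  DELETE: (res) => res.end("resource removed"),
  POST: (res) => res.end("subordinate resource created"),
};

http
  .createServer((req, res) => {
    const handler = handlers[req.method ?? ""];
    if (handler) {
      handler(res);
    } else {
      // Tell the client which methods this resource actually supports.
      res.writeHead(405, { Allow: Object.keys(handlers).join(", ") });
      res.end();
    }
  })
  .listen(8081);
```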

REST is an attempt to formalize the workings of the Web ex post facto. However, it describes an idealized view of how the Web works, and in many cases the reality of the Web deviates significantly from what advocates of RESTful approaches preach. The question is whether this disconnect invalidates the teachings of REST. I think the answer is no.

In almost every case I've described above, the behavior of client applications and the user experience would be improved if HTTP [and XML] were used correctly. This isn't supposition; as the developer of an RSS reader, my life and that of my users would be better if servers emitted the correct MIME types for their feeds, if feeds were always at least well-formed, and if feeds always pointed to related metadata/content such as comment feeds (i.e. hypermedia as the engine of application state).
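As a sketch of what I mean: a feed consumer that could trust the transport metadata stays simple, while a real-world aggregator has to second-guess everything. The URL below is hypothetical.

```typescript
// A feed client that could rely on correct server behavior would be
// this simple; in practice aggregators must sniff, because servers
// routinely label Atom/RSS feeds as text/html or text/plain.
async function fetchFeed(url: string): Promise<Document> {
  const response = await fetch(url);
  const contentType = response.headers.get("Content-Type") ?? "";
  if (!/(atom|rss|xml)/i.test(contentType)) {
    console.warn(`Suspicious Content-Type for a feed: ${contentType}`);
  }

  const doc = new DOMParser().parseFromString(await response.text(), "application/xml");
  // Browsers surface XML well-formedness errors as a <parsererror> element.
  if (doc.getElementsByTagName("parsererror").length > 0) {
    throw new Error("Feed is not well-formed XML");
  }
  return doc;
}

fetchFeed("https://example.com/blog/atom.xml")
  .then((doc) => console.log("Root element:", doc.documentElement.nodeName))
  .catch((err) => console.error(err));
```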

Let's get back to the notion of the Two Webs. Right now, there is the primarily HTML-powered Web, whose primary clients are Web browsers and search engine bots. For better or worse, over time Web browsers have had to deal with the fact that Web servers and webmasters ignore several rules of the Web, from using incorrect MIME types for files to serving malformed/invalid documents. This has cemented hacks and bad practices as the status quo on the HTML Web. It is unlikely this is going to change anytime soon, if ever.

Where things get interesting is that we are now using the Web for more than serving Web documents to Web browsers. The primary clients for these documents aren't the Web browsers written by Microsoft and Netscape/AOL/Mozilla and the bots of a handful of search engines. For example, with RSS/Atom we have hundreds of clients, with more to come as the technology becomes more mainstream. Also, as Web APIs become more popular, more and more Web sites are exposing services to the world using RESTful approaches. In all of these examples, there is justification for being more rigorous in the way one uses HTTP than one would be when serving HTML documents for one's web site.

In conclusion, I completely agree with Robert Sayre's statement that there are really two kinds of HTTP. One is HTTP-For-Browsers, and one is HTTP-For-APIs.

When talking about REST and HTTP-For-APIs, we should be careful not to learn the wrong lessons from how HTTP-For-Browsers is used today.