March 4, 2008
@ 04:00 AM

Dean Hachamovitch, who runs the Internet Explorer team, has a blog post entitled Microsoft's Interoperability Principles and IE8 where he addresses some of the recent controversy about how rendering pages according to Web standards will work in IE8. He wrote:

The Technical Challenge

One issue we heard repeatedly during the IE7 beta concerned sites that looked fine in IE6 but looked bad in IE7. The reason was that the sites had worked around IE6 issues with content that – when viewed with IE7’s improved Standards mode – looked bad.

As we started work on IE8, we thought that the same thing would happen in the short term: when a site hands IE8 content and asks for Standards mode, that content would expect IE7’s Standards mode and not appear or function correctly. 

In other words, the technical challenge here is how can IE determine whether a site’s content expects IE8’s Standards mode or IE7’s Standards mode? Given how many sites offer IE very different content today, which should IE8 default to?

Our initial thinking for IE8 involved showing pages requesting “Standards” mode in an IE7’s “Standards” mode, and requiring developers to ask for IE8’s actual “Standards” mode separately. We made this decision, informed by discussions with some leading web experts, with compatibility at the top of mind.

In light of the Interoperability Principles, as well as feedback from the community, we’re choosing differently. Now, IE8 will show pages requesting “Standards” mode in IE8’s Standards mode. Developers who want their pages shown using IE8’s “IE7 Standards mode” will need to request that explicitly (using the http header/meta tag approach described here).

Going Forward

Long term, we believe this is the right thing for the web.

I’m glad someone came to this realization. The original solution was simply unworkable in the long term, regardless of how much short-term pain it eased. Kudos to the Internet Explorer team for taking the long view and doing what is best for the Web. Is it just me, or is that the most positive the comments have ever been on the IE blog?
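
For those who haven't followed the linked posts, the opt-in mechanism is an X-UA-Compatible HTTP header or meta tag. As the IE team has described it, a page that wants IE8 to keep rendering it in IE7's Standards mode would include something like:

    <!-- In the document head: render this page in IE7's Standards mode,
         even though IE8's Standards mode is now the default -->
    <meta http-equiv="X-UA-Compatible" content="IE=7" />

The same value can be sent as an HTTP response header (X-UA-Compatible: IE=7) to opt in an entire site at the web server. Pages that send neither get IE8's Standards mode by default, which is exactly the change being announced here.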

PS: It is interesting to note that this is the second time in the past week Microsoft has announced a technology direction related to Web standards and changed it based on feedback from the community.

Now playing: Usher - In This Club (feat. Young Jeezy)

Categories: Web Development

David Treadwell has a blog post on the Windows Live Developer blog entitled David Treadwell on New and Updated Windows Live Platform Services where he previews some of the announcements that folks will get to dig into at MIX 08. There are a lot of items of note in his post, but a few stood out enough that I felt they were worth calling out.

Windows Live Messenger Library (new to beta) – “Develop your own IM experience”

We are also opening up the Windows Live Messenger network for third-party web sites to reach the 300 million+ Windows Live Messenger users. The library is a JavaScript client API, so the user experience is primarily defined by the third party. When a third party integrates the Windows Live Messenger Library into their site they can define the look & feel to create their own IM experience. Unlike the existing third party wrappers for the MSN Protocol (the underlying protocol for Windows Live Messenger) the Windows Live Messenger Library securely authenticates users, therefore their Windows Live ID credentials are safe.

A couple of months ago we announced the Windows Live Messenger IM Control which enables you to embed an AJAX instant messaging window on any webpage so people can start IM conversations with you. I have one placed at http://carnage4life.spaces.live.com and it’s cool to have random readers of my blog start up conversations with me in the middle of my work day or at home via the IM control.

The team that delivered this has been hard at work, and now they've built a library that enables any developer to build similar experiences on top of the Windows Live Messenger network. Completely customized IM integration is now available for anyone who wants it. Sweet. Kudos to Keiji, Steve Gordon, Siebe and everyone else who had something to do with this for getting it out the door.

An interesting tidbit is that the library was developed in Script#. Three cheers for code generation.

Contacts API (progressed to Beta) – “Bring your friends”

Our goal is to help developers keep users at the center of their experience by letting them control their data and contact portability, while keeping their personal information private. A big step forward in that effort is today’s release to beta of Windows Live Contacts API. Web developers can use this API in production to enable their customers to transfer and share their contacts lists in a secure, trustworthy way (i.e., no more screen scraping)—a great step on the road toward data portability. (For more on Microsoft’s view on data portability, check out Inder Sethi’s video.) By creating an optimized mode for invitations, it allows users to share only the minimum amount of information required to invite friends to a site, this includes firstname / lastname / preferred email address. The Contacts API uses the new Windows Live ID Delegated Authentication framework; you can find out more here.

A lot of the hubbub around “data portability” has really been about exporting contact lists. Those of us working on the Contacts platform at Windows Live realize that there is a great demand for users to be able to access their social graph data securely from non-Microsoft services.  

The Windows Live Contacts API provides a way for Windows Live users to give an application permission to access their contact list in Windows Live (i.e. Hotmail address book/Live Messenger buddy list) without giving the application their username and password. It is our plan to kill the password anti-pattern when it comes to Windows Live services. If you are a developer of an application or Web site that screen scrapes Hotmail contacts, I’d suggest taking a look at this API instead of continuing in this unsavory practice.
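
To make the contrast with the password anti-pattern concrete, here is the rough shape of a delegated flow in C#. Every URL, parameter and header below is a hypothetical placeholder rather than the real Windows Live ID protocol; the point is the shape of the flow: the user consents on a page hosted by the identity provider, the application receives a scoped token, and the user's password never touches the third-party site.

    // Hypothetical sketch of delegated access to a contacts API.
    // The URLs, parameters and header format are invented placeholders;
    // see the Windows Live ID Delegated Authentication docs for the
    // actual protocol.
    using System;
    using System.Net;

    class DelegatedAuthSketch
    {
        static void Main()
        {
            // 1. Send the user to the identity provider's consent page
            //    instead of asking for their username and password.
            string consentUrl = "https://consent.example.com/grant" +
                "?app=MY_APP_ID&offer=Contacts.Read" +
                "&returnUrl=https://mysite.example.com/callback";
            Console.WriteLine("Redirect the user to: " + consentUrl);

            // 2. After the user consents, the provider redirects back to
            //    returnUrl carrying a scoped, expiring delegation token.
            string delegationToken = "TOKEN_FROM_CALLBACK";

            // 3. Call the contacts endpoint with the token. No password
            //    ever changes hands, and the grant can be revoked.
            var request = (HttpWebRequest)WebRequest.Create(
                "https://contacts.example.com/users/@me/contacts");
            request.Headers["Authorization"] =
                "DelegatedToken dt=\"" + delegationToken + "\"";
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Status: " + response.StatusCode);
            }
        }
    }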

Atom Publishing Protocol (AtomPub) as the future direction

Microsoft is making a large investment in unifying our developer platform protocols for services on the open, standards-based Atom format (RFC 4287) and the Atom Publishing Protocol (RFC 5023). At MIX we are enabling several new Live services with AtomPub endpoints which enable any HTTP-aware application to easily consume Atom feeds of photos and for unstructured application storage (see below for more details). Or you can use any Atom-aware public tools or libraries, such as .NET WCF Syndication to read or write these cloud service-based feeds.

In addition, these same protocols and the same services are now ADO.NET Data Services (formerly known as “Project Astoria”) compatible. This means we now support LINQ queries from .NET code directly against our service endpoints, leveraging a large amount of existing knowledge and tooling shared with on-premise SQL deployments.

The first question that probably pops into the minds of regular readers of my blog is, “What happened to Web3S and all that talk about AtomPub not being a general-purpose editing format for the Web?” The fact is that when we listened to the community of Web developers, the feedback was overwhelmingly clear: people would rather we work with the community to make AtomPub fit the scenarios we felt it wasn't suited for than have Microsoft create a competing proprietary protocol.

We listened, and now here we are. If you are interested in the technical details of how Microsoft plans to use AtomPub and how we've dealt with the various issues we originally had with the protocol, I suggest subscribing to the Astoria team's blog and checking out the various posts on this topic by Pablo Castro. There's a good post by Pablo discussing how Astoria describes relations between elements in AtomPub and suggesting a mechanism for doing inline expansion of links. I'll be providing my thoughts on each of Pablo's posts and the responses as I find time over the coming weeks.
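
As an aside, consuming one of these Atom feeds from .NET 3.5 really is trivial with the WCF Syndication classes mentioned above. A minimal sketch; the feed URL is a placeholder, since the new Live endpoints aren't documented yet:

    // Reading an Atom feed with WCF Syndication (.NET 3.5).
    // The feed URL is a placeholder, not a real Windows Live endpoint.
    using System;
    using System.ServiceModel.Syndication;
    using System.Xml;

    class AtomReader
    {
        static void Main()
        {
            using (XmlReader reader =
                XmlReader.Create("https://photos.example.com/feed.atom"))
            {
                SyndicationFeed feed = SyndicationFeed.Load(reader);
                foreach (SyndicationItem item in feed.Items)
                {
                    Console.WriteLine("{0} (updated {1:d})",
                        item.Title.Text, item.LastUpdatedTime);
                }
            }
        }
    }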

Windows Live Photo API (CTP Refresh with AtomPub endpoint)

The Windows Live Photo API allows users to securely grant permission (via Delegated Authentication) for a third party web site to create/read/update/delete the photos they store in Windows Live. The Photo API refresh has several things which make it easier and faster for third parties to implement.

  • Third party web sites can now link/refer to images directly from the web browser so they no longer need to proxy images, effectively saving on image bandwidth bills.
  • A new AtomPub endpoint which makes it even easier to integrate.

At the current time, I can’t find the AtomPub endpoint but that’s probably because the documentation hasn’t been refreshed. Moving the API to AtomPub is one of the consequences of the decision to standardize on AtomPub for Web services provided by Windows Live. Although I was part of the original decision to expose the API using WebDAV, I like the fact that all of our APIs will utilize a standard protocol and can take advantage of the breadth of Atom and AtomPub libraries that exist on various platforms.

I need to track down the AtomPub endpoint so I can compare and contrast it with the WebDAV version to see what we've gained and/or lost in the translation. Stay tuned.

Now playing: Jay-Z - Can't Knock the Hustle

Categories: Windows Live | XML Web Services

Over the past week, two Windows Live teams have shipped some good news to their users. The Windows Live SkyDrive team addressed the two most frequently raised issues with their service with the announcements in their post Welcome to the bigger, better, faster SkyDrive! which reads:

You've made two things clear since our first release: You want more space; and you want SkyDrive where you are. Today we're giving you both. You now have five times the space you had before — that’s 5GB of free online storage for your favorite documents, pictures, and other files.

SkyDrive is also available now in 38 countries/regions. In addition to Great Britain, India, and the U.S., we’re live in Argentina, Australia, Austria, Belgium, Bolivia, Brazil, Canada, Chile, Colombia, Denmark, the Dominican Republic, Ecuador, El Salvador, Finland, France, Guatemala, Honduras, Italy, Japan, Mexico, the Netherlands, New Zealand, Nicaragua, Norway, Panama, Paraguay, Peru, Puerto Rico, Portugal, South Korea, Spain, Sweden, Switzerland, Taiwan, and Turkey.

Wow, Windows Live is just drowning our customers in free storage. That's 5GB in SkyDrive and 5GB for Hotmail.

The Windows Live Spaces team shipped some sweetness to their customers as well. This feature is a little nearer to my heart since it relies on Contacts platform APIs I worked on a little while ago. The feature is described by Michelle on their team blog in a post entitled More information on Friends in common, which states:

In the friends module on another person’s space, there is a new area that highlights friends you have in common.  Right away you can see the number of people you both know and the profile pictures of some of those friends. 

Want to see the rest of your mutual friends?  Click on In common and you’re taken to a full page view that shows all of your friends as well as separate lists of friends in common and friends that you don't have in common.  This way you can also discover new people that you might know in real life, but are not connected with on Windows Live.

[Screenshots: the friends in common module and the full-page friends view]

Finding friends in common is also especially important when planning an event on Windows Live Events.  Who wants to go to a party when none of your friends are going? 

On the Guest list area of every event, you can now quickly see how many of your friends have also been invited to the event.  Just click on See who’s going and see whether or not your friends are planning to go. 

[Screenshot: an event guest list showing friends in common]

Showing mutual friends, as seen above, is one of those small features that make a big impact on the user experience. Nice work, Michelle and Shu, on getting this out the door.

Now playing: Iconz - I Represent

Categories: Windows Live

I found Charles Hudson’s post FriendFeed and the Facebook News Feed - FriendFeed is For Sharing and Facebook Used to be About my Friends somewhat interesting since one of the things I’ve worked on recently is the What’s New page on Windows Live Spaces. He writes:

I was reading this article on TechCrunch “Facebook Targets FriendFeed; Opening Up The News Feed” and I found it kind of interesting. As someone who uses FriendFeed a lot and uses Facebook less and less, I don’t think the FriendFeed team should spend much time worrying about this announcement. The reason is really simple.

In the beginning, the Facebook News Feed was really interesting. It was all information about my friends and what they were doing. Over time, it’s become a lot less interesting.

I would like to see Facebook separate “news” from “activity” - “news” is stuff that happened to people (person x became friends with person y, person x is no longer in a relationship, status updates, etc) and “activities” are stuff related to applications, content sharing, etc. Trying to stuff news and activity into the same channel results in a lot of chaos and noise.

FriendFeed is really different. To me, FriendFeed is a community of people who like to share stuff. That’s a very different product proposition than what the News Feed originally set out to do.

This is an example of a situation where I agree with the sentiment in Jeff Atwood’s post I Repeat: Do Not Listen to Your Users. This isn’t to say that Charles Hudson’s increasingly negative user experience with Facebook should be discounted, or that the things he finds interesting about FriendFeed are invalid. The point is that, in typical end user fashion, Charles’s complaints contradict themselves and his suggestions wouldn’t address the actual problems he seems to be having.

The main problem Charles has with the news feed on Facebook is its increased irrelevance due to massive amounts of application spam. This has nothing to do with FriendFeed being more of a community site than Facebook. It also has nothing to do with separating “news” from “activity” (whatever that means). Instead it has everything to do with the fact that the Facebook platform is an attractive target for applications attempting to “grow virally” by sending all sorts of useless crap to people’s friends. FriendFeed doesn’t have that problem because everything that shows up in your feed is pulled from a carefully selected list of services, shown below.

[Image: the 28 services supported by FriendFeed]

The thing about the way FriendFeed works is that there is little chance that stuff in the feed would be considered spammy because the content in the feed will always correspond to a somewhat relevant user action (Digging a story, adding a movie to a Netflix queue, uploading photos to Flickr, etc).

So this means one way Facebook can add relevance to the content in their feed is to pull data in from more valid sources instead of relying on spammy applications pushing useless crap like “Dare’s level 17 zombie just bit Rob’s level 12 vampire”. 

That’s interesting, but there is more. There doesn’t seem to be any tangible barrier to entry in the “market” FriendFeed is targeting, since all they seem to be doing is pulling the public RSS feeds from a handful of Web sites. This is the kind of project I could knock out in two months. The hard part is having a scalable RSS processing platform. However, we know Facebook already has one for the feature that allows users to import blog posts as Notes. So that makes it the kind of feature an enterprising dev at Facebook could knock out in a week or two.
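
To illustrate how thin that layer is, here is a toy version of the core idea in C#: fetch a user's public feeds and merge them into one reverse-chronological stream. The feed URLs are made up, and a real service would of course need caching, scheduled polling and the scalable fetch pipeline mentioned above.

    // Toy FriendFeed: merge public RSS/Atom feeds into one stream.
    // Requires .NET 3.5 (System.ServiceModel.Syndication).
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.ServiceModel.Syndication;
    using System.Xml;

    class MiniFriendFeed
    {
        static void Main()
        {
            // Placeholder URLs for one user's public feeds.
            string[] feedUrls = {
                "http://example.com/alice/flickr.rss",
                "http://example.com/alice/digg.rss"
            };

            var items = new List<SyndicationItem>();
            foreach (string url in feedUrls)
            {
                using (XmlReader reader = XmlReader.Create(url))
                {
                    // SyndicationFeed.Load handles both RSS 2.0 and Atom 1.0.
                    items.AddRange(SyndicationFeed.Load(reader).Items);
                }
            }

            // One reverse-chronological activity stream.
            foreach (var item in items.OrderByDescending(i => i.PublishDate))
                Console.WriteLine("{0:g}  {1}", item.PublishDate, item.Title.Text);
        }
    }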

The only thing FriendFeed may have going for it is the community that ends up adopting it. The tricky thing about social software is that your users are as pivotal to your success as your features. Become popular with the right kind of users and your site blows up (e.g. MySpace); with a different set of users, your site eventually stagnates due to its niche nature (e.g. LiveJournal).

FriendFeed reminds me of Odeo: a project by some formerly successful entrepreneurs that jumps on a hyped bandwagon without actually scratching an itch the founders have or fully understanding the space.

Now playing: Jae Millz - No, No, No (remix) (feat. Camron & T.I.)

Categories: Social Software

February 27, 2008
@ 03:51 PM

MSFT vs GOOG vs AAPL 

Now playing: Supremes - Where Did Our Love Go?

I'm slowly working towards the goal of making RSS Bandit a desktop RSS client for Google Reader, NewsGator Online and Exchange (via the Windows RSS platform). Today I made some progress integrating with the Windows RSS platform, but as with any integration story there is some good news and some bad news. The good news can be seen in the screenshot below.

RSS Bandit and Internet Explorer sharing the same feed list

The good news is that, for the most part, the core application has been refactored to transparently support loading feeds from sources such as the Windows RSS platform or the RSS Bandit feed cache. It should take one or two more weekends, and then I can move on to adding similar support for synchronizing feeds from Google Reader.
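
The shape of that refactoring is essentially a feed-source abstraction. The interface below is a hypothetical sketch of what such an abstraction might look like; these are not RSS Bandit's actual type names.

    // Hypothetical feed-source abstraction; not RSS Bandit's actual types.
    using System;
    using System.Collections.Generic;
    using System.ServiceModel.Syndication;

    public interface IFeedSource
    {
        // Enumerate subscriptions, whether they live in the RSS Bandit feed
        // cache, the Windows RSS platform common feed list, or a web service
        // like Google Reader or NewsGator Online.
        IEnumerable<Uri> GetFeedUrls();

        // Fetch the current items for a single subscribed feed.
        SyndicationFeed GetFeed(Uri feedUrl);

        // Push local changes (read/unread state, new subscriptions)
        // back to the underlying store.
        void Synchronize();
    }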

The bad news is that using the Windows RSS platform has been a painful exercise. The biggest problem was that, for reasons I couldn't fathom, I wasn't receiving events from the Windows RSS platform; the exact same code received events when run from a standalone program, but the event handlers were never triggered when it ran inside RSS Bandit. That turned out to be a stupid oversight on my part, and with it figured out we're about 80% done with integration with the Windows RSS platform. There are lots of smaller issues too, such as the fact that there is no event that indicates an enclosure has finished downloading, although the documentation seems to imply that FeedDownloadCompleted does double duty. Then there are the various exceptions that can occur when accessing properties of a feed, including a BadImageFormatException when accessing IFeed.Title if the underlying feed file has somehow been corrupted, or a COMException complaining “Element not found” if you access IFeed.DownloadUrl before you've attempted to download the feed.
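
If you're curious what coding against the platform looks like, here is a defensive sketch in C#. It assumes a COM interop assembly generated from msfeeds.dll (the Microsoft.Feeds.Interop namespace); this is illustrative, not RSS Bandit's actual integration code.

    // Defensively reading feed properties from the Windows RSS platform.
    // Assumes a COM interop assembly for msfeeds.dll; a sketch, not
    // RSS Bandit's actual code.
    using System;
    using System.Runtime.InteropServices;
    using Microsoft.Feeds.Interop;

    class FeedLister
    {
        static void Main()
        {
            var manager = new FeedsManager();
            var root = (IFeedFolder)manager.RootFolder;
            var feeds = (IFeedsEnum)root.Feeds;
            for (int i = 0; i < feeds.Count; i++)
            {
                var feed = (IFeed)feeds.Item(i);
                try
                {
                    // Title can throw if the underlying feed file is
                    // corrupted; DownloadUrl throws "Element not found"
                    // before the feed has ever been downloaded.
                    Console.WriteLine("{0} -> {1}", feed.Title, feed.DownloadUrl);
                }
                catch (BadImageFormatException) { /* corrupted feed file */ }
                catch (COMException) { /* e.g. feed not downloaded yet */ }
            }
        }
    }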

I've used up my budget of free time for coding this weekend so I'll start up again next weekend. In the meantime, if you have any tips on working with the Windows RSS platform from C#, don't hesitate to share.

Now playing: Bone Thugs 'N Harmony - No Surrender

Categories: RSS Bandit

Sam Ruby has an insightful response to Joe Gregorio in his post APP Level Patch where he writes:

Joe Gregorio: At Google we are considering using PATCH. One of the big open questions surrounding that decision is XML patch formats. What have you found for patch formats and associated libraries?

I believe that looking for an XML patch format is looking for a solution at the wrong meta level.  Two examples, using AtomPub:

  • In Atom, the order of elements in an entry is not significant.  AtomPub servers often do not store their data in XML serialized form, or even in DOM form.  If you PUT an entry, and then send a PATCH based on the original serialization, it may not be understood.
  • A lot of data in this world is either not in XML, or if it is in XML, is simply there via tunneling.  Atom elements are often merely thin wrappers around HTML.  HTML has a DOM, and can be flattened into a sequence of SAX-like events, just like XML can be.

I totally agree with Sam. A generic “XML patch format” is the wrong solution. At Microsoft we had several different XML patch formats produced by the same organization because each targeted a different scenario:

  • Diffgram: Represent a relational database table and changes to it as XML.
  • UpdateGram: Represent changes to an XML view of one or more relational database tables, optionally including a mapping from relational <-> XML data
  • Patchgram: Represent infoset level differences between two XML documents

Of course, these are one-line summaries, but you get the point. Depending on your constraints, you’ll end up with a different set of requirements. Quick test: tell me why one would choose Patchgrams over XUpdate, and vice versa.

Given the broad set of constraints that will exist in different server implementations of the Atom Publishing Protocol, a generic XML patch format will have lots of features which just don’t make sense (e.g. XUpdate can create processing instructions, Patchgrams use document ordered positions of nodes for matching).
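
For concreteness, XUpdate expresses edits as an XML document whose operations address target nodes via XPath. A representative snippet, reconstructed from memory of the XUpdate working draft, so treat the details as approximate:

    <!-- Change a text value and append a processing instruction -->
    <xupdate:modifications version="1.0"
        xmlns:xupdate="http://www.xmldb.org/xupdate">
      <xupdate:update select="/addresses/address[1]/town">Los Angeles</xupdate:update>
      <xupdate:append select="/addresses">
        <xupdate:processing-instruction name="format">lang=en</xupdate:processing-instruction>
      </xupdate:append>
    </xupdate:modifications>

A server whose backing store has no notion of processing instructions, or no stable document order, simply cannot honor operations like these, which is the point about mismatched constraints.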

If you decide you really need a patch format for Atom documents, your best bet is working with the community to define one or more formats specific to the unique constraints of the Atom syndication format, instead of hoping that there is a generic XML patch format out there you can shoehorn into a solution. In the words of Joe Gregorio’s former co-worker, “I make it fit!”

Personally, I think you’ll still end up with so many different requirements (Atom stores backed by actual text documents will have different concerns from those backed by relational databases) and such spottiness in support for the capability that you are best off walking away from this problem by fixing your data model. As I said before, if you have sub-resources which you think should be individually editable, then give them URIs and make them resources as well, complete with their own atom:entry elements.
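
In other words, rather than patching a fragment buried inside some larger entry, promote the fragment to a resource. A sketch, with invented URIs, of what the individually editable sub-resource's entry might look like:

    <!-- The sub-resource gets its own URI and its own editable entry -->
    <entry xmlns="http://www.w3.org/2005/Atom">
      <id>tag:example.org,2008:albums/42/photos/7</id>
      <title>Photo 7</title>
      <updated>2008-02-28T12:00:00Z</updated>
      <author><name>Example Author</name></author>
      <!-- Its own edit URI means clients PUT to it directly;
           no patch format needed -->
      <link rel="edit" href="http://example.org/albums/42/photos/7"/>
      <content type="image/jpeg" src="http://example.org/media/photo7.jpg"/>
    </entry>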

Now playing: Oomp Camp - Time To Throw A Chair

Categories: XML Web Services

February 23, 2008
@ 04:00 AM

About five years ago, I was pretty active on the XML-DEV mailing list. One of the discussions that cropped up every couple of weeks (aka permathreads) was whether markup languages could be successful if they were not simple enough that a relatively inexperienced developer could “View Source” and figure out how to author documents in that format. HTML (and to a lesser extent RSS) are examples of the success of the “View Source” principle. Danny Ayers had a classic post on the subject titled The Legend of View ‘Source’ which is excerpted below

Q: How do people learn markup?
A: 'View Source'.

This notion is one of the big guns that gets wheeled out in many permathreads - 'binary XML', 'RDF, bad' perhaps even 'XML Schema, too complicated'. To a lot of people it's the show stopper, the argument that can never be defeated. Not being able to view source is the reason format X died; being able to view source is the reason for format Y's success.

But I'm beginning to wonder if this argument really holds water any more. Don't get me wrong, I'm sure it certainly used to be the case, that many people here got their initial momentum into XML by looking at that there text. I'm also sure that being able to view existing source can be a great aid in learning a markup language. What I'm questioning is whether the actual practice of 'View Source' really is so widespread these days, and more importantly whether it offers such benefits for it to be a major factor in language decisions. I'd be happy with the answer to: are people really using 'View Source' that much? I hear it a lot, yet see little evidence.

One last point, I think we should be clear about what is and what isn't 'View Source'. If I need an XSLT stylesheet the first thing I'll do is open an existing stylesheet and copy and paste half of it. Then I'll get Michael's reference off the shelf. I bet a fair few folks here have the bare-bones HTML 3.2 document etched into their lower cortex. But I'd argue that nothing is actually gained from 'View Source' in this, all it is is templating, the fact that it's a text format isn't of immediate relevance.

The mistake Danny made in his post was taking the arguments in favor of “View Source” literally. In hindsight, I think the key point of the “View Source” clan was that there is a lot of cargo cult programming going on in the world of Web development. Whether it is directly via the View Source feature of popular Web browsers or simply cutting and pasting code found at places like QuirksMode, A List Apart and W3Schools, the fact is that lots of people building Web pages and syndication feeds are using technology and techniques they barely understand on a daily basis.

Back in the days when this debate came up, the existence of these markup cargo cults was celebrated because it meant that the ability to author content on the Web was available to the masses, which is still the case today (yaaay, MySpace). However, there have been a number of downsides to the wide adoption of [X]HTML, CSS and other Web authoring technologies by large numbers of semi-knowledgeable developers and technologically challenged content authors.

One of these negative side effects has been discussed to death in a number of places, including the article Beyond DOCTYPE: Web Standards, Forward Compatibility, and IE8 by Aaron Gustafson, which is excerpted below:

The DOCTYPE switch is broken

Back in 1998, Todd Fahrner came up with a toggle that would allow a browser to offer two rendering modes: one for developers wishing to follow standards, and another for everyone else. The concept was brilliantly simple. When the user agent encountered a document with a well-formed DOCTYPE declaration of a current HTML standard (i.e. HTML 2.0 wouldn’t cut it), it would assume that the author knew what she was doing and render the page in “standards” mode (laying out elements using the W3C’s box model). But when no DOCTYPE or a malformed DOCTYPE was encountered, the document would be rendered in “quirks” mode, i.e., laying out elements using the non-standard box model of IE5.x/Windows.

Unfortunately, two key factors, working in concert, have made the DOCTYPE unsustainable as a switch for standards mode:

  1. egged on by A List Apart and The Web Standards Project, well-intentioned developers of authoring tools began inserting valid, complete DOCTYPEs into the markup their tools generated; and
  2. IE6’s rendering behavior was not updated for five years, leading many developers to assume its rendering was both accurate and unlikely to change.

Together, these two circumstances have undermined the DOCTYPE switch because it had one fatal flaw: it assumed that the use of a valid DOCTYPE meant that you knew what you were doing when it came to web standards, and that you wanted the most accurate rendering possible. How do we know that it failed? When IE 7 hit the streets, sites broke.

Sure, as Roger pointed out, some of those sites were using IE-6-specific CSS hacks (often begrudgingly, and with no choice). But most suffered because their developers only checked their pages in IE6 —or only needed to concern themselves with how the site looked in IE6, because they were deploying sites within a homogeneous browserscape (e.g. a company intranet). Now sure, you could just shrug it off and say that since IE6’s inaccuracies were well-documented, these developers should have known better, but you would be ignoring the fact that many developers never explicitly opted into “standards mode,” or even knew that such a mode existed.
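
To make the switch concrete: the entire toggle hinges on the presence and form of the DOCTYPE, along these lines.

    <!-- Complete, valid DOCTYPE: the page is rendered in "standards" mode -->
    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">

    <!-- No DOCTYPE at all (or a malformed one): the page is rendered in
         "quirks" mode, using the non-standard IE5.x box model -->

Since authoring tools started emitting the first form automatically, its presence stopped being a signal that the author knew or cared about standards, which is exactly the failure Gustafson describes.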

This seems like an intractable problem to me. If you ship a version of your software that is more standards-compliant than previous versions, you run the risk of breaking applications or content that worked in those previous versions. This reminds me of Windows Vista getting the blame because Facebook had a broken IPv6 record. The application can claim it is more standards-compliant, but that is meaningless if users can no longer access their data or visit their favorite sites. In addition, putting the onus on Web developers and content authors to always write standards-compliant code is impossible given the acknowledged low level of expertise of said authors. It would seem that all this creates a lot of pressure to always be backwards (or is that bugwards?) compatible. I definitely wouldn't want to be in the Internet Explorer team's shoes these days.

It puts an interesting wrinkle on the exhortations to make markup languages friendly to “View Source”, doesn’t it?

Now playing: Green Day - Welcome To Paradise

Categories: Web Development

From the press release entitled Microsoft Makes Strategic Changes in Technology and Business Practices to Expand Interoperability, we learn:

REDMOND, Wash. — Feb. 21, 2008 — Microsoft Corp. today announced a set of broad-reaching changes to its technology and business practices to increase the openness of its products and drive greater interoperability, opportunity and choice for developers, partners, customers and competitors.

Specifically, Microsoft is implementing four new interoperability principles and corresponding actions across its high-volume business products: (1) ensuring open connections; (2) promoting data portability; (3) enhancing support for industry standards; and (4) fostering more open engagement with customers and the industry, including open source communities.
...
The interoperability principles and actions announced today apply to the following high-volume Microsoft products: Windows Vista (including the .NET Framework), Windows Server 2008, SQL Server 2008, Office 2007, Exchange Server 2007, and Office SharePoint Server 2007, and future versions of all these products. Highlights of the specific actions Microsoft is taking to implement its new interoperability principles are described below.

  • Ensuring open connections to Microsoft’s high-volume products.
  • Documenting how Microsoft supports industry standards and extensions.
  • Enhancing Office 2007 to provide greater flexibility of document formats.
  • Launching the Open Source Interoperability Initiative.
  • Expanding industry outreach and dialogue.

More information can be found on the Microsoft Interoperability page. Nice job, ROzzie and SteveB.

Now playing: Timbaland - Apologize (feat. OneRepublic)

Categories: Life in the B0rg Cube