Brian Jones has a blog post entitled Politics behind standardization where he writes

We ultimately need to prioritize our standardization efforts, and as the Ecma Office Open XML spec is clearly further along in meeting the goal of full interoperability with the existing set of billions of Office documents, that is where our focus is. The Ecma spec is only a few months away from completion, while the OASIS committee has stated they believe they have at least another year before they are even able to define spreadsheet formulas. If the OASIS Open Document committee is having trouble meeting the goal of compatibility with the existing set of Office documents, then they should be able to leverage the work done by Ecma as the draft released back in the spring is already very detailed and the final draft should be published later this year.

To be clear, we have taken a 'hands off' approach to the OASIS technical committees because:  a) we have our hands full finishing a great product (Office 2007) and contributing to Ecma TC45, and b) we do not want in any way to be perceived as slowing down or working against ODF.  We have made this clear during the ISO consideration process as well.  The ODF and Open XML projects have legitimate differences of architecture, customer requirements and purpose.  This Translator project and others will prove that the formats can coexist with a certain tolerance, despite the differences and gaps.

No matter how well-intentioned our involvement might be with ODF, it would be perceived to be self-serving or detrimental to ODF and might come from a different perception of requirements.   We have nothing against the different ODF committees' work, but just recognize that our presence and input would tend to be misinterpreted and an inefficient use of valuable resources.  The Translator project we feel is a good productive 'middle ground' for practical interoperability concerns to be worked out in a transparent way for everyone, rather than attempting to swing one technical approach and set of customer requirements over to the other.

As someone who's watched standards committees from the Microsoft perspective while working on the XML team, I agree with everything Brian writes in his post. Trying to merge a bunch of contradictory requirements often results in a complex technology that causes more problems than it solves (e.g. W3C XML Schema). In addition, Microsoft showing up and trying to change the direction of the project to support its primary requirement (an XML file format compatible with the legacy Microsoft Office file formats) would not be well received.

Unfortunately, the ODF discussion has been more political than technical, which often obscures the truth. Microsoft is making moves to ensure that Microsoft Office not only provides the best features for its customers but also lets them exchange documents in a variety of formats, from Microsoft's own to PDF and ODF. I've seen a lot of customers acknowledge this and commend the company for it. At the end of the day, that matters a lot more than what competitors and detractors say. Making our customers happy is job #1.


 

Categories: XML

July 18, 2006
@ 05:12 PM

Yesterday, I spent way too much time trying to figure out how to import an OPML feed list into Bloglines from the UI before giving up and performing a Web search to find out how to do it. Below is a screenshot of the key choices one has for managing one's feeds in Bloglines.

And this is what the Bloglines FAQ has in response to the question "How Can I Import An Existing List of Subscriptions?"

Once you have registered with Bloglines and replied to the confirmation email, click on the My Feeds tab at the top of the screen. Then, click on the Edit link. At the bottom of the left panel will be a link to import subscriptions. The subscription list must be in OPML format.
Why is importing a feed list an 'Edit' operation and not an 'Add'? Who designs this crud?
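For anyone who hasn't looked inside one, an OPML subscription list is just a small XML document whose outline elements carry the feed URLs. Below is a minimal sketch, using Python's standard library, of what such a list looks like and how a tool could pull the feed URLs out of it; the sample document and feed URL are made up for illustration.

    # Minimal sketch of reading an OPML subscription list of the kind Bloglines
    # imports and exports. The sample document and feed URL are made up; real
    # exports often nest feed outlines inside folder outlines, which iter() handles.
    import xml.etree.ElementTree as ET

    SAMPLE_OPML = """<?xml version="1.0" encoding="UTF-8"?>
    <opml version="1.1">
      <head><title>My subscriptions</title></head>
      <body>
        <outline text="Tech">
          <outline type="rss" text="Example blog" xmlUrl="http://example.com/feed.rss"/>
        </outline>
      </body>
    </opml>"""

    def feed_urls(opml_text):
        """Return the xmlUrl of every feed outline, walking nested folders."""
        root = ET.fromstring(opml_text)
        return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]

    print(feed_urls(SAMPLE_OPML))   # ['http://example.com/feed.rss']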

 

Given the amount of time I now spend working on features for Windows Live Messenger, I've started reading blogs about the IM/VOIP industry such as Skype Journal and GigaOm. I now find news stories that I'd traditionally miss, such as the Skype Journal blog post entitled "We have no interest in cracking, replicating, reverse engineering or pirating Skype's software.", which links to a blog entry entitled "not rumors" about the recent news that the Skype protocol had been reverse engineered. The linked blog post is excerpted below.

Well, rumors are not rumors :) But things are not going like the ways people think, as they are disclosed a little without many further explainations, anecdotes breeds especially when sensitively relating with a big hot biz and politics.

As a long-term friend with this Chinese team, but an outsider in Skype, VOIP or P2P tech and biz, I observed the whole process of that shock and wondered why they did not give a word to declare their status. Because I once heard the beautiful prospect in their minds and know it is not what some people talked about in the Internet. But even then I was confused with blooming gossips. I thought that they just did not realize how a small stone could stir big waves. So I quoted some interesting, constructive (well, I like the open source ideas most), exaggerating and offensive comments and wrote an email to them.

This morning, I received a call from China and then followed an email. In the call, I urgently asked them about that rumors, they did not deny but said they also bothered with endless calls and emails for all purposes - interviews, verifications, legal affairs, biz talks... which disturbed their main aim and daily work- research, and in email they wrote

    "We have no interest in cracking, replicating, reverse engineering or pirating Skype's software. We just want to invent a better one. Having learned from and inspired by Skype, we are going to create a P2P Internet platform where all social groups can enjoy efficient, secure and free communication. This network platform will be better than SkypeNet that we are using today."

Then we chatted about some broad issues to fulfill my curiosity, which mainly related to the (potential) reaction of Skype Corp. They said they are just kids standing on the shoulders of giants.

If this blog post is accurate then it looks like the various pundits claiming that this will lead to a plethora of third-party desktop clients using the Skype network are out of luck. Of course, this could still happen if the research team publishes their findings, but if they truly are fans of the Skype team they may not want to raise Skype's ire. Either way, it'll be interesting to see what they end up building based on their knowledge of the Skype protocol.


 

Larry Hryb (aka Major Nelson) has a blog post entitled It's back: Xbox Live Friends list on Messenger where he writes

Finally...you can check your Xbox Live Friends list from messenger!

After a 14–month hiatus, it’s back! You can now check your Xbox Live friends list from MSN Windows Live Messenger*. Don’t have Messenger yet? Download it here. If you already have messenger, click on the Xbox tab and you’ll see your friends list. Plus, you can even click a friend to go to their profile page. Nope…no word on when/if we’ll combine the Messenger and Xbox Live friends lists, but at least we've got this back.

*Note this is for US and Japan passport accounts only. Other regions may have this function, but it is purely up to the regional Windows Live Messenger teams if they want an Xbox tab...the Xbox team does not make this decision.

 Edit: Having trouble signing in? Arne360 posts some help.

It's been a good month for Windows Live Messenger users. First, we get interoperability with Yahoo! Messenger users and now this. Sweet.


 

Categories: Windows Live

July 15, 2006
@ 10:25 PM

Nathan Torkington has a blog post entitled A Week in the Valley: GData on the O'Reilly Radar blog that talks about the growing usage of GData and the Atom Publishing Protocol within Google, as well as Mark Lucovsky's take on how this compares to his time at Microsoft working on Hailstorm. Nat writes

They're building APIs to your Google-stored data via GData, and it's all very reminiscent of HailStorm. Mark, of course, was the architect of that. So why's he coming up with more strategies to the same ends? I figure he's hoping Google won't screw it up by being greedy, the way Microsoft did...The reaction to the GData APIs for Calendar have been very positive. This is in contrast to HailStorm, of course, which was distrusted and eventually morphed its way through different product names into oblivion. Noting that Mark's trying again with the idea of open APIs to your personal data, I joked that GData should really be "GStorm". Mark deadpanned, " I wanted to call it ShitStorm but it didn't fly with marketing".

Providing APIs to access and manipulate data owned by your users is a good thing. It extends the utility of the data beyond the Web application that is its primary consumer and creates an ecosystem of applications that harness the data. This is beneficial to customers, as can be seen today in the success of APIs such as the MetaWeblog API, the Flickr API, and the del.icio.us API.
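To make the point concrete, the MetaWeblog API mentioned above is just XML-RPC, so creating a post on a blog that supports it takes only a few lines. This is a sketch rather than a recommendation of any particular service: the endpoint URL, blog id, and credentials below are placeholders you'd replace with your own.

    # Sketch of creating a blog post via the MetaWeblog API (an XML-RPC interface).
    # The endpoint, blog id, and credentials are placeholders for illustration only.
    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("http://example.com/xmlrpc")  # placeholder endpoint

    post = {
        "title": "Hello from the MetaWeblog API",
        "description": "<p>Posted programmatically.</p>",
        "categories": ["XML"],
    }

    # metaWeblog.newPost(blogid, username, password, struct, publish) returns the new post's id.
    post_id = server.metaWeblog.newPost("1", "username", "password", post, True)
    print(post_id)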

Five years ago, while interning at Microsoft, I saw a demo of Hailstorm in which a user visiting an online CD retailer was shown an ad for a concert they'd be interested in based on their music preferences in Hailstorm. The thinking was that it would be a win for everyone because (i) all the user's data is entered and stored in one place, which is convenient for the user, (ii) the CD retailer can access the user's preferences from Hailstorm and cut a deal with the concert ticket provider to show their ads based on user preferences, and (iii) the concert ticket provider gets their ads shown in a very relevant context.

The big problem with Hailstorm was that it assumed that potential Hailstorm partners such as retailers and other businesses would give up their customer data to Microsoft. As expected, most of them told Microsoft to take a long walk off a short pier.

Unfortunately, Microsoft didn't take the step of opening up these APIs to its own online services such as Hotmail and MSN Messenger, but instead quietly canned the project. Fast forward a few years and the company is now playing catch-up to ideas it helped foster. Amusingly, people like Mark Lucovsky and Vic Gundotra, who were influential during the Hailstorm days at Microsoft, are now at Google rebuilding the same thing.

I've taken a look at GData and have begun to question the wisdom of using Atom/RSS as the baseline for information interchange on the Web. Specifically, I have the same issues that Steven Ickman raised in a comment on DeWitt Clinton's blog, where he wrote

From a search perspective I’d argue that the use of either format, RSS or Atom, is pretty much a hack. I think OpenSearch is awesome and I understand the motivators driving the format choices but it still feels like a hack to me.

Just like you I want to see rich structured results returned for queries but both formats basically limit you to results of a single type and contain a few known fields (i.e. link, title, subject, author, date, & enclosure) that are expected to be common across all items.

Where do we put the 100+ Outlook defined contact fields and how do we know that a result is a contact and not an appointment or auction? Vista has almost 1000 properties defined in its schema so how do we convey that much metadata in a loseless way? Embedded Microformats are a great sugestion for how to deal with richer content but it sort of feels like a hack on top of a hack to me? What’s the Microformat for an auction? Do I have to wait a year for some committee to arrive at joint aggreement on what attributes define an auction before I can return structured auction results?

When you have a hammer, everything looks like a nail. It seems Steven Ickman and I reviewed OpenSearch/GData/Atom with the same critical lens and came away with the same list of issues. The only thing I'd change in his criticism is the claim that both formats (RSS and Atom) limit you to results of a single type; that isn't the case. Nothing stops a feed from containing data of wildly varying types. For example, a typical MSN Spaces RSS feed contains items that represent blog posts, photo albums, music lists, and book lists, which are all very different types.
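As a sketch of what that looks like from the consuming side, here is how a reader could dispatch on a per-item type hint inside a single RSS feed. The category scheme used to mark item types is made up for illustration; a real feed such as the MSN Spaces one would use its own extension elements.

    # Sketch: one RSS feed whose items represent different kinds of things.
    # The "urn:example:item-type" category scheme is hypothetical; it stands in
    # for whatever markers a real feed uses to identify item types.
    import xml.etree.ElementTree as ET

    RSS = """<rss version="2.0"><channel>
      <item><title>My day at work</title>
        <category domain="urn:example:item-type">post</category></item>
      <item><title>Vacation photos</title>
        <category domain="urn:example:item-type">photoalbum</category></item>
      <item><title>Books I'm reading</title>
        <category domain="urn:example:item-type">booklist</category></item>
    </channel></rss>"""

    def item_type(item):
        """Return the item's type hint, falling back to treating it as a plain post."""
        for category in item.findall("category"):
            if category.get("domain") == "urn:example:item-type":
                return category.text
        return "post"

    for item in ET.fromstring(RSS).iter("item"):
        print(item_type(item), "-", item.findtext("title"))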

The inability to represent hierarchical data in a natural manner is a big failing of both formats. I've seen the Atom Threading Extensions, but that seems to be a very un-XML way for an XML format to represent hierarchy, especially given how complicated message threading algorithms can be for clients to implement.
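For reference, this is roughly what a client has to do with the Atom Threading Extensions (RFC 4685): the feed stays flat, each reply points at its parent with thr:in-reply-to, and the consumer rebuilds the tree itself. A minimal sketch, assuming every ref resolves to an entry in the same feed:

    # Sketch: rebuilding a reply tree from a flat Atom feed that uses
    # thr:in-reply-to (RFC 4685). Assumes every ref points at an entry id present
    # in the same feed; a real client also has to cope with missing parents.
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    ATOM = "http://www.w3.org/2005/Atom"
    THR = "http://purl.org/syndication/thread/1.0"

    def build_thread(feed_xml):
        """Return (root entry ids, mapping of parent entry id -> list of child ids)."""
        feed = ET.fromstring(feed_xml)
        children = defaultdict(list)
        roots = []
        for entry in feed.findall(f"{{{ATOM}}}entry"):
            entry_id = entry.findtext(f"{{{ATOM}}}id")
            reply = entry.find(f"{{{THR}}}in-reply-to")
            if reply is None:
                roots.append(entry_id)
            else:
                children[reply.get("ref")].append(entry_id)
        return roots, children

    FEED = f"""<feed xmlns="{ATOM}" xmlns:thr="{THR}">
      <entry><id>urn:post:1</id><title>Original post</title></entry>
      <entry><id>urn:post:2</id><title>A reply</title>
        <thr:in-reply-to ref="urn:post:1"/></entry>
    </feed>"""

    # roots: ['urn:post:1']; children of urn:post:1: ['urn:post:2']
    print(build_thread(FEED))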

It'll be interesting to see how Google tackles these issues in GData.


 

July 14, 2006
@ 06:15 PM

Microsoft has stated that the recently announced interop between Yahoo! Messenger and Windows Live Messenger has created the world's largest IM network. Exactly how big is it compared to the others? Check out the ComScore numbers from May 2006 in the excerpt below, which is from the Silicon Valley Sleuth blog post entitled Google Talk fails to find an audience

Google's instant messaging service ranks at the bottom of the overall ranking, which is dominated by MSN Messenger/Windows Live Messenger (204m subscribers), Yahoo! Messenger (78m), AIM (34m) and ICQ (33.9m).

ICQ actually grew by more than 10 per cent year-over-year, the data indicated. The network is owned by AOL and is considered the first mainstream instant messaging application.

Another interesting factoid from the data is that E-buddy (formerly known as E-messenger) rules the unified messenger category ahead of Trillian, claiming 3.9m vs. 1.3m unique visitors.

E-buddy offers an online unified messenger for MSN, AOL and Yahoo – no installation required. The great benefit is that it allows users on bolted down corporate networks to connect to instant messaging services without any intervention from the IT department.


Interestingly enough, when I read geek blogs I tend to see people assume that Trillian, Meebo and AOL Instant Messenger are the dominant applications in their categories. People often state anecdotally that "all my friends are using it, so it must be #1". Given that IM buddy lists are really social networks, it's unsurprising that everyone you know uses the same IM application, in much the same way that it's unsurprising that everyone you know hangs out at the same bar or coffee shop. However, one shouldn't extrapolate the overall popularity of a bar or coffee shop just because everyone you know likes it. The same applies to online hangouts, whether they are instant messaging applications, social networking sites, or even photo sharing sites.


 

Categories: Social Software

The Google Adwords API team has a blog post entitled Version 3 Shutdown Today which states

Please take note… per our announcement on May 12, we will shutdown Version 3 of the API today.

Please make sure you have migrated your applications to Version 4 in order to ensure uninterrupted service. You can find more information about Version 4 (including the release notes) at http://www.google.com/apis/adwords/developer/index.html.

-- Rohit Dhawan, Product Manager

This is in keeping with the Adwords API versioning policy, which states that once a new version of the WSDL for the Adwords API Web service ships, the old Web service endpoint stops being supported two months later. That's gangsta.

Thanks to Mark Baker for the link.


 

From the press release entitled Yahoo! and Microsoft Bridge Global Instant Messaging Communities we learn

SUNNYVALE, Calif., and REDMOND, Wash. — July 12, 2006 — Yahoo! Inc. (Nasdaq: “YHOO”) and Microsoft Corp. (Nasdaq: “MSFT”) today will begin limited public beta testing of interoperability between their instant messaging (IM) services that enable users of Windows Live® Messenger, the next generation of MSN® Messenger, and Yahoo!® Messenger with Voice to connect with each other. This interoperability — the first of its kind between two distinct, global consumer IM providers — will form the world’s largest consumer IM community, approaching 350 million accounts.1

Consumers worldwide from Microsoft and Yahoo! will be able to take advantage of IM interoperability and join the limited public beta program. They will be among the first to exchange instant messages across the free services as well as see their friends’ online presence, view personal status messages, share select emoticons, view offline messages and add new contacts from either service at no cost.2 Yahoo! and Microsoft plan to make the interoperability between their respective IM services broadly available to consumers in the coming months.

The Windows Live Messenger team also has a blog post about this on their team blog entitled Talk to your Yahoo! friends from Windows Live Messenger, which points out that Windows Live Messenger users can sign up to participate in the beta at http://ideas.live.com. Once accepted into the beta, Windows Live Messenger users can add people on the Yahoo! IM network to their Windows Live Messenger buddy list simply by adding them as new contacts (i.e. add 'Yahoo ID' + @yahoo.com to your IM contact list). Windows Live Messenger users don't need a Yahoo! account to talk to users of Yahoo! Messenger and vice versa. That is how it should be.

Where it gets even cooler is how we handle Windows Live Messenger users who use an "@yahoo.com" email address as their Windows Live ID (e.g. yours truly). If you add such a user to your IM contact list, you are presented with a dialog.

You then get two buddies added for that person: one buddy represents the contact on the Yahoo! IM network and the other is the same person on the Windows Live IM network. This is a lot different from what happens when Windows Live Messenger interops with a corporation that uses Microsoft Office Live Communications Server, where people are forced to change their Windows Live ID to an @messengeruser.com address to resolve the ambiguity of using one email address on two IM networks. I much prefer the solution we use for Yahoo! IM interop.
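The underlying design issue is that an email address alone no longer identifies a buddy once two networks are in play; the contact list effectively has to be keyed by (address, network). Here's a toy illustration of that idea in Python; it is not Messenger's actual code, just the shape of the disambiguation.

    # Toy illustration (not Messenger's actual implementation): keying contacts by
    # (address, network) so one email address can map to buddies on two IM networks.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Buddy:
        address: str   # e.g. "someone@yahoo.com"
        network: str   # "windows_live" or "yahoo"

    def add_contact(contacts, address):
        """Adding an @yahoo.com address yields a buddy on each network."""
        networks = ["windows_live"]
        if address.lower().endswith("@yahoo.com"):
            networks.append("yahoo")
        for network in networks:
            contacts.add(Buddy(address, network))

    contacts = set()
    add_contact(contacts, "someone@yahoo.com")
    print(contacts)   # two distinct Buddy entries for the same email address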


 

Categories: Windows Live

Last month Clemens Vasters wrote a blog post entitled Autonomy isn't Autonomy - and a few words about Caching where he talks about "autonomous" services and data caching. He wrote

A question that is raised quite often in the context of "SOA" is that of how to deal with data.  Specifically, people are increasingly interested in (and concerned about) appropriate caching strategies
...
By autonomous computing principles the left shape of the service is "correct". The service is fully autonomous and protects its state. That’s a model that’s strictly following the Fiefdoms/Emissaries idea that Pat Helland formulated a few years back. Very many applications look like the shape on the right. There are a number of services sticking up that share a common backend store. That’s not following autonomous computing principles. However, if you look across the top, you'll see that the endpoints (different colors, different contracts) look precisely alike from the outside for both pillars. That’s the split: Autonomous computing talks very much about how things are supposed to look behind your service boundary (which is not and should not be anyone’s business but yours) and service orientation really talks about you being able to hide any kind of such architectural decision between a loosely coupled network edge. The two ideas compose well, but they are not the same, at all.

...
However, I digress. Coming back to the data management issue, it’s clear that a stringent autonomous computing design introduces quite a few challenges in terms of data management. Data consolidation across separate stores for the purposes of reporting requires quite a bit of special consideration and so does caching of data. When the data for a system is dispersed across a variety of stores and comes together only through service channels without the ability to freely query across the data stores and those services are potentially “far” away in terms of bandwidth and latency, data management becomes considerably more difficult than in a monolithic app with a single store. However, this added complexity is a function of choosing to make the service architecture follow autonomous computing principles, not one of how to shape the service edge and whether you use service orientation principles to implement it.
...
Generally, my advice with respect to data management in distributed systems is to handle all data explicitly as part of the application code and not hide data management in some obscure interception layer. There are a lot of approaches that attempt to hide complex caching scenarios away from application programmers by introducing caching magic on the call/message path. That is a reasonable thing to do, if the goal is to optimize message traffic and the granularity that that gives you is acceptable. I had a scenario where that was a just the right fit in one of my last newtelligence projects. Be that as it may, proper data management, caching included, is somewhat like the holy grail of distributed computing and unless people know what they’re doing, it’s dangerous to try to hide it away.

That said, I believe that it is worth a thought to make caching a first-class consideration in any distributed system where data flows across boundaries. If it’s known at the data source that a particular record or set of records won’t be updated until 1200h tomorrow (many banks, for instance, still do accounting batch runs just once or twice daily) then it is helpful to flow that information alongside the data to allow any receiver determine the caching strategy for the particular data item(s).

Service autonomy is one topic where I still have difficulty striking the right balance. In an ideal SOA world, you have a mesh of interconnected services which depend on each other to perform their tasks. The problem with this SOA ideal is that it introduces dependencies. If you are building an online service, dependencies mean that sometimes you'll be woken up by your pager at 3 AM and it's somebody else's fault, not yours. This may encourage people who build services to shun dependencies and build self-contained web applications that reinvent the wheel instead of utilizing external services. I'm still trying to decide whether this is a bad thing or not.

As for Clemens' comments on caching and services, I find it interesting how even WS-* gurus inadvertently end up articulating the virtues of HTTP's design and the REST architectural style when talking about best practices for building services. I wonder if we will one day see WS-* equivalents of ETags and If-Modified-Since. WS-Caching anyone? :)
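For comparison, here's what that machinery already looks like in plain HTTP: the client echoes the validators from its last response (an ETag via If-None-Match, a Last-Modified date via If-Modified-Since) and the server answers 304 Not Modified when nothing has changed. A quick sketch using only Python's standard library, with example.com standing in for a real feed URL:

    # Sketch of HTTP cache validation with conditional GET. The URL is a placeholder;
    # note that urllib surfaces a 304 response as an HTTPError, so we catch it.
    import urllib.request
    from urllib.error import HTTPError

    def fetch(url, etag=None, last_modified=None):
        """Return (body, etag, last_modified); body is None if the cached copy is still fresh."""
        request = urllib.request.Request(url)
        if etag:
            request.add_header("If-None-Match", etag)
        if last_modified:
            request.add_header("If-Modified-Since", last_modified)
        try:
            with urllib.request.urlopen(request) as response:
                return (response.read(),
                        response.headers.get("ETag"),
                        response.headers.get("Last-Modified"))
        except HTTPError as error:
            if error.code == 304:
                return None, etag, last_modified
            raise

    body, etag, modified = fetch("http://example.com/feed.rss")
    body_again, _, _ = fetch("http://example.com/feed.rss", etag, modified)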


 

Categories: XML Web Services

I was chatting with Kurt Weber yesterday and asked when Windows Live Expo would be getting out of beta. He asked me to check out the team blog later in the day and when I did I saw his blog post entitled Official U.S. Launch of Windows Live Expo. It turns out that yesterday was launch day and below is an excerpt of his blog post describing some of the new features for the launch

 Some of the new features for our latest release include:
  • New Look - A brand new look & feel for the site which includes the official Windows Live look and integration, accessibility, scaling, and easier to use.
  • Comments on a listing – Similar to comments on a blog; this feature will allow users to discuss issues in the soapbox area or ask the seller for more details about an item.
  • APIs – Developers can now access all of our listings using a variety of parameters in order to create cool mash-ups (such as http://www.blockrocker.com). Full details about the API are available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnlive/html/winliveexpo.asp
  • Driving directions – Users can now easily get driving directions to whatever listing they are viewing (courtesy of our friends at Live Local) by simply clicking a button.

For those keeping score, Expo is the fifth Windows Live service to come out of beta.

Update: Thanks to Szajd for reminding me that there have been five Windows Live services to come out of beta (Windows Live OneCare, Windows Live Favorites, Windows Live Messenger, Windows Live Custom Domains and Windows Live Expo).
 

Categories: Windows Live