February 20, 2006
@ 09:14 PM

Patrick Logan has a post on the recently re-ignited discussion on REST vs. SOAP entitled REST and SOAP where he writes

Update: Mike Champion makes an analogy between messaging technologies (SOAP/WSDL and HTTP) and road vehicle types (trucks and cars). Unfortunately this is an arbitrary analogy. That is, saying that SOAP/WSDL is best "to haul a lot of heavy stuff securely and reliably, use a truck" does not make it so. The question is how to make an objective determination.

Mike is fond of implying that you need to use WS-* if you want security and reliability while REST/POX is only good for simple scenarios. I agree with Patrick Logan that this seems to be an arbitrary determination not backed by empirical evidence. As an end user, the fact that my bank allows me to make financial transactions using REST (i.e. making withdrawals and transfers from their website) is one counter example to the argument that REST isn't good enough for secure and reliable transactions. If it is good enough for banks why isn't it good enough for us?

Of course, the bank's website is only the externally focused aspect of the service and they probably do use systems internally that ensure reliability and security beyond the capabilities of the Web's family of protocols and formats. However, as someone who builds services that enable tens of millions of end users to communicate with each other on a daily basis, I find it hard to imagine how WS-* technologies would significantly improve things for folks in my situation.

For example, take the post by Clemens Vasters entitled The case of the missing "durable messaging" feature where he writes

I just got a comment from Oran about the lack of durable messaging in WCF and the need for a respective extensibility point. Well... the thing is: Durable messaging is there; use the MSMQ bindings. One of the obvious "problems" with durable messaging that's only based on WS-ReliableMessaging is that that spec (intentionally) does not make any assertions about the behavior of the respective endpoints.

There is no rule saying: "the received message MUST be written to disk". WS-ReliableMessaging is as reliable (and unreliable in case of very long-lasting network failures or an endpoint outright crashing) and plays the same role as TCP. The mapping is actually pretty straightforward: WS-Addressing = IP, WS-ReliableMessaging = TCP.

So if you do durable messaging on one end and the other end doesn't do it, the sum of the gained reliability doesn't add up to anything more than it was before.

The funny thing about Clemens's post is that scenarios like the hard drive of a server crashing are the exact kind of reliability issues that concern us in the services we build at MSN Windows Live. It's cool that specs like WS-ReliableMessaging allow me to specify semantics like AtMostOnce (messages must be delivered at most once or result in an error) and InOrder (messages must be delivered in the order they were sent) but this only scratches the surface of what it takes to build a reliable world class service. At best WS-* means you don't have to reinvent the building blocks when building a service that has some claims around reliability and security. However the specifications and tooling aren't mature yet. In the meantime, many of us have services to build.   
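To make those delivery assurances concrete, here is a minimal, purely illustrative sketch of a receiver that enforces at-most-once and in-order semantics using sequence numbers. The class and names are invented for illustration and this is not the WS-ReliableMessaging wire protocol; notably, as the post points out, nothing here survives a server crash, which is exactly the gap between "reliable" and "durable" messaging.

```python
class ReliableReceiver:
    """Toy receiver enforcing AtMostOnce + InOrder delivery.

    Sequence numbers start at 1; duplicates are dropped, and
    out-of-order messages are buffered until the gap is filled.
    Everything lives in memory, so a crash loses the buffer.
    """

    def __init__(self):
        self.next_seq = 1   # next sequence number we will deliver
        self.pending = {}   # out-of-order messages awaiting delivery
        self.delivered = [] # application-visible, in-order messages

    def receive(self, seq, body):
        if seq < self.next_seq or seq in self.pending:
            return  # duplicate: AtMostOnce says drop it silently
        self.pending[seq] = body
        # Flush every message now contiguous with what we've delivered.
        while self.next_seq in self.pending:
            self.delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1

r = ReliableReceiver()
for seq, body in [(1, "a"), (3, "c"), (2, "b"), (2, "b")]:  # out of order, plus a duplicate
    r.receive(seq, body)
print(r.delivered)  # → ['a', 'b', 'c']
```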

I tend to agree with Don's original point in his Pragmatics post. REST vs. SOAP is mainly about reach of services and not much else. If you know the target platform of the consumers of your service is going to be .NET or some other platform with rich WS-* support then you should use SOAP/WSDL/WS-*. On the other hand, if you can't guarantee the target platform of your customers then you should build a Plain Old XML over HTTP (POX/HTTP) or REST web service.


 

Categories: XML Web Services

For the past few releases, we've had work items in the RSS Bandit roadmap around helping users deal with information overload. We've added features like newspaper views and search folders to make it easier for users to manage the information in feeds they consume. Every release I've tried to make sure we add a feature that I know will make it easier for me to get the information I want from the feeds I am subscribed to without being overwhelmed.

For the Jubilee release I had planned that the new feature we'd add in the "dealing with information overload" bucket would be the ability to rate posts and enable filtering based on this rating. After thinking about this for a few weeks, I'm not sure this is the right route any more. There are tough technical problems to surmount to make the feature work well but I think the bigger problems are the expected changes to user behavior. Based on my experiences with rating systems and communities, I suspect that a large percentage of our user base will not be motivated to start rating the feeds they are subscribed to or the new items that show up in their aggregator.

On a related note, I've recently been using meme trackers like Memeorandum and TailRank which try to show the interesting topics among certain technology blogs. I think this is a very powerful concept and the next natural evolution of information aggregators such as RSS Bandit. The big problem with these sites is that they only show the current topics of interest among a small sliver of blogs, which in many cases do not overlap with the blogs one might be interested in. For example, today's headline topic on Tech.Memeorandum is that a bunch of bloggers attended a house party, which I personally am not particularly interested in. On the other hand, I'd find it useful to have another way to view my subscriptions in RSS Bandit that pivots around the current hot topics amongst the blogs I read. This isn't meant to replace the existing interface but instead would be another tool for users to customize their feed reading experience, the same way that newspaper views and search folders do today.
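For the curious, the core of what a meme tracker does can be sketched in a few lines: count how many distinct posts link to each URL and surface the most-cited ones. The function and sample data below are hypothetical and certainly not how Memeorandum or TailRank actually work, but they show the basic idea.

```python
import re
from collections import Counter

def hot_topics(posts, top_n=3):
    """Rank links by how many distinct posts reference them.

    `posts` maps a post title to its HTML body; a link cited by many
    different posts is treated as a "hot topic".
    """
    counts = Counter()
    for body in posts.values():
        # Count each link once per post so one link-heavy post can't dominate.
        links = set(re.findall(r'href="([^"]+)"', body))
        counts.update(links)
    return counts.most_common(top_n)

posts = {
    "Post A": '<a href="http://example.org/topic">topic</a>',
    "Post B": 'see <a href="http://example.org/topic">this</a> and '
              '<a href="http://example.org/other">that</a>',
    "Post C": '<a href="http://example.org/topic">same topic</a>',
}
print(hot_topics(posts, top_n=1))  # → [('http://example.org/topic', 3)]
```

A real implementation would also have to normalize URLs and weight links by recency, but the counting step is the heart of it.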

If you are an RSS Bandit user and this sounds like a useful feature I'd like to hear your thoughts on what functionality you'd like to see here. A couple of opening questions that I'd like to get opinions on include

  • Would you like to see the most popular links in new posts? For an example of what this looks like, see the screenshot in Nick Lothian's post on  Personalized meme tracking
  • How would you like to classify new posts: unread posts, or items posted within the last day? The reason I ask is that you may already have read a few posts that linked to a very popular topic, in which case should it be ranked higher than a link for which you haven't read any of the related posts but which hasn't been linked to as much?
  • Would you like a 'mark discussion as read' feature? Would it be nice to be able to mark all posts that link to a particular item as read?

I have a bunch of other questions but these should do for now.


 

Categories: RSS Bandit

February 19, 2006
@ 06:33 PM

This is basically a "me too" post. Dave Winer has a blog post entitled Blogging is part of life where he writes

I agree with the author of the Slate piece that’s getting so much play in the blogosphere, up to a point. The things that called themselves blogs that came from Denton and Calacanis are professional publications written by paid journalists that use blogging software for content management. That’s fine and I suppose you can call them blogs, but don’t get confused and think that their supposed death (which itself is arguable) has anything to do with the amateur medium that is blogging. They’re separate things, on separate paths with different futures.

To say blogging is dead is as ridiculous as saying email or IM or the telephone are dead. The blog never belonged on the cover of magazines, any more than email was a cover story (it never was) but that doesn’t mean the tool isn’t useful inside organizations as a way to communicate, and as a way for businesses to learn how the public views them and their competitors.

Whenever Dave Winer writes about blogging I tend to agree with him completely. This time is no exception. Blogs are social software: they facilitate communication and self-expression between individuals. Just like with email and IM, there are millions of people interacting using blogs today. There are more people reading and writing blogs on places like MySpace and MSN Spaces than the populations of a majority of the countries on this planet. Blogs are here to stay.

Debating whether companies built around blogs will survive is orthogonal to discussing the survival of blogging as a medium. It's not like debating whether companies that send out email newsletters or make mailing list software will survive is equivalent to discussing the survival of email as a communication medium. Duh.


 

Categories: Social Software

Recently there was a question asked on the RSS Bandit forums from a user who was Unable to Import RSSBandit-Exported OPML into IE7. The question goes

I exported my feeds from RSSBandit 1.3.0.42 to an OPML file in hopes of trying the feed support in IE7. IE7 seems to try to import, but ultimately tells me no feeds were imported. The exported file must have over 100 feeds, so it's not that. Has anyone else been able to import feeds from RSSBandit into IE7?

I got an answer for why this is the case from Internet Explorer's RSS team. The reason is provided in the RSS Bandit bug report, Support type="rss" for export of feeds, where I found out that somewhere along the line someone came up with the convention of adding a type="rss" attribute to indicate which entries in an OPML file are RSS feeds. The Internet Explorer RSS team has decided to enforce this convention for indicating RSS feeds in an OPML file and will ignore entries that don't have this annotation.

Since RSS Bandit supports both RSS/Atom feeds and USENET newsgroups, I can see the need to be able to differentiate which are the feeds in an OPML file without having applications probe each URL. However I do think that type="rss" is a misnomer since it should also apply to Atom feeds. Perhaps type="feed" instead?
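To illustrate the interoperability issue, here is a sketch of the two import behaviors using a toy OPML document. The code and sample data are illustrative only, not IE7's actual implementation: the strict mode mimics what the post describes, silently dropping any outline without type="rss".

```python
import xml.etree.ElementTree as ET

OPML = """<?xml version="1.0"?>
<opml version="1.1">
  <body>
    <outline title="an RSS feed" type="rss"
             xmlUrl="http://example.org/rss.xml"/>
    <outline title="a newsgroup" type="nntp"
             xmlUrl="nntp://news.example.com/some.group"/>
    <outline title="untyped feed"
             xmlUrl="http://example.org/feed.xml"/>
  </body>
</opml>"""

def feeds(opml_text, require_type=True):
    """Return the xmlUrl of each feed outline in an OPML document.

    With require_type=True, outlines lacking type="rss" are skipped,
    so the second and third entries above would silently be dropped
    on import -- the behavior the forum user ran into.
    """
    root = ET.fromstring(opml_text)
    urls = []
    for outline in root.iter("outline"):
        url = outline.get("xmlUrl")
        if url is None:
            continue
        if require_type and outline.get("type") != "rss":
            continue
        urls.append(url)
    return urls

print(len(feeds(OPML)))                      # → 1 (strict: only the type="rss" entry)
print(len(feeds(OPML, require_type=False)))  # → 3 (lenient: anything with an xmlUrl)
```

The lenient mode shows why the attribute exists at all: without it, an importer can't tell the newsgroup entry apart from the feeds short of probing each URL.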


 

One of the more thankless jobs at MSN Windows Live is to work on the Passport team. Many of the product teams that are customers of the service tend to view it as a burden, myself included. One of the primary reasons for this is that instead of simply being the username/password service for MSN Windows Live, it is actually a single sign-in system which encompasses a large number of sites besides those owned by Microsoft. For example, you can use the same username and password to access your email, travel plans or medical information.

Trevin Chow of the Passport team has written a blog post entitled Why does Passport sign-in suck? where he addresses one of the pain points its customers face due to its legacy as a single sign-in system. He writes

Q: Why do you keep asking me to sign in over and over again even though I've checked "automatically sign me in"?  What don't you understand about "automatic"?!
 
One of the biggest problems we see in the network of MSN, Windows Live and Microsoft sites is that Passport sign-in is seen way too often by users.  It appears as if we are disregarding your choice of "automatically sign me in" and randomly asking you to sign in when we want with no rhyme or reason...
 
Passport sign-in 101
Passport sign-in is based on cookies. Because HTTP is stateless, we have only 2 ways of persisting information across requests -- the first being to carry it on the query string, and the second via HTTP cookies.  The first method (query string) isn't useful across browser sessions (open IE, close it, and re-open), which leaves us only option 2 (cookies).  Cookies are the mainstay of modern web sites, and allow very powerful personalization and state management.  Passport leverages this to provide the largest web authentication (aka sign-in) system in the world.
 
Passport first validates your identity by validating your "credentials" (email address and password combination) that you typed in on our sign-in UI.  Once validated, Passport uses cookies in the passport.com and the partner's domain (eg. www.live.com, MSN Money, MSDN) to vouch for your identity.  The cookies in our partner's domain act as assertions that you are who you say you are.    Because each partner site trusts Passport, the sign-in authority, assertions about a user's identity from Passport are also trusted by the partner.
...
After you sign into one partner site in the "passport network", you can freely go to subsequent partner sites and sign in. This is where the magic of Passport comes into play and single sign-on is achieved.  When you visit another partner site and click "sign in", you are redirected to Passport servers. Because you already authenticated once to Passport (represented through your passport.com cookies), we don't need to validate your credentials again and can issue a service ticket for this new partner website.
 
But Trevin, you just said that "because you already authenticated once to Passport <snip>, we don't need to validate your credentials again...".  That clearly isn't the case since I seem to keep getting asked for my password!
 
In the last section, especially the last paragraph, I purposely left out some detail for simplicity. We can dive into more detail now that you have a better high-level understanding of the flow of passport sign-in.
 
In order to have a secure single sign-on system, you simply cannot have one prompt for a login and then be able to access any site.  It sounds counter-intuitive, since that's what "single sign-on" seems to imply.  This would only be possible if every single website you accessed had the same level of security and data sensitivity.  We all know that this is not the case; instead, sites vary in the level of security needed to protect them.
 
On the lower end of the spectrum (least sensitive), we have sites like www.live.com, which is merely personalization.  In the middle, we have sites like Live Mail, which has personal information such as email from your friends.  On the extreme end of the scale (most sensitive) we have sites like Microsoft Billing which contains your credit card information.  Because of these varying levels of data sensitivity, each site in the Passport network configures what we'll call their "security policy", which tells Passport the parameters to enforce during sign-in and is supposed to be directly related to their data sensitivity -- the more sensitive the information therein, the "tighter" the security policy.
...
All our partner websites currently have a mismatched set of security policies, each set at the discretion of their team's security champ.  It's because of these inconsistent security policies that you keep getting asked for your password over and over.
 
Wow, so this sounds like a tough problem to solve.  How are you going to fix this? 
 
Our team is absolutely committed to making the sign-in experience the best on the internet.  To fix this specific problem, our team is moving to a centralized definition of security policies.  What does this mean? Instead of each partner website telling us the specific parameters of the security policy (such as time window), they instead will tell us the ID of a security policy to enforce, whose definition will live on the Passport sign-in servers.  This means that by offering a limited set of security policies we limit the mistakes partner websites can make, and we will inherently have more consistency across the entire network for sign-in.  Additionally, it gives us more agility to tweak both the user experience and security of the network since Passport is in total control of the parameters.
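The centralized-policy idea Trevin describes can be sketched roughly like this: the partner site supplies only a policy ID, and the sign-in server owns the parameters, such as how recently the user must have typed a password. All names and numbers below are invented for illustration and are not Passport's actual policies.

```python
import time

# Illustrative centralized policy table: a small fixed set of IDs
# instead of free-form per-site parameters. The ages are made up.
POLICIES = {
    "low":    {"max_auth_age": 24 * 3600},  # e.g. personalization-only sites
    "medium": {"max_auth_age": 3600},       # e.g. web mail
    "high":   {"max_auth_age": 300},        # e.g. billing: re-prompt after 5 minutes
}

def needs_password_prompt(auth_time, policy_id, now=None):
    """Decide whether the sign-in server must re-prompt for credentials.

    auth_time is when the user last typed a password (i.e. when the
    sign-in cookie was issued); the partner supplies only a policy ID,
    so the security parameters stay under the sign-in server's control.
    """
    now = time.time() if now is None else now
    max_age = POLICIES[policy_id]["max_auth_age"]
    return (now - auth_time) > max_age

signed_in_at = 1_000_000
print(needs_password_prompt(signed_in_at, "low", now=signed_in_at + 600))   # → False
print(needs_password_prompt(signed_in_at, "high", now=signed_in_at + 600))  # → True
```

The point of the design is visible even in this toy: tightening the "high" policy is a one-line change on the server rather than a coordination exercise across every partner site.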

This is just one consequence of Passport's legacy as a single sign-in system causing issues for MSN Windows Live sites. Another example of an issue we've faced was when deciding to provide APIs for MSN Spaces. If you read the Getting Started with the MetaWeblog API for MSN Spaces document you'll notice that instead of using the user's Passport credentials for the MetaWeblog API, we instead use a different set of credentials. This is because a user's Passport credentials were deemed to be too valuable to have them being entered into random blog editing tools which may or may not be safeguarding the user's credentials properly.

I now consider identity systems to be one big headache based on my experiences with Passport. This is probably why I've steadfastly avoided learning anything about InfoCard. I know there are folks trying to make this stuff easier at Microsoft but it seems like every time I think about identity systems it just makes my teeth hurt. :(


 

Categories: Windows Live

Don Box has an excellent post on the entire REST vs. SOAP debate entitled Pragmatics where he writes

The following design decisions are orthogonal, even though people often conflate two or more of them:
 
  1. Whether one uses SOAP or POX (plain-old-XML).
  2. Whether or not one publishes an XML schema for their formats.
  3. Whether or not one generates static language bindings from an XML schema.
  4. The degree to which one relies on HTTP-specific features. That stated, screw with GET at your peril.
  5. Whether one adopts a message-centric design approach or a resource-centric design approach.
           
Some of the decisions (specifically 5) are architectural and sometimes philosophical.
 
Some of the decisions (specifically 1-2) are simple business decisions that are determined by who your target audience is.
 
  1. If you want a great experience for .NET/Java devs, you’ll typically publish schemas (through wsdl) and support SOAP.
  2. If you want a great experience for LAMP folks, you’ll support POX messages and will provide a non-XSD description of your formats.
  3. If you want to reach both audiences, you’ll do both #1 and #2.
  4. If you want to reach both audiences before your competition does, you'll avoid indulging in religious debates and ship something.

This is so true it hurts. Most of the discussion around XML Web services has unfortunately been driven by platform vendors in either the Java or .NET camps which has unnecessarily skewed the discussion. When you are an actual business decision maker faced with building services on the Web, a lot of the silly dogma around REST vs. SOAP simply dissipates. Instead it boils down to deciding how broad of an audience you want to reach and how much work you want to do to reach that audience.

That said, I'd quibble about needing to do both REST and SOAP if you want to reach both the enterprise developer crowd (i.e. Java/.NET developers) and the LAMP crowd. One counter example to this theory is RSS: it is a RESTful web service which hasn't needed to be SOAP-based to result in lots of great apps built on it using the .NET Framework, such as SharpReader, RSS Bandit and NewsGator Outlook Edition. From my perspective as an RSS reader developer, I will admit that it would have taken a lot less code to handle the feed processing in RSS Bandit if it were SOAP-based. On the other hand, from my perspective as a service provider, I'd note that the amount of work needed to implement and support two APIs that do the same thing is often not worth it.
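As an illustration of the reach argument, consuming a POX service like RSS requires nothing more than an HTTP GET and an XML parser on any platform. Here is a minimal sketch with an invented sample feed; the whole client "stack" is a fetch (e.g. urllib.request.urlopen) plus the parsing function, with no WSDL and no generated proxy classes.

```python
import xml.etree.ElementTree as ET

def item_titles(rss_text):
    """Pull item titles out of an RSS 2.0 document.

    RSS 2.0 layout: rss/channel/item/title. This is the entirety of
    the client-side plumbing a POX consumer needs beyond an HTTP GET.
    """
    root = ET.fromstring(rss_text)
    return [item.findtext("title") for item in root.iter("item")]

SAMPLE = """<rss version="2.0"><channel>
  <title>An example feed</title>
  <item><title>Pragmatics</title></item>
  <item><title>REST vs. SOAP</title></item>
</channel></rss>"""

print(item_titles(SAMPLE))  # → ['Pragmatics', 'REST vs. SOAP']
```

The trade-off the post describes is real, though: with no schema, handling the many slightly malformed feeds in the wild falls entirely on code like this.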

Coincidentally, today is the date of my semi-regular lunches with Don and now he's provided some good fodder for us to chitchat about over MSFT's cafeteria grub.


 

Categories: XML Web Services

February 17, 2006
@ 02:23 AM

For those who missed it, the MSN AdCenter team now has a blog at http://blogs.msdn.com/adcenter. The most recent post is about the New adCenter Release Coming Up and it reads

You may be asking yourself, "Why a new version?" Well, our goal is both to make your adCenter campaign management easier and to improve your user experience. We've learned a lot from the customer feedback we've gotten so far, and now we're ready to share our ideas with you.

Updates include:
1. Order creation process simplified into 4 steps
2. Broader differentiation between campaigns and orders  
3. New pricing tab includes all budget, bidding, and incremental pricing
4. Negative keywords can be applied at the order level
5. Keyword / ad rejections include reason codes

 and lots of other cool changes - I'll post more details here later in the week so you'll know what to expect when you login after the release - and when the release will happen.

This is one product I can't wait for Microsoft to ship. It's cool that we want to improve our search engine and other online properties to be more competitive but it is all for naught if we don't have a good story around how we and our customers make money from our services. AdCenter to the rescue...


 

Categories: MSN

February 16, 2006
@ 06:03 PM

Jason Fried of 37 Signals has a post critical of Office Live entitled Microsoft Office Live is "web based" where he writes

Office Live, Microsoft’s entry into the web-based office application space, went beta today.

Check out some of the system requirements for certain features of this “web-based” service:

  • To use the Edit in Datasheet feature within the Business Applications and Shared Sites areas requires Microsoft Office 2003.
  • To export to Business Contact Manager requires Microsoft Office 2003, Microsoft Office XP, or Microsoft Office 2000.
  • To import contacts from Microsoft Office Outlook requires Microsoft Office 2003 or Microsoft Office XP.
  • To link contacts to Microsoft Office Outlook requires Microsoft Office 2003.

And of course you must use IE. I never thought I’d see a web app suite that has more system requirements than a desktop app, but I guess I should never underestimate Microsoft.

A number of comments in response to the blog post have pointed out that it is misleading since it implies that Office Live requires Microsoft Office when in truth most of the features mentioned are related to importing and exporting data to and from Microsoft Office products like Outlook. Since the target audience for Office Live is the same as that for the majority of the products of 37 Signals it is unsurprising that they are so hostile to the service.

However this isn't to say that there isn't some valid criticism here. Jason is right that Internet Explorer is required to use Office Live. I also had an issue with this especially since in Windows Live we have an explicit goal that Internet Explorer and Firefox users should get an equivalent user experience. When I talked to the Office Live folks about this they pointed out to me that although Internet Explorer is required to create a site using the service, the websites created with it (such as http://daresofficelivesite.com) work fine in all major Web browsers. This is a good step but they know they can do better.

As it is with all feature requests in product development, the best way to get Firefox support to show up in Office Live is for users to demand it. That's what happened with Windows Live and I'm sure the same will end up happening for Office Live. I'm sure the question won't be if but rather when it shows up.


 

Categories: Office Live | Windows Live

February 16, 2006
@ 05:22 PM

A few days ago, I asked Edgeio: An eBay Killer or Just Another Lame Startup? which seems to be a question that was asked by several other bloggers. On the Edgeio blog there is a post entitled More Bloggers Discuss Edgeio which promises to address some of the questions raised about the service. The blog post states

There are three key things people bring up when questioning whether or not edgeio will be successful.

1. Will bloggers want to post classified listings on blogs?

2. How to deal with the inevitable spam onslaught?

3. Assuming 1 & 2 are overcome, what stops everyone from entering the market?

These are all great questions whose answers I'd love to see. For #1 the edgeio folks have to convince vendors of blogging tools and hosted blogging services to make it easy for people to mark up their blog posts as auction listings. I called this a 'Make this blog post a classified listing' checkbox in my previous post on Edgeio. Then there is the task of convincing people that instead of listing items for sale on eBay or Craig's List, they should instead post an entry in their blog. This will be an uphill battle. Assuming they solve that, question #2 points out that the next hurdle is dealing with the inevitable avalanche of splogs which will pollute the system. Blog search engines like Technorati seem to be doing a decent job at filtering out splogs, so this is a tough but not insurmountable problem.

The real doozy is question #3. Once they've convinced the blogosphere to start posting classified listings on their blogs instead of using existing listing sites AND have done a decent job holding down spam, they still have to contend with the Google factor. From my perspective, the functionality Edgeio plans to provide is the kind of feature that could be added to Google Base by an enterprising developer in his or her 20% time, let alone if a bigger player like eBay or Craig's List decides to get into this space. Besides proclaiming that they have patents protecting their business model, I can't see how Edgeio plans to answer question #3 above. I'll be watching their blog to see their answers to the above questions. My curiosity is definitely piqued.


 

Categories: Social Software

February 15, 2006
@ 06:04 PM

David Hunter has a blog post entitled Microsoft relaunches bCentral, calls it Office Live where he writes

Press release:

Sept. 23, 1999 — Microsoft Corp. today announced the launch of Microsoft® bCentral, a new portal created specifically to meet the needs of small and growing companies. Microsoft bCentral provides a comprehensive and integrated suite of services to help growing companies leverage the Internet to improve their business. The site delivers services in three key areas: getting a business started online by connecting to the Web and building a Web site; promoting and marketing online to reach new customers; and managing a business more effectively. A beta version of the new site will be available in the United States beginning Sept. 30, 1999, at http://www.bCentral.com/ .
Change the menu a little and call it Office Live and you have today’s announcement:
Feb. 15, 2006 — Microsoft Corp. today announced the beta availability of Microsoft® Office Live (http://www.OfficeLive.com), offering small-business customers a cost-free opportunity to experience the company’s new Internet-based software services firsthand. A milestone for the online services previewed last fall, Microsoft Office Live combines the power of software and services to deliver rich and seamless experiences to small companies that want a presence online.

Microsoft Office Live helps lower the barriers to doing business online by offering small companies a set of Internet-based business services. Designed for ease of use and affordability, the online services are designed to give small businesses the same advantages as larger enterprises by getting them up and running on the Internet quickly, easily and inexpensively.

There were no surprises from the various “preannouncements” yesterday or even from the original Office Live announcement...So what’s with the Office moniker? There had been some expectations, despite all the clues to the contrary, that there were to be online versions of at least some of the Office products. Those hopes were dashed...The use of “Office” in “Office Live” apparently connotes business usage, and that’s it...So I guess we take it for what it is. There may well be a play in the hosted “intranet replacement” offering if they roll out some useful applications, but that’s a story we heard about the now defunct bCentral too (e.g. [1], [2]). Presumably, Microsoft thinks they’ll have more luck this time around, but it’s not clear why.

The folks working on Office Live have big plans for the service. The big question is whether our execs will let them execute on their vision or whether we'll continue to practice death by risk aversion.


 

Categories: Office Live