Scott Isaacs has written a series of posts pointing out one of the biggest limitations of applications that use Asynchronous JavaScript and XML (AJAX). The posts are XMLHttpRequest - Do you trust me?, XMLHttpRequest - Eliminating the Middleman, and XMLHttpRequest - The Ultimate Outliner (Almost). The limitation is discussed in the first post in the series, where Scott wrote

Many web applications that "mash-up" or integrate data from around the web hit the following issue: How do you request data from third party sites in a scalable and cost-effective way? Today, due to the cross-domain restrictions of xmlhttprequest, you must proxy all requests through a server in your domain.

Unfortunately, this implementation is very expensive. If you have 20 unique feeds across 10000 users, you are proxying 200,000 unique requests! Refresh even a percentage of those feeds on a schedule and the overhead and costs really start adding up. While mega-services like MSN, Google, Yahoo, etc., may choose to absorb the costs, this level of scale could ruin many smaller developers. Unfortunately, this is the only solution that works transparently (where users don't have to install anything or modify settings).

This problem arises because the xmlhttprequest object can only communicate back to the originating domain. This restriction greatly limits the potential for building "mash-up" or rich aggregated experiences. While I understand the wisdom behind this restriction, I have begun questioning its value and am sharing some of my thoughts on this.

I encountered this limitation in the first AJAX application I wrote, the MSN Spaces Photo Album Browser, which is why it requires you to add my domain to your trusted websites list in Internet Explorer to work. I agree with Scott that this is a significant limitation that hinders the potential of various mashups on the Web today. I'd also like to see a solution to this problem proposed. 
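The proxy workaround Scott describes can be sketched in a few lines. This is a minimal illustration, not any real service's code; the names (`ALLOWED_FEEDS`, `is_allowed`, `proxy_feed`) are mine, and the allowlist is there because an unrestricted proxy would let anyone relay requests through your domain:

```python
# Sketch of the same-domain proxy pattern: the browser can only call its
# own origin, so the server fetches third-party feeds on the page's behalf.
from urllib.parse import urlparse

# Only proxy feeds we explicitly trust, to avoid becoming an open relay.
ALLOWED_FEEDS = {"example.org", "feeds.example.net"}

def is_allowed(feed_url):
    """Return True if the feed's host is on the proxy's allowlist."""
    return urlparse(feed_url).hostname in ALLOWED_FEEDS

def proxy_feed(feed_url, fetch):
    """Fetch a third-party feed server-side. `fetch` is injected so the
    sketch stays self-contained and testable without network access."""
    if not is_allowed(feed_url):
        raise PermissionError("feed host not on allowlist")
    return fetch(feed_url)
```

Every page view that touches N feeds turns into N server-side fetches like this, which is exactly the scaling cost Scott complains about.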

In his post, Scott counters a number of the reasons usually given for why this limitation exists, such as phishing attacks, cross-site scripting, and leakage of private data. However, Derek Denny-Brown describes the real reason this limitation exists in Internet Explorer in his post XMLHttpRequest Security, where he wrote

I used to own Microsoft's XMLHttpRequest implementation, and I have been involved in multiple security reviews of that code. What he is asking for is possible, but would require changes to the way credentials (username/password) are stored in Windows' core URL resolution library: URLMON.DLL. Here is a copy of my comments that I posted on his blog entry:

The reason for blocking cross-site loading is primarily because of cached credentials. Today, username/password information is cached to avoid forcing you to reenter it for every HTTP reference, but that also means that script on yahoo.com would have full access to _everything_ in your gmail/hotmail/bank account, without a pop-up or any other indication that the yahoo page was doing so. You could fix this by associating saved credentials with a src URL (plus some trickery when the src was from the same site), but that would require changes to the guts of the Windows URL support libraries (URLMON.DLL).

Comparing XML to CSS or images is unfair. While you can link to an image on another site, script can't really interact with that image; for example, posting that image back to the script's host site. CSS is a bit more complicated, since the DOM does give you an API for interacting with the CSS, but I have never heard of anyone storing anything private to the user in a CSS resource. At worst, you might be able to figure out the user's favorite color.

Ultimately, it gets back to the problem that there needs to be a way for the user to explicitly enable the script to access those resources. If done properly, it would actually be safer for the user than the state today, where the user has to give out their username and password to sites other than the actual host associated with that login.
I'd love to see Microsoft step up and provide a solution that addresses the security issues. I know I've run up against this implementation many times.

That makes sense; the real problem is that a script on my page could request http://www.example.com/yourbankaccount and get at your account information because of your cookies and cached credentials. That is a big problem, and one that the browser vendors should work toward fixing instead of allowing the status quo to remain.
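The crux of that attack is that the browser's credential cache keys on the *target* host, not on which page initiated the request. A toy model (purely illustrative; the `Browser` class here is mine, not real browser code) makes the problem concrete:

```python
# Toy model of ambient credentials: whichever page makes the request,
# the stored cookie for the target host rides along with it.
class Browser:
    def __init__(self):
        self.cookies = {}  # target host -> session cookie

    def login(self, host, cookie):
        self.cookies[host] = cookie

    def request(self, initiating_page, target_host):
        # Note: initiating_page plays no role in whether the cookie is sent.
        return {"to": target_host, "cookie": self.cookies.get(target_host)}

b = Browser()
b.login("bank.example.com", "session=secret")
# A script running on evil.example.com gets a fully authenticated request:
req = b.request("evil.example.com", "bank.example.com")
```

Same-origin restrictions paper over this by forbidding the cross-domain request entirely, rather than by fixing the credential model.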

In fact, a proposal already exists for what this solution would look like from an HTTP protocol and API perspective. Chris Holland has a post entitled ContextAgnosticXmlHttpRequest - An Informal RFC, in which he discusses some of the pros and cons of allowing cross-site access with IXmlHttpRequest while having the option to not send cookies and/or cached credentials.
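The key idea in that proposal, as described, is that a cross-site request would be stripped of ambient credentials unless explicitly told otherwise. A rough sketch of that shape (the function and parameter names are my own illustration, not from the informal RFC):

```python
# Hypothetical request builder: cookies for the target host are attached
# only when the caller explicitly opts in, instead of by default.
from urllib.parse import urlparse

def build_cross_site_request(url, cookie_jar, send_credentials=False):
    """Build request headers for a cross-site fetch.

    cookie_jar maps host -> stored cookie; by default it is ignored,
    so the request carries no ambient authentication."""
    host = urlparse(url).hostname or ""
    headers = {"Host": host}
    if send_credentials and host in cookie_jar:
        headers["Cookie"] = cookie_jar[host]
    return headers
```

With credentials off by default, a cross-site fetch can read public data (feeds, public APIs) without being able to impersonate the user at their bank or webmail.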

Saturday, October 29, 2005 6:58:28 PM (GMT Daylight Time, UTC+01:00)
The more general problem is that the site hosting the script can access anything that the user's browser can access. This includes:

- Access to resources behind a firewall. If I can get you to run my script, then my script can read http://blogs from inside the Microsoft firewall and post it back to my site.

- Access to sites using cached credentials.

I think Macromedia has solved the problem in a safe way for Flash. See http://www.macromedia.com/cfusion/knowledgebase/index.cfm?id=tn_14213 .
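For reference, Flash's opt-in mechanism is a policy file the target site serves from its root as /crossdomain.xml; a minimal example (the domain value here is illustrative) looks like this:

```
<?xml version="1.0"?>
<cross-domain-policy>
  <allow-access-from domain="www.example.com"/>
</cross-domain-policy>
```

The foreign site, not the requesting script, decides who may read its data, which is exactly the property the XMLHttpRequest proposals below are after.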
Flash Jones
Sunday, October 30, 2005 2:27:46 AM (GMT Standard Time, UTC+00:00)
I think he's too quick to dismiss phishing attacks. One thing you could do is to create a fake site that actually accesses the real site. One can imagine all sorts of incredibly nasty things you could do. Imagine creating www.amazn.com. For every page, you just load the data from the real amazon.com site in the back end. The user could go through the entire process of filling their cart, checking out and paying, while you record everything they do, including their credit card number. Everything would appear perfectly correct to the user, including saved-for-later content, addresses, etc.
ucblockhead
Sunday, October 30, 2005 6:37:02 AM (GMT Standard Time, UTC+00:00)
Flash Jones:

Indeed, and to address the corporate firewall issue you mentioned: in the WHAT-WG mailing list discussion that ensued following my proposal, we mostly reached a consensus that a site would have to send an extra HTTP header with its responses in order for them to be read/interpreted as part of a ContextAgnosticXmlHttpRequest. Such a header could be something like "X-Allow-Foreign-Access: true".

This means that if you publish xmlhttp data behind a firewall on a private intranet, and a remote attacker who knows about it tries to load it from a foreign document sitting on a malicious phishing site, you're basically immune by default. It would be as if the foreign attacker were loading your document in an iframe: their document would have no way to access its contents because of domain-based restrictions.

This would be a security measure *added* to the already-proposed security measures of 1) discarding cookie directives and 2) discarding/not passing HTTP Basic authentication.
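The browser-side check implied by this opt-in header is tiny; a sketch (the function name is mine, and the header name and default-deny behavior are taken from the comment above, not from any shipped implementation):

```python
# A browser following this proposal would expose a cross-domain response
# to script only if the server explicitly opted in via the extra header.
def may_expose_cross_domain(response_headers):
    """Return True only when the response carries the opt-in header."""
    value = response_headers.get("X-Allow-Foreign-Access", "")
    return value.strip().lower() == "true"
```

Anything behind a firewall that never sends the header stays unreadable to foreign pages by default, which is the immunity property described above.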
Comments are closed.