Niall Kennedy has a blog post entitled Authenticated and private feeds where he writes

Examples of private feeds intended for 1:1 communication include bank balances, e-mail notifications, project status, and the latest bids on that big contract. Data in the wrong hands could be dangerous, and many companies will stay away from the feed syndication space until they feel their users' personal data is secure.

A private feed's data could be exposed in a variety of ways. A desktop aggregator's feed content might be available to other users on the same computer, either through directory access or desktop search. An online aggregator might expose a feed and its content in search results or a preview mode.
...
A feed publisher could whitelist the user-agents it knows comply with its access policies. SSL encryption might not be a bad idea either as shared aggregation spaces might not store content requested over HTTPS. It would place extra load on the server as each request requires extra processing, but if the alternative is placing your customer's data in the Yahoo! search index then that's not such a bad thing.

I believe large publishers such as Salesforce.com or eBay would produce more feed content if they knew their customers' data was kept private and secure. There's a definite demand for more content transmitted over feed syndication formats but it will take the cooperation and collaboration of security formats and consistent aggregation practices to really move the needle in the right direction.

How to properly support private and authenticated feeds is a big problem, and one Niall highlights without going into much detail on why it is hard. The main problem is that the sites providing the feed have to be sure that the application consuming the feed is secure. At the end of the day, can Bank of America trust that RSS Bandit or Bloglines is doing a good job of adequately protecting the feed from spyware or malicious hackers?

More importantly, even if they certify these applications in some way, how can they verify that those applications are the ones actually accessing the feed? Niall mentions whitelisting user agents, but user-agent strings are trivial to spoof. With Web-based readers one can whitelist their IP range, but there is no good way to verify that a desktop application accessing your web server really is what its user-agent string claims it is.
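To see how little a user-agent whitelist proves, here is a minimal sketch: any HTTP client can claim to be a "certified" reader by setting one header. The URL and the "CertifiedReader" string are made up for illustration; the point is that nothing stops an arbitrary program from sending them.

```python
# Any HTTP client can impersonate a whitelisted aggregator simply by
# setting the User-Agent header; the server has no way to tell.
# (URL and user-agent string below are hypothetical.)
import urllib.request

req = urllib.request.Request(
    "https://bank.example.com/feeds/statements.xml",
    headers={"User-Agent": "CertifiedReader/1.0"},  # spoofed in one line
)
print(req.get_header("User-agent"))  # → CertifiedReader/1.0
```

The request is only constructed here, not sent, but any client library in any language offers the same one-line override, which is why IP whitelisting of known Web-based readers is at least verifiable while user-agent whitelisting of desktop apps is not.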

This seems to be yet another example of where Web-based software trumps desktop software.


 

Categories: Syndication Technology
Tracked by:
"Accessing Private and Authenticated Feeds - Why it's important" (Scott Hanselma... [Trackback]
http://www.hanselman.com/blog/AccessingPrivateAndAuthenticatedFeedsWhyItsImporta... [Pingback]
"IE7 RC1 can't update Password Protected Feeds" (Scott Hanselman's Computer Zen) [Trackback]
http://www.hanselman.com/blog/IE7RC1CantUpdatePasswordProtectedFeeds.aspx [Pingback]
"Web based not greater than desktop." (zhasper.com) [Trackback]
"If it is really so simple, why is it so hard?" (Oppositionally Defiant) [Trackback]
http://blog.oppositionallydefiant.com/PermaLink,guid,dee6fb31-d90b-4d4b-b6e2-125... [Pingback]

Monday, 18 September 2006 18:58:42 (GMT Daylight Time, UTC+01:00)
Web-based software doesn't really provide much benefit in this case, as it is the web browser client's responsibility to not cache SSL responses. Some web browsers allow SSL responses to be cached.

Also, the user-agent can just as easily be spoofed in a web-based software scenario -- just think of all the browsers out there that let the user change their user-agent string so that the browser appears as IE to the server.
Matt Sherman
Monday, 18 September 2006 20:27:31 (GMT Daylight Time, UTC+01:00)
Isn't the "real" solution to this something like Kerberos? Ideally I'd like none of the services that I use to hold or even cache authentication info from any other service.
Monday, 18 September 2006 21:27:04 (GMT Daylight Time, UTC+01:00)
Matt,
You're right that there is still the issue of whether the Web browser behaves properly with regard to caching SSL responses, but since we already have this problem with SSL responses on the HTML Web, I don't think it is a problem specific to authenticated feeds.

Robert,
How does this resolve the problem of the feed reader caching the data it receives from the server after decrypting it? Caching the content of feeds is a fundamental feature of every RSS reader.
Monday, 18 September 2006 21:37:01 (GMT Daylight Time, UTC+01:00)
I agree that caching is fundamental (kind of the whole point), so then we get into the whole Common Feed Store issue and why it's problematic. It'll need to be encrypted itself, so then how do we (aggregators) protect access to the storage from "rogue aggregators"?
Monday, 18 September 2006 23:28:16 (GMT Daylight Time, UTC+01:00)
I think to solve the cache vulnerability one would have to start thinking about encrypting the content at the source, at the XML level, and using certificates or private keys to unlock the content every time it is accessed from the cache. In RSS 2.0, for example, this could mean encrypting the description part of each item (it would probably need to be done at the item level to allow category routing/aggregation, etc.).
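The item-level idea above can be sketched in a few lines: only the description is transformed, so title and category stay readable for routing. This is a hedged illustration, not a real scheme; base64 here is a placeholder for an actual cipher (e.g. AES via W3C XML Encryption) and provides no confidentiality on its own, and the marker attribute is invented for the example.

```python
# Sketch of item-level protection in an RSS 2.0 item: encrypt only
# <description>, leaving <title>/<category> usable for aggregation.
# base64 is a STAND-IN for real encryption -- it hides nothing.
import base64
import xml.etree.ElementTree as ET

item_xml = """<item>
  <title>Account statement</title>
  <category>banking</category>
  <description>Balance is low</description>
</item>"""

item = ET.fromstring(item_xml)
desc = item.find("description")
plaintext = desc.text.encode("utf-8")
desc.text = base64.b64encode(plaintext).decode("ascii")  # real cipher goes here
desc.set("type", "encrypted")  # hypothetical marker attribute

print(ET.tostring(item, encoding="unicode"))
```

An aggregator holding the key would reverse the transform on each read from its cache, so the on-disk feed store never contains the plaintext description.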
Tuesday, 19 September 2006 05:06:07 (GMT Daylight Time, UTC+01:00)
Dare: I wasn't thinking of encryption (I guess that would be good for caching private feeds, wouldn't it?), just of the trust issue of storing credentials for one service (say an authenticated feed) on another service (like a web-based aggregator). In a non-feed context, it's like when Flickr or YouTube wants your weblog credentials to make posting content from there to your blog more convenient, except now you multiply that problem over many authenticated feeds.

Kerberos, from my vague understanding, allows you to establish a trust relationship where one party can "vouch for" another – in this case, convincing an authenticated feed supplier that the aggregator is acting on behalf of a user that has the right credentials.

Another way to approach this problem (or maybe you'd layer it on top of a Kerberos-like scheme) is to have a federated identity scheme. As you are probably aware, version 1.0 of that didn't go very well for Microsoft…
Tuesday, 26 September 2006 16:06:10 (GMT Daylight Time, UTC+01:00)
Email readers have the login credentials for all the user's email accounts. And those email accounts are usually private. Why are private feeds different? Why would this require any more stringent process, or any different handling?

A problem may arise if, for example, a bank feed used the same login credentials needed to access the bank site and move money around. But the solution for that can be as simple as offering the private feed service with a different username/password than the one needed to take any other action. Which shouldn't be complicated.

As for choosing which program is, or isn't, allowed to access a feed, that seems like overkill.
The bank will let you log in with a browser, and won't limit which browser (well, they often limit it to IE only, but that surely isn't because the other browsers are so much less secure). Why should it limit the feed reader?

The companies offering these services do have a responsibility for their users' privacy. But at some point it comes down to the user's decision.
The user chooses which browser to use, and some may be less secure. The user chooses which email program to use, and some may be less secure, and even go as far as not deleting email messages after they are read, even if the messages were taken from the server over SSL/TLS (huge gasp, anyone?). The user even chooses which email server/provider to use, and some of them may be less secure.
Not to mention it's the user's choice what to do with the paper mail they get from the bank, containing all their information, and whether to shred it, put it in a safe, or just leave it in an unguarded drawer accessible to any houseguest.

And the user can certainly choose their feed reader. In this particular case a simple disclaimer on the feed provider's side (e.g. "If you have bloglines/newsgator/etc get your bank statement, they could read your bank statements. And if you have your feed reader keep your bank statements, anyone else using it could read your bank statements") should be more than enough.
Yaron
Comments are closed.