Ian McKellar has a blog post entitled Insecurity is Ruby on Rails Best Practice where he points out that by default the Ruby on Rails framework makes sites vulnerable to a certain class of exploits. Specifically, he discusses the vulnerabilities in two Ruby on Rails applications, 37 Signals Highrise and Magnolia, then proposes solutions. He writes

Cross Site Request Forgery
CSRF is the new bad guy in web application security. Everyone has worked out how to protect their SQL database from malicious input, and RoR saves you from ever having to worry about this. Cross site scripting attacks are dying and the web community even managed to nip most JSON data leaks in the bud.

Cross Site Request Forgery is very simple. A malicious site asks the user's browser to carry out an action on a site that the user has an active session on and the victim site carries out that action believing that the user intended that action to occur. In other words the problem arises when a web application relies purely on session cookies to authenticate requests.
...
Solutions
Easy Solutions
There aren't any good easy solutions to this. A first step is to do referrer checking on every request and block GET requests in form actions. Simply checking the domain on the referrer may not be enough security; if there's a chance that HTML could be posted somewhere in the domain by an attacker, the application would be vulnerable again.

Better Solutions
Ideally we want a shared secret between the HTML that contains the form and the rails code in the action. We don't want this to be accessible to third parties so serving it as JavaScript isn't an option. The way other platforms like Drupal achieve this is by inserting a hidden form field into every form that's generated that contains a secret token, either unique to the current user's current session or (for the more paranoid) also unique to the action. The action then has to check that the hidden token is correct before allowing processing to continue.

Incidents of Cross Site Request Forgery have become more common with the rise of AJAX, and this class of attack is likely to become as endemic as SQL injection until the majority of Web frameworks take it into account in their out-of-the-box experience.
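To make the shared secret approach concrete, here's a minimal, framework-agnostic sketch in Python. It isn't Rails or Drupal code, and the SERVER_SECRET, csrf_token and hidden field names are all my own invention; the point is that the token is derived from the user's session on the server, so a page on another site can neither read it nor compute it, and a forged request fails the check:

```python
import hashlib
import hmac

# Hypothetical per-deployment secret; in practice this lives in configuration,
# never in source control.
SERVER_SECRET = b"replace-with-a-long-random-value"

def csrf_token(session_id: str, action: str = "") -> str:
    """Derive a per-session (and optionally per-action) token from the session id."""
    message = f"{session_id}:{action}".encode()
    return hmac.new(SERVER_SECRET, message, hashlib.sha256).hexdigest()

def hidden_field(session_id: str, action: str = "") -> str:
    """Emit the hidden form field that carries the shared secret to the browser."""
    return f'<input type="hidden" name="csrf_token" value="{csrf_token(session_id, action)}"/>'

def is_valid_request(form: dict, session_id: str, action: str = "") -> bool:
    """Reject any state-changing request whose token is missing or wrong."""
    submitted = form.get("csrf_token", "")
    return hmac.compare_digest(submitted, csrf_token(session_id, action))
```

A malicious page can still make the victim's browser send a request, but without the token that request is refused.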

At Microsoft, the Web teams at MSN and Windows Live have given the folks in Developer Division the benefit of their experience building Web apps, which has helped ensure that our Web frameworks like ASP.NET Ajax (formerly codenamed Atlas) avoid this issue in their default configuration. Scott Guthrie outlines the safeguards against this class of issues in his post JSON Hijacking and How ASP.NET AJAX 1.0 Avoids these Attacks where he writes

Recently some reports have been issued by security researchers describing ways hackers can use the JSON wire format used by most popular AJAX frameworks to try and exploit cross domain scripts within browsers. Specifically, these attacks use HTTP GET requests invoked via an HTML script include element to circumvent the "same origin policy" enforced by browsers (which limits JavaScript objects like XmlHttpRequest to only calling URLs on the same domain that the page was loaded from), and then look for ways to exploit the JSON payload content.

ASP.NET AJAX 1.0 includes a number of default settings and built-in features that prevent it from being susceptible to these types of JSON hijacking attacks.
...
ASP.NET AJAX 1.0 by default only allows the HTTP POST verb to be used when invoking web methods using JSON, which means you can't inadvertently allow browsers to invoke methods via HTTP GET.

ASP.NET AJAX 1.0 requires a Content-Type header to be set to "application/json" for both GET and POST invocations to AJAX web services. JSON requests that do not contain this header will be rejected by an ASP.NET server. This means you cannot invoke an ASP.NET AJAX web method via a script include because browsers do not allow appending custom content-type headers when requesting a JavaScript file like this.

These mitigations would solve the issues that Ian McKellar pointed out in 37 Signals Highrise and Magnolia because HTML forms hosted on a malicious site cannot set the Content-Type header, so that exploit is blocked. However, neither this approach nor checking the referrer to see if requests come from your domain is enough if the malicious party finds a way to upload HTML or script onto your site.
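The Content-Type and POST-only defenses are easy to picture in code. The sketch below is in Python rather than ASP.NET, and the handle_json_request helper is purely illustrative rather than an actual ASP.NET AJAX API; it shows the kind of gatekeeping a JSON endpoint can do before it ever parses the payload:

```python
import json

def handle_json_request(method: str, headers: dict, body: str):
    """Illustrative gatekeeping for a JSON web method (not real ASP.NET AJAX code)."""
    # A cross-site <form> post can't send Content-Type: application/json, and a
    # <script src="..."> include can only issue a GET, so both vectors are rejected here.
    if method.upper() != "POST":
        return 405, "JSON web methods may only be invoked via HTTP POST"
    content_type = headers.get("Content-Type", "").split(";")[0].strip().lower()
    if content_type != "application/json":
        return 415, "Requests must declare Content-Type: application/json"
    return 200, json.loads(body)  # only now do we trust the body enough to parse it
```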

To completely mitigate this attack, the shared secret approach is the most secure and is what is used by most large websites. In this approach each page that can submit a request carries a canary value (i.e. a hidden form key) which must be returned with the request. If the form key is not provided, is invalid or has expired, the request fails. This functionality is provided out of the box in ASP.NET by setting the Page.ViewStateUserKey property. Unfortunately, this feature is not on by default. On the positive side, it is a simple one-line code change to get this functionality, which has to be rolled by hand on a number of other Web platforms today.


 

Categories: Web Development

The good folks on the Microsoft Experimentation Platform team have published a paper which gives a great introduction to how and why one can go about using controlled experiments (i.e. A/B testing) to improve the usability of a website. The paper is titled Practical Guide to Controlled Experiments on the Web: Listen to Your Customers not to the HiPPO and will be published as part of the Thirteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. The paper begins

In the 1700s, a British ship’s captain observed the lack of scurvy among sailors serving on the naval ships of Mediterranean countries, where citrus fruit was part of their rations. He then gave half his crew limes (the Treatment group) while the other half (the Control group) continued with their regular diet. Despite much grumbling among the crew in the Treatment group, the experiment was a success, showing that consuming limes prevented scurvy. While the captain did not realize that scurvy is a consequence of vitamin C deficiency, and that limes are rich in vitamin C, the intervention worked. British sailors eventually were compelled to consume citrus fruit regularly, a practice that gave rise to the still-popular label limeys.

Some 300 years later, Greg Linden at Amazon created a prototype to show personalized recommendations based on items in the shopping cart (2). You add an item, recommendations show up; add another item, different recommendations show up. Linden notes that while the prototype looked promising, "a marketing senior vice-president was dead set against it", claiming it will distract people from checking out. Greg was "forbidden to work on this any further". Nonetheless, Greg ran a controlled experiment, and the "feature won by such a wide margin that not having it live was costing Amazon a noticeable chunk of change". With new urgency, shopping cart recommendations launched. Since then, multiple sites have copied cart recommendations.

The authors of this paper were involved in many experiments at Amazon, Microsoft, Dupont, and NASA. The culture of experimentation at Amazon, where data trumps intuition (3), and a system that made running experiments easy, allowed Amazon to innovate quickly and effectively. At Microsoft, there are multiple systems for running controlled experiments. We describe several architectures in this paper with their advantages and disadvantages. A unifying theme is that controlled experiments have great return-on-investment (ROI) and that building the appropriate infrastructure can accelerate innovation.

I learned quite a bit from reading the paper although I did somewhat skip over some of the parts that involved math. It's pretty interesting when you realize how huge an impact changing the layout of a page or moving links around can have on the bottom line of a Web company. We're talking millions of dollars for the most popular sites. That's pretty crazy.
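For anyone who also skipped the math, the core calculation behind these experiments is just a comparison of two conversion rates. Here's a rough sketch of a two-proportion z-test for a Control vs. Treatment split; it's my own illustration, not code from the paper, and the numbers are made up:

```python
from math import sqrt
from statistics import NormalDist

def ab_test(conv_control: int, n_control: int, conv_treatment: int, n_treatment: int):
    """Two-proportion z-test: did the Treatment change the conversion rate?"""
    p_c = conv_control / n_control
    p_t = conv_treatment / n_treatment
    pooled = (conv_control + conv_treatment) / (n_control + n_treatment)
    se = sqrt(pooled * (1 - pooled) * (1 / n_control + 1 / n_treatment))
    z = (p_t - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_c, p_t, z, p_value

# Made-up numbers: Control converts 500 of 10,000 visitors, Treatment 570 of 10,000.
print(ab_test(500, 10_000, 570, 10_000))
```

Even a fraction-of-a-percent lift that survives a test like this can turn into the millions of dollars mentioned above once it's multiplied across a popular site's traffic.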

Anyway, Ronny Kohavi from the team mentioned that they will be giving a talk related to the paper at eBay research labs tomorrow at 11AM. The talk will be in Building 0 (Toys) in room 0158F. The address is 2145 Hamilton Avenue, San Jose, CA. If you are in the Silicon Valley area, this might be a nice bring-your-own-lunch event to attend.


 

The MSN Soapbox team has a blog post entitled Soapbox Is Now Open For Full Video Viewing which states

We have just opened Soapbox on MSN Video for full video viewing. You no longer need to be signed into the service to view Soapbox videos!
 
However just as before, you still need to sign in to upload, comment, tag or rate videos. The Embedded Player is also available for full viewing and embedding.
 
While it might not be visible, we have spent the last month working hard on improving our backend, to make encoding and performance as scalable as possible.
 
We are also now conducting proactive filtering of all uploaded content, using technology from Audible Magic. Audible Magic bases their filtering on the video's audio track and their database is being updated constantly. By using this technology, we see this as an important step to ensure the viability and success of Soapbox over the long run.

You can dive right in and explore the site's new interface by checking out my friend Ikechukwu's music video. I like the redesign of the site although I have a few minor usability quibbles. The content filtering is also an interesting addition to the site and was recently mentioned in the Ars Technica article Soapbox re-opens, beating YouTube to the punch with content filtering. Come to think of it, I wonder what's been taking Google so long to implement content filtering for copyrighted materials given how long they've been talking about it. It's almost as if they are dawdling to add the feature. I wonder why?

Speaking of usability issues, the main problem I have with the redesign is that once you start browsing videos, the site's use of AJAX/Flash techniques works against it. Once you click on any of the videos shown beside the one you are watching, the video changes inline instead of navigating to a new page, which means the browser's URL doesn't change. To get a permalink to a video I had to hunt around in the interface until I saw a "video link" which gave me the permalink to the video I was currently watching. Both Google Video and YouTube actually reload the page and thus change the URL when you click on a video, and YouTube makes the URL prominent on the page as well. Although I like the smoothness of inline transitions, making the permalink and other sharing options more prominent would make the site a lot more user friendly in my opinion.


 

Categories: MSN

If you are a member of the Microsoft developer community, you've probably heard of the recent kerfuffle between Microsoft and the developer of TestDriven.NET that was publicized in his blog post Microsoft vs TestDriven.Net Express. I'm not going to comment directly on the situation especially since lawyers are involved. However I did find the perspective put forth by Leon Bambrick in his post TestDriven.net-Gate: Don't Blame Microsoft, Blame Jason Weber to be quite insightful.

Leon Bambrick wrote

If you have time, you really ought to read the whole thing. I've read as much as I can, and here's my analysis.

Right from the start, before tempers flared, Microsoft's representative, Jason Weber, should've done a much better job of convincing Jamie not to release the express sku. Jason did put a lot of effort in, and Microsoft spent a lot of effort on this, but Jason sabotaged his own side's efforts right throughout.

The first clear mistake is that Jason should've never referred to Jamie's work as a "hack". He did this repeatedly -- and it seems to have greatly exacerbated the situation. What did that wording gain Jason (or Microsoft)? It only worked to insult the person he was trying to come to agreement with. Name calling doesn't aid negotiation.

When Jamie finally agreed to remove the express version, he wanted a credible reason to put on his website. Note that Jamie had backed down now, and with good treatment the thing should've been resolved at that point. Here's the wording that Jason recommended:

"After speaking with Jason Weber from Microsoft I realized that by adding features to Visual Studio Express I was in breach of the Visual Studio license agreements and copyrights. I have therefore decided to remove support for the Visual Studio Express SKU's from TestDriven.Net. Jason was very supportive of TestDriven.Net's integration into the other Visual Studio 2005 products and I was invited to join the VSIP program. This would allow me to fly to Redmond each quarter and work closely with the Visual Studio development team on deeper integration."

This wording is offensive on four levels. One Two Three Four. That's a lot of offense!

Firstly -- it acts as an advertisement for Jason Weber. Why? Arrogance maybe? He's lording it over Jamie.

Second -- it supposes that Jason should publicly admit to violations. He need never admit such a thing.

Third -- it includes mention of breach of "copyright". I don't think such an allegation ever came up until that point. So this was a fresh insult.

Fourth -- it stings Jamie's pride, by suggesting that he was bribed into agreement. Ouch

So just when they got close to agreement, Jason effectively kicked Jamie in the nuts, pissed in his face, poked him in the eye, and danced on his grave.

That's not a winning technique in negotiations.

I believe there is a lesson on negotiating tactics that can be extracted from this incident. I really hope this situation reaches an amicable conclusion for the sake of all parties involved.


 

Categories: Life in the B0rg Cube

June 4, 2007
@ 04:47 PM

Last week there was a bunch of discussion in a number of blogs about whether we need an interface definition language (IDL) for RESTful Web services. There were a lot of good posts on this topic but it was the posts from Don Box and Bobby Woolf that gave me the most food for thought.

In his post entitled WADL, WSDL, XSD, and the Web Don Box wrote

More interesting fodder on Stefan Tilkov's blog, this time on whether RESTafarians need WSDL-like functionality, potentially in the form of WADL.

Several points come to mind.

First, I'm doubtful that WADL will be substantially better than WSDL given the reliance on XSD to describe XML payloads. Yes, some of the cruft WSDL introduces goes away, but that cruft wasn't where the interop problems were lurking.

I have to concur with Don's analysis about XSD being the main cause of interoperability problems in SOAP/WS-* Web services. In a past life, I was the Program Manager responsible for Microsoft's implementations of the W3C's XML Schema Definition Language (aka XSD). The main problem with the technology is that XML developers wanted two fairly different things from a schema language:

  1. A grammar for describing and enforcing the contract between producers and consumers of XML documents so that one could, for example, confirm that an XML document received was a valid purchase order or RSS feed.
  2. A way to describe strongly typed data such as database tables or serialized objects as XML documents for use in distributed programming or distributed query operations.

In hindsight, this probably should have been two separate efforts. Instead the W3C XML Schema working group tried to satisfy both sets of constituencies with a single XML schema language. The resulting technology ended up being ill suited to both tasks. The limitations placed on it by having to be a type system made it unable to describe common constructs in XML formats, such as elements showing up in any order (e.g. in an RSS feed title, description, pubDate, etc. can appear in any order as children of item) or co-occurrence constraints (e.g. in an Atom feed a text construct may have XML content or textual content depending on the value of its type attribute).
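To see why the co-occurrence case is painful, consider that the Atom rule ("what a text construct may contain depends on its type attribute") is one conditional in ordinary code yet has no direct equivalent in W3C XML Schema 1.0. The check below is my own illustration of that rule in Python, not a complete Atom validator:

```python
import xml.etree.ElementTree as ET

XHTML_DIV = "{http://www.w3.org/1999/xhtml}div"

def validate_text_construct(elem: ET.Element) -> bool:
    """Co-occurrence rule XSD 1.0 cannot express: the allowed content of an Atom
    text construct depends on the value of its type attribute."""
    kind = elem.get("type", "text")
    if kind in ("text", "html"):
        return len(elem) == 0  # character data only, no child elements
    if kind == "xhtml":
        return len(elem) == 1 and elem[0].tag == XHTML_DIV  # exactly one xhtml:div child
    return False

title = ET.fromstring('<title xmlns="http://www.w3.org/2005/Atom" type="text">Hello</title>')
print(validate_text_construct(title))  # True
```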

As a mechanism for describing serialized objects for use in distributed computing scenarios (aka Web services) it caused several interoperability problems due to the impedance mismatch between W3C XML Schema and object oriented programming constructs. The W3C XML Schema language had a number of type system constructs such as simple type facet restriction, anonymous types, structural subtyping, namespace based wildcards, identity constraints, and mixed content which simply do not exist in the typical programming language. This led to interoperability problems because each SOAP stack had its own idiosyncratic way of mapping the various XSD type system constructs to objects in the target platform's programming language and vice versa. Also, no two SOAP stacks supported the same set of XSD features, even within Microsoft, let alone across the industry. There are several SOAP interoperability horror stories on the Web such as the reports from Nelson Minar on Google's problems using SOAP in posts such as Why SOAP Sucks and his ETech 2005 presentation Building a New Web Service at Google. For a while, the major vendors in the SOAP/WS-* space tried to tackle this problem by forming a WS-I XML Schema Profile working group, but I don't think that went anywhere, primarily because each vendor supported different subsets of XSD so no one could agree on what features to keep and what to leave out.

To cut a long story short, any technology that takes a dependency on XSD is built on a shaky technological foundation. According to the WADL specification there is no requirement that a particular XML schema language is used so it doesn't have to depend on XSD. However besides XSD, there actually isn't any mainstream technology for describing serialized objects as XML. So one has to be invented. There is a good description of what this schema language should look like in James Clark's post Do we need a new kind of schema language? If anyone can fix this problem, it's James Clark.

Leaving aside the fact that 80% of the functionality of WADL currently doesn't exist, because we either need to use a broken technology (i.e. XSD) or wait for James Clark to finish inventing Type Expressions for Data Interchange (TEDI), what else is wrong with WADL?

In a post entitled WADL: Declared Interfaces for REST? Bobby Woolf writes

Now, in typing rest, my colleague Patrick Mueller contemplates that he "wants some typing [i.e. contracts] in the REST world" and, among other things, discusses WADL (Web Application Description Language). Sadly, he's already gotten some backlash, which he's responded to in not doing REST. So I (and Richard, and others?) think that the major advantage of WSDL over REST is the declared interface. Now some of the REST guys seem to be coming around to this way of thinking and are thinking about declared interfaces for REST. I then wonder if and how REST with declared interfaces would be significantly different from WSDL (w/SOAP).

One thing I've learned about the SOAP/WS-* developer world is that people often pay only lip service to certain words even though they use them all the time. For example, the technologies are often called Web services even though the key focus of all the major vendors and customers in this area is reinventing CORBA/DCOM with XML protocols as opposed to building services on the Web. Another word that is often abused in the SOAP/WS-* world is contract. When I think of a contract, I think of some lengthy document drafted by a lawyer that spells out in excruciating detail how two parties interact and what their responsibilities are. When a SOAP/WS-* developer uses the words contract and WSDL interchangeably, this seems incorrect because a WSDL is simply the XML version of OMG IDL, and an IDL is simply a list of API signatures. It doesn't describe expected message exchange patterns, required authentication methods, message traffic limits, quality of service guarantees, or even pre and post conditions for the various method calls. You usually find this information in the documentation and/or in the actual business contract one signs with the Web service provider. A WADL document for a REST Web service will not change this fact.

When a SOAP/WS-* developer says that he wants a contract, he really means he wants an interface definition language (IDL) so he can point some tool at a URL and get some stubs & skeletons automatically generated. Since this post is already long enough and I have to get to work, it is left as an exercise for the reader as to whether a technological approach borrowed from distributed object technologies like DCE/RPC, DCOM and CORBA meshes with the resource oriented, document-centric and loosely coupled world of RESTful Web services.

PS: Before any of the SOAP/WS-* wonks points this out, I realize that what I've described as a contract can in theory be implemented for SOAP/WS-* based services using a combination of WSDL 2.0 and WS-Policy. Good luck actually finding an implementation in practice that (i) works and (ii) is interoperable across multiple vendor SOAP stacks. 


 

Categories: XML Web Services

June 1, 2007
@ 04:09 AM

Mike Torres has the scoop in his blog post Windows Live betas - Writer, Messenger, Mail where he writes

Three applications I use on a daily basis just got updated today: Windows Live Writer, Windows Live Messenger, and Windows Live Mail.

All three of them are rock-solid in terms of stability (of course, they're still betas!) and come highly recommended by yours truly.

There are also a couple of blog posts on the various Windows Live team blogs about the betas. The Windows Live Hotmail team writes about the newly renamed "Windows Live Mail" in the post New beta software to access your Hotmail offline, the Windows Live Writer team has a blog post letting us know Windows Live Writer Beta 2 Now Available, and finally the Windows Live Messenger team runs down the changes in their latest release in the post Messenger 8.5 Beta1 released.

Like Mike I've been using the betas for a while and they are all rock solid. Check 'em out and let the product teams know what you think.


 

Categories: Windows Live

Matt Warren has an excellent Microsoft history lesson in his blog post entitled The Origin of LINQ to SQL which explores how LINQ to SQL (Microsoft's Object/Relational mapping technology with programming language integration) came to be despite the internal politics at Microsoft which encouraged the entire company to bet on WinFS. I'm excerpting a lot of his blog post because I wouldn't be surprised if he ends up taking it down or redacting it later. He writes

LINQ to SQL, possibly Microsoft’s first OR/M to actually ship in ten years of trying, was never even supposed to exist. It started out as a humble Visual Studio project on my desktop machine way back in the fall of 2003
...
Luckily, it didn’t take me long to get the basics up and running. You see, it wasn’t the first time I’d slapped together an OR/M or modified a language to add query capabilities; having already designed ObjectSpaces and parts of C-Omega, I was certainly up to the task. Fortunately, it gets a lot easier the ‘next’ time you design and build something, especially if it was you that did the designing before and you have the opportunity to start over fresh.
...
Why didn’t I start with WinFS? After all, it was all the rage inside the SQL Server org at the time. Unfortunately, it was the same story as with ObjectSpaces. They were shipping before us. We weren’t on their radar. Their hubris was bigger than ours. Not to mention my belief that WinFS was the biggest fiasco I’d ever bore witness to, but that’s another story.

Yet, part of that story was the impetus to turn LINQ to SQL into an actual product.

The WinFS client API even started out as a complete copy of the ObjectSpaces codebase and had all the same limitations. It just had more political clout as it was being led by a figure at a higher point in the corporate org chart, and so it was positioned as part of a juggernaut that was making a massive internal land grab. We on the outside used to refer to WinFS as the black hole, forever growing, sucking up all available resources, letting nothing escape and in the end producing nothing except possibly a gateway to an alternate reality. Many of our friends and co-workers had already been sucked in, and the weekly reports and horror stories were not for the weak-of-heart. It eventually sucked up ObjectSpaces too, and in the process killed it off so that in WinFS v2 it could all be ‘aligned’.

At that point, those of us designing LINQ got a bit worried. There were not too many in the developer division that believed in the mission of WinFS. As a developer tool for the masses, something simple that targeted the lower end was paramount. ObjectSpaces had been it, and now it was gone. There was still some glimmer of possibility that WinFS v2 might eventually get it right and be useful as a general OR/M tool. But all hope of that was shot when WinFS was pulled out of Vista and its entire existence was put in doubt. Had they immediately turned around and brought back ObjectSpaces, that might have worked, but in the intervening months ObjectSpaces had slipped past the point of no return for being part of .Net 2.0, turnover within the SQL org was spiraling out of control, and most of the brain-trust that knew anything about OR/M had already fled.

That’s when we realized we had no choice. If LINQ was to succeed it needed some legs to stand on. The ‘mock’ OR/M I was building was shaping up to be a pretty good contender. We had co-designed it in tandem with LINQ as part of the C# 3.0 design group and it really was a full-fledged implementation; we just never thought it was actually going to be a shipping product. It was simply meant to act as a stand-in for products that now no longer existed. So, for the sake of LINQ and the customer in general, we took up the OR/M torch officially, announcing our intention internally and starting the political nightmare that became my life for the next three years.

This takes me back. I had friends who worked on ObjectSpaces and it was quite heartbreaking to see what internal politics can do to passionate folks who once believed that technologies stand on their merits and not on who you know at Microsoft. At least this story had a happy ending. Passionate people figured out how to navigate the internal waters at Microsoft and are on track to ship a really compelling addition to the developer landscape.

Editor's Note: I added the links to ObjectSpaces and C-Omega to the excerpts from Matt's post to provide some context.


 

Categories: Life in the B0rg Cube

Robert Scoble breaks the news that Google brings developers offline with "Gears" where he writes

Right now in Sydney, Australia, the first of 10 Google Developer days are starting up and the audience there is hearing about several new initiatives. The most important of which is “Google Gears,” an open source project that will bring offline capabilities to Web Applications — aimed at developers
...
Regarding Gears. It works on Macs, Windows, Linux on IE, Firefox, Opera. Enables versioned offline storage. Extension to HTML/JavaScript.

They are showing me a demo of the new Google Reader using the new Gears plugin. After you load the Gears plugin you get a new icon at the top of your Reader window which enables offline capabilities of Google Reader. They showed how Google Reader then downloaded 2,000 feed items. They took the browser offline and it continued to work great.
...
Gears supports using Adobe’s Apollo and Flash and should support other technologies including Microsoft’s Silverlight.

Gears will be submitted to a standards organization eventually, they said, but want to make sure the technology is rock solid first.

Am I the only one wondering what took them so long? I remember chatting about this in mid-2005 with Joshua Allen; we were both pretty sure we would see it happen within a year. I guess a year and a half isn't so bad. :)

The bit about standardizing the technology is a nice touch, not that it matters. What matters is that "it doesn't work offline" is no longer a valid criticism for Google's family of Microsoft Office knockoffs (i.e. Google Docs & Spreadsheets) or any other AJAX/Flash application that competes with a desktop application. Running it through a standards body wouldn't make a significant difference one way or the other to the adoption of the technology for use with Google apps. Not doing so might discourage other developers from adopting the technology, but I doubt many developers would look this gift horse in the mouth; after all, it is freaking offline support for Web apps.
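Gears itself is a browser plugin driven from JavaScript (among other things it exposes a local SQLite-backed database and a local server for caching resources), so the snippet below is only a rough Python analogue of the pattern it enables: a feed reader that mirrors items into a local store while online and answers from that store when the network disappears. The class and schema are my own invention, not the Gears API.

```python
import sqlite3

class OfflineFeedCache:
    """Toy read-through cache: serve feed items locally when the network is gone."""

    def __init__(self, path: str = "reader-cache.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS items (id TEXT PRIMARY KEY, feed TEXT, title TEXT, body TEXT)"
        )

    def sync(self, feed: str, items: list[dict]) -> None:
        """Called while online: mirror the latest items into the local store."""
        self.db.executemany(
            "INSERT OR REPLACE INTO items (id, feed, title, body) VALUES (:id, :feed, :title, :body)",
            [{"feed": feed, **item} for item in items],
        )
        self.db.commit()

    def read(self, feed: str) -> list:
        """Called while offline: answer entirely from the local store."""
        return self.db.execute(
            "SELECT title, body FROM items WHERE feed = ?", (feed,)
        ).fetchall()
```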

Welcome to the future.


 

Kent Newsome has a blog post entitled Educating Kent: Facebook where he asks

I have a genuine question.

What is so much better about Facebook (and MySpace and other similar platforms) than an ordinary blog on a popular platform, say WordPress?

I would love it if someone could explain this to me.

As someone who's worked on a blogging/social networking service for the past two and a half years I have some perspective on why social networking sites are more popular than blogs (i.e. more people have a social network profile than a traditional "blog").

MY ANSWER: Social networking sites [especially Facebook] take better advantage of the human need to communicate by leveraging the following trends that became obvious once blogging took off

  1. Personal publishing is more than just text; it spans all media. Videos, music and photos are just as important for people to share as text. Traditional blogging tools/services like WordPress and Blogger have not taken advantage of this fact.

  2. People like to be informed about what is going on in their circle of friends (i.e. social networks). Bloggers tend to do this by subscribing to RSS feeds in their favorite RSS reader. Unfortunately, subscribing to RSS feeds has always been, and always will be, a fairly cumbersome way to satisfy this need regardless of how many browsers, email clients and Web sites add RSS reading functionality. On the other hand, a model where subscription is automatic once a user declares another user as being of interest to them (e.g. adding them as a friend), as opposed to locating and subscribing to their RSS feed, is easier for users to adopt and use. In addition, integrating the process of keeping abreast of updates from "friends" into an existing application the user is familiar with and uses regularly is preferable to introducing a new application. I like to call this the LiveJournal lesson.

The above phenomena are the reason that MSN Spaces (now Windows Live Spaces) grew to over 100 million unique visitors less than two years after it first showed up. MSN Spaces was one of the first major personal publishing sites to place publishing of other media (e.g. photo albums) on the same footing as blogging/creating a journal. This was a big hit with users and the service followed up with tools for embedding music and videos; however, we didn't provide media hosting or a library of content which users could choose from. These mistakes weren't made by MySpace, which thanks to its widget platform could rely on services like PhotoBucket and YouTube to provide both media hosting and a library of content for users to share. Now MySpace is one of the most popular sites on the Web.

The second major reason for the initial success of MSN Spaces (now Windows Live Spaces) lies in its integration with Windows Live Messenger. The key aspect of this integration was the gleams feature, which was described as follows by Paul Thurrott in his review of MSN Messenger 7

Additionally, when you click on your own display picture in Messenger 7.0, your Contact Card displays. This small window provides a range of personal information and links to other MSN services. You can access other users' Contact Cards by clicking their picture in the main Messenger window. But Messenger 7.0 takes this capability a bit further with another new feature called a gleam, which visually reminds you when one of your contacts has updated their MSN Spaces blog or other personal information. Gleams appear as small orange stars next to contact pictures in the main MSN Messenger window.

With gleams, the act of adding someone as an IM buddy also subscribes you to updates about changes on their Windows Live space. Our users loved it. In hindsight, where we dropped the ball is that it isn't much of a stretch to imagine a Web interface which summarizes these updates from your friends so you can access them from anywhere, not just your IM client. In addition, it is also lame that we don't provide details of the nature of the update inline and instead require users to click on the contact card to tell which of their friends' information has changed. Once you add those two features, you've pretty much got Twitter (text only) and the Facebook News Feed, which have both turned out to be big hits on the Web.
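The difference between the two models is easy to see in a toy sketch: with a feed reader the user has to find and add each feed by hand, while in the gleams/news feed model the subscription simply falls out of the buddy list. This is my own simplification, not how Messenger or Facebook actually implement their feeds:

```python
from collections import defaultdict

class SocialGraph:
    """Toy model: adding a friend implicitly subscribes you to their updates."""

    def __init__(self):
        self.friends = defaultdict(set)   # user -> set of friends
        self.updates = defaultdict(list)  # user -> list of things they have done

    def add_friend(self, user: str, friend: str) -> None:
        # One gesture does two jobs: declare the relationship *and* subscribe.
        self.friends[user].add(friend)

    def post_update(self, user: str, text: str) -> None:
        self.updates[user].append(text)

    def news_feed(self, user: str) -> list:
        """The Web page that summarizes updates from your friends."""
        return [f"{friend}: {text}"
                for friend in sorted(self.friends[user])
                for text in self.updates[friend]]

graph = SocialGraph()
graph.add_friend("dare", "mike")
graph.post_update("mike", "updated his space")
print(graph.news_feed("dare"))  # ['mike: updated his space']
```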

To recap, social networking sites like MySpace and Facebook are bigger than blogging sites because they enable people to connect, communicate and share with each other in richer and easier ways than blogging does.


 

Categories: Social Software

Recently the mainstream media has been running profiles on people who have built businesses worth hundreds of millions of dollars by buying lots of Internet domain names then filling them with Google ads. Last week, CNN Money ran an article entitled The man who owns the Internet which contained the following excerpt

When Ham wants a domain, he leans over and quietly instructs an associate to bid on his behalf. He likes wedding names, so his guy lifts the white paddle and snags Weddingcatering.com for $10,000. Greeting.com is not nearly as good as the plural Greetings.com, but Ham grabs it anyway, for $350,000. Ham is a devout Christian, and he spends $31,000 to add Christianrock.com to his collection, which already includes God.com and Satan.com. When it's all over, Ham strolls to the table near the exit and writes a check for $650,000. It's a cheap afternoon.
...
Trained as a family doctor, he put off medicine after discovering the riches of the Web. Since 2000 he has quietly cobbled together a portfolio of some 300,000 domains that, combined with several other ventures, generate an estimated $70 million a year in revenue. (Like all his financial details, Ham would neither confirm nor deny this figure.)
...
And what few people know is that he's also the man behind the domain world's latest scheme: profiting from traffic generated by the millions of people who mistakenly type ".cm" instead of ".com" at the end of a domain name. Try it with almost any name you can think of -- Beer.cm, Newyorktimes.cm, even Anyname.cm -- and you'll land on a page called Agoga.com, a site filled with ads served up by Yahoo

The New York Times has a profile on another multimillion dollar company in the same business in today's article entitled Millions of Addresses and Thousands of Sites, All Leading to One which contains the following excerpts

What Internet business has raised $120 million in financing in the last year, owns 725,000 Web sites, and has as its chief executive the former head of Primedia and International Data Group? If you guessed NameMedia, a privately held owner and developer of Web sites based in Waltham, Mass., you take the prize.
...
“What we’ve wanted to do, quietly, is amass the largest real estate position on the Internet, which we feel we have,” Mr. Conlin said. Some of those properties, he said, are the equivalent of “oceanfront” sites, or high-value addresses like Photography.com or DailyHoroscope.com that NameMedia will populate with relevant editorial content. Those who type in any of NameMedia’s other 6,000 or so photography-related Internet addresses, like photographyproducts.com, will land on Photography.com.
...
So far the company’s strategy is paying off, Mr. Conlin said, with company revenue doubling last year, to $60 million.

Companies like this are bad for the Internet for several reasons. For one, they artificially reduce the pool of domain names, which has resulted in legitimate sites having to choose names that are either awful misspellings or sound like they were stolen from Star Wars. Secondly, a lot of these sites tend to clog up search results, especially when they have generic domain names and a couple thousand sites all linking or redirecting back to one domain. Finally, the fact that these companies are making so much money in a manner that is user-hostile and ethically questionable encourages the formation of more such businesses that prey on naive Internet users.

What I've found most shocking about this trend is that the big Web advertising companies like Google go out of their way to court these businesses. In fact, Google has a service called Google AdSense for Domains [with the tastefully chosen URL http://www.google.com/domainpark] which caters exclusively to these kinds of sites.

One of the things I've disliked about the rush towards advertising-based business models on the Web is that, if unchecked, it leads to user-hostile behavior in the quest to satisfy the bottom line. The recent flap over Google and Dell installing the equivalent of spyware on new PCs to show users ads when they make a typo while browsing the Web is an example of this negative trend. Now it turns out that Google is in bed with domain name squatters. These are all examples of Google's Strategy Tax: the fact that they make their money from ads compromises their integrity whenever there is a conflict between doing what's best for users and doing what's best for advertisers.

Do no evil. It's now Search, Ads and Apps