As I read about the U.K. partially nationalizing major banks and the U.S. government's plan to do the same, I wonder how the financial system could have become so broken that these steps are even necessary. The more I read, the more it seems clear that our so-called "free" markets had some weaknesses built into them that didn't take human nature into account. Let's start with the following quote from the MSNBC article about the U.K. government investing in major banks:

As a condition of the deal, the government has required the banks to pay no bonuses to board members at the banks this year.

British Treasury chief Alistair Darling, speaking with Brown Monday, said it would be "nonsense" for board members to be taking their bonuses. The government also insisted that the bulk of future bonuses be paid in shares to ensure that bonuses encourage management to take a more long-term approach to profit making.

The above statement makes it sound like the board members of the various banks were actually on track to earn their bonuses, even though the media portrays them as guilty of gross incompetence or some degree of negligence given the current state of the financial markets. If that is the case, how come the vast majority of the largest banks in the world seem to have the same problem of boards and CEOs reaping massive rewards while effectively running their companies into the ground? How come the "free" market didn't work efficiently to discover and penalize these companies before we got to this juncture?

One reason for this problem is outlined by Philip Greenspun in his post Time for corporate governance reform?, where he writes:

What would it take to restore investor confidence in the U.S. market?  How about governance reform?

Right now the shareholders of a public company are at the mercy of management.  Without an expensive proxy fight, the shareholders cannot nominate or vote for their own representatives on the Board of Directors.  The CEO nominates a slate of golfing buddies to serve on the Board, while he or she will in turn serve on their boards.  Lately it seems that the typical CEO’s golfing buddies have decided on very generous compensation for the CEO, often amounting to a substantial share of the company’s profits.  The golfing buddies have also decided that the public shareholders should be diluted by stock options granted to top executives and that the price on those options should be reset every time the company’s stock takes a dive (probably there is a lot of option price resetting going on right now!  Wouldn’t want your CEO to lose incentive).

Corporations are supposed to operate for the benefit of shareholders.  The only way that this can happen is if a majority of Directors are nominated by and selected by shareholders.  It may have been the case that social mores in the 1950s constrained CEO-nominated Boards from paying their friend $50 million per year, but those mores are apparently gone and the present structure in which management regulates itself serves only to facilitate large-scale looting by management.

For one, the incentive system for corporate leadership is currently broken. As Phil states, companies (aka the market) have made it hard for shareholders to affect decision making at the top of major corporations without expensive proxy fights, and thus the main [counterproductive] recourse they have is selling their shares. Even then, the cronyism between boards and executive management is such that the folks at the top still figure out how to get paid big bucks even when the stock has been taking a beating due to shareholder disaffection.

Further problems are caused by contagion, where people see one group being rewarded for particular behavior and want to join in the fun. Below is an excerpt from a Harvard Business School posting summarizing an interview with Warren Buffett, entitled Wisdom of Warren Buffett: On Innovators, Imitators, and Idiots:

At one point, his interviewer asked the question that is on all our minds: "Should wise people have known better?" Of course, they should have, Buffett replied, but there's a "natural progression" to how good new ideas go wrong. He called this progression the "three Is." First come the innovators, who see opportunities that others don't. Then come the imitators, who copy what the innovators have done. And then come the idiots, whose avarice undoes the very innovations they are trying to use to get rich.

The problem, in other words, isn't with innovation--it's with the idiocy that follows. So how do we as individuals (not to mention as companies and societies) continue to embrace the value-creating upside of creativity while guarding against the value-destroying downsides of imitation? The answer, it seems to me, is about values--about always being able to distinguish between that which is smart and that which is expedient. And that takes discipline. Can you distinguish between a genuine innovation and a mindless imitation? Are you prepared to walk away from ideas that promise to make money, even if they make no sense?

It's not easy--which is why so many of us fall prey to so many bad ideas. "People don't get smarter about things that get as basic as greed," Warren Buffett told his interviewer. "You can't stand to see your neighbor getting rich. You know you're smarter than he is, but he's doing all these [crazy] things, and he's getting rich...so pretty soon you start doing it."

As Warren Buffett points out, our financial markets and corporate governance structures do not have safeguards that prevent greed from taking over and destroying the system. The underlying assumption in a capitalist system is that greed is good if it is properly harnessed. The problem we have today is that people have moved faster than the rules and regulations meant to keep greed in check, and in some cases have successfully argued against those rules, only for the decisions to come back and bite us on the butt.


So what does this have to do with designing social applications and other software systems? Any system that involves human interaction has to take into account the variations in human behavior instead of focusing only on the ideal user. This doesn't just mean malicious users and spammers with negative intentions towards the system. It also includes regular users who will unintentionally misuse or outwit the system in ways the designers may not have expected.

A great example of this is Greg Linden's post on the Netflix Prize at KDD 2008, where he writes:

Gavin Potter, the famous guy in a garage, had a short paper in the workshop, "Putting the collaborator back into collaborative filtering" (PDF). This paper has a fascinating discussion of how not assuming rationality and consistency when people rate movies and instead looking for patterns in people's biases can yield remarkable gains in accuracy. Some excerpts:

When [rating movies] ... a user is being asked to perform two separate tasks.

First, they are being asked to estimate their preferences for a particular item. Second, they are being asked to translate that preference into a score.

There is a significant issue ... that the scoring system, therefore, only produces an indirect estimate of the true preference of the user .... Different users are translating their preferences into scores using different scoring functions.

[For example, people] use the rating system in different ways -- some reserving the highest score only for films that they regard as truly exceptional, others using the score for films they simply enjoy .... Some users [have] only small differences in preferences of the films they have rated, and others [have] large differences .... Incorporation of a scoring function calibrated for an individual user can lead to an improvement in results.

[Another] powerful [model] we found was to include the impact of the date of the rating. It seems intuitively plausible that a user would allocate different scores depending on the mood they were in on the date of the rating.

Gavin has done quite well in the Netflix Prize; at the time of writing, he was in eighth place with an impressive score of .8684.

Gavin's paper is a light and easy read. Definitely worthwhile. Gavin's work forces us to challenge our common assumption that people are objective when providing ratings, instead suggesting that it is quite important to detect biases and moods when people rate on a 1..5 scale.

There were two key insights in Gavin Potter's paper related to how people interact with a rating system on a 1 to 5 scale. The first is that some people have a coarser-grained scoring methodology than others (e.g. John only rates movies as 1 for "waste of time", 3 for "satisfactory", and 5 for "would watch again", while Jane's movie ratings range from 2.5 stars to 5 stars). The second insight is that you can detect and correct for when a user is having a crappy day versus a good day by checking whether they rated a lot of movies on a particular day and whether the average rating is at an extreme (e.g. Jane rates ten movies on Saturday and gives them an average score under 3 stars).

The fact that users will treat your 1 to 5 rating scale as a 2.5 to 5 rating scale, or may rate a ton of movies poorly because they had a bad day at the office, is something a recommendation system designer should keep in mind if they don't want to give consistently poor results to certain users. This is human nature in effect.
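To make this concrete, here is a minimal sketch of what correcting for these two biases might look like. To be clear, this is not Gavin Potter's actual model; the normalize_ratings function, the toy data, and the specific adjustments (per-user z-scores plus a per-day offset) are just my own illustration of the general idea:

from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical raw ratings: (user, movie, score, date) tuples on a 1-5 scale.
ratings = [
    ("john", "heat",  5,   "2008-10-11"),
    ("john", "up",    3,   "2008-10-11"),
    ("jane", "heat",  3.0, "2008-10-11"),
    ("jane", "up",    2.5, "2008-10-11"),
    ("jane", "se7en", 4.5, "2008-10-12"),
]

def normalize_ratings(ratings):
    """Convert raw scores into per-user z-scores, then subtract a
    per-(user, day) offset so a bad-mood rating binge doesn't drag
    every score from that session down."""
    by_user = defaultdict(list)
    for user, _, score, _ in ratings:
        by_user[user].append(score)

    user_mean = {u: mean(s) for u, s in by_user.items()}
    user_std = {u: (pstdev(s) or 1.0) for u, s in by_user.items()}

    # z-score relative to the user's own scale, so "John's 3" and
    # "Jane's 3" are no longer treated as the same signal
    zscored = [
        (user, movie, (score - user_mean[user]) / user_std[user], day)
        for user, movie, score, day in ratings
    ]

    # the average residual for each (user, day) approximates that day's mood
    by_day = defaultdict(list)
    for user, _, z, day in zscored:
        by_day[(user, day)].append(z)
    day_offset = {k: mean(v) for k, v in by_day.items()}

    return [
        (user, movie, z - day_offset[(user, day)])
        for user, movie, z, day in zscored
    ]

for user, movie, adjusted in normalize_ratings(ratings):
    print(f"{user:5s} {movie:6s} {adjusted:+.2f}")

The per-user normalization maps John's 1/3/5 habits and Jane's 2.5-to-5 range onto comparable units, while the per-day offset dampens the effect of a batch of ratings entered in a particularly good or bad mood.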

Another great example of how human nature foils our expectations of how users should behave is the following excerpt from the Engineering Windows 7 blog post about User Account Control:

One extra click to do normal things like open the device manager, install software, or turn off your firewall is sometimes confusing and frustrating for our users. Here is a representative sample of the feedback we’ve received from the Windows Feedback Panel:

  • “I do not like to be continuously asked if I want to do what I just told the computer to do.”
  • “I feel like I am asked by Vista to approve every little thing that I do on my PC and I find it very aggravating.”
  • “The constant asking for input to make any changes is annoying. But it is good that it makes kids ask me for password for stuff they are trying to change.”
  • “Please work on simplifying the User Account control.....highly perplexing and bothersome at times”

We understand adding an extra click can be annoying, especially for users who are highly knowledgeable about what is happening with their system (or for people just trying to get work done). However, for most users, the potential benefit is that UAC forces malware or poorly written software to show itself and get your approval before it can potentially harm the system.

Does this make the system more secure? If every user of Windows were an expert that understands the cause/effect of all operations, the UAC prompt would make perfect sense and nothing malicious would slip through. The reality is that some people don’t read the prompts, and thus gain no benefit from them (and are just annoyed). In Vista, some power users have chosen to disable UAC – a setting that is admittedly hard to find. We don’t recommend you do this, but we understand you find value in the ability to turn UAC off. For the rest of you who try to figure out what is going on by reading the UAC prompt, there is the potential for a definite security benefit if you take the time to analyze each prompt and decide if it’s something you want to happen. However, we haven’t made things easy on you - the dialogs in Vista aren’t easy to decipher and are often not memorable. In one lab study we conducted, only 13% of participants could provide specific details about why they were seeing a UAC dialog in Vista. Some didn’t remember they had seen a dialog at all when asked about it.

How do you design a dialog that warns users about the potential risk of an action they are about to take, when they are so intent on clicking OK and getting the job done that afterwards they don't even remember seeing a warning?

There are a lot more examples out there, but the fundamental message is the same: if you are designing a system that is going to be used by humans, then you should account for the various ways people will try to outwit the system simply because they can't help themselves.

Now Playing: Kanye West - Love Lockdown