It isn't hard to find criticisms of Microsoft's employee appraisal system. Whether it's almost decade-old complaints such as Mini-Microsoft’s Microsoft Stack Ranking is not Good Management or more recent forays into blaming it for the company's "decline", such as Kurt Eichenwald's 2012 Vanity Fair opus Microsoft’s Lost Decade or the follow-up The Poisonous Employee-Ranking System That Helps Explain Microsoft’s Decline from Slate, there are many who would point to the practice of ranking employees on a vitality curve as the root cause of any problems facing Microsoft today. Kurt Eichenwald's article makes that argument persuasively in the excerpt below.

Every current and former Microsoft employee I interviewed—every one—cited stack ranking as the most destructive process inside of Microsoft, something that drove out untold numbers of employees. The system—also referred to as “the performance model,” “the bell curve,” or just “the employee review”—has, with certain variations over the years, worked like this: every unit was forced to declare a certain percentage of employees as top performers, then good performers, then average, then below average, then poor. …

For that reason, executives said, a lot of Microsoft superstars did everything they could to avoid working alongside other top-notch developers, out of fear that they would be hurt in the rankings. And the reviews had real-world consequences: those at the top received bonuses and promotions; those at the bottom usually received no cash or were shown the door. …

“The behavior this engenders, people do everything they can to stay out of the bottom bucket,” one Microsoft engineer said. “People responsible for features will openly sabotage other people’s efforts. One of the most valuable things I learned was to give the appearance of being courteous while withholding just enough information from colleagues to ensure they didn’t get ahead of me on the rankings.” Worse, because the reviews came every six months, employees and their supervisors—who were also ranked—focused on their short-term performance, rather than on longer efforts to innovate.

This reads as a very damning indictment of the Microsoft performance appraisal system. Although I've now been at Microsoft for 12 years and a manager for the last three of them, the purpose of this blog post isn't to defend or expand on the details of performance reviews at Microsoft. The purpose of this blog post is to point out that one often needs multiple data points before making sweeping generalizations about something as complex as the success or failure of technology companies.

 

The Four Horsemen: Facebook, Amazon, Google and Apple

A few years ago, Eric Schmidt described the "gang of four" companies driving innovation and growth in tech as Facebook, Amazon, Google and Apple, the implication being that these were the four leading companies in the tech industry. For the purposes of this blog post, I’ll take that implication at face value and consider these four companies as leading examples of what other companies in the technology industry should aim to emulate. In fact, I’ll go one step further and reference Mobile is eating the world, autumn 2013 edition by Benedict Evans, where he cites these four companies as “setting the agenda” whereas when Microsoft is mentioned, it is only to speak of its “growing irrelevance”.

Now that we’ve established that these four companies are worthy of emulation, how exactly do these companies evaluate employee performance anyway?

Unfortunately, I could not find any information about how performance reviews work at Apple, either off the record from friends or on the Internet. On the other hand, enough has been written about the other three companies that we can still draw some conclusions about how performance appraisal works at leading technology companies.

Let’s start with Facebook. A good overview of Facebook’s performance review system can be found in the answers to the Quora question What does Facebook's performance review process look like? Below is the answer from Molly Graham, who used to work in Facebook HR.

Then there is a (roughly) two week period of calibration where managers meet to look at the assessments of everyone on their team and ensure that people are rated correctly relative to their peers. Facebook has seven performance assessments as well as a guideline for what % of employees should be at each level, however it is explicitly not a forced curve, particularly for small teams. The curve exists to ensure that extraordinary performance is rewarded (I believe the distribution is such that only 2% or less of employees are given the highest rating every cycle) and that if hard conversations need to happen, they happen.


Calibration happens at the team level and at the senior management level (Mark, Sheryl, and all of their direct reports look at the numbers for the whole company, lists of the highest performers, lists of the lowest performers, etc). Performance Assessments are final and they are used to determine compensation like raises, bonuses, and additional equity grants. Facebook gives out raises and additional equity once a year but they do promotions and bonuses twice a year. Compensation at Facebook is almost entirely formulaic with multipliers (based on the Performance Assessment) for bonuses, raises, and additional equity grants.

Hmmm, so it looks like Facebook uses a curve. The argument is that the curve exists so that extraordinary performance can be rewarded, yet the system still imposes a quota on how many extraordinary performers can exist.
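To make the "guideline, not a forced curve" idea concrete, here is a minimal sketch of what a calibration check might look like. This is my own illustration, not Facebook's actual process; the guideline percentages below are hypothetical, except for the roughly 2% cap on the top rating mentioned in the answer above.

# A minimal sketch (my own illustration) of a "guideline, not forced" curve
# check during calibration: managers see how a team's ratings compare to the
# recommended distribution, but nothing is automatically reassigned.
from collections import Counter

# Hypothetical guideline: fraction of employees expected at each rating (1-7).
# Only the ~2% figure for the top rating comes from the quoted answer.
guideline = {7: 0.02, 6: 0.08, 5: 0.25, 4: 0.40, 3: 0.15, 2: 0.07, 1: 0.03}

def calibration_report(ratings):
    """Compare a team's actual rating distribution to the guideline."""
    counts = Counter(ratings)
    total = len(ratings)
    for level in sorted(guideline, reverse=True):
        actual = counts.get(level, 0) / total
        flag = "  <-- above guideline" if actual > guideline[level] else ""
        print(f"rating {level}: {actual:5.1%} actual vs {guideline[level]:5.1%} guideline{flag}")

# A hypothetical 20-person team's ratings.
calibration_report([7, 6, 6, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 2, 1])

The point of the sketch is that the distribution is a flag for conversation during calibration rather than a hard cut-off, which is exactly the distinction Facebook's defenders draw.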

Let’s see what Amazon does in the area of employee appraisals next. For this topic, I’ll use an excerpt from a Business Insider article, which references a leaked document that has since been pulled and an article by Amazon chronicler Brad Stone. The Business Insider article is titled Amazon Has A Brutal System For Employees Trying To Get Promoted and is excerpted below.

In the second meeting, which takes place in September or October, the leaders talk some more about who's getting a promotion, and talk about who is doing well and who is doing poorly. Amazon's managers group employees into three tiers: The top 20%, who are groomed for promotions, the next 70% who are kept happy, and the bottom 10%, who are either let go, or told to get it together.

This system, which was created by Jeff Bezos, is supposed to cut down on politics and in-fighting. Unfortunately, Stone says it has the opposite effect. "Ambitious employees tend to spend months having lunch and coffee with their boss’s peers to ensure a positive outcome once the topic of their proposed promotion is raised in [the meetings]," says Stone.

Stone also notes that promotions are very limited at Amazon, so if you fight for your employee to get a promotion, it means someone else's employee gets snubbed. And anyone in the room can nuke someone else's promotion.

OK, so Amazon also has a curve, and it is more explicit that the bottom 10% are targeted for negative messaging. But we are also being introduced to a new concept: a peer-review-based system for promotion, which in theory exists to reduce cronyism (e.g. working for a boss who is a friend and then promotes you for simply having a pulse) but in practice turns into a politics-driven affair, since everyone needs to like you for you to get ahead at the company. Good luck rocking the boat in such an environment.

So far we’re not really seeing many alternative ideas for tech companies that decide Microsoft’s employee appraisal system is one they don’t want to emulate. Let’s see what we can learn from how performance reviews are done at Google to turn the tide. For this I’ll look at two perspectives on the Google performance appraisal system. First, here is an excerpt from a Glassdoor review by a Google employee who loves everything about working at the company, describing the performance review system.

Promotion and work performances is entirely reliant on peer reviews. In other words, to get ahead at Google and to get a positive performance review, you must get positive reviews from your fellow co-workers. Your manager might love you, but if your co-workers don't like you, you have some work to do. Managers are also required to seek peer review from those they manage. (I have never seen this before in my career.) Senior level employees from other fields are also encouraged to seek peer reviews from people in other departments. For example, engineers need reviews from people other than engineers in order to advance. For this reason, a culture of cooperation is endemic at Google. This is great because the percentage of "cowboys" that seems common at other high tech companies is quite low at Google. It also fosters an awareness of the type of contribution made by people outside your department, since everyone reviews people in other fields, and therefore must learn a bit about what others do outside their sphere.

Peer reviews sound like a great idea. Now we have a performance appraisal technique we can emulate that is different from applying recommended or forced curves as Facebook and Amazon do. Before embracing this wholeheartedly, let’s balance our perspective with excerpts from Quora answers to What are the major deficiencies of the performance review process at Google?

Google's original intention in designing the byzantine monstrosity known as "Perf" was noble: to provide multiple avenues toward success. Someone who got mediocre reviews from his manager but excellent peer reviews could move up, or at least laterally. (This prevented the scenario where a manager uses mediocre or even negative reviews in order to prevent transfers, a known problem at Google.) It was an "OR-gate". If you had good managerial reviews or good peer reviews or objectively demonstrable accomplishments, you'd be in good standing and move up.

Eventually, it became an AND-gate. To get a promotion or even a transfer, you had to have managerial blessing and good peer reviews and high visibility and the willingness (as Piaw Na alluded) to spend considerable amounts of time and energy marketing yourself. So it became a morale-killing, time-consuming "No Machine" that people spent a considerable amount of time figuring out how to game. The typical corporate manager-as-SPOF dynamic that Perf was invented to extinguish was strengthened by the "objective" soothsaying they call "calibration scores". – Anonymous

The big one is that engineers have to apply for a promotion and put together their own promotion packet. There's no human being who can do that and not end the process thinking, "Oh boy, I did so much work. I really deserve a promotion." Since the process doesn't promote everyone, that creates a number of disgruntled employees. Even if these employees were to eventually get promoted later, they tend to think, "I should have gotten this N quarters/years ago," not "I'm so glad I got this promotion." The net result is that very few people are pleasantly surprised when they get promoted, while a lot of people get disappointed. – Piaw Na

Although there isn’t forced ranking in place, it does look like Google one-upped Amazon in making it difficult to climb its corporate ladder by basing the promotion process on pleasing everyone.

 

In the Land of the Blind, the One-Eyed Man is King

To summarize, so far it looks like Amazon and Facebook have performance review structures fairly similar to Microsoft’s much-lambasted system, while Google seems to have a process that trades one set of problems for a different set. As an honorable mention, I’d like to point to the QPR system recently put in place at Yahoo by Marissa Mayer, which is also a vitality-curve-based system.

This raises the question: if forced ranking and other similarly disheartening employee appraisal processes are commonplace in the industry, why do tech blogs make it seem the practice is limited to Microsoft and then blame it for the challenges the company has faced in recent years? From what I can tell, the reason is twofold. The first is that Microsoft employees stick around at the company for far longer than their peers at other companies. From the Geekwire article Amazon, Google employees ranked as ‘least loyal’, which looks at data from PayScale, we learn

Amazon.com tied for second for the least loyal employees with a median tenure of one year, while Google tied for fourth with just 1.1 years of tenure on average. Apple, meanwhile, tied for 36th at two years…Microsoft, however, was all the way down the list tied for 259th with an average tenure of four years.

Most people at companies like Amazon, Facebook and Google either just got there or left before they had to deal with the frustration of being disappointed by the performance appraisal system over multiple cycles. On the other hand, the average Microsoft employee has been at the company for far longer and, by the time they do leave, has had multiple brushes with the performance appraisal system. Just from a raw-numbers perspective, given average tenure and headcount, there are a lot more people you can find who would complain about the performance review system at Microsoft than at, say, Amazon, which has a very similar system in place.
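A quick back-of-the-envelope sketch (my own illustration, not from any of the articles above) shows why tenure matters here. It assumes the semi-annual review cadence Eichenwald describes for Microsoft applies everywhere, which may not match the other companies, and uses the median tenure figures from the PayScale data cited above.

# Back-of-the-envelope sketch: how many review cycles does the median-tenure
# employee sit through? Tenure figures are from the PayScale data quoted above;
# the two-reviews-per-year cadence is an assumption borrowed from the
# description of Microsoft's process and may differ at the other companies.

REVIEWS_PER_YEAR = 2  # assumed semi-annual review cadence

median_tenure_years = {
    "Amazon": 1.0,
    "Google": 1.1,
    "Apple": 2.0,
    "Microsoft": 4.0,
}

for company, tenure in median_tenure_years.items():
    cycles = int(tenure * REVIEWS_PER_YEAR)
    print(f"{company}: ~{cycles} review cycles for the median-tenure employee")

Eight or so brushes with the system, versus two, goes a long way toward explaining where the loudest complaints come from.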

The second reason I believe more people are willing to talk to the press about the performance review system at Microsoft than at, say, Google or Amazon is that airing those complaints about the latter companies doesn’t fit the narrative. When you see a chart like the one below, it is easy to look for simple answers to explain the differences in the stock market’s belief in the success of Microsoft versus the “four horsemen”.

On the other hand, it is also hard for employees to complain when the company they work for is winning in new markets and is being praised by the industry press. Glassdoor is full of complaints about poor work/life balance at Apple, but no one is going to write a damning exposé about the company’s employee morale problems as long as the company doesn’t slip in the marketplace. However, once a company starts to falter in the marketplace, everything it does is bad and is the cause of its demise according to the pundits. I’ve been amused by the number of articles blaming BlackBerry’s dual-CEO model for the company’s failures, even though that is effectively the Facebook model if you ask anyone who works there, and one could argue that at one point Google had a triple-CEO model, with Eric, Larry & Sergey all running the company in their different ways.

 

Everything Sucks

The bottom line is that performance appraisal systems at large companies always suck, for the set of reasons covered extremely well by Steven Sinofsky in his blog post Realities of Performance Appraisal. He does a good job of pointing out some of the realities of business and human nature that guarantee these processes will always come across as soul-crushingly awful when applied at a large enough scale, including

  • Performance systems conflate performance and compensation with organizational budgets. No matter how you look at it, one person cannot be evaluated and paid in isolation of budgets. The company as a whole has a budget for how much to pay people (salary, bonus, stock, etc.) No matter what an individual’s compensation is part of a system that ultimately has a budget. The vast majority of mechanical or quantitative effort in the system is not about one person’s performance but about determining how to pay everyone within the budget. While it is desirable to distinguish between professional development and compensation, that will almost certainly get lost once a person sees their compensation or once a manager has to assign a rating. Any suggestion as to how to be more fair, allow for more flexibility, provide more absolute ratings, or otherwise separate performance from compensation must still come up with a way to stick to a budget. The presence of a budget drives the existence of a system. There is always a budget and don’t be fooled by “found money” as that’s just a budget trick.

This is the fundamental conceit of performance appraisal systems. For large companies they are primarily about answering the question "how do we distribute our promotion and bonus budget?" by drawing a fuzzy line between employees' work activities and how much money the company actually has to spend (e.g. the policy that only 2% of Facebook employees can receive the top rating is a function of budgets, not a natural law about the distribution of extraordinary employees at Facebook or any other company in the world). Companies can’t just say that everyone who does an excellent job gets a $1,000 bonus, because they may not have $1,000 per employee in the budget.
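As a minimal sketch of that point (my own illustration, not Sinofsky's formula or any company's real numbers), here is how a multiplier-based bonus scheme ends up constrained by the budget: individual ratings determine relative payouts, but everyone's bonus gets scaled so the total never exceeds the pool.

# Minimal sketch of budget-constrained, multiplier-based bonuses.
# All ratings, multipliers, salaries and the budget below are hypothetical.

bonus_budget = 30_000   # total bonus pool the org has to spend

rating_multiplier = {   # hypothetical multipliers keyed by rating
    "exceeds": 2.0,
    "achieves": 1.0,
    "below": 0.0,
}

# (employee, base salary, rating) -- hypothetical data
employees = [
    ("alice", 120_000, "exceeds"),
    ("bob",   100_000, "achieves"),
    ("carol", 110_000, "achieves"),
    ("dave",   90_000, "below"),
]

target_bonus_rate = 0.10  # e.g. target bonus is 10% of salary

# Unconstrained payouts based purely on individual performance...
raw = {name: salary * target_bonus_rate * rating_multiplier[rating]
       for name, salary, rating in employees}

# ...then scaled down so the org stays within its budget.
scale = min(1.0, bonus_budget / sum(raw.values()))
payouts = {name: round(amount * scale, 2) for name, amount in raw.items()}

print(payouts)

The scaling factor is the part nobody sees on their review form: however the individual ratings come out, the pool is fixed, which is why some kind of curve, quota or calibration step shows up in every one of these systems.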

And with that you have the answer to the question in the title of this blog post.

Now Playing: The Game - Martians vs Goblins (featuring Tyler the Creator & Lil Wayne)


 

Tuesday, 12 November 2013 16:12:48 (GMT Standard Time, UTC+00:00)
So what's the TL;DR on this? Stack Ranking is a bad idea, or Microsoft has an image problem and Stack Rank is a part of that problem?
Tuesday, 12 November 2013 21:49:13 (GMT Standard Time, UTC+00:00)
He is saying that Stack Ranking sucks, but there isn't anything that clearly sucks less. Just read the last paragraph or two first. :)
Bruce Williams