December 13, 2005
@ 06:27 PM

Nicholas Carr has a post entitled Sun and the data center meltdown, which has an insightful excerpt on the kinds of problems that sites facing scalability issues have to deal with. He writes:

…a recent paper on electricity use by Google engineer Luiz André Barroso. Barroso's paper, which appeared in September in ACM Queue, is well worth reading. He shows that while Google has been able to achieve great leaps in server performance with each successive generation of technology it's rolled out, it has not been able to achieve similar gains in energy efficiency: "Performance per watt has remained roughly flat over time, even after significant efforts to design for power efficiency. In other words, every gain in performance has been accompanied by a proportional inflation in overall platform power consumption. The result of these trends is that power-related costs are an increasing fraction of the TCO [total cost of ownership]."

He then gets more specific:

A typical low-end x86-based server today can cost about $3,000 and consume an average of 200 watts (peak consumption can reach over 300 watts). Typical power delivery inefficiencies and cooling overheads will easily double that energy budget. If we assume a base energy cost of nine cents per kilowatt hour and a four-year server lifecycle, the energy costs of that system today would already be more than 40 percent of the hardware costs.
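Barroso's "more than 40 percent" figure checks out with some quick back-of-the-envelope math. Here's a sketch in Python that uses only the numbers quoted above:

```python
# Back-of-the-envelope check of Barroso's figure
# (a sketch; all inputs are the numbers quoted above).

server_cost = 3000.0    # dollars, typical low-end x86 server
avg_draw_watts = 200.0  # average consumption at the server
overhead_factor = 2.0   # power delivery losses + cooling roughly double it
rate_per_kwh = 0.09     # nine cents per kilowatt-hour
lifetime_years = 4      # four-year server lifecycle

hours = lifetime_years * 365 * 24
kwh = avg_draw_watts * overhead_factor * hours / 1000.0
energy_cost = kwh * rate_per_kwh

print(f"Lifetime energy: {kwh:,.0f} kWh -> ${energy_cost:,.0f}")
print(f"Energy as a share of hardware cost: {energy_cost / server_cost:.0%}")
# -> 14,016 kWh, roughly $1,261, or about 42% of the $3,000 hardware cost
```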

And it gets worse. If performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin ... For the most aggressive scenario (50 percent annual growth rates), power costs by the end of the decade would dwarf server prices (note that this doesn’t account for the likely increases in energy costs over the next few years). In this extreme situation, in which keeping machines powered up costs significantly more than the machines themselves, one could envision bizarre business models in which the power company will provide you with free hardware if you sign a long-term power contract.
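The "most aggressive scenario" is easy to project, too. The sketch below assumes the four-year energy cost computed above grows 50 percent annually while the $3,000 server price stays flat; that's my reading of Barroso's numbers, not his exact model:

```python
# Projecting the "most aggressive scenario" (a sketch; assumes the
# ~$1,261 four-year energy cost computed above grows 50% per year
# while the $3,000 server price stays flat).

server_cost = 3000.0
energy_cost = 1261.0  # four-year energy cost for a 2005-era server

for year in range(2005, 2011):
    print(f"{year}: energy ${energy_cost:,.0f} vs hardware ${server_cost:,.0f}")
    energy_cost *= 1.5  # 50 percent annual growth in power consumption

# By 2010 the projected energy bill (~$9,600) is more than
# triple the server price -- power costs dwarf hardware costs.
```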

The possibility of computer equipment power consumption spiraling out of control could have serious consequences for the overall affordability of computing, not to mention the overall health of the planet.

If energy consumption is a problem for Google, arguably the most sophisticated builder of data centers in the world today, imagine where that leaves your run-of-the-mill company. As businesses move to more densely packed computing infrastructures, incorporating racks of energy-gobbling blade servers, cooling and electricity become ever greater problems. In fact, many companies' existing data centers simply can't deliver the kind of power and cooling necessary to run modern systems. That's led to a shortage of quality data-center space, which in turn (I hear) is pushing up per-square-foot prices for hosting facilities dramatically. It costs so much to retrofit old space to the required specifications, or to build new space to those specs, that this shortage is not going to go away any time soon.

When you are providing a service that becomes popular enough to attract millions of users, your worries begin to multiply. It's no longer just about writing efficient code and designing optimal database schemas; the power consumption of your servers and the capacity of your data center become just as important.

Building online services requires more than the ability to sling code and hack databases. Lots of stuff gets written about the more trivial aspects of building an online service (e.g. switching to sexy new platforms like Ruby on Rails), but the real hard work is often unheralded and rarely discussed.


 

Tuesday, 13 December 2005 18:53:14 (GMT Standard Time, UTC+00:00)
I think the reason it is not discussed is that it really only affects the big boys. The long tail of web services probably runs on 1-6 servers.
Tuesday, 13 December 2005 21:01:19 (GMT Standard Time, UTC+00:00)
Perhaps the pendulum really is swinging back? About three years ago I got a tour of one of four buildings on the massive EDS campus in Plano, Texas. Each large building used to house mainframes for hosted services. The building I saw was nearly empty (although there were lots of machines!). The mainframe equipment was almost entirely gone. I understand Sabre has a similar underground complex in Oklahoma.

Although energy costs are a real concern, I've got to believe that facilities will shift to accommodate reality. There are still places where energy is relatively cheap. Large facilities don't buy energy at residential rates, and they use long-term contracts to weather fluctuations like we saw this year.

So let the facilities people sweat the details. I don't think their fundamentals are going to change radically. They'll be pumping bits as usual in the glow from that nearby reactor...
Tuesday, 13 December 2005 23:27:55 (GMT Standard Time, UTC+00:00)
This really *only* applies to the Pentium 4 series, not all x86 chips. The P4 has an utterly horrific bang-per-watt ratio compared to the Athlon64 and even Intel's own Pentium M chips. It's easily 1.5x-2.5x worse.

http://www.tomshardware.com/2005/07/13/the_amd_and_intel_energy_crisis/

"From a performance per Watt point of view, Pentium-based computers perform far worse under a heavy load due to the higher clock speed level. A maximum of 166 W under 3DMark 05 (Athlon 64 4000+ with nForce4 Ultra) is much more acceptable than 255 W (Pentium 4 660 with 925XE)."

A Pentium M of equivalent perf (4000+) is an even better bang-per-watt proposition than the A64 -- it would consume maybe 100 W, at most, for the entire PC. It's a HUGE difference.
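A quick sanity check of those ratios (a sketch; system wattages are from the Tom's Hardware test above, the ~100 W Pentium M figure is the estimate above, and all three systems are assumed to deliver comparable performance):

```python
# Rough perf-per-watt comparison from the figures in this thread
# (a sketch; whole-system power under 3DMark05 load).

systems = {
    "Pentium 4 660":    255.0,  # watts, from the Tom's Hardware test
    "Athlon 64 4000+":  166.0,  # watts, from the Tom's Hardware test
    "Pentium M (est.)": 100.0,  # watts, estimated above
}

baseline = systems["Pentium M (est.)"]
for name, watts in systems.items():
    print(f"{name}: {watts:.0f} W, {watts / baseline:.2f}x the Pentium M draw")
# -> the P4 system draws ~2.6x the power for similar work,
#    consistent with the 1.5x-2.5x claim
```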

Lots more obsessive detail on this stuff at places like http://www.silentpcreview.com, because power = heat, and heat = noise.

Heat and power are big reasons why Intel has abandoned the P4 architecture in favor of the Pentium M arch for all future chips. Mindlessly ramping to higher clock speeds isn't all it's cracked up to be...