The interplay of power and heat has long been a limiting factor in the design of chips. Tens of millions of transistors slammed together throw off a lot of heat, and too much heat will destroy the circuits themselves. Reducing the voltage that drives a chip helps, but only to a point: at levels that are too low, noise intrudes, making it impossible to reliably distinguish 0 and 1 states. A lot of work therefore goes into the use of exotic materials and geometries, making chip design far more than just a problem of logical circuit engineering.
The same problems of power and heat appear to be emerging as limiting factors in the design of large-scale, web-centric systems. Scaling up or scaling out from a logical perspective involves adding more servers, but again only to a point, because tens of thousands of servers will eat prodigious amounts of power and in turn cast off considerable heat. Thus, it's no surprise that Google is building many of its new data centers near rivers, which promise cheap electricity and relatively free cooling. However, the cost of energy does not stay negligible: for small systems, one can largely ignore the power bill, but there comes a threshold beyond which power and heat at the macro level become part of the systems design problem. Adding a few thousand more servers might seem like the logical thing to do, but it may be physically and/or economically impractical.
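To make that threshold concrete, a rough back-of-envelope estimate is enough. The sketch below is illustrative only; the per-server wattage, cooling overhead, and electricity price are assumptions, not figures from any particular data center.

# Back-of-envelope annual electricity cost for a server fleet.
# All constants below are illustrative assumptions, not measured values.

WATTS_PER_SERVER = 300        # assumed average draw per server, in watts
COOLING_OVERHEAD = 1.7        # assumed multiplier for cooling and power distribution
PRICE_PER_KWH = 0.10          # assumed electricity price, in dollars per kWh

HOURS_PER_YEAR = 24 * 365

def annual_power_cost(servers: int) -> float:
    """Estimate the yearly electricity bill for a given number of servers."""
    kilowatts = servers * WATTS_PER_SERVER * COOLING_OVERHEAD / 1000.0
    return kilowatts * HOURS_PER_YEAR * PRICE_PER_KWH

for n in (10, 1_000, 50_000):
    print(f"{n:>6} servers: ~${annual_power_cost(n):,.0f} per year")

Under these assumptions, ten servers cost a few thousand dollars a year to power, which is noise; fifty thousand servers cost on the order of twenty million dollars a year, which is a line item that has to be designed for rather than discovered after the fact.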
Software developers have gotten away with being sloppy because compute cycles have been abundant and growing rapidly. However, the problems of large numbers start to intrude: just as power and heat become factors in the presence of tens of millions of transistors, power and heat become architectural considerations in the presence of teraflops of computation and petabytes of storage.
Quote of the day: If you can't stand the heat, get out of the kitchen.