Here is the future of computing according to DDJ: “Worries about runaway power consumption may replace concerns about speed on the next generation of CPUs”.
They really need a new editor. That thing was riddled with typos, missing words…
-Our processors are getting more powerful, but also more power-hungry.
-Current software isn’t efficient, mostly due to the use of high-level languages and compilers that don’t optimize well.
-Once we get compilers to optimize better, and programmers to write better software, we’ll realise we can do with less, and thus focus on other things besides raw computing power, such as lower power consumption.
..apparently. But I learned something about VLIW’s, so the article wasn’t a total loss.
Actually, the power-hungry trend is being broken by Intel and AMD. If you look at the latest generation of processors, like the Core 2 Duo for the desktop and Merom for the notebook, you’ll see they are more powerful AND consume the same or MUCH less power. Intel now sees the trend going towards chips that use less power over the next couple of generations.
Maybe the future of the CPU will be lower MHz, but not because of software inefficiency; rather, because of multi-core CPUs. The best tradeoff between speed and power consumption will be found, and then multiplied by a factor of 2 cores… 2, 4, 8… 1024 cores in a few years?
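The intuition behind “lower clock, more cores” can be sketched with the textbook dynamic-power model P ≈ C·V²·f. As a rough illustration (assuming, as a common simplification, that voltage scales linearly with frequency, so power grows roughly with f³; all numbers below are illustrative, not measured):

```python
# Rough sketch of the CMOS dynamic-power model P = C * V^2 * f.
# Assumes voltage scales linearly with clock frequency (a common
# simplification), so power scales roughly with f^3.
# The constants are illustrative placeholders, not real chip data.

def dynamic_power(freq_ghz, v_per_ghz=0.4, capacitance=1.0):
    """Relative dynamic power for one core at the given clock."""
    voltage = v_per_ghz * freq_ghz
    return capacitance * voltage**2 * freq_ghz

one_fast_core = dynamic_power(3.0)        # one core at 3 GHz
two_slow_cores = 2 * dynamic_power(1.5)   # two cores at 1.5 GHz

# Same aggregate clock cycles, but the single fast core burns
# roughly 4x the power under this model:
print(round(one_fast_core / two_slow_cores, 3))
```

Of course this only pays off for workloads that actually parallelize across the extra cores, which is exactly the catch with the “1024 cores” extrapolation.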
Bill Worley must be sad over that comment.
The VLIW component of Itanium originated at HP, as part of the “scientific workstation” program under Bill.
And yes, the rest of the article is that error-prone.
It, for example, completely missed the ECL/BiCMOS discontinuity in the watts/MIPS discussion, neglected to factor in that compilers have been complained about for producing inefficient code for 30 years, and completely ignored the economics of software development.
For any organization that has even a moderate server farm, monthly power bills are starting to get serious. This is why you see Google/Yahoo/MSFT putting up massive server farms in Oregon and Washington state – cheap power. A small change in power consumption per CPU can translate into millions per month for these firms.
Good point. And it seems like the CPU manufacturers are taking that into consideration as well. I mean, look at the latest chips from Intel. They are huge improvements across the board, and that goes for power consumption as well.
I was quite amazed to see dual-core 1.5 GHz ULV processors from Intel in a Panasonic laptop give 9 hrs of battery life! I mean, it is dual core!
I remember my dad telling me about when he lived on the Army base in Maryland where one of the first computers was. The lights on the entire base dimmed when they turned it on. Yes, processors today use more power than maybe 10 years ago, but the MIPS/watt ratio, I bet, has continued to climb. I don’t think we are getting less efficient.
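A back-of-the-envelope calculation supports this. Using commonly cited order-of-magnitude figures (ENIAC drew roughly 150 kW for about 5,000 additions per second; a 2006-era desktop CPU delivers tens of thousands of MIPS at a ~65 W TDP; both are rough assumptions, not measurements):

```python
# Order-of-magnitude MIPS/watt comparison between ENIAC and a
# 2006-era desktop CPU. All figures are rough, commonly cited
# approximations, chosen only to show the scale of the change.

eniac_mips = 5_000 / 1_000_000   # ~5,000 additions/sec = 0.005 MIPS
eniac_watts = 150_000            # ~150 kW

modern_mips = 20_000             # rough figure for a 2006 desktop CPU
modern_watts = 65                # typical desktop TDP

eniac_efficiency = eniac_mips / eniac_watts
modern_efficiency = modern_mips / modern_watts

# The ratio comes out around ten billion: efficiency has climbed
# enormously even as absolute power draw rose.
print(f"{modern_efficiency / eniac_efficiency:.1e}")
```

Even if every individual number here is off by 10x, the conclusion is the same: per unit of work, computing keeps getting dramatically cheaper in watts.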
With VMWare ESX or Xen, multiple operating systems can be run on a single server. So you can cut back drastically on the number of servers needed, and they can make use of the exceptional performance of these machines (until they reach some other bottleneck).
The manufacturers know this, and I don’t think they’ll be out of a job because of power consumption anytime soon. Just wish they’d get smarter with the excess heat produced — Stirling engines anyone?
Yeah, but this doesn’t help laptops or my electric bill.
Multiple cores and/or multiple processors are going to chew LESS power than a single faster CPU…
RIGHT… I’m sorry, but doesn’t that seem a HAIR nonsensical? We’re talking about the same thing as trading someone two nickels for a quarter.