“Over the past several years of computer hardware engineering, raw speed has been the primary goal of hardware manufacturers. This has traditionally come at the expense of power consumption, which has skyrocketed since the first days of the x86-compatible home PC. Just how much electricity do a computer and its related devices use? Are there disadvantages to turning everything off when you’re done? This article will give you an insight into computer power usage.”
This was a pretty cool article. My girlfriend is always razzing me, saying the PC chews up so much power when I leave it on throughout the day. Now I have some handy specs to prove that her night light takes up more wattage
😛
Funny to see an article like this when I calculated the cost for my own computer just last week.
I own a 1GHz VIA EPIA M10000 motherboard with 512MB RAM, a 3.5″ HDD, a DVD/CD-RW and a 17″ LCD monitor. Mind you, this is not a gaming machine, but it still only uses 40% CPU when playing a DVD fullscreen at 1280×1024 with TV-out. The computer draws around 65 watts at 50% CPU load with the DVD and HDD going strong, less when idle of course. The monitor is around 35 watts.
I calculated that if I use the computer 12 hours per day, 7 days a week at 50% CPU usage with the monitor turned on, the total cost per month is only 4 euro.
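A quick sketch of that arithmetic in Python (the 0.11 €/kWh tariff is my assumption; it is roughly the rate that reproduces the 4 euro figure):

```python
# Sanity check of the figures above. The 0.11 EUR/kWh tariff is an
# assumption; it is roughly the rate that reproduces the 4 euro result.
system_watts = 65          # box under load (DVD + HDD going)
monitor_watts = 35         # 17" LCD
hours_per_day = 12
days_per_month = 30
eur_per_kwh = 0.11         # assumed tariff

kwh = (system_watts + monitor_watts) * hours_per_day * days_per_month / 1000
print(f"{kwh:.0f} kWh/month -> {kwh * eur_per_kwh:.2f} EUR/month")
# 36 kWh/month -> 3.96 EUR/month
```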
Now it’s time to start measuring the other appliances scattered around. I think the real power thieves in our household are (in this order): fridge, water heater, washer/dryer, stove, radiators and lights. Everything runs on electricity here in Paris; no gas to lower the costs, unfortunately.
You could turn it on only when you need it, but that puts a lot of thermal stress on a machine.
Yeah, and freezing CD’s makes them sound better.
Umm… I don’t know about the freezing CDs thing, but changing the temperature of anything rapidly and repeatedly is bound to shorten its lifespan. Argue it all you want, but any mechanical engineer would shoot you down and probably explain it better than I can.
Turning a PC off certainly doesn’t count as changing the temperature “rapidly”. It’s not like you threw it in a freezer. Everything will just cool down to room temperature very gradually.
This is a very good article. I wonder if the author could compare power-saving CPUs such as mobile/VIA/Geode CPUs.
VIA CPUs are available in mini-ITX boards and Geode CPUs in AMD PICs.
I’m very much interested in comparing such CPUs since I have 4 computers in the house (all running AMD XP 1800+ CPUs) used for multimedia (listening to music), browsing and productivity (office stuff).
“This is a very good article. I wonder if the author could compare power-saving CPUs such as mobile/VIA/Geode CPUs.”
I’ve been working on it for a while. There is a huge performance difference between low-power CPUs, though. I’m having trouble getting some of them in for review, but if this article is a success I shouldn’t have any trouble getting more parts to test.
I have plans for an article showing how much power each computer component draws as well.
Since my kWh rate jumped from 10c to 13c I’ve been paying much more attention to my bills and my PC network’s consumption.
If your heating in winter is set by thermostat, then you’re likely trading some heating between your home heating system and your unintended heating appliances, especially PCs, CRTs, TVs and incandescent lights. So your monthly heating bill shifts a few dollars from oil/gas/wood etc. to electric. It’s probably moot in winter.
However, if you use A/C extensively in summer, it’s a double whammy, since your A/C has to work extra hard to remove that extra waste heat.
Before doing anything with your PCs, simply replacing multiple incandescent bulbs (~20% eff) with fluorescent substitutes (~90% eff) probably gives the quickest, easiest savings year round. It may be more significant than switching CRTs to LCDs, but your math will vary.
For CRTs, I use screen blankers instead of savers and use power standby more aggressively than before.
britbrian
The single biggest improvement I’d like to see in my GNU/Linux desktop is an easy to use “sleep” function.
The Apple Powerbook next to me right now goes to sleep when I close the lid, and wakes up when I open it. I almost never shut it off, and all my work stays right there on the screen between snoozes.
My desktop is GNU/Linux. I turn it on when I want to use it, and off when I’m not using it. One case fan is starting to make a rattling noise and needs to be replaced. Of course, the Powerbook fan shuts down when you put that machine to sleep.
All that’s required is for the distros to get up off their asses and write good ACPI and APM scripts. Linux has, for the most part, low-level support for sleeping and other goodies, but oftentimes you have to write ACPI sleep scripts yourself. With a good bit of testing and some targeted work in this area, at least for machines that support it, sleeping should be able to work out of the box.
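For what it’s worth, the low-level interface really is thin. Here is a minimal sketch of the kind of handler a distro would wire up to an acpid event rule (assuming a kernel with ACPI suspend-to-RAM enabled; real scripts also have to quiesce problem drivers first, which is where the per-machine pain lives):

```python
#!/usr/bin/env python3
# Hypothetical minimal suspend handler; acpid can be configured (via an
# /etc/acpi/events rule) to run a program like this when the sleep button
# or lid event fires. Writing "mem" to /sys/power/state asks the kernel
# to suspend to RAM.
import sys

def suspend_to_ram():
    try:
        with open("/sys/power/state", "w") as f:
            f.write("mem")
    except OSError as e:
        sys.stderr.write(f"suspend failed: {e}\n")
        sys.exit(1)

if __name__ == "__main__":
    suspend_to_ram()
```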
Suspend works OTB for some; I never had an issue with it on my laptop. But I agree that’s certainly not the case for the majority of users, judging from the posts I see on various Linux forums. Even if the fault lies with vendors not properly implementing ACPI, I do think it’s something the Linux community needs to find a solution for. I don’t think I’d be able to use Linux as my full-time system if I couldn’t properly suspend my notebook when not in use; I’d be shutting down and powering up several times a day.
Even aside from ACPI suspend, most desktop systems could work in a similar laptop mode with CPU scaling, monitor standby via DPMS, and powering down the hard drives. Most of those energy-saving features are supported on standard hardware, but not necessarily enabled by distros when installing on desktops (versus laptops). While not perfect, that alone would considerably reduce energy consumption.
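All three of those knobs are reachable from userland. A rough sketch, assuming the standard sysfs, DPMS and hdparm interfaces (needs root; the device name is illustrative):

```python
# Sketch of "laptop mode on a desktop": CPU scaling, monitor standby,
# and hard drive spin-down. Paths and device names are illustrative.
import subprocess

# 1. CPU frequency scaling: let the kernel clock down when idle.
with open("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "w") as f:
    f.write("ondemand")

# 2. DPMS: standby/suspend/off the monitor after 10/15/20 minutes.
subprocess.run(["xset", "dpms", "600", "900", "1200"], check=True)

# 3. Spin the disk down after 5 minutes idle (-S takes units of 5 s).
subprocess.run(["hdparm", "-S", "60", "/dev/sda"], check=True)
```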
“The single biggest improvement I’d like to see in my GNU/Linux desktop is an easy to use “sleep” function.
The Apple Powerbook next to me right now goes to sleep when I close the lid, and wakes up when I open it.”
You mean like this? http://www.davidcourtney.org/Linux-sleep_medium-%5BXvid].avi
Yes – just like this, but working reliably enough so people aren’t moved to record it for posterity when they actually manage to make it happen!
Touché … I can’t disagree with you there.
I was floored when I installed Linux onto my iBook and the power management worked so flawlessly. It wasn’t even slightly difficult. It “just worked” when I set up Ubuntu, with a completely default install.
The iBook is a pretty outstanding Linux laptop – shocked? So was I.
I have a Sony Vaio also … and I have fought endlessly with that laptop at every turn. *Nothing* works on that thing. Even if you run the OS it came with, MS Windows, it still sucks. Trying to run Linux on the Sony Vaio is a sick joke.
Actually, at first Mr. Courtney seemed to like showing off a lot.
Then, on second thought, it hit me that he is right: why bother calling names and building foolproof arguments to prove desktop Linux is ready, when you can just kill Windows trolls with a movie like this one?
Nice work, indeed. My hat’s off to him.
d°J°b
“Actually, at first Mr. Courtney seemed to like showing off a lot.”
Eh … that’s not it at all. I really like Unix-like operating systems. But they have their shortcomings and I’m the first to admit it. There’s no point in being an apologist for your platform of choice. I created the video because anyone can __say__ “Blah blah blah … it works for me, it’s easy … blah blah blah.” But like the saying goes, seeing is believing.
“Then, on second thought, it hit me that he is right: why bother calling names and building foolproof arguments to prove desktop Linux is ready, when you can just kill Windows trolls with a movie like this one?”
I should have added a full disclosure to my video post. My experience with power management with Linux on the iBook is the *exception to the rule*. Generally speaking, PM with Linux is a royal pain. If it works at all (which it usually doesn’t), it is difficult to set up, and once you have it set up, it’s even more difficult to maintain. A lot of hardware won’t recover from the sleep state. If you’re lucky, you can remove the module and re-enable it. If you’re unlucky, that hardware is unusable until you do a full system reboot so it can be properly initialized.
All that said, I do see improvements in PM with Linux all the time. It’s taking longer than any of us would like, but hopefully it will be a non-issue before 2006 is over.
Another important aspect to consider is what electricity production does to our home, the Earth. Whole mountain ranges are reduced to rubble and our rivers degraded by runoff from mine sites. If you care about the air you breathe and are aware of the limited resources we have inherited, please reduce your power consumption.
Just because you don’t see the damage in your daily life doesn’t mean it’s not happening.
Are you kidding? The environment is WAY overrated. Like anyone on this site ever gets outside anyway.
Sacrificing the Earth is a small price to pay for racking up all my Einstein@Home and Distributed.net points!
Hehe! As funny as it sounds, there’s a slight chance you are actually serious! If you ever get a chance to go into space and look at the Earth from far away, you’ll have a different perspective on just how finite and precious the Earth is.
We need to preserve the Earth for the future… the coming of either the robots, monkeys, or aliens.
As energy prices and CPU performance continue to rise, so will the market for low-power chips that are good enough for 99% of users in the world.
You can already see the trend with Sun T1 chips and AMD EE and HE Opterons for the data room. Aside from gamers and pros, most people will likely end up using Intel Viiv and Mac mini type computers on their desks. Hopefully all this will happen in time to catch the rising wave of billions of people from developing countries.
“Are you kidding? The environment is WAY overrated.”
Are you an American…!!!!!????
Seriously though, I think long term we need machines that scale their requirements based on what the user is doing at the time. Surfing the internet doesn’t (shouldn’t?) require my PC to be running at 3+GHz with all the hard drives blazing.
As we start to move away from measuring PC performance in terms of raw GHz perhaps this is something we’ll start to see…
How about throwing the consumption of a laptop in there…
A year or two ago I measured the power consumption of an old P2 400 MHz laptop that I was intending to use as a 24/7 P2P server. IIRC, after the initial spike caused by the hard disk spin-up it settled down to around 30 watts (with the 15″ TFT screen on as well). Very cheap to run 24/7: roughly 260 kWh a year, which is only about £26 even at 10p/kWh.
I don’t know what country you’re in, but in the U.K. I used something like this to measure power consumption –
http://www.maplin.co.uk/module.aspx?TabID=1&criteria=power%20me…
They are both cheap and very useful – well worth purchasing.
If AMD uses less power than Intel and even Apple, would this be a good marketing gimmick?
BUY AMD and save electricity, save the Earth, and stop global warming?
Well, I’m not really convinced that global warming is a problem. But I am convinced that CPUs waste an enormous amount of energy while doing absolutely nothing. And if you don’t much care about Mother Earth, then maybe you should look at it another way.
If there’s one thing anyone should be able to understand, it’s cooling. Gamers especially.
How about passive cooling? (i.e. no fans) While you are away from your desk, your computer should be able to enter a state where it uses virtually no power. The CPU fan should shut off and the CPU will be near room temperature.
Or while you are sitting there typing a message into a forum online, or typing an e-mail, or reading text in a web browser, there’s no reason for your CPU to be clocked at 3GHz+.
When you’re doing simple tasks, your CPU should be able to lower its clock speed so the fans can turn off, or at least spin down a whole lot. I guarantee you that you wouldn’t know the difference between 500MHz and 3GHz while you’re sitting there typing an e-mail message. And wouldn’t you just love to look at your CPU sensor and see that your system was sitting at 30 degrees C?
Even when you are watching a DVD or listening to music, a super fast system still has a lot of headroom. The CPU doesn’t need to run with the pedal to the metal.
When compiling source code or rendering images/video, *then* you need all the GHz you can get. There’s no such thing as too many megahertz when it comes to compiling. I run Gentoo Linux, so trust me on this one.
My AMD Sempron 2500+ @ 1750MHz is only running at 34°C right now at full throttle. That seems like a nice low temp to me for a machine under medium stress.
Although I must agree with you on the dynamic clock adjusting scenario. Personally, I would love to have a CPU that could do it without any perceived impact. ’Cause in my opinion, most users in this world could probably get by with a flat 1GHz machine and never know the difference. My friend bought a really great machine that could be considered a medium gaming rig (the sucker doesn’t know how to build computers… hahaha) and what does he do with it? Listens to music and chats… sheesh, anything over 500 megasplats and he is just wasting away the life of that hardware. Go figure; people rarely do what is best for them, nor do they exercise restraint when presented with a viable opportunity to demonstrate their vanity.
New CPUs can do exactly that. If you install and enable powernowd or cpufreqd on an Athlon 64 or newer, it will scale down below 1GHz when idle. The technology was initially designed for laptops, but Intel and AMD are both including it in recent high-consumption desktop chips now.
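You can watch it happen, too: the current clock is exposed through the standard cpufreq sysfs interface, reported in kHz. A quick sketch:

```python
# Poll the current CPU frequency once a second; on an idle Athlon 64
# with powernowd/cpufreqd running you should see it drop to ~1000 MHz.
import time

def cur_mhz(cpu=0):
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_cur_freq"
    with open(path) as f:
        return int(f.read()) / 1000  # sysfs reports kHz

while True:
    print(f"cpu0: {cur_mhz():5.0f} MHz")
    time.sleep(1)
```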
Forgot to mention: most motherboards these days can control one or two CPU fan headers according to system temperature, spinning the attached fans faster or slower. My system fan is at 1577RPM currently; it spins up to 2000RPM or occasionally more under heavy load.
This was a very interesting article, one that raises almost as many questions as it answers.
A couple of things I would like to see someone try to address in a fashion similar to this article:
1. What longevity impact does power and sleep cycling really have on modern hardware?
2. How much impact on overall power consumption do the power states of the individual system components have? For example, does having the OS power down the hard drives during periods of inactivity make a significant difference?
3. I assume the CPU isn’t the only potential differentiator one could make in a system for power consumption tuning. How large a difference in power consumption would different video and disk subsystems make (e.g. Nvidia vs. ATI)?
Great article though. Makes one think.
blixel — nice!
Now if only my desktop could do that (say, via a right-click context menu item on the desktop)…
I may try Ubuntu again (I’m back to using straight Debian right now) if that functionality comes OOTB.
BTW, regarding the Vaios: they’re a pain to repair too. If you can get the parts, Apples aren’t so bad. Dells aren’t so bad either.
Oh, also, you wrote:
> Well, I’m not really convinced that global warming is a problem.
It’s not a problem for the Earth — planets don’t care how unsuitable for human life they become.
Two things:
1) A G5 at 2.3GHz consumes at most 55 watts; an Opteron at around the same frequency will ask you for 89 watts. Those numbers are per processor, for the CPU only, not including the whole system’s power consumption. So how can he get 60 watt-hours for the Sun workstation? That does not make sense; his numbers seem completely wrong. And how does he get the min and max wattage? His numbers do not seem meaningful to me at all.
2) From the author:
“But compare it to the G4 PowerMac, which has one hard drive, a lot less RAM, and a more comparable video card. It still uses more than 50% more electricity than the Athlon 64 X2 (and slightly more than the power-hogging Pentium D) and is considerably weaker in terms of computing power. Minor variables aside, the difference is striking, especially considering the hardware costs involved.”
It’s stupid and meaningless to say that. He is comparing two systems with many years separating them. The G4 he is talking about is built on a 0.18 micron process; the Athlon X2 is built on a 90 nanometer process. Those are two damn different worlds; no comparison is possible between those two systems, and it just makes him appear foolish. Not only do the processors use different process technologies, which proportionally affects the wattage numbers, but the memory, the hard drive, etc. are years older than those of the X2 system. Everything in the G4 takes more current to work because it’s an old design. Even if the X2 has more memory, that memory uses smaller transistors and is therefore more power-efficient than the G4’s. Comparing the G4 with a similar system of the same period makes sense; comparing it with a brand-new system is the sign of someone who does not really know what he is talking about.
Sorry, this article is bad, done by an AMD-loving troll.
There’s a mirror of the article up here:
http://www.thejemreport.com/mambo/content/view/211/1/
In case the hardwareinreview.com server goes down.
-Jem
Which it has indeed done. Thx for the mirror.
There seems to be some problem with the link provided to the article (404 not found). Is there an alternate link for viewing this article?
Me too – have we slashdotted it?
Try the mirror:
http://www.thejemreport.com/mambo/content/view/211/1/
Mirror good.
Mirror not working either.
OK – if you really can’t get it any other way, I’ve put it up (text only, so you may need to adjust the tables) ’cos it’s Creative Commons (so I can) and it’s interesting and important (IMHO).
http://www.georgeoldham.co.uk/oddstuff/CompElecnU.rtf
One thing to bear in mind: it probably pays to spin down your disks every month or so, because otherwise the read head can stick in accumulated cruft and not restart. People have had this problem when moving office after their server has been running for a year or more.
Nice article!
Although not directly related to your article, I have an interesting question.
How much power does it take to produce a computer?
Let’s say I have a CRT monitor, for example. It uses twice as much electricity as a TFT. I’m a green boy and want to reduce power usage so that the Earth won’t be polluted as much. So I buy a new TFT.
But what I didn’t think of is how much power was needed to produce that TFT. Maybe TFTs need more power than CRTs to be produced? If that’s the case, maybe a CRT and a TFT use just as much power once you take the production power into account?
Also, laptops use much less power. But I can guess that producing such small parts takes much more power than producing bigger, normal parts.
I’d like to see an article about that as well. But who could get those numbers from the manufacturers…?
I know it can get very complicated if you are going to take everything into account. Like mining the materials, transportation, factory production, more transportation, etc.
I guess you save more energy for “the earth community” by just using your old computer as long as possible with fast, lightweight software. Of course you save more by turning it off, but we were still talking about computers.
“How much power does it take to produce a computer?
Let’s say I have a CRT monitor, for example. It uses twice as much electricity as a TFT. I’m a green boy and want to reduce power usage so that the Earth won’t be polluted as much.”
Don’t forget about the calorie consumption needed to dispose of the CRT. CRTs weigh a lot more than TFTs. The guy who disposes of the CRT may have to consume an extra cheeseburger to pay the physical energy cost of carrying away the CRT. So, don’t forget to take that into account.
I cannot read the link; it returns “403 forbidden, you don’t have permission…”
Maybe I was promoted from nuisance to “hit the road, Jack”.
d°L°b
This one?
http://www.georgeoldham.co.uk/CompElecnU.txt
or
http://www.georgeoldham.co.uk/CompElecnU.rtf
I meant the link provided by OSNews. But thank you for kindly providing a quick solution.
I suspect power consumption significantly follows the size of the PC casing. I notice the industry, especially the Taiwanese box producers, pays almost no attention to serving the small-form-factor market, and most SFF PCs are relatively overpriced, mostly due to low volumes.
Speaking as a CPU designer, one very good option is clockless circuit design, which draws power only when signals change. No argument over who has the fastest clock when there isn’t one. However, such CPUs are a rarity in the industry; one old asynchronous ARM design and some embedded designs use the technique for very low power (<1W). But clocked designs are much easier to design, and they can fake the advantages of asynchronous design by varying the clock from fmax down to perhaps a third of that. Once you’ve saved maybe 70% of the max CPU power level, it’s more prudent to look at other components. As power saving is pushed down, the return to full throttle takes longer; obviously it should be graduated.
Also, PCs that use 2.5″ HDs rather than 3.5″ HDs use far less power, and that’s why laptops use them. But 2.5″ drives are almost always smaller in capacity and a little slower, though that’s probably changing, and they cost about twice as much per gigabyte.
It wasn’t that long ago that the entire industry shifted from 5.25″ HDs to 3.5″; it should consider going to 2.5″ completely for much lower power and the possibility of moving to SFF cases.
I run BeOS right now off an old laptop 2.5″ HD, with a 3.5″ for capacity. The first is almost silent and nearly at room temperature, while the latter can get toasty and noisy.
As for CRT vs. LCD production costs: I suspect the LCD uses far less energy to build, again due to the amount of material used in production, and I bet disposal costs are way lower too (no lead in a TFT, except maybe solder). Also, flat-panel production technology is on a very fast improvement ramp, and we may see OLED and other displays use much less power still. CRTs are basically on their last legs, but for us EEs they do have far larger pixel resolution, so I’ll use them for a few more years.
transputer guy
Just being a nitpicker here, but you’d be surprised if you checked out how much I/O has to do with compiling, especially the kind involved in compiling software like you do in Gentoo.
Get more RAM and faster disk I/O instead. That way you can cut the time for building Xorg from 45 minutes to around 30, on an older system. A faster CPU would probably just generate heat. Unless you pony up for a complete system upgrade, that is.
Some really interesting information here.
The G4 thing I agree with: no one who might be considering a new P4 or Athlon 64 X2 is going to consider a Power Mac G4.
“1) A G5 at 2.3GHz consumes at most 55 watts; an Opteron at around the same frequency will ask you for 89 watts. Those numbers are per processor, for the CPU only, not including the whole system’s power consumption. So how can he get 60 watt-hours for the Sun workstation? That does not make sense; his numbers seem completely wrong. And how does he get the min and max wattage? His numbers do not seem meaningful to me at all.”
The Wh rating is likely measured over time during use (it would be nice if he said how he did it). It comes out below what the load wattage difference would suggest, but above what the idle difference would (if you compare the Wh differences to the idle and load wattage differences).
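If the Wh figures came from a metering run, the arithmetic is just power integrated over time. A sketch of the idea (the interval and readings are made up for illustration):

```python
# Energy (Wh) from periodic wattage samples: rectangle-rule integration.
# The readings and the one-minute interval here are purely illustrative.
readings_w = [112, 180, 175, 130, 118]  # hypothetical meter samples
interval_s = 60                          # one sample per minute

wh = sum(readings_w) * interval_s / 3600
minutes = len(readings_w) * interval_s / 60
print(f"{wh:.1f} Wh over {minutes:.0f} minutes")
# 11.9 Wh over 5 minutes
```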
Very simply, though, your assumption that it needs that much power is wrong. That isn’t all that’s wrong (I suspect the PSU), but it is what’s wrong with your thinking that the numbers are wrong and should match up with the max power draw specs. Also, his numbers match up very well with the rest of the world’s x86 power consumption figures (it’s the G5 numbers that look wrong, not the Opteron’s).
From the very beginning, the Opterons did not hit their peak rated power draw, and they still have not. In fact, both idle and load power consumption have been going down a little with each new core generation.
Just a few years ago, idle and load power were close. A T-Bird would use most of its power at idle (which was a lot, too…). That started going away with the Athlon XPs, and the Athlon 64s are even better. With CnQ (Cool’n’Quiet) enabled and working, an idle A64 should be using under 25W.
When the G5 came out, it had better performance per watt than anything else. It has not changed much, though, while AMD has been making a lot of headway (Intel, too, but not for desktops or servers, yet).
It will never use that 89W. It should be besting the dual G5. However, the G5 system is insane at 400+W. I would have figured the dual G5 would use maybe 20% more power, not 70%. OTOH, note that the Wh figures aren’t that far apart.
My guess on the Power Mac is the PSU (remember: he’s measuring actual AC power from the wall, not the DC power the CPUs are pulling). If Sun used a PSU that was >80% efficient under load (some newer ones are near 90%, actually), and Apple used a highly inefficient but beefy PSU, that could account for the G5’s high numbers. I would not be at all surprised if the PSU in the Power Mac has greater efficiency at idle and low loads than at high load.