Sun has been touting the efficiency of servers using its first-generation UltraSPARC T1 ‘Niagara’ processor, but it’s promising greater gains with the chip’s sequel. The first Niagara consumes about 70 watts running flat out. Sun now thinks Niagara 2 will consume between 70 and 80 watts, John Fowler, executive vice president of systems, said in a meeting with reporters at Sun offices here Tuesday. Although that power consumption is ‘just a teeny bit above Niagara 1’, Fowler said, the newer chip absorbs several functions that today require separate electronics and can also handle 64 simultaneous instruction sequences, called threads – twice that of Niagara 1.
I’m all for green. I try to keep my carbon footprint low.
But, economically speaking (what corps care about), how important is power efficiency in servers, in real terms?
I know that for larger server rooms, one can come up with impressive absolute numbers for power costs. But in relation to the total cost of maintaining those servers, does the electric bill really figure in?
Again, I’m all for efficiency. And power efficiency has been a successful marketing angle for CPU suppliers. But does it really make the kind of short-term sense that drives capitalism? Or is this more of a marketing game?
Probably not financially important, but remember the story about the NSA a while ago?
http://www.datacenterknowledge.com/archives/2006/Aug/06/nsa_maxes_o…
So there could be power grid problems. With computing needs continuing to climb, the strain on the grids only grows. How much the Niagara 2 servers will mean on a big scale is hard to say, since they may or may not win much market share, but working towards low-power systems is always a good idea, especially for massive undertakings like supercomputers.
>>Probably not financially important, but remember the story about the NSA a while ago? <<
Indeed, each of the cores in Niagara 2 has an “encryption engine” added to the FPU. This allows Niagara 2 to encrypt 2 x 10Gbit/s Ethernet with AES in real time. The article made a mistake saying 2 x 10Mbit/s. Now, with encryption plugins being talked about for ZFS, having basically free hardware encryption by default should be interesting.
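If those throughput figures are right, the arithmetic is easy to check. Here’s a quick back-of-the-envelope in Python; the even per-core split is my own assumption, based on the one-engine-per-core claim above:

    # Back-of-the-envelope: AES throughput needed to keep two
    # 10 Gbit/s links saturated (figures from the comment above).
    links = 2
    line_rate_bits = 10 * 10**9             # 10 Gbit/s per link
    total_bytes_per_s = links * line_rate_bits / 8

    print("Total AES throughput needed: %.1f GB/s" % (total_bytes_per_s / 1e9))

    # Assuming the load spreads evenly across Niagara 2's 8 cores
    # (one crypto engine per core), each engine has to sustain:
    cores = 8
    print("Per-core share: %.1f MB/s" % (total_bytes_per_s / cores / 1e6))

That works out to 2.5 GB/s total, or a bit over 300 MB/s per core, which gives a sense of what “free hardware encryption” has to deliver at line rate.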
>>I know that for larger server rooms, one can come up with impressive absolute numbers for power costs. But in relation to the total cost of maintaining those servers, does the electric bill really figure in? <<
Over the life of a server (assume at least 3 years), the cost of electricity for running and cooling it is very significant (it can give the initial cost of the server a run for its money). Also add in the cost of installing and running the air conditioning and UPSes (plus space).
The processor converts all that power into heat. Lower heat = greater server density = cheaper real estate.
Also, outside of the electricity and cooling bills, you need to factor in the extra cost of providing backup power for your mission-critical servers.
Power might not make much difference to Joe Home User with 1 or 2 machines in the house, but it does to most businesses with server rooms.
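To put rough numbers on that “run for its money” claim, here’s a small Python sketch. Every figure below (purchase price, draw, cooling overhead, electricity rate) is an illustrative assumption, not something from this thread:

    # Rough 3-year cost sketch for one server running 24/7. All
    # figures here are illustrative assumptions, not vendor numbers.
    purchase_eur = 3000        # assumed purchase price
    draw_kw = 0.4              # assumed average draw at the wall
    cooling_factor = 2.0       # assume ~1 W of cooling per 1 W of load
    rate = 0.12                # assumed commercial rate, EUR/kWh
    hours = 24 * 365 * 3       # three years, around the clock

    energy_kwh = draw_kw * cooling_factor * hours
    print("Energy over 3 years: %.0f kWh" % energy_kwh)
    print("Electricity + cooling: %.0f EUR vs. %d EUR purchase price"
          % (energy_kwh * rate, purchase_eur))

With those assumed numbers the electricity bill comes out around 2500 EUR against a 3000 EUR server, which is exactly the kind of parity being described.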
Speaking from experience as a DC owner, power efficiency is probably one of the most important things, in “real terms”. The more gear you can pack into an area, the further your space goes. The further your space goes, the less the client has to spend (they use less space) and the more money you can make as a DC (you put more people into less space).
Keep in mind, electricity is the #1 cost of a DC. The millions in upfront costs are NOTHING compared to the operational cost of electricity. This covers both the power needed to run the server and the power needed to keep it cool.
In summary, power efficiency is probably the most important thing for servers to work on in the near future. Vertical scaling of processors is hitting a ceiling, so now there is horizontal scaling. Efficiency is the name of the game with this kind of layout!
I did the math for my home PC:
Upfront cost: 1000 € for a 1400 MHz, 500 MB Athlon 3 years ago, plus 300 € for a monitor.
The whole system (including the monitor) consumes 160 W of electrical power.
I guess the machine runs 4-6 hours/day, as it is also used as a TV, and I think I will keep it for a total of 4 years.
Now comes the math: 0.16 kW * 6 h/d * 365 d/y * 4 y = 1401.6 kWh total energy consumption.
I pay 0.18 €/kWh for my electricity, so I end up paying about 252 € for energy over the lifespan of my computer. That’s roughly 20% of the total cost. So I would say it makes sense to take a close look at the power consumption of a computer, because the latest hot gaming PCs come with power supplies that can sustain 1000 W of throughput, and they don’t put such a power supply into a computer for the pure fun of it. If you have some crazy quad-core setup with 2 graphics cards, you can draw as much as 700 W, which would amount to about 1100 €. Even if that machine costs 2500 €, the energy costs are close to half the hardware cost.
So, even if energy consumption is not the main issue for private use, it makes sense to take a look at it.
If you are a company, lots of machines will run 24/7, so the energy cost for a machine like mine would not be 250 €, but about 1000 € at the same electricity price. It would still cost a lot even with the cheaper electricity rates companies can get.
For server rooms, every watt even counts double, because first you put it into the computer as heat, and then you have to shovel it back out with the air conditioning.
You need a bigger air conditioner (€€€) and more energy to run it (€€€) because you consumed more energy in the first place (€€€).
So I would say power efficiency is a major cost factor today, especially in server rooms.
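For anyone who wants to check or play with these numbers, the same math fits in a few lines of Python, using the figures from the posts above:

    # Reproduces the lifetime energy cost math from the comment above.
    def lifetime_cost(watts, hours_per_day, years, eur_per_kwh=0.18):
        kwh = watts / 1000.0 * hours_per_day * 365 * years
        return kwh, kwh * eur_per_kwh

    print(lifetime_cost(160, 6, 4))    # home PC: 1401.6 kWh, ~252 EUR
    print(lifetime_cost(700, 6, 4))    # gaming rig: 6132 kWh, ~1104 EUR
    print(lifetime_cost(160, 24, 4))   # same box 24/7: ~5606 kWh, ~1009 EUR

The 24/7 case is what gives the roughly 4x jump from 250 € to 1000 € mentioned above, before even counting the doubling for cooling.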
Nice analysis.
Interesting to note that in some places the Niagara-based servers get rebates from the power companies:
http://www.sun.com/emrkt/energy-rebate/index.jsp
That apparently amounts to a $700-$1000 savings per year per server (!).
Dmitri
There are two sides to power efficiency:
1) How much it takes to power the machine
2) How much it takes to cool said machine
It turns out that sometimes, after just 2 years, the total electricity cost of operating a machine comes close to exceeding its initial purchase cost. So you bet that power efficiency is at the top of the list of purchasing considerations.
If you have to spend more than the competition in electricity for the same computing power, you are losing money. And thus you have to find cheaper alternatives.
However, just because a machine uses fewer watts doesn’t mean that it is more power efficient. If a machine draws half the watts of a competitor’s but performs only a quarter as fast, it has to run four times as long to do the same work, so you end up paying twice as much in electricity to get the same job done.
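That’s just arithmetic: energy per job is power times time per job. A tiny Python sketch with made-up numbers shows the half-the-watts, quarter-the-speed case:

    # Watts alone don't tell you efficiency; energy per unit of work does.
    # Machine B draws half the power of machine A but runs a quarter as
    # fast, so each job takes 4x as long at half the watts: 2x the energy.
    def wh_per_job(watts, jobs_per_hour):
        return watts / jobs_per_hour   # watt-hours per job

    a = wh_per_job(200, 100)   # illustrative: 200 W, 100 jobs/hour
    b = wh_per_job(100, 25)    # half the watts, a quarter of the speed
    print("A: %.1f Wh/job, B: %.1f Wh/job (%.1fx)" % (a, b, b / a))

Machine B burns 2x the energy per job despite its lower wattage, which is why performance per watt, not watts, is the number to compare.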
The 2-year thing is interesting. Do you have any data for that assertion? I’m not being sarcastic; I would really like to read about that.
Power can be the #1 cost in data centers, right. But how much of that power is consumed by the CPU compared to the disk arrays? Today’s server workloads are not CPU-intensive at all, so unless you run a compute cluster, CPU power consumption is quite irrelevant.
To me, a 64-thread server with an FPU for each core (I believe that makes 8 FPUs) and hardware encryption make for the ideal server for any high-traffic application. Whether you’re running a web server, an app server, or even a database, being able to scale like that (especially with such a small footprint and low power consumption) makes the new CPUs extremely attractive.
Way to go Sun!