While using an AMD Barcelona server to create a portable benchmarking kit, InfoWorld’s Tom Yager discovered something unexpected: “I could incur variances in some benchmark tests ranging from 10 to 60 percent through combined manipulation of the server’s BIOS settings, BIOS version, compiler flags, and OS release.” Yager put this matter to AMD’s performance engineers and was told he was seeing an effect widely known among CPU engineers, but seldom communicated to IT – that the performance envelope of a CPU is cast in silicon, but is sculpted in software. “Long before you lay hands on a server,” Yager writes, “BIOS and OS engineers have reshaped its finely tuned logic in code, sometimes with the real intent of making it faster […] sometimes to homogenize the server to flatten its performance relative to Intel’s.”
That’s pretty much the idea behind Gentoo (and others), isn’t it? You compile the entire OS using flags that make the most of your processor.
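If you're curious what that actually buys you, you can ask GCC what its -march=native would enable on your particular CPU. A quick sketch in Python – purely illustrative, it just shells out to gcc, and the exact output format varies by GCC version:

    # march_native.py -- show what GCC's -march=native expands to on this box.
    # Illustrative only; assumes gcc is installed. This is the kind of
    # per-CPU flag tuning a Gentoo-style from-source build is based on.
    import subprocess

    out = subprocess.run(
        ["gcc", "-march=native", "-Q", "--help=target"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        # Keep only the resolved arch/tuning and the enabled feature flags.
        if "-march=" in line or "-mtune=" in line or "[enabled]" in line:
            print(line.strip())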
The problem in the IT world is that you generally want stability and consistency. Performance is nice, but you don’t want to have to track down bugs that depend on the underlying hardware and its optimizations. You want plug-and-play hardware.
well, on a server one should rarely run more than the required minimum, so optimizing and shaking out bugs should be easier.
what’s worrisome is that the bios can have such an effect on things. makes one wonder if some motherboard bios can result in an under-performing pc…
hmm, flash the bios, get a more responsive vista?
That’s actually true. Vista (and tickless Linux) use the HPET (High Precision Event Timer), hardware that has been present on many motherboards for years now, but is often hidden from the OS by the BIOS.
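If you want to check this on a Linux box, the kernel reports which clock sources it found and which one it’s actually using via sysfs. A minimal sketch, assuming the standard mainline sysfs layout:

    # clocksource_check.py -- is the HPET visible, and is the kernel using it?
    # Assumes Linux with sysfs mounted at /sys (standard mainline layout).
    BASE = "/sys/devices/system/clocksource/clocksource0"

    with open(f"{BASE}/available_clocksource") as f:
        available = f.read().split()
    with open(f"{BASE}/current_clocksource") as f:
        current = f.read().strip()

    print("available:", ", ".join(available))
    print("current:  ", current)
    if "hpet" not in available:
        print("no HPET exposed -- the BIOS may be hiding it from the OS")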
ok, now i feel like it’s required to move on to coreboot or something, as after getting to know stuff like this one starts to feel potentially cheated…
it’s like knowing that if you swap a chip or remove a part somewhere, your vehicle gains maybe 60% more power.
as in, it’s cheaper to make the same part and then sell it underclocked for those that can’t or won’t go for the premium products than it is to make a specific part that can’t perform any better…
I am told that with modern cars that’s exactly the case. Allegedly many of them can be tuned by hacking the electronics on board to unleash more power.
Off topic, but yes. On modern cars you can generally tweak the computers that mix the fuel and manage all the engine parameters. Sometimes you’ve got small microchips that are “optimized” for a different mixture (thereby extracting more power from the same engine), and sometimes you have to change the entire “computer”. You can also get into the computer and tweak the parameters directly (and people do).
Risk? Well, to draw an analogy, it would be like overclocking a computer. Nothing happens… theoretically. But it could overheat, fail, break, blow up, etc. You never know.
The difference is: a computer costs under $2,000; a car costs over $10,000.
Absolutely. I can up my car from 170 bhp to 220 bhp with a simple software update. It’s mostly about optimizing it for higher octane fuels.
Yes, on a car this is called “chip tuning”.
And believe me, it is a really bad idea. I am a calculation engineer for piston engines; I know what can go wrong.
Car manufacturers choose the fuel injection parameters in their chips to stay within the following limits:
– Try to have low fuel consumption (OK, that is not really true in the USA)
– Meet the emission legislation throughout the lifetime of the car
– Don’t destroy the engine
– Don’t destroy the gearbox
– Don’t make the car undriveable
– Don’t make the car noisy
With chip tuning you are likely overstepping several of these limits.
It can be a bad idea, but there are several cars that can handle the extra power very well, and there are chip tuning specialists who perform very solid tests on both engine and transmission before releasing their products to the market.
I have a friend who works as an engineer for GM, and he used to work on the computers in the cars. In his own car, he modified the computer to have a cartridge slot where he could load different chips of his own design to alter the performance characteristics of the car. He had a sport module and an economy module that he kept in the glove compartment, along with a module holding the standard program that came with that model.
That’s known as “flashing the chip”. You’re basically playing with the air, fuel, and timing variables, among other things. It’s the modern-day equivalent of changing your timing and setting up your carb(s) for a particular setup. The problem is, while you strive for performance gains, you can potentially sacrifice gas mileage or hurt your engine (and void your warranty!)
Why is this a surprise? This has been known for years: a crappy BIOS with a crappy ACPI implementation results in a crappy operating system experience. If there are issues with resource allocation at the lowest level, things are not going to be pretty when it comes to running an operating system.
This is why you see weird compatibility issues in the PC world; you’ll see motherboards requiring ‘routine’ BIOS upgrades because of an incompatibility with a video card or sound card. Hence why I’ve said it many times when people crap on about Apple – they don’t know what the heck they’re talking about.
Build your own PC and it’s Russian roulette whether or not you stumble over bugs in the firmware – get a computer from the likes of Dell or HP and at least a bare minimum of testing has been done for all the likely scenarios. These are the things that the people on here who appoint themselves ‘all-knowing gurus’ ignore when it comes to computers.
Yes, I have actually seen people’s computers perform better after a firmware upgrade. I remember having an old HP: I upgraded the firmware and both the performance and the battery life improved – the same goes for my old Lenovo laptop. It does happen; it’s unfortunate that people don’t check their computer manufacturer’s website for BIOS updates instead of spitting and cursing at Microsoft over things outside its control.
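For what it’s worth, on Linux you don’t even need to reboot into the BIOS setup to see what firmware you’re running; the kernel exposes the DMI/SMBIOS data under sysfs. A small sketch, assuming the standard /sys/class/dmi/id layout:

    # bios_info.py -- print the firmware details the board reports via DMI,
    # handy before going hunting for an update on the vendor's site.
    # Assumes a Linux kernel exposing DMI data under /sys/class/dmi/id.
    import os

    DMI = "/sys/class/dmi/id"
    for field in ("bios_vendor", "bios_version", "bios_date", "board_name"):
        try:
            with open(os.path.join(DMI, field)) as f:
                print(f"{field}: {f.read().strip()}")
        except OSError:
            print(f"{field}: (not exposed)")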
it is one thing to release bios updates that fix bugs, but another to release ones that clean up tweaking goofs that wouldn’t be there in the first place if the bios people didn’t try to second-guess the cpu people.
still, i should really be too jaded to get riled up about second-guessing or “big brother knows better”.
but then if i go down that path, i may as well stay in bed…
Hmm, that may explain why I had to reset the BIOS about every other month because it stopped detecting the DVD drive – until I went to DFI’s website, downloaded the updates, and upgraded (with no Windows and no floppy, thanks to FreeDOS and Syslinux :p ).
Now I’ve only had to do that once in a year. Yay!
I’m not sure that I fully understood the implications of this interesting article. The essence seems to be the following:
The chip makers give advice to the BIOS and OS makers on how to maximize performance. The BIOS and OS makers think they are smarter than the chip makers, so they mess up performance. But sometimes it’s just because they are lazy, or because they have different goals, e.g. power efficiency.
But the following questions remain unanswered for me:
Would I (in theory, of course – I’m not planning to do that!) have to rewrite or bypass the BIOS to get maximum performance, given that some BIOSes totally mess up CPU performance? I was under the impression that the OS could revert anything the BIOS has done. After all, the author mentions “enthusiast-tweaked machines” in a context that makes it sound as if such people could in fact achieve maximum performance. I somehow doubt they all rewrite their BIOSes.
He only mentions how Windows tries (and fails) to improve AMD performance. What about other operating systems?
How about Intel CPUs?
On a factory-built machine, you would typically have to replace the BIOS. There are a lot of settings that the OS can’t directly control – think deep-magic stuff like memory timings, AGP memory allocation, bus speeds, the CPU multiplier, etc.
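That said, some of what the BIOS programs is at least visible to the OS afterwards: a lot of CPU configuration lives in model-specific registers (MSRs) that the firmware writes at boot and that root can read back. A minimal sketch, assuming Linux with the msr module loaded (modprobe msr); register 0xC0010015 (AMD’s Hardware Configuration Register on K8-era chips) is used purely as an example:

    # read_msr.py -- read back a model-specific register the BIOS configured.
    # Assumes Linux, the 'msr' kernel module loaded, and root privileges.
    import struct

    def read_msr(reg, cpu=0):
        # /dev/cpu/N/msr: seek to the register number, then read 8 bytes.
        with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
            f.seek(reg)
            return struct.unpack("<Q", f.read(8))[0]

    AMD_HWCR = 0xC0010015  # Hardware Configuration Register on AMD K8/K10
    print(f"HWCR = {read_msr(AMD_HWCR):#018x}")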
Dell/HP/Apple/etc. all come with heavily sanitized (or crippled) BIOS versions. They generally expose the bare minimum of options needed to do whatever the machine is supposed to do: boot order, basic power settings, BIOS password, that sort of stuff. The “enthusiast” machines come with basically no BIOS restrictions. On a good BIOS setup you can change at least a dozen different memory timings alone; you can increment the bus speed 1 MHz at a time, or enable and disable specific features and optimizations.
Another difference is that the enthusiast board makers take the time to do specialized performance work for the various chips, while the large factory OEMs won’t bother because their goal is to minimize support calls. So if you buy a performance motherboard from DFI or Abit, not only will it have lots of options to tweak in the BIOS, it will also use a chipset specifically designed to maximize the performance of an AMD (or Intel) chip. The factory guys just want to “make it work”.
If you have that kind of board and can tweak it, then both Linux and XP can run optimized for AMD hardware to a degree. In XP you can install the AMD processor driver; in Linux, it’s built into the kernel already.
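You can also check which CPU frequency driver and governor the kernel actually picked up. A quick sketch, assuming the usual cpufreq sysfs layout – on K8-era AMD hardware the driver typically shows up as powernow-k8:

    # cpufreq_check.py -- which CPU frequency driver/governor is active?
    # Assumes a Linux kernel with cpufreq and the standard sysfs layout.
    BASE = "/sys/devices/system/cpu/cpu0/cpufreq"

    def read(name):
        try:
            with open(f"{BASE}/{name}") as f:
                return f.read().strip()
        except OSError:
            return "(unavailable)"

    print("driver:   ", read("scaling_driver"))
    print("governor: ", read("scaling_governor"))
    print("range:    ", read("scaling_min_freq"), "-", read("scaling_max_freq"), "kHz")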
one more reason to build one’s own computers
Ah, didn’t know that. Thanks for clarifying!
Wherever there is software, there is a problem.
The BIOS contains a small piece of software that controls the OS, or at least talks to it. If the BIOS is not well written and properly debugged, it will cripple your system experience.
Recently I purchased an Asus workstation motherboard that refused to recognize GF 9800GTS or 8400 graphics cards. Clearing the CMOS and then starting up solved the problem. Asus support was not available at the time to help with troubleshooting because its lines were jammed.
So the BIOS is vital for performance, stability, control, and remote management.
Because of its limitations and bugs, Apple and other high-end OEMs have replaced it with EFI.
The easy way to solve this is, of course, to use open source software in your base system together with an open source BIOS.