CoyoteGulch just posted an analysis of GNU C and C++ optimizations, using a genetic algorithm to discover the most effective optimization flags for different algorithms.
Very good, nicely written, well structured and informative article. I hope some source-based Linux freaks will read it and finally start to understand that optimizing a system is not about one GCC setting for all applications! Some applications need different GCC flags to run faster.
That is a very good article that I recommend anyone who uses GCC to read. It definitely helps you to know what different optimization flags do to increase your apps performance. And Acovea is a pretty useful piece of software, that I think would really benefit people who want their apps to run as fast as possible.
Where are the gentoo freaks?
In fact, the article implies that, yes, one gcc setting for all applications (-O3) is a useful generalization. It directly says that -O3 is harmful in only one case, and I doubt that most binary distributions use it, so in the cases where it helps, a source-based distribution would generally be faster. Still, I believe the real reason Gentoo is faster (on my box) than Red Hat is that I bothered to optimize various things (such as the hdparm settings) that I never knew how to work with on Red Hat. There also seems to be finer-grained control over what gets included and what gets started automatically at boot (the default is to start almost nothing, with each and every service individually optional), which speeds booting. I suppose I could optimize all of that on Red Hat too and get the same results, but I've decided to stick with Gentoo. In short, installing Gentoo facilitated a user upgrade, so to speak.
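For what it's worth, the hdparm tweaks I mean are along these lines. This is a sketch only: the device name is an example and safe values depend entirely on your drive, so read `man hdparm` before running any of it (and don't run it blind on hardware you care about).

```shell
# Benchmark raw sequential read speed before and after tuning
# (/dev/hda is an illustrative IDE device name)
hdparm -t /dev/hda

# Typical tweaks of that era: -d1 enables DMA, -c1 enables 32-bit I/O,
# -u1 unmasks other interrupts during disk I/O. All illustrative values.
hdparm -d1 -c1 -u1 /dev/hda
```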
Erik
“Where are the gentoo freaks?”
Here. The conclusion of the paper is nothing new to me. There are other reasons to run Gentoo than compiler flags. I compile everything with a simple -O2. It's not fair to think that every Gentoo user is of this "my gcc flags are longer than your gcc flags" breed.
I wish he had done a little more experimentation with -Os. That's the new fad in Gentoo land. :) I use -Os. For the desktop, I think it's reasonable. It produces smaller code, and my Gentoo box *feels* faster.
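If you want to see the -Os effect yourself, here's a quick sketch. The file names and the toy program are made up for illustration; a real application will show a much bigger size difference than this tiny loop.

```shell
# Toy program (placeholder) to compare binary sizes under -O2 vs -Os
cat > demo.c <<'EOF'
#include <stdio.h>
int main(void) {
    long sum = 0;
    for (long i = 0; i < 1000000; i++)
        sum += i * i;
    printf("%ld\n", sum);
    return 0;
}
EOF

gcc -O2 -o demo_o2 demo.c   # optimize for speed
gcc -Os -o demo_os demo.c   # optimize for size

# Compare the text (code) segment sizes of the two binaries
size demo_o2 demo_os
```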
The Gentoo freaks are right here. There is a “CFLAGS!!!” contingent, but most people use Gentoo for other reasons, like its strong community, extreme flexibility, etc.
Very interesting article, and a unique way of examining this issue. But consider that all of the benchmarks were numerically intensive – far more than the vast majority of common Linux applications. As a scientist, I appreciate such benchmarks, and have noticed similar patterns of performance in my own code. But for programs like OpenOffice, KDE, etc., you aren’t going to see nearly as much difference across compiler options.
And yes, I'm a recently converted Gentoo freak, too, but it's because I like the BSD feel of Portage.
I can't find any data on how long it takes to find the best flags per program, only that it is significantly less than the brute-force method. Did I miss something when I read it, or is there none?
The neat thing is, you could take the tool and run it on _your_ app, not some benchmark.
If you are writing a performance-critical app, it might be worth running this tool on your code before release (on a unit test of your code? Driving a GUI test would be difficult, I'd bet).
I am playing with a GA of my own; I would like to try to optimise my simulator using this GA. :)
Having said that, GCC isn’t the choice for performance-critical apps on most architectures.
“Having said that, GCC isn’t the choice for performance-critical apps on most architectures.”
On the contrary – I’d say that apart from a few architectures, GCC is the choice for performance-critical apps.
Those “few” architectures probably constitute 95% of systems out there, though. :)
GCC has abstracted both the programming language and the target architecture, so it will never be as fast as, say, an x86-specific C compiler.
But GCC is free, standards-compliant, extremely portable, language-agnostic and almost as fast as the proprietary architecture-specific compilers (sometimes faster).
The only real issue is compile-time performance, but Apple is committed to improving that (witness some of the compilation-time-reducing features in Xcode), and I think the Red Hat (Cygnus) guys are on it too.
Oh, and IA64 performance sucks too, but that's a pretty new architecture and it's a bitch to compile for. Expect it to improve slowly but steadily over the next couple of years.
GCC is something the free software community should be REALLY proud of. Thanks, RMS, Cygnus/Red Hat, and everyone else who has contributed to GCC. You rock.
“GCC has abstracted both the programming language and the target architecture, so it will never be as fast as, say, an x86-specific C compiler.”
I'm sorry, but I don't see any reason why they can't optimize the GCC backend (the bit that generates machine code) so that it generates code as fast as Intel C++, for example.