Some people prefer the convenience of pre-compiled binaries in the form of RPMs or other such installer packages. But this can be a false economy, especially with programs that are used frequently: precompiled binaries will never run as quickly as those compiled with the right optimizations for your own machine. If you use a distributed compiler, you get the best of both worlds: fast compiles and faster apps. This article shows you the benefits of using distcc, a distributed C compiler based on gcc that can give you significant productivity gains.
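For readers who haven't tried it, the basic setup is roughly as follows (the host names and job count here are just placeholders, not recommendations):

    # On each helper machine, start the distcc daemon and allow your LAN:
    distccd --daemon --allow 192.168.1.0/24

    # On the machine driving the build, list the hosts and build in parallel:
    export DISTCC_HOSTS="localhost fast-box other-box"
    make -j8 CC="distcc gcc"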
No, there is a huge time cost in compiling your own code: the compile time itself. All the Gentoo users can tell me about their 2% performance gains, but it took them four days of compiling to get those gains, so in the end they are spending time, not saving it.
Like the “are Python apps real?” and “Java is slow!” arguments, the roll-your-own argument is really limited these days to people still running Pentium 3s.
Well, I have a Gentoo system installed on my machine. I like Gentoo for a number of reasons, but certainly not for the performance gains. Actually, it is a pain in the neck compiling a whole system (I’m running a P3 at 850 MHz).
On the other hand, compiling your own system has advantages. Because you have so much choice when writing Linux software, developers sometimes include optional support for certain libraries. So using Gentoo Linux, or compiling software for your own use, isn’t done for the sake of some performance gain, but rather to build an operating environment that fits your choices, beliefs, needs, and hardware limits.
So if a piece of software supports both Qt and GTK and I am only using KDE, there is no need to build in the GTK part of that software.
I recently discovered ccache (http://ccache.samba.org/) for doing my nightly CVS builds of Firefox. After a full build is cached, it looks at a preprocessed source file and detects if it’s going to compile into exactly the same object code as the previous build and bypasses the compilation for that file. There’s no sense recompiling a file that just had some comments changed or style reformatting.
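If anyone else wants to try it, hooking ccache in is roughly as simple as using it as a compiler wrapper (or putting symlinks named gcc/g++ that point at ccache early in your PATH):

    # Use ccache as a wrapper around the real compiler
    export CC="ccache gcc"
    export CXX="ccache g++"
    make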
Thanks for the tip on ccache!
Since the real bottleneck in software is almost always I/O operations, CPU optimizations on the compiler side often don’t speed up actual runtime performance very much. But never mind that…
Even if you DO get a 2% performance increase from optimizations when compiling your own binaries, this still doesn’t really increase your actual productivity with the program even 2%. Most of the time you spend using frequently-used applications isn’t waiting for the computer to finish doing something, it’s time you spend thinking. And no amount of compiler optimization is going to fix that bottleneck.
Yes, gcc is slow compared to almost every other compiler, but that doesn’t make it suck; it’s a trade-off. It is very convenient to use the same compiler across multiple operating systems and architectures. Name one compiler that supports as many platforms as gcc. Personally, I find this outweighs the loss in compile speed and platform-specific optimization, but others may have different needs, which is why other compilers exist.
“No, you don’t reduce compile time by distributing the work load. You reduce compile time by writing good compilers.”
How about you reduce compile time using both? One is currently available, the other one isn’t. Instead you say: “I don’t like this solution.” Another alternative is to upgrade the hardware and declare the other solutions “the wrong way”. I think the best solution depends on the situation; there is no clear “better” solution that always applies.
You also haven’t bothered to state where GCC is slow. It’s not always the worst choice for optimized builds, and sometimes MIPSPro, ICC, or other proprietary compilers are better choices. I take it you always use those, or TenDRA?
I’m currently checking out DistCC and CCache. Seems they’re able to solve a major problem here.
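From what I’ve read so far, the two can even be chained, roughly like this (host names are placeholders; ccache runs locally and hands cache misses to distcc):

    export CCACHE_PREFIX="distcc"
    export DISTCC_HOSTS="localhost buildhost1 buildhost2"
    export CC="ccache gcc"
    make -j6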
GCC doesn’t suck. Its code generation, relative to Intel C++ 8.0, is actually very good:
http://gcc.gnu.org/ml/gcc/2004-05/msg00021.html
There are a few benchmarks that show poor performance, but for the rest, the results are close.
It is, however, as slow as frozen tar. As far as optimizing compilers go, Visual C++ is the compile-time king, at least on x86.
Well, as someone who recompiles FreeBSD/DragonFly quite frequently, I’ve got to say that the best way to reduce the time it takes is to build everything in a ramdisk. I’ve cut 100-minute compile times down to about half an hour by mounting /usr/obj in a ramdisk instead of on my hard drive.
http://bsdvault.net/sections.php?op=viewarticle&artid=53
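On FreeBSD 5.x the setup is roughly this (the size is just an example; adjust to whatever your RAM allows):

    # Back /usr/obj with a swap-backed memory disk, then build as usual
    mdmfs -s 512m md /usr/obj
    cd /usr/src && make buildworld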
Question for the more enlightened: so what is a faster compiler on linux (x86 or alpha)? I may want to take a look at it/them.
Well, for x86 you could try icc. I don’t know if the compile time is faster, but the binaries should be.
And Compaq did release a compiler for Alpha on Linux. I don’t know about the compile time there either, but you should get faster binaries.
If you’re writing your own software, other compilers shouldn’t be a problem, but if you want to compile GNU software or the kernel you could run into serious problems.
Um, you’d be hard-pressed to find a compiler on x86 that generates faster binaries than GCC. Intel C++ sometimes has an edge on floating point, but only really beats GCC when it can auto-vectorize the code (which neither GCC nor Visual C++ can).
LCC will definitely have slower code (it has a much simpler optimizer) but will generate code very quickly. TCC will generate even slower code, but compiles incredibly fast.
That’s because no one on the GCC developer team has tried to adopt precompiled header support. It is supported by Borland C++, Visual C++, and a few others. If the preprocessor has to build a 600 kB file just for <iostream>, then the compiler will take much longer to compile it.
C compilation is also slower in the current version of GCC than in 2.95. Maybe no one there tries to optimize the code. The best optimization is not machine-specific but a better algorithm.
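That said, the GCC 3.4 release notes do advertise precompiled header support; roughly, the usage is supposed to look like this (the header name is just an example, and the options must match between the two steps):

    # Compile the big common header once; GCC writes bigstuff.h.gch next to it
    g++ -O2 -x c++-header bigstuff.h
    # Later compiles that #include "bigstuff.h" with the same options
    # pick up the .gch automatically instead of reparsing the header
    g++ -O2 -c foo.cpp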
I’ve got two computers, an AMD 1800+ and a P3 733 MHz. Do I gain compile time by using distcc to compile on both computers, compared to compiling only on the faster of the two?
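In case it helps, a minimal sketch of that two-host setup, assuming the build is driven from the P3 and the AMD box runs distccd and is reachable as amd1800 (the distcc docs suggest listing the faster host first):

    export DISTCC_HOSTS="amd1800 localhost"
    make -j4 CC="distcc gcc"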
No offense to Gentoo users, but the speed improvements may largely be a myth. I’ve seen some load time and execution comparisons, and Debian (i386- or i486-optimized) would often be faster. This could be due to running services or other factors, as (stock) Debian and (stock) Gentoo are dissimilar. Having tinkered with Lunar, Sorcerer, and Sourcemage, I’m not entirely convinced of the benefits of source-based distros. Subjectively, the Slackware-based distros seem fastest to me, but I’m admittedly Slack-biased. Certainly you must choose the RIGHT flags; I’ve broken a few packages getting crazy with the optimizations. But hey, if you have the time and CPU, then go for it.
“Well, for x86 you could try icc. I don’t know if the compile time is faster, but the binaries should be.”
icc used to be better because of SSE and improved floating-point code; I don’t know what the situation is in 2004. I’d love to recompile everything with icc, but I understand it’s not a drop-in replacement and that much F/OSS is intended for gcc compilation. Anyone tried this, by chance?
“GCC doesn’t suck. Its code generation, relative to Intel C++ 8.0, is actually very good:
http://gcc.gnu.org/ml/gcc/2004-05/msg00021.html”
That is a cool link, but it’s referring to AMD64, not IA-32. Still, gcc isn’t too shabby given the price.
The author makes an interesting point that -O1 plus carefully chosen optimizations can beat solitary -O3. I will remember that …
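A rough example of what that looks like in practice (the extra flags here are only illustrative; the whole point is to benchmark your own code and keep only the passes that measurably help):

    # blanket optimization
    gcc -O3 -c foo.c
    # versus a hand-picked set on top of -O1
    gcc -O1 -fomit-frame-pointer -funroll-loops -finline-functions -c foo.c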
Distcc surely reduces the overall compile time; however, it is not always the case that distcc can do the job. A few semesters ago we made a 3D engine on Linux, and every time distcc got to the OpenGL calls it crashed. Maybe we hadn’t configured distcc correctly, or maybe something else was wrong. Hopefully it has been corrected (if it needed to be corrected).
Another aspect worth mentioning is the bandwidth of the underlying network. A slow connection isn’t adequate: I tried it on my 512/128 kbit connection and it sucks big time. On a 100 Mbit connection it’s great.
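If you’re stuck on a slow link, newer distcc releases can at least compress the traffic and cap how many jobs go to a remote host; the host spec would look roughly like this (the host name is a placeholder):

    # '/2' limits concurrent jobs on that host, ',lzo' compresses the transfers
    export DISTCC_HOSTS="localhost remotebox/2,lzo"
    make -j4 CC="distcc gcc"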
Has anyone developed an app that combines distcc with a SETI@home-style approach?
Some people leave their computers on all day (bad for the planet, BTW); anyway, if they ran distcc, developers could take advantage of their spare compilation power.
This would work best with the open source community. Imagine a KDE developer is out to lunch while a GNOME developer in another timezone is compiling GNOME. That would be cool.
GCC compilation speed will also improve a bit if Niall Douglas and Brian Ryner’s patch, which hides ELF symbols, is incorporated. That could happen with gcc 3.5.
See here:
http://www.nedprod.com/programs/gccvisibility.html
About a year ago I read that you can distribute compiles on a Gentoo network; maybe it was in their weekly updates?
Visual C++ generates more optimized code and compiles faster than gcc or other x86-based compilers. For x86, the Visual C++ compiler, aka cl.exe, is the best.
“About a year ago I read that you can distribute compiles on a Gentoo network; maybe it was in their weekly updates?”
Such systems have been available for more than a decade. Gentoo has been doing this for at least a couple of years.
It’s more important to write a correct compiler than a fast one.
“Visual C++ generates more optimized code and compiles faster than gcc or other x86-based compilers. For x86, the Visual C++ compiler, aka cl.exe, is the best.”
Nope. Visual C++ only runs on Win32/x86. AFAIK it is also only able to compile for Win32/x86. Hence, it is at best the best for Win32/x86 (and I found the benchmark I read flawed).