The entire realm of open-source software could get a performance boost if all goes well with a plan to overhaul a crucial programming tool called GCC.
Looks like most compilers are getting improvements. I’m especially excited about the improvements being made to the g++ compiler.
http://gcc.gnu.org/gcc-4.0/changes.html
One minor technicality: GCC is not a “programming tool”; it’s a collection of tools (gcc, ld, etc.).
I’m sure some Gentooer is running a pure 4.0.0 system somewhere. Any benchmarks on it vs. 3?
ld is part of the binutils and not of GCC. But it is a collection of compilers as the name suggests.
I’m looking forward to the 4.0 version because some benchmarks show its potential. Compared to the poorly optimizing 3.4.x line, the 4.0pre version is 50% faster on a recent CPU-bound test.
Do you have a link to these benchmarks? I’ve only seen mixed results for GCC 4.0. Some improvements, some regressions.
Tiger will be built with and shipped with gcc 4.
Tiger will use gcc4; however, please remember that Apple uses a modified GCC and not the base.
Off the top of my head, Apple’s version has:
1) Support for Objective-C++ (it might make the 4.0 base; else it’ll be in 4.1)
2) Objective-C/C++ style @throw/@catch handling.
3) Several minor changes to support things that Apple thinks are important which the base hasn’t accepted or has rejected in the past. (Apple still maintains several sections of code that have been removed from the base.)
I was not clear enough. I made benchmarks with some of my own programs, and I was only reporting the best result for GCC 4.0. Other programs show mixed behaviour, as you experienced. But if you want to see it yourself, I can send you the small program showing the most impressive result.
“ld is part of the binutils and not of GCC. But it is a collection of compilers as the name suggests.”
Well, if you want to get pedantic, the compilers in the GCC suite all produce assembly code which is then assembled using GAS, which is part of the binutils package. Without binutils, GCC gets reduced to an interesting exercise in how to produce large text files full of GAS-syntax assembly code. I imagine it would be pretty difficult to separate binutils and GCC at this point; probably best to just think of them as the GNU toolchain.
It’s always fun to read how some of these very crucial projects (GCC, Glibc etc) still have really tiny development teams. Guess it’s a testament to how much folks can get done if they are in a position to concentrate on the project full-time.
Good to hear that the project is progressing, but no matter how far they get the compiler is still going to get beaten by a decent assembly programmer :p
An experimental version of gcc-4.0 can be found in the Ubuntu universe repositories (and probably plain Debian too). It’s not super-stable yet. I had several problems with the generated code on a rather sizeable C++ program; in particular, passing double a[4][4] as a function parameter caused it to generate bad SSE code when a was referenced. It automagically updates versions every once in a while and does better each time.
The most interesting thing you can do with it at this point is to compile your programs with the ‘-ftree-vectorize -fdump-tree-vect-stats’ options and tweak your loops so that they auto-vectorize into SSE/AltiVec code. That way you’ll be all set when the stable release comes along in a few months.
Michael
I meant pedantic about the article, not the comments. Specifically this part:
Almost all open-source software is built with GCC, a compiler that converts a program’s source code–the commands written by humans in high-level languages such as C–into the binary instructions a computer understands.
*bzzt* Incorrect. Please insert another credit?
Just thought I’d best clarify. People are getting real twitchy on the board lately.
Of course! There’s always a Gentooer willing to trash his/her system for the sake of advancing the bleeding edge. Work on compiling systems with 4.0 actually began last May when it was still referred to as 3.5. The thread starts here: http://forums.gentoo.org/viewtopic-t-176085-start-0.html
Some of them are even starting to work with 4.1.0 alpha!!
“It’s always fun to read how some of these very crucial projects (GCC, Glibc etc) still have really tiny development teams. Guess it’s a testament to how much folks can get done if they are in a position to concentrate on the project full-time.”
GCC development is mostly paid for by Red Hat and CodeSourcery. They have full-time developers working on it.
The Fedora development tree already has everything compiled with gcc4, and Fedora Core 4 will ship with it. test1 is planned to be released today.
Fedora Core 3 had it when it was released 11/08/05.
Rawhide is built only against it right now, in preparation for FC4test1 (tomorrow).
I must say my boxes running Rawhide feel quite fast compared to the pre-gcc4 tree.
Obviously Fedora Core 4 will be “built” against gcc4.
Technology is sweet
“Fedora Core 3 had it when it was released 11/08/05”
Fedora Core 3 had a gcc4 branch which was NOT the default, and none of the RPMs were built against it. FC4 is the first targeted version.
“Without binutils GCC gets reduced to an interesting exercise in how to produce large text files full of GAS syntax assembly code. I imagine would be pretty difficult to separate binutils and GCC at this point, probably best to just think of them as the GNU toolchain.”
gcc isn’t tied to the GNU binutils. For example, it can use the native as and ld on Solaris.
I didn’t SAY that it was default, did I? NOPE. I said it had it; you can pretty much tell by the fact that I said FC4 will be “BUILT” against it.
Must read context, thx…
“The primary purpose of 4.0 was to build an optimization infrastructure that would allow the compiler to generate much better code,” Mitchell said.
I think this is the most important part of the news.
The new compilers will (at least as I understand it) make it easier to add optimizations, so the generated code runs better in general or on a particular architecture.
I hope we will soon see these kinds of optimizations, especially those that build up to full-program optimization.
It’s really sad how they managed not to mention OS X and Tiger, if only for the visibility.
The amount of code Apple will be adding after Tiger gets released should hopefully quiet the whining on the mailing lists about broken ObjC, and the backpedaling from the non-Apple/GNUstep folks who clearly don’t get along with the main Apple presence, who is credited with purposely breaking compatibility so that the future release has many improvements across C/ObjC/C++.
Those were heady days indeed.
Just to mention that the Tiger version of GCC will support Tree SSA (Static Single Assignment), as mentioned in the article, and will get a new C++ parser that Apple describes as much faster than the GCC 3.3 one.
But it is interesting that, according to the article, autovectorization will be proposed in GCC 4.1, while autovectorization (at least for PowerPC) will already be offered in the GCC 4.0 shipping with Tiger.
Of course the new version will be faster; if they made it even slower, it would start to de-compile.
Seriously, GCC is probably the slowest compiler out there. But it produces cross-platform code, so I guess it’s still pretty cool.
“Seriously, GCC is probably the slowest compiler out there. But it produces cross-platform code, so I guess it’s still pretty cool.”
Actually, the code that it produces is platform- and architecture-bound. It is the compiler itself that is cross-platform.
I like how IBM competes with open source. Didn’t know that; I was under the impression their business depended on OSS…
“Seriously, GCC is probably the slowest compiler out there. But it produces cross-platform code, so I guess it’s still pretty cool.”
Actually, it’s one of the better ones. There are better, but GCC is far from the slowest. And as you mentioned, it is cross-platform.
I’d like the ability to choose between very optimized code (slow compile times) and very fast compile times.
Currently, including things from the STL *really* hurts compile times *very badly*. But even things like Qt take ages to compile with GCC. I’m on a pretty fast machine (AMD Athlon XP 2500+), but developing C++ software with GCC gets annoying fast.
I believe you’re looking for -O0 .. -O3
gcc 4 compiles much faster, up to 50%. Check the gcc mailing list for benchmarks.
Actually, the mailing list says compilation time is up to 1.5 times SLOWER (on PPC), although this was with -O1, and apparently a lot more optimizations are on by default.
http://gcc.gnu.org/ml/gcc/2005-01/msg00082.html
But the wiki says it is 10-20% slower at compiling.
http://gcc.gnu.org/wiki/Regressions