The GNU Compiler Collection is a suite of compilers that compile C, C++, Objective C, Fortran, Java, and CHILL for a wide variety of architectures. Version 3.4.3 was released recently.
GNU’s GCC site (gcc.gnu.org) still lists 3.4.2 as the most recent release.
It’s in Arch’s repository before GCC’s own site has been updated.
Every time someone announces a new release (of whatever), they should add a link to the change log…
Just my 0.02 Euro
There doesn’t seem to be a changelog for 3.4.3 on GCC’s website.
http://gcc.gnu.org/gcc-3.4/changes.html#3.4.3
D’oh… there is a link in the post. Forgive me, it’s late.
The release tarball is currently being _created_. There’s no announcement because they want to give the mirrors a chance to get the data before they get swamped.
Which is to say that the release isn’t out yet. In fact, the ChangeLog is still being created.
Also note that CHILL no longer has a GCC front end; Ada, however, does. This is also the last release series that will include the old f77 Fortran compiler.
Hmm, I thought changelogs were updated frequently, as opposed to leaving everything until the last minute before releasing the software and then writing it all up.
ChangeLogs are kept up to date; just download the source code and check it out. The last-minute part is converting the developers’ cryptic log into something that matches the bug reports (they may have three partial patches to fix one issue; the developer change log will show each patch and explain what it is doing, while the end user only wants to know which bugs were fixed).
Kernel developers are still using an outdated GCC!!!
http://kerneltrap.org/node/view/4126
For some things the GCC 2.95.x series is better than the 3.x versions. For example, if you want to learn assembler by disassembling compiled C code without wading through bloated output, you should use the 2.95 series.
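For anyone who wants to try that comparison, here is a minimal sketch (the file name and flags are only an example, and the output of course differs between gcc versions):

/* tiny.c -- a trivial function whose generated code is short enough to read.
   Compile straight to assembly:          gcc -S -O2 tiny.c   (output lands in tiny.s)
   Or build an object and disassemble it: gcc -c -O2 tiny.c && objdump -d tiny.o
   Running the same file through 2.95 and a 3.x release shows the difference
   the posters above are talking about. */
int add_three(int a, int b, int c)
{
    return a + b + c;
}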
Sorry, I meant version notes, not ChangeLog. ChangeLogs are indeed kept up to date. But the version notes are still being written out (what bugs were fixed, what changes users will see, etc.).
Dag gone it, if only it didn’t take so long to build. Guess I’ll have to wait until my distribution makes a package for it.
g77 sucks. The Intel compiler is an order of magnitude faster, and that is not a figure of speech!!
g77 isn’t well supported, this is true. But then, Fortran isn’t either.
Interesting concept posted by Mark, decompiling C code to learn assembler! Why not simply learn assembler? Sometimes it seems to me from the code produced by gcc that the developers didn’t bother! 😉
Seriously though, something needs to be done about the increase in bloat between 2.x and 3.x; it’s affecting everything from the kernel up. It’s late, and maybe it’s a no-brainer, but I just don’t get how my hardware in the last five years has moved on so much, with HT and all manner of media instructions (MMX, SSE1/2), while compilers don’t seem to have kept pace with the GHz increases and the data-crunching CPU functionality.
I do appreciate all the work the gcc guys put into their compiler, but something just doesn’t seem to add up. I’d really like to see some comparisons of the different compilers and their code output: Intel, GCC, and what about others like CodeWarrior?
I’m a big Linux and BeOS fan, and I’d hate to see them continue down the M$ spiral of bloat and kludge, which is certainly where Linux seems to be heading fast.
I used to hate C for this exact reason, preferring to write stuff in assembler. I’m interested, and I’m inspired by Mark; I’d like to take a look at some of the code these compilers are generating and maybe even offer up some useful suggestions to the developers. I wonder if they’ve stopped looking at what they’re outputting.
Anyone know of any good sites or resources?
How does the G++ C++ compiler compare to Comeau C++? Is Comeau C++ any good?
I used to hate C for this exact reason, preferring to write stuff in assembler.
Well, there’s a tradeoff of time vs. effort when you use any HLL vs. assembly. Sure, you can write it in assembly, but do you need to? Following the 80/20 or 90/10 rule, only break out the assembly if you can really profile it down to the 10% that is truly critical. At least you’re on the right path in even considering assembly; many people seem to consider assembly redundant.
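As a rough illustration of that profiling step (the file and function names here are invented, and gprof is only one of several tools that can do this):

/* hotspot.c -- profile first, then hand-tune only what the profile points at.
   Build with profiling support:  gcc -pg -O2 hotspot.c -o hotspot
   Run ./hotspot once (it writes gmon.out), then:  gprof ./hotspot gmon.out
   The flat profile shows which function actually burns the cycles. */
#include <stdio.h>

double crunch(long n)                 /* the deliberately heavy "10%" */
{
    double sum = 0.0;
    long i;
    for (i = 1; i <= n; i++)
        sum += 1.0 / (double)i;
    return sum;
}

int main(void)
{
    double total = 0.0;
    int i;
    for (i = 0; i < 200; i++)
        total += crunch(2000000L);
    printf("%f\n", total);
    return 0;
}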
I wonder if they’ve stopped looking at what they’re outputting.
There’s work on gcc with tree-ssa to get a single optimization framework going for gcc 4 (I think). If you’re familiar with gcc, you’ll know there is currently no single optimization framework in place: code is converted from each language into RTL, and this conversion is different for each language (and hence the level of optimization differs all the time).
With tree-ssa, code is converted into GIMPLE/GENERIC, which enables a lot of other optimizations to be done across several languages and focuses the optimization effort rather than diluting it over several languages.
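If you want to see what that intermediate form looks like, here is a small sketch (this assumes a tree-ssa-capable gcc; the dump flag and the exact layout of the dump may differ between snapshots):

/* gimple_demo.c -- peek at the GIMPLE form the tree-ssa work is built on.
   With a tree-ssa capable gcc:  gcc -O2 -fdump-tree-gimple -c gimple_demo.c
   A dump file appears next to the source; the name and format vary between
   snapshots, so treat this as a rough guide only. */
int scale(int x, int y, int z)
{
    /* in GIMPLE this roughly becomes:  t1 = x + y;  t2 = t1 * z;  return t2; */
    return (x + y) * z;
}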
Anyone know of any good sites or resources?
Check: http://gcc.gnu.org/projects/tree-ssa/
There’s other work on SSA, like the Mono project’s common optimization framework for the Mono JIT/AOT compiler, and I’ve heard references to other IR languages being used as well, so there’s a lot of work around. But I’d say tree-ssa is probably what you’re looking for if you’re keen to get gcc back on track.
Comparison of GCC and the Intel compiler:
http://www.coyotegulch.com/reviews/linux_compilers/
I used to hate C for this exact reason, preferring to write stuff in assembler.
Well, there’s a tradeoff of time vs. effort when you use any HLL vs. assembly. Sure, you can write it in assembly, but do you need to? Following the 80/20 or 90/10 rule, only break out the assembly if you can really profile it down to the 10% that is truly critical. At least you’re on the right path in even considering assembly; many people seem to consider assembly redundant.
Thanks PC for the constructive and informative reply. I wasn’t quite sure what to expect; sometimes when I read things on here people get very defensive and the flames begin to rise as soon as someone even begins to ask the awkward questions. Thing is, I fully appreciate the work the guys are doing; it’s just that in my own experience working with C (and its offspring) over the past 10 years or so, I’ve noticed a real decline in the quality of code the compilers are generating. It’s funny: years ago we HAD to be really anal about writing things, in assembler usually, using every last corner of a machine’s memory map and thinking our way around wasting valuable CPU cycles.
C was seen almost as a GUI language, not something you’d write a rendering engine or driver with.
You’re right about the tradeoff, absolutely, and as CPUs continue to increase in speed and we have more memory to use, the arguments against using assembler increase. Why spend 10 times as long assembling something which will only execute 5 times faster than its C counterpart?
Problem is, the tradeoff assumes optimisation. It’s difficult to write slow assembler; you have to be a really inexperienced programmer or simply wasteful, two things which most assembler programmers are absolutely not, otherwise they’d have never progressed (or regressed? – hehe).
Seriously though, I feel strongly that the compiler itself MUST be optimal in the most anally-retentive way! Just think of the ramifications of a leaky or lazy compiler, particularly when it’s being used to compile every bit of an OS from the kernel up: slower compile times, larger binaries, all cumulative too due to the kernel, drivers, libs, apps layering paradigm.
I’ve noted comments from various quarters about the differences between versions of gcc, some suggesting the older versions are in some way faster and leaner.
I wonder if they’ve stopped looking at what they’re outputting.
There’s work on gcc with tree-ssa to get a single optimization framework going for gcc 4 (I think). If you’re familiar with gcc, you’ll know there is currently no single optimization framework in place: code is converted from each language into RTL, and this conversion is different for each language (and hence the level of optimization differs all the time).
That’s really interesting.
With tree-ssa, code is converted into GIMPLE/GENERIC, which enables a lot of other optimizations to be done across several languages and focuses the optimization effort rather than diluting it over several languages.
Makes sense.
Anyone know of any good sites or resources?
Check: http://gcc.gnu.org/projects/tree-ssa/
There’s other work on SSA, like the Mono project’s common optimization framework for the Mono JIT/AOT compiler, and I’ve heard references to other IR languages being used as well, so there’s a lot of work around. But I’d say tree-ssa is probably what you’re looking for if you’re keen to get gcc back on track.
Thanks. I’ll check this out further; I don’t want to be one of those people who complain about stuff and don’t take action.
We could do with a crack team of bloat-intolerant assembler programmers whose objective is to claw back every wasted CPU cycle. It’s a dirty job…
Thanks PC for the constructive and informative reply.
Not a problem. I am open to informative discussion; it happens rarely where I am. (It usually degenerates into an argument which ultimately goes nowhere.)
… in my own experience working with C (and its offspring) over the past 10 years or so, I’ve noticed a real decline in the quality of code the compilers are generating.
Odd; my experience is that compilers are generally generating good code, but the complexity of keeping things working as a whole is holding them back: not being able to maximize register usage, having to keep calling conventions accurate, and so on. Assembly programmers are able to set up contexts where they know exactly how they want the CPU to execute, and so can program much more efficiently.
Then there’s CPU complexity, which has made life a lot harder than on earlier CPUs. For instance, is it immediately obvious that lea (%eax,%ebx,1),%ecx is a good way to do something like C = A + B? Or that something like xor %eax,%eax is the obvious way to do A = 0? Or maybe it’s the other way around; depending on the register operations before or after, you may need to rewrite it to avoid stalls. To be honest, it’s pretty hard finding and working these things out, not to mention that changes to the CPU’s internal architecture will make some code optimizations redundant or even harmful.
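To make those two examples concrete, here is a hedged C sketch with the kind of x86 a compiler may emit shown in comments; the actual instruction choice depends on the gcc version, flags, and target CPU:

/* idioms.c -- the two operations above, in C, with plausible x86 in comments.
   Verify what your own compiler really does with:  gcc -S -O2 idioms.c */
int add_pair(int a, int b)
{
    return a + b;        /* may be emitted as  lea (%eax,%ebx,1),%ecx  */
}

int zero(void)
{
    return 0;            /* commonly emitted as  xor %eax,%eax         */
}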
Why spend 10 times as long assembling something which will only execute 5 times faster than its C counterpart?
Well, if it takes 5 hours to compute and you can shorten it to 1 hour, that’s 4 hours of time (and power) saved. Multiply that by how many users… it adds up. I’m not saying all optimizations are worthwhile, but anything that saves time in an appreciable way overall is worth looking at.
It’s difficult to write slow assembler; you have to be a really inexperienced programmer or simply wasteful…
Actually, it’s pretty easy to write slow assembly if you don’t know the CPU that well and start hitting pipeline stalls all the time. Finding and eliminating them usually turns your assembly into something fairly unreadable, but if you have a better design than the compiler you’ll probably beat it by using more registers and setting up a suitable context for the job.
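If you want to see that kind of effect without touching assembly at all, here is a rough C sketch; the function names are made up, and how much the difference matters depends entirely on the CPU and compiler:

/* stalls.c -- one long dependency chain vs. two independent accumulators.
   In sum_chain every addition waits for the previous one; sum_split keeps
   two independent accumulators so the CPU has work to overlap.  (The two
   can differ in the last bits of the result because of rounding order.) */
double sum_chain(const double *a, int n)
{
    double s = 0.0;
    int i;
    for (i = 0; i < n; i++)
        s += a[i];               /* serial: each add depends on the last s */
    return s;
}

double sum_split(const double *a, int n)
{
    double s0 = 0.0, s1 = 0.0;
    int i;
    for (i = 0; i + 1 < n; i += 2) {
        s0 += a[i];              /* these two adds are independent */
        s1 += a[i + 1];
    }
    if (i < n)                   /* pick up the odd element, if any */
        s0 += a[i];
    return s0 + s1;
}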
I’ve noted comments from various quarters about the differences between versions of gcc, some suggesting the older versions are in some way faster and leaner.
This is probably the lack of a single optimization framework showing up. Code that runs well on, say, a Pentium doesn’t work as well on a Pentium 4, so you’d expect it to slow down somewhat. (Again, CPU complexity in action.)
One person I know (who does Intel assembly all the time) mentioned that you pretty much have to code it three or more times, time each version a few thousand times across all the major CPUs, and then pick the code that was fastest across the most Intel variants to be sure it was the fastest. Note: they may have coded something faster for, say, the Pentium 4, but it was usually taken out because leaving it in would have been detrimental to the other Intel CPUs.
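A crude version of that repeat-and-time approach, sketched in C (the names are invented; timing across different CPUs just means building and running the same harness on each machine and comparing the numbers by hand):

/* bench.c -- repeat the routine under test enough times to get a stable number.
   Build:  gcc -O2 bench.c -o bench      Run:  ./bench  on each machine of interest. */
#include <stdio.h>
#include <time.h>

long work(long n)                 /* stand-in for the hand-tuned variants being compared */
{
    long i, acc = 0;
    for (i = 0; i < n; i++)
        acc += i ^ (i >> 3);
    return acc;
}

int main(void)
{
    long sink = 0;
    int rep;
    clock_t t0, t1;

    t0 = clock();
    for (rep = 0; rep < 2000; rep++)      /* many repetitions so clock() resolution is not an issue */
        sink += work(100000L);
    t1 = clock();

    printf("%.3f seconds (sink=%ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, sink);
    return 0;
}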
Thanks. I’ll check this out further; I don’t want to be one of those people who complain about stuff and don’t take action.
By all means have a look and see what you think. There’s a lot of work being done, and they could do with some extra people who have some pretty specific skills. (Apparently, not many programmers are low-level enough anymore…)
We could do with a crack team of bloat-intolerant assembler programmers whose objective is to claw back every wasted CPU cycle. It’s a dirty job…
Actually, the work on bytecode/IR languages is pretty interesting. Some are less bloated than native code if you go by size alone. The work on JITs is progressing to the point where I expect all code will eventually be JIT’ed from bytecode or compiled into native code as part of the install process, probably a combination of the two to generate optimal code.
For example, Dynamo/RIO has shown that code optimized at runtime can be 10-15% faster than the fastest pre-generated compiler code, by optimizing hot paths and code structures to a degree.
We’re reaching the point where code will probably all become bytecode, designed for easier conversion to native code while conveying what the programmer was trying to do, which is what MS’s .Net, Mono, DotGNU and CIL are trying to achieve, along with multi-platform compatibility.
On the whole, it’s a win for programmers who want true multi-platform code. You still need those hard-core assembly programmers to write the JITs and bytecode-to-native compilers, but instead of diluting the work over multiple CPUs, you have a single code base to work on and can leverage a lot of the work once the infrastructure is in place.
Anyhow, best to email me to discuss if you’re interested.