GCC isn’t officially released, as it hasn’t finished propagating to all the mirrors yet. Official release is tomorrow.
April 18, 2004
The GNU project and the GCC developers are pleased to announce the release of GCC 3.4.0.
Source: http://gcc.gnu.org/gcc-3.4/index.html
great…they broke some ABIs…again…MIPS this time was one…joy
http://freshmeat.net/screenshots/3088/
Just wait until new versions of glibc and binutils come out, and rebuild against 2.6 headers with NPTL.
This isn’t exactly useful, except for people doing this: http://belfs.linux-phreak.net/
Will the new exception model eventually work with the GNU runtime?
I just hope that it’s now possible to generate MorphOS binaries properly… still using 2.95…
After glibc is rebuilt, is all cool again, or do the end-user programs need to be rebuilt too?
How can Windows maintain backwards compatibility while Linux always needs a rebuild? (That’s not a bagging of Linux; stuff broke between Mac OS X 10.1 and 10.2 too.)
confoosled!
The difference is that Microsoft defines the Windows C++ ABI, whereas Linux is trying to use the multi-vendor standard. The Linux C ABI, for instance, has remained quite stable, since there’s been no attempt to make it conform to anything else.
Anyhow, the GCC 3.4.0 Linux ABI changes only affect C++ programs, meaning glibc/XFree86/GNOME don’t need to be rebuilt. KDE/OO.org etc. do, though.
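To make the C-vs-C++ difference concrete: the C++ ABI shows up in things like name mangling. A rough sketch (the symbol names below assume the multi-vendor Itanium C++ ABI that GCC 3.4 follows):

    /* A C symbol is just its name, so it survives ABI revisions: */
    extern "C" int add(int a, int b);       /* symbol: add */

    /* A C++ symbol encodes namespaces and parameter types per the ABI
       (on top of class layout, vtable format, etc.), so a mangling
       change breaks every existing C++ binary: */
    namespace m { int add(int a, int b); }  /* symbol: _ZN1m3addEii */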
I wonder if the binary size has improved at all. I’m sick of GCC generating code twice as large as Visual C++ 6. It’s a joke that GCC can get away with such wasteful output.
And yes, I know about stripping and debug info and all that, and it’s still 2x the size of the same code compiled on Win32.
Probably not too much. Visual C++’s compiler is famous for producing some of the smallest binaries for x86. Even Intel’s compiler isn’t anywhere close. Perhaps GCC 3.5, with its new SSA-based optimizer, will show some improvements.
The release note only says the ABI was broken on MIPS. x86 users should probably be unaffected.
# Tuning for K8 (AMD Opteron/Athlon64) core is available via -march=k8 and -mcpu=k8.
Yay. Wonder if there’ll be any noticeable difference in executable speed.
In the section on Runtime Library (libstdc++) optimizations, they mention that static linking file sizes have been reduced. But if you’re keen on small file sizes and code in C, lcc-win32 generates by far the smallest executables on the Windows platform.
One of the reasons Windows is backwards compatible is that Windows compilers target…(drumroll)…Windows, whereas GCC targets a number of platforms.
And even Windows binaries aren’t always compatible; some programs only work on WinXP/2000.
Secondly, nobody forces you to use gcc-3.4. You can still get gcc-2.95 from 1999 if you have to compile for an old system.
Thirdly, these ABI changes correct bugs in previous versions.
And, this is 3.4, which is not a bugfix of 3.3 but a new major version.
I remember reading on a Gentoo forum that gcc 3.4 contains features that will improve compile times for C++ applications, most specifically Qt/KDE apps.
Sorry to sound stupid, but is this true?
It should also be noted that Windows makes compilers adhere to a less-than-ideal ABI. For example, Visual C++ to this day cannot do zero-overhead exception handling. Its EH model is built on top of SEH, which cannot use that approach.
I remember reading on a Gentoo forum that gcc 3.4 contains features that will improve compile times for C++ applications, most specifically Qt/KDE apps.
Sorry to sound stupid, but is this true?
According to the change notes, the parser is now hand-written rather than YACC-generated, which should be the first big speed improvement.
And then there is the option to precompile headers, which is another improvement.
Both combined should result in faster compile times.
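For anyone who hasn’t tried precompiled headers: the basic GCC 3.4 workflow is roughly this (the header name all.h is just an example):

    /* all.h -- one shared header that every translation unit includes
       first; it pulls in the expensive stuff (e.g. Qt/STL headers). */
    #include <vector>
    #include <string>
    #include <map>

    /* Precompile it once:   g++ -O2 -c all.h     (writes all.h.gch)
       Then build as usual:  g++ -O2 -c foo.cpp
       If foo.cpp does #include "all.h" as its first include and the
       flags match, GCC picks up all.h.gch automatically instead of
       reparsing the header. */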
Have there been any changes in the linking times as well?
Is someone aware of compile-time (NOT runtime) performance of GCC, especially for C++, with or without precompiled headers, and how it compares between 3.3.x and 2.95?
SEH is something different from C++ EH. Also, how should zero-overhead exception handling be possible? It works, it does something, so it must leave traces, both in the form of generated code and in runtime cost (however negligible or not).
Is there some link to new benchmarks regarding compiled code performance/size (to get a rough idea), especially benchmarks comparing different compilers? The latest stuff I’ve found dates to the 3.2 version.
Some say VC++ is much better than GCC on x86, but I’d like to see whether that’s religious wars or fact.
stF
Structured Exception Handling (SEH) is different from C++ Exception Handling (EH), true, but Microsoft based their C++ EH implementation on top of SEH.
Try a simple program which would normally abort the program (such as dereferencing a null pointer):
#include <stdio.h>

int main () {
    try { int *n = NULL; *n = 42; /* dereference a null pointer -- bad! */ }
    catch (...) { printf ("caught SEH exception!\n"); /* works on MSVC when SEH-aware EH is enabled (/EHa) */ }
}
Microsoft’s C++ will let you dereference the null pointer, and catch the hardware exception, using Windows SEH. Other compilers/runtimes will abort the program (G++ on Darwin, for example, generates a “Bus error”).
Microsoft’s approach may not be zero overhead, but it can certainly be useful for preventing program crashes, adding additional program logging, and other functionality.
Elsewhere, MS releases its free (as in free beer) C/C++ compiler for Windows and the .NET CLR:
Microsoft Visual C++ Toolkit 2003
http://msdn.microsoft.com/visualc/vctoolkit2003/
> -march=k8
No difference unless you run your system in 32-bit mode. If you run it in 64-bit mode, the compiler already assumes it’s a K8.
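A toy illustration of what -march=k8 changes (hypothetical file scale.c; note that on 32-bit x86 GCC also needs -mfpmath=sse before it will use SSE2 for floating point):

    /* scale.c -- one-liner to compare the generated assembly */
    double scale(double x) { return x * 1.5; }

    /* 32-bit:  gcc -O2 -S scale.c                         -> x87 code (fmul)
       32-bit:  gcc -O2 -march=k8 -mfpmath=sse -S scale.c  -> SSE2 code (mulsd)
       64-bit:  SSE2 math is the baseline, so -march=k8 is mostly a
                tuning/scheduling hint, as said above. */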
P.S.: Sorry for the short comment (lynx’ textarea is so small).
I don’t have benchmarks, but my estimate is that 3.x takes at least twice as long to compile as 2.95 did.
These links provide a little info on the subject…
http://gcc.gnu.org/ml/gcc/2004-04/msg00840.html
http://gcc.gnu.org/ml/gcc/2004-04/msg00913.html
You are right. But dereferencing an invalid pointer results in undefined behaviour, which means your program may or may not crash, but is in an undefined state from that point on, and nothing can be relied upon afterwards. So there is no argument about how efficiently or elegantly any OS or C++ compiler handles these situations: philosophically speaking (in terms of C++ philosophy), after that point there is no running C++ program anymore, just a ticking bomb (which may or may not go off).
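A sketch of why “ticking bomb” is the right phrase (my own hypothetical function; any conforming optimizer is allowed to do this):

    int read_value(int *p) {
        int v = *p;       /* if p is NULL, this is undefined behaviour */
        if (p == NULL)    /* the compiler may assume p != NULL because of
                             the dereference above, and delete this check */
            return -1;
        return v;
    }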
Let me also thank Karl for the link, anyway…
I downloaded and compiled gcc.
I have a std/Qt based project that I’m working on and wanted to know if I could speedup the compiling.
Here are the results for my particular case:
gcc 3.3.2
102.73user 3.58system 1:58.96elapsed 89%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (16853major+93643minor)pagefaults 0swaps
gcc 3.4
84.53user 2.90system 1:36.36elapsed 90%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (14739major+75245minor)pagefaults 0swaps
The linker warns me that the older libstdc++ used in Qt MIGHT conflict, but this seems not to be a problem for the Qt libraries.
I installed GCC as a user in my home dir and built it with profiledbootstrap. The only things that had to be done after installation were updating the PATH variable and ld.so.conf.
The most notable speedup I got was in the linking phase, which used to get on my nerves the most.
Enjoy C++
Visual C++’s C++ EH is built on top of SEH. And zero-overhead refers to zero runtime cost in the case of non-taken exceptional paths. SEH has to do some fiddling with the stack on entering try blocks, and on executing function calls inside try blocks. G++ uses a table-based approach where the only runtime cost is an exception table in memory, and that table is in a separate section of the executable, so it’s not loaded into memory unless needed.
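A rough sketch of what zero cost on the non-taken path means (my own toy example, not from the GCC docs):

    #include <cstdio>

    int parse(int x) {
        try {
            if (x < 0) throw x;  /* only an actual throw pays the unwind cost */
            return x * 2;        /* hot path: with table-based EH this compiles
                                    to the same instructions as a try-less
                                    version; the unwind data lives in separate
                                    sections (.eh_frame / .gcc_except_table) */
        } catch (int e) {
            std::printf("caught %d\n", e);
            return -1;
        }
    }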
I cannot really believe that you can implement C++ exception handling on an SEH basis.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vc…
For g++: I agree that the overhead is minimal, but I wonder how the true zero overhead for the no-throw path may be achieved:
http://gcc.gnu.org/ml/gcc/2002-11/msg00101.html
GCC 3.4.0 Linux ABI changes only affect C++ programs
Actually, they did change the C ABI this time. There were a lot of changes for SPARC and MIPS; and for x86/AMD64 they changed the way MMX and SSE operands are passed (the new behavior matches the icc ABI).
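For anyone wondering what “the way MMX and SSE operands are passed” means in practice, here’s an illustrative function (my example, using the standard intrinsics header):

    #include <xmmintrin.h>

    /* Under the revised ABI a __m128 argument travels in an SSE register
       (matching icc), rather than going through the stack -- so code like
       this links correctly against icc-built objects, but not against
       objects built by older GCC versions. */
    __m128 scale(__m128 v, __m128 s) {
        return _mm_mul_ps(v, s);  /* multiply four packed floats */
    }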
See a Microsoft developer’s answer to why Visual C++ cannot do zero-overhead EH:
http://msdn.microsoft.com/chats/vstudio/vstudio_022703.asp
As for the link you pointed out: there are actually two issues. The first sounds like a gcc bug. The second is a general property of all table-based exception mechanisms. They prevent -fomit-frame-pointer from having any effect, because they need a well-defined stack frame. This is only really a (minor) issue on x86, which might otherwise get away with not having a well-defined stack frame. See:
Here http://msdn.microsoft.com/chats/vstudio/vstudio_022703.asp
They say things like:
Q: Why did it take so long to reach an acceptable level of conformance?
A: Releasing a product the size of Visual Studio takes a long time. In VS.NET 2002, we were primarily focused on features other than conformance (such as attributes and managed C++). Based on customer feedback we are now much more focused on C++ conformance.
AND
A: In general, we have seen little demand for many C99 features. Some features have more demand than others, and we will consider them in future releases provided they are compatible with C++. It is more likely we’ll entertain C99 features if they are picked up in the next version of the C++ standard.
So they don’t care about standards; all they care about is what customers ask for…
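(For context, a couple of the C99 features in question -- a minimal sample of my own; MSVC’s C compiler of that era rejects all three, while GCC accepts them with -std=c99:)

    #include <stdio.h>

    int main(void) {
        int n = 4;
        double a[n];                                  /* variable-length array */
        struct point { int x, y; } p = { .y = 2, .x = 1 };  /* designated
                                                               initializers */
        for (int i = 0; i < n; i++)                   /* declaration inside for */
            a[i] = i;
        printf("%d %d %g\n", p.x, p.y, a[3]);
        return 0;
    }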
Well, since it’s the customers that buy their product…
How many customers were asking for managed C++?
Don’t think of it as “how many people were asking for Managed C++”. Think of it as “how many people want to be able to fully use their existing C++ code?”
Or, “how many people want to continue using their fully written, debugged, and working C++ libraries?”
No one wants to rewrite code unless they have to. Especially customers who have their own customers, and want to continue supporting their existing (C++) customers, but also want to support new .NET customers.
So, what do you do? Use COM? Yech. Write C wrappers to P/Invoke against? Double Yech.
Or, do you provide a product which allows the existing C++ codebase to be used, effectively unchanged, while exposing some of that code to .NET consumers? That is what Managed C++ attempted to do. Granted, the original syntax sucked (which is being rectified by ECMA’s C++/CLI standard), but it fulfilled its purpose in life: making C++ interop easy for developers.
Hi Rayiner,
this looks like a really good link. I need to read (and digest) that first before i can really answer.
Thanks for the info.
best regards,
Frank
I have to admit that I unabashedly, viscerally detest GCC to the very depths of me. That compiler--that god damn, hacked-up, shit bag vomit compiler is full of half-assed “optimizations,” phenomenal, aggressive, and anti-social standards non-compliant default extensions, and utterly incompetent quality control.
In my life, I have written hundreds of thousands of lines of C code, and the vast majority of them have at one time or another been molested by the rotting corpuscles of this 20-year-old half-starved, bullet-ridden, carnivorous dinosaur that takes a mile where an inch is given and promptly barfs on its own shoes. It exhibits irrational behavior on complex code; schizophrenic manic depression on mind-numbingly simple but frighteningly unnatural and obscure constructs that few people on earth could identify correctly on a multiple choice test. It regularly invites missteps that are much later rewarded with impossible results; coupled with gdb it conspires to obscure program meaning--lying repeatedly, sweeping details under the rug, stonewalling investigation in the darkest paths of code, and failing horribly at providing any type of contract to the programmer. If GCC were the space shuttle, it would soar triumphantly into the sky to a perfect circular low earth orbit, deploy the satellite payload to its appropriate orbit, while ground control would pop champagne corks and cheer, then promptly smash the satellite to smithereens with the retractable arm, sending it hurtling into other satellites, then begin spinning wildly at extreme velocity, killing all crewmembers, transmitting high energy pulses that jam all communication on earth, and then begin a fiery descent into the most populated city on the planet, culminating in a Hiroshima-scale detonation, while simultaneously creating the most destructive viral chemical reaction possible from pure nothing which would burn off the entire atmosphere of the Earth and all life would cease to exist.
I write compilers. And that is my professional opinion.
Looks like somebody forgot to take their medication this morning.
I agree. So please write us a new open-source compiler
That was the most impressive mindless rant I’ve seen here in a while! Good stuff!
BTW, can you do another one about Linux? That’d be swell!