As reported by KernelTrap, the CVS version of the yet-to-be-released GCC 3.4 is reaching parity with ICC on floating-point performance according to SPECFP2000. Its SPECINT results, however, still aren’t as good as ICC’s.
I think it’s too bad that most gcc developers work on IA32.
But anyway, it’s very good news. Most software on IA32 that doesn’t run on Windows will be boosted by such enhancements.
Ludo
—
http://homepage.mac.com/softkid
Considering that gcc is used for _many_ architectures, it is very impressive.
/R
I just hope that GCC 3.4 will be compatible with GCC 3.2.1. Sometimes compatibility is more important than speed.
“Sometimes compatibility is more important than speed.”
… and sometimes fixing bugs in the ABI is more important than compatibility.
Not to rain on the parade, but somehow I bet floating-point performance is more closely tied to a particular platform than integer performance is. I couldn’t care less about i386 performance.
It would be nice to know that valuable potential cross-platform improvements are not being neglected for the sake of speed on one platform.
I hear performance is pretty unacceptable on ARM, forcing many to use poor-quality commercial (but fast) compilers. I would love to see gcc become the de facto compiler for ARM.
gcc already performs better on i386 than on any other supported platform. It seems Apple is doing the best work in gcc right now.
(Let’s copy & paste my comment at kerneltrap)
Although I respect gcc, I don’t see it as likely that gcc could ever match icc, given that Intel knows more than anyone else about their processors and has a very good (and well-paid) team of engineers. My only reason for using gcc would be the number of platforms it supports. And on the IA-64 front, which will become quite a standard in a couple of years, ICC’s power is indubitable. In the EPIC arena, gcc hasn’t yet gotten far in harnessing all the potent features of the IA-64 architecture (like speculation and predication). That’s mainly why Intel developed ICC, and that’s where its power lies.
Are the OO features also getting faster? I think this is the major downside of gcc. It is quite a good C compiler, but once you start writing C++ using all the bells and whistles, such as exceptions and RTTI, it compiles really slowly and produces horribly inefficient code.
Projects such as KDE really could benefit from better C++ support. And they should *finally* have a stable ABI.
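For anyone wondering what “all the bells and whistles” looks like in practice, here is a minimal made-up sketch (not benchmark code). Exceptions, dynamic_cast and typeid all rely on compiler-generated support such as unwind tables and RTTI data, which is exactly where the complaints about slow compiles and bloated output tend to come from.

    // Minimal made-up sketch of "C++ with all the bells and whistles":
    // virtual dispatch, RTTI (dynamic_cast/typeid) and exceptions.
    #include <iostream>
    #include <stdexcept>
    #include <typeinfo>

    struct Shape  { virtual ~Shape() {} };
    struct Circle : Shape {};

    void check(Shape* s) {
        // dynamic_cast and typeid both need the compiler's RTTI data.
        if (Circle* c = dynamic_cast<Circle*>(s))
            std::cout << "got a " << typeid(*c).name() << "\n";
        else
            throw std::runtime_error("not a circle");  // exception machinery
    }

    int main() {
        Circle c;
        try {
            check(&c);
        } catch (const std::exception& e) {
            std::cerr << e.what() << "\n";
        }
        return 0;
    }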
GCC has some interesting properties. GCC development is done by the community, and as a result, performance improvements happen on the most popular architectures. When (if) IA-64 becomes the dominant architecture, GCC’s hackers will concentrate more on it, and its performance on IA-64 will improve, as its performance on IA-32 has. Second, GCC cannot use many existing, patented algorithms, and thus has to depend on newer developments and research work, or on older, more mature methods. Thus, GCC will probably never initially be as fast on a given architecture as the native compiler. However, as time goes on and GCC-specific methods are developed and implemented, GCC will improve to match the performance of native compilers.
The 386 didn’t have an integrated FPU (floating-point unit) at all; it was sold separately as the 387. The 486 was the first Intel processor to incorporate an FPU.
I’m betting that AMD’s 64-bit implementation will become more popular in numbers due to their policy of keeping 32-bit backward compatibility, as well as the planned Athlon 64 processors for desktop use. Furthermore, AMD doesn’t have its own compiler suite akin to Intel’s ICC, so we can expect them to support GCC. This might well turn out to be a problem for Intel in the long run unless they intend to change some of their plans.
The 386 came in a number of variants, and SOME did in fact come with an FPU.
Maybe they could have worked on the standards first instead of the speed?
If the compiler worked and followed a standard across versions, many open-source OSes could save TONS of wasted time making apps work with the “all new” ways of doing the same damn things as the previous version.
Using a GCC with a higher revision than needed should only produce better code, not unworkable code.
In short, whoop-te-doo, it’s faster but still breaks programs not written for this exact point release.
Don’t even get me started on the similar problems with Linux kernel modules…
Mutiny
Source compatibility with newer GCC releases is already easy to achieve. Binary compatibility is what we need.
Binary compatibility with the Linux kernel cannot be promised and shouldn’t be expected anyway, since external closed-source binaries are only tolerated even though they break the GPL.
They *did* work on standards first. There is a reason GCC 3.x is second only to Comeau C++ in terms of C++ standards compliance. Source compatibility was broken between 2.9.x and 3.x in order to make GCC standards conformant. If Visual C++ ever becomes as strictly standards compliant as GCC, then you can expect similar source compatibility breaks. As for binary compatibility, it’s not a huge priority in GCC’s target market. Linux and FreeBSD are largely C-based, so breaks in binary compatibility are localized to a few applications. 2.9.x wasn’t binary compatible because evolving support for C++ necessitated changes to the binary format. 3.0.x adopted the new C++ multi-vendor ABI, which necessitated a change in binary format. 3.1 and 3.2 were not compatible due to ABI bugs found in the implementation of the new standard. 3.2 is thought to be generally free of ABI bugs, so the official recommendation is to base your platforms on 3.2. The FreeBSD project has deemed it stable enough to base FreeBSD 5.x on GCC 3.2.
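To make the source-compatibility break concrete, here is the sort of pre-ISO C++ that older GCC releases happily accepted and that the stricter 3.x releases warn about or reject, followed by the standards-conformant form. This is a minimal made-up example, not code from any real project:

    // Old-style code relying on a pre-ISO header and an unenforced std namespace:
    //
    //     #include <iostream.h>                 // pre-ISO header
    //     int main() { cout << "hi" << endl; }  // unqualified cout/endl
    //
    // The standards-conformant equivalent that GCC 3.x expects:
    #include <iostream>

    int main() {
        std::cout << "hi" << std::endl;  // names now live in namespace std
        return 0;
    }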
As for Linux kernel module compatibility, do you think you’re the first person to have the great idea that kernel modules should be binary compatible? Binary compatibility was considered for Linux kernel modules. It was a conscious design decision by the kernel developers to keep the driver interface fluid. The Linux kernel changes *very* quickly. The kernel developers decided that it was worth a little pain on the part of distributions to ship recompiled modules with new kernels (which most distributions do) for the benefit of not stagnating the kernel or burdening it with huge amounts of compatibility code. If you don’t like their decision, that’s fine. Either think up a way to accomplish the many incompatible improvements made in recent kernels, or read linux-kernel and try to understand why things are the way they are.
Thank you for the explanation on the ABI issue; I was unaware that it was no longer a moving target. I’m very glad to hear it.
Do you think you are the first person to realize that the kernel developers wanted the modules to be tied to the kernel compilation, rendering modules virtually useless for end users? Calm down with the loaded remarks.
I was simply stating an opinion that until drivers can be loaded easily from a third party, without creating one module for each and every flavor of Linux, vendors will balk at making drivers and users will be put off.
How many “newbie” issues are because of compiling a module for a device? I’d bet over 70% of Linux questions are module/kernel-compile related.
If you could download a module that would work with all 2.4.x kernels life would be massively easier.
These two issues, now only one issue, are very real problems with Linux today.
Mutiny
This is OT, but here goes:
1) There are almost no binary-only modules for Linux. Binary-only modules really aren’t welcome on the Linux platform, not only because of licensing issues and ideology, but because external binary modules are a major source of problems for any kernel. Microsoft gets around the problem by requiring extensive driver signing and verification. Linux gets around it by pretty much mandating open source modules that can be easily debugged by the kernel developers. The realities of the kernel development model pretty much mandate the current solution.
2) If a binary-only module is *really* important to the hardware company, they can put up with a bit of pain to support one. NVIDIA, for example, seems to be doing just fine on that front thanks to their well-designed abstraction layer (which supports even bleeding edge 2.5 kernels). However, having a binary-only module is not that important to most hardware makers, who choose instead to just go ahead and release specifications. You’ll notice that most modern hardware is supported via open source drivers thanks to such companies.
3) All of the above is utterly irrelevant to the user. The user has no need to recompile his kernel or manually download driver modules. The distro should be doing this automatically. RedHat, Mandrake, and Debian *do* do this automatically. The Windows “the user has to f* with the drivers” model is obsolete. The user shouldn’t even know what a driver is. It’s a goal of the kernel developers (from what I gather on lkml) to make all drivers fully self-configuring. They’ve already advanced pretty far, to the point where only a few devices actually require any manual intervention.
In summary, if a certain piece of hardware requires the user to manually download a driver, the solution should not be to make it easier for the user to download and install that driver, but to make it entirely unnecessary. The fact that it isn’t is really what is holding back Linux today, not the lack of a stable driver API.
More piles of bullshit all over the map.
1) The conclusion that GCC 3.4 is “reaching parity” is wrong, wrong, wrong. Turn on more of what icc can actually do beyond “-O3”, such as -axW -ipo -ansi_alias, and the performance gap widens by miles (a rough sketch follows this list).
2) The C++ front-end actually does some special optimizations only for C++ because they were difficult to fit in generically. GCC isn’t an “only C goes fast” compiler; in fact, if there is any marked bias it is in favor of C++.
3) There are no patented algorithms that are holding GCC down. Please disable your made-up-out-of-your-ass-several-years-old-parroting-mode at this time, Rayiner.
4) “GCC 3.x is second only to Comeau C++” Nice try, fanboy. Just counting all of the C++ compilers that use the EDG front-end (icc, aC++, …) puts GCC well back in a very long line, nowhere near 2nd.
5) “3.2 is thought to be generally free of ABI bugs” Bzzrt! Score +3 for made up shit of the day for Rayiner. There are known bugs in the current 3.2.x ABI, but fortunately GCC isn’t planning on changing the ABI for quite some time as they promised some stability. The idea is to change it as few times as possible.
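Re: point 1, here is a rough sketch of the flag gap being described. The icc flags are the ones named above; the gcc lines are my own guess at a roughly fair comparison, not something from the post, and gcc of this era has no real analogue to -ipo.

    // Hypothetical build comparison for point 1 above (the gcc flags are guesses):
    //
    //   icc -O3 -axW -ipo -ansi_alias fp.cpp   // P4 code path, whole-program
    //                                          // optimization, ISO aliasing rules
    //   g++ -O3 fp.cpp                         // plain -O3, as in the SPEC comparison
    //   g++ -O3 -march=pentium4 -fstrict-aliasing fp.cpp
    //
    // fp.cpp can be any floating-point-heavy code, for example:
    #include <cstdio>

    int main() {
        double x[1000];
        double acc = 0.0;
        for (int i = 0; i < 1000; ++i) x[i] = i * 0.5;     // fill with some data
        for (int i = 0; i < 1000; ++i) acc += x[i] * x[i]; // simple FP kernel
        std::printf("%f\n", acc);
        return 0;
    }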
I’d love to see a distro that doesn’t need user intervention.
Right now I have several examples in front of me.
1. Adaptec 29230R: Adaptec has binary drivers for several distros, except for the current kernels or kernels released tomorrow. As long as I don’t BOOT from it and don’t upgrade, it works. The source version doesn’t compile into the kernel correctly, so I have to boot off of IDE or make an initrd. How’s that for no user intervention?
2. Nvidia cards: not a bad workaround, but every time a new distro (e.g. RH8) comes out, you have to wait for drivers even though they worked on the same kernel on the old distro.
3. LIRC: a simple comm-port IR remote package that has to be compiled into the kernel. Most distros support and include LIRC, but in most or all cases it is broken. RH, Mandrake and SuSE are examples.
4. Intel 845 audio. Nobody gets this to work without the 2.4.20 kernel. The patches simply do not work.
5. Intel Gigabit EtherPro: the driver is supported in some kernels, but RH doesn’t compile it. Intel supplies a patch.
6. Lack of NTFS support in RH. They don’t bother to compile it. There is actually a user’s site with precompiled ntfs.o’s for every stock RH kernel. That’s a great thing to leave out when every MS OS defaults to NTFS these last few years.
7. Security updates can hose your system when a driver no longer works with your new “fixed” kernel. Ever seen a user update their system as per the update notice and suddenly their video card quit working? I have.
Some of these are in the “don’t buy unsupported hardware” category, but having to compile drivers is plain stupid for an end-user OS. Linux in its current state punishes you for upgrading your hardware.
BTW, I am a Linux advocate. I am not a dual-booting newbie. Both of my systems only run Linux, but I can still point out obvious flaws in Linux’s mass appeal and plain old functionality.
Mutiny
Hey,
I understand where you’re coming from. I recently had a similar experience with a BTTV card (the Pinnacle) that needed a new version of the tuner drivers because the hardware manufacturer changed the tuner on the PCB. I thought 2.4.20 would cover this, but apparently it didn’t.
You have to realize that people only start working on updating the drivers _after_ they have access to the hardware. If the manufacturer had felt like updating the driver source, they could have done so before releasing the card.
modversions is the best effort to provide the ability to use modules across kernels. But the fact is, the kernel moves fast – even with modversions the interfaces change sometimes. If you want your hardware working out of the box, try to get your driver into the kernel (yes, that means GPL/BSD’ing it) and your hardware will work out of the box, no separate drivers required, on all Linux systems.
If you insist on having binary-only drivers, NVidia’s approach is the best. And no, I don’t have to wait for drivers from them – you can just as easily recompile them yourself; that’s a choice you have. Debian provides an automatic recompiler package, and I’m sure something similar must exist for RedHat somewhere.
So I’d say there are four categories of problems:
1) braindead companies that provide nothing at all (no drivers, no specs)
2) braindead companies that just provide a (usually very low-quality) binary-only module for an old redhat kernel.
3) only slightly evilish companies that insist on providing binaries, but at least make sure there’s a wrapper that gives you the freedom to run any kernel you like (read: NVIDIA). If you’re using other architectures you might still be in trouble (read: PowerPC), but in some cases you have a two-part driver that’s still functional without the binary part (read: the pwc driver for Philips/Creative/Logitech webcams, which works even without its binary-only pwcx module containing the proprietary compression routines).
4) companies that provide specs, but no one has gotten around to making or updating a driver, or someone has but it’s not included in the latest kernel release yet.
Of these, I will not buy from 1) or 2), and may buy from 3) or 4).
The FreeBSD project has deemed it stable enough to base FreeBSD 5.x on GCC 3.2.
Not only the FreeBSD project, but most (if not all) open source OSes, including Darwin.