CoyoteGulch.com has published an interesting article benchmarking GCC 3.0.4 and ICC 6 (the article will be updated again after GCC 3.1’s release). ICC pulls ahead in most tests, and when it’s behind, it’s never very far behind. The opposite is not true: there are benchmarks where ICC is very far ahead, generating code that can be up to 5 times faster than gcc’s. Especially interesting is the “Stepanov” benchmark, which shows that ICC is capable of understanding the most complex C++ constructs, whereas GCC gets confused by them and ends up generating much slower code. This is bothersome, because it means that developers who want to get the most speed out of their gcc-compiled system need to write their code pretty much in C, whereas those using ICC can use all the productivity-enhancing features of C++ without a speed penalty. The “Whetstone” benchmark, meanwhile, shows that gcc still has serious issues dealing with x87 floating-point code. It would also be very interesting to run the very same benchmarks on a Pentium 4, varying the compiler options, in order to see how both compilers can take advantage of the Pentium 4’s extra features, especially SSE2. Update: Another benchmark can be found here.
since it does not compare the merits of GCC with MS C++, Borland C++, or any other 3rd party compiler targeting Intel.
one should expect the semiconductor producer to have a better compiler for its product than a 3rd party developer does.
GCC 3.1 is significantly better than GCC 3.0.x, so it’ll be interesting to see the updated benchmarks.
There are apparently lots of improvements, speed wise, in 3.1, so the updated comparison after 3.1 is released will IMHO really decide the debate.
The benchmarks were done under Slackware Linux, therefore it is irrelevant to try with MsVC++ or Borland C++.
As for the “semiconductor producer has a better compiler” argument, so what? What matters is that ICC is faster. And don’t forget that AMD does not have a compiler. In fact, AMD itself uses ICC when publishing their bench results at spec.org! Motorola has a G4 compiler, but Apple does not even use it…
Eugenia, I think that many people already said in the other threads that GCC does NOT have the same aims ICC has.
The speed comparison between C and C++ code blew me away. It’s incredible how little, if any, overhead is added by ICC.
I wonder how well these closed source yet free compilers will be accepted by the community. Obviously there will be the vocal minority screaming bloody murder because it’s not free.
What would be absolutely FANTASTIC is if intel realeased non commercial/unsupported versions for other OSs like the BSD line.
I thought ICC cost 500 bucks.
oh and Eugenia, I was, as I state above in this post, under the impression that ICC cost money.
that matters to a lot of folks, but anyway.
I was not ranting about ICC being better than GCC, I am fine with that. I was just saying that you cannot assume that GCC is bad based on that comparison, as it does not compare against other 3rd party compilers. To make it fair, we could run GCC on Windows against ICC, VC++ and Borland. Then, if GCC turns up last, I would not have a problem with the comparison.
Saying ICC is better is, like I said in my previous post, to be expected. However, that does not reflect on the worthiness of GCC, as it is a 3rd party compiler and it should be compared to other 3rd party compilers, with ICC as a standard to compare against.
Ruprecht, I agree! I hope Intel will port their compiler to FreeBSD. This would be so cool!
GCC is about supporting all, or as many, systems and languages as possible.
Can you use ICC on a PPC or MIPS system?
Does ICC run on Linux on an IBM mainframe?
How about if I run Linux on a Sun Sparc, can I use ICC then?
I think you get my point. If the GCC developers only had to support x86 then GCC would be very close to ICC in speed.
What about Ada95? Does ICC do Ada95? Because if it can’t, then it’s of zero value to me, because I program in Ada.
Gnat (the GNU Ada compiler) is based on GCC and does Ada95 quite well.
Oh and GCC is of course free as in beer and source.
Thanks but I will stick with GCC so that I am not dependent on the whims of Intel.
If it’s free, closed Intel OS developers should license that baby!
Metrowerks should check it out!
Palm should definitely license and use it for BeOS R6 (in addition to GCC of course)
An OS News comment thread just wouldn’t be right without a BeOS comment!
ciao
yc
Besides the performance increase (my current coding project runs very nearly 2x faster when compiled with icc 6.0) this Intel Compiler is worth every penny (err… none for the evaluation version) just for its OpenMP support and the neat profiling tools.
That makes two threads in a day in which zealots claim that speed does not matter. I think gcc and Mac users have something in common.
well, if you are trying to make very portable code with one makefile, speed does not matter at all, since you will need a very portable compiler that reads your makefile.
GCC is where it is at for that.
but, if you are making a single-arch program, then go with whatever you want.
The tests are certainly “meaningful”, in that, for a majority of PC and Linux developers, the Intel architecture is the platform of choice/necessity. If I’m writing code for an Intel-based, number-crunching Beowulf cluster (which I am), it is very useful to know that Intel’s compiler produces the fastest code.
GCC 3.1 still looks to be a few days off; I do not want to publish benchmarks with prerelease or snapshot code. When I get the release version, I’ll update the benchmarks.
I totally agree that gcc has different aims than icc; people need to read my “Conclusions” to see what I really think, as opposed to jumping to conclusions based on comments in this thread. Let’s just say I haven’t removed gcc from my system yet.
Intel’s compiler is available under a “non-commercial” (no cost) license. This isn’t the same as the GPL, but it does make Intel’s complete C++ and Fortran 95 product available for non-profit use. Again, see my review.
I can’t say anything about the Borland compiler because I’m under non-disclosure. As soon as I’m free to talk, you can be assured that I will!
C++ is nothing but trouble and bloat. C is where all the action is, it is the very heart of programming. how many other readers agree???
> how many other readers agree???
Definitely not me.
C vs. C++ depends on what you’re trying to accomplish
some things lend themselves more readily to procedural design, others to OO design
I like Objective-C; IMO it is C++ done right :)
“I can’t say anything about the Borland compiler because I’m under non-disclosure. As soon as I’m free to talk, you can be assured that I will! ”
Ahhh, so you are a beta tester for the C++ version of Kylix
Technically, you just violated your NDA by saying you are under a NDA with Borland
C++ is perfectly capable of running code in a procedural design.
Ever hear the adage “using C++ as a better C”?
That refers to not utilising C++’s OO features.
The only catch is that you still end up with slightly slower code than if you did the same thing in C.
“C++ is nothing but trouble and bloat. C is where all the action is”
Uh huh. 9 years between language enhancements…that’s a lot of “action”.
C code will eventually go the way of COBOL: lots of it around, but not much fun to use compared to the newer object-oriented languages.
I just went over to Intel and even the Academic version was $100 for Windows and Linux, where the heck do you get the no cost one?
what is funny is that I heard a COBOL programmer say one time that C++ was not such a big deal, since he could do everything that C++ did in COBOL :)
“since he could do everything that C++ did in COBOL :)”
Oh my! COBOL must have come a long way since I last used it
Yes, gcc supports a whole range of other CPUs. But its documentation is horrible, and the API and binary compatibility issues can be very important indeed. ICC on the other hand tries to do its best on Intel’s own CPUs, and I would guess it gives even better results on the P4 (Xeon).
What matters here is which compiler can produce better code. It’s nice and snuggly if you want to play with free software, but when it comes down to businesses, running your code faster means a lot. Those who go on and on about how Linux should be used more commercially, this is what Linux needs. Being able to serve several more users on a server will very quickly eat up the very small cost of a compiler like this.
And the comment about C/C++: how much do you know about object orientation? Have you worked with any languages other than C? Coding C with a few classes and a C++ compiler isn’t the same thing as coding real C++. I urge you to take a look at KDE/Qt for an example of how powerful C++ is. The source code is there and everything. And I would also suggest taking a look at Ruby, which is a very beautifully designed object-oriented language.
Can the intel compiler run against srpms? I’d like to see how incompatible it is with the gcc. I’d just love to compile XFree, the kernel, and Gnome under Intel and see what sort of differences there are.
that is a really cool language!!! I read about it in last month’s Linux Journal… my god, I thought Python was nice… Ruby is really nice, though I would have to say Python is easier for someone to use as a learning language.
Starting with ICC v6, Intel has tried to better support GCC’s arguments and “make”. However, some minor tweaking will always be required.
I believe that the biggest problem will be with C++ programs, not with the plain C ones. ICC has better C++ compliance, therefore it will throw errors when, let’s say, trying to compile KDE with it. GCC 2.9x was very broken as far as C++ is concerned, and illegally allowed programmers to write bad C++. GCC 3.1 has fixed this; it is more compliant with the C++ standard. As long as your C++ program compiles with GCC 3.1, it should compile with ICC too. C programs should not have (many) problems.
“The Intel® C++ Compiler for Linux is substantially source and object compatible with GNU C, and is designed to work with other commonly used GNU development tools such as linkers and debuggers. ”
this is what it says about compatibility
oh… and I found the non-commercial use version :)
The June issue of DDJ has article “Test C++ Compilers for ISO Language Conformance”. GCC scores pretty high. I wish they included ICC though.
High as in High marks or High as in high number of non-conforming factors?
I agree… gcc was very weak compiling C++ code. Starting with release 3.0, things have gotten to the level of conformance to standard C++ that other compilers have been at for years (including Microsoft). Perhaps being gcc-compatible is not something we should strive for; it is probably better to aim for standard C++ conformance.
The main issue here is NOT that Intel’s compiler produces faster code because it compiles code ONLY for the Intel processor; the main issue is that gcc is weak at optimizing C++ code into intermediate code, thus giving optimizations in later steps of the compilation a worse starting point than the Intel compiler has. Saying that this shortcoming in gcc is a tradeoff when writing a portable compiler etc. is just a lot of bull. The difference here appears to lie in the ‘front end’ of the compiler where the actual C++ is analyzed (and optimized). Perhaps the performance of gcc-compiled C++ code will pick up in time, as they will have time to improve C++ optimization.
Why don’t we just congratulate programmers at Intel for showing us that there need not be any significant overhead in performance using c++ instead of c. Nice to have it for free for non-commercial use.
Speed does not matter: no, I won’t even comment on that. :)
I love this argument. C definitely has far fewer uses than C++. C++ is good for procedural, object-oriented, generic, etc. programming, while C does procedural programming. There is nothing C can do that C++ can’t (because you can always write essentially C code in C++). Even stuff where C is traditionally very important, such as OS kernels, could probably be better accomplished with C++. If you think about it, a kernel is essentially object oriented. There are process objects, file objects, driver objects, etc. The fact that most kernels already emulate certain C++ features (such as how many data structures in the Linux kernel, like virtual files and driver objects, essentially have virtual function tables) lends credence to the “goodness” of C++ in a kernel. Then there are templates, which are great “speed vs. space” tradeoffs, since they allow much more inlining than equivalent C code. Take, for example, a generic data structure (kernels rely heavily on appropriate data structures). In C, if you want to write a generic data structure, you have to pass a lookup routine a comparison function for each object type. This is an indirect call on every iteration of a lookup. In C++, you can use a function object which is automatically inlined by the compiler, eliminating all function calls altogether! After using the STL for a while now, and trying to apply STL concepts to an OS kernel, I’m a convert. C++ kicks ass!
So does this mean any of the current Linux distros could be built by ICC & if so would they be that much faster, and are there GPL issues associated with that?
And would OBOS benefit too, if the OBOS team can produce measurably faster builds, I will buy them a license, the least I can do for them since I can’t help with code.
No, it takes a lot of work to “fix” all these broken C++ KDE applications to compile with ICC. Even making them work with GCC 3.1 (which now has good C++ compliance, as opposed to the old GCC 2.9x) would be painful for either the distro company or the individual programmers, who would now have to write compliant code.
In fact, the C++ issue is the most important reason why GCC 3.x itself is not yet widely adopted: most of the KDE apps, and KDE itself, won’t compile out of the box.
Let’s see how gcc compares to ICC when compiling a kernel for my netbsd IBM z50 (mips processor), my Alpha UDB, or one of our PPC linux boxes at work.
Oh wait, what’s this you’re telling me? It can’t compile for any of those machines?
What kind of lousy compiler is ICC?
Adam
>What kind of lousy compiler is ICC?
Who said that a compiler SHOULD be portable in order to be used, or to be called “fast” or “powerful”? Is that what determines if a compiler is good or not: its portability, or the kind/quality of code it generates? ICC is the Intel compiler. It is THE compiler for 95% of the computers out there: Intel and compatible x86 and 64-bit. That is what most (not all) of the people (including the readers over here) care about. GCC can support other platforms. So what? This test is about x86 compiling capabilities.
Stop behaving like a kid, Adam. I have respect for you; you can do better than mouthing off against ICC because it happens to be better than GCC. People have worked just as hard on it as people have worked on GCC. Calling it “lousy” just because it ain’t Free or whatever is just not right. It is childish and **unfair**.
You mentioned your z50, Alpha and PPC boxes. You did not mention your Intel box though. Food for thought.
I’m incredibly biased when this topic pops up, which causes endless unproductive debates in my office. You see, I’m biased towards plain old C, and even though I respect the C++ philosophy, I don’t think it’s the panacea everyone thinks it is. Why?
C started as essentially a cross-platform assembly language. A very low high-level language (oxymoron alert). You’ve all probably heard the phrase: “C: a language that combines all the elegance and power of assembly language with all the readability and maintainability of assembly language.” As a low-level language, it’s great for writing speed-critical routines (like kernels and drivers) without actually using assembly. Some people will bring up an argument about modern CPUs doing ‘out-of-order’ processing and the influence of pipelining and flushing etc., but a good engineer can still outperform a modern compiler if they have the time to do it.
C++ introduces concepts like polymorphism, operator overloading, OO philosophy and such. These concepts allow developers to use higher-level language constructs. But guess what: C people have been doing polymorphism for years (function pointers), OO primitives (classes) have existed for years (structures), and much of C++ introduces hacks to break encapsulation (friends, static methods/members etc). Granted, one of C++’s best features is operator overloading, but internally the compiler needs to create separate instances anyway, so the implementation is hidden from the developer but still exists. I still think that a competent human can better structure this separation than a compiler can.
The biggest reason I see academics pushing C++ over plain old C is something I call forced structuring. C can be badly structured, hacked into spaghetti in no time. C++ requires a fair bit of discipline, and literally forces you to have better structured code. At the end of the day, code will need to be maintained and read by a human. As Martin Fowler (the Refactoring dude, the book is highly recommended) wrote “Anyone can write code which a machine can understand – the trick is to write code which another human can understand.“. C++ helps write structured code, but I’m sure that we’ve all also seen badly written OO code.
At the end of the day, use the tool which will help you accomplish your task easiest. But for Pete’s sake, do not create a religion out of a philosophy. Do not a priori dismiss POC (plain old C) as a remnant of the ’80s. If C++ was the panacea everyone claims it is, why do so many intelligent and very capable developers still insist on using POC? Maybe because POC is still the closest you can get to a cross-platform assembly language.
PS. Enter Tao Stage 2 with VP.
C++ still has some key advantages over C. You can always code C-style code in C++, but you can do it better. Take templates, for example. If you want ultimate performance, you’d adapt each data structure to a particular type. If you write separate data structures for separate types, you kill code readability. With templates, you can have data structures specialized for their types (i.e. no function pointers such as comparison routines or allocation routines) but let the compiler do the work of customizing the data structure. Also, stuff like polymorphism is used in C code all the time. Take a look through the Linux (or GNOME or XFree86) sources and see all the tables of function pointers. C++ simply makes this syntactically cleaner, but compiles it to the same machine code as C implementations. That’s the nice thing about C++ vs. C. It gives you lots of options, but doesn’t force you into any particular mold. If POC-style code is the way to go, then fine. You can do that in C++. As for cross-platform, g++ 3.x+ really makes that a non-issue except for the most extreme cases.
BTW> This thread is OT, so this is the last you’ll hear from me on this topic…
BTW, Cobol is available for the .NET platform. It is very strange to see WinForms and ASP pages being written using COBOL.
-G
“Let’s see how gcc compares to ICC when compiling a kernel for my netbsd IBM z50 (mips processor), my Alpha UDB, or one of our PPC linux boxes at work.”
Why does that matter? Why would Intel (or anybody for that matter) support those platforms that have a combined user base of about 6 people?
-G
That was my point… Portability is not something that everyone wants from their compiler.
Speed, on the other hand, is something that many people are willing to compromise in exchange for portability.
Oh, and I never said that portability is what makes a compiler good or not. :) But it is a major advantage.
Zenja
the big difference is that C++ does all the work for you, so you do not have to spend all that time writing a bunch of code over and over again.
And don’t forget that AMD does not have a compiler. In fact, AMD itself uses ICC when publishing their bench results at spec.org!
AMD uses GCC for its new processor: http://www.x86-64.org/
First things first: C++ can be made to do everything one can do with C easily, and with about as good efficiency as C (bar some of the newer things in C such as VLAs and restricted pointers). The real problem with C++, and the reason why I don’t use it, is size: C++ is an insanely huge and complex programming language. As it happens, one of the things that has come up over and over again in this thread is KDE not compiling on ICC due to non-standard code. I also saw a post comparing compliance of C++ implementations; both problems stem from the simple fact that C++ is too large and convoluted to be easily understood or, for that matter, implemented.
Thing is, it is a well-established fact that the quality of a language is absolutely not the sum of the usefulness of its features. The second “big” IBM language designed after FORTRAN followed that principle and included everything and the kitchen sink; in fact it probably has far more useful features than C++ has or ever will. The language was called PL/I. Ever used PL/I? I doubt you have; it is a convoluted, unusable mess.
Same thing goes for Ada: it has all the features of C++ and more, but it too is a convoluted mess and largely unusable. In my opinion C++ has the same problems: too hard to understand and too hard to implement. People love to say “you don’t pay for what you don’t use”, but that is bullshit of course, since you have to learn every little detail of the language to read other people’s source anyway, since everyone uses a different subset.
Language design is about picking the right features based on the readability, writability and simplicity the final language ends up with. In my opinion C++ does a poor job.
They didn’t even use profiling features of gcc/icc.
Or compare -O2 to -O3 with gcc. There goes the value of the article.
http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
-fprofile-arcs and -fbranch-probabilities
/opt/intel/compiler60/docs/c_ug_lnx.pdf
-prof_genx and -prof_use
For example, povray-3.1g is 19% faster with profiling with ICC, but only 1% faster with profiling with GCC.
BTW, newsflash: gcc 3.0.4 has a different inlining limit than earlier 3.0.x releases:
g++ -O2 -march=pentiumpro: 13.73s
g++ -O3 -march=pentiumpro -finline-limit=10000: 14.26s
g++ -O3 -march=pentiumpro: run time 59.40s
icc: 11.33s
I find it interesting that just about a week ago, you told people on this board that it’s unfair to compare windows boot time to the boot time for Linux with a simple window manager, since they provide completely different services with different functionality. A fair comparison would be to compare windows boot time to linux boot time with KDE or Gnome.
Yet you then turn around and have no problems comparing gcc and ICC, despite the fact that they provide completely different services with completely different functionality.
No offense, Eugenia, but does anyone else see some slight hypocrisy in this? :)
> Because KDE itself won’t compile out of the box
Which KDE apps are you speaking of? KDE2 apps or third party apps? I am typing this from Konqueror 3.0 that I built using gcc 3.0.4, along with all the rest of the KDE3 tarballs, and didn’t have any problems (with CXXFLAGS+=-O3, even!)
iirc, ICC generates code that is optimised for parallelism in the processor, because Intel chips don’t rearrange the instructions but instead rely on clever compilers like ICC.
meanwhile, iirc, Athlons (like Alphas etc) do instruction rearrangement ‘live’ in the processor at runtime, so are less choosy about clever compilation.
So, what does this mean? (other than I ought to check my facts, but I am too lazy to)
Does it mean that some of the Athlon advantage has been that the poorly optimised code of most applications is optimised internally by the Athlon, whereas the Intel processors chug through it sequentially? Will ICC narrow the gap megahertz-wise that everyone seems to find when comparing these two desktop antagonists?
Hi,
<grin>
I always thought that if I write ANSI/ISO standard C or C++ code *and* had a compliant compiler (and/or library) on my target platform… I was all set.
Honestly…. when it comes down to it… I don’t care if I’m using XLc (AIX), Sun Workshop (Solaris), CodeWarrior (earlier BeOS releases) or GCC (AIX, Solaris, BeOS, etc., etc).
The Dinkum libraries are always a consideration for standards adherence (QNX licensed them last summer for their RTOS).
The important things are:
(1) my code should run and comply to standards
(2) my code should run as efficiently as possible on the target platform.
***My focus is on my code being portable and not whether or not my compiler is portable ***
In a simplistic example… it’s like taking a Macintosh (68k) program and using the Toolbox functions for opening and writing to a file. Then porting that code to Windows and changing those functions to use the Win32 API.
Then when I want to do UNIX… use the standard library calls.
<grin> Why on earth a Win32 or other developer wouldn’t code to the POSIX APIs is beyond me (if portability is important).
GCC is cool and it is free. I’m totally grateful to the GNU.
Okay…. back to my nefarious lurking…
>So, what does this mean? (other than I ought to check my facts, but I am too lazy to)
Yup, you should check. What you say is only true for the pentium family. Starting with the PPro, the CPU does live re-ordering of instructions. It’ll even break down the “CISC” instructions into smaller “RISC” instructions (x86 to uops) and execute the “RISC” parts out of order.
There are still bottlenecks that can be accounted for when compiling the code (see http://www.agner.org/assem/#optimize for details), but the re-ordering of instructions for the various pipelines has been a thing of the past for… 7 years.
JBQ
http://groups.google.com/groups?q=%22DFA+Scheduler+and+Processo…
It’ll be interesting to see how things fare with X86-64 too.