“We don’t have all the results yet but we’re giving you what we have so far on the new iMac G5s (2.1GHz and 1.9GHz) compared to the previous model (2.0GHz). I hypothesized that we’d see small gains in CPU intensive tasks and big gains in graphics intensive tasks. I noticed in various discussion groups that many consumers are trying to decide between the high end iMac and low end Power Mac. So I included the results from the Dual-Core G5/2.0GHz Power Mac.”
I hope Apple will keep the PowerPC line and use Intel in laptops. That way we could have high-end desktops and decent laptops. With fat binaries and OS X for both CPUs, it shouldn’t be so hard.
Not going to happen. A current-gen Dothan will stomp the G5 in integer performance, and Conroe is only going to make things more embarrassing.
One thing that has occurred to me is that the transition to x86 is worthwhile for Intel’s compiler technology *alone*. GCC optimizes quite poorly for PowerPC, relative to what the chip is capable of. They’re going to see a 20% performance boost, per clock, just by switching compilers. If Apple can get most vendors to compile with Intel C++, their machines will have a nice little edge over the competition (to the tune of making a 2.3 GHz CPU perform like a 2.5 GHz one), even though they are using the same chips.
The only thing is that Apple doesn’t need C++; they need an Obj-C compiler – and that’s GCC for now.
Sure Apple needs a C compiler. What do you think Quartz is written in? Darwin? OpenGL? CoreAudio/Video/Image? Aside from Cocoa, the large OS X frameworks are C libraries. Since the Intel compiler is binary-compatible, at the C level, with GCC, the only thing Apple can’t compile with Intel C++ is the GUI applications and the Cocoa framework.
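To make the binary-compatibility point concrete, here’s a minimal sketch (the file split, the dot-product function, and the build lines are hypothetical, purely for illustration): one hot translation unit built with ICC, the rest with GCC, linked into one program. Because both compilers follow the platform’s C ABI, the objects link together.

```c
/* hot.c -- performance-sensitive code, hypothetically built with icc */
double dot(const double *a, const double *b, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];   /* the kind of loop ICC unrolls and vectorizes well */
    return sum;
}

/* main.c -- the rest of the program, built with gcc */
#include <stdio.h>

double dot(const double *a, const double *b, int n);

int main(void)
{
    double x[4] = {1, 2, 3, 4}, y[4] = {4, 3, 2, 1};
    printf("%f\n", dot(x, y, 4));
    return 0;
}

/* Hypothetical build, mixing the two compilers:
 *   icc -O3 -c hot.c
 *   gcc -O2 -c main.c
 *   gcc hot.o main.o -o demo
 */
```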
Which vendor ships Quartz again? Which GPU manufacturer relies on the C compiler for the optimization of their drivers? ICC will improve which shaders, again? CoreImage is an Objective-C framework. ISVs that ship software people actually use, use Objective-C. They’d have to construct an Objective-C frontend for ICC if they expected most of their ISVs to use it.
Using ICC for components that are already hand-optimized or dependent on the performance of the GPU isn’t much of a win. Other areas would see performance gains. It certainly isn’t going to give their platform a competitive edge over Windows or Linux on the x86. Would it be marginally better? Sure. Is it worth transitioning your entire platform to a new architecture, disrupting all of your developers? No. Not even slightly.
That Apple can now use ICC is fine. Intel’s C++ compiler is excellent. But if Apple really cared about performance to such an extreme extent they wouldn’t have stuck with the G4 for so long, they wouldn’t have used Mach as the basis of their operating system to begin with, they wouldn’t ship most of their models with mediocre GPUs, they wouldn’t promote development with Objective-C, and they would have licensed XLC.
Which vendor ships Quartz again?
Apple, which is kind of my point. There is quite a bit of C code for Apple to use Intel C++ on. If anything, it’ll make for a nice little boost come 10.5.
Which GPU manufacturer relies on the C compiler for the optimization of their drivers?
All of them. All of the drivers are likely very optimized C code, but good machine code still has to be generated from that C code.
ISVs that ship software people actually use, use Objective-C.
Last I checked, apps like Photoshop, Maya, etc, are all C applications.
Would it be marginally better?
The difference between GCC on PowerPC and Intel C++ on x86 is quite a bit more than “marginal”. But let’s put it, arbitrarily, at 20%. That doesn’t sound like a lot, and in truth, it isn’t a lot, but 20% is the difference between a $350 Opteron and a $1000 Opteron. GCC seems to be completely unable to extract the full potential from the G5 CPU. That means that even if the G5 were every bit as fast as its competition, it would still be slower in practice, and high-end G5s would be competing, in performance, with mid-range x86 chips.
Sure. Is it worth transitioning your entire platform to a new architecture, disrupting all of your developers? No. Not even slightly.
I was, of course, being facetious when I said Intel C++ alone was worth the transition. However, I don’t think that the compiler factor is a minor one. Whether Apple wants to or not, it has to keep up with the Joneses. Doing that is hard enough without an automatic 20% performance penalty on your platform. Intel C++ offers one less thing for Apple to worry about when on the x86 platform.
> Apple, which is kind of my point.
And my point was that everything in your list was exclusively Apple, rather than their ISVs, which I was still under the impression you meant.
> All of them. All of the drivers are likely very
> optimized C code, but good machine code still has to be
> generated from that C code.
The major two driver teams do not rely upon the C compilers to perform the meaningful optimizations of their drivers. If nVidia or ATi thought that they could obtain a meaningful improvement in driver performance with ICC their installers would just bundle ICC-compiled drivers for Intel processors.
> Last I checked, apps like Photoshop, Maya, etc, are
> all C applications.
Yeah and not one of them relies on the C compiler to optimize its math routines. It’s nice how Photoshop and Maya somehow represent the majority of software for OS X, isn’t it?
> The difference between GCC on PowerPC and Intel C++
> on x86 is quite a bit more than “marginal”.
Um, whereas the performance difference in most code between GCC and ICC on the x86 is “marginal,” which is what I’m talking about. Comparing GCC PPC970 to ICC x86 is pretty pointless. Apple could license XLC for its platform, but as I said, it doesn’t really care about performance that much. Or if it does, it hides it especially well.
> Intel C++ offers one less thing for Apple to worry
> about when on the x86 platform.
And one thing more, namely having the flagship of the platform and all of the software that uses it use GCC and everything else ICC. Unless they write a frontend for ICC, in which case that becomes their problem. ICC isn’t even especially common in Windows for shrink-wrap software developers.
The major two driver teams do not rely upon the C compilers to perform the meaningful optimizations of their drivers.
I’m sure code quality plays a role. No matter how well-optimized the drivers’ C code is, GCC isn’t going to generate great machine code in a lot of cases.
If nVidia or ATi thought that they could obtain a meaningful improvement in driver performance with ICC their installers would just bundle ICC-compiled drivers for Intel processors.
That’s a good point.
Yeah and not one of them relies on the C compiler to optimize its math routines.
They surely rely on the C compiler to generate good code from their math routines. I’m not talking about high-level optimizations here. I’m talking about things like instruction scheduling. These things really hurt the G5 quite badly using GCC.
Um, whereas the performance difference in most code between GCC and ICC on the x86 is “marginal,” which is what I’m talking about.
That wasn’t my point. Let me try to state it differently. If Apple deploys Intel C++ to any significant degree, then in the transition from PowerPC to x86, they aren’t just going to gain the X% that Conroe performs over the G5, but X%+20% because of the compiler issue.
And one thing more, namely having the flagship of the platform and all of the software that uses it use GCC and everything else ICC.
I don’t quite follow that statement.
ICC isn’t even especially common in Windows for shrink-wrap software developers.
That’s quite possibly because Visual C++ vs Intel C++ isn’t as big of a difference as Visual C++ vs GCC. Visual C++, in particular, has gotten quite a bit better in code generation as of late, while GCC, in my experience, seems to have gotten slower (with 4.0).
That’s quite possibly because Visual C++ vs Intel C++ isn’t as big of a difference as Visual C++ vs GCC. Visual C++, in particular, has gotten quite a bit better in code generation as of late, while GCC, in my experience, seems to have gotten slower (with 4.0).
Gcc4 is slightly slower than gcc3 in a lot of cases but it’s way faster than earlier versions on ppc/ppc64.
> I’m sure code quality plays a role. No matter how
> well-optimized the drivers’ C code is, GCC isn’t going
> to generate great machine code in a lot of cases.
But the point is to simply ensure that the code paths that are important are optimal, which will be done independently of the compiler used to generate the bulk of the code, while the remaining cases, being less significant to overall performance, are left to the compiler. Given the competition between ATi and nVidia (including ‘cheating’ optimizations), neither of them will just sit around and let compiler performance dictate which of them has a performance edge. Most of the optimization here is really in areas that are not significantly dependent on the host CPU, so that’s less of an issue, but the critical paths won’t be heavily dependent on compiler performance.
> They surely rely on the C compiler to generate good
> code from their math routines.
In any case where SIMD instructions were utilized by a program before the introduction of autovectorization into the common platform compilers (which isn’t the norm even now), you can just go ahead and assume that it was all written by hand in assembly, using SIMD intrinsics, or with libraries that use the prior two. This is especially the case when a program has to run on a variety of processors with different instructions, different numbers of execution units, and different instruction timings. Take, for example, the Accelerate framework.
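For anyone who hasn’t seen it, here’s a minimal sketch of what “written by hand using SIMD intrinsics” looks like, using x86 SSE as the example (the function names are made up for illustration):

```c
#include <xmmintrin.h>   /* SSE intrinsics; compile with gcc -msse */

/* Plain C: whether this becomes SIMD code is up to the compiler. */
void add4_scalar(float *dst, const float *a, const float *b)
{
    for (int i = 0; i < 4; i++)
        dst[i] = a[i] + b[i];
}

/* Hand-vectorized: the programmer, not the compiler, picks the
 * instruction; a single addps performs all four additions. */
void add4_sse(float *dst, const float *a, const float *b)
{
    __m128 va = _mm_loadu_ps(a);            /* load 4 floats, unaligned */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(dst, _mm_add_ps(va, vb)); /* 4 adds in one instruction */
}
```

On PowerPC the same routine would be written against AltiVec (altivec.h, vec_add), which is exactly why such code ends up with separate per-architecture paths.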
> If Apple deploys Intel C++ to any significant degree,
> then in the transition from PowerPC to x86, they
> aren’t just going to gain the X% that Conroe performs
> over the G5, but X%+20% because of the compiler
> issue.
While 20% is a really generous projection for an overall performance improvement, the point is that Apple could license a better compiler for the PPC970, and comparing that with GCC on the same platform would make more sense than comparing GCC on the PPC to ICC on the x86. It’s difficult to decide how to compare those two anyway, outside of what optimizations they perform, since we’re talking about entirely different compilers for entirely different architectures. Even comparing GCC x86 to ICC and then using that to compare performance with GCC on the PPC970 is somewhat flawed, because the x86 backend is already better, and we have a shaky foundation from which to determine what percentage of the performance difference is attributable solely to the compiler.
More importantly to the original discussion, though, any percentage isn’t going to give Apple a performance advantage against its competition on the x86. ICC is available on both of the other two platforms that ‘matter.’ The architectural differences in the kernel design and the userland frameworks will make a much more significant difference, and will be far less fragile in the face of changing processors (the ever-changing ISA of the x86, changes in the number of execution units and pipeline depth, and possible transitions from Intel to AMD to Intel to AMD …), than just compiling with ICC.
> That’s quite possibly because Visual C++ vs Intel C++
> isn’t as big of a difference as Visual C++ vs GCC.
It’s mostly because most software doesn’t need and doesn’t make use of processor-specific optimization. Large quantities of Windows software just target the Pentium Pro. Processor-specific optimizations, when applied globally to an executable, are fragile, artificially limit the install base, and are mostly superfluous for typical software. A binary may outlive a popular processor by a significant margin.
Where the deployment environment is easily controlled and computational performance is important, ICC is a good investment.
Gcc isn’t that bad. Gcc4 is actually pretty good at optimizing code for ppc64. And icc isn’t that much better either.
GCC in general isn’t that bad, but GCC on PowerPC (on the G5 anyway) really isn’t very good. Looking at my own benchmarks and comparing them to IBM’s latest SPEC scores (which use XLC 8.0), I’d make a very crude guess that GCC is about 85% of XLC in integer and 75% of it in floating-point.
Take a look at the object code from GCC and see if it is taking parameters passed in by registers, sticking them into local variables on the stack, then retrieving those same values back into registers to manipulate them. I’ve seen this in code not flagged -Ox for optimization. While I can’t prove it, I suspect the problem is compiler writers targeting register-starved architectures like x86 instead of CPUs with no shortage of registers, like PPC.
And it really smokes me to see compilers use non-volatile registers for operations that could be done in volatile ones. Between saving and restoring non-volatile registers and the above behavior, PPC code can look 50% slower than it really is.
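This is easy to check for yourself. Here’s a tiny, made-up test case one could feed to gcc -S and eyeball:

```c
/* madd.c -- a made-up example, small enough to read the assembly of */
int madd(int a, int b, int c)
{
    /* All three arguments arrive in registers under the PPC (and
     * x86-64) calling conventions; ideally no stack traffic is needed. */
    return a * b + c;
}

/* Compare the generated assembly:
 *   gcc -S madd.c      # unoptimized: expect stores to the stack and reloads
 *   gcc -O2 -S madd.c  # optimized: expect pure register arithmetic
 */
```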
What version of gcc was that? Apple contributed a lot of code to gcc4 to make it good at optimizing for ppc. So gcc4 is way better than older gccs on ppc.
GCC 4.0.0 (the default compiler in Tiger).
Incorrect; the default until the shipping of 2.1 was actually 3.3. Mac OS X 10.4 is still compiled with 3.3 for compatibility reasons; they made 4.0 the default when they shipped Xcode 2.1.
I’m looking forward to 2.2 though, and hopefully the improvements in GCC 4.0 will come through.
Incorrect; the default until the shipping of 2.1 was actually 3.3. Mac OS X 10.4 is still compiled with 3.3 for compatibility reasons; they made 4.0 the default when they shipped Xcode 2.1.
Are you sure? I haven’t used Tiger myself yet, but I’m almost certain that gcc 4 is the system compiler. Or at least that’s what I’ve read.
The Xcode bundled with the retail box includes 3.3 AND 4.0; however, 3.3 is still the default. Mac OS X 10.4 is also still compiled with 3.3, for compatibility reasons.
4.0 didn’t become the default compiler until 2.1 of Xcode.
But then again, having talked to some of the WebKit developers, even if Apple were to go all out in the optimisation department, the performance boost would be hardly noticeable, hence the reason they just use -Os when compiling – the speed boost vs. the possible instability can’t really be justified in the grand scheme of things.
“Not going to happen. A current-gen Dothan will stomp the G5 in integer performance, and Conroe is only going to make things more embarrassing. ”
BZZZT!
BZZZT!
BZZZT!
You lose, dimwit.
Desktop Intel PowerMacs are going to be an embarrassment late next year. Apple is already sandbagging the clock speed of the latest PowerMacs so when they are forced to ship Intel desktops they won’t be laughed at.
Enjoy your PPC desktops, Mac loonies. You have about a year before the pain begins. It is hilarious in a pathetic way just how oblivious people are to what a total joke Intel’s roadmap CURRENTLY is – and that’s not even taking into account all the cancellations and delays that are yet to happen.
Remember Mac loonies, two years ago Intel fans were bragging about where Intel was going to be today…
Who are you calling a dimwit? Are you arguing that the G5 doesn’t get stomped in integer performance? Just take a look at the SPEC results for yourself.
And what on earth are you talking about, that Apple is sandbagging the clock speed on the latest PowerMacs? The new dual-core chips are built on the same 90nm process the G5 has been on for a while now. If IBM couldn’t deliver 3 GHz G5s back when Steve expected them, they certainly can’t deliver them now.
Apple isn’t going to push icc onto its developers after providing them with free dev tools for this long. They’ll be using GCC on x86 in the typical case. Most software will see no meaningful difference in end-user performance because most software isn’t performance-bound and will see no meaningful benefit for their $400 ICC license.
There’ll be more overhead in the Objective-C runtime than will matter for anything except performance-critical code, and that doesn’t make up the majority of Apple’s ISVs.
I’m more interested in Apple using it for compiling their own OS X frameworks, and third-parties using it for performance-critical apps like Photoshop etc.
The hot Photoshop filters are already going to be developed in assembly or in terms of SIMD intrinsics. The lowest-level implementation of any of the performance-critical sections of frameworks like math datatype implementations and codecs will be the same. I just recall reading you say that you’d like to see most of Apple’s vendors make use of ICC, and that would be rather superfluous. It’s good that developers have the option of ICC now, though. I’m sure a few people are going to be fond of the idea of having Intel’s Fortran compiler available, too.
I don’t think there is nearly as much SIMD and assembly out there as you seem to think there is. Most code is just plain C. Breaking out the ASM is something most developers are hesitant to do.
And on an unrelated note: who the hell modded down Japail’s last comment? Please, explain yourself.
There is a lot of assembly used in performance-critical code. Since most programmers don’t have to write performance-critical code, most programmers are free to run from SSE intrinsics and hand-written assembly. And for people writing codecs, game engines, math libraries, and so forth that’s the norm. Since most compilers don’t do autovectorization, and those that do don’t do so optimally, it isn’t remotely abnormal to do it by hand when it actually matters.
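The usual pattern for running on a variety of processors is runtime dispatch: pick an implementation once, based on what the CPU supports. A self-contained sketch (the names are invented, and both bodies are plain C here so it stays runnable; a real codec would substitute hand-written SIMD for the fast path):

```c
#include <stdio.h>

/* Baseline implementation: portable C. */
static void blend_plain(float *dst, const float *src, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = 0.5f * (dst[i] + src[i]);
}

/* Stand-in for a hand-written SSE/AltiVec version. */
static void blend_fast(float *dst, const float *src, int n)
{
    blend_plain(dst, src, n);
}

typedef void (*blend_fn)(float *, const float *, int);

/* Chosen once at startup; a real program would query CPUID on x86 or
 * the hw.optional.altivec sysctl on Mac OS X instead of taking a flag. */
static blend_fn pick_blend(int cpu_has_simd)
{
    return cpu_has_simd ? blend_fast : blend_plain;
}

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
    blend_fn blend = pick_blend(1);
    blend(a, b, 4);
    printf("%g %g %g %g\n", a[0], a[1], a[2], a[3]);
    return 0;
}
```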
Oh, as for the moderation, I think you’ll notice that any time either of us post at all in a Mac discussion a certain someone unleashes his mod points on the comments that in any way can be construed as less than glowing of Apple. I’ve had to mod up a bunch of comments so far.
Hi
I’ve been working for 10 years tuning high-performance code for large universities and corporations, and generally icc is my favorite compiler. It has lots of nice ‘black belt’ switches which are not documented and may break compatibility, but in most cases they work excellently.
I also thought Itanium was crap until I learned register programming and register block IO. Wow, that is fast.
You’re assuming that the G5 stands still…
Given some recent announcements from IBM and from the new company developing a low-power PPC CPU, that’s not a good assumption.
There may be newer G5s out by the time Conroe comes out that eliminate any difference.
Yes, I’m assuming the G5 stands still, because that’s precisely what it has been doing for the last couple of years. It’ll likely get a bit of a boost if it moves to 65nm by the time Conroe comes out, but it’s seriously going to take a 3.5-4 GHz G5 to equal the integer performance of a 2.5 GHz Pentium M-architecture processor, and I don’t see that happening.
On top of that, the low-power PPC CPU is wholly unimpressive. They claim 1100 SPECint in 2007. That’s 1.6 GHz Pentium M territory. A SPECfp of 2000 is quite impressive (at the level of a 2.6 GHz Opteron), but in another year and a half when this thing comes out, 2.6 GHz will be a low-end Opteron.
Sure, the graphs were nice. The text, though, particularly on the second page, reads like a giant advertisement.
As for their conclusions, well, the CPU tests were quite unsurprising. The GPU tests only surprised me in the Core Image tests. I guess Apple’s ATI drivers are better than their nVidia drivers, since a 6600 should outperform an X600, especially at OpenGL.
The GPU tests only surprised me in the Core Image tests. I guess Apple’s ATI drivers are better than their nVidia drivers, since a 6600 should outperform an X600, especially at OpenGL.
The nVidia card was a 6600LE – the LEs being the super-low-cost pieces of junk. Also, CoreImage is 2D, not 3D, which is probably why there was such a big difference. OpenGL doesn’t factor into CoreImage scores.
CoreImage is 2D via OpenGL. Furthermore, the 6600LE is a 6200 in disguise.
From what I understand, Core Image uses the programmable GPU found on the ATI 9600+ and nVidia FX+ cards. So its performance depends more on the drivers and the card’s ability to handle the programming than on raw OpenGL performance.
Correct me if I am wrong, but my understanding was that OpenGL sat at the bottom of the graphical user interface stack, and that basically the UI, CoreImage, and so forth were all based on that one core – OpenGL – hence the reason CoreImage requires a certain type of video card, namely one with OpenGL shaders, to allow those whizbang, CPU-offloaded effects. But then again, that depends on which path CoreImage considers faster at the time the stuff is being processed.
I’m sure this has been mentioned a hundred times, but you should really at least add a blurb about *where* the quote comes from on the OSNews.com home page. Something like “From Bare Feats: ‘We don’t …'” with a link from “Bare Feats” to barefeats.com.
It’s not only courteous, but also a fair use thing.
I’ve only noticed this so far with Thom, on every article he posts. Does OSNews.com not have an editorial guideline?
I want to share with you a very deep concern I have about Apple. The nitty-gritty of what I’m about to write is this: You won’t find many of Apple’s collaborators who will openly admit that they favor Apple’s schemes to pollute the great canon of English literature with references to its abhorrent, inconsiderate insults. In fact, their slurs are characterized by a plethora of rhetoric to the contrary. If you listen closely, though, you’ll hear how carefully they cover up the fact that many people think of Apple’s blockish threats as a joke, as something only half-serious. In fact, they’re deadly serious. They’re the tool by which craven, indelicate self-proclaimed arbiters of taste and standards will infringe upon our most important constitutional rights by the end of the decade. A second all-too-serious item is that narcissism is dangerous. Apple’s manipulative version of it is doubly so. Apple’s warnings are a logical absurdity, a series of deductions from a premise that has been denied. Speaking of absurdities, knowledge is the key that unlocks the shackles of bondage. That’s why it’s important for you to know that Apple unquestionably believes that it has the mandate of Heaven to promote a culture of dependency and failure. What kind of Humpty-Dumpty world is it living in? Let me give you a hint: We must take up the mantle and criticize its complicity in the widespread establishment of wowserism. Only then can a society free of its nerdy assertions blossom forth from the roots of the past. And only then will people come to understand that it asserts that its grievances are our final line of defense against tyranny. That assertion is not only untrue, but a conscious lie. Apple has gotten away with so much for so long that it’s lost all sense of caution, all sense of limits. If you think about it, only an organization without any sense of limits could desire to reward those who knowingly or unknowingly play along with its prevarications while punishing those who oppose them. Who is Apple to say that its decisions are based on reason? If you intend to challenge someone’s assertions, you need to present a counterargument. Apple provides none.
Small minds are little troubled by this. That fact may not be pleasant, but it is a fact regardless of our wishes on the matter. Apple is always trying to change the way we work. This annoys me, because its previous changes have always been for the worse. I’m positive that Apple’s new changes will be even more primitive, because its lascivious editorials are in full flower, and their poisonous petals of sectarianism are blooming all around us.
Apple doesn’t reck one whit about how others might feel, period. You may find it amusing or even titillating to read about Apple’s fulminations, but they’re not amusing to me. They’re deeply troubling.
It goes almost without saying that I wouldn’t judge Apple’s representatives too harshly. They’re just cannon fodder for Apple’s plot to create a regime of puerile lexiphanicism. Apple says that it can be trusted to judge the rest of the world from a unique perch of pure wisdom. What it means by this, of course, is that it wants free rein to take us over the edge of the abyss of masochism. There is no doubt that Apple will sully my reputation by next weekend. Believe me, I would give everything I own to be wrong on that point, but the truth is that wily and venal, Apple’s platitudes resemble a dilapidated shed. Kick in the door and the whole rotten structure will collapse, proving my claim that if we take Apple’s perversions to their logical conclusion, we see that one day, Apple will harvest what others have sown. Apple frequently avers its support of democracy and its love of freedom. But one need only look at what Apple is doing — as opposed to what it is saying — to understand its true aims.
Should you think I’m saying too much, please note that Apple pompously claims that courtesy and manners don’t count for anything. That sort of nonsense impresses many people, unfortunately. I don’t know which are worse, right-wing tyrants or left-wing tyrants. But I do know that Apple’s power is built on lies. Now, I could go off on that point alone, but I have a dream that my children will be able to live in a world filled with open spaces and beautiful wilderness — not in a dark, belligerent world run by worthless survivalists. Apple had promised us liberty, equality, and fraternity. Instead, it gave us fogyism, allotheism, and simplism. I suppose we should have seen that coming, especially since Apple is not only immoral, but amoral. Let us now strike at the heart of Apple’s efforts to develop a credible pretext to forcibly silence Apple’s opponents, because in that is our only hope for the future.
hey man, you should consult a psychotherapist pretty soon….
Smartest. Bashing. Ever. 🙂
You could’ve posted just the 3rd paragraph and your point would’ve come across just as strong. No need for the nonsensical divagation.
Good point anyway
Ok, yes. Thank you for demonstrating your mastery of the Automatic Rant Generator. Here’s a cookie.
* crunch *
Yum!
Me like cookie.
You lost me at “I want to share”…
Please, not all of us have the time or the energy to read/decipher your grievance, so your point, if well made and true, may be wasted on a lot of us here (or just me, perhaps). I have no idea what you are going on about, other than I think you believe Apple jumps on criticism or something and tells lies…
This may all be true, but go with Apple’s idea of simplism (which I think you have a problem with) and also give us some examples of what you are talking about. You may have, in that foggy mire of rhetoric you so lavishly dished out on us, but if you did, I lost it.
Also, a column about iMacs being compared with other iMacs may not be the most appropriate place. Maybe you could start a whole new thread?
My god, what a load of complete and utter shit!
Please, stop trying to lard your post with generous helpings of gobbledygook and sentence cruft and get to the blasted point!
For that 4kb you wasted posting that diatribe, I’m sure it could be compressed down into 3 sentences, with enough room left over for the required complaint about the reality distortion field or something.
I have some advice for you: Seek professional help.
‘A regime of puerile lexiphanicism.’ …’A regime of puerile lexiphanicism?’
I loves me some satire….oughta be a little funny though
Just bought a 1.9GHz iMac G5 for my 77-year-old mother-in-law two days ago. Her seven-year-old IBM laptop died and I declined to replace the hard drive in it a second time, recommending the iMac instead.
I’ve got a fair bit of Mac experience but my forte is more in the Windows and Linux world. I will say that the iMac was easy to set up, connected to my MIL’s wireless network without a hitch, and she’s been using the thing for two days and hasn’t called me with a ‘how do I’ question once.
Impressive.