John “Hannibal” Stokes has interviewed Pete Sandon, the PowerPC 970’s main designer, and David Edelsohn, a compiler writer from IBM, and clarified several points about the 970 regarding group formation, vector issue queues and performance, and more.
…was when the IBM engineer commented on not attempting to compete with Apple’s RDF ;}
Cute, how GCC is such an important customer of processor designers.
From what I have read here and elsewhere, an optimizing compiler can speed up the overall system almost as much as new hardware. This is really good news. As a side benefit, code can be produced without as much hand tweaking for good results.
This news will have the execs over at Intel shaking a little more (well, at least sitting up and paying attention). Performance gains from optimisation of the compiler, combined with a shift to the smaller 0.09 micron production process, and perhaps longer term, improvements to the VMX unit and integration of the memory controller, all point to a bright future for PPC computing. Perhaps an extended stay at the top of the pecking order is what Apple needs to win some of that seemingly elusive market share…
Not only is the VMX far more flexible than previously thought, it’s been made painfully clear that whatever is shipping with the G5s this year is nowhere near optimized for the machine. The compiler itself needs to be optimized for this specific architecture, because this architecture “breaks the customs” of most industry designs.
That means that as they develop and perfect an optimized version of GCC or something else, developers can start using this new optimized compiler to create optimized code, which will then show even more performance gains out of the same hardware.
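In practice, the recompile being described usually comes down to a handful of architecture flags in the build configuration. A rough sketch of what a 970-tuned build might look like once GCC has proper support (the exact flag spellings, `-mcpu=970` and `-mtune=970`, are assumptions based on GCC’s existing PowerPC options, not anything announced):

```make
# Hypothetical Makefile fragment: retune an existing codebase for the PPC 970.
# -mcpu=970   selects the 970's instruction set (assumed flag name)
# -mtune=970  schedules instructions for its pipeline and group formation
# -maltivec   enables code generation for the VMX/AltiVec vector unit
# -O3         turns on aggressive optimization
CFLAGS = -O3 -mcpu=970 -mtune=970 -maltivec
```

A vendor could ship a binary built this way as a free update, which is why people are saying the same hardware should keep getting faster.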
So the G5’s future looks brighter and brighter.
Can’t wait to get mine.
is that to get the full benefit from an application, a vendor will have to recompile the same application and redistribute it to gain the performance. With the compiler not already optimized and ready today, this will set software vendors back as much as 2 years before they will be able to ship the same product with optimized code
This is very bad from everyone’s perspective (except Intel’s, of course)
What is bad, that applications shipping now won’t be able to fully utilize the hardware?
I think this problem is significantly worse on the IA64 side of things, where EPIC compilers are still not yet intelligent enough to optimize code properly for the architecture.
For the interim, all that can be said is that the processor isn’t being leveraged to its full theoretical potential. This isn’t desirable, but it certainly isn’t a “bad” thing.
“With the compiler not already optimized and ready today, this will set software vendors back as much as 2 years before they will be able to ship the same product with optimized code”
You make it sound like the compiler will produce less than stellar results. The GCC compiler will still produce fast code, even if it still has room to make the code run much faster.
“This is very bad from everyone’s perspective (except Intel’s, of course)”
Nah, this isn’t a problem at all.
no, it just means that when the compiler is better, companies can post a patch on their websites
damn… so many pessimists at osnews… everybody seems to be seeing the end of the world soon and apocalypse right next door… 😛
Just reminds me of that “Don’t Worry, Be Happy” song… ya know… by Bobby McFerrin…
Here’s a little song I wrote
You might want to sing it note for note
Don’t worry, be happy
In every life we have some trouble
But when you worry you make it double
Don’t worry, be happy
Don’t worry, be happy now
Don’t worry, be happy
Don’t worry, be happy
Don’t worry, be happy
Don’t worry, be happy
“This news will have the execs over at Intel shaking a little more (well, at least sitting up and paying attention). Performance gains from optimisation of the compiler, combined with a shift to the smaller 0.09 micron production process, and perhaps longer term, improvements to the VMX unit and integration of the memory controller, all point to a bright future for PPC”
What are you talking about? Intel has a highly optimized compiler for its chips, it will be shifting to smaller fabs the same as everyone else, and the memory controller will probably be on the chip soon, the same as AMD has done with the AMD64s.
Your statement didn’t make any sense at all as to why Intel should be shaking. You think they don’t know that the more optimized the compiler is, the faster a program can get, or that a smaller fab can speed things up?
“What are you talking about? Intel has a highly optimized compiler for its chips”
That’s the point. The G5 is very impressive in real-world tests without the benefit of a highly optimized compiler. Therefore it will be worse for Intel when the G5 gets a good compiler.
Glad to hear that the AltiVec unit on the new G5 systems is going to be solid. Ars originally speculated that it had some issues in terms of speed and robustness. Looks like after a chat with the IBM folks, the questions have been answered.
Another great Ars article…
It sounds like there is a lot of potential in this chip. I hope IBM is able to roll these optimizations into GCC. I’d hate to see it need a proprietary compiler to perform well like the P4.
Also, from what I’ve heard, IBM is really getting behind this chip. I, for one, would love to have a Linux box running a couple of these. I’ll probably get a DP Athlon64 instead. I doubt IBM is going to be making any cheap 970 boxes anytime soon. Maybe a cheap 970 white-box market will open up (I sure hope so). Of course, you’d still have drivers to worry about.
” I’d hate to see it need a proprietary compiler to perform well like the P4.”
The 970 will significantly outpace the P4.
At one point in the interview it looks like IBM and Apple are working together on GCC improvements and donating the code back to the FSF.
This is a fairly big deal, as people have pointed out before that GCC on PPC isn’t as hot as it should be, but with that kind of muscle and money behind it, it should go forward by leaps and bounds.
With the new GCC improvements it looks like Linux on those new, remarkably cheap, 970 IBM boxes is going to be a real winner. And AFAIK Gentoo already runs fine on PPC; no one is going to be bitching about compile times with four 1 GHz+ CPUs crunching away at it!
It is good to know that a G5 Mac I buy today will only get faster into the future. I can’t imagine anyone not liking this. Maybe because in general Wintel users are used to their hardware becoming obsolete after every OS update, they feel this should be the norm? In the Wintel world it seems that what is bad is good: they like planned obsolescence; they prefer higher clock rates and heat to actual higher performance; they prefer a myriad of security issues with their OS; they would rather buy a processor near its EOL than one that is faster and just coming out of the gate.
Please note that most Apple software is modular and uses the system frameworks. This is why a text editor that’s only 512K in size can support text documents in 8 different language encodings, page formatting, text formatting, multiple fonts, printing to PDF, text color management, graphics, spell checking, and saving to RTF.
Given this, once improvements to GCC are done and Apple does its normal updates, the system will get faster and most programs will become faster. Vendors won’t have to release new versions just to make their applications faster.
From the article it looks like IBM is going to focus on delivering the 970s to Apple and for use in IBM blade and workstation systems, so for the time being there will be no white-box 970s.
It’s also interesting to note that GCC and the 970 don’t play well together, so Apple’s choice of GCC was not an optimal one.
Apple chose GCC because it was free. They made improvements to GCC so that it compiled better for PPC; now they will make changes to it to compile better for 970s.
Xcode is being released with Panther…I assume that the reason for developing Xcode is so that you can drop in a new compiler (such as the one made by IBM) and then go about your business.
I don’t know why I bother to answer to a troll, but…
Maybe because in general Wintel users are used to their hardware becoming obsolete after every OS update, they feel this should be the norm?
FUD. While Windows seems to get slower and slower with each update, it doesn’t make the previous hardware automatically obsolete. Windows XP has tons of drivers for “obsolete” hardware.
In the Wintel world it seems that what is bad is good:
Really? Let’s see…
they like planned obsolescence;
Every IT company out there does that, even Apple. What’s your point?
they prefer higher clock rates and heat to actual higher performance;
Most people don’t know anything about higher clock rates and/or heat, and those Wintel users who do use their PCs because they do what they want the way they want.
they prefer a myriad of security issues with their OS;
FUD. MS’s latest OS (Win2K3) is much more secure, and I guess the next version of their desktop OS will be at least that secure. Every OS has security holes, anyway.
they would rather buy a processor near its EOL than one that is faster and just coming out of the gate.
Maybe because that processor can run the software they want? Maybe because some people prefer to build their own PCs? Zzz.
“Apple chose GCC because it was free. They made improvements to GCC so that it compiled better for PPC; now they will make changes to it to compile better for 970s.”
I believe Apple went with GCC because all their Obj-C front ends and APIs were derived from NeXTStep. NeXTStep/OpenStep used GCC as their compiler, so… if it ain’t broken.