“This January not only brought new Apple systems, but also a Mac OS X-adapted benchmark suite from the Standard Performance Evaluation Corporation (SPEC) entitled CPU2000. On the one hand, this suite allows comparisons to be made, within a certain framework, with the Intel competition; on the other, it shows that Motorola and Apple were able to get more out of the new gigahertz processor than might have been expected from the pure clock-frequency difference to the 866 MHz predecessor model alone.” An Apple dual 1 GHz G4 against a single PIII at 1 GHz: which is faster? Heise has the SPEC benchmark results. SPEC is known to be very precise at comparing the CPUs themselves, without major interference from the rest of the system or surrounding hardware. Our Take: AFAIK, the Mac OS X license specifically states that no benchmark results of any kind are allowed to be published. Coolio. UPDATE: Read more for some commentary on the results. We see that the G4 can’t always keep up clock for clock with a Pentium III, and Heise didn’t even use gcc3 or icc for the PIII… and that’s only in integer.
In floating point, the G4 significantly lags behind the PIII, and this is an area where the P4 (and Athlon) are known for being significantly faster than the PIII.
A quick comparison, when using the better compilers for the x86 CPUs:
Integer Results:
Athlon 1666 (2000+) : 697
P4 2200 : 790
G4 1000 : 306
PIII 667 : 310
Floating Point Results:
Athlon 1666 : 596
P4 2200 : 779
G4 : 187
PIII 667 : 222
For the people who argue that AltiVec was not enabled: this is true, but it is also unfair.
The compiler they used, gcc 2.95.2, doesn’t know how to use MMX or SSE either, and barely knows how to use the PPro floating-point instructions FCOMI and FCMOVcc.
Furthermore, what most people care about is writing high-level code and compiling it, trusting that the compiler will do a good job. Going even further, most companies will want their engineers to spend more time optimizing an x86 version than a PPC version (writing a few core routines in vector assembly), so it’s even more important for a PPC compiler to be able to auto-vectorize code.
On SPEC, what matters is how fast the exact same (source) code runs, and nothing else. CPU vendors have no excuse for not providing good compilers.
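To make the auto-vectorization point concrete, here is a hypothetical sketch (not code from SPEC itself): a plain C loop that a vectorizing compiler could map to SSE or AltiVec on its own, while gcc 2.95 emits only scalar FPU code for it.

```c
#include <stddef.h>

/* saxpy: y[i] += a * x[i] -- a textbook auto-vectorization candidate.
 * gcc 2.95 compiles this to plain scalar floating-point code; a
 * compiler that can auto-vectorize turns the loop body into 4-wide
 * SIMD operations (SSE on x86, AltiVec on PPC) with no change to
 * the source. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    size_t i;
    for (i = 0; i < n; i++)
        y[i] += a * x[i];
}
```

The source stays identical on every platform, which is exactly what SPEC measures; whether the loop runs in SIMD is entirely the compiler's doing.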
I’ll bet a million dollars the floating-point marks aren’t AltiVec-accelerated. Whether this is on an even keel with not using the Intel-optimized libraries, I’m not sure. The bottom line is that the SPEC code doesn’t explicitly vectorize anything, and I’d be willing to bet Apple’s gcc compiler doesn’t pick up the ball either.
Despite loving OS X and the G4 I’d have to say that this is still a valid benchmark. It compares identical code running on various machines. If the compiler can’t automatically vectorize the code, it is the compiler’s fault. So the article’s statement “the SPEC mark shows that this isn’t a supercomputer at all” is true from that standpoint.
At the same time, I’d bet explicitly vectorized floating-point code running on both platforms will fare significantly better per clock on the G4 than on the x86. The capabilities of G4 processors running optimized scientific code refute the SPECfp benchmark’s dismal floating-point rating. In the end, the benchmark shows more a deficiency in the compiler than in the hardware.
Read the update, Hank. I have answered there.
> Our Take: AFAIK, the MacOSX license specifically states that no benchmark results of any kind are allowed to be published. Coolio.
Not even Photoshop benches?
Who’s responsible for dumb stuff like this?
[i]Pixar Presents: The MHz Myth[/i]
apple: Well yes, the whole MHz thang is just a myth. And we’ll prove it!
[Enter AMD, left]
apple: See, we told you. Oh, you want us to prove it with our own processors? Heh, well no, we can’t do that.
If you decide to play the game, play it! Otherwise just stick with what you can really prove are strong points of your products.
Admit that ppl buy your products for reasons other than power, cus no one in their right mind would ever buy a Mac for power (the whole system is just bottleneck madness, vis-à-vis current PCs).
I think you basically proved what I said in my response. I also think we are saying 100% the same thing. The purpose of the benchmark is to compare non-optimized code. That’s one of the reasons why I like non-Apple run Photoshop benchmarks. Adobe tweaks the heck out of their code for both platforms. It therefore is very good at showing the theoretical maximum performance of the hardware.
If the G5 vector unit is as powerful as the rumor sites are saying, Motorola and Apple need to license the auto-vectorization algorithms from Cray or redevelop them themselves. As a point of comparison, the SPECfp mark running on a Cray supercomputer would be equally abysmal, if Cray hadn’t done such a good job of tweaking their compiler optimization.
What does the benchmark tell us then? Something very profound. If you want to get great G4 performance, you’d better work at optimizing your code. There won’t be a free lunch going over to the Mac.
>>Admit that ppl buy your products for reasons other than power, cus no one in their right mind would ever buy a Mac for power (the whole system is just bottleneck madness, vis-à-vis current PCs).<<
A compiler test doesn’t prove anything except that Intel did their homework and Motorola obviously didn’t! I am not making any excuses for Apple; I see benchmark tests saying this and other benchmarks saying that! Of course, for a developer always compiling code this would matter, but for other uses it might not even be a problem!

Clock speed still doesn’t matter when other factors are involved: the software, the operating system, the CPU family, and whatever else floats your boat. Answer me this: between a 64-bit CPU running at 500 MHz and a 32-bit CPU running at 1 GHz, which one do you think will be faster in real-world performance, heck, even in a test such as we have with SPEC? Of course that isn’t what is going on here, but my scenario points out that comparing clock speed alone is a flawed concept, even across CPU families like x86 and PPC.

Do I really care if the G4 is a supercomputer or not?… NO! I laugh at the notion. It sounds good for about 5 minutes, about the same as hearing the USA is a ‘Super Empire’ (I am an American, so I am not bashing the USA, it’s my homeland)! Will I go out and buy a dual G4 Power Mac ‘Quicksilver’ tomorrow?… You bet your bottom dollar I would, and I won’t think twice about it!! Do I care about a compiler test from SPEC?… It was an interesting read and I know more today than I did yesterday, but this type of test doesn’t affect me and what I do with the machines.

We have all seen different results from different tests, and they say either x86 is supreme and/or PPC is supreme. It’s just like market share in the OS world: you’re telling me that those experts who come up with the percentages actually went to every home and asked “what are you running on your PC/Mac today?”… I think not, it’s all hearsay.
Bottom line is… just use what you like to use, respect others for what they like to use, and be glad that we have different choices in the computing world, so that we get to have such wild and crazy discussions like we do here on a daily basis. I have come to a point now where, no matter what I say, it won’t change the other person’s mind about what they choose to use, and it would probably just piss them off to where they wouldn’t even think about it, since I just cut down what they are comfortable with!
I think I might have gone overboard, but I just had a lot of jibber jabber to get off my chest, sorry :)
Congrats to Intel and the compiler thingy, he he :)
btw, where did you get the benchmark numbers you were citing? Is that right from SPEC or from some other source?
You have to give it to him, he’s consistent!
A decent person who is also consistent, like CattBeMac, is someone you want as a neighbor/friend/boyfriend/girlfriend/dentist, because you know you can trust him/her, even if you don’t want a Mac ;o)
About three years ago I was running some code (a C++ extension module for Python) on a 500 MHz Alpha (21364) and then tried it on a 433 MHz Celeron. I was shocked that the Alpha was only about 50% faster (I don’t remember the exact ratio). Both were compiled with the same 2.9-flavor version of gcc. I did not pursue the discrepancy but suspected compiler optimizations in gcc.
>btw, where did you get the benchmark numbers you were citing? Is that right from SPEC or from some other source?
From SPEC:
http://www.spec.org/osg/cpu2000/results/cpu2000.html
Well, given that you’re comparing wildly disparate CPU speeds, you might want to give SPEC/MHz ratings. When you do that, you see some interesting things, the most notable one being that the Pentium III is, if the SPECint numbers are to be believed, not only 52% faster than the G4, but 30% faster than the P4 and 11% faster than the Athlon, all on a MHz-for-MHz basis.
Hmm. To be perfectly honest, I’m not sure I buy that.
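For what it’s worth, those per-clock figures do fall out of the SPECint table further up the page; a quick sketch (the helper name is mine, not SPEC’s):

```c
/* Per-clock comparison of SPECint scores: divide each score by its
 * clock speed, then take the ratio.  A result of 1.52 means CPU A
 * is 52% faster per MHz than CPU B. */
double per_clock_factor(double score_a, double mhz_a,
                        double score_b, double mhz_b)
{
    return (score_a / mhz_a) / (score_b / mhz_b);
}

/* From the table above:
 * PIII 667 (310) vs G4 1000 (306)     -> ~1.52 (52% faster per MHz)
 * PIII 667 (310) vs P4 2200 (790)     -> ~1.29 (about 30%)
 * PIII 667 (310) vs Athlon 1666 (697) -> ~1.11 (about 11%)        */
```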
What I’d be more interested in seeing, anyway, is how well OS X uses multiple processors. BeOS was famous for its low overhead in handling multiple CPUs, and NT was famous for its, well, high overhead in such. I’d really like to see Apple start pushing multiple CPUs throughout their product line, not just on the high end. If BeOS taught us anything, it’s that there are cases when two (or more) “slow” processors are nicer to have than one fast one.
I thought the G4 would be faster. While I never believed it would be competitive with the fastest x86 CPUs (Athlon XPs clocked at about 1.7 GHz or P4’s clocked at 2.2 GHz) since PPC isn’t really one of the best RISC archs (that would be Alpha), I would think it would at least compare with a PIII at the same clock speed. I’m guessing that it probably does compare pretty well, but GCC’s lack of good optimizations for platforms other than x86 is holding the G4 back. Still, if GCC is going to be the official MacOS-X compiler from now on, they’re going to have to get it fixed real fast. Otherwise, they’ll just have a slow OS running on slow hardware.
hehehe
I agree with you 100%, and you agree with me (whether you know it or not).
I think this brings together our arguments:
Bottom line is… just use what you like to use and respect others for what they like to use and be glad that we have different choices in the computing world, so that we get to have such wild and crazy discussions like we do here on a daily basis.
Apple certainly has its strengths, and its userbase’s devotion must have its reasons. What annoys me is apple trying to play the power-CPU game with so little to show for themselves. If you say MHz are a myth, then be prepared to prove it, or at least let someone else do it. Restricting benchmarks on your OS is plain suspicious. Furthermore, the only reason the MHz myth has been revealed at all is AMD, not apple.
Another question is whether they have any chance at all of playing the game. Definitely not with a 1 GHz processor (no matter how “fast” it is), and certainly not with things like slow access to HDs, slow RAM, etc.
Perhaps the g5 can step up to the plate, though I’d be surprised, more so because the Hammer and the “finally showing some real juice” P4s are gonna be very tough to beat.
So given this environment, apple should either show real commitment or just forget about it. I have a feeling their server strategy could force them into something they don’t seem ready for. Why not just say: hey, look, if you want to compile X in 10 seconds, they are your solution, but if you want to taste the sweet, sweet flavor of a modern interface running on the master of masters (BSD), then come to poppa.
I’d like to see the same benchmark run on the G4 with Linux. It’s no secret that OS X does not have optimized libraries yet… so this comparison is valid for real-world tests of OS X, but invalid for CPU benchmarking.
And as for real world… recall that the benchmark only tested one CPU. The dual G4 will be faster than indicated for everyday use.
That said, I’m happily plugging along without a 697-SPECint2000 Athlon… my G3s, MIPS R10ks, and Fujitsu TurboSPARCs run just fine. And OS X kicks ass all over Windows…
>>While I never believed it would be competitive with the fastest x86 CPUs (Athlon XPs clocked at about 1.7 GHz or P4’s clocked at 2.2 GHz) since PPC isn’t really one of the best RISC archs (that would be Alpha), I would think it would at least compare with a PIII at the same clock speed.<<
Yeah too bad Compaq is letting it go to the toilet π If DEC was still here, we would be arguing between Alphas, x86s and PPCs and which was the best operating system to boot… Windows, UNIX and/or VMS!
>>And as for real world… recall that the benchmark only tested one CPU. The Dual G4 will be faster than indicated for everyday use.<<
…assuming that you’re multitasking, or using dual-processor-optimized applications, just as it would be with a dual-CPU P3 system under NT/2k/XP Pro or any other OS with SMP support.
What this really tells us is that Altivec is what makes the PPC processor a viable option today. Otherwise, the 1GHz PPC is getting beaten by a 667MHz P3. MHz myth indeed.
Try this test. Take a Porsche Boxster, attach a plow to it, and time it plowing a field. Now do the same with a BMW Z3. When you are done, compare the results. What have you learned? Absolutely nothing useful, unless you are going to use the winner for plowing fields. The plowing test is out of the context of how the cars are typically used. The same problem exists with the processor tests in this article. If your intended use is doing calculations using inefficient compilers, then this test is valid, and you should select the processor accordingly. If you intend any other use, you are going to need different comparison criteria.
“about three years ago I was running a code (C++ extension module for python) on a 500 MHz alpha (21364) and then tried it on a 433 MHz celeron. I was shocked when the alpha was only about 50% faster ”
I’m no Alpha expert, I’m new to them, but isn’t the 21364 like _really_ new? Three years ago I’d be surprised to see a 21264. My Alpha dates from 1997 or so, and it’s a 500 MHz 21164; sure you aren’t talking about one of those?
50% faster is still quite a bit faster, but then again, as everyone has said, it all depends on the compiler, which is why I have a grudge against C: you have to rely on the compiler to give you a good result.
That apple optimizes for the things I do, or that they optimize for the SPEC benchmarks? Luckily, my favorite manufacturer of hardware optimizes for the things I do daily. The last time I remember running the SPEC benchmark, I… well, I don’t think I ever have in my day-to-day chores.
But I will say this… I did an FTP transfer from an Athlon 1900 box to a 600 MHz G3 iBook, and I pegged the bandwidth-o-meter. Funny, a 700 MHz PIII cannot do this against the iBook. None of the machines were “tweaked”. Even more interesting is that I use the networking on my iBook every day. Seems Apple has more of a handle on a total-system approach.
I did this a few weeks ago, so bear with me:
Using the benchmark capability of dnetc, my 500 MHz G3 iMac was consistently faster than my 600 MHz Celeron.
As far as per clock cycle went, the G3 was about 15% to 20% faster than the Celeron per MHz.
Note that the dnetc program has very optimized executables; it uses AltiVec, MMX, SSE, etc.
Try it yourself and share your results with me and everyone else!
Thanks,
Chad
Did anyone else notice that the FP tests used Fortran, which required the use of a third-party compiler? Maybe a C-based FP test would have been better for Apple?
Or maybe not. I must admit, my Xeon/400 is much faster at FP stuff like video decoding than my G4/867. But, I like my Mac.
I remember when an old comparison between Linux and Darwin on the same hardware came out: Linux kicked Darwin’s ass running the same floating-point code (not to mention most other things, except network/disk stuff, I believe). I remember the people on the Darwin-dev mailing list attributed it to Darwin’s slow math lib. There was a reference to OS 9’s math lib being up to 6 times faster in some instances.
Anyways, think this might be the problem? I know gcc isn’t the best compiler for PPC platforms either, but that can’t be the entire problem.
Wish I could find the original thread…. But it was almost a year ago..
How timely. I’m a research scientist who spends a lot of time simulating. I’m currently using a small ’wulf cluster of Athlons (6 XP 1700s and 3 K7 1000s, to be exact) running under Linux. I typically write my own stuff in C/C++ and use MPI. Just two days ago I was interested in the speed of the new 1 GHz Mac, as I’m getting ready to build a new cluster (grant money!). I compiled a small sim on my office machine (466 MHz G4; Apple’s C++ compiler) to run on a single node, and compiled the same code on one of the Linux boxes (g++ 2.95.3). Everything was exactly the same (same makefile). Both boxes have sufficient RAM for it not to be an issue. I found the following:
1) Running OS X without the Aqua interface gives about a 5% increase in speed (login: >console).
2) For both the 1000 MHz G4 and the 1000 MHz AMD, the G4 took ~35% longer than the AMD to do the exact same computations.
3) for the same outlay in $$, I can build almost 3 AMD XP 1700 machines for the cost of a single G4.
I’ve spent the last few days trying to vectorize my sim app to take advantage of AltiVec. So far my results suggest that an AMD 1700 is still faster than the dual G4, even with AltiVec, although it is close. IMHO, the issue is this:
1) Cost: I can add more nodes using AMD than G4 for the same outlay.
2) Portability: Should I be interested in rewriting my code to take advantage of the AltiVec engine when that will force me to compile only for the G4? Or should I focus on writing code that can compile on SPARC/SGI/AMD/Intel, etc.?
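One common compromise for that portability question, sketched below with a made-up routine: keep a plain C version and gate the AltiVec path behind the `__ALTIVEC__` macro (which gcc defines under `-maltivec`, if memory serves), so the same file still builds on SPARC/SGI/x86.

```c
#include <stddef.h>
#ifdef __ALTIVEC__
#include <altivec.h>
#endif

/* Elementwise multiply with an AltiVec fast path and a portable
 * scalar fallback.  On non-PPC targets only the plain loop compiles. */
void vmul(size_t n, const float *a, const float *b, float *out)
{
    size_t i = 0;
#ifdef __ALTIVEC__
    /* 4 floats per iteration; classic AltiVec has no plain float
     * multiply, so vec_madd with a zero addend is the usual idiom.
     * Assumes 16-byte-aligned inputs. */
    vector float vzero = (vector float)vec_splat_u32(0);
    for (; i + 4 <= n; i += 4) {
        vector float va = vec_ld(0, a + i);
        vector float vb = vec_ld(0, b + i);
        vec_st(vec_madd(va, vb, vzero), 0, out + i);
    }
#endif
    for (; i < n; i++)            /* scalar tail / portable fallback */
        out[i] = a[i] * b[i];
}
```

The scalar loop is what every other compiler sees, so the vectorized path is an optimization, not a dependency.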
Don’t get me wrong, I’ve been using a mac in one way or another since 7.5.3. All the digital hub stuff is totally true, I still can’t sync my iPod with any of the AMD boxen… However, when it comes to the kind of computations that I typically do for research, it doesn’t make sense to use the mac platform at this time.
R. Dyer, PhD.
OK, in response to my own post: libm in the current distro of OS X is… archaic. I mean it’s freakin’ OLD! And all in straightforward C. Modification dates from 1993. Changelogs that point back to when it was in BSD, from ’96. Ouch… If libm is used for anything, a huge immediate performance drop on OS X is my guess.
BUT! Five weeks ago libm was replaced with LibMathV5 in CVS. Completely different code. Still C, but much faster.
I think it’s still missing some stuff… but I’m gonna run a simple test suite I made on the current libm, then upgrade and run the suite again. Betcha it’ll be a big difference. :)
Too bad it took so long for Apple to get around to this, though… :(
http://www.spec.org/osg/cpu2000/results/cpu2000.html
Am I crazy? I was unable to find anything about the G4 on this page, either from Apple or Motorola. This smells like plain FUD. I am using Macs and NTs for high-end compositing, and our aging dual G4s @ 450 MHz kick the butt of 1.4 GHz P3s. Something is wrong here :)
The G4 results are from the Heise article.
SPEC results are just that: SPEC results. They do not have to be produced on the same machine or in the same session. The SPEC benchmarks are all run on different machines, with different configurations, and even at different times.
That looks about on par with what I’ve seen. I’ve had a dual 800 MHz PIII for about a year. At the last price drop I picked up a dual 800 MHz G4. I use Carrara for 3D, which is not optimized for AltiVec or SSE, and clock for clock the G4 runs about 10% slower rendering the same scene; no FUD here. This is not to say there aren’t some tasks the G4 is really good at; for example, one of the scenes I used to compare them subjectively loads much faster on the Mac. I say subjectively because I didn’t measure it, but it was significantly different.
The part I don’t agree with is when he says the Mac is “so far less well-suited for scientific applications”. Hah! BS! When you write a scientific application you friggin’ use AltiVec!
>When you write a scientific application you friggn use altivec!
No. When you write a scientific application, you use F77 or (even better) F90, because that’s what everyone in your community knows and understands, because it’s very highly portable (possibly even more than C), and because that’s what gives the best results on most platforms.
JBQ
Beyond what everyone has said, I think that Apple does need to step up efforts to make real pro-line computers faster. Fewer bottlenecks and such, not just more processing power.
We need faster RAM (DDR?), better motherboards, better I/O in both motherboard and OS…. There are several places where Apple could optimize and get to a level fitting their price point.
Don’t get me wrong, I am a HUGE Mac fan. This x86 hardware can go out the window as far as I’m concerned. Given the choice, I am a PPC man… however, that doesn’t mean that I will just accept any piece of dog doo-doo that Apple gives us as being faster than similarly spec’d (and less costly) x86 hardware.
Yes, I want the experience of MacOSX, but speed is also really nice.
SPEC benchmarks just spit out naked truths.

In REAL life, everything is different… Ripping a complete CD w/ 192 kbps joint-stereo gogo in less than 3 minutes? No problem! Compressing DivX in less than 2 hours? No problem.

I tried a lot. Man, if you use Photoshop, Quark & Illustrator, stay on Mac. For ANYTHING else which essentially uses heavy FPU: for heaven’s sake, use a dual AthlonXP or PIII-Tualatin combo.

If you’ve got the money to buy a dual G4 GHz, USE dual Xeons at 1.9 GHz. You will definitely be on the safer side (performance-wise speaking).

If you want to impress Lady Chatterley, buy a Mac. Besides, the GUI is beautiful!!

BTW Eugenia, I still live on BeOS R5.03 Pro with 2 PIII-Tualatin 1.4 GHz on a Tyan SMP mobo (greeeeeaaaat to work with, and it impresses Lady Chatterley at once :)

bye!
I’ve never liked the G4 chip, even though it’s now running at 1 GHz. To me, it’s basically a G3 processor with slightly better floating-point performance and AltiVec capability. Because of this, I’m still using my 8600/200, which has a 604e processor, and I’m going to keep on using it until the G5 is released. I like AltiVec, but I want a processor that has much better SPEC integer and floating-point performance. The G5 should have better performance in these categories (unless they take the next-generation G4 and call it a G5, which means I won’t be buying a new Mac until the REAL G5 is released).
– Mark
I love reading benchmark and CPU comparisons, because they never say anything new or important. What is important about computing is that most modern computers and operating systems are highly inefficient, with OSes using around a third of most hardware’s potential, and hardware being designed mainly to beat the benchmarks that represent it, as opposed to being optimized to work with each other component of the computer.
Almost 20 years ago, Commodore’s Amigas were able to do NTSC video at full frame rate on 14 MHz processors; it took almost 10 years for QuickTime to come out on Macs and for AVI on Intel. Is this because their hardware platforms were less powerful? Apple used the same processors, and Intel processors were comparable. So the reality is that Apple and Wintel were more concerned with their share prices and the glitter and glitz of their platforms than with really improving the state of computing. Commodore, on the other hand, had brilliant engineers and bad management: great computers, but long out of business.

The best processor or the best video card is nothing without a tightly integrated hardware and software platform. Wintel manages to work via pure brute force (like a Ford Mustang or a Chevy Corvette) and uses way more resources to do things that Amigas could do 20 years ago. Apple uses a different technique, which is to use semi-Intel-compatible standards and integrate the software platform in a half-assed way. Neither way is optimal or right, and when the BeBox from Be came out, we got to see the beauty of great system design from the Amiga generation: at the time when Apple was selling PPC 604e-based machines that could just barely play a QuickTime movie without dropping frames at half NTSC resolution, a BeBox running BeOS was playing 4 QuickTime movies on a 603-based platform (more like a highly optimized Honda Prelude or Mazda RX-7).
Why is any of this important? Well, because the manufacturers of this stuff have us in permanent upgrade mode, when in reality 4-year-old computers have more power than necessary for almost everyone, but are so overloaded by MS bloatware that they fall apart miserably when one tries to type a letter.

You might ask if Linux is the answer. Well, no. Linux is great at what it does, which is being an amazing jack of all trades; it is able to deal with an amazing cacophony of standards, platforms, and uses. But is it a highly optimized hardware/software combination? Well, no. But it’s good enough and uses resources better than most of what is out there.

Do we have to suffer through this mess? Well, no again. We can vote with our cash and our voices and tell Intel, Microsoft, Apple, Motorola, IBM, and the rest of the cohorts to shape up and slim down or go to hell. We should support companies that really make amazing products, such as the former Be Inc., and not succumb to the layers of FUD and marketing smoke that envelop this industry.

I use a Mac and WinXP every day; I have grown accustomed to their bells and whistles, but I long for a smooth and succinct platform that uses amazingly well-designed hardware and highly optimized software. What we really need is to boycott any tech company that has a bigger marketing dept. than their R&D dept. The power is to the people, and the revolution will be emailed.
seabass
1. Most legacy code is Fortran code. A lot of new code that is based on Fortran libraries is Fortran code. Most new code with no Fortran dependencies is *not* Fortran. I think this overgeneralization is more accurate than the previous one, which stated that all scientific computing uses Fortran.
2. In a perfect world, all scientific code would be explicitly vectorized and perfectly optimized. Unfortunately, this is not the case. For the most part it is better to write code which is easy for the compiler to vectorize than to vectorize it explicitly. This is the general practice if you read Cray’s language documentation. Unfortunately, Motorola’s/Apple’s compilers don’t seem up to the task.
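“Easy for the compiler to vectorize” mostly means simple counted loops with independent iterations; a contrived contrast (my examples, not Cray’s):

```c
#include <stddef.h>

/* Hard to vectorize: each iteration reads the previous result,
 * a loop-carried dependence that forces serial execution. */
void prefix_sum(size_t n, const double *in, double *out)
{
    size_t i;
    if (n == 0) return;
    out[0] = in[0];
    for (i = 1; i < n; i++)
        out[i] = out[i - 1] + in[i];
}

/* Easy to vectorize: independent iterations, unit stride, no
 * data-dependent branches -- a vectorizing compiler can map this
 * straight onto SIMD hardware. */
void scale(size_t n, double a, const double *in, double *out)
{
    size_t i;
    for (i = 0; i < n; i++)
        out[i] = a * in[i];
}
```

Writing in the second style is what lets the compiler do the vectorizing for you, instead of hand-coding intrinsics per platform.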
>>We can vote with our cash and our voices and tell Intel, Microsoft, Apple, Motorola, IBM, and the rest of the cohorts to shape up and slim up or go to hell.<<
Well, you will have your second chance. I just had a discussion with a person, ‘AmigaGuy’, on MacCentral about Amiga’s future, and he had some really interesting things to say. He pointed out to me that there is a new Amiga machine on the way, the AmigaOne to be exact, from what I gather, and that they are currently optimizing AmigaOS 4.0 to run on the PPC architecture. Though the interface won’t change much, he said some new and really nifty things will be coming our way. Hopefully Amiga will get a second chance to prove your point! It would be nice to see Amiga jump back into the market full scale and prove their tech was worth it!!!
A friend and I had to benchmark dnetc on a demo 1 GHz G4: 20 gigakeys vs. 4.5 gigakeys from a single 1.4 GHz Athlon.
Anyone have any ideas about this one?
I have little confidence in these so-called “independent” benchmarkers.
But even if I believed the results, there are two points to consider: 1) OS 10.1 still has a lot of rough edges and needs more optimization. 2) If you go x86, your only choices for either speed OR stability are Linux or FreeBSD (I’ve used W2000 plenty and I am convinced it is not stable enough for ANY important application; it IS fast enough for routine office work, though), and I am too much of a wimp to want to take on maintaining a Unix system myself. I will HAPPILY stick with OS X/PPC.
Seeing some of the early Babylon 5 work, where the CGI was done using old Amigas, made me cry. For almost a decade I have been wanting a decent computer system which incorporated hardware and software into a kick-ass solution for media authoring (video and sound) without the excessive wallet-bleeding required courtesy of Apple and Wintel. The closest I came to that platform was through Be Inc.’s remarkable OS, but now I am wandering in the darkness again, wondering what the hell to do.

There is Win2K, which I now use as my primary OS, but on my aging hardware it can’t hold a candle to BeOS. Then there is Linux, and although some strides are being made to get XFS and low-latency kernels developed, I think there is still half a year before they become standard. Hopefully the GUIs on Linux will become more polished as well, although to be honest a Sawfish/GNOME/ROX combo does it for me. The problem is: when will software houses produce some decent software for this platform? They prefer the Windows/Mac platforms, where (at least in music production) more and more of them are becoming specialized hardware/software solutions, so the end user has to spend an arm and a leg to get their recording setup up and running.
I mourn the passing of BeOS and hope something will rise from the ashes but I am sick of waiting.
PS. I’m considering a CPU/motherboard/memory upgrade, but I need some help deciding.
Requirements are:
– Preferably a dual AMD system (can I use 2 Athlon XP CPUs in SMP?) (-:
– To be used with Hoontech DSP24 C-Port audio hardware
– Runs under Win2K Pro and Debian Linux
– Designed for a serious multitasking environment and media workstation
Advice and suggestions would be appreciated.
I’m thinking an Asus dual-AMD board with 512 MB DDR RAM and 2 Athlon XP 1800+ CPUs, but I’m not sure yet.
One of Apple’s big problems is that they only refresh their product lines once or twice a year. Dell, Compaq, HP, IBM, Gateway, and all other PC vendors are constantly upgrading their products. Sometimes Apple does something cool, like putting Firewire on a computer before any other vendor. Other times, we’re stuck with aging tech such as PC133 RAM and AGP/4x while other vendors implement faster schemes. Which means that Apple probably won’t be implementing USB 2.0, Firewire 2 (or whatever they call it), ATA133, and faster RAM until they come out with G5’s in July or (LORD SAVE US) next January.
>>Which means that Apple probably won’t be implementing USB 2.0, Firewire 2 (or whatever they call it), ATA133, and faster RAM until they come out with G5’s in July or (LORD SAVE US) next January.<<
I have been hearing about Apple talking seriously about GigaWire and RapidIO to push some latest-and-greatest tech. Don’t be surprised if this is in the next-generation Power Mac sporting a G5!
>>The closest I came to that platform was through Be Inc’s remarkable OS but now I am wandering in the darkness again wondering what the hell to do. There is Win2K which I now use as my primary OS but on my aging hardware it couldn’t hold crap compared to BeOS.<<
Join the ‘GE’ project to make sure the OBOS folks keep this type of high performance and true multimedia experience going. These people are talking about everything from the kernel to the GUI functionality; the more help, the better!
>>I mourn the passing of BeOS and hope something will rise from the ashes but I am sick of waiting.<<
I think with the OBOS project going the way it is, BeOS will rise again :)
>until they come out with G5’s in July or (LORD SAVE US) next January
This is really getting funny: the G5 rumors have been around since late 1999, and there is still _nothing_ that says the chip will ever appear…
Last autumn the G5 was rumored to be 64-bit, produced with *very good* yields at 1.6 GHz and reasonable yields at 2 GHz… This is really cool – Motorola and Apple have a chip that is much faster than anything else in the world, and they chose not to release it just to keep selling their old G4s :)
Give me a break; when it comes to fast CPU design, Motorola is almost going out of business and can’t keep up with Intel/AMD. If there is any such thing as a “G5” designation this year, it will almost certainly be a minor upgrade to the current PowerPC: 32-bit, running at 1.2 GHz, and about 20% faster than the current chip.
>>This is really getting funny – the g5 rumors have been around since late 1999, and there is still _nothing_ that says the chip will ever appear…<<
It’s about the same as us hearing all the hype about AMD’s wonderful Hammer (though I am excited for them)! So what is your point?!
Most of the comments so far seem to miss, minimize, or ignore something (or perhaps it’s just me). Three things are being tested: processor performance, OS performance, and system performance. OS X is also under the handicap of being the least finished/optimized OS used in the test. An unfinished OS is not an excuse; rather, it tells the reader that he or she must accept an additional gamble in choosing OS X.
How much overhead does OS X consume compared to current versions of BSD, Linux, Unix, or any version of Windows? I don’t know, but I would not assume it’s irrelevant. Further, the ‘Unix’ layers of OS X are not current with FreeBSD, which is now at 4.5. Under OS X, were ALL of the available CPU cycles devoted to the test? ‘renice’ comes to mind… There are some ‘funny things’ going on down in the kernel. Although the OS X kernel is not real-time, it does have features for low latency; depending on how they are implemented, they could negatively affect everything else. This does not mean it’s not a fair test (of the non-AltiVec portions of the CPU), since this affects the system as a whole, but it misleads you about “processor” performance as opposed to system performance.
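The ‘renice’ point can be made concrete. A minimal POSIX-shell sketch, where `sleep 1` stands in for some hypothetical competing background job (it is not part of any SPEC run):

```shell
#!/bin/sh
# Sketch: pushing background work out of a benchmark's way with renice.
# 'sleep 1' stands in for a competing background job (hypothetical).

sleep 1 &          # start the competing background work
bg=$!

# Lower the background job's scheduling priority (positive nice values).
# Only root may *raise* a process's priority with negative values, which
# is one reason an "idle" system can still steal cycles from an
# unprivileged benchmark run.
renice -n 10 -p "$bg" >/dev/null 2>&1

# ... the benchmark itself would run here at default priority ...

wait "$bg"
echo "done"
```

Whether the Heise testers did anything like this is unknown; the point is only that, without it, daemons and GUI housekeeping compete with the benchmark on equal terms.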
In my opinion, OS X is DEFINITELY not optimized yet. Or maybe we should say, “…it had better not be optimized yet!” This is so big an issue that Steve Jobs had to come out publicly and confirm that speed had to improve. Many of the comments here reflect that.
The article also calls into question Apple’s claim of ‘supercomputer’ status. If you’re not talking about vector processing, you have a point. However, Apple CLEARLY makes this claim in regard to AltiVec, while the SPEC test only measures FP and integer performance. I think this is called “comparing Apples to Oranges”. The debate over Fortran, compilers, math libraries, etc., is a separate (although valid) issue.
Be careful how much hay we make over this. Today, this test shows the advantage of Linux or Windows on x86 over OS X 10.1.x on a G4, and I wish to emphasize “OS X 10.1.x on a G4”. If you’re running workloads that directly exploit x86 FPU strengths, buy x86. Unlike horse races, the finish line is the day you purchase your machine. Today, it’s x86. Tomorrow?
>> It’s about the same as us hearing about all the hype on AMDs wonderful Hammer (though I am excited for them)! So what is your point?!
Well, I haven’t been claiming that you should use Hammer instead of current processors for comparisons – it’s always pointless to compare performance of current platforms with non-shipping products…
Still, Hammer is *a lot* more real than the G5. You can find all the CPU and instruction-set documentation at http://www.amd.com or http://www.x86-64.org, they have been showing the chip, and there is an x86-64 port of the Linux kernel.
Could you point me to some Motorola documentation about the “rumored” 64-bit G5 with double-precision AltiVec, or at least an official statement saying they intend to produce such a chip? Or even a Motorola statement saying they *might* produce such a chip? (I don’t consider Macintosh rumors to be official Motorola statements.) :)
If not, that means it will still be 6–12 months after the documentation is released before you can expect support in compilers and linkers. So even if such a chip did appear, you’d be lucky if your compilers supported it toward the end of 2003…