“The Power Mac G5 is a formidable machine, representing a giant leap in performance over the G4. But the 64-bit transition so far only represents a small step. Even though there’s not much of a benefit from 64-bit computing yet, this marks the beginning of a new era for Apple, where the 64-bit world will enable new capabilities for the content creation community,” the reviewer concludes in his benchmark article.
Well, it’s obvious that the G5 will trail the Opteron at this time. When the G5 came out, it was favorably competitive with the 2.0GHz Opteron. Now, AMD has 2.2 and 2.4GHz Opterons out, and it’s expected that the clock-speed advantage will give AMD an edge. The G5 and Opteron architectures seem to be comparable clock-for-clock, with the G5 having a bit of an edge overall. That means the G5’s overall competitiveness is predicated on IBM being able to match AMD’s increases in clock speed. Whether they can do this remains to be seen. They seem to have hit some snags (faster G5s were supposed to be out by now), but they might be able to pull it together in the long run.
I want my dual 3GHz Power Mac G5 NOW!!!!!!
The G5 was cheaper, at least…
What about comparing price/speed?
Hi,
I’m not usually a Mac zealot at all, but I don’t know what this bench is doing here for the Power Mac G5.
It is half BULLSHIT.
After Effects is the SLOWEST software you’ll EVER find for Mac OS X. It is NOT even properly supported anymore; the performance is abysmal.
Secondly, CineBench, while really better than AE performance on Mac OS X, isn’t really the best either.
It is very hard to compare these, honestly.
It would be better to compare Final Cut Pro against equivalent functions, for example.
I don’t understand it. Same thing: why use Premiere over Final Cut Pro?
If you want a credible benchmark, use applications that are SUPPORTED on the platforms. FCP is better supported than Premiere on the Mac, and Premiere is better supported on Windows.
Same with Motion and After Effects.
Get a clue: when you benchmark across different architectures, it’s imperative that you use the same code. There is no Final Cut Pro or Motion for Windows or x86, so it is impossible to get more accurate results than the ones the article presented.
That’s funny, since I recall people calling the GCC compilation benchmark garbage because it should have used ICC for the x86 arch.
If you give the best tool for the job on each platform, you are benchmarking what the USER will experience, not how well a company can tune code for one platform over another.
We’ve been hearing for ages that the x86 architecture is obsolete and PPC is more modern, elegant, etc. But why have these claims not translated into real benchmark victories? “But Intel spends soo much more on chip R&D and good fabrication processes, they simply spend their way out of the x86 drawbacks.” Ok, but what about AMD? I don’t think they’re that much bigger than the IBM or Motorola chip divisions. “But these new chips are internally just RISC chips that camouflage as x86.” So? If they can do that without performing worse than the “true RISC” PPC, why is being x86 compatible a disadvantage? Is there any real argument why PPC is really (as opposed to theoretically) superior to x86?
Jon, the problem here is that After Effects for Mac and After Effects for Windows share little to no code; it’s not a true cross-platform app.
Unix apps compiled with GCC are the closest thing to a cross-platform benchmark we can find.
After Effects and CineBench for Windows are compiled with Intel’s compiler, with SSE2 optimizations; on the Mac they use CodeWarrior’s compiler.
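To make the compiler point concrete, here is a minimal sketch (my own illustration, nothing from the article) of the kind of identical-source test that keeps the compiler variable fixed: one C kernel, one GCC version, and only the target-tuning flag changed per machine. The flags assume a GCC recent enough to know both chips (3.4 added -mcpu=970 and -march=k8).

/* saxpy_bench.c -- hypothetical identical-source cross-platform test.
 * Same source, same gcc version on both boxes; only the tuning flag
 * differs:
 *   G5:      gcc -O2 -mcpu=970 saxpy_bench.c -o saxpy
 *   Opteron: gcc -O2 -march=k8 saxpy_bench.c -o saxpy
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)    /* 1M floats per array */
#define REPS 200

int main(void)
{
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    int i, r;
    clock_t t0, t1;

    if (!x || !y)
        return 1;
    for (i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    t0 = clock();
    for (r = 0; r < REPS; r++)        /* y = a*x + y, repeated */
        for (i = 0; i < N; i++)
            y[i] = 2.5f * x[i] + y[i];
    t1 = clock();

    /* print a value so the compiler can't optimize the loop away */
    printf("checksum %f, %.2f s\n", (double)y[0],
           (double)(t1 - t0) / CLOCKS_PER_SEC);
    free(x);
    free(y);
    return 0;
}

A toy like this only measures one streaming kernel, of course, but at least the compiler is held constant.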
Motion vs. After Effects wouldn’t be a better comparison either, but Motion would probably win. We already know the level of optimization of Apple’s software…
What? Apparently you did not notice that the bench was done with a 2.0GHz G5 and 2.2 and 2.4GHz Opterons.
The G5 beats the Opteron at 2.0GHz, and it seems to me that the Opteron and G5 are very close, so a direct comparison can be made. As such, it is no surprise to me that a 2.4GHz Opteron can beat a 2.0GHz G5.
Wait for WWDC, and if Jobs is even half right about being at 3GHz after a year, then the G5 will be in the mid to high 2.xGHz range.
Something you must know: x86 and PPC both look very alike now. Read Ars Technica’s articles if you need more insight.
Other than that, yes, the G5 is competitive, but that doesn’t change the fact that the benchmark is worth nothing for the G5 itself. Xeon vs. Opteron, OK.
We don’t bench with unsupported old software on Windows, do we? So what about this damn hypocrisy?
“wait for WWDC and if Jobs is even half right about being at 3 GHz after a year, then the G5 will be in the mid to high 2.x GHz range. ”
If I had a dime for every time I heard people talking about how the G4 was going to break 500MHz Real Soon Now a few years back, I’d be richer than Jobs.
In other words, I’ll believe the scalability when I actually see it. Intel was predicting the P4 would scale to 5GHz, but now we’re seeing the P4 phased out entirely. That’s something to consider.
I’m not going to wait on Apple if AMD is offering something faster right now.
-Erwos
Yeah, but what would you rather have standing on your desk: a beige box or one of those slick G5 cases?
Seems to me those benchmarks don’t quite test a full range of CPU feature sets. I could be wrong about the benches, but they mostly seem to be streamlined, straight-line workloads. They should try some threaded testing, or maybe some compilations. You can’t run two benchmarks and claim a conclusion. With two benchmarks I can argue the Athlon 64 is superior to the P4, and with two different benchmarks I can argue the exact opposite.
Benchmarkers are such uncreative people. Why does one need a “benchmarking suite” to test a system’s ability? Can’t anyone write some quick C, C++, and Perl code and really play around with things themselves?
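In that spirit, here is a minimal sketch of the kind of home-grown threaded test the poster means (a toy of my own, not a real suite): two POSIX threads each grind through some arithmetic, which at least exercises both CPUs in these dual-processor boxes.

/* toy_bench.c -- a quick home-grown threaded test.
 * Build: gcc -O2 toy_bench.c -o toy -lpthread
 * Toy only: real suites control far more variables (memory, I/O, etc.).
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define WORK 200000000L

static void *spin(void *arg)
{
    volatile double acc = 0.0;   /* volatile so the loop isn't removed */
    long i;
    for (i = 0; i < WORK; i++)
        acc += (double)i * 0.5;
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    time_t w0 = time(NULL);

    pthread_create(&a, NULL, spin, NULL);   /* one thread per CPU */
    pthread_create(&b, NULL, spin, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* if both CPUs are really working, wall time should be roughly
     * half of what a single spin() takes on its own */
    printf("wall time: %ld s\n", (long)(time(NULL) - w0));
    return 0;
}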
I’m sure the G5 and Opteron are quite competitive with each other, and the Xeon probably doesn’t trail too far behind. However, the G5 still has the advantage of catering to a slightly less horribly outdated instruction set.
No…. actually, the reviewer concludes (about the G5): “It’s a great value and receives our highest recommendation.”
These benchmarks are meaningless. Let’s see AE up against Motion – both tested on a G5 to see what properly written software can do. Adobe has yet to step up.
The best benchmark would be to perform a complete task, like taking some raw digital footage and turning it into a movie. The software used would not matter, since what would be judged would be the final product. This is what happens in real life, not seeing how fast one can rotate a frame 10,000 times. This would judge how easy the user interface is, what software is available on the platform to do the job, and how fast the computer can perform the tasks that need to be done.
x86 compatibility has the following disadvantages:
1) Limits the instruction set. E.g., AMD’s transition to 64-bit increased code size quite significantly, from close to 3 bytes per instruction (on average) to 4.2 bytes per instruction, slightly more than the 4 bytes per instruction of most RISC CPUs.
2) Adds some decode stages in front of the main pipeline, but that is nearly entirely mitigated by things like the P4’s trace cache, which stores pre-decoded instructions.
3) Uses extra transistors. Not just for the extra decode stages, but to handle niceties in the interrupt and MMIO features of the x86 that don’t exist in most RISCs. For example, x86 mandates precise exceptions, while Alpha, POWER, and SPARC get away with imprecise exceptions in many places.
In all, none of these are huge issues. Those people who want to get rid of the x86 just for the sake of getting rid of x86 must have some sort of cleanliness fetish.
That would be a highly unscientific benchmark. In science, you try to minimize the number of variables. If you want to test the processor, you try to keep everything as similar as possible, and just test the processor. If you want to test the UI, you keep the machine the same and just test the UI. Throwing them all in the same pot won’t give you meaningful results at all.
It figures that people who don’t like the conclusion of a benchmark always seem to find some way to discredit or dismiss it.
Which 64-bit OS and which 64-bit software did they use for the PC? If the Mac didn’t win, it’s unfair.
Apple at least provides their OS. And you bunch of Mac zealots (right here and on /.) mostly boast about how great it is and how 64-bit is everywhere it matters.
Face it. The Mac lost.
BTW, there was no dual 2.4GHz Opteron in the test, so it wasn’t even the fastest PC.
BTW, the Opteron is definitely overpriced based on what this PC offers.
Dual AMD Opteron 248 Processors
NVidia Quadro FX1100 128MB graphics card
(2) 74GB 10,000 RPM SATA 150, 8MB cache drives, four port serial RAID 0
AMD-8131 HyperTransport PCI-X Tunnel
AMD-8151 HyperTransport AGP Tunnel
128-bit Dual Channel Memory Bus
2GB (4- 512MB, supports up to 16GB) ECC Registered 333MHz DDR DIMMS
(8) DIMM Slots
Dual Channel UltraDMA 133 IDE Controller
Four Port Serial ATA RAID (0,1,0+1)
6 Channel Audio
(1) 8x AGP Pro Full Length Slot
(2) Full Length PCI-X 64bit 133/100MHz Slots
(2) Full Length PCI-X 64bit 100/66MHz Slots
(1) Onboard 10/100/1000Mbs Ethernet Adapter
(4) USB Ports: (2) Front USB 1.1, (2) Rear USB 2.0
(2) IEEE1394 Ports: (1) Front, (1) Rear
(1) Front headphone jack
(1) Parallel port
(1) Serial port
(4) 3.5″ x 1″ Internal hard drive bays
(2) 5.25″ Exposed drive bays
1.44MB Floppy drive
460W Power supply
(2) 92MM, (1) 80MM Cooling fans
Tower Chassis (Optional 4U Rackmount kit)
Physical dimensions: 7.0″W x 17.0″H x 17.5″D
Except for the Quadro, it’s basically the same as mine, and I didn’t pay anywhere near $5500.
2) Adds some decode stages in front of the main pipeline(SNIP)
Note that the PPC970 decodes PPC32 into “FRISC” micro-ops, i.e., it also has a decode/crack stage.
Well, actually, you have a point… but…
Suppose Motion is faster than After Effects.
Which software would you buy: the software that costs less, is faster, and does the same job (Motion), or After Effects?
Now ask another question:
Which machine would you buy: the “slower” machine that runs the faster software (Motion) and does your work in less time, or the “faster” machine that runs the slower software and takes much more time?
Anyway… computers change every day; in a few months we will be seeing new machines that are faster than the machines of today.
I think people should buy the platform they like more; speed is not so important today… at least for ordinary people and freaks like you & me.
Possibly for Pixar, film studios & Virginia Tech speed is very important, but their work relies on speed…
Can you imagine a world without Nemo or Ice Age? NOOOOOOOOOOOOOO!
@Rayiner Hashem
Refer to http://www.infosatellite.com/news/2001/10/j221001amd_hammer.html
– Code size grows by <10%, due mostly to instruction prefixes
– Static instruction count SHRINKS by 10%
– Dynamic instruction count SHRINKS by at least 5%
– Dynamic load/store count SHRINKS by 20%
>Face it. The Mac lost.
>BTW, there was no dual 2.4GHz Opteron in the test, so it wasn’t even the fastest PC.
The 248 is a 2.2GHz CPU. That is faster than the G5s being tested. Neither the x48 nor the x46 was available when the G5 was released. I find it interesting how some people point to computers nearly a year newer than another and go, “Look! A faster computer! Your old computer is junk and the ads fraudulent!” The industry moves quickly. It is why Apple moved to IBM: Motorola wasn’t willing to keep up with the rest of the industry. IBM seems much more willing to try to keep pace with Intel and AMD.
It will be funny watching you whine when IBM releases faster G5 systems and Apple is again faster. I promise not to rub your nose in it, because a little while later AMD or Intel will make a faster system, and then IBM will… and so on.
Aren’t people tired of the stupid fights over who is faster yet? Darn zealots. They run ABOUT THE SAME. If it has the features you are looking for and runs the software you like, fine! For some, that will be a Mac; for others, it will be a PC. The world won’t end if one is a few seconds faster than the other for a brief moment in time. Grow up.
How you take these results depends on what you are testing.
If you want to test the CPU only, then yes, you want to remove as many variables as possible. On the other hand, if you are testing system performance for a specific application area, then looking at different software is perfectly valid, since the different software itself will be part of that system.
That said, the G5 missed an update cycle due to IBM’s fab problems; they should have been up to 2.5GHz months ago. IBM seems to have that fixed now, so I think we will see faster G5s in the not too distant future.
However, if the rumours are true the next PowerMac update may actually be using a different CPU, *not* a 970 / 970FX.
We’ll see next month…
There are a number of pro applications which work on both x86 and PPC. Why not benchmark those? After Effects is a start, but it’s really ridiculous to conclude that the Opteron makes a more powerful media workstation from After Effects performance alone.
How about an Avid Xpress DV benchmark? (These have been done, and the G5 won hands down) Or Cubase, Logic, or ProTools performance? Software synth performance of Reason?
Regardless, there are dozens of other considerations to make when purchasing a media workstation besides raw performance. Many pro apps that were designed for the Mac but ported to Wintel make extensive use of child windows and are quite cumbersome to use on Windows due to the obtrusiveness of the Windows “gray background” MDI implementation. Several pro apps can interoperate, such as Cubase and Reason through ReWire, and with an obtrusive gray background obstructing your view, taking advantage of these interoperability features becomes quite awkward.
@J.F.
The person could be referring to “as of May 2004”.
>It will be funny watching you whine when IBM releases >faster G5 systems and Apple is again faster.
AMD didn’t rest after the Opteron 150 and 250 releases, i.e., the “Athlon 64 3500+, 3700+, and 3800+”.
Refer to http://www.amdzone.com/modules.php?op=modload&name=News&file=articl…
@justsomebody
I still want an Apple G5 badly. Nothing found in today’s computers can even begin to compare to a Mac.
Today, I am still amazed at Panther and how great it is. I can’t wait for Tiger.
>In the real world, nobody gives a flying f**k about benchmarks.
What about EU government contracts?
Hmm, you say the Mac lost, yet the benchmarker says the Mac is the best value and recommends it over the Opteron.
I went into an Apple store the other day to look at getting a new PowerBook. I noticed a 2x2GHz G5 sitting there running 8 QuickTime movie trailers at around 320×200, another trailer running much larger (probably 640×480 or thereabouts), and iTunes with visualisation on (around 800×600).
Here’s the kicker. While all this was happening, I started using the Finder, expecting it to be really slow. It “felt” normal, responsive (for the Finder) and so on. I started up Word (maybe 1-2 seconds to load), then Excel (again, a couple of seconds), both of which worked fast and fine. If I had more time, I would have kept going till I found the limit.
As a Mac user, that convinced me to get a G5 over another G4.
Now, I haven’t tried this on the new PCs coming out now. If they get anywhere near this kind of performance (maybe they can do even better), that would impress me, not some benchmark figure which means very little to most people.
A car may be fast, but can it tow?
Neat trick, but BeOS was doing it a decade ago, with 1/10 the hardware power…
When the dual 2GHz G5 came out LAST YEAR, it was the overall fastest, period. It wasn’t for very long, and it certainly isn’t now, because they haven’t updated it since. Comparing now is ridiculous!
They are expected to update the G5 to 3GHz though, possibly within a month or two. It will be interesting to see how it compares.
a decade ago???
2004 – 0010 = 1994…
I don’t think BeOS had the capacity to play 8 videos at full frame rate plus a visualization; at that time there weren’t even machines with a Pentium II…
http://en.wikipedia.org/wiki/Pentium_II
Well, yes, and that is a testament to how well BeOS was written as a single-user, non-networking OS, but then you also have to figure in the fact that the video was very low resolution compared to today.
Oh, and if Windows could do that, then it should get kudos like OS X, but it can’t. Longhorn should be able to, and that is why I think Longhorn is going to give OS X a real challenge.
@coolkamio: Okay, fine, 0.8 decades ago. The 2nd-gen BeBox had dual 133MHz CPUs.
@gamma: BeOS was networked, and what does single-user have anything to do with anything? Be hardware had awesome media performance *because* of BeOS. Apple hardware has awesome media performance *despite* OS X. Aside from the fantastic CoreAudio layer, OS X is a plodding beast of an OS.
Well how about a Maya render test that clearly shows that a dual G5 with 4 GB of RAM still lags behind other systems. You can find this out by going to http://www.zoorender.com/ and see the “Benchmarks” test results provided by studios and individual artists. Maya and Mental Ray run on Windows, Irix, Linux, OSX and will use either single or dual CPUs depending on system specs. A similar test can be done at http://www.specbench.org/ which also indicates submitted results with cost per composite.
As for the comments that no one really cares about benchmarks: that is a useless comment in itself. Studios (animation, film, broadcast, games, graphics) and individual artists, for example, care not only about the stability of systems but also about speed. Time, after all, costs money. These benchmark results are important, especially considering Apple has been trying to make its new G5 systems more attractive to the film, games, and broadcast markets by comparing them to x86 systems.
Apple didn’t do themselves any favours by posting false G5 results for Shake and Maya, since the tests could easily be reproduced by individual artists and studios. No one should make excuses for those who attempt to mislead consumers with false advertising. Prior to making a purchase, it’s best to be an objective consumer, no matter who you buy from.
😛
I was a BeOS user back in the good old days, and I know what you are saying…
But I don’t think even a dual 133MHz PowerPC could play 8 videos at 320×240 + 1 video at 640×480 + a good visualizer, all at full frame rate…
Though it could probably play 3 videos at 320×240 and a few MP3 files at the same time, because I’ve already done that on a similar x86 machine…
Actually, what thavith describes is not that impressive, but the G5 does not saturate doing it, and a dual 1.42GHz G4 wouldn’t saturate either…
It’s true that OS X uses much more CPU power than BeOS, but it does much more…
You can say that you don’t want things like Quartz, a Unix core, all those useful frameworks for apps, transparent integration, interface improvements, FileVault, Rendezvous… well, basically everything that Mac OS X ( http://www.apple.com/macosx/ ) has and BeOS doesn’t…
But some users like me use those things, and like them. I think Apple has been doing really good work optimizing Mac OS X; the world is not designed specifically for one person…
If you have suggestions, the best you can do is tell Apple; they usually listen to their users…
I really liked BeOS, but it’s a defunct OS now. OS X is the best OS alive today…
“@coolkamio: Okay, fine, 0.80 decades, ago. The 2nd-gen BeBox had dual-133MHz CPUs. ”
So? Those were 133MHz 603s, hardly a powerhouse, plus the crappy video board. BeOS did not magically transcend the limits of physics, you know. It is/was a nice OS, but come on, let’s be real here. There is no way BeOS could do the same as a modern machine.
“@gamma: BeOS was networked, and what does single-user have anything to do with anything? Be hardware had awesome media performance *because* of BeOS. Apple hardware has awesome media performance *despite* OS X. Aside from the fantastic CoreAudio layer, OS X is a plodding beast of an OS.”
Well, it had awesome media performance according to whom? The demos that Be shipped? I guess that is why there were sooooo many media applications ported to BeOS, right?
MacOS had its growing pains, that is for sure, but you should understand that MacOS is able to reach a wider audience. The multiuser thingie that you seem not to appreciate makes perfect sense for a server environment. It has a wonderful set of APIs, and things like Quartz put it ahead of BeOS IMHO. I have developed for both environments, and the NeXTish environment that MacOS inherited makes it far better. But that is because I am partial to Objective-C over C++.
Overall BeOS was a nice experiment, and MacOS I believe has surpassed BeOS in almost every aspect.
“Yeah, but what do you have rather standing on your desk– a beige box or one of those slick G5 cases? ”
Please, why the heck do people still call them beige boxes? I don’t think any of the top OEMs have made a beige box in years. Most seem to be black these days. But look at the Sony or HP cases; they are pretty damn slick looking. A G5 is also nice. But try to find a new top-make beige box. There are plenty of x86 boxes out there that look very slick in their own right.
It’s not really a matter of OS X having features that require a speed hit to implement. A Unix core doesn’t need to be slower than a non-Unix core. Eye candy doesn’t slow anything down during benchmarks, etc. OS X, largely as a result of the fact that Apple needed to take huge shortcuts to get it out in time, simply has a relatively slow core. As such, the demo with the multiple videos isn’t really impressive, because it’s simply a matter of vastly more powerful hardware, not a particularly fast OS.
Actually, all Unix cores use more CPU power than the BeOS core…
What I’m really saying is that it’s easier to build an ultra-optimized new core from scratch than to work with an old, already existing “bloated” one…
Obviously, eye candy doesn’t need to slow down benchmarks, but only if the interface is idle. In real-world work, it slows things down…
All those textures, effects, images, antialiasing… consume CPU and memory power…
I’ve already said that running 9 videos on a G5 isn’t impressive, but in no way will a demo like this consume 100% of both CPUs…
I doubt it requires much more than 75% of one CPU… but I don’t have a G5 at hand (G5 owners, speak up…).
Who says that OS X is faster than BeOS? Anyway, you will see worse and much slower operating systems than Mac OS X in 2006.
The majority of BeOS developers are now working at Apple, so we will see improvements, sure… wait until WWDC for the first Tiger betas.
BeOS was really good at speed and video media, but it’s dead; accept it…
It’s easy to say that an OS like Mac OS X is not particularly fast… build something better! I would be happy to buy/try it…
Well, since it runs on Linux, set up a dual 2GHz G5 and a dual 2GHz Opteron, both running 64-bit Linux and with max memory. Then run the Maya render bench, compiled for each platform with GCC, or use ICC for the Opteron and IBM’s compiler for the G5.
I honestly don’t know if FCP is going to be so much faster than Avid or Premiere, or Motion faster than AE. I really would like to see a benchmark of that. I think it would put a lot of questions to rest.
Did you see the clip about Industrial Light and Magic using a G4 Mac and AE to do the special effects for Van Helsing? I thought only super-fast Opterons could do cool things like that.
BTW, I’m a Mac user for the UI, not the MHz.
The dual 2GHz G5 was smashed in nearly all tests by single-CPU Pentium 4 Extreme Edition and Athlon FX-51 CPUs in testing done by MacAddict and Maximum PC (they are sister publications and worked on the tests together) last November or so.
Fact is, a single-CPU Pentium 4 Extreme Edition, Athlon FX, or Athlon 64 will annihilate the Macs, and you can outfit a machine for about 1/2 to 2/3 the money.
A way-overpriced Dell or Boxx workstation is not needed to achieve the defeat of the best Macs.
1/2 to 2/3 the money to slay a Mac? Uh, and run a supercomputer beater, eh? Look at it this way: you can keep your x86 boxes. I don’t like the look, the quality, the resale value, or any of the OSes in them. Linux is very powerful, in a tweaker kind of way. XP is great, if you like worms and spyware. But they don’t come with OS X (not yet, anyway!!).
The thing I like most about my G5 is OS X. Panther rocks: I get way more work done, with way fewer hassles, way easier. Period. And with Fink, I can run GNOME, KDE, and plenty of other Linux programs on my nice-looking 64-bit G5. Power. Freedom. Choice.
So a nearly year-old G5 is bested by much newer PCs; is anyone really shocked? I still love my G5 and expect to own it for a good while yet. Even when the 3GHz models come out, my G5 rocks ass. 🙂 I’ve still got a faster computer than most people, and arguably faster than I actually really /need/.
>So a nearly year old G5 is bested by much newer PCs, is
>anyone really shocked?
There’s nothing new about the Opteron 148 (S940, 2.2GHz), i.e., it’s just the “server certified” version of the Athlon 64 FX-51 (S940, 2.2GHz). The Athlon 64 FX-51 was released around Sep 2003.
>When the duel 2Ghz G5 came out LAST YEAR, it was the >overall fastest, period. It wasn’t for very long, and it >certainly isn’t now, because they haven’t updated it >since. Comparing now is ridiculous!
Note that the recent Opteron releases were the 150, 250 and 850. The Opteron 150 (2.4GHz) is just a relabeled Athlon 64 FX-53 (2.4GHz).
There’s nothing new about the AMD K8 Socket 940 @ 2.2GHz (stepping C0), i.e., the Opteron 248 appeared around December 2003. Refer to http://www.anandtech.com/IT/showdoc.html?i=1935
>RISC is better than CISC!
At the micro-architecture level, modern x86 executes RISC-like instructions, i.e., fixed-length micro-ops, more than 8 micro-architectural registers, etc.
No, the Opteron isn’t new; that’s not what I meant. Anyway, concerning the benchmarks, I wish they were broader. As a more even denominator, why not use Linux, which runs on both in native 64-bit mode, and benchmark various apps compiled with GCC? Kernel compiles, Apache benchmarks, whatever. It would certainly be more even than Windows vs. Mac OS X. Even if they didn’t do that, they could at least use more apps, or supported apps; isn’t Adobe abandoning After Effects for Mac OS X? Let’s revisit this once the 3GHz G5s are out, against whatever the current Opteron is then. 🙂 I wonder if the new G5s will have PCI Express; that would be nice.
That the G5 has a CUT-DOWN version of the 970 is true, and all the Mac zealots can do a real switch and get the OEM versions of the PPC platform (motherboards/logic boards are coming from the likes of ASUS and Foxconn in Q1 ’05, that I know of).
And the fact that the G5 was cheaper… WHERE? They turned out to be COMPARABLY priced; that means not the same platform, not the same price, but close.
This is a worthless discussion anyway.
> I wish they were broader.
One may have to factor in the article’s targeted audience.
>However as a more even denominator they’d use Linux
Note that Linux wouldn’t be representative of the x86 desktop OS installed base.
>Lets revisit this once the 3 ghz G5s are out
Seems to be a repeat of January 2004’s promise.
>and whatever is the current Opteron then
Typical benchmarks don’t work like that, i.e., they’re a snapshot of today’s solutions.
I have a dual P3 (800MHz clock w/ 133MHz memory).
I just had it playing 4 videos at 320×240 (one was wider and narrower, 400×220) and 1 at 720×480. Konqueror was responsive, but I was unable to play 5 of the 320×240 videos (might have been an issue of hard drive seeking, but I’m not going to waste hours testing if I’m not going to benchmark things, which I’m not planning on doing). These were MPEG-4/DivX-type files with either AC3 audio copied from DVDs, or MP3 audio if re-encoded or captured from TV (which one was).
With 5 videos (4 small, 1 large), Konqueror and Konsole were responsive. With 6, they stuttered badly. So I would expect a Mac with either processor, having more clock speed, to be able to do all that.
Is anyone willing to do some FAIR benchmarking?! By this I don’t mean vendor-tweaked systems; I mean actual people willing to use their own hardware to run programs that are agreed upon by a group. Say, encoding with MEncoder being one (which would also allow certain instruction sets, mmx, sse, altivec, vis, to be turned off to see the difference in performance). You have the graphics renderers of all types that hopefully run on different OSes. Then there are things like the scientific computing programs that take hours or days to run, even on 2x1.3GHz Athlons.
Then let’s look at TASK benchmarking. Professionals, who use whatever programs they want on whatever platform they care to, and look at their times, statistically. Photoshop is better than Gimp in a professional’s hands; let’s test this, among so many examples. With tasks such as “touch up this photo,” with some criteria, or “take this video and produce a DVD of it, with chapters at certain places.” And of course the BOFH test: rm -rf /home/ or others (deltree). (That one should be optional.)
The idea is to create a large database of these numbers, with each thing listed. So it’s easier to remove red-eye in Photoshop, but easier to Gaussian blur in Gimp, or some such, and both those tasks are faster on $CURRENT_FASTEST_PROCESSOR than $NOT_CURRENT_FASTEST_PROCESSOR.
Now why it won’t work: 1) I don’t have the money to buy the hardware; 2) see number 1, but replace “I” with “my friends & associates”; 3) companies don’t want to see this happen, otherwise Apple would be forced to change its advertising claims, and Intel, and AMD, and IBM; 4) professionals like to be paid, which ties into #1 & #2, though it might be done by some for the recognition; 5) EULAs & their DO NOT RELEASE BENCHMARKS clauses. And last, I’m lazy; it’s easier to bitch about it than navigate the attempts at cheating, the platform partisanship, the ‘Intellectual Property’ crap, and so on and so forth (not to mention a heck of a lot less liability, sadly).
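For what it’s worth, the proposed results database wouldn’t have to be fancy to get started. Here is a minimal sketch (purely hypothetical; no group has agreed on any of this, and the file format is my invention) that reads a flat file of “task,platform,seconds” lines and prints the average time per task/platform pair:

/* results.c -- hypothetical summarizer for a flat benchmark-results file.
 * Each input line: task,platform,seconds   e.g.  redeye,G5-2.0,41.2
 * Usage: ./results < results.csv */
#include <stdio.h>
#include <string.h>

#define MAXKEYS 256

int main(void)
{
    char task[64], plat[64], key[132];
    char keys[MAXKEYS][132];
    double sum[MAXKEYS] = {0};
    int cnt[MAXKEYS] = {0};
    double secs;
    int nkeys = 0, i;

    /* read "task,platform,seconds" triples from stdin */
    while (scanf(" %63[^,],%63[^,],%lf", task, plat, &secs) == 3) {
        sprintf(key, "%s/%s", task, plat);
        for (i = 0; i < nkeys; i++)
            if (strcmp(keys[i], key) == 0)
                break;
        if (i == nkeys) {
            if (nkeys == MAXKEYS)
                continue;            /* table full: skip the record */
            strcpy(keys[nkeys++], key);
        }
        sum[i] += secs;
        cnt[i]++;
    }
    for (i = 0; i < nkeys; i++)
        printf("%-40s %8.2f s (n=%d)\n", keys[i], sum[i] / cnt[i], cnt[i]);
    return 0;
}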
>So I would expect a Mac with either processor having more >clock speed to be able to do all that.
Note that other factors could influence the overall performance, i.e., GPU acceleration (e.g., MPEG-2 decoding), the CPU’s FSB bandwidth, memory bandwidth, amount of memory, HD throughput, chipset efficiencies, etc., not just the PPC’s clock speed.
> Is anyone willing to do some FAIR benchmarking?!
Such a request may lead to “point scoring” as in 3DMark’s forums (just be careful). As for benchmarking, all I can say is that it’s competitive with Barefeats’ G5 gaming benchmarks, and the cited video files’ collective megapixels are not an issue.
Did you even go to the Zoo Render site and look at those benchmarks? Obviously you didn’t, since you would have seen that the G5 systems lagged behind other systems. This test was first suggested because of confusing benchmarks posted by Apple that conflicted with independent tests. I believe this points to several conclusions. 1: Apple fabricated the benchmarks, which a few Apple sites used as reference. 2: I don’t believe the G5 64-bit proc is all bad. What I do believe is that Apple misleads artists and studios into believing Maya and Shake will perform better on their systems. That’s difficult to swallow considering the results below. This may be in part because no G5 is ever sold with a professional 3D graphics card suited for DCC work (i.e., FireGL, Quadro FX, Wildcat), which is why there is now a petition in the Apple discussion forum for DCC cards to be sold with G5 systems. 3: Don’t take at face value what Apple claims without doing a little research yourself. After all, it’s your money you are spending, so you may as well spend it wisely.
Scene 1:
-Dual Opteron 244 (1.8GHz) running SuSE Linux 9.1 (time=0:46).
-Dual R16000 Tezro (700MHz) running Irix 6.5.21m (time=1:05).
-Dual G5 (2GHz) running OSX 10.2.8 (time=1:17).
Scene 2:
-Dual Opteron 244 (1.8GHz) running SuSE Linux 9.1 (time=1:05).
-Dual Opteron 246 (2GHz) running WinXP (time=1:06).
-Dual G5 (2GHz) running OSX v10.3.2. (time=1:25).
“If they can do that without performing worse than the “true RISC” PPC, why is being x86 compatible a disadvantage?”
People only benchmark one thing, forgetting other circumstances. x86-32 is a hack on a hack on a hack, which has several disadvantages. For example, it can’t handle much throughput…
Download the latest version (2.5 as of today) of POV-Ray http://www.povray.org/download/
processor   speed    1 job   4 jobs   compiler
Opteron     1.8GHz   324s    648s     gcc 3.4.0
Athlon MP   2.1GHz   373s    751s     gcc 3.4.0
Apple G5    2.0GHz   440s    895s     Absoft
An optimal set of compiler flags was used to build each executable. IBM’s proprietary PPC970 compiler was used with optimization flags to produce the fastest-running executable on the G5.
All machines are dual-processor.
— benchmark.ini —
Width = 320
Height = 240
Bounding_Threshold = 3
Verbose=On
Input_File_Name=benchmark.pov
Output_to_File=false
Correct me if I’m wrong, but doesn’t After Effects only use one of the two G5 processors? I can’t find the link, but I remember a benchmark showing almost no difference between a single 1.8GHz G5 and the updated dual 1.8GHz G5 when using After Effects.
As further food for thought, did anyone read the Popular Mechanics review of G5 benchmarks? PM wasn’t happy with Apple’s benchmark marketing campaign, but eventually praised the G5 PowerMacs for their incredible speed…
To wit, and I quote:
“Not being able to run SPEC tests, we turned to BLAST and HMMer, which are DNA and genome-sequence matching tests, as well as to Bibble, a batch image-processing application. The problem is that these tests do not run on Windows XP. In frustration, after running the SPEC tests on the HP xw6000 workstation, we installed Linux on the HP, which allowed us to run the new tests. And we were surprised. The G5 was 59.5 percent faster than the HP at processing 85 high-resolution color photographs totaling 684.6MB of data. In the HMMer tests (61.3MB of data), Apple was 67 percent faster than the PC and under BLAST (32.8MB), Apple was 85.9 percent faster. These results are in line with those now published on Apple’s Web site.”
“So a nearly year old G5 is bested by much newer PCs, is anyone really shocked?”
I’m not amazed, because the fact is that the G5 tested is a year old; was it originally tested against the 2.2GHz Opteron? It beats the 2.0GHz Opteron, which is more in the G5’s current class. The discussion is meaningless given the current age of the G5, but the G5 is the better value.
If history is any guide, this happens all the time: AMD leapfrogging Intel and so on. So it shouldn’t be a big surprise. Intel recently hit a wall with its current crop of processors, so what’s the difference if others have too?
I would have liked to see other cross-platform wares tested, such as Avid or other horribly ported Adobe wares.
This test should have come out a year ago.
Whoa, do you have a link to that? I would love to bookmark it for future benchmark reports…
It is funny how that bench (by a very, very respected technology magazine) was not reported here… HMMM.
found it!!
http://www.popularmechanics.com/technology/computers/2004/4/desktop…
I was notified by Adobe that After Effects does take advantage of the second CPU, and also of Hyper-Threaded CPUs, during rendering. This is the same as Maya’s default Alias renderer and also Mental Ray, which is integrated into XSI and Maya or offered as a stand-alone version. Shake, which runs on OS X and Linux, also takes advantage of dual-proc systems.
I think the main problem with the dual G5 is not the processors and not the OS but the graphics card used. Apple continues to offer only low-end cards suited for gaming instead of high-end cards suited for digital content creation (DCC). Software such as Maya, Mental Ray, and Shake can all take advantage of added features found only in high-end graphics cards. No matter how good the OS is or how powerful the CPU is, if the graphics support is lacking, your projects will suffer.
What I find confuses consumers is seeing Apple (Shake) and Alias (Maya) supporting those applications for use with ATI Radeon and NVIDIA GeForce FX cards. If you read some of the forum sites (3D and 2D), for example at Alias or Highend, you will see a continual stream of complaints about slow performance, crashing, GUI display issues, etc. When Nothing Real owned Shake, they only supported high-end graphics cards. When Apple acquired Shake, they stated it supports low-end cards on OS X, but on Linux it still requires high-end cards. I guess Apple believes artists and studios would never pick up on this. To excuse this fable, I’ve noticed Apple resellers, when questioned about the difference, state that it’s because OS X is designed so well. Ah… okay… so that’s it… not. Get the shovel, boys and girls, because the B.S. is piling high around Apple HQ.
Actually, gankaku has a point: After Effects is not fully optimized to take advantage of the second processor…
Or at least version 5.5 isn’t…
http://www.creativemac.com/articles/viewarticle.jsp?id=16880
I don’t think version 6.0 has been optimized either, because it’s not 2x faster than 5.5… so…
I would like to see the same benchmark using the trick at CreativeMac…
And Dark_Knight, obviously in the benchmarks at ZooRender (Maya) & POV-Ray the graphics card matters more than the processor.
PLEASE APPLE HEAR THIS: WE REALLY NEED PRO-GRAPHICS CARD SUPPORT IN MAC OS X (Fire GL, Quadro FX & 3D Labs WildCat)
Anyway, I think Apple is already working on that; there are rumors of PowerMacs with a FireGL as a BTO option… We will see at WWDC. I can’t wait.
“IBM’s proprietary PPC970 compiler was used with optimization flags to produce the fastest running executable on the G5.”
OTOH, you haven’t used a proprietary compiler on the AMD computers. Why didn’t you install Linux + GCC on the G5? (And even then it isn’t objective)…
> It beats the 2.0 optron which is more in the G5’s >current class.
It depends on the benchmark (e.g., Barefeats’ Cinebench, Photoshop, and games) and on C0 vs. C1 stepping: the Opteron 246 (2GHz) uses registered RAM (C0: DDR333), unlike the G5’s unregistered RAM (DDR400). If one has to compare setups with similar components (i.e., unregistered DDR400 vs. unregistered DDR400), it would be against Socket 939 based Athlons (planned for release in the next week or two).
PS: C1 should be CG.
>found it!!
Note that the HP box was a dual 3.06GHz Xeon with a crippled 533MHz QDR bus. Performance deltas from the Xeon may not be applicable to Opterons.
It’s Xeon 3.06GHz based, not Opteron 2.2GHz based.
>”Computing technology evolves at an astounding speed. AMD >now has its Athlon 64-bit chip”
Note that AMD’s K8s were released some time during April 2003. Didn’t Popular Mechanics know that the Opteron 14x is simply the Athlon 64 FX relabeled (leveraging the existing product name, i.e., “Athlon”)?
“We’ve been hearing for ages that the x86 architecture is obsolete and PPC is more modern, elegant, etc. But why have these claims not translated into real benchmark victories?”
It seems to me you are really missing the point. I switched from PC to Apple last fall because of the G5, not because of benchmarks or any other mythical method of comparing apples and oranges. I switched because at last there was a system fast enough to run the software I needed (and all the software I need is just as available on the Mac as on the PC; different software, but it does the same job), and I finally had the stability I wanted, so I could spend my time on my projects instead of fixing the little problems that always seem to creep into Windows.
Once you reach a certain speed any system is fast enough for most users. Personally, I can’t see the difference between 120fps and 240fps and I doubt most people can.
Bill