Everyone loves benchmarks, so here is your daily dose: Nvidia has just released the (pricey) GeForce FX 5900 and pretty much regained the speed crown from ATi (possibly until ATi's upcoming .13-micron card comes out). Nvidia benchmarks here, here, here, here and here. In the meantime, NWFusion has reviewed the Apple Xserve and tested it against a four-way Compaq web server.
BareFeats has some game benchmarks showing the speed difference between the Xeon and P4 and the fastest dual PowerMac available today.
Finally, AcesHardware shows some SPEC results for the AMD Opteron CPU. SPEC is the most respectable CPU-only benchmark. The Opteron is almost as fast as the P4 at 3 GHz, but nowhere near the Itanium in floating point. Moreover, it is very interesting to see (and disappointing for OSS) that Intel's compiler at 32-bit is faster than GCC at 64-bit, and that even AMD prefers Intel's compiler for its tests!
The GCC team might be disappointed, but the rest of the OSS community still has access to ICC, right? I haven't used ICC myself, but I might try it someday – Gentoo is my Linux distro now. I would like to see AMD put in some resources to help make GCC better, or at least as good as ICC. Apple/Motorola did, didn't they?
99.9% of OSS projects on Unix – and mostly on Linux – use GCC, so that is as disappointing for OSS as it is for the GCC team. ICC is only available for Linux and not for other Unices. FreeBSD supports it via its Linux emulation layer, but not natively.
SPEC is the most respectable CPU-only benchmark… on the other hand, it sucks at measuring how a server will actually perform. Servers are typically more limited (especially in SMP configurations) by memory bandwidth, and I really think that this is where the Opteron will shine (HyperTransport, or whatever they call it, helps). The Opteron has been doing well in benchmarks designed to measure server performance – not stellar, but a significant speed increase over Xeons. I think Tom's Hardware had some good numbers a little while ago.
>Servers are typically more limited (especially in SMP configurations) by memory bandwidth
In this case, Intel still has the upper hand. Their new CPUs and mobos will use faster RAM “just like that”. For the Opteron, moving to faster RAM is more difficult, as it will need some redesign of the architecture, I heard.
I regarded AMD as having the upper hand in SMP design. Since day one, Athlon MP systems have had separate memory buses feeding each CPU, while Xeon systems had only one, which each CPU had to share. Only recently did Intel do the same for their SMP chipsets.
Forgot to mention: Intel's upper hand wasn't really faster RAM but larger L2 and L3 caches, where applicable.
Yes – I was a bit puzzled by this – Intel's compiler working on AMD… now I guess that not all the optimisations ICC offers can be implemented on an AMD chip… but a few are… more so than GCC?
Am I right?
t
OK, I have to ask: why did NWFusion compare the quad-processor Compaq system against the dual-processor Xserve? I can't imagine that these two boxes would ever go head to head. The Compaq is much more expensive, uses zippity-fast SCSI instead of pokey IDE drives, is physically larger, etc. Was it just that they happened to have the other box around and wanted to compare the Xserve to some kind of PC?
Sincerely yours,
Jeffrey Boulier
@Eugenia: Your comments are entirely unwarranted based on the benchmarks linked to. The difference between ICC and GCC in the 64-bit results amounts to all of 5%. That's verging on negligible. Intel C++ is a very well-established compiler and has benefited from Intel's buyout of Kai. The GCC folks should be proud that they're so close in performance while remaining open and highly portable. A 5% performance difference doesn't do you much good if you're a PowerPC user and ICC doesn't generate PPC code.
Although, I must say that ICC is an excellent compiler. I use the personal version for my own code. It's extremely standards-compliant (comparable to GCC) and very GCC-compatible. Best of all, it has wonderful error messages, courtesy of the EDG front-end. It also has a very significant edge in numeric programs, because it can detect vectorizable loops and automatically emit suitable SSE code. Also, it still has a significant edge on the P4, because the P4 is a very different architecture, and Intel knows it better than anyone. For the vast majority of code, however, many of ICC's advantages really don't matter: 99% of code isn't heavily numeric, can't take advantage of auto-vectorization, and is written in C, so it can't take advantage of ICC's C++ optimizations.
PS> The reason AMD uses ICC is that a lot of optimization, especially C/C++ optimization, is processor-independent. The really processor-dependent parts are the later stages of the optimizer, like register allocation and instruction scheduling.
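[Editor's note: a minimal C99 sketch of the kind of loop the auto-vectorization point above is about. The ICC flag mentioned in the comment (-xW for P4-era SSE2 code generation) is from memory and should be treated as an assumption, not a claim about the exact options used in the linked benchmarks.]

    #include <stdio.h>

    #define N 1024

    /* Simple SAXPY loop: independent iterations, unit stride, and
     * restrict-qualified pointers so the compiler knows x and y don't alias.
     * This is exactly the pattern an auto-vectorizer looks for; with ICC,
     * something like "icc -O2 -xW saxpy.c" would typically turn the loop
     * body into packed SSE instructions (flag assumed from 2003-era ICC). */
    void saxpy(float a, const float *restrict x, float *restrict y, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        static float x[N], y[N];
        for (int i = 0; i < N; i++) {
            x[i] = (float)i;
            y[i] = 1.0f;
        }
        saxpy(2.0f, x, y, N);
        printf("y[10] = %.1f\n", y[10]);   /* expect 21.0 */
        return 0;
    }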
It's wonderful to see memory bandwidth increasing so quickly. A GeForce4 had around 10 GB/sec of memory bandwidth, and this is a nearly 3x increase. Heck, my GeForce4 MX 440 has 6.4 GB/sec of memory bandwidth, which is matched by the *system* memory bandwidth of the new Canterwood P4s. Memory bandwidth used to be a huge bottleneck (and still is), and it's good to see it being addressed directly. The memory bandwidth increases of the mid-to-late 1990s did not come *nearly* as fast as they are coming now.
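[Editor's note: for reference, a quick sketch of how these peak-bandwidth figures fall out of bus width and effective memory clock. The per-card bus widths and clocks in the comments are the commonly quoted specs, used here as assumptions, not figures from the linked reviews.]

    #include <stdio.h>

    /* Peak memory bandwidth = bus width in bytes * effective transfer rate. */
    static double peak_bandwidth_gb(double bus_bits, double effective_mhz)
    {
        return (bus_bits / 8.0) * effective_mhz * 1e6 / 1e9;
    }

    int main(void)
    {
        /* GeForce4 MX 440: 128-bit bus at 400 MT/s effective -> 6.4 GB/s,
         * matching the figure quoted in the comment above. */
        printf("GeForce4 MX 440: %.1f GB/s\n", peak_bandwidth_gb(128, 400));

        /* GeForce FX 5900: 256-bit bus at 850 MT/s effective -> 27.2 GB/s,
         * i.e. roughly 2.7x a ~10 GB/s GeForce4 Ti (specs assumed). */
        printf("GeForce FX 5900: %.1f GB/s\n", peak_bandwidth_gb(256, 850));
        return 0;
    }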
Where? Which link is it?
Don’t worry, το βρήκα (I found it)
Two pages: http://www.barefeats.com/pentium4.html
One page with application benchmarks and one with gaming benchmarks.
It is very clear in the story which link it is. I don’t understand why people don’t click the “read more” or they don’t bother READING the whole story before commenting.
I wrote to the author of the article, and he basically said that the answer was availability. They wanted a PC-platform score to compare with the Mac score, and that was the closest to suitable they had around. In case anyone else was curious, the PC system was a DL580 with four Xeons at 1.8 GHz.
Yours truly,
Jeffrey Boulier
I don’t understand why people don’t click the “read more” or they don’t bother READING the whole story before commenting.
I’ll take this opportunity to say I’m sorry. I had a bad case of the Mondays.
I don’t usually click the read more, because there isn’t anything to read unless the title is in red and is a story from this site.
I am sorry Eugenia.
Also, you might notice some garbage text above. That is because I was testing whether the site could support Greek. You know how you can change the input language in 2000/XP by pressing Left Alt + Shift? Well, that is what I did.
It works in the forums, but I guess that is because of the upgrades that were done to the forums.
Did you notice how well the Mac scored on the Quake 3 benchmark? This is because id spent a crapload of time optimizing Quake 3 for both PC and Mac; all the other games are slapdash efforts at a quick, inexpensive port. I know Macs ARE slower than PCs, and I'm not saying Macs are perfect or anything, but you have to realise that most Mac games are built for PCs and aren't exactly optimized or built for Macs.
Great article though, I enjoyed it thoroughly!
When I see how much each of these cards costs, I wonder why anyone would waste so much money on them.
If you are into gaming, you will be better served with a product from Nintendo, Sega or Sony (yeah, I know about that other company in Redmond too, but I prefer not to mention its name …). On the other hand, if you are interested in CAD, you have ample choice of professional cards. It seems to me that ATI and Nvidia are so focused on regaining the “3D champ crown” that they don’t even notice that few gamers can actually afford to buy their latest offering.
After all, do I really need to pay $500 just for something that will allow me to play some noisy, bloody, flashing, shooting entertainment?
Three points:
“After all, do I really need to pay $500 just for something that will allow me to play some noisy, bloody, flashing, shooting entertainment?”
Obviously you personally don’t.
“If you are into gaming, you will be better served with a product from Nintendo, etc.”
Console gaming and PC gaming are entirely different. You can't possibly think they aren't, or that you are able to speak for all gamers?
Lastly, if I had a nickel for every time I read someone questioning why anyone would pay X dollars for a video card, I'd be retired by now.
BTW, here's a hint: most gamers do NOT buy the top-of-the-line cards when they first come out.
The news has been copied over on Slashdot. Fortunately, I get my news here first.
Thanks Eugenia!
“After all, do I really need to pay $500 …”
New technology is *always* overpriced. Can you imagine $4700 for a PowerBook 170? Lots of people paid it; now they’re twenty bucks on eBay and new laptops are in the $1000 range. How about the more recent DVD players, HDTV, Trinitron…
Best Wishes
-Bob
It is always very interesting when some anti-MS guy comes around and suggests that gamers switch to game consoles. Guess what? Game consoles and PCs give EXTREMELY different experiences, especially in networked games. Maybe not so in the future, but certainly today.
Get used to it.
And considering Nvidia is profitable, it does seem that there are people buying $500 cards for “noisy, bloody, flashing, shooting entertainment”. If you don’t like it – guess what? You are NOT in their target market. Nvidia doesn’t have to create cards catering to every user.
And guess what? ATI and Nvidia both make professional GPUs, mainly for the mid-range CAD market. http://nvidia.com/view.asp?PAGE=quadronvs http://nvidia.com/view.asp?PAGE=quadro4 http://mirror.ati.com/products/builtworkstation.html
They may not get as much press as the consumer cards, but ATI and Nvidia certainly have them.
I think my next card upgrade will come in the form of a PCI Express card. Can't wait to see what they squeeze out of the new AGP killer.
>Did you notice how well the Mac scored on the Quake 3 benchmark? This is because id spent a crapload of time optimizing Quake 3 for both PC and Mac […]
Not trying to be a big Wintel fanboy, but if you look at PC game benchmarks you'll eventually notice that Quake 3 is one of the most processor-independent game benchmarks available (as was Quake 2, to a lesser extent). UT2k3 has higher processor requirements than Q3, and its performance scales more with processor speed than Q3's does. For example, the same video card on three different processors (of different speeds with the same architecture) will produce similar results in Q3 (unless you're using very slow processors), while it will produce a greater variance in UT2k3. A lot of that has to do with the extreme level of attention that Carmack gives to the graphics engine's interface with upcoming (and high-end current) video cards with each new game he develops.
The differences in performance could also be attributed to differences in the drivers for the cards, and the fact that the UT2k3 demo may not have been available quite as long for the Mac as for the PC.
Then again, I could just be bitter because UT2k3 crashed back to the desktop every time I tried to run it on my PC (maybe I'll try again now that a couple of patches have been released).
@Eugenia: Chipset manufacturers can disable the on-chip memory controller in favor of their own.