Intel will come out with a server chip next quarter that adds 64-bit processing power to its current x86 line of processors, the company’s chief executive said Tuesday. In the meantime, Advanced Micro Devices is adhering to the less-is-more theory as it begins shipments of low-power versions of its Opteron processor for servers.
How long did it take them?
Intel’s approach is compatible with AMD’s, the representative said. “There will be one operating system that will support all (64-bit) extended systems,” the representative said.
I’m certainly glad Intel decided to make AMD64-compatible processors, rather than yet another incompatible standard. It’s almost entirely certain this was the work of Microsoft. Hopefully we’ll see Linux and FreeBSD running on these systems soon, and support in the Intel C/C++/Fortran Compiler for this architecture. Lack of good compilers is one of the primary obstacles hindering the adoption of the Opteron (at least in the scientific computing community).
Does AMD get royalties for Intel using their design or technology?
Snake
http://arstechnica.com/news/posts/1076956744.html
According to Ars Technica, “Intel can take and use AMD64 without having to pay a penny for it.”
.. Apple (IBM) and AMD were completely destroying Intel in the cpu-speed race up to this point with the G5 and the Opteron. I’m sure someone got in trouble for pushing Intel towards the position that 64-bit computing is a useless thing.
They better hurry though.. Apple will have its dual 3GHz G5 out at the end of the summer (and it might have the HyperTransport 2.0 spec mobo) and the Opterons aren’t standing still either.
“.. Apple (IBM) and AMD were completely destroying Intel in the cpu-speed race up to this point with the G5 and the Opteron. I’m sure someone got in trouble for pushing Intel towards the position that 64-bit computing is a useless thing. ”
No they weren’t. The Opteron is *competitive* with the P4/Xeon and both x86-derived chips benchmark faster than the G5. Neither IBM nor AMD is *completely destroying* Intel. Get real.
Apple (IBM) and AMD were completely destroying Intel in the cpu-speed race up to this point with the G5 and the Opteron.
“Up to this point” ? Measured from when, 6 months ago ? Forgotten the last twenty years, have we ?
Not to mention neither the Opteron nor the G5 “destroy” the intel CPUs by any rational definition of the word “destroy”.
I’m sure someone got in trouble for pushing Intel towards the position that 64-bit computing is a useless thing.
I don’t seem to recall intel ever saying such a thing.
Added to that, for the overwhelmingly vast majority of customers, it *is* a useless thing. That is, it offers no advantages whatsoever over 32 bit CPUs.
They better hurry though.. Apple will have its dual 3GHz G5 out at the end of the summer (and it might have the HyperTransport 2.0 spec mobo) and the Opterons aren’t standing still either.
Amazing how Apple has competitive performance in a minority of its lineup for the first time in several years, and suddenly the stagnation preceding it never occurred. (The real irony here is how much the Mac zealots like to lay the whole “1984” thing on the PC community, when theirs is probably more like the society in the novel.)
/*grumble* Bloody fanboys *grumble*
“”Up to this point” ? Measured from when, 6 months ago ? Forgotten the last twenty years, have we ? ”
20 years, what kind of weed have you been smoking? x86 chips were hardly competitive until the PentiumPro came along.
“Amazing how Apple has competitive performance in a minority of its lineup for the first time in several years, and suddenly the stagnation preceding it never occurred.”
I assume every machine DELL sells is a 3+GHz dual Xeon machine, right? After all, the high end must be what every other PC vendor sells, if Apple is in such a bad position as being the only vendor whose high-end offerings are a minority of its lineup.
No they weren’t. The Opteron is *competitive* with the P4/Xeon and both x86-derived chips benchmark faster than the G5. Neither IBM nor AMD is *completely destroying* Intel. Get real.
Competitive? Is that a euphemism for crushing?
Even if you look at a purely synthetic benchmark like SPEC, which Intel is known to optimize its compiler for, the Xeon is getting crushed. A 3.2GHz Xeon with a 533MHz FSB SPECs at 1289 / 1230, whereas a 2.2GHz Opteron SPECs at 1353 / 1309.
The best Xeon systems use a shared 533MHz QDR FSB. That means that at saturation, each processor gets approximately 266MHz of the QDR bus (or 66MHz of raw clock speed). With the P4 constantly hitting main RAM when its trace cache gets tainted due to a mispredicted branch, this is simply not enough memory bandwidth to keep the P4’s ridiculously long pipelines full, and the processor sits stalled for dozens of cycles at a time as instructions are loaded, decoded, and cached from main memory. Is it any surprise such an architecture isn’t competitive against a processor like the Opteron, where each processor has an onboard dual-channel DDR400 memory controller (for up to 6.4GB/s of memory bandwidth) and communicates with the others via 6.4GB/s HyperTransport links?
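For anyone who wants to sanity-check those figures, here is a quick back-of-the-envelope sketch (Python; it assumes a 64-bit-wide data bus, a two-socket box, and theoretical peak rates only, so real delivered bandwidth will be lower):

# Rough peak-bandwidth arithmetic for the two designs discussed above.
FSB_CLOCK_MHZ = 133        # raw clock behind a "533MHz" quad-pumped (QDR) bus
QDR = 4                    # four transfers per clock
BUS_WIDTH_BYTES = 8        # 64-bit data path (assumed)
SOCKETS = 2                # two-way Xeon box (assumed)

fsb_mt_per_s = FSB_CLOCK_MHZ * QDR                     # ~533 MT/s, shared by all CPUs
fsb_peak_gb = fsb_mt_per_s * BUS_WIDTH_BYTES / 1000.0  # ~4.3 GB/s total
per_cpu_mt = fsb_mt_per_s / SOCKETS                    # ~266 MT/s per CPU
per_cpu_raw_mhz = FSB_CLOCK_MHZ / SOCKETS              # ~66 MHz of raw clock per CPU

# Opteron: each socket has its own dual-channel DDR400 controller
opteron_gb = 400 * BUS_WIDTH_BYTES * 2 / 1000.0        # 6.4 GB/s per socket

print(f"Shared FSB: {fsb_peak_gb:.1f} GB/s total, about {per_cpu_mt:.0f} MT/s "
      f"({per_cpu_raw_mhz:.0f} MHz raw clock) per CPU at saturation")
print(f"Opteron: {opteron_gb:.1f} GB/s of local memory bandwidth per socket")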
Don’t like my analysis or synthetic benchmarks? Do you trust Anandtech to do fair real-world benchmarks? Check this out; it’s a bit dated but still pertinent:
http://www.anandtech.com/IT/showdoc.html?i=1935&p=9
And Anandtech’s analysis:
The Opteron 248 setup managed to outperform Intel’s fastest, largest cache Xeon MP by a whopping 45%. Boasting 141 ms request times, the Opteron 248 system was 12% faster than the Opteron 244 setup, indicating very good scaling with clock speed – a 50% increase in performance for every 100% increase in clock speed.
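(As a rough sanity check on that scaling claim, assuming the Opteron 244 runs at 1.8GHz and the 248 at 2.2GHz:)

# Scaling arithmetic behind the quote above.
# Assumed clock speeds: Opteron 244 = 1.8 GHz, Opteron 248 = 2.2 GHz.
clock_244, clock_248 = 1.8, 2.2
perf_gain = 0.12                          # 248 measured ~12% faster than 244

clock_gain = clock_248 / clock_244 - 1    # ~22% more clock
scaling = perf_gain / clock_gain          # ~0.54

print(f"Clock increase: {clock_gain:.0%}, performance increase: {perf_gain:.0%}")
print(f"=> roughly {scaling:.0%} more performance per 100% more clock")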
“Competitive? Is that a euphemism for crushing?”
No. It isn’t.
“Even if you look at a purely synthetic benchmark like SPEC, which Intel is known to optimize its compiler for, the Xeon is getting crushed. A 3.2GHz Xeon with a 533MHz FSB SPECs at 1289 / 1230, whereas a 2.2GHz Opteron SPECs at 1353 / 1309.”
1) SPEC isn’t a *synthetic* benchmark. It is a suite built from real codes.
2) The delta hardly justifies “crushing”.
As to your Xeon benchmarks, those are quite dated.
This doesn’t indicate anything more than “competitive”:
http://www.tomshardware.com/cpu/20040106/index.html
I’m not *attacking* the AMD processor. I not only buy them (about to buy a 3400+ for home) but own stock. However, they aren’t *crushing* Intel. For applications like media encoding, that “ridiculously long pipeline” certainly helps the P4.
I think you’ll find the delivered bandwidth on most Athlon 64 motherboards, at least, lacking compared to the theoretical full-duplex 3.2 GB/s. Especially the disappointing nForce3 150.
> This doesn’t indicate anything more than “competitive”:
> http://www.tomshardware.com/cpu/20040106/index.html
It doesn’t mention whether it’s in long mode or in short mode… But since they’re running Windows XP on it, I suppose it’s just short mode (32-bit compat).
The Opteron should see a *significant* performance boost when run in long mode (twice as many registers).
http://www.tomshardware.com/cpu/20040106/index.html
These aren’t Opteron benchmarks; they’re for a standard Athlon 64, which has only half the memory bandwidth of an Opteron. The least you could have done was link benchmarks for an Athlon 64 FX-51, which has a dual-channel memory controller.
Also, it compares the Athlon 64 to the 3.2GHz P4 Extreme edition, which not only has as much cache as a Xeon, but also sports an 800MHz QDR FSB. This significantly helps the P4, as the processor is no longer starved by the effectively 266MHz QDR FSB of the Xeon. Your original post compared the Opteron to the Xeon as well. The P4 Xeon architecture is fundamentally flawed, as I mentioned, and cannot hold a candle to the Opteron.
Regardless, the Athlon 64 alone wins in virtually every benchmark. I’d call that crushed…
What a twist of fate! Is this a sign of new things to come, a completely new chapter in the history of these two CPU giants?
It has always been up to AMD to engineer a solution compatible with Intel’s. But not anymore – this time it was AMD that came out with the winning architecture, and Intel was forced to follow it.
Watch this story unfold!!
Ooops… I actually meant legacy mode instead of “short mode”, and GPRs of course.
I think I have to go to bed soon…
“These aren’t Opteron benchmarks; they’re for a standard Athlon 64, which has only half the memory bandwidth of an Opteron. The least you could have done was link benchmarks for an Athlon 64 FX-51, which has a dual-channel memory controller.”
In almost every benchmark I’ve seen, the FX-51 shows little performance gain over the 3400+. At least not worth the premium.
“Also, it compares the Athlon 64 to the 3.2GHz P4 Extreme edition, which not only has as much cache as a Xeon, but also sports an 800MHz QDR FSB. This significantly helps the P4, as the processor is no longer starved by the effectively 266MHz QDR FSB of the Xeon. Your original post compared the Opteron to the Xeon as well. The P4 Xeon architecture is fundamentally flawed, as I mentioned, and cannot hold a candle to the Opteron.”
So, Intel gets “crushed” when its older chips are compared to top-end 248 chips…. The P4 Xeon most certainly can and does hold a candle to the Opteron. Neither architecture is “fundamentally flawed”. One is just older than the other.
“Regardless, the Athlon 64 alone wins in virtually every benchmark. I’d call that crushed…”
Only if “slightly faster in some and slightly slower in others” is how you define “crushed”.
The additional 8 registers should improve performance by about 20%. The current public beta of Windows XP 64 for AMD doesn’t show deltas that large. What AMD needs is their *own* compilers. I suppose we’ll now have Intel’s compilers for this platform (x86-64).
20 years, what kind of weed have you been smoking? x86 chips were hardly competitive until the PentiumPro came along.
Compared to their AMD and Motorola counterparts, they were. Heck, it was two years between the release of 386 machines (1986) and the first 68030 Macs (1988), back in the mid-to-late 80s. Apple had a good run with the early PowerPCs (albeit hamstrung by having to emulate a 680×0 to run most of the software) and then again for another short period with the first G4s. AMD had their 40Mhz 386 out for some time before intel answered with the 486 (although a clock crystal change would overclock a good proportion of intel 386/33s to 40Mhz).
The point I’m trying to make is that it moves in cycles – “who has the fastest CPU” has changed numerous times throughout history, and in all likelihood is going to continue to do so. It’s not the first time AMD has been fastest and before that it wasn’t the first time intel had been fastest. Writing “up until this point” – implying the situation has always been true until now – is just *wrong*.
I assume every machine DELL sells is a 3+GHz dual Xeon machine, right? After all, the high end must be what every other PC vendor sells, if Apple is in such a bad position as being the only vendor whose high-end offerings are a minority of its lineup.
The point you seem to have missed is that it’s only Apple’s high-end lineup that’s performance-competitive. In the low-end and middle market segments, their machines are not (because their technology has remained essentially stagnant for 4-odd years). A minority of Apple’s lineup is competitive in terms of performance and technology; the rest of it remains stagnant – yet as far as the Apple fans are concerned, because the G5s are so great, it’s like every other machine in Apple’s lineup doesn’t exist.
What a twist of fate! Is this a sign of new things to come, a completely new chapter in the history of these two CPU giants?
No. It’s happened before (20 and 25Mhz 286s, 40Mhz 386s, 120Mhz 486s) and will undoubtedly happen again.
So when Craig Barrett mentions CT Prescotts coming soon for processors/servers but no 64-bit processors for the PC, what does he mean?
Are those Prescotts going to be issued as Xeons? or perhaps in 775 pin format? How does he intend to keep them from the PC? By price alone?
You’ve got to wonder what this means for the future of Itanium. Given the amount of money both Intel and HP have spent on it, I can’t see it being canned, but who knows. With this announcement it’s only ever going to amount to a PA-RISC/Alpha/MIPS replacement.
I like Intel
I like AMD
I love AMD prices to build a whole system.
tada!!
“Added to that, for the overwhelmingly vast majority of customers, it *is* a useless thing. That is, it offers no advantages whatsoever over 32 bit CPUs.”
That would be the case if we were just talking about an extension from a good 32-bit processor architecture to a 64-bit one (like the G5). As it stands, the IA32 architecture is severely register-constrained, and the AMD64 architecture not only adds twice as many general-purpose registers, it doubles them in size as well, giving four times as much temporary register space for computations. Then there are the extended SSE registers for high-speed floating point added to the mix.
So although just moving from 32-bit to 64-bit is not a win (it can make some apps slower for 32-bit architectures that have tons of registers already), re-compiling an app from 32-bit x86 code to 64-bit x86 code *does* make a difference. In the benchmarks one group did (forget who – sorry), the 64-bit recompile of the POVRay ray tracer on Linux ran something like 30% faster on AMD64. With zero changes to the code!
So for the average PC user who browses the web, does email and plays games, AMD64 should be able to bring a really decent performance boost to at least the third category 😉
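To put rough numbers on the register-space argument above, here is a quick sketch that counts only the architectural GPR and XMM register files (x87/MMX ignored):

# Architectural register space: IA-32 vs. AMD64 long mode.
ia32_gpr_space  = 8  * 32    # 8 general-purpose registers, 32 bits each
amd64_gpr_space = 16 * 64    # 16 general-purpose registers, 64 bits each
ia32_xmm_space  = 8  * 128   # SSE: 8 XMM registers in 32-bit mode
amd64_xmm_space = 16 * 128   # SSE: 16 XMM registers in long mode

print(f"GPR space: {ia32_gpr_space} -> {amd64_gpr_space} bits "
      f"({amd64_gpr_space // ia32_gpr_space}x)")
print(f"XMM space: {ia32_xmm_space} -> {amd64_xmm_space} bits "
      f"({amd64_xmm_space // ia32_xmm_space}x)")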
Now before you flame me let me explain.
Apple has a very smooth migration to 64-bit, and Apple is pushing 64-bit into the high end of the mainstream desktop more effectively than AMD or Intel.
By contrast, the PC world has AMD, the Itanium, and now x86 with 64-bit extensions which are kind of compatible with AMD’s. That is a royal mess. It’s going to help Apple take a few points of share on the high end, workstations, scientific, creative, etc., but it probably won’t do much for them in the mainstream market, since the average PC user really wants a PC for $800-$1000 US and 64-bit is a bit outside of that (for AMD and Intel too).
So for the average PC user who browses the web, does email and plays games, AMD64 should be able to bring a really decent performance boost to at least the third category 😉
At the cost of having to purchase completely new software, since all those things the AMD64 offers will require rewritten and/or recompiled software.
It’s also not really relevant given that existing 32 bit CPUs are capable of running typical tasks – even atypical ones like high-end games – quite fast enough. Performance in the consumer market is not lacking or, where it is, the bottleneck is rarely the CPU.
I reiterate, a 32 to 64 bit move, for the vast majority of customers (and such a definition would exclude high end gamers) offers no advantages whatsoever.
I’m not arguing there aren’t advantages to be had, but they’re simply not relevant to most people, *especially* those in the “normal consumer” demographic.
Does it mean that Itanium is in even more trouble than we all thought? It looks like even Intel doesn’t have a whole lot of certainty if it is going to succeed…
At the cost of having to purchase completely new software, since all those things the AMD64 offers will require rewritten and/or recompiled software.
What a load of rot. Existing 32-bit software runs slightly faster (10-15%) on an x86-64 with Win64 than it does with Win32. Customers don’t need new software, they can use their existing copies, and then just get the 64-bit version whenever they next want to upgrade.
I reiterate, a 32 to 64 bit move, for the vast majority of customers (and such a definition would exclude high end gamers) offers no advantages whatsoever.
This is just plain wrong – sure, most users don’t need the increased memory capacity of a 64-bit system, but the extra registers and new instructions can make quite a difference in both system and application speed. Okay, 64-bit may not offer many advantages to the average Word user, but then, neither does a 3GHz CPU.
As I stated above, one of the big wins from the upgrade to x86-64 is the extra registers the architecture has. If you’ve ever programmed in assembly, you’ll understand just how useful another 16 registers are. They greatly reduce the amount of wasteful swapping of data that generally has to be done on an x86 processor to do calculations.
Take a look at the link below for a better understanding
http://www.arstechnica.com/cpu/03q1/x86-64/x86-64-1.html
They said it at the show pretty clearly: even though these mid- to low-end workstation and desktop CPUs will have 64-bit extensions, they can’t replace the Itanium in big tin, as they say.
Intel said that the 64-bit extensions are more of a memory extension than a number-crunching one; the big benefits will be for workstations, where people who use 3D rendering software and things like AutoCAD will be able to access more than 4GB of RAM from the start. This is also why Intel is bringing 64-bit extensions to the Xeons first and then to the Prescotts later on.
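(The 4GB figure is just the size of a 32-bit address space. A quick sketch, with the limits of the first AMD64 parts thrown in for comparison: 40-bit physical and 48-bit virtual addressing, if memory serves.)

# The 4GB ceiling is simply the size of a 32-bit address space; the AMD64
# numbers below are the assumed limits of the first-generation parts.
GiB = 2**30
flat_32bit = 2**32     # everything a 32-bit pointer can address
amd64_phys = 2**40     # 40-bit physical addressing
amd64_virt = 2**48     # 48-bit virtual addressing

print(f"32-bit flat address space: {flat_32bit // GiB} GiB")
print(f"AMD64 physical (40-bit):   {amd64_phys // GiB} GiB")
print(f"AMD64 virtual (48-bit):    {amd64_virt // GiB} GiB")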
Also, you have to keep in mind that even the Opteron isn’t fighting the Itanium; the Opteron competes against the Xeons. AMD has said this many times.
The newer Itanium which is on the way, with its dual-core architecture, will be an FPU and ALU monster; nothing seen before it will be able to crunch numbers as fast. Intel isn’t dropping the Itanium at all.
If you ask me, the 64-bit extensions Intel is working on, albeit compatible with AMD64, aren’t at all the same. In fact I think that, if not now then later on, Intel will add some 64-bit extensions that act as a stepping stone to the Itanium’s EPIC (I think that’s what it’s called) architecture, so applications can then be ported faster over time, or companies can move from x86-64 CT P4s/Xeons to Itaniums without any problems.
This positioning of Itanium sounds good in theory, but in reality Itanium is now pretty much stuck between a rock and a hard place. It is being squeezed from the low end by both the Opteron and Intel’s own Yamhill, and by UltraSparc and Power at the high end. Itanium being very expensive and very low volume even compared with the big boys (UltraSparc, Power) doesn’t help the cause either. The only way Itanium could enter the enterprise market was from the low end; now that is not possible. Is Itanium going to become the next i432?
“…it will be able to crunch numbers as fast.”
as fast as what??? A 4 year old Alpha?
The reason Itanic is competitive today is that they got HP to put Alpha development on ice.
Intel would have done better by taking over Alpha and rebranding it as an Intel-CPU.
Compared to their AMD and Motorola counterparts, they were. Heck, it was two years between the release of 386 machines (1986) and the first 68030 Macs (1988),
The Mac II had a 68020 in early 1987, less than a year after the first 386 PC.
The point I’m trying to make is that it moves in cycles – “who has the fastest CPU” has changed numerous times throughout history, and in all likelihood is going to continue to do so
Yup.
While the Alpha was around they usually led, sometimes by a huge margin, but PA-RISC would sometimes jump ahead… until the next Alpha came along.
Currently it’s Itanium II or POWER4 in the lead – depending on which benchmarks you trust – but then, that’s always been the case.
The P6 was a very rare case when an x86 outgunned _everyone_, even the RISC vendors; I think that’s the only time it ever happened and I don’t think it held its lead that long.
Hmm, interesting that the Pentium-M is actually a descendant of the P6 – released in 1995!
“Hmm, interesting that the Pentium-M is actually a descendant of the P6 – released in 1995!”
So’s the P4. The Pentium Pro/PII/PIII/PIV line are all Family 6 CPUs (i686). The Pentium-M is a development of the PIII, from what I understand.
The Mac II had a 68020 in early 1987, less than a year after the first 386 PC.
I said 68030. My understanding is an 020 is more comparable to a 386SX (which actually appeared in the market some time later). A 68020 is not comparable to a 386 (no MMU).
While the Alpha was around they usually led, sometimes by a huge margin, but PA-RISC would sometimes jump ahead… until the next Alpha came along.
I don’t really think those CPU families are relevant to this discussion…
The P6 was a very rare case when an x86 outgunned _everyone_, even the RISC vendors; I think that’s the only time it ever happened and I don’t think it held its lead that long.
Weren’t the Xeons top of the heap (according to SPEC) for quite a while ? I didn’t think the PPro was faster than an Alpha, back in the day (or a really relevant comparison).
Hmm, interesting that the Pentium-M is actually a descendant of the P6 – released in 1995!
Yep. PPro -> P2 (PPro+some bug fixes+separated L2 cache) -> P3 (P2+SSE) -> P-M (P3+P4 bus ? Can’t say I’ve paid a lot of attention). Pentium Ms have turned into good laptop CPUs to compete with Apple’s G4. I’m surprised no-one has started putting them into blade servers yet.
“Compared to their AMD and Motorola counterparts, they were. Heck, it was two years between the release of 386 machines (1986) and the first 68030 Macs (1988), ”
Motorola had the 020 out in 1984; it was a true 32-bit processor, two years before Intel had the 386 out in numbers. Apple was not the only customer of 680×0 CPUs during the 80s, by a long shot. The Mac II with the 020 was out in ’87, the same year as the early Compaq 386s. Hardly the two-year gap you made it out to be.
“The point I’m trying to make is that it moves in cycles – “who has the fastest CPU” has changed numerous times throughout history, and in all likelihood is going to continue to do so.”
Intel has produced the fastest x86 chips at times during those 20 years, but during most of those 20 years x86 machines were not even close to the performance of other microprocessors. Having the fastest x86 is not the same as having the fastest micro at the time, at least not during most of the past 20 years.
“The P6 was a very rare case when an x86 outgunned _everyone_, even the RISC vendors; I think that’s the only time it ever happened and I don’t think it held its lead that long.”
Not really. When the P6 came out, Alpha was on its second generation, the 21264, and the R10K had also made MIPS IV far more efficient than the FP-heavy R8K; even the UltraSparc I was significantly faster than the P6. The main merit of the P6 was that it was not as far behind its contemporary counterparts as previous iterations of the architecture had been. The P6 made the gap smaller and allowed vendors to deploy NT machines that were not complete dogs when compared to the 64-bit RISC offerings at the time.
“I said 68030. My understanding is an 020 is more comparable to a 386SX (which actually appeared in the market some time later). A 68020 is not comparable to a 386 (no MMU). ”
Nope, an 020 is a full 32-bit CPU, while the 386SX is actually more comparable to a 68010 (or even a 68000). The 020 was a three-chip set: the CPU, MMU (68851), and FPU (68881). The 030 basically put the MMU and CPU in one chip and added on-chip data cache (the 020 already had an internal instruction cache). By your same metric the 386 and the 030 are not in the same class either, since the 030 had internal data and instruction caches, whereas the 386 had none.
They were also used in high-end Unix workstations of the day, not just Macs. Also Amigas, if memory serves. Impressive CPUs even today.
They were also used in the NeXT computers.
Those things absolutely rocked.
Eugenia has got one if I am not mistaken?
You’re not even comparing the same clock speed CPUs; the real comparison is then a Xeon 2.0GHz vs. an Opteron 2.0GHz.
You think it’s fair to rate an Opteron 3.6GHz against a Xeon 3.6GHz?
Let’s get our perspectives straight.
“You’re not even comparing the same clock speed CPUs; the real comparison is then a Xeon 2.0GHz vs. an Opteron 2.0GHz.”
No, the real comparison is between the equivalent products from both companies. The 2GHz Opteron part is a high-end current product. The 2GHz Xeon part is obsolete and has been for some time.
“You think it’s fair to rate an Opteron 3.6GHz against a Xeon 3.6GHz?”
Yes, I do.
A 3.6 GHz Opteron does not exist. Clock for clock comparisons are meaningless.
>> “Hmm, interesting that the Pentium-M is actually a descendant of the P6 – released in 1995!”
Actually, every CPU Intel has released since 1995 is nothing more than an overclocked PentiumPro. In fact, the Pentium-II was nothing more than a PentiumPro with MMX and half the cache. The internal core of the CPU didn’t change until the Pentium4. But the system bus is still based on the P6 bus. The Pentium-M is nothing more than a re-worked Pentium3 … which really shows the flaws in the Pentium4’s design. You gotta love when the previous generation of CPU, with just a few tweaks here and there, can outperform their flagship desktop CPU.
Intel has done a lot of things right in the past, but their obsession with high MHz above all else is really starting to hurt them.
The 386 added two important features to the x86 line:
– 32-bit registers
– Usable protected mode, as opposed to the 286 protected-mode wreck.
Of these two features, the 68k line had the former from the beginning; it had even more registers than the first generations of Pentiums more than 10 years later.
There is a great difference between the external bus size and the internal architecture. For example, the 68008 was a 32-bit processor with an 8-bit external bus.
Then you don’t believe the babble coming out of Intel (FUD); that’s a sign of intelligence.
However, I believe that we *are* rating processors clock for clock; it’s just that the Opteron doesn’t need to increase the frequency to gain performance over an Intel x86. Doesn’t that tell you how pathetic Intel’s chips have become compared to AMD x86-64?
So maybe a pint-sized user like you doesn’t care about heat dissipation/power consumption, but for a big-time computing cluster this becomes a monumental problem. Crushing defeat sounds concise to me.
You gotta love when the previous generation of CPU, with just a few tweaks here and there, can outperform their flagship desktop CPU.
It can ? Which Pentium M is faster than a 3.4Ghz P4 ?
You do realise that argument about clock speed being an unfair comparison cuts both ways, right ?
“Then you don’t believe the babble coming out of Intel (FUD); that’s a sign of intelligence.”
It’s not FUD. Learn what *FUD* means.
“However, I believe that we *are* rating processors clock for clock…”
No, *we* aren’t. You are.
“…it’s just that the Opteron doesn’t need to increase the frequency to gain performance over an Intel x86…”
The P4 is designed with a deep pipeline that offers advantages for tasks like media encoding. The Opteron/Athlon64 tends to lose those benchmarks. This was a *design* decision on the part of Intel.
“…Doesn’t that tell you how pathetic Intel’s chips have become compared to AMD x86-64?”
If by “pathetic” you mean “just as fast as” you would be correct.
“So maybe a pint-sized user like you doesn’t care about heat dissipation/power consumption, but for a big-time computing cluster this becomes a monumental problem. Crushing defeat sounds concise to me.”
Oh, this is funny. Really. Does your mom know you’re on the computer?
JCS, you’re an idiot, and to educate an idiot like yourself:
http://216.239.51.104/search?q=cache:_yKLVFGWMyIJ:www.geocities.com…
one word–crushing.
Steve,
Your garbage is still not FUD. As to your “pint-sized” user foolishness, I’d lay dollars to Navy beans I’ve more experience both in the workstation and supercomputer regimes as an engineer. Your posts, however, indicate that you are about 12.
This word you use, “crushing”…. It does not mean what you think it means – and I *BUY* AMD whenever possible and even own stock.