More than a decade ago, Intel ran into an issue trying to deliver what was to be the world's top-ranked supercomputer: it looked possible that the new Pentium Pro processors at the heart of the system might not arrive in time. As a result, the chipmaker made an unusual move, paying Hewlett-Packard $100,000 to evaluate building the system around HP's PA-RISC processors instead, said Paul Prince, now Dell's chief technology officer for enterprise products but then Intel's system architect for the supercomputer. Called ASCI Red and housed at Sandia National Laboratories, it was designed to be the first supercomputer to cross the threshold of a trillion math calculations per second.
Given what a lemon Itanium turned out to be (the original aim was for it to become a general-purpose processor, replacing Xeon, so that Intel could compete with the UNIX world), I wonder whether PA-RISC would have been a better architecture to pursue instead of Itanium.
I also wonder what it would have been like if Intel had sprinkled PA-RISC with its power-saving magic, and how that would have fared in a laptop.
Just a simple question. What is the exact relationship between Itanium and PA-RISC?
Itanium started off as an HP project to create a successor to PA-RISC; hence, from the start it was designed with a compatibility layer so that people could run PA-RISC binaries on Itanium unmodified.
The problem with Itanium is that it placed far too much hope in the skill of compiler engineers: it assumed a compiler could handle all the scheduling work that HP thought should be pushed back to compile time rather than done at runtime. The net result is what we see today: lacklustre CPU performance that looks more like a by-product of university theory than of business practicality.
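To make that concrete, here is a minimal C sketch (my own illustration, not from the original post) of why a compiler-scheduled design wants predictable code: the array loop below has latencies the compiler can plan around, while the pointer-chasing loop does not.

    /* Illustration only: why static (compile-time) scheduling favours
     * predictable code, which is the bet EPIC/VLIW designs make. */
    #include <stddef.h>
    #include <stdio.h>

    /* Unit-stride array access: trip count and load addresses are
     * predictable, so a compiler can unroll, software-pipeline and hoist
     * loads well ahead of their uses. */
    static long sum_array(const long *a, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    struct node { long value; struct node *next; };

    /* Pointer chasing: the next address is unknown until the previous load
     * finishes, and cache behaviour is data-dependent.  A static schedule
     * has to guess a latency and stalls when it guesses wrong, whereas an
     * out-of-order core discovers the actual latency at runtime. */
    static long sum_list(const struct node *p) {
        long s = 0;
        for (; p != NULL; p = p->next)
            s += p->value;
        return s;
    }

    int main(void) {
        long a[4] = {1, 2, 3, 4};
        struct node n2 = {2, NULL}, n1 = {1, &n2};
        printf("%ld %ld\n", sum_array(a, 4), sum_list(&n1));
        return 0;
    }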
By the way, this isn't the first time a VLIW-like processor has been attempted; it's one of those ideas that came out of engineering academia when it should have stayed there. It's nice on the blackboard when teaching students, but in practice it throws out reality in favour of a perfect set of scenarios.
There are plenty of VLIW processors out there, especially in the GPU/DSP/embedded arena. Be careful when making such statements about VLIW as a viable idea.
I am talking about VLIW for CPUs, not GPUs or DSPs. Intel created the i860, Sun created the MAJC processor, and I believe IBM might have tried it at one stage as well. VLIW used for a general-purpose CPU is ultimately an epic failure: its performance depends on a perfect stream of data flowing into the processor, which of course rarely happens in reality.
Does VLIW design have hope in specialised areas like GPUs or encryption acceleration? Sure, but I never suggested that it couldn't be used in specialised roles.
I have to wonder if people who call Itanium a “lemon” and “Itanic” have ever used one.
I do think that Intel didn’t get the success they planned on. But it is a good processor.
I have a dual-processor 1.4GHz Itanium2 (Celestica off eBay) that produces roughly the same speed results as a 2GHz dual Opteron when I compile my software with GCC 4.4 using profile feedback.
That’s pretty good for a 600 MHz speed difference.
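For anyone curious, the profile-feedback build described above follows the usual GCC two-pass pattern. The sketch below is illustrative (the file name and the toy loop are mine), but the -fprofile-generate / -fprofile-use flags are the standard GCC ones.

    /* hot.c, a toy example for a profile-guided build (illustrative only).
     *
     *   Pass 1 (instrument):  gcc -O2 -fprofile-generate hot.c -o hot
     *   Pass 2 (train):       ./hot            (writes .gcda profile data)
     *   Pass 3 (optimise):    gcc -O2 -fprofile-use hot.c -o hot
     *
     * The profile tells GCC which branches and loops are actually hot, so it
     * can lay out and schedule code accordingly; that matters even more on an
     * in-order design like Itanium than on an out-of-order x86. */
    #include <stdio.h>

    int main(void) {
        long s = 0, i;
        for (i = 0; i < 1000000; i++)   /* a trivial hot loop to profile */
            s += i & 7;
        printf("%ld\n", s);
        return 0;
    }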
Itanium has other neat features, like keeping function return addresses out of the memory stack (they go into branch registers), and a large selection of jumbo and giant page sizes.
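On the page-size point, most software today reaches for large pages through the operating system rather than anything Itanium-specific. Here is a minimal Linux sketch (mine, not the parent poster's setup) that assumes a kernel and glibc exposing MAP_HUGETLB and hugepages already reserved by the administrator.

    /* Illustrative only: requesting one large page via Linux hugetlb.
     * Requires hugepages to be reserved (e.g. vm.nr_hugepages) and a
     * kernel/glibc new enough to define MAP_HUGETLB. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
    #ifdef MAP_HUGETLB
        size_t len = 2UL * 1024 * 1024;   /* one 2 MiB huge page on x86-64 */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");  /* most likely: no hugepages reserved */
            return 1;
        }
        ((char *)p)[0] = 42;              /* touch the page so it is really mapped */
        munmap(p, len);
        puts("got a huge page");
    #else
        puts("MAP_HUGETLB not available on this system");
    #endif
        return 0;
    }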
There is not just one generation of Opterons, you know. Which one were you benchmarking against?
Whichever generation, similar performance from 70% of the clock speed is impressive.
Not really; it is to be expected considering the design.
I'm betting the power requirements are similar or higher, and it probably isn't as overclockable due to the more complex design.
Uhm… who would overclock a server CPU? In servers, reliability is *pretty* important. Gaining a small percentage of speed at the expense of reliability doesn't make any sense.
First off, you're not using the "Itanic"; you have the second generation. You're also using an up-to-date compiler, and after all, the whole idea of the Itanium/EPIC architecture was that the smarts would be in the compiler.
Based on your description, your Itanium2 could be as old as 2003 or as new as 2006. The Opterons have had such a wide clockspeed range that I can’t tell how old they would be.
Let's assume that both your Opterons and Itaniums date to 2004; now answer these questions: what was the price differential? How about typical power dissipation? What was the performance of software using the compilers (free or otherwise) available at the time? If commercial compilers were used, how expensive were they?
Another thing to consider is that the original Itanium was supposed to deliver decent IA-32 performance – and it didn’t, even though there was hardware emulation.
So the question remains, how could Intel and HP screw up so badly? It also opened the door for AMD to take the lead on Intel’s own architecture. If AMD had had more fabrication plants and better manufacturing processes, Chipzilla might have gone the way of the dinosaur.
No, Itanium was never supposed to provide decent x86 execution; if anything, it was more focused on PA-RISC compatibility.
And even during the height of the Opteron, Intel was still selling more P4s/Xeons than AMD. It is not that AMD could have killed Intel with the Opteron so much as that the Opteron allowed AMD to not flat-out die.
What many people neglect to understand is that Itanium did what Intel intended it to do: kill competing architectures at the high end. MIPS, Alpha, and PA-RISC all went the way of the dodo in the mid/high end. SPARC is hanging by a thread, and PPC is pretty much on life support, since IBM is not even clear on whether there will be a successor to POWER7. Most of the sales from the dismissal of those platforms went to Intel; whether to Itanium or Xeon is of little relevance, since a sale is a sale. So if anything, I assume Intel sees Itanium as a marketing expense more than a technical expense.
Sorry, but you're wrong. The Itanium was supposed to be the be-all and end-all. Go back and read the industry pundits' pronouncements between the initial announcement and the first release.
If x86 performance didn't matter, they wouldn't have wasted precious silicon on it. It was supposed to be "good enough", but by the time the chips were shipping, it simply wasn't.
As I clearly stated, AMD's problem has, for a long time, been one of manufacturing yields, not design.
The much-ballyhooed Core i7 that has put Intel squarely back at the top of the heap features designs that AMD introduced with the Athlon 64 six years ago.
However, Intel has revived HyperThreading, which may just work this time around.
Intel's manufacturing strength has always allowed them to throw more cache at the problem or go for the next die shrink, keeping their chips competitive and forcing AMD to play catch-up, even when AMD's design was superior.
Now, however, they are behind on all counts, except perhaps price/performance.
Nope. It was supposed to give “good enough” x86 performance. If that wasn’t the case, they wouldn’t have wasted precious hardware on it. Trouble was, the hardware didn’t deliver.
As for AMD, until the Core i7 release from Intel, they’ve had the design lead but have always lagged on manufacturing yields, total chip output and die shrink.
Now they've lost the edge in design, so the next year could be really bad for them if they also lose the price/performance edge (assuming they still have it).
Clock speed is just one parameter (think about the P4s and how they achieved very high clock speeds by shortening each pipeline stage). Have you compared the transistor count, memory bandwidth and process technology used? Itaniums have only been competitive because Intel and HP poured resources into them: large caches, high memory bandwidth and so on. I've heard recent ones are pretty good, but I doubt even the yet-to-come Tukwila will surpass Nehalem by much, if at all. If you factor in the overall cost, it just doesn't make much sense unless you're talking about the high-end niche.
Well, that was the problem, wasn’t it? The Titanic wasn’t a bad ship. It hit an iceberg and sank. The Itanic also hit an iceberg and sank. The difference being that plenty of people saw the iceberg that Itanic was heading for, and Intel still just plowed into it.
Maybe compilers today do have the smarts necessary to have averted that collision years ago. And maybe modern technology could have averted the Titanic's own collision. (For that matter, 1912 technology could have done so, if only <fill in the blank>.) But does it really matter today?
I don’t know if it matters. I’m not trying to convince you to buy one. But it isn’t a “lemon.”