Sun’s recent cancellation of many UltraSparc chips is clear evidence that Sun is finding UltraSparc a huge resource drain. But the company is still averse to porting Solaris to Itanium (though it did port it to AMD’s 64-bit architecture). This article is, according to the author, an analysis of the rationale behind such bizarre moves and of Sun’s alternatives to this self-destructive strategy.
I don’t see any reason why Sun would port Solaris to the Itanium. It just doesn’t make sense. They made a commitment to the Opteron, and there is no point in porting Solaris to the Itanium unless the Itanium takes over the market, which, at this point, doesn’t seem likely.
It seems comical that the writer claims Sun’s executives cannot be trusted while at the same time taking the statements of Intel’s execs on Itanium as fact. If anything, Intel’s execs have a worse believability record with respect to Itanium. If their claims had been true, Itanium would have taken over the world by now.
In fact, the whole Itanium strategy is in trouble. Intel is only making the Xeon-64 because AMD caught them with their pants down and they were taking a beating in the marketplace. The flowery statements about Itanium and Xeon achieving price parity by 2007 are more wishful thinking. The underlying assumption is that Itanium will hold a performance lead at the same price, and everybody will see the light and move to Itanium. The more likely scenario, however, is that in the meantime AMD will continue to add Itanium-type features to Opteron and may very well have a good-enough competing product (it does not have to be better, as the popularity of Windows shows) that is x86 compatible. Of course, Intel won’t be able to sit still and will have to keep adding Itanium features to Xeon, relegating the Itanium to a niche market.
I agree that Sun should stop wasting money on SPARC, but the best bet seems to me to enter into a partnership with AMD and make kick-butt x86 servers. The other thing is this: SGI is dropping IRIX, and it is clear that IBM is readying itself to drop AIX. Why in heaven’s name can’t Sun see the light once and for all, stop wasting its money on Solaris, buy Red Hat, and port its Solaris utilities to Linux?
> unless the Itanium takes over the market
I fully agree that the Itanic isn’t going to take over the market. It isn’t spectacularly fast or cheap, so why should Sun bother porting Solaris to it?
> to add Itanium type features to Opteron
Could you please explain which special Itanium features an AMD64 processor doesn’t have? It isn’t the 64-bit address space in place of the Opteron’s 48-bit one, is it?
> partnership with AMD and make kick-butt x86 servers.
s/x86/amd64/
> buy Red Hat and port its Solaris utilities to Linux
Then they’ll port the IMO ugly JDS theme to the Java Hat System?
“Sun’s recent cancellation of many UltraSparc chips is clear evidence that Sun is finding UltraSparc a huge resource drain.”
Yes, the first line.
First, “many” is two: UltraSPARC V and Gemini.
Gemini was a dual-core UltraSPARC II processor. Sun recently released a dual-core UltraSPARC III processor, the UltraSPARC IV. Why would Sun release another UltraSPARC II processor? Scrapping Gemini was a no-brainer.
The scrapping of the UltraSPARC V is a fairly sensible decision as well. Sun was overdiversifying their processor line, which would only cause their customers a headache as they transition their infrastructure to yet another product. Waiting for the Niagara is a smart bet.
Itanium is sinking. There is no possible bright future for this chip unless Intel prices it lower than a P4. And even then, the IA32 capability of the Itanic is stunningly poor.
For the past 5 years, Intel has leveraged their monopoly power to try to make the market accept products that are good only for Intel, not for customers.
The customers are not fooled:
— they know Itanic is a loser
— they know Intel’s “maximum gigahertz / moore gigatherms” strategy is extremely expensive when it comes to electricity.
— they know BTX is good for Intel, bad for everyone else.
— they know Intel has undisclosed DRM in all the new chips.
All in all, the world doesn’t need Intel anymore.
There simply is no value proposition.
The article was advertised as discussing the ‘alternatives’. Apparently the author thinks the only alternative is ‘Itanium or death’.
Perhaps Sun would be a good candidate for IBM’s PowerPC licensing offer:
http://www-306.ibm.com/chips/products/powerpc/licensing.html
Perhaps Sun would be a good candidate for IBM’s PowerPC licensing offer …
It’s pretty simple: as everybody knows, Solaris runs on UltraSPARC processors as well as on cheap x86 and AMD’s Opteron.
Why would Sun choose PowerPC? A lower price? They’ve got Opteron.
RISC processing? They’ve got the UltraSPARC family.
I’m wondering why OSNews sometimes keeps posting such ridiculous articles. A shame, IMHO.
OSNews is a great place for info.
Regards,
Can someone please explain to me why, when Transmeta and Itanium both use VLIW, Transmeta is able to come out with an affordable, low-power-consumption CPU that emulates x86 quite well, while Itanium is expensive, big, power hungry and slow at emulating x86? Both use VLIW, and Intel has far more engineers and resources than Transmeta.
Intel would be smart to drop Itanium immediately. The chip is a loser and at its current pace of sales, will never make money for Intel. It is just a vast sink hole of time, energy, money, and resources.
The smart move for Intel would be to throw massive resources at making an excellent 64-bit version of their IA32. With x86-64, AMD has only scratched the surface on how the proven IA32 architecture can be improved.
There really is no payoff for traditional RISC. Binary compatibility with IA32 is the most important driver in the market today.
Perhaps there will be innovation in processors once again, but that presumes much. That day is not today in any case.
Firstly, I agree, Itanium is definitely a loser. It’s a badly designed architecture, which ignores everything Intel’s engineers have learned about processor design over the lifetime of the x86. I really hope this half-baked architecture dies, just like the P4’s stupidly-long-pipeline architecture is scheduled to.
Secondly, Intel is aware that their high-clock, high-temp architecture is not a winning strategy, hence their recently declared intention to move back to a P3/Athlon-style architecture, based upon their current Pentium M architecture. The changeover isn’t slated to happen for a couple of years yet, but at least they’ve finally come to their senses.
Thirdly, how is BTX only good for Intel? From what I can see, BTX primarily encourages better thermal design for cases, to better deal with the high temperature characteristics of modern processors. AMD’s chips’ temperatures may not be rocketing into the sky like Intel’s, but they have gradually risen over the years and will likely continue to do so. Also, don’t forget about the power/heating requirements of ATI and nVidia’s video cards.
Finally, I assume by the undisclosed DRM you’re referring to the DRM assumed to be incorporated into their current Prescott chips? Sure, the functionality may be there, but so what? Until there’s software support, it means nothing. In fact, DRM is only a concern if you’re using Windows or MacOS. For those of us on Open Source systems, there’s really no concern. To me, this is just one more “feature” of commercial systems that will start to drive more people toward Linux and other Open Source systems.
Why in heaven’s name can’t Sun see the light once and for all, stop wasting its money on Solaris, buy Red Hat, and port its Solaris utilities to Linux?
First of all, what does Sun need Red Hat for? Sun is working on JDS, and in my opinion buying Red Hat would be overkill.
And second, dropping Solaris in favor of Linux altogether? Um, in case you haven’t noticed, Solaris is a top-of-the-line OS (on SPARC, at least), and dropping it would be like Fiat dropping Maserati because Alfa Romeo (also owned by Fiat) sells more cars…
The fact is, Sun ships more SPARC chips than Intel ships IA-64 chips…
I don’t think Intel will adopt SPARC, but it is clear that IA-64 is not popular, and Intel needs to follow AMD, or else Intel’s chip market will be killed by AMD64 or PowerPC.
To those talking about the failure of “Itanic”:
1) Have you *seen* the benchmarks for Itanium2? It is extremely powerful for FPU code, and has found good reception in the scientific and media computing industry.
2) The Itanium is not a badly designed architecture. It’s hit practical problems with compiler technology, but Intel seems to have gotten past the worst of it. Beyond that, it’s actually a very cleanly designed architecture. The funny thing is that the same people making fun of the P4 being a bad architecture are making fun of Itanium being a bad architecture. Everything that is usually perceived as a problem with the P4 (complex ISA, long pipeline, low IPC) is exactly the opposite in Itanium (simple ISA, short pipeline, extremely high IPC).
To those talking about how the P4 is “stupidly designed.” I dunno about that. The P4 spent a good long while as the fastest commodity CPU. IBM has taken cues from Intel in this case too — the G5 pipeline is closer to the length of the P4’s than the Itanium’s.
From my own experience I found P4s to be very poor for floating point computations / scientific computing. The AMD Athlon XPs are far better in this regard. Even the Pentium Ms are better than P4s when compared at the same clock speed.
Most people would rather buy 32 bit Xeons than 64 bit Opterons to run their servers. The market stats prove this.
Sun makes ultra-reliable big iron, but at twice the cost of a similarly powered Xeon box. If Sun decided to start selling AMD, which has not proven itself in the industry and is generally considered a slightly cheaper Intel, what reason would people really have to go with Sun aside from better support?
Sun boxes are already being replaced with cheaper solutions from Dell. Sun simply cannot afford to be on the same level as smaller Xeon boxes, because they simply cannot wage a price war against the likes of Dell and HP.
Those of you that want Sun to open Java or move to AMD might as well just come out and say what you really mean.
Sun, you are a sinking ship. Please cast your axe at the likes of Wintel for the greater good while you still have market power.
There will still be room for Sun in the future as a smaller company, but the market-share bleeding is only beginning. Wait till Intel launches a 64-bit Xeon and MS releases 64-bit Windows; then you will see what I am talking about.
The “desktop companies” are catching up with big iron, and a price war will be no contest. Dell has recently taken back the #1 desktop crown and will also become one of the largest players in the server market.
When using SSE2, the P4’s FPU performance is very good. It’s not so great when using the x87 FPU, but it was a conscious decision to put more resources into vector performance. And what’s the point of comparing a Pentium M to a Pentium 4 at the same clock speed? The P4 was designed to get high performance by being scalable to higher clock speeds. All that matters, when you’re not in a power-constrained environment like a laptop or blade server, is whether the fastest available P4 is faster than the fastest available Pentium M.
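To make the SSE2-versus-x87 point concrete, here is a minimal sketch (my own illustration, not from the original posts) of the same loop written as plain scalar C and with the SSE2 packed-double intrinsics from emmintrin.h; the packed version handles two doubles per instruction, which is where the P4 was designed to shine:

#include <emmintrin.h>   /* SSE2 intrinsics */

/* Scalar version: typically compiled to x87 (or scalar SSE2) code,
   one double handled per floating-point operation. */
void axpy_scalar(double a, const double *x, double *y, int n)
{
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}

/* Packed SSE2 version: two doubles per add/multiply. For brevity this
   sketch assumes n is even; a real routine would handle the odd tail. */
void axpy_sse2(double a, const double *x, double *y, int n)
{
    __m128d va = _mm_set1_pd(a);
    for (int i = 0; i < n; i += 2) {
        __m128d vx = _mm_loadu_pd(&x[i]);
        __m128d vy = _mm_loadu_pd(&y[i]);
        vy = _mm_add_pd(vy, _mm_mul_pd(va, vx));
        _mm_storeu_pd(&y[i], vy);
    }
}

The point of the example: code that sticks to plain x87 arithmetic leaves half of the P4’s floating-point width idle, which is why the same chip can look weak in one benchmark and strong in another.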
There is nothing wrong with the Itanium architecturally. The market for 64-bit computing already has a lot of players, and Intel now finds itself in a bind since it can’t penetrate the enterprise market. The enterprise market that Intel is targeting the Itanium family at is very slow to change.
There is a huge contrast between the speed at which the enterprise market changes and the PC (enthusiast) market. Enterprises have huge processes in place for change management and change only when there is a need to. There are still hordes of customers running VAX machines (in the financial sector); the fact that HP and SGI still sell Alpha and MIPS systems is a testament to that.
Itanium means a huge change in infrastructure for companies, and companies don’t like change. Hence problem #1 for the Itanium. Another testament to that fact is Microsoft’s claim that the number of XP machines in the corporate sector is far lower than they expected; if Microsoft, with a monopoly, can’t make customers change, Intel, with a new architecture, is surely going to find it extremely difficult.
Problem #2: the new surge of 64-bit chips like the Opteron and Intel’s own x86-64 chips is going to prevent sales of Itanium in the low-end/low-cost space. x86-64 offers performance at a low price and immediate binary compatibility, something Itanium cannot do. With multiple vendors backing Opteron and Intel’s x86-64, including Dell, Sun, IBM and Intel’s own partner in crime HP, Itanium is going to find it extremely hard to keep a foothold in the low-end space.
So Intel is stuck with almost nonexistent volumes and very low adoption rates in all the markets it is targeting, partly because it holds no monopoly in the enterprise market like it does in the low-end space, and partly because its own products will cannibalize it in the low-end space. The cost/benefit analysis simply doesn’t play well for Intel to sustain development, but they can and probably will continue development of Itanium because they can fund it.
It makes no sense for Sun to support Itanium. There is more market for SPARC than for Itanium today. In fact, IDC recently cut its projections for Itanium.
Research firm IDC in 2000 predicted that Itanium server sales would hit $28 billion by 2004. In 2001, IDC predicted sales of $15 billion by 2005, and lowered its forecast later that year to about $12.5 billion. Now, the forecast is for $7.5 billion in 2007, according to CNet.
Itanium as I see it isn’t the future for SUN. What SUN needs to do is push their UltraSPARC into the ultra-high end (>8-way SMP configurations), and from uniprocessor up to 8-way SMP they should start using Solaris x86 9/10 on Opteron; this includes their workstations as well.
Push assembly out to China to reduce costs, stick with a basic board design, and go PCI Express all the way, from the graphics card to the network card; that should be the only connector on the board. Follow KISS and you’ll find the production costs are lower; motherboard production can be handled by ASUS and the chipset by Broadcom/ServerWorks.
As for the desktop side of the equation: Dell already offers whitebox services, that is, they’ll assemble computers and allow the “whitebox” vendor to brand and resell them. Meaning, you get price parity with Dell; meaning, SUN could sign up to this whitebox programme, brand the boxes with “SUN” and push them off to their existing customers pre-loaded with JDS.
It’s all about reducing overhead and ensuring that the amount of money invested in a product is lower than the amount earned back in profits. Right now SUN is losing money because they don’t have their priorities right and they aren’t willing to accept that the days of demanding premiums for 2-way systems or pricey workstations are long gone.
People now look over their shoulder and see attractive offers from Apple, Dell, IBM and so forth. Why would someone pass up those great deals to pay the prices SUN charges? They wouldn’t, and that is why SUN is suffering.
Someone really needs to head into the offices of SUN, knock some heads together and wake up management. People *aren’t* going to pay half a million dollars for a piece of hardware that performs WORSE than a system at half the price with better software availability. About the *only* people I feel sorry for at SUN are the sales staff, trying to keep a straight face while explaining to a customer why they should buy a 12-way UltraSPARC server that is beaten by a cheaper 4-way Opteron system.
eh?
Both Itanium and AMD64 define 64-bit pointers; the difference is just how many of those bits the hardware actually implements. The Opteron currently wires up 48 bits of virtual address space, which is nothing spectacular and something AMD can widen in later revisions of the chip.
Remember, 64-bit refers to the size of a register or memory word, not the size of addressable memory. How much memory is actually addressable depends on how much of the address the designers choose to implement.
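As an aside (my own sketch, not from the thread): one visible consequence of the 48-bit implementation is that AMD64 requires virtual addresses to be in “canonical” form, with bits 63 down to 48 equal to bit 47. Assuming a 48-bit implementation, a tiny C check for that looks like this:

#include <stdint.h>
#include <stdbool.h>

/* Canonical check under the assumption of a 48-bit virtual address space:
   bits 63..48 must be copies of bit 47. Shifting left by 16 and then doing
   an arithmetic shift back sign-extends from bit 47; the address is
   canonical if that round trip leaves it unchanged. */
static bool is_canonical_48(uint64_t vaddr)
{
    int64_t extended = ((int64_t)(vaddr << 16)) >> 16;
    return (uint64_t)extended == vaddr;
}

Widening the implemented address space later (to 52 or 56 bits, say) only changes which addresses count as canonical; the 64-bit pointer format itself stays the same, which is exactly why this is nothing an AMD64 revision couldn’t extend.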
The G5 pipeline is only 20 stages.
The latest Northwood is about what, 40 stages?
So, unless the Itanium has near zero stages (which would be a good reason for it being slow at general computation), I don’t see how the G5 can really be closer to the Northwood than to the Itanium.
I would like to know: how much longer can the pipelines get?
I mean, there are only so many sub-operations that an instruction can be broken down into, especially with RISC systems.
At some point, the law of diminishing returns comes into play.
Bascule (IP: —.client.comcast.net) – Posted on 2004-04-18 01:29:57
The scrapping of the UltraSPARC V is a fairly sensible decision as well. Sun was overdiversifying their processor line, which would only cause their customers a headache as they transition their infrastructure to yet another product. Waiting for the Niagara is a smart bet.
And as one of the executives said in a previous interview, with the two chips being dropped, Niagara will be brought to market sooner, meaning that when Niagara arrives it will be a big bang for the marketplace, and a massive increase in performance in the move from UltraSPARC IV to Niagara.
With that being said, UltraSPARC IV should have arrived in volume by the beginning of this year, then a quad core by June, with the dual core moving down to a 0.09-micron process and into the blades, workstations and so forth.
What SUN needs to worry about isn’t price, but price/performance. If they sell a server that is 4x as fast but costs 2x the price of its nearest competitor, then yes, that *is* a well-priced product. But I’m sorry, when SUN charges $5k+ for workstations that are outperformed by a cheaper non-Windows alternative (the Apple G5) that also comes with more software, one really wonders what on earth is happening at SUN.
Want proof? Talk to me, another dissatisfied SUN customer who bought into the “MHz don’t matter” hype, bought a SUN Blade 100 and found it performs worse than a PIII 550 coupled with a video card that makes Intel’s built-in GPU look great.
try tech journalism (IP: —.dsl.pltn13.pacbell.net) – Posted on 2004-04-18 01:30:12
Itanium is sinking. There is no possible bright future for this chip unless Intel prices it lower than a P4. And even then, the IA32 capability of the Itanic is stunningly poor.
That doesn’t take account of the biggest threat: vendor lock-in. IBM sees that their customers fear this and thus started to free up their PowerPC design so that all and sundry can adopt it. AMD is in the same situation: they’ve hidden nothing and worked with everyone to make their ISA *THE* affordable 64-bit standard. With SPARC, you have Fujitsu and SUN as the “big two”. Then you have Itanium, controlled by one company that doesn’t have a reputation for “playing nice with others”. Compared to Intel, Microsoft comes out looking like a sweet little angel.
Hey, and that doesn’t get into the second part. For VOLUME to be achieved, they need both the workstation AND the server market using Itanium, and I’m sorry, no software, no sale. No high-end software, no sale.
There really is no payoff for traditional RISC. Binary compatibility with IA32 is the most important driver in the market today.
Not necessarily true. Even if I had a better architecture, it would take a lot of money to get software vendors on board. What Intel failed to grasp is that people are not going to come to their BBQ unless they’re willing to provide the food and alcohol.
Intel thought it could get away with providing just the BBQ and a tank of LPG; sorry, the third-party software and hardware market wants to see one of two things. The first is freeing up Itanium to allow vendors to offer motherboards and small companies to sell boxed Itanium CPUs just like their P4/Xeon processors. The second is they want to see the money: approach the vendors and *PAY* for the porting of software, *SUBSIDISE* the upgrades of those who wish to move from IA32 to Itanium. Make it cheap enough so anyone can do it.
So, unless the Itanium has near zero stages (which would be a good reason for it being slow at general computation), I don’t see how the G5 can really be closer to the Northwood than to the Itanium.
Slow, as in faster than IBM’s fastest POWER4 offering, and a clear two times the speed of Sun’s fastest SPARC (yes, both integer and floating point)?
Sorry, you weren’t quite clear on that.
Once again, the x86 CISC-to-RISC instruction translation is about as kludgy as life gets. Extending an instruction set with 8-bit roots to 16, 32, and now 64 bits is not how you achieve better performance in the real world. Sure, clusters of these can do a lot, but you heard it from the guy at Cray: it just doesn’t give you high performance. How far they have extended that old architecture shows up truly when you look at the number of registers in x86. Itanium (the EPIC architecture) has 128 general-purpose registers plus another 128 for floating point, and was built from the ground up for 64-bit computing, so when you cluster those bad boys together, with the tuning you can do, you’ll find some true performance.
Of course, if we’re talking about things like load distribution and fail-safety, sure, Opterons or even 32-bit x86 are great (and cheap). But for life sciences, genome sequencing, anything that requires a huge brain of CPUs for heavy-duty computation, you’re probably better off with the Itanium (or Alpha, or <gulp> UltraSPARC). Don’t give me the benchmark numbers, because 1-4 CPUs does not a cluster make. How many Itaniums can you put together?
Yet even still, it’s too early to tell whether Sun would adopt a new architecture; they’d be behind the HP/IBM curve. The Ultras are not going away anytime soon; Fujitsu or whoever isn’t done with them yet. I’m not quite sold on Opteron’s future. It’s a nice piece of engineering and marketing research, but it’s not at the top of the performance metrics if you’re talking about huge fucking systems. So you’re probably going to say: how come Itaniums don’t rule the top ten supercomputers? The answer, my friend, is that they haven’t fully optimized them yet, and those supercomputers are built with costs in mind.
The G5 pipeline is 16 stages (~23 for floating-point). The Northwood P4 pipeline is 20 stages (>20 for floating-point). The Itanium2 has an 8 stage pipeline. The Prescott P4 pipeline is 31 stages. So the G5 is closer to Northwood than I2, but closer to I2 than to Prescott. Also, don’t be surprised to see IBM lengthen the G5 pipeline some in future iterations.
The biggest problem with the cheap Sun Blade 100s was the IDE chipset. It was absolutely horrible. Sun needs to stick to SCSI.
Gee, it’s so simple! Er… Intel’s acceptance of the AMD64 architecture is a sign. Sun is doing fine with AMD64 and SPARC.
“2) The Itanium is not a badly designed architecture. ”
I think the Itanium is architecturally analogous to a spaceship made out of concrete.
The problem is that IA64 requires an instruction cache the size of Mount Rushmore and a bundled nuclear reactor to be viable.
Basically it’s the biggest mistake Intel and HP have ever made.
Itanium is Toast with a capital T.
And what are your credentials? You must be a renowned chip designer to so confidently dismiss the Itanium’s architecture, as it is currently one of the fastest CPUs on the market, and made by the biggest chip company in the world.
Instead of Itanium Sun will probably adopt the i4004 which is a much more successful processor from Intel.
This article is just 100% crap!
The biggest problem with the cheap Sun Blade 100s was the IDE chipset. It was absolutely horrible. Sun needs to stick to SCSI.
Not necessarily. I replaced the HDD with a faster one, and the speed improved dramatically. With that being said, if they are going to use a cheaper connector, they should go for SATA at minimum.
Pretty much everyone in the industry outside of Intel knows that Itanium (Itanic) is a horrible design.
Even one of Intel’s key partners, NEC, said so. Intel got so pissed when NEC’s chief technologist, Leonard Tsai, dissed the Itanic that it had the poor guy fired for speaking the truth.
“I believe that Itanium has its place, but not as the overall panacea for 64-bit (computing) like … Intel is drum-beating to the world,” Tsai wrote in an e-mail message.
*and*
During the Platform Conference in July, in San Jose, Calif., Tsai said he expected it to take many years for Itanium to take off due to the new EPIC instruction set. Users are more familiar with the RISC (reduced instruction set computer) architectures used by Sun, IBM and HP, so it is easier to tune RISC-based servers for the best performance. It will take a massive effort to educate enough people about EPIC and the Itanium processors to make them successful, Tsai said. He also added that Intel had “bullied” NEC into picking Itanium for its servers and that HP, as co-designer of EPIC, received preferential treatment from Intel.
*and*
“NEC’s approach to Itanium is the right way,” he wrote in an e-mail message. “(They) take it from the beginning to (the) very high end as a possible solution to replace aged mainframe. . . Building (two-processor Itanium systems) or (single-processor) workstations is begging the question of sanity.”
*and much more*
Overall, Itanic is quite simply a very poorly designed system. It is power hungry, needs giant amounts of cache to get any performance, and because of the VLIW does not scale to multi-core CPUs very well.
Intel has a long history of failed RISC designs, from the 432 to the 860. Itanic is just the latest failure, nothing all too special if it weren’t for the billions Intel has invested in this clunker.
The future of 64-bit computing belongs to AMD64 and iAMD64. These architectures have economies of scale. Nothing else in 64-bit world can compete.
Minor nitpick correction: the i432 was not in any sense a RISC design; it is one of the CISCiest designs ever attempted… basically it made the VAX look simple in comparison. Intel has had great success with RISC too; the i960 is one of the most successful families of processors out there, granted these are used as embedded microcontrollers most of the time.
The claim you made about VLIW not allowing for multicore scaling is nonsense too; where the heck did you get that idea? Most multicore approaches are either multithreaded or plain SMP oriented, and there VLIW doesn’t make a lick of difference. If anything, it theoretically reduces the per-core control overhead (no dynamic scheduler).
The Itanium is by any measure a failure; it is a complex, bloated, inelegant design. And before any of you come over with Intel this, benchmark that… go ahead and read up on the actual architecture. And then try to write a simple program for it; most of the “Itanium fans” have never had to actually deal with the chip face to face. It is a kludge of gigantic proportions.
The idea of VLIW was to allow simple hardware that relies on complex compilers, but people realized that it was easier to throw transistors at the problem than to rely on static compiler technologies. Intel and HP ended up with an extremely complex hardware platform which requires extremely complex compilers. It is a lose/lose situation; no matter how great hand-tuned benchmarks appear, customers do not make a living running benchmarks but running real-life code, and at that Itanium is pretty shitty (it may map well to some customers, but obviously it is not a widespread architecture for a reason).
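For readers wondering what “relying on the compiler” means in practice, here is a minimal sketch (my own illustration in C, not IA-64 assembly) of if-conversion, the kind of transformation an EPIC compiler leans on: a hard-to-predict branch is replaced by computing both outcomes and selecting one, which is roughly what IA-64 predication lets the compiler express directly in the instruction encoding.

#include <stdint.h>

/* Branchy version: an out-of-order core with a dynamic branch predictor
   (P4, Opteron) handles this well in hardware. */
int32_t clamp_branchy(int32_t x, int32_t lo, int32_t hi)
{
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* If-converted version: the comparisons become predicates and the result
   is selected without any control flow, so a static scheduler can pack
   the operations into wide instruction bundles. */
int32_t clamp_predicated(int32_t x, int32_t lo, int32_t hi)
{
    int32_t below = x < lo;     /* predicate 1 */
    int32_t above = x > hi;     /* predicate 2 */
    int32_t r = x;
    r = below ? lo : r;         /* conditional select, no branch */
    r = above ? hi : r;
    return r;
}

This works nicely when both sides are cheap, as here; the complaint above is that real-life code is full of cases the compiler cannot prove enough about at build time to do this safely, which is where the hardware-heavy approach keeps winning.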
God, I hope they don’t; the UltraSPARC processors are great.
sssshhhhhh.
Don’t start talking about Intel’s monopoly. Don’t start talking about how Intel doesn’t play nicely with other CPU manufacturers. Don’t start talking about how Intel only actually competes with itself. If you start talking about such things, people would have to realize that when they talk about Microsoft’s monopoly they are actually talking about the MS/Intel monopoly, which has now become the MS/x86 monopoly. Neither MS nor Intel could ever alone control 90% of the PC market; only the two taken together account for a real monopoly. AMD is just a party to this monopoly, a subcontractor if you will. The x86 architecture has been horrible since day one; it works, but it is a horrible design. Intel has one of the worst track records in the whole industry when it comes to quality of production.
I use Intel because I have never been able to afford real alternatives, due exclusively to the monopoly position of x86; however, an open-platform PPC system is what I would really like to use. AMD’s recent foray into x86-64 changes nothing as regards this situation, although I applaud them finally being able to make a move prior to Intel saying “jump”. Motorola has effectively handed over its PC CPU business to IBM, and IBM is good as a steward of open platforms. Motorola gave up on ever trying to break the monopoly. Now they have gone simply “cellular”, i.e. they only have a CPU name nowadays in the embedded sphere. Back in the day Motorola was so far superior to Intel it wasn’t even funny, but did the market care?
Intel became the monopoly that it has because IBM, during the time when it was Mr. Monopolist, decided that PCs shouldn’t threaten its servers; that’s why it chose the 8088 over the 68000. The first IBM PC design, which was never released, had the 68000. And if it weren’t for Motorola and their 6800 series, Apple would never have come into being, the 6500 family from MOS Technology being a derivative of the 6800, in fact designed mostly by former Motorola engineers. Intel really invented the vendor lock-in strategy: they routinely offered products which competed with their own product line, and to force people to upgrade they would simply pull all of their older CPUs from the shelves (no, I am not talking about simply not manufacturing any more of the older CPUs; I mean ordering their distributors to return unsold CPUs for destruction).
For all the clamour about open-source software, what the world also really needs is open hardware. IBM is good at open platforms and open standards. But the next big step towards open hardware isn’t going to come from any American/European corporations; my eyes are on China and India for this development…
As regards the article, the author does not know what he is talking about. Intel only has a name in the consumer PC world, i.e. cheap devices. Intel has never even been a player in the big server market, and is only becoming one because *NIX (Linux/*BSD/Solaris) effectively masks the weaknesses of the x86 platform. Itanium is the product of Intel engineers who finally rebelled against the braindead x86 platform; it is by far the best technology Intel offers. But Sun’s UltraSPARC technology ranks right up there with the other older big names in the high-end server market (MIPS/Alpha/etc.). It is a known and trusted product, unlike the Itanium. Sun only cut the production of two of its newer processors, two variants of older technology which arguably never should have been developed in the first place.
1) Have you *seen* the benchmarks for Itanium2? It is extremely powerful for FPU code, and has found good reception in the scientific and media computing industry.
That screams niche to me, and my understanding is that Intel was targeting the Itanium to ‘take over’ their product line at some undetermined point in the future. Intel has been doing their best to deemphasize FPU performance by touting SSE1/2/3 for some time now. For some specialized applications, Itanium seems to be dandy, but it seems to be far outside the mainstream CPU market (server and/or consumer).
2) The Itanium is not a badly designed architecture. It’s hit practical problems with compiler technology, but Intel seems to have gotten past the worst of it.
What do you base that on? I haven’t seen huge improvements in compiler technology and I don’t see people suddenly perking up with interest at porting to Itanium. As for the architecture being poorly designed, I’m not informed enough to make an intelligent assessment of that statement so I’ll do my best to bite my tongue.
Beyond that, it’s actually a very cleanly designed architecture.
It’s easy to design a very clean system when you expect the compiler to do all the dirty work. Granted, I think the x86 architecture could benefit from a spring cleaning, but it really seems like they oversimplified in many ways and depended on software to clean up the mess.
Everything that is usually perceived as a problem with the P4 (complex ISA, long pipeline, low IPC) is exactly the opposite in Itanium (simple ISA, short pipeline, extremely high IPC).
Again, I’m not qualified to speak about CPU architectures beyond making general statements, but opposite does not imply better. Usually a more moderate approach is better. I’m not saying that is the case with Xeon vs. Itanium, all I know is I really like the AMD64 approach a _lot_ better than either of Intel’s ‘solutions’.
Most people would rather buy 32 bit Xeons than 64 bit Opterons to run their servers. The market stats prove this.
Oh sure, if you leave out every other reason someone would choose to run a Xeon and not an Opteron. I’m dying to move to an Opteron-based architecture, but when I had to make a decision for my company there were no serious Tier 1 vendors (I don’t remember if IBM had their server out then), FreeBSD (our platform) support for AMD64 wasn’t finalized, and we had good experience with Dell servers. I’m looking forward to having more choices, with Sun, HP, IBM and whatever other vendors competing with Dell for our business when we refresh our hardware.
And what are your credentials? You must be a renowned chip designer to so confidently dismiss the Itanium’s architecture, as it is currently one of the fastest CPUs on the market, and made by the biggest chip company in the world.
Yes, and that’s why it’s catching on like wildfire! If it really offered a compelling price/performance ratio, there would be more effort to move towards standardizing around it. Rambus was also one of the fastest memory types around and it was adopted by the biggest chip company in the world. Of course it’s now pretty much a niche product, and I think that will ultimately be Itanium’s fate as well.
I hear all kinds of people tout Itanium as a great, fast chip, but from my admittedly limited research it only seems significantly faster in high-end, specialized applications where clusters don’t make sense. I don’t think the bulk of the server market is serviced by this chip, and I think it will remain in the domain of the high-end niche.
And now a few of my own comments:
I’ve read that Intel wants to make it price-comparable to the Xeons in a few years… hearing plans for stuffing 9MB of cache on the chip, I have to wonder how they propose to do that without subsidizing Itanium sales rather heavily. Any thoughts on this?
I hear quite a bit about how awful the limitations of the x86 architecture are. I remember way back in the late 80s and early 90s, reading articles in Byte about RISC vs. CISC about how the x86 architecture is a generation or two away from being tapped out. Now here it is well over a decade later and x86 is pretty much dominating all but the low-end (PDA type devices) and high-end (mainframe type machines) marketplaces. Even then, there are low voltage versions of x86 chips that are making inroads into that low-end market, and clusters of x86 chips are showing up in all sorts of high-end mainframes. And yet people still insist that it’s a complete dead end?
I know that AMD64 has mitigated a few of the problems (for example, to address complaints about having so few registers, 64-bit mode provides 16 general-purpose registers instead of 8). What dead ends or obstacles do you folks see that can’t be overcome by future revisions of this instruction set? Do you think virtual machines (such as .NET/Mono and/or Java) will eventually level the architecture playing field?
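As a concrete (and purely illustrative) example of what those extra registers buy in practice: under the 32-bit cdecl convention every argument to the function below is passed on the stack, while under the System V AMD64 ABI all four arrive in registers, because 64-bit mode grows the integer register file from 8 to 16 (r8-r15 are the new ones). The C source is identical either way; only the generated code changes.

#include <stdint.h>

/* A trivial 2-D dot product. Compiled for 32-bit x86, the four arguments
   are pushed on the stack; compiled for x86-64 (System V ABI) they arrive
   in rdi, rsi, rdx and rcx, and the whole function can run without
   touching memory at all. */
int64_t dot2(int64_t ax, int64_t ay, int64_t bx, int64_t by)
{
    return ax * bx + ay * by;
}

Fewer stack spills and reloads is one of the quieter reasons the same program often runs faster when rebuilt for AMD64, quite apart from the larger address space.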
Sadly, this seems to be a problem that the OSNews staff has a lot. I can’t tell you how many times I’ve seen opinion pieces posted as articles lately. And, so far as I can tell, this is just the opinion of another uninformed observer.
What this observer failed to note, however, is that Sun initiated a port to Itanium many years ago and ended up scrapping the project because they were basically getting screwed by Intel. Intel and Sun have had a pretty bad relationship over the years, and people I’ve talked to about this at Sun have implied that Intel was set upon making Sun’s port of Solaris to Itanium a failure. Apparently, very few resources were provided and there were all kinds of fancy stipulations, many of which Sun didn’t care to agree to. So, it sounds like the opportunity for Itanium and Solaris passed long ago.
The author of this editorial seems not to completely understand what the tradeoffs, both in processor architecture and in a business sense, would be for Sun adopting Itanium. It would take a tremendous amount of software work and provide a very negligible advantage in their product line. Clearly x86 and x86-64 is the direction Sun is heading in terms of developing small, fast boxes. As far as large HPC/ccNUMA machines go, the author’s suggestion that Itanium will be at parity with Xeon by 2007 also doesn’t address any future performance advances Sun might make to their UltraSPARC line. It would make much more sense for Sun to continue to improve the performance of UltraSPARC and bring it to parity with processors like Xeon instead of adopting a technology which has *way* less market share than their own chip technology. Just to give some perspective, AMD sold more x86-64s in their first quarter of availability than Intel has sold Itaniums. Itanium has just not been a technology that the market has been embracing. (see http://www.theregister.co.uk/2003/09/03/itanium_fends_off_opteron/ )
In defense of Itanium as a processor architecture, those who write it off as crap are taking a rather narrow view of the architecture. While it was certainly designed in a fashion that does not benefit general-purpose computing, there are plenty of cases where it performs quite nicely. The problem is that Intel was marketing Itanium as a high-performance solution to every computing problem, while the chip was really more suited to markets like HPC. The benchmark numbers from Itanium boxes in that configuration certainly seem to corroborate such suggestions.
I really can’t believe some of the claims made in this editorial. Suggesting that Sun achieved none of its business aims by settling with Microsoft is incorrect. Further, acquiring SGI would do nothing to benefit Sun. In fact, the author’s suggestion that Sun adopt SGI’s strategy shows an incredible lack of understanding of Sun’s corporate strategy as well as of the complex problems SGI encountered in its business model.
I found the comment that someone in this thread made that Sun should buy Red Hat very amusing. Last time I checked, Red Hat’s market cap is almost as large as Sun’s. The way things are going, Sun won’t be buying anything soon.
The same nonsense was put forth when Novell bought SuSE: someone in an interview with the Novell CEO asked, why not Red Hat instead? He kind of brushed the question off, but of course the reason was that at the time Red Hat would have been a much more likely buyer of Novell than vice versa.
The TM processors were built with low energy consumption in mind, while the Itanic was not (AFAIR the 800MHz version drew about 130W, and, well, that apparently isn’t considered a big deal).
The TM processors were designed with ISA independence in mind (via the code-morphing software); for the time being they’re optimized for IA32 (they’ve acquired an AMD64 license, though), while the Itanic is optimized for IA64 applications (it would be stupid to buy an Itanic only to run IA32 applications, as that is not what it is designed for, but it does have IA32 support). The Itanic 2 is quite fast, but too damn expensive.
Regarding the Gemini: Gemini was not only a dual-core UltraSPARC (AFAIR 4-2-1, but do correct me if I’m wrong), it was low-energy (25-35W); actually it could easily qualify for a notebook.
I have never seen so much bullshit posted on this board…
“Intel became the monopoly that it has because IBM, during the time when it was Mr. Monopolist, decided that PCs shouldn’t threaten its servers; that’s why it chose the 8088 over the 68000. The first IBM PC design, which was never released, had the 68000.”
IBM used the 8088 because neither Intel nor Motorola could supply 16-bit interface chips in quantity, so they had to go with the 8088 and use 8-bit ones. It has nothing to do with monopoly; that is just the paranoid fantasy of the early “homebrew” crowd. And yes, they wanted to use the 68000, and indeed did: the IBM Instrumentation computer was released a couple of years after the PC and was the design originally intended for the PC. Mine ran CP/M-68K and was unspectacular in every way, and sales were close to nonexistent. The only 68k computer sold in any quantity at the time used a separate computer for I/O, also due to the lack of support chips.
Intel wants to make Itanium SMT like the Xeon/Pentium, and the Itanium’s VLIW architecture makes simultaneous multi-threading very difficult.
This is why Intel has all sorts of voodoo in their upcoming Itaniums… arbiters, shared caches, etc. I should have been more specific in my earlier post regarding Itanium: to get SMT support, Intel’s multicore design has to be very complex.
Too bad the Alpha was ahead of its time. It would have spanked Itanium. Perhaps Intel should buy it from HP and replace Itanium with a new generation Alpha brought up to date for Intel’s 90nm process.
Itanium was a grand idea: intended to replace ‘proprietary’ RISC solutions (with a proprietary VLIW solution), intended to be the fastest bar none in the computing industry.
The reason it was way overdue: Alpha. The chip family whose performance, power and transistor count IBM’s people apparently couldn’t model. The chip family that owned SPEC for a decade (give or take a year or so). Also the chip that apparently put Intel’s Itanium to shame until it was killed off. What’s the point of introducing a new chip with no following that isn’t the fastest and will cost a bundle to make (read: little profit, or very very high cost, even with Intel’s fabs)? Despite that they continued, and when Itanium was introduced, the Alpha, which Compaq had pretty much decided to kill, still beat it. So kill off the competition. (Also note that while Compaq licensed Alpha to Intel, they legally could not make it exclusive, and the court decision that dealt with DEC selling StrongARM and some fabs to Intel, as well as several anti-trust issues, seems to have made even that questionable.)
Why did it suck so bad? Intel gambled that certain things would happen: that compilers would render practices like hardware branch prediction obsolete, along with other techniques that were and are considered best practice. They were wrong. Compilers have not become things with premonitions about which way something is going to branch; they can estimate, and profiling can help, but they haven’t eliminated the need for hardware branch prediction. Another thing (I have not looked this last one up): IA64 is supposed to have no op-codes left over, which means no instruction set extensions if they are ever needed, yet every architecture I know of that is currently used (Alpha, SPARC, PowerPC, MIPS, and especially x86) has extensions. Even the Alpha, where the only instructions added were 8- and 16-bit loads and stores plus MVI (motion video instructions: a whole three), could use the extensions.
AMD took a similar approach, but unlike Intel, which is large enough to absorb such gambles, AMD gambled the company, not once but twice. #1 was the Athlon. It succeeded, mostly due to licensed Alpha technology (hint: Slot A was not originally AMD’s, nor was the EV6 bus, from the third-generation Alpha core, that Athlons use(d), though it should scale up to 800MHz by spec). #2 is the Opteron, which bears a LOT of similarity to the EV7 (still a third-generation Alpha core, with lots of tweaks): integrated memory controller, on-chip superfast interconnects, etc. I, and most people, think it will succeed, but not really because it’s that revolutionary; rather because it is more of the same old, same old: compatible with Windows, the thing Intel really rode into its position on.
Now Microsoft is pretty much forcing Intel to adopt x86-64/AMD64. One of the reasons Intel and AMD are behind Linux is that they want something they can play off against Microsoft the way Microsoft plays them off against each other. If nothing else, I expect that Intel will be helping out Linux even more because of Microsoft’s actions. Of course, that could prove a problem, as Linux works almost exactly the same on x86, Alpha, AMD64, SPARC, PPC, etc., which could lead to people moving off x86 or AMD64. That means they both really should be looking to make sure Linux doesn’t get big enough to shatter the Windows share and the reliance on x86, despite the fact that Linux is one of the biggest things running on AMD64 and IA64 (if not the biggest on IA64; I don’t see why you would want to run Windows on it, given that the feature set consists of “important feature: emulated x86”).
Summary: Intel shot themselves in the foot after shooting their competitors, and being the giant they are, they are more embarrassed than really hurt.
Oh, and I am tired, so I didn’t find references for each fact claimed; if you want a specific one, I will try to check back later today and post it.
I used to work on the team that supported and developed Itanium. Even at that time I felt it wasn’t a good product, and as the AMD64 and Opteron processors gained market share, I realized that this product was a bomb.
I feel and believe that Itanium stinks and that Sun shouldn’t develop for it.
This article is far too extreme. IBM would bid for Sun if Sun tried to do that; of course it would be blocked, but… I don’t see why this is reported but not this:
http://www.eweek.com/article2/0,1759,1568329,00.asp
It also has a link to how Sun claims SPARC isn’t dead yet, but I guess this article “PROVES THE POINT” that Sun can make rational decisions like adopting Itanium… (I’m joking here).
Problem #3 for Itanium is that it runs x86 code very poorly. Since approximately 90% or so of the code out in the world is 32-bit x86 code, Itanium’s so-so performance on it is one of the primary reasons Itanium will never achieve mass appeal, no matter how well it performs on highly optimized native code, as long as Xeons exist. Even if Intel brings the cost of Itaniums down to the cost of Xeons, the Xeons will still outperform the Itaniums, because the Itaniums can at best emulate x86 code at a rate that makes an Itanium about as efficient as a similarly clocked Xeon. For example:
that IA32 Execution Layer code-named “btrans”, will give the 1.50GHz Itanium 2 an ability to run 32-bit software about as fast as 1.50GHz Xeon MP processors, but given not really high core-clocks of Itanium CPUs, performance of such IA64/32 system will not exceed that of Intel Xeon MP 2.0GHz-based solution much.
http://www.xbitlabs.com/news/cpu/display/20040116164705.html
Now, especially with the introduction of Intel’s own x86-64 Xeons, things don’t look good for Itanium gaining any presence in the volume space. All the factors I have outlined will keep the Itanium in a very niche market, with HP and SGI being the biggest vendors catering to a very niche HPC market. Dell and IBM have shown very little interest in the Itanium (I can’t find one product on Dell’s site that has an Itanium, though I could swear they had one in their catalog; IBM has only 3 rackmounts). Now with Opteron, Intel’s own x86-64 and IBM’s push into making the PPC970 more popular in its low-end catalog, I won’t be very surprised if IBM drops Itanium products completely. Dell, despite being a very cozy Intel partner, hasn’t incorporated the Itanium in its products. This can’t be a very good sign for Itanium’s market acceptance.
If Dell and IBM aren’t adopting Itanium with any zeal, why should Sun?
Never underestimate the power of binary compatibility.
That screams niche to me, and my understanding is that Intel was targeting the Itanium to ‘take over’ their product line at some undetermined point in the future.
FPU performance is extremely important in the media, engineering, scientific, and home (read: games) markets, but not very important in the server industry.
Intel has been doing their best to deemphasize FPU performance by touting SSE1/2/3 for some time now.
And what do you think SSE does, if not FPU operations?
What do you base that on?
The general reception of the Itanium2? Its absolute killing in FPU performance?
I’m not saying that is the case with Xeon vs. Itanium, all I know is I really like the AMD64 approach a _lot_ better than either of Intel’s ‘solutions’.
Even though the I2 is faster than any Opteron for FPU code?
Yes, and that’s why it’s catching on like wildfire!
The Itanium-1 was a failure, no doubt. But the I2 seems to have turned out pretty decently, and didn’t come out all that long ago. I think it’s far too early to declare the failure of the Itanium architecture completely. And it pisses me off when people use popularity as an indicator of quality. Popularity is just an indicator of popularity. If there are quality factors contributing to that popularity, argue about those instead.
Popularity is just an indicator of popularity. If there are quality factors contributing to that popularity, argue about those instead.
It’s also a major factor in the market. The Alpha was a far superior chip, but it never made it. The Itanium could be the holy grail of chips and still never make it in the real world. That’s what this discussion is about. Why would anyone want such a high-priced chip that doesn’t run many existing programs well when they can get an Opteron, for much less, and still run legacy code at native speed? The Opteron also scales very well. There are a whole lot of upsides to the Opteron and not many with the Itanium.
Back on the subject of Sun: considering they already have an x86 Solaris, it isn’t a big deal for them to port it to the Opteron. It will run without even porting it. The Itanium, on the other hand, would require a lot of work to get Solaris running on it.
Sun already has a high-priced solution (UltraSPARC), and it doesn’t seem to be working out too well for them. Their support for Opteron is an obvious attempt to push into the cheap server market.
The best way to run Solaris applications on Itanic would be to create a hardware translation from SPARC -> Itanium, like the HPPA -> Itanium translation, then run GNU/Linux on the machine with the linker able to find the necessary libraries.
I guess that would explain why there are still so many damn Alpha boxes in the scientific computing area.
Since when is it a mark of a good chip that it must be usable in every market?
Itanium is not a desktop chip. In fact, if Rayiner is correct and its FP performance is extremely high, I bet that is where Intel put the Alpha tech and engineers after they bought the chip line.
As for the Opteron, it is not even in the same class of chip as the I2, the POWER4/5, or SPARC.
The Opteron is great for medium-size servers, but there is no way in heck that it could deal with the high-end hardware that Sun, IBM, NEC, and Fujitsu make.
I guess that would explain why there are still so many damn Alpha boxes in the scientific computing area.
Your point? Is Sun or anyone else going to port ANYTHING to an Alpha? Nope. The chip is dead. People may still use it, but it’s not going anywhere.
Since when is it a mark of a good chip that it must be usable in every market?
Itanium is not a desktop chip.
Who claimed it was? This is not about whether or not Itanium is a good chip anyway. This is about whether or not Sun will port Solaris to the Itanium. It’s not going to happen because it’s not feasible. Plain and simple. There is too much work involved for such a small-market chip.
But the I2 seems to have turned out pretty decently, and didn’t come out all that long ago. I think it’s far too early to declare the failure of the Itanium architecture completely. And it pisses me off when people use popularity as an indicator of quality. Popularity is just an indicator of popularity. If there are quality factors contributing to that popularity, argue about those instead.
The Itanium 2 was released in 2001/2002; by the end of 2003 they had, depending on the source, shipped only about 100,000 chips (rumor has it many of those shipments were giveaways for trials, and sales figures don’t match Intel’s numbers). From what I have read, for Intel to recover their initial development cost they would have to sustain a volume of 2 million a year. That doesn’t sound too decent to me.
If the I2 were so popular/hot (no pun intended) there would be no need for Intel to crank out x86-64. Also, as I stated before, the majority of the world runs x86 code, and the Itanium isn’t even close in performance to the Xeon on that code, so I don’t see a mass exodus from Xeon to Itanium anytime soon, or ever actually. Unless of course Intel pulls a rabbit out of the hat and can emulate x86 and x86-64 at native Itanium 2 speeds and price the I2 less than the Xeons. From the looks of the roadmap it doesn’t look like any such rabbit shall emerge from Intel’s hat. I don’t know how Intel plans to bring down the price of the Itanium line with those huge on-chip caches. Huge low-latency caches seem to be a prerequisite for the Itanium to perform; with only 1 MB of cache the Itanium today cannot even hope to compete with a Xeon with a meg of cache.
I agree that it is too early to say whether the Itanium is DOA, nor am I against the architecture. But the logistics just don’t favor the Itanium as it stands today, especially with Intel’s introduction of x86-64 into its own products.
The best way to run Solaris applications on Itanic would be to create a hardware translation from SPARC -> Itanium, like the HPPA -> Itanium translation, then run GNU/Linux on the machine with the linker able to find the necessary libraries.
Huh… why not just run Solaris x86 and, soon, thanks to Sun’s Opteron support, x86-64?!
No doubt what you suggested would be neat 🙂 but the path of least resistance, I say.
That screams niche to me, and my understanding is that Intel was targeting the Itanium to ‘take over’ their product line at some undetermined point in the future.
FPU performance is extremely important in the media, engineering, scientific, and home (read: games) markets, but not very important in the server industry.
If you are seriously suggesting that anyone is going to buy an Itanium 2 or even an Itanium 3 to run games, I don’t think anyone is going to take you seriously. I don’t see television stations or movie studios buying lots of Itanium workstations to work with their media either. The fact of the matter is FPU performance is dandy for some applications and not necessary for others. And so far the only markets I see where this performance is important enough are some scientific and engineering applications. And most of those would probably do better with cheaper, more scalable clusters of lower-end chips.
Intel has been doing their best to deemphasize FPU performance by touting SSE1/2/3 for some time now.
And what do you think SSE does, if not FPU operations?
If you’re saying that SSE is a replacement for FPU operations, then why does the Itanium have blistering FPU performance and not blistering SSE performance? Could it be that SSE != FPU?
What do you base that on?
The general reception of the Itanium2? Its absolute killing in FPU performance?
Um, the best I can say with regard to the “general reception” of the Itanium2 is a renewed interest in future revisions of the chip. I still don’t see big advancements in compiler technology and I don’t see people jumping to the new platform. Most OS vendors have offered reluctant (and often incomplete) support for Itanium2 at best, and only a few big applications have been ported. Can you point to evidence to the contrary? Or am I just supposed to assume that the “general reception” was so positive that I should take it for granted?
As for its absolutely earth-shattering FPU performance… if this boosts overall mainstream application performance as much as you seem to suggest it does, why are people not looking at this with more interest? Could it be that “absolute killing” FPU performance is not the end-all, be-all indicator of overall system performance? An unbalanced processor (one that excels in one or two areas but not in others) will have a hard time as a mainstream chip. Which leads me back to my original conclusion: the Itanium makes a nifty niche chip, not one that will be powering my servers in a few years.
I’m not saying that is the case with Xeon vs. Itanium, all I know is I really like the AMD64 approach a _lot_ better than either of Intel’s ‘solutions’.
Even though the I2 is faster than any Opteron for FPU code?
Yes, even though the Itanium 2 offers blistering FPU performance, the Opteron has a much better price to performance ratio for the applications I (and I honestly believe most other people) actually buy CPUs for. And I’m not even getting into the fact that more software will be supported by AMD64 than by Itanium… another factor that will relegate the Itanium to a niche market.
Yes, and that’s why it’s catching on like wildfire!
The Itanium-1 was a failure, no doubt. But the I2 seems to have turned out pretty decently, and didn’t come out all that long ago.
And neither did AMD64, and yet that seems to be taking off far, far faster, especially in the mainstream. Whereas Itanium is at best relegated to niche, high-end applications, where it will never achieve the economies of scale it needs to be cost-effective, nor the software support it needs to find its way into mainstream computing.
I think it’s far too early to declare the Itanium architecture a complete failure.
You are absolutely correct in that, and I will happily concede the point. However, I honestly believe it will be a market failure, for several different reasons, one of which is the set of decisions that went into making the architecture.
I had an argument years ago with someone who was thoroughly enamored with MiniDiscs, and I argued that they would fail to leave a dent in the marketplace. It was too early to declare them a failure, but it was pretty obvious that’s where they were headed. I feel just as confident predicting Itanium’s ultimate failure.
And it pisses me off when people use popularity as an indicator of quality. Popularity is just an indicator of popularity.
Again, you are absolutely correct. And there are lots of precedents in history regarding superior products that failed, even though they were technically superior in terms of quality. The best analogy I can come up with (and which was already pointed out by Abraxas) is the Alpha architecture.
If there are quality factors contributing to that popularity, argue about those instead.
Okay, I think it was dumb to rely on the compiler to make the code run fast. To make matters worse, we’re moving towards JIT compilers and byte code, and I don’t see how those can be heavily optimized to run efficiently on Itanium. For the applications (not to mention the OS itself) that remain in binary form, there’s still too much ‘randomness’ in the average application to accurately predict the things the Itanium would like fed to it.
I think that slapping on absurd amounts of cache makes the chip more expensive to produce and makes it run hotter. This has the unfortunate effect of leaving out low-end and mobile computing, which are important segments that can’t be ignored. If you don’t support those segments, the architecture will never achieve mainstream acceptance.
I think that optimizing for FPU performance makes some applications that depend on it run really well. It also has the drawback of making general applications run very unevenly. For an architecture that was never intended to be a niche product, that’s a really poor design decision in my opinion.
I think it was a dumb idea to add x86 compatibility that was useless at best. The fact that software emulation runs faster leads me to believe it was wasted silicon. Offering decent to top-notch IA32 performance is important if you want to have compatibility, and compatibility is essential if you want to transition people to the architecture. The idea of buying a CPU that costs more than a fairly high-end consumer system yet performs worse than a low-end consumer system is just appalling.
> FPU performance is extremely important in the media, engineering, scientific, and home (read: games) markets, but not very important in the server industry.
> The general reception of the Itanium2? Its absolute killing in FPU performance?
> I’m not saying that is the case with Xeon vs. Itanium, all I know is I really like the AMD64 approach a _lot_ better than either of Intel’s ‘solutions’.
> Even though the I2 is faster than any Opteron for FPU code?
The Itanium 2, according to Pricewatch, costs anywhere from $1400 to $5200 (1.5 GHz, 6 MB) just for the CPU. The Opteron 148 costs $700.
People don’t just base their purchase decisions on performance; only very few people do. For most, what matters is x86 code, and since the Itanium runs it at roughly 50% efficiency you can halve its performance there. The Opteron will outperform the $5200 Itanium 2 at 1/7th the price on the code that matters to 90% of the world. Not much incentive to buy an I2 for 90% of the world’s population. Why would I buy an $8,000 I2 box with 1.5 GHz and 6 MB of cache if my game, written for x86, won’t run as fast as it would on an Athlon 64 FX or P4 Extreme at 1/8th the cost? And why would game vendors port their games to a platform with yearly volumes of 100,000, even if Intel gives the chip away? All that academic performance means nothing in the “real world”.
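To make the arithmetic in that argument explicit, here is a rough back-of-the-envelope sketch in C. The prices are the list prices quoted above, and the “runs x86 at about 50% efficiency” figure is the claim made in this thread, not a measurement, so treat the output as illustrative only:

#include <stdio.h>

int main(void)
{
    /* Figures quoted in this thread, not benchmarks of my own. */
    double itanium2_price = 5200.0;   /* 1.5 GHz / 6 MB part           */
    double opteron_price  = 700.0;    /* Opteron 148                   */

    double opteron_x86_perf  = 1.0;   /* baseline on existing x86 code */
    double itanium2_x86_perf = 0.5;   /* assumed ~50% efficiency claim */

    printf("Price ratio (Itanium 2 / Opteron): %.1fx\n",
           itanium2_price / opteron_price);
    printf("x86 performance per dollar, Opteron vs. Itanium 2: %.1fx\n",
           (opteron_x86_perf / opteron_price) /
           (itanium2_x86_perf / itanium2_price));
    return 0;
}

On those assumptions the Opteron comes out roughly 15x ahead in x86 performance per dollar, which is the whole point being made above.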
The same arguments that people give against Apple apply to the Itanium, and explain why it will probably never gain enough market share. I’ll say it again: never underestimate the power of binary compatibility. There are not many, if any, media applications natively ported to the Itanium that I know of, and no games. The only market the Itanium is suited for today is scientific/engineering computing, a very, very niche market.
Well said. I was typing my post and saw yours after I hit submit.
Opteron may be more mainstream, but it is certainly in the same class as the I2, POWER4, G5, and SPARC in terms of performance and scalability (POWER4 will scale better; I don’t think any of the others will, with the possible exception of Fujitsu’s SPARC64).
Also, please explain why Cray chose to use opterons?
http://www.cray.com/products/systems/xd1/
Another thing: as far as I can find, IA64 uses Intel’s traditional shared access to memory. I hoped they were smart enough not to do this, but as far as I can tell they did, since dedicated bandwidth to the northbridge doesn’t seem to be touted anywhere, and it would be if they had it. (That is also part of the reason the Xeons need to exist: without the extra cache their performance would not scale well at all above one, and possibly two, processors due to competition for I/O.) The Athlon MP didn’t have that issue, as each processor had dedicated access to the northbridge, and the Opteron doesn’t even have the memory-access bottleneck unless the data is in a different processor’s memory.
Altix looks nice, but it seems to share the northbridge between every 4 CPUs, which really means it isn’t going to scale as well as Cray’s solution once each is optimized.
The only place where IA64 might be beating AMD64 is in FP, and only then with a horrible price/performance ratio.
And it isn’t by much: 1500 vs 1800 on SPEC CPU2000 FP (looking back, SGI’s Altix seems to have managed 2100). Now consider that for the cost of one Itanium processor you can probably get a complete 4-way Opteron system. (You can if you compare it to the 1.5 GHz I2 at $5000; the $1500 1.4 GHz part does not appear to be any faster than an x48 Opteron, at ~1500 SPEC CPU2000 FP.)
Intel tuned the IA64 compiler for SPEC; you should be able to find the discussions on comp.arch…
Also, IBM uses HALF the number of CPUs to beat the IA64 machine in TPC-C.
http://www.tpc.org/tpcc/results/tpcc_perf_results.asp?resulttype=no…
You will find that IBM’s system is dual-core, so it is using the same number of CPUs as Intel. IBM’s system runs at a higher total clock rate, so it is actually less efficient than Intel’s system.
While Itanium is a poor design, IBM POWER / PowerPC is showing itself to be no great champion either.
Notice how there is no PowerPC 970 (G5) entry for SPEC as the benchmarks are abysmal, the worst of any mainstream processor.
The “16 bit interface chips” you are talking about were 16-bit-wide memory. The Intel 8088 was cheaper because you only needed 8 bits + 1 for parity (8 x 8-bit wide + 1 for parity per block of 64K). This is because the 8088 could only address 16 bits of memory (64K) at any given time (just like the 6809) and was mapped via its MMU (the 6809 lacked an integral MMU) to give 16 blocks of 64K, of which 10 were accessible to DOS (640K). The 68000 had 24-bit address lines for memory; it could address 1 meg of memory at a time, i.e. direct addressing. You needed 16-bit-wide memory for the 68000, which was somewhat more expensive at the time, although it was in production and available.
I am not a conspiracy theorist. Just as the IBM PC was hobbled so as not to compete with their workstations, the IBM PCjr was hobbled so as not to compete with the IBM PC. Pitting products against one another, enhancing or crippling them relative to each other, is a hallmark activity of monopolists. IBM at that time was THE big evil monopolist. Only when Compaq (1984) started to really sell was IBM’s stranglehold on the blossoming PC market finally broken.
If it weren’t for AMD, who, up until recently, asked “how high” each time Intel said “jump”, no one in their right mind would even question whether Intel had a complete and utter monopoly on commodity CPUs.
WRONG!!!
Read the link CAREFULLY: IBM is using 32 CPUs, not 32 chips.
Power4 has 2 cores (CPUs) per Chip…
http://www.tpc.org/tpcc/results/tpcc_perf_results.asp?resulttype=no…
When POWER5 comes out, IA64 will suck even more!!!
This is my original statement:
You will find that IBM’s system is dual-core, so it is using the same number of CPUs as Intel.
You will find the statement is indeed correct.
As IBM’s system is dual-core, the total number of processors is the same for the two systems, IBM and Intel. In this specific comparison, IBM delivers less performance per gigahertz than Intel.
Can you read at all???
http://www.tpc.org/results/individual_results/IBM/IBM_690_040217_es…
Read the Server tab:
32 x 1.9 GHz POWER4+
w/ 128 MB L3 cache per MCM, 4 MCMs
2 cores per chip, 4 chips per MCM, 4 MCMs per server
2 * 4 * 4 = 32
Or you can’t do math???
Hey, you’re right. I got confused between counting what IBM refers to as a “chip” vs. a “processor”.
—
POWER4 Chip
The components of the POWER4 chip are shown in Figure 1. The chip has two processors on board. Included in what we are referring to as the processor are the various execution units and the split first level instruction and data caches. The two processors share a unified second level cache, also onboard the chip, through a Core Interface Unit (CIU) in Figure 1. The CIU is a crossbar switch between the L2, implemented as three separate, autonomous cache controllers, and the two processors. Each L2 cache controller can operate concurrently and feed 32 bytes of data per cycle. The CIU connects each of the three L2 controllers to either the data cache or the instruction cache in either of the two processors. Additionally, the CIU accepts stores from the processors across 8-byte wide buses and sequences them to the L2 controllers. Each processor has associated with it a Noncacheable (NC) Unit, the NC Unit in Figure 1, responsible for handling instruction serializing functions and performing any noncacheable operations in the storage hierarchy. Logically, this is part of the L2.
The directory for a third level cache, L3, and logically its controller are also located on the POWER4 chip. The actual L3 is on a separate chip. A separate functional unit, referred to as the Fabric Controller, is responsible for controlling data flow between the L2 and L3 controller for the chip and for POWER4 communication. The GX controller is responsible for controlling the flow of information in and out of the system. Typically, this would be the interface to an I/O drawer attached to the system. But, with the POWER4 architecture, this is also where we would natively attach an interface to a switch for clustering multiple POWER4 nodes together.
Also included on the chip are functions we logically call Pervasive function. These include trace and debug facilities used for First Failure Data Capture, Builtin Self Test (BIST) facilities, Performance Monitoring Unit, an interface to the Service Processor (SP) used to control the overall system, Power On Reset (POR) Sequencing logic, and Error Detection and Logging circuitry.
Four POWER4 chips can be packaged on a single module to form an 8-way SMP. Four such modules can be interconnected to form a 32-way SMP. To accomplish this, each chip has five primary interfaces. To communicate to other POWER4 chips on the same module, there are logically four 16-byte buses. Physically, these four buses are implemented with six buses, three on and three off, as shown in Figure 1. To communicate to POWER4 chips on other modules, there are two 8-byte buses, one on and one off. Each chip has its own interface to the off chip L3 across two 16-byte wide buses, one on and one off, operating at one third processor frequency. To communicate with I/O devices and other compute nodes, two 4-byte wide GX buses, one on and one off, operating at one third processor frequency, are used. Finally, each chip has its own JTAG interface to the system service processor.
http://www-1.ibm.com/servers/eserver/pseries/hardware/whitepapers/p…
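To make the “chips vs. processors” confusion above explicit, here is a tiny C sketch of the packaging hierarchy the quoted paper describes; the numbers come straight from that text:

#include <stdio.h>

int main(void)
{
    const int cores_per_chip = 2;   /* two processors share the on-chip L2 */
    const int chips_per_mcm  = 4;   /* four POWER4 chips per module (MCM)  */
    const int mcms_per_box   = 4;   /* four MCMs in the 32-way system      */

    int chips = chips_per_mcm * mcms_per_box;   /* 16 physical chips */
    int cpus  = cores_per_chip * chips;         /* 32 "CPUs" (cores) */

    printf("%d chips, %d CPUs\n", chips, cpus);
    return 0;
}

So the TPC-C entry’s “32 x 1.9 GHz POWER4+” is counting cores, spread over 16 dual-core chips.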
> Hey, you’re right. I got confused between counting what IBM
> refers to a “chip” vs. a “processor”.
Yes, the terms used by IBM are a little confusing, and Intel’s marketing makes people think that they have the *greatest* chip in the world…
In fact, the I2 used to suck even more, but Intel added a huge on-chip 6MB cache (POWER4’s L3 is huge too, but it is off-chip and draws less power).
Also, Intel says that their system is open… Open??? Is there another company manufacturing IA64 chips???
And I don’t think IA64 can make it in the next 5 to 10 years…
P.S.
Sorry for being rude in replying to your messages…
It appears that RISC is better than CISC or VLIW.
Why doesn’t a group of college engineers create an open RISC spec, much as Linus did, one that is a clean slate, reflecting everything the industry has learned about CPU design, and release it as non-proprietary, so anyone can clone it?
“The “16 bit interface chips” you are talking about were 16-bit-wide memory. The Intel 8088 was cheaper because you only needed 8 bits + 1 for parity (8 x 8-bit wide + 1 for parity per block of 64K). This is because the 8088 could only address 16 bits of memory (64K) at any given time (just like the 6809) and was mapped via its MMU (the 6809 lacked an integral MMU) to give 16 blocks of 64K, of which 10 were accessible to DOS (640K).”
While you are correct that IBM chose the 8088 and not the 8086 (which had a 16-bit data bus) because it needed fewer memory chips, you are wrong about the amount of accessible memory. At any one time an 8086/88 could address 64 kB of code (CS), 64 kB of stack (SS), 64 kB of data (DS) and 64 kB of extra data (ES), which made it considerably more powerful than 8-bit processors.
[Another cost-saving aspect of the 8086/88 was that it multiplexed the address and data pins, which made it possible to use a lower-cost package for the CPU.]
“The 68000 had 24-bit address lines for memory; it could address 1 meg of memory at a time, i.e. direct addressing. You needed 16-bit-wide memory for the 68000, which was somewhat more expensive at the time, although it was in production and available.”
The 68000 could address 16 MB of memory at a time without segmentation, which made it easier for compilers to generate good code.
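Since the two addressing schemes keep coming up, here is a small C sketch of the difference. On the 8086/8088 a physical address is formed from a 16-bit segment and a 16-bit offset (segment * 16 + offset), which yields a 20-bit, 1 MB address space but only 64 kB per segment; the 68000 simply drives a flat 24-bit address, so 16 MB is reachable with no segment bookkeeping:

#include <stdio.h>
#include <stdint.h>

/* 8086/8088 real-mode address formation: 20-bit physical address. */
static uint32_t x86_physical(uint16_t segment, uint16_t offset)
{
    return (((uint32_t)segment << 4) + offset) & 0xFFFFFu; /* wraps at 1 MB */
}

int main(void)
{
    /* Start of the area reserved above conventional memory: the 640K line. */
    printf("8086 A000:0000 -> 0x%05X (%u KB)\n",
           x86_physical(0xA000, 0x0000),
           x86_physical(0xA000, 0x0000) / 1024);

    /* Top of the 8086's 1 MB space vs. the 68000's flat 16 MB space. */
    printf("8086 top of memory:  0x%05X\n", x86_physical(0xFFFF, 0x000F));
    printf("68000 top of memory: 0x%06X (flat 24-bit addressing)\n", 0xFFFFFF);
    return 0;
}

That segment arithmetic is also where the “640k barrier” lamented further down comes from: DOS could only use the blocks below A000:0000.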
> It appears that RISC is better than CISC or VLIW.
> Why doesn’t a group of college engineers create an open RISC spec, much as Linus did, one that is a clean slate, reflecting everything the industry has learned about CPU design, and release it as non-proprietary, so anyone can clone it?
You mean, like the SPARC? http://www.sparc.org/
It makes sense for Sun to stop developing their own processors because of their crappy performance. I doubt, however, that they are going to drop the SPARC.
Fujitsu makes a FANTASTIC line of SPARC processors that just stomps Sun’s line. The SPARC64 V is at the top of the 64-bit pack in terms of performance. Sun’s recent dealings with Fujitsu point to this new course of action.
It makes no sense for them to spend so much money making slow processors when they can buy faster compatible processors and Sun sees it. McNealy made the right decision and it can only help them in the future.
In the 386 days the FPU was an add-on coprocessor; it has been integrated since the 486. The transistor budgets of current CPUs like the Prescott and Opteron, however, are very high, and there are a lot of posts here about the Itanium FPU being better than the Opteron’s.
CPU magazine, with AnandTech’s Anand Lal Shimpi, said that there’s a coprocessor that just does FPU/vector processing. Since the Opteron has HyperTransport, why not create an FPU/vector-only unit with internal HyperTransport links to hook up to the Opteron and increase its FPU ability? That, along with a GPU from ATI or NVIDIA, should make a terrific gaming machine or workstation.
Thanks Megol, you hit the nail on the head. My memory isn’t what it once was. When the 8088 (1979) came out I was using a 6809E (1976) from Motorola. At that time, 1979, Motorola offered an MMU chip for the 6809 which gave it almost the same specs as the 8088. The 68000 was a massive step forward (although 68000 assembly turned me off; it looked like PDP-11). IBM could have chosen either the 68000 or the 8086 (you can guess which I would have preferred) but they didn’t, not because of unavailability, but ostensibly because of “price concerns”, which of course become moot once large-scale production ensues…
IBM had a relatively successful workstation market at the time and they did not want a bottom-entry PC to compete with it. Had they chosen the 8086 we would still have been stuck with Intel, and Intel would have taken its route to monopoly differently, but the whole computing world would not have been lamed by the damned “640k” barrier. Had they chosen Motorola, Intel would be producing better products today due to the competition. What I really regret is how the coding techniques of the 8/16-bit CPU era simply vanished once the 16/24/32-bit CPUs took over.
Our motto was position-independent, re-entrant code. In 1984 I was running, get this, a Tandy Color Computer 3 with 512K running OS-9 Level II. OS-9, from Microware, was the then equivalent of Linux, structurally speaking. It was written in assembler, it fit easily in 512K, it ran from diskettes, and there was plenty of room left for applications. It provided multi-tasking, multi-user functions with a graphical windowing environment uncannily like X11. You had virtual terminals, just like Linux, and I routinely had a half dozen applications running in the different terminals. All this on a 2.3 MHz CPU. Although the x86 line had more registers, the 6809E had over 20 different addressing modes, giving one nearly 2000 opcodes to use. I once had to use a 6809E macro assembler written for the 6800 to burn a ROM for the 6809E on an old Altair 8800 with S-100 buses (the one with the flip switches on the front and no native storage; luckily the guy I worked for had done the hard work, i.e. writing a terminal driver and a cassette operating system, with those switches and with no storage whatsoever!)
I didn’t have the money at the time, but there were computers one could get that came with dual 6809Es with an MMU, plus 1 MB of memory (expandable to 16 MB), plus SCSI/parallel/serial. I got my first PC compatible (8088, 512K, 5-1/4″ diskette drive) the next year; it was like going back to the stone age. But by that time the 68000 had already found its way into the Macintosh, Amiga and Atari machines. My personal favorite CPU of the era was the 68514, which was like a 6809E on steroids, with 32-bit addressing and a built-in MMU running at 8 MHz. Ahhh, the memories…
Itanium is a badly designed architecture. I like the predication, because you can describe both paths of a branch and avoid the branch-misprediction hit, but:
First, the software-pipelining feature is an extremely convoluted piece to understand. It uses the predicate registers and the register stack engine together, but what happens when you are already using the predicate registers?? Plus, it makes the RSE more complicated. A lot of the stuff is done behind the scenes, or with a lot of hand waving, which is bad when you are programming in assembly language. Believe me, I’ve done IA64 assembly programming for work, and I could not make heads or tails of a lot of the examples given by Intel.
Second, because of the instruction bundles/groups, the instruction cache is effectively very small.
Third, there is NO integer multiply or divide!!! So you have to move the values over to the floating-point unit. Well, what happens if the FPU is busy doing loops??
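To put the first and third complaints in C terms, here is a minimal sketch; this reflects how I understand the architecture, not output from a real compiler, and the instruction names in the comments are the usual lowering as I understand it rather than anything authoritative:

#include <stdint.h>

/* 1. Predication: on IA64 a compiler can if-convert this branch into
      straight-line code.  Both sides are emitted, each guarded by a
      predicate register set by a compare (something like
      "cmp.lt p1, p2 = x, 0" followed by "(p1) mov x = 0"), so there is
      no branch to mispredict.  The flip side, as noted above, is that
      those predicate registers are the same ones the software-pipelining
      machinery wants to use. */
int32_t clamp_to_zero(int32_t x)
{
    if (x < 0)
        x = 0;
    return x;
}

/* 2. No integer multiply in the integer unit: a plain multiply like this
      gets routed through the floating-point register file (roughly
      setf.sig / xma.l / getf.sig), so it competes with whatever FP work
      is already in flight. */
int64_t scale(int64_t a, int64_t b)
{
    return a * b;
}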
“The “16 bit interface chips” you are talking about were 16-bit-wide memory. The Intel 8088 was cheaper because you only needed 8 bits + 1 for parity (8 x 8-bit wide + 1 for parity per block of 64K).”
You are missing the point altogether. I am not talking about the memory bus width but about the peripheral chips. Both companies pitched 16-bit-bus-based designs, but neither could supply the variants IBM wanted, in time and in the quantities IBM wanted, so IBM went with a temporary design based around the 8088 bus design (lightly modified). Some cards designed for the Intel system were in fact usable with the IBM PC electrically (though not mechanically). It turned out to be permanent only by virtue of its popularity.
Oh, a CoCoNut
That explains the ranting
Just a sidenote, and getting off topic: not only could Intel not supply the 16-bit peripheral chips that IBM wanted, the PC team found out to their surprise that chips Intel had been advertising as available since 1979 had not even been sampled…
Niagara: A Torrent of Threads
http://www.aceshardware.com/read.jsp?id=65000292
Um, NO.