Intel President and Chief Operating Officer Paul Otellini on Wednesday said the world’s largest chipmaker would likely give its 32-bit microprocessors an upgrade to 64 bits once supporting software becomes available. UPDATE: Intel plans to demonstrate a 64-bit revamp of its Xeon and Pentium processors in mid-February: an endorsement of a major rival’s strategy and a troubling development for Intel’s Itanium chip.
From the little bit of info I have seen on the Prescott, it is about the same speed clock-for-clock as the Willamette. It uses way more power, produces more heat, and does not scale well. I am sure they are going to improve it from where it is now, but Intel was probably counting on having better success with the chip than it is having. Prescott’s inability to scale may push Intel to move to its next architecture faster.
Lucky for Intel, AMD is going to help pioneer the way to the 64-bit desktop.
PS. AMD is back
Why don’t they move to 256-bit right now? Or 512-bit?
Prescott already has the 64-bit instructions; they’re currently disabled, though.
http://www.theregister.co.uk/content/archive/33624.html
http://www.xbitlabs.com/news/cpu/display/20031027151409.html
@Buck: Why don’t they move to 256-bit right now? Or 512-bit?
They did; it’s called a GPU, e.g. NVidia’s GeForce line of chips.
Actually, Prescott is showing a 10% performance increase over Northwood from the doubled L1 and L2 caches alone. There are also architectural improvements, HT2, and SSE3, and it’ll launch at the same price as similarly clocked Northwoods. The power increase is minimal, and the “120W heat dissipation” figure that circulated around the net was an unfounded rumour. Prescott is supposed to scale incredibly well; Intel has stated that they hope to have it at 5GHz soon.
And thank God for AMD; otherwise I would be paying $1000 for a 1GHz P3 right now instead of $300 for a 3.2GHz P4.
It’s not a very good strategy for Intel, since 32-bit applications fill most people’s needs.
It’s like expecting users to drop Win98.
It’s not a very good strategy for Intel, since 32-bit applications fill most people’s needs.
The same can be said about AMD: why make Opterons if 32 bits is enough for most people? It sounds more like a “knee-jerk” reaction by Intel to keep up with AMD. But the target audience is most likely low- to mid-range servers, in other words NOT people who still use Win98.
Why don’t they move to 256-bit right now? Or 512-bit?
Unnecessarily increasing a machine’s word size is inefficient. There are numerous reasons for 64-bit computers, most notably addressing large amounts of RAM and supporting large filesystems with large files. There are also several instances where 64-bit integers are necessary in a variety of programs (any time you need to store a number bigger than about 4.2 billion).
64-bit integers let you store numbers up to 18 quintillion. If you need numbers larger than that, you’re better off using an arbitrary-precision library.
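To make those limits concrete, here is a quick C99 sketch (just the standard stdint.h limits, nothing platform-specific):

#include <stdio.h>
#include <inttypes.h>

int main(void) {
    /* The largest value each unsigned integer width can hold. */
    printf("32-bit max: %" PRIu32 "\n", UINT32_MAX);  /* 4,294,967,295 (~4.2 billion) */
    printf("64-bit max: %" PRIu64 "\n", UINT64_MAX);  /* 18,446,744,073,709,551,615 (~18 quintillion) */

    /* One past the 32-bit limit wraps back around to zero. */
    uint32_t wrapped = UINT32_MAX + 1u;
    printf("32-bit overflow: %" PRIu32 "\n", wrapped);
    return 0;
}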
Remember, the word size of the architecture is also used for addresses and opcodes, and increasing a system’s word size also affects the size of binaries produced and consequently the amount of time it takes to load machine instructions from main RAM. This is why, in general, 64-bit applications are slightly slower than 32-bit ones.
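At least the data-size half of that is easy to demonstrate. A minimal C sketch, assuming you compile the same file once as 32-bit (e.g. gcc -m32) and once as 64-bit; every pointer, and every structure that holds one, grows:

#include <stdio.h>

/* A typical linked-list node: the pointer is 4 bytes in a 32-bit
   build and 8 bytes in a 64-bit build, so pointer-heavy data
   structures get bigger just by recompiling. */
struct node {
    struct node *next;
    int value;
};

int main(void) {
    printf("sizeof(void *)      = %zu\n", sizeof(void *));
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}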
The same can be said about AMD: why make Opterons if 32 bits is enough for most people? It sounds more like a “knee-jerk” reaction by Intel to keep up with AMD. But the target audience is most likely low- to mid-range servers, in other words NOT people who still use Win98.
That’s right. It makes me think Intel is giving up on trying to catch AMD, because the last I knew, they were trying to get more performance out of their 32-bit emulation.
But that’s a decision that may cost them.
18 quintillion
How about we start using the SI system of exponents for a change? What’s this trillion, gazillion, quintillion nonsense?
Intel doesn’t really care whether or not their chips are #1 in performance; they care about their business being #1 and making more $$. Sure, having higher numbers in the benchmarks helps sell a product, but Intel has partners like Dell to do the selling (Dell’s success is a window into Intel’s success).
There aren’t as many vendors selling Itanium systems as there are selling Pentium 4 systems. But the latest obsession is 64 bits, and AMD is already ahead, with several partners building and selling AMD64 systems. More Opterons were sold in the first week (or was it month?) than Itaniums in a whole year.
Prescott already has the 64-bit instructions; they’re currently disabled, though.
I’ve heard comments like this many, many times, and that simply doesn’t make it a 64-bit processor. It cannot address memory with 64-bit addresses, nor can it hold 64-bit values in general-purpose registers, because it doesn’t have any.
If Intel does make the processors 64-bit, what happens to Itanium? Also, if they were to make it 64-bit, who is to say they will make it compatible with AMD’s extension rather than introducing their own incompatible extension?
Just FYI: a quintillion is 1 * 10^18. For those interested, it goes Billion -> Trillion -> Quadrillion -> Quintillion, and the prefixes are Latin, breaking down to 1 * 10^(LatinNo * 3 + 3): quintillion carries the Latin prefix for five, so it is 10^(5 * 3 + 3) = 10^18. The exception is Million, where “mil” is Latin-derived and means 1,000. Just in case you’re wondering at all.
It’s easier to understand in written numeric form, though. But then, that’s always been the case, whether working with exponents or just regular numbers.
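If anyone wants the pattern spelled out, here is a throwaway C loop of my own (short-scale names only) that prints the rule above:

#include <stdio.h>

int main(void) {
    /* Latin prefix number n maps to 10^(n*3 + 3):
       million = 1, billion = 2, ..., quintillion = 5. */
    const char *names[] = { "million", "billion", "trillion",
                            "quadrillion", "quintillion" };
    for (int n = 1; n <= 5; n++)
        printf("%-12s = 10^%d\n", names[n - 1], n * 3 + 3);
    return 0;
}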
“the word size of the architecture is also used for addresses and opcodes, and increasing a system’s word size also affects the size of binaries produced and consequently the amount of time it takes to load machine instructions from main RAM”
Actually, that is not totally correct: a lot of 64-bit machines still keep a fixed 32-bit instruction length. In the case of IA-32 the point is a bit moot, since it has variable instruction lengths anyway. But SPARC64 and MIPS IV actually use 32-bit-wide instructions, since they had enough room left in the opcode space to add the operations that use 64-bit operands.
Intel dropped the ball here. Too bad for them.
Intel is so much bigger than AMD and Apple that they can recover from any mistake.
They dropped the ball with respect to the Pentium 3 and Athlon, but then they came back with the Pentium 4, and AMD bled red.
@Eric,
You’re right, that doesn’t make Prescott a 64-bit CPU. However, despite the fact that x86 CPUs have a limited number of GPRs, those registers can be renamed, as is the case with MMX/SSE. MMX reuses the FPU registers: each 64-bit MMX register (MM0 through MM7) is aliased onto the low 64 bits of an x87 register. I know what you’re thinking, that’s still not a real 64-bit GPR, because although the register is 64 bits wide, the MMX instructions still operate on packed chunks of data (e.g. two 32-bit values) rather than one whole 64-bit datatype.
The fact that they used floating-point registers to do integer math might be a sign that the same technique can be used to create the impression of a 64-bit CPU on a 32-bit CPU using existing registers. With SSE2, Intel introduced new 128-bit registers specifically for SSE2 instructions, no longer reusing the FPU registers. Suppose they took those 128-bit SSE2 registers, broke them up into pairs of 64-bit registers, and renamed them (on the fly) to identify them as 64-bit GPRs. These rumored “64-bit instructions” may be what steers all this work and allows developers to do 64-bit arithmetic. But as far as 64-bit memory addressing goes, you’ve got me there.
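As an aside on that point: SSE2 can already do genuine 64-bit integer arithmetic in the XMM registers on a 32-bit CPU. Here is a minimal C sketch using the standard paddq intrinsic; to be clear, this is my own illustration of the idea, not whatever Intel’s rumored 64-bit instructions actually are, and it does nothing for 64-bit addressing:

#include <stdio.h>
#include <inttypes.h>
#include <emmintrin.h>   /* SSE2 intrinsics */

int main(void) {
    /* Two pairs of 64-bit integers packed into 128-bit XMM values. */
    uint64_t a[2] = { 4294967295ULL, 40 };   /* 2^32 - 1: already at the 32-bit ceiling */
    uint64_t b[2] = { 1, 2 };
    uint64_t r[2];

    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);

    /* paddq: two independent full 64-bit additions in one instruction,
       even on a CPU whose general-purpose registers are only 32 bits. */
    __m128i vr = _mm_add_epi64(va, vb);

    _mm_storeu_si128((__m128i *)r, vr);
    printf("%" PRIu64 " %" PRIu64 "\n", r[0], r[1]);  /* prints 4294967296 42 */
    return 0;
}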
Intel did not “drop the ball” on the Pentium 3. They were doing well right up until the P4; I believe the P4 is where they dropped the ball. That’s when it stopped being a contest of engineering genius and became a numbers game, in other words marketing. The engineers working on the P4 were not happy with the design, but it was driven by the marketing department; they wanted more MHz.
I reiterate, they did not drop the ball on the Pentium 3, and the mobile Pentium M is *living proof* that the P3 design is better than the P4’s.
The Itanium was a $10+ billion mistake. If it were a simple matter of losing money, then you’re right, Intel can recover from any mistake. But this one has a ripple effect: the industry might look at Intel differently from now on.
I think he meant that Intel using Rambus with the P3 is where they “dropped the ball”. The P3 design was good; it was just paired with very expensive proprietary memory.
It was the P4 where Intel used Rambus, I think. There were plenty of P3 machines using SDRAM. And P4 chipsets use DDR nowadays, so RAM is not much of an issue right now. They got burned by Rambus, that is for sure, but not with the P3, whose chipsets mostly used SDRAM.
I think I remember AMD advertising a performance increase for its Athlon 64 running in “64-bit mode”, but recent articles and comments say 64-bit will be slower.
Well, no doubt this will put a dent in Itanium, but to be honest I never really saw it going anywhere; you can’t grow a market downward from the high end. Itanium will end up in its niche as a PA-RISC/Alpha replacement, as well as being used in HPC (SGI Altix).
Actually, the P3 beat the original P4 in most respects. The final P3 line ever made was code-named “Tualatin”. I use two desktops with these CPUs and I am very happy with the performance. They are paired with Soyo motherboards and 512MB of Crucial PC133 RAM, and that is why I have not made the move to the P4; in fact, the way AMD is looking, I may switch to them for my next desktop.
Look up the specs and reviews for the “Tualatin” CPUs and you will see lots of positives. It is this line that outperformed the original P4 “Willamette”, as a few hardware sites have said. The fastest P3 made is 1.4GHz; it was made for a server, but my motherboard does support it. It has 512K of on-die cache if I recall, which is awesome for a P3, hence the server designation. The other CPUs in this line all have 256K cache.
One last thing: a reader above mentioned the mobile Pentium M. He is correct about it being very good; I have used that also and give it good marks. Lately it seems AMD is winning on the desktop side, but for a few years in mobile computing Intel has really been kicking butt. From the original M line to Centrino, they are really conquering the mobile space.
Itanium’s problem was this: too expensive, inaccessible, and lacking software. As an OEM, you can’t purchase it from any of the distributors, making it nothing more than an ultra-ultra-ultra-ultra, can’t-possibly-be-seen-in-the-marketplace, niche product.
The last nail in the coffin will be the Microsoft factor. Unless Intel comes to the party and is willing to pay the price to maintain the Itanium version, Microsoft will cut its losses and commit fully to Opteron.
Hit the nail on the head there. As for Microsoft, well, no doubt there is some form of agreement between the two, like the DEC agreement with MS regarding the Alpha. The only problem with the Alpha deal was that it only covered the OS and not MS apps such as Office, etc.
As it is at the moment, I can’t see Itanium being used by anyone but HP-UX/VMS/Linux users; no doubt there will be some Windows users out there, but they’ll be rare.