IBM announced on Monday a microchip for personal computers that will crunch data in chunks twice as big as the current standard and is expected by industry watchers (though not yet confirmed) to be used by Apple. Apple was not available for comment, and IBM declined to say which computer makers would use the chip, but its plans would mark a change for an industry that has emphasized the speed of a chip rather than its ability to handle heavy workloads. Read the report. Update: Read another report at ZDNews.
I’m not anti-PPC. I’m actually in the process of writing an article about it. Though I have to restart writing it (damn PSU on my Linux box), hopefully it will get posted here 🙂
While this processor is really cool, it doesn’t have the economies of scale to compete properly with Intel and AMD. That means it could go the way of the G4 and the G3… IBM shouldn’t depend AT ALL on Apple for its success.
> it doesn’t have the economics scale to compete properly with Intel and AMD.
I have to (unfortunately) agree. According to the article, the CPU will come out at the end of next year, and it will “only” run at 1.8 GHz. By that time, AMD and Intel will be sailing along at 4 GHz.
I also agree that MHz does not necessarily matter. But when it comes to marketing, it does. This is exactly what pushes AMD to name its CPUs “2800+” while they run at 2.2 GHz (I think).
I mean, one of the reasons Apple would want to go to a new architecture is that the G4’s design can’t scale past 1.4 or 1.5 GHz, and Motorola just isn’t delivering the G5. If their new IBM CPU has the same “marketing problem” (even if the CPU turns out to be faster than the 4 GHz Intel CPU of that time), it is nothing but the same problem all over again…
If IBM released this chip this year I would be impressed. A year from now, I am not. Also, how in the hell do they know that it’s going to run at 1.8 GHz if it’s still a whole year away? That doesn’t make any sense.
I just wish that I could build myself a kick-ass PowerPC system now for much the same price as an x86 system. It would be a modern version of the AIM alliance’s CHRP platform. Alas, this is just a wild dream.
Man, I remember the days when I would read MacWeek with dreams of one day having a system that could run both Mac OS and Windows NT (for the PowerPC). Of course, it was too good to be true.
I recently bought an iBook, and the claim that the current G3s and G4s are faster than x86 at the same clock rate is bullshit. My 700 MHz iBook is no faster than my 600 MHz Pentium III laptop. Apple needs to get some decent chips into their machines pronto. End of story. A year for this PowerPC 970 is too long.
> Also, how in the hell do they know that it’s going to run at 1.8 GHz if it’s still a whole year away?
The internal design of the CPU can reveal the speeds the chip can reach, at least roughly.
I am sure they already have running prototypes of the chip. The year ahead will be mostly bug fixing and long testing.
The problem is that when they say it is going to be really fast, they compare their 1.8 GHz with current CPUs. In a year, Intel and AMD will have much faster CPUs too.
As always, the CPU market is a “chase” market…
You doubt IBM’s ability to engineer a CPU?
They have seen the mess that Motorola has made of the G4, and they will not mess up this new architecture.
Besides, 64 bits running at 1.8 GHz!!!! That is a ton for that architecture… show me another 64-bit chip that moves that fast. Not to mention that at 64 bits, processor-intensive work is easier to do and faster. Plus, I will bet you that Apple has looked around, and they will be the only people out on the market with a consumer 64-bit computer.
> besides 64 bits running 1.8 ghz!!!! that is a ton for that archetecture…
Jeremy, no one doubts IBM’s ability to architect this CPU and make it a great one. The problem here is clearly *marketing*. No matter how great a CPU might be, if the MHz “number” is smaller than Intel’s by next year’s launch, Apple will still have a rough time pitching their CPU to non-geeks (aka non-OSNews readers). No matter if their CPU might actually be faster.
It sounds like this is going to be one expensive mofo. The last thing Apple needs in order to get more marketshare is to increase their prices.
Note that IBM has been rapidly embracing Linux as of late. I can see this as IBM’s way of pushing their own Linux efforts.
As for Apple, I wouldn’t worry too much about how they will market this chip. Yeah, Intel and AMD will be up around 3.5-4 GHz by then. But those will be 32-bit CPUs, and their 64-bit offerings will be server-only. Apple will be the only computer company with a 64-bit desktop system. That’s right, a desktop system. Imagine a 64-bit Final Cut Pro, iDVD, or any other Apple app. What would you buy: a 1.8 GHz 64-bit iMac or a 3.5 GHz Dell? Or how about a dual 64-bit PowerMac vs. a 4 GHz Dell? Yeah, the Apple might be a lot more expensive… but man, the performance would be insane!
> the problem here is clearly *marketing*. No matter how
> great a CPU might be, if the Mhz “number” is smaller
> than Intel’s by next year’s launch
I don’t know about that. For the past year or so, Apple has been something like three times behind in the “Hz race.” Has that stopped them from selling Macs? Not really. The kind of people who buy Macs (that is, the kind of person willing to spend more money on a technically slower machine) probably care very little about Hz. Either they buy it “because it’s a Mac” (meaning they like the OS/software) or because they know enough about processors to know that Hz is a non-issue. I just don’t think there are that many people honestly thinking, “Yeah, I’ll spend $2K on a Mac as soon as they get them hertz a little higher!”
According to my limited knowledge, at 64 bits the issue may or may not be speed, but rather how much work can be done in one 64-bit operation. Let’s say a 64-bit op can do the work of three instructions; that would be faster than a non-64-bit chip.
Although 64 bits are already here, I believe current software technology is only just preparing for this new way of executing instructions.
A 64-bit system isn’t necessarily any faster than your average P4; it can mainly address more RAM.
“It sounds like this is going to be one expensive mofo. The last thing Apple needs in order to get more marketshare is to increase their prices.”
Why would you say that…
Because the full-strength POWER4 is expensive?
The trimming of the full-scale POWER4 was simply to decrease cost. PPC chips are less expensive to produce than x86 ones. The GPUL is a derivative of the PPC.
If anything, the significant increase in speed over x86 offerings will simply justify Apple’s slight price premium (compare exact components to a PC and the premium is only “slight”).
Apple simply needs to maintain its current pricing structure (maybe even allow a slight increase), and Apple’s machines bearing this processor would be WELL worth the money.
> I just don’t think that there are that many people who are honestly thinking, “Yeah, I’ll spend $2K on a Mac as soon as they get them hertz a little higher!”
But there are a lot of people wanting Apple to get its scalability problems sorted out. There is nothing wrong with wanting a fast machine. And it would save an enormous amount for Apple, which has people working on optimization who should be working on features.
Well, Intel sits there and markets Hz. The average Joe doesn’t know what that means; they just know the bigger number is better. If Apple markets bits, the average Joe will still think bigger number is better. Then it just becomes a matter of saying Apple Hz do more, which is what they have been saying all along. No difficulty there.
Vince
Actually, computing at 64 bits does provide an improvement in terms of speed (especially with large processing loads). Since the processor can have larger registers (i.e., a larger word size, the amount of data it can actually process at a time), it CAN in fact yield faster processing times. The gain is variable, however, but it exists in some form whenever you run 64-bit code.
Isn’t the problem today that the Intel procs run at higher megahertz AND are faster? So when the new ones ARE faster at lower megahertz, it shouldn’t be the same problem as today.
Maybe they change the race to the bits and say 64 is faster than 32. Yeah, forget megahertz, bits matter 😉
Thoems
Depends on what you are actually using the computer for. For large number crunching, the conventional way is to break the numbers down into units small enough for a 32-bit processor to chew through. That not only makes programming harder; there is also only so long before the shortcut becomes too costly, and all the shortcuts in the world won’t change the fact that a definite need for 64-bit computing will arrive.
I’ve yet to see Joe Average applications requiring the number crunching that 64-bit computing provides. As for the speed, the majority of RISC-based processors are 64-bit, and as a result people assume that the 64 bits provide the speed, whereas it is actually the RISC that provides it. If you look at the average RISC and VLIW processor, they are shit-house when it comes to integer benchmarks; however, when it comes to FPU calculations, even a nice little beast like a POWER4 with 8MB of cache can *easily* beat so-called “faster” chips.
> Apple has been something like 3 times behind in the “Hz race.” Has that stopped them from selling Macs?
I’d suggest that perhaps it has. If not, it may in the future.
Apple, in order to be more successful (I’m not saying that they aren’t), needs to sell more units. Why? Because that is why they exist. It doesn’t matter if they ship 100 boxes a minute; they are a company that wants to make a profit. The more sales, the better.
Rating a computer in Hz has always been an evil thing. People easily remember the arguments about whether a 486 DX/4 100 was superior to a Pentium 75. Better yet, I remember trying to convince a PC Zealot that my 68030-based Atari Falcon 030 was more powerful than his 386DX40 of the same time.
“Yours only runs at 30 MHz!”
Of course, the beauty of the chip, coupled with the multiplexer, DSP etc, showed that they were on completely different ends of the MIPS scale!
I doubt that in the short term people will come to the realisation that you cannot compare MHz between different architectures.
What Apple et al. need to do is somehow find the marketing gurus to find a different way of pushing the new platforms. Even AMD needs to find a non-MHz way of pushing their products, so that we can get rid of this “2800+” nonsense that always reminds me of the old 6x86 and K5 marketing.
Stuff your cycles-per-second until it mega-hurtz. We need something with the simplicity of MIPS for rating products.
(At the same time, can we also get hard drive manufacturers to admit that a MegaByte is 1,048,576 bytes and not 1,000,000?)
Hmmm. Maybe I’ve been dreaming too much.
Eugenia, I agree that marketing is the thing that will tell the tale. I wonder, though, as Blackthought was suggesting, if there could be a marketing “shift” (if done right): instead of MHz/GHz, people start talking about 64-bit as opposed to 32-bit? Well, we’ll have to wait and see, but that would be an interesting development.
I would imagine these chips would start out in the Power Macs, of course. And, again, if somehow the focus could be shifted away from MHz to how much better the 64-bit Power Mac “handles” big graphics, big video, etc., that too would be very interesting to see. 2003 could be a big year!
> While this processor is really cool, it doesn’t have the economics scale to compete properly with Intel and AMD. That means, it would go the way of the G4, and the G3… IBM shouldn’t AT ALL depend on Apple for its success.
I’m not at all certain IBM is relying on Apple for its success. Consider this quote from the article:
Chekib Akrout, vice president of IBM microprocessor development, said big databases and the Internet challenged PCs: “This is the time to introduce a 64-bit machine capable of being used on a desktop,” he said in a telephone interview.
Note that he doesn’t state specifically that this is for a Mac, although you can logically infer that is one of the intended markets. His implication here is that the Intel-based machines are inadequate for modern applications. Notice he says “This is the time to introduce a 64-bit machine capable of being used on a desktop,” rather than simply a 64-bit processor. I strongly get the impression IBM has other ideas in mind for these chips beyond simply being a supplier for Apple.
Bruno, I agree, I would assume that to be true too.
Alternative systems have always been interesting to me, but for the most part Sun/SGI/HP/Compaq have always had a much higher cost of ownership and were pretty specialized.
IBM can’t sell x86 boxes with a large enough profit margin, but with their own fabbed chips they can make a desktop system with an acceptable margin. Open-source software makes this a viable option, allowing IBM to sell a desktop chip without needing MS support.
My only issue now is: do I want an IBM PPC system when they come out, or go with an Apple desktop with the new PPC from IBM?
By slimming down the chip so that they can make it cheap compared to the POWER4, they will probably lose performance. The question is how much, and in what types of applications. (Probably server stuff, mostly.)
One “problem” with 64-bit is caching: since you have to store 64-bit values, you can only store half as many (compared to 32-bit), so you need much larger caches to keep the same hit rate. But since they will be reducing the cache sizes to make it cheaper, they’ll lose some performance.
64-bit CPUs are only better in some applications: certainly scientific/HPC, and whenever you need more than 4GB of memory for a single program. That could add engineering, databases, and maybe some visual stuff(?). But your normal everyday apps won’t notice the 64-bitness so much as the overall performance, which will certainly be good, but for the most part it won’t have anything to do with being 64-bit.
This chip will be good, but it won’t be magical, only Alpha is magical ;-p
I remember reading before that the next generation of Windows from MS would run on multiple architectures. So with x86, x86-64, and Itanium already guaranteed, you might see Windows on a PPC again.
First of all, 64 bits is not magical, not at all. It won’t make your CPU magically faster; it will only let it process larger numbers by default and have a larger possible memory pointer (if I am not mistaken, most 64-bit CPUs don’t actually expose a full 64-bit virtual memory space). Other features might help, such as more registers, but that is something else. You can make a 4-bit CPU with 1024 hardware registers, after all.
And not only that: mistakes such as missing registers have been attacked by Intel and friends and compensated for. The P4 is a far cry from the 80386, after all (except for the legacies, of course). But I will leave it to the CPU experts to explain all of this in detail.
And now to marketing. How many FPS will Doom 3 pull on CPU x? You can scream until you faint that “MHz doesn’t matter,” but AMD CPUs are (roughly) twice as fast as when I got my 1.4 GHz CPU. And they will give a much better FPS in your favorite game, and crunch movies and music faster. All this can be measured, and as Lightwave has proven by now, optimize for P4/SSE2 and you can obtain huge gains where it matters. I hardly think Apple would dare to put one of their machines against a dual P4 Xeon 2.8 GHz running an SSE2-optimized Photoshop.
I am sure marketing will do its best to make any future Apple computers look faster than they are (something Mac OS X does as well, but in a good sense ;)), but I like to think we are better informed than to believe them. The only reason to get a Mac is if you like Mac OS X and Apple hardware in general. I hardly think they can keep up with the insane race between Intel and AMD.
Something Apple would benefit from would be a new CPU instruction set that anyone could license and produce CPUs for, as that would open up a market where new companies could release fast CPUs competing for Apple’s favor (and for Linux machines as well, I guess). Remember where the fastest supercomputer in the world lives? Yup, nowhere near Intel, IBM, or AMD.
If I were Apple or IBM or whoever when I come to debut this 64bit desktop, I’d simply boast that it was 64bits! Suddenly that makes the 32bits of competitors seem inadequate. It is all in the marketing message.
Yes, you’re right. It all depends on how you sell things to people.
I can see the marketing headline:
<joke>
“64 Bits are better than 32 Bits!”
and combined with:
“Two CPUs are better than one!”
you get 128 Bits of CPU power.
</joke>
But the fact is that IBM’s new PPC CPU is only vaporware at the moment. We have to wait and see what real performance that CPU delivers, especially with existing 32-bit PPC apps. I believe this is their biggest issue: they must ensure that current 32-bit apps run at adequate performance, because it will take some time until all apps are 64-bit.
If they don’t manage this issue, the new PPC CPU is in danger of sharing the same fate as Intel’s PPro.
Ralf.
Jeremy: you doubt IBMs ability to engineer a CPU?
Well, they screwed up the G3 (or rather, Apple screwed up the G3, but IBM allowed them to do so).
Jeremy: besides 64 bits running 1.8 ghz!!!! that is a ton for that archetecture…show me another 64 bit chip that moves that fast.
And I’ll show you various prototypes of Clawhammer and Sledgehammer…
Please. By this time next year, it would already be beaten, or close to being beaten.
Jeremy: plus, I will bet you that Apple has looked around and they will be the only people out on the market with a consumer 64 bit Computer.
Compaq/HP plans to come out with desktop stuff using Clawhammer by H1 2003, and Apple…? Besides, does 64-bit even matter? I mean, consumer apps haven’t come close to reaching the limits of 32-bit; what use can it bring?
Anonymous: It sounds like this is going to be one expensive mofo. The last thing Apple needs in order to get more marketshare is to increase thier prices.
Right… (the most stupid comment on OSNews this year, so far). Besides, Apple isn’t bothered about market share; they are bothered about profit. If gaining market share means losing profit, they won’t do it. But if they can expand their market share while increasing their profit… why, it would be a nice by-product for them.
Blackthought: That IBM has been rapidly embracing LINUX as of late. I can see this as IBM’s way of pushing their own LINUX efforts.
IBM is pushing Linux in the server market. We have yet to see whether they will push Linux on the desktop. I think the best first step would be actually writing drivers for their PCs.
*sigh* another Machead that seems to forget AMD exists…
Antarius: (At the same time, can we also get hard drive manufacturers to admit that a MegaByte is 1,048,576 bytes and not 1,000,000?)
Not a chance in the world. Philosophically, I think 1,000,000 is better than 1,048,576, because mega in the metric system stands for 10^6.
Bruno the Arrogant: Note that he doesn’t state specificly that this is for a Mac, although you can logically infer that is one of the intended markets.
Well, tight-lipped as IBM is being, one could already conclude that Apple has a part in it. However, while the prospect of an IBM PowerPC-based desktop is a nice idea, it still doesn’t fit the economies of scale. Why? The majority of PC sales come from white boxes, and the PC market is so commoditized that it is hard to penetrate with something like a Mac, only with an IBM logo and Linux.
IBM must try to make this a commodity, like the x86 processor, instead of locking it down to 2-3 OEMs.
Evan: OS software allows this to be a viable option, allowing IBM to not need MS support to sell a desktop chip.
IBM never needed Microsoft support for their processor. They need IBM support. Before you laugh, think about it: if IBM hadn’t been so shortsighted, OS/2 would be our staple instead of Windows.
> If I were Apple or IBM or whoever when I come to debut this 64bit desktop, I’d simply boast that it was 64bits! Suddenly that makes the 32bits of competitors seem inadequate. It is all in the marketing message.
Sure, it would help in terms of marketing. But AMD will surely advertise 64-bit capability (like, obviously) in their future processors, so there goes the thunder. Unless Apple manages to burn down all the AMD labs and once and for all get rid of all the designs relating to Hammer… maybe, just maybe…
Ralf.: If the don’t manage this issue the new PPC CPU is in danger to share the same fate like Intel’s PPro.
Well, the PPro was kind of a success (not to the magnitude of other processors made by Intel). Heck, it ran the Guinness Records’ fastest computer for two years in a row… I think.
Maybe I’m reading something wrong or missing something between the lines, but didn’t the article say “second half of next year”? How does that equate to “end of next year”?
From C|Net:
“…pricing and other commercial details won’t emerge until the chip ships in the second half of next year.”
From Forbes/Reuters:
“ The chip will be available in the second half of 2003 and be built in IBM’s East Fishkill, New York,”
Am I missing something?
-Spider
A lot of people are worried about how Apple would market a 1.8GHz chip against the x86 offerings, but they could simply say 64 is 2 x 32 so you have to double the MHz. Specious reasoning, but so is the whole MHz thing in the first place.
Twice the SIMD goodness of the comparable 32-bit MMX/SSE (Hammer sports this too). Games, video editing, CD ripping, etc. all get performance improvements with SIMD, and 64-bit registers with larger SIMD registers can help. If Apple is positioning itself to own multimedia content editing, then having large vector engines with a larger addressable address space can’t hurt.
I would not be surprised to see IBM Netfinity PPC-type machines running Linux. They have everything in place already with the struggling PC division; they just need to manufacture a PPC motherboard and load Linux instead of M$.
I have a limited understanding of CPUs and their inner workings, but from what I read, this will be a killer CPU. If you look at Apple’s strongest markets (desktop publishing, film editing, and now they are creeping into biotech), the extra horsepower and memory handling will be greatly welcome. Imagine a 64-bit version of Final Cut Pro or Shake. That would wipe the floor with anything Avid or Adobe could come up with, especially if reports are true and it can move 6.4 GB per second. This is straight from the Micro-Proc website…
IBM Microelectronics IBM is disclosing the technical details of a new 64-bit PowerPC microprocessor designed for desktops and entry-level servers. Based on the award winning Power4 design, this processor is an 8-way superscalar design that fully supports Symmetric MultiProcessing. The processor is further enhanced by a vector processing unit implementing over 160 specialized vector instructions and implements a system interface capable of up to 6.4GB/s.
Even at half the hertz of a P4, a dual 1.8 GHz PowerMac would kick ass.
A few points:
1. 64-bit registers only help applications already doing 64-bit addressing and/or 64-bit math.
2. 64-bit does not mean twice the speed. If anything, it means less speed. More transistors == more heat and cost == less clock speed for the buck.
Would anyone else care to contribute a few things to stem the growing tide of wild speculation here?
Of course the IBM part keeps you away from “trusted” computing at this point (IBM and Apple have not signed on to it as Intel and Microsoft have)
http://www.linuxdevices.com/articles/AT7225637142.html
http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html
Apple would certainly profit from the IBM PowerPC 970. It is the only option Apple has for increased performance so far. There is no G5 or other Motorola chip that would offer an alternative.
IBM will offer a basic workstation platform (high margin), and give the customer the choice of 64 bit Linux or 64 bit Microsoft Longhorn.
Apple will be able to offer the same chip and general infrastructure to run OS X.
It presents economies of scale on both sides.
What Apple will do for their notebook computers remains to be seen.
#p
“And it will save an enormous amount for Apple, which has people working on optimization who should be on Features.”
Apple should put ALL their people on optimization. Whatever else it is Mac OSX is S-L-O-W.
Unless you are living under a rock or something: IBM is the first computer maker to integrate TCPA support into their computers. And to advertise it.
Where are you from The Prophet?
The PC market is made up mostly of white-box makers… IBM isn’t a white-box maker. They certainly can’t/won’t compete directly with Dell and the likes of them. So there goes your economy of scale, buddy.
Plus, there isn’t any proof that Longhorn will be ported to this processor, and I don’t really see the point (no apps).
Why does the Mac DOMINATE the graphics world? Because in 1984, Apple’s Mac had 24-bit linear addressing while the PC was swapping 16-bit pages. This larger address space allowed bigger programs to be written more easily. Bill Gates, on the other hand, was quoted in 1984 as saying “640K ought to be enough for anybody.” Remember RAM cram on the PC? Because Apple had 24 bits, it could do things in graphics that were too hard to do in the kludgey PC memory architecture. I bought my first Mac for the address space; I didn’t even know what a mouse or icon was for. It took the PC until 1995 to catch up, with Win 95.
Now Apple is moving to 64-bit processors with efficient SMP support. The multimedia and 3D worlds NEED this kind of power. Remember that to make 3D movies, Pixar and other movie makers need render farms full of tens or hundreds of computers, with each computer spending hours rendering just one frame. Apple is making the moves necessary to compete in this and other visualization markets.
If you’re like me and want to do hard core 3D computer work in the next few years, you’ll want to make Apple’s 64 bit SMP OSX/UNIX based systems your secret advantage.
Rajan, you post a lot of intelligent posts, but I think you dropped the ball on this one.
What does it take for IBM to have economies of scale? Economies of scale have nothing to do with matching the other guy’s volume. It means the volume at which the unit-price decrease becomes negligible.
Besides, they could fabricate the chip in any plant; they could go to Motorola or anyone who has the latest fabrication technology.
Forget about economies of scale. What is important in this case is the design, and IBM has more than enough know-how and R&D money to drive this chip. They are no Motorola, who cannot afford to drive R&D for this kind of chip.
Just a small note: everyone will see a performance improvement with the move to 64 bits. Native 64-bit math is about 4x faster than emulated 64-bit math on a 32-bit system. The new file systems (UDF, XFS, and NTFS) are all 64-bit to handle larger hard drive demands; thus, all disk addressing is done in 64 bits, and there should be a small improvement in disk I/O.
IBM currently sells AIX workstations (desktops). These are used in conjunction with the pSeries (old RS/6000) computers and are designed for users. IBM is most likely going to use the new chip in their workstations for several reasons: 1) they included the SIMD extensions (these aren’t needed in servers), and 2) it replaces the PPC 604e chips they currently use in them.
People keep talking about Hammer being 64-bit… While this is correct, the only people who have pledged to support it are the Linux community. MS is going to live with the P4 for its 32-bit systems and IA-64 for its 64-bit systems. Thus all those Hammer computers will have to be Linux systems!
Intel has already hit the MHz marketing myth: their new laptop chipset runs at a lower MHz than the P4 and can outperform it! I think the P4 would be a nice chip if Intel cleaned up the design, removed about half the stages (20 is just too many), and brought the MHz down to something reasonable (1 GHz would be about right). They currently generate too much heat, and the high number of stages keeps the chip from being good at multi-tasking (it currently needs about 800 clocks to do a task switch, which means two task switches between each job switch — once to the system and once to the new job — at 32 times per second; add interrupts for device I/O, and by my estimate about 1% of the CPU is consumed in chip overhead).
I’m afraid the IBM CPU is going to be too little, too late, and way too expensive.
Economy of scale will make these CPUs not cost-effective at all, especially when AMD will be shipping millions of 64-bit units while IBM/Apple will be shipping tens of thousands of theirs.
Also note that Apple hasn’t done a very good job of making SMP systems; they suffer the same problems Intel has with the shared-bus architecture. Are there any plans for Apple/IBM to use HyperTransport?
And how much money do you want to bet that you’ll continue to never see Apple do benchies against AMD stuff, but (as usual) only against artificially crippled Intel-based systems?
I would certainly get on the 64-bit architecture wagon if and only if it is mated with a high-performance GPU. Considering the wide bit-width and the super-fast blitting and matrix ops, a new era of graphics and sound computing is about to explode.
Well, the price should be competitive enough to be considered consumer-level, though. Putting these systems at a high price point will only delay it.
Most consoles (PS2, Dreamcast, Cube, etc.) are quite aggressive in building on this; the PCs, on the other hand, are quite slow to pick up the ball.
The chip has only been announced, and I’m quite sure it is going to be expensive and *perhaps* not competitive speed-wise with AMD or Intel processors. Assuming this is the next processor Apple will be using, Apple must still remain competitive (more or less) with other Windows computers. Granted, Macs’ base price has always been higher, arguably for less processing power, but Apple still has to price the computers within a range that some percentage of middle-income people can afford.
At the risk of sounding unpopular (or against the general consensus here), I really don’t care if it is more expensive and/or slower than the Intel/AMD computers. I’m still using a PIII 800 and I’m happy with it. I don’t need or want a P4 2.x + processor or an AMD 1800+. I just want a computer that does the job.
Given that, if Apple comes out with a computer that is simply too expensive, too bad for them; I won’t buy it. If it offers competitive pricing (hey, keep in perspective that this is Apple here), then I might be interested, as long as they optimise OS X and the apps that could use 64-bit or SMP processing.
Apple wouldn’t have to prove that a machine using this processor is faster, but rather that it makes more tasks reasonably possible on a desktop.
“No, you don’t have to send this off to a 64-bit server cluster to be rendered whenever the other guys’ project is done. Instead you can just do it on your PowerMac.”
Being able to natively do 64-bit math might not seem terribly important, but it’ll be a blessing for large image manipulations and for heavy computations on complex data. Remember, folks: as many pretty icons as you stick on the screen, it’s all still math, and being able to use really big numbers is an advantage.
> Actually, computing at 64-bits does provide an improvement in terms of speed (especially with large processing loads). Since the processor can have larger registers (ie: the amount of data it can actually process at a time (word size)) it CAN in fact yield faster processing times.
You’ve never programmed a CPU, have you? Having 64-bit registers doesn’t let you do twice as much 32-bit processing at the same time (except in special cases like SIMD, but many 32-bit CPUs already have that, too).
For large number crunching, the conventional approach is to break the numbers down into units small enough for a 32-bit processor to chew through. That not only makes programming harder, but the shortcuts only carry you so far; sooner or later the definite need for native 64-bit computing will arrive.
No. There are very few occasions where you need 64-bit integer computing. When you do “large number crunching”, in general you also need your numbers to be floating-point. 32-bit processors have had 64-bit floating-point registers for ages, so 64-bit integers bring you no advantage here.
A 64-bit CPU is merely about addressing much more memory in a simple way (i.e. without special paging tricks). That’s all.
It is actually the RISC that provides it. If you look at the average RISC or VLIW processor, it is shit-house when it comes to integer benchmarks; however, when it comes to FPU calculations, even a nice little beast like a POWER4 with 8MB of cache can *easily* beat so-called “faster” chips.
Again, a naive and false assumption. If RISC chips are generally faster at floating-point, it’s not because they are RISC, but because potential customers of these chips are interested in fast FPUs. The companies marketing these CPUs are competing for the scientific market, where heavy mathematical calculations are common. Intel’s x86, which happens to be CISC, has a historically weak FPU (especially from the instruction-set point of view – crappy stack machine) because floating-point was not an important target for this processor, not because it has a CISC architecture.
RISC is not ontologically faster than CISC. It’s just a different architecture. Also, anyone who knows about old RISC CPUs will tell you that today’s so-called RISC CPUs don’t bear much resemblance to the former. Their instruction set is in fact hardly “reduced”, even compared to x86 CPUs.
64-bit computing.
The article was published in Forbes – a magazine oriented toward investors. The author clearly doesn’t know the sh*t he is talking about –
a microchip for personal computers that will crunch data in chunks twice as big as the current standard
And everybody jumps in. Let’s imagine how the article was made – a financial analyst went to IBM and talked to executives, who tried to explain to him what a bit is and what a byte is; whatever he grasped, he wrote and published. As many of you know the speed of the publishing process, this article was probably written a month ago – before we had the similar news here on OSNews.
So it’s not news for us – it’s news for mutual fund managers. They need to decide whether IBM will grow faster because of this new market opportunity (a chip for Apple) or not.
My experience with articles in Forbes is the same as with other financial advisors – they hype the news because they want to dump the stock.
It has absolutely nothing to do with technology.
The whole question “To bit 64 or not to bit?” is way more complicated than it was discussed here – a modern computer is full of CPUs, ranging from the 8-bit controller in the keyboard to the 128-bit (or 256-bit?) one in the graphics card. It could have a 24-bit CPU on a SCSI card and a 48-bit CPU in some FireWire adapter.
Application-wise it’s also varied, and “number crunching” is the last point here. Your favorite macro virus works with 8-bit data (text), network streams are 16-bit, graphics are 24-bit (in MS Windows and X11) or 32-bit (in BeOS, not sure about Mac OS), and databases can use anything you want them to use.
How a desktop system would benefit from a 64-bit CPU – hell knows. After all, AMD chips AFAIK are 64-bit RISC CPUs at the core with x86 microcode on top. So they look like 32-bit CPUs although they are not.
I’ve read many posts as to why a 64-bit CPU would not be as “fast” as, let’s say, a P4 at about 3.5-4 GHz, and that a 64-bit system will only benefit programs that demand a lot of memory. Isn’t that what Apple is all about? Aren’t desktop publishing and video editing the markets that really need that 64-bit power? What about 3D graphics and biotech? If 64 bits weren’t anything to really jump for, then why have them at all?
It seems to me that this is a perfect fit for Apple. I’m sure Apple programmers are hard at work converting programs such as Shake, Final Cut, DVD Studio Pro, and others to take full advantage of this.
Like I said, a dual 1.8 GHz 64-bit PowerMac would really kick booty.
The average iMac owner might not care, but the pros will.
2Ghz, 2x32bit, 4 processors on the core..
Let’s just call it a PC stomping 4X4.
I can see an animated monster truck mobile rack server now.
One thing that all of you 1.8GHz vs. ~4GHz people need to remember is that you are comparing an apple to an orange. You need to compare processor speeds against the Itanium 2, UltraSPARC, MIPS and such, because they are 64-bit processors, not 32-bit processors like the P4.
When you compare apples to apples, this new chip is much faster than any of its peers.
Just my $0.02
With that backdrop, we peeked into IBM’s press release for the new processor. In addition to the 0.13-micron circuitry, which makes the processor smaller and cooler, IBM will be introducing the processor with a 900 MHz bus speed between the RAM and the processor. That leapfrogs the 400 MHz bus speeds being used on some PC motherboards. IBM says that those speeds allow data throughput of up to 6.4 Gigabytes per second. The company also says that the PowerPC 970 will support SMP, or multiple processors working together, something Apple has taken advantage of in its server and PowerMac lines with the G4. From IBM’s press release:
IBM today announced a newly-developed, high-performance PowerPC microprocessor for use in a variety of applications, including desktops, workstations, servers and communications products.
The new chip, called the IBM PowerPC 970, is derived from IBM’s award-winning POWER4 server processor to provide high performance and additional function for users. As the first in a new family of high-end PowerPC processors, the chip is designed for initial speeds of up to 1.8 gigahertz, manipulating data in larger, 64-bit chunks and accelerating compute-intensive workloads like multimedia and graphics through specialized circuitry known as a single instruction multiple data (SIMD) unit.
IBM plans to build the chip in its new state-of-the-art 300mm manufacturing facility here using leading-edge manufacturing technologies. IBM plans to pack performance and new features into the chip using ultra-thin 0.13-micron circuitry (nearly 800 times thinner than a human hair), constructed of copper wiring and about 52 million transistors based on IBM’s efficient silicon-on-insulator (SOI) technology. Additional details on the PowerPC 970 are to be disclosed by IBM this week in a paper presented at Microprocessor Forum, a chip design conference organized by industry analyst firm In-Stat/MDR.
“IBM’s new PowerPC 970 64-bit chip is all about bringing high-end server processing power to the desktop, low-end server and pervasive space,” said Michel Mayer, general manager, IBM Microelectronics Division. “IBM is committed to helping more customers put our expertise in advanced chip design and manufacturing technology to work for them.”
The chip incorporates an innovative communications link, or “bus,” specially developed to speed information between the processor and memory. Running at a speed of up to 900 megahertz, the bus can deliver information to the processor at up to 6.4 gigabytes per second, to help ensure that the high-performance processor is fed data at sufficient speeds.
While supporting 64-bit computing for emerging applications, the PowerPC 970 also provides native support for traditional 32-bit applications, which can help preserve users’ and developers’ software investments. The design also supports symmetric multi-processing (SMP), allowing systems to be created that link multiple processors to work in tandem for additional processing power.
IBM plans to make the PowerPC 970 chip available next year.
The PowerPC 970 can process 8 instructions per cycle vs. 3 for the PowerPC G4. It is also optimized for SMP, can address massive amounts of memory and has a very fast system bus. Starting at 1.8 GHz, it will be very fast and powerful.
The Intel and AMD 64-bit chips are slow in comparison (in terms of clock speed and in every other way). Intel’s cannot reach 1 GHz, and it is not backwards compatible with 32-bit apps.
> Just a small note. Everyone will see a preformance improvement with the move to 64bits.
> 64bit math is about 4x faster than emulated 64bit math on a 32bit system. All the new
> file systems (UDF, XFS, and NTFS) are all 64bits to handle the larger hard drive demands;
> thus, all disk addressing is done in 64bits and there should be a small improvement in
> the disk I/O.
I do not understand the logic. Somehow applications which have to be run on a 64-bit CPU with much lower clock speed are supposed to run faster because of a relatively small decrease of execution time in an area where they typically spend little time working? For servers and other computers this is fine, but we are speculating about desktop machines and desktop applications.
> People keep talking about hammer being 64bits… While this is correct, the only people
> who have pledged to support this is the Linux community. MS is going to live with the
> P4 for its 32bit systems and IA-64 for its 64bit systems. Thus all those hammer
> computers will have to be Linux systems!
Yes, that’s why Microsoft is porting Windows XP and Windows 2000 Server to Hammer:
http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_46…
> Intel has already hit the MHz marketing myth. Their new laptop chip set runs at a lower MHz
> than the P4 and can out perform it!
What is it with you idiots and this “megahertz myth” stuff? Both clock speed and other factors play a role in the execution speed. Clock speed is not irrelevant. Cache is not irrelevant. IPC is not irrelevant. All are essential parts of estimating the execution speed of a chip.
> I think the P4 would be a nice chip if Intel would clean up the design, remove about half
> the stages (20 is just too many), and reduce the MHz down to something reasonable (1GHz
> would be about right).
I am glad that they do not listen to you, or we would all be using Mac-like (read: slow) computers.
> They currently generate too much heat and the high number of stages keeps it from being a
> good multi-tasking chip (they currently need about 800 clocks to do a task switch, which
> means 2 task switches between each job switch {once to system & once to new job} at 32 times
> per second, plus interrupts for device IO, means that about 1% CPU is consumed in chip overhead).
And that is worth reducing the speed by about 66%? Hello? If you want a slower chip with “efficiency bragging rights” then buy an Apple.
> Being able to natively do 64-bit math might not seem terribly important, but it’ll be a blessing
> for large image manipulations, and for doing heavy computations on complex data. Remember folks,
> as many pretty icons as you stick on the screen, it’s all still math, and being able to use
> really big numbers is an advantage.
I would be more than a little surprised if any of the eye candy that Mac OS X draws actually uses 64-bit calculations. Most image components are still 8-bit, and that is about all the human eye can discern. Only people who professionally manipulate images need more than 8 bits per component. But of course that is not typical desktop stuff; that is for workstations.
> Intel’s x86, which happens to be CISC, has a historically weak FPU (especially on the
> instruction-set point of view – crappy stack machine) just because floating-point was not
> an important target for this processor, not because it has a CISC architecture.
And to add to your debunking session, AMD’s Athlon is a CISC architecture CPU with a very strong FPU. The only reason that Intel’s P4 has a weaker FPU is that the CPU is designed so that applications which make heavy use of floating-point calculations can do them in parallel with SSE2.
> I’ve read many post as to why a 64 bit CPU would not be as “fast” as lets say a P4 at about
> 3.5-4 Ghz, and that a 64 bit system will only benefit programs that demand a lot of memory.
> Isn’t that what Apple is all about. Isn’t the desktop publishing/ video editing markets the
> ones that really need that 64 bit power?
Show me an Apple desktop that needs more than 4 GB of RAM.
> When you compare apples to apples, this new chip is much faster than any of its peers.
Hmm…yes, perhaps. But then there is the megahertz myth, no? 😉
Actually the discussion focused around the use of such chips in Macs whose peers are the soon-to-be 4 GHz chips you mentioned.
Apple (read: Steve Jobs) doesn’t want to do anything for the short term besides focus on converting the existing Macintosh userbase to OS X.
If Apple does decide to switch to a non-Motorola processor, it’s a ways down the road, and my guess would be perhaps even longer than it will take for IBM to get the chip ready for production.
I don’t think we’ll even see Macs with a 1.8GHz GPUL… perhaps ones clocked higher after Apple has decided it has a sufficient userbase for which to market the chip.
Apple said it would focus on upgrading the bus speed, and apparently they were right. Everyone from gamers to graphic artists will enjoy this. Although the clock speed will be lower, and there will be comparable bus speeds elsewhere via HyperTransport or whatnot, this new desktop processor architecture will be well received: by AIX users who want cheaper workstations, by Apple users who want better memory bandwidth and processor speed, and by Linux users just because a new high-end system they can get (comparatively) cheaply is just that cool.
Hell, MS might even get windows on these systems.
Even with the lower MHz, Apple will probably stick with dual processors in PowerMacs, so I’ll be very happy. I do wonder how Apple is going to package iMacs and iBooks now, though. Maybe the Moto processor? Faster G3s from IBM (the IBM G3s get up to 1.4GHz now)? Or just put DDR G4s in them?
Oh well, just gotta wait and see.
> The PowerPC 970 can process 8 instructions per cycle vs. 3 for the PowerPC G4.
> It is also optimized for SMP, can address massive amounts of memory and
> has a very fast system bus. Starting 1.8 Ghz, it will be very fast and powerful.
Where did you get the average of 8 IPC for the PowerPC 970?
The Mac rumor sites often speak of HyperTransport but, well, they are rumor sites 🙂
I sure hope discussions like this don’t keep up for the whole year prior to the release of this PPC chip.
I’ve already drawn my conclusions. Even if this new PPC chip from IBM doesn’t compete with x86 CPUs a year from now, it will still be a step up from the current G4, and that is a real improvement (new PowerMacs: 133 MHz bus -> 167 MHz bus – you call that an improvement? please).
Andrew: What does it take for IBM to have economies of scale? Economies of scale have nothing to do with matching the other guy’s volume. It means the volume at which further unit-price decreases become negligible.
Actually, you completely missed my point. Low volume = less economic incentive to push their processors the way Intel and AMD have to. Why are Intel and AMD pushing the speed limit? Well, whenever one company starts to lead the speed war, as AMD did once and Intel does now, it starts to gain market share, and therefore $$$.
Now, if Apple makes Macs and charges a premium price for them, and IBM makes Linux-based desktops and charges the way it charges for its current PCs, I don’t really see them standing a chance of getting a real market.
Well, if Eugenia posts it, you’ll hopefully see an article on why IBM should ditch any plans to use PPC chips in its own desktops, and instead push the chip to compete directly with Intel and AMD.
Andrew: Besides they could fabricate the chip in any plant, they could go to Motorola or anyone who has the latest fabricating technology.
Chip fabrication has NOTHING to do with this. Does Intel have any incentive to come up with its 3GHz P4, and AMD with its 3000+ Athlon XP? Yes! Why? They stand to gain something. IBM, on the other hand, under current conditions has no reason to come up with 2GHz or 2.2GHz versions of this processor right after it is released.
Besides, IBM already has the most advanced and cost-effective fabrication plant, but that’s beside the point, because the price difference between PPC and x86 is so small. x86 has what it takes to keep pushing itself further and further. PPC doesn’t.
Joe Powers: I think the P4 would be a nice chip if Intel would clean up the design, remove about half the stages (20 is just too many), and reduce the MHz down to something reasonable (1GHz would be about right).
Funny. Do you even realize that it is the MHz that gives the P4 its speed? (The P4 has a low IPC.) And even funnier, I don’t see you asking AMD to shorten its pipeline.
Besides, people buying processors couldn’t care less about the chip design, which is also why the monolithic kernel is better off than the microkernel.
Brian: I’m afraid the IBM CPU is going to be too little, too late and way too expensive.
Wow, you are so keen on pricing when information about pricing isn’t even close to coming out yet. Besides, traditionally, PPC processors are cheaper to manufacture. In fact, if it weren’t for the deals with Apple, I bet the G3 and G4 would be cheaper.
And I think you got your whole point wrong. IBM can, and probably will, undercut x86 processors on price. But economies of scale don’t give IBM much incentive to make something faster.
Jim Harner: Intel can not reach 1 GHz and it is not backwards compatible with 32-bits apps.
This is because Intel didn’t make any 64-bit consumer processor, and the market it is targeting has little to no legacy 32-bit apps.
Evan: Apple would be focus on upgrading the bus speed, apparently they were right. Gamers to graphic artists will enjoy this.
I don’t see how gamers can enjoy this, unless this trickles down to consoles….
Jay: The Mac rumor sites often speak of HyperTransport but, well, they are rumor sites 🙂
Well, ironically, this rumour makes more sense than the G5 stories the press (read: mostly osOpinion) spews out.
—-
You know what? I just realized something. IBM would be using a 0.13-micron SOI process for a chip being introduced around the time AMD and Intel are moving to 0.09 micron…
Don’t show off your ignorance.
This monster chip is a killer for desktop use.
It has:
– A vector processing unit, called the Velocity Engine by Apple. That means it can greatly accelerate graphics, video, audio, multimedia in general. It has a set of 160 special instructions for that purpose. It runs 8 instructions per clock cycle, versus the G4 with only 4 instructions/clock.
– It runs present 32-bit software natively (with no use of an emulation mode).
– It uses SOI (silicon-on-insulator) almost 2 years ahead of Intel, which hasn’t yet implemented the technology. SOI significantly reduces power consumption, allowing the IBM chip to increase its speed by 30%.
– It supports a cache-coherent, speedy bus running at 900MHz between processor and memory.
– It supports 16-way symmetric multiprocessing (with the full five-state MERSI cache protocol).
– It will have very low power consumption at 130 nanometers.
No matter what Intel and AMD deliver in 2003, the 64-bit PowerPC 970 for the desktop, running at 1.8GHz, will surprise many, many people in the computer industry.
IBM’s brand-new Fishkill chip plant in upstate NY is ready to deliver on that promise.
This is not Motorola, which has greatly reduced its PowerPC resources and is still using its old-fashioned chip plants.
Show me an Apple desktop that needs more than 4 GB of RAM.
640K ought to be enough for anyone. –Microsoft
Just because YOU don’t use it doesn’t mean someone else won’t. Apple sells servers, don’t they? Pretty dumb statement you made now, isn’t it? After all, what server would need more than 4GB of RAM? Nice troll, dumbass.
I see the arguments about speed and MHz/GHz, but IMHO it won’t matter. PCs, Macs, whatever: they are all just fast, and with their speed accelerating it doesn’t matter what the actual rating is; they’ll just be amazingly fast and able to do everything the user wants. A powerful bus and wider addressing and registers are more future-proof than clocking up an older architecture. Saying a CPU is 1 GHz, 20 GHz or 50 GHz will soon be pretty meaningless; they’ll all just be damn fast 😉
> Just because YOU don’t use it doesnt mean someone else won’t. Apple sells
> servers dont they? Pretty dumb statement you made now isnt it, after all
> what server would need more than 4GB of ram? Nice troll dumbass.
Troll! I said *desktop*, not server, and I am talking about the near future (2-3 years). I did not say *never*.
Some friendly advice: do not try to insult my intelligence until you can understand what I say.
> A powerful bus and wider addressing and registers are more future proof
> than clocking up an older architecture.
But bumping up the bus speed only improves performance when the processor is choked for bandwidth. Anything above the amount of bandwidth the processor needs gives extremely small returns. And that amount depends on both the processor’s design and its clock speed.
If you were to underclock a 3.5 GHz P4 to 2.8 GHz while still using the 666 MHz bus speed, I doubt that it would outperform a stock 2.8 GHz chip at 533 MHz. Higher clocked chips require more bus bandwidth.
Also, the speed of the RAM plays a factor. The 533 MHz bus does not really outperform the 400 MHz bus when both use PC800 RDRAM. However, when the 533 MHz bus is paired with RAM running at the same speed (PC1066), then it pulls ahead of the older 400 MHz bus.
I wonder what kind of memory Apple would use with a 900 MHz bus?
> Don’t try to show up your ignorance.
Huh? I simply asked where people got the information about the 8 IPC. Is it rumor? Is it part of an official press release? How can I verify it?
The PowerPC 970 processor has been known about for months. You can read speculation about Apple using IBM POWER4-series or POWER4-series derivatives as far back as 3 years ago. And these discussions are on the beaten track, now truly barren and devoid of fauna after months of discussion across the internet.
I think we all agree that the Macintosh platform has a performance issue, as much as proponents of the platform (like myself) would prefer it wasn’t so. I think we all agree that a new PowerPC-series CPU, which is being designed by IBM for workstation & server use (rather than embedded systems, like Motorola’s), is the logical choice for future Apple systems. I think we all agree that, in concert with other technologies (HyperTransport/DDR/whatever), Apple could put the Macintosh back onto the performance map in many areas. And I think we all agree that this is all just speculation for the moment, with no official announcements.
Attention is now focused on what Apple will introduce at Macworld San Francisco. Jobs’ keynote presentation is 84 days away (January 7th, 2003). Let the drooling commence…
Kamlion: It has for that purpose 160 special instruction set.
Actually, it is 162… the last I checked.
matt: Just because YOU don’t use it doesnt mean someone else won’t.
No, the point was (I think) that 64-bit isn’t needed *now* for Apple. Most Macs have less than 256MB of RAM; wow, I can really see why they need 64-bit. Sure, RAM needs will increase, but I wouldn’t expect Apple to ship machines with 4GB of RAM tomorrow.
Phyax: Attention is now focused on what Apple will introduce at Macworld San Francisco. Jobs’ keynote presentation is 84 days away (January 7th, 2003). Let the drooling commence…
My guess is that at MWSF, you won’t hear a hoot about the 970. Why? The gap between the announcement and when the processor goes into mass production is too long. And the time before Apple actually releases a PowerMac using this processor will be longer still.
What you will be hearing about is some new gee-whiz addition to the iBook and TiBook (looking at the timeframes of the last updates to them) and Panther (OS X 10.3).
So hold your drool, you are ruining the carpet.
eWeek had a piece on the IBM/Apple CPU:
“While GPUL is also designed to support eight-processor systems running the AIX OS, sources said Apple is focusing on testing the chip on dual-processor, Mac OS X-based systems. Apple and IBM are also tailoring the chip for a new high-frequency, point-to-point Mac bus dubbed ApplePI, short for Apple Processor Interconnect. According to sources, the companies describe ApplePI as “a replacement for the MaxBus used on current Apple systems. ApplePI is used to connect high-performance PowerPC processors to memory and high-speed I/O devices.”
http://www.eweek.com/print_article/0,3668,a=31212,00.asp
Meanwhile, an older Register piece on Motorola’s G4 roadmap spoke about a 500MHz “RapidIO” bus.
http://www.theregister.co.uk/content/3/24018.html
Rajan, your post includes statements like “they don’t have the economies of scale”. The term economies of scale implies something very specific.
Economies of scale describes the production volume at which the unit cost of production reaches a point where further increases in volume see a negligible decrease, if any, in unit cost. It is about distributing the fixed costs across enough units that the fixed cost associated with each unit is negligible.
That is why it makes no difference how many the other guy makes as long as you can produce the chip at a competitive cost.
IBM does have incentive to keep pace with or exceed Intel/AMD. If they enter the desktop market with this chip, not only is their reputation at stake, but they are also competing with x86, albeit not head-on. They are building an alternate solution, which makes them a competitor all the same.
Previously, the G3’s use in the iMac was not IBM’s primary reason for producing the chip. This new chip’s primary purpose will be the desktop. Therefore they will attempt to keep pace with and even surpass Intel/AMD.
But please don’t bring economies of scale into it.
rajan r: “My guess that in MWSF, you wouldn’t hear a hoot about 970. Why? The gap between the announcement and when the processor goes into mass production is too long.”
I wasn’t suggesting, or expecting, the 970 to be introduced at MWSF. I was hinting at all the OTHER important areas aside from CPU that Apple might introduce, including those you suggested.
Far too many Mac users have shown how little they know about computer performance. I have no problem with their lack of knowledge (hey, it’s not that interesting to everyone), but when you start to claim things that are just plain wrong, or rehash false marketing “truths”, you are on very thin ice. This is the reason Mac computers can still sell based on their “performance”.
Fact 1: The P4 and Athlon are no more CISC processors than the PowerPC of today and tomorrow is a RISC processor. Both the P4 and the Athlon are internally closer to RISC designs: the decoder unit translates the x86 instruction set into internal micro-op codes. The G4 is not a true RISC processor either. Check:
http://www.arstechnica.com/cpu/4q99/risc-cisc/rvc-1.html
Fact 2: 64-bit computing is not 2×32 bit! 64-bit computing refers to the “native” size of the registers and thus the addressing space of the CPU. A 32-bit CPU can, with 1 instruction, address up to 2^32 bytes, or roughly 4GB of data (64-bit CPUs can do 2^64 addressing in 1 instruction). 4GB is plenty of data for both today’s and tomorrow’s desktop applications. Very few people will, for the next 5 years, have use for data sizes of more than 3GB. Servers often have databases bigger than 4GB, but folks, those are servers.
“Well, a 64-bit CPU can do calculations on two 32-bit numbers with just 1 instruction, so it will be twice as fast.” No, you can’t, unless you have a special instruction for this, due to carry overflow etc. Such a special instruction would be a sort of vector instruction, and that is what you already have AltiVec and SSE for! 64-bit computing is faster ONLY on large numbers compared to a 32-bit CPU.
Fact 3: “The Megahertz Myth”, “The P4 has too many stages” and “the P4 is a shoddily designed CPU”.
Megahertz counts… simple fact. A 2GHz P4 is faster than a 1.5GHz P4. A 2GHz P4 is thereby not by default faster than a 1.7GHz Athlon. IPC also counts, and to be frank the P4 has the lowest IPC of the G4, the P4 and the Athlon. BUT the P4 was designed with lower IPC in mind in order to reach high frequencies. There is no magical way to reach high frequencies by designing the CPU in a smarter way. The PowerPC will have to increase the number of pipeline stages to reach higher frequencies. They have already done this from the G4 to the G4e: from 4 to 7 stages. The Athlon XP, for reference, has a 10-stage pipeline.
A rough estimate of a processor’s performance is clock speed * IPC. Now, the P4’s IPC is not so poor that a P4 at 2.8GHz cannot severely outperform a G4e at 1.2GHz. Ever wonder why Apple doesn’t allow independent benchmarking of the Mac? How come you trust the “benchmarks” at the Mac shows by Apple but don’t trust Intel’s numbers? And what about those independent tests that show the not-so-stellar performance of the PowerMac?
http://www.digitalpostproduction.com/2002/07_jul/features/cw_macvsp…
Now about the IBM PowerPC 970.
It is due in about a year, reaching 1.8GHz and manufactured on a 0.13-micron SOI process.
Both the Athlon and the P4 are today already way past 1.8GHz, and both are manufactured at 0.13 micron. The Hammer (Opteron/Athlon64?) will be manufactured with SOI early next year.
The Opteron will use an analogous technology, extending x86 to 64 bits, and that CPU is very soon to be released. In other words, the IBM PowerPC 970 is not so impressive considering that it is still at least a year away. I wonder at what frequency the Athlon64 (or whatever the 64-bit AMD Hammer processor will be named) and the P4 or P5 will be running by then.
Remember the IBM PowerPC 970 is a scaled down Power4 with 1 processor core where the Power4 has 2. A scaled down processor is very unlikely to be as fast as the original at the same speed.
One final thing: no, I’m not an Intel fanboy. I prefer and use the Athlon due to the nice price/performance of this beast of a CPU.
I like the PowerPC architecture, but for the foreseeable future it has lost the performance war on the desktop. I would consider a Mac (I use and love FreeBSD) IF they improved the performance of the computers considerably. The new PowerPC 970, about a year from now, simply will not be enough for me.