Major electronics companies have come together to form a new standards body focused on Power Architecture technology. Power.org will create and promote a family of standards, reference designs, and more. Here’s a developer’s-eye view of the future and implications for Power Architecture standardization.
With rumours that the “Cell” processor will trounce x86-based processors, does anybody think that there will be a time when the PowerPC architecture will replace x86 as the mainstream architecture? I’m not trying to start a flame war, I’d just like to hear your thoughts and opinions. 🙂
“…does anybody think that there will be a time when the PowerPC architecture will replace x86 as the mainstream architecture?”
That would make the people behind Itanium cry like babies.
With POWER, SPARC, and AMD64 all more viable platforms than Itanium…they probably are already crying like babies.
I would rejoice!!!
It’s quite possible that x86 may one day be replaced by Power. With x86 processors increasingly hitting MHz and heat limits, and with Power beginning to surpass them at lower clock speeds, heat output, and power consumption, we may see a migration.
The Cell will help this along with servers, game consoles, and yes, workstations. Do you think those servers will run Windows Server 2006? I doubt it. The closest thing Microsoft has that runs on PPC is its Xbox 2 development kit, and that’s NT 4 for PPC.
This may also have great implications for Linux, as those servers will most likely run Linux/Unix variants, though I believe Linux would be migrated more quickly since the kernel already runs on PPC. Desktops will follow this natural progression in the high-end workstation arena, but lower-end machines running OS X may be going into homes around the world. With Cell-based gaming platforms out there, the gaming base for OS X will be rich and plentiful, which has always been a barrier to OS X/Linux adoption, beyond hardware issues and just plain Microsoft addiction.
5 years and the computing world could be completely different.
With the creation of AMD64, is there anything still wrong with x86? I mean, once you stop running any 32-bit apps, how does the AMD64 architecture stand up on its own?
AMD64 is arguably the first OK-ish incarnation of x86, given that AMD finally managed what Intel never could pull off: giving it a decent number of general-purpose registers (which even the old 68000 processors had).
But it’s still a huge mess, given the legacy burden the architecture carries from Intel’s more than 25 years of architectural mismanagement, never dropping old stuff…
Give me a fresh clean small instruction set any day….
The problem still is that there are so many old legacy apps (like Windows) which are hard to port to a new architecture without having millions of users crying like babies.
The first architecture that is good in its own right and can emulate current x86 faster than an Intel/AMD chip can probably win out in the long term; if that doesn’t happen, forget it.
AMD64 is an even uglier architecture than regular old x86. First, it has most of x86’s flaws, because it’s backwards compatible. Second, it leaves out segmentation. While that makes it simpler, it also means that the CPU lacks a high-speed address space sharing mechanism, something that PPC and Itanium both have. Lastly, it’s got only 16 registers (which is mitigated somewhat by register renaming), and to access those or to access 64-bit modes, you have to use new prefixes. These prefixes push the average instruction size of 64-bit binaries to 4.2 bytes, compared to regular x86’s 3.2 bytes. That means that the AMD64 has the hassle of decoding variable-length instructions, without any of the code-size advantages that variable-length instruction sets usually have.
Jeepers creepers!!!!
I thought AMD64 was a RISC chip with an x86 area or translator. I assumed that in pure 64-bit mode it was at least something sane. I thought the plan was to use the dual-mode chip as a bridge to a sensible architecture in the future… man. I had always meant to read up on the internal design.
“But it’s still a huge mess, given the legacy burden the architecture carries from Intel’s more than 25 years of architectural mismanagement, never dropping old stuff…
Give me a fresh clean small instruction set any day….”
Oh please! Who the hell is writing assembler code these days? I am no programming genius myself, but I DO know that modern compilers are so good that messing with instruction sets by hand is crazy unless you are programming a microcontroller or similar and hence need total control and code efficiency.
15 years ago, when the RISC vs. CISC wars raged, that MIGHT have been a valid argument (because compilers were less powerful then), but today, when CISC is RISC (read up on modern x86) and RISC is CISC (look at the G5…), that argument makes little sense!
Is it me, or are they pushing the marketing too far? I mean, it was already pimp to call its chips “PowerPC”; calling them just “Power” now is even more bling-bling. No wonder they want to use gold in processors now.
The POWER architecture predates the PowerPC architecture. In fact it is the basis of the PowerPC chips.
I think dropping segmentation is a good pragmatic decision. No major OS uses segmentation as extensively as Intel originally envisioned. And with managed environments like .NET and Java, you can even have multiple applications in the same process.
A very modern processor design might choose to eliminate memory protection altogether and rely on software code verification instead. I think the JNode OS does it that way.
The fact that compilers hide that legacy mess cannot work any real miracle and produce more efficient, reliable code than the CPU dictates. You can look at it as creating a kind of virtual machine, an additional layer, whose bloat depends on the distance between the well-structured concepts of modern languages and an actual architecture with the heritage of a pocket-calculator controller. :)
@Rudiger Klaehn: x86, PPC, and Itanium have MMU mechanisms to allow fast shared memory, beyond just sharing page table entries. These mechanisms can be used (though they aren’t usually) to great effect for very fast IPC. L4 does this with segmentation on x86, for example. By eliminating segmentation, and not offering a replacement, AMD64 is at a disadvantage compared to these other architectures. I agree that the best thing to do would be to use software-based protection, but as long as we have to run code written in unsafe languages, we’ll need some sort of hardware protection.
@CaptainPinko: No, internally the Opteron processors are RISC (as are the P4 and P-M). I’m referring to how the architecture looks to the software (and the compiler back-end!). That part of AMD64 is still quite ugly.
The “PC” in PowerPC stands for Performance Computing, NOT Personal Computer.
Also the PowerPC ISA is a subset of the POWER ISA.
The only place RISC vs. CISC arguments belong is with MICROCONTROLLERS, where code can be heavily optimized via assembly routines, since the cores tend to have limited code space, RAM, and processing capability.
Computers come with 512 megabytes of RAM nowadays; the most popular microcontrollers come with 512 bytes, operating for the most part at 4 MHz–133 MHz, where complex math like divide or multiply takes 2 or 3 clock cycles to complete. An analog-to-digital conversion takes 13 cycles on an Atmel ATmega128.
Efficient assembly coding can be dramatically faster on a small platform like a microcontroller, where compilers are relatively new and teething, but it is largely pointless on 68k, x86, or other processors, considering the price of RAM, hard drives, and CPUs.
By the way, MenuetOS is an amazing work.
Well, it’s not quite so straight-forward. RISC vs CISC becomes quite an issue when you’re talking about something like a compiler, which needs to generate efficient machine code for the chip. It’s harder to generate good machine code for most CISC CPUs (instruction selection and instruction scheduling both need to be good — and the two interact in unusual ways). It’s easier to generate good machine code for modern RISC CPUs, since selection is straight-forward, and you really only have to worry about scheduling.
Also, with x86’s tiny and non-orthogonal register set, good register allocation algorithms like graph coloring become much less effective. GCC’s new graph coloring allocator, for example, isn’t all that much faster than the old ad-hoc local one. Beyond that, once you have to deal with implementing anything more complicated than C, the lack of registers becomes a very big problem. For example, a good way to do precise garbage collection while retaining the ability to call C code is to have a seperate stack for the C code. Well, when you’ve only got 8 integer registers, tying up 25% of them for 2 stack pointers isn’t really something you want to do.
Too bad there won’t be an affordable, non-Apple desktop anytime soon with a commercial OS to run on it.
Ideally, a nice stripped-down POWER5 with 1 MB of L2 cache, plus a system board and graphics card, for a grand would be a great deal. IF (a big IF) Solaris is open-sourced, getting it to run on POWER should be easy, as the PowerPC port should still exist in the Solaris source tree, making it a compelling workstation.
It’s funny though, Rayiner. I’ve been an engineer since 1999, and I have written code both for college and work in various assembly languages (my preferred language because I’m a control freak, though for work, timing is almost always critical).
The first language I used was 8086 assembly on a crappy assembler and debugger, A86 and D86. Yes, many will laugh at that one, but it DID WORK, though it was very hard to debug once the code spanned more than 300 lines. It was quite interesting to write code that depended on clock speed: the professor would tell you to write a piece of code that ran a motor at a certain speed, but then you try it on a faster computer and, as expected, it runs too fast. A very important lesson learned there: the FIRST thing you must code is proper timing, so your code runs at the expected speed.
I then programmed 8051s in assembly (yes, there are 5,000,000 C compilers for it; I think many university computer-engineering majors have to write compilers for it).
For my actual employment, I have coded heavily on Microchip PICs, from the tiniest 12C508 to the 17C756A, and now on Atmel AVRs.
To sum up:
My experience has been this: RISC coding is exceptionally easy when writing stand-alone, hard real-time applications on microcontroller-based products. Timing in CISC assembly just seems so much more difficult to keep track of (multi-cycle instructions vs. one or two cycles).
CISC may save you from writing your own routines for more complex operations, but how much development time does that save you versus the time you have to spend trying to keep track of timing?
I have to laugh at x86’s 8 registers, considering that many of the microcontrollers I use regularly have anywhere from 32 to 256 or more, which makes coding A HELL OF A LOT EASIER, even for me, stuck in the 8-bit world (where much of the analog-to-digital conversion I do is 10-, 12-, or 16-bit).
Many people do not realize that 4-bit MCUs are not a thing of the past; they are so incredibly cheap now that they are used in everyday products. Try coding on a platform where an 8-bit integer spans 2 registers!
For stand-alone applications, where the only code running is for ONE application, assembly is the fastest and, for me, the easiest way to develop.
AND WHERE, OH WHERE, IS MY RECONFIGURABLE COMPUTING?
I’d love to be able to set up the number of registers my programs need, whether they are 8-, 16-, 32-, or 64-bit, and take all unused portions of the CPU and convert them to RAM.
What do PPC and Itanium allow you to do beyond sharing pages with the MMU? Something you can also do with AMD64, AFAIK.
Also about the lack of segmentation: no offence but not too many people care about L4..
Making a CPU is about making choices: if a mechanism is nearly unused, removing it is a good choice IMHO.
This is what the people behind the RISC concept did.
About the additional prefix needed to access the “new” registers of AMD64: OK, it is more costly to use a new register than an old one, but if you run out of space in the old register set, it is still cheaper to use one of the new registers than to spill a register to memory and reload it later; compilers only use this prefix when they need it.
So the average increase is not one octet, as you said, but much less than that.
That said, I find the 80x86 (and even AMD64) ISA fugly, but history has shown that those who think the beauty of RISC or VLIW may displace the 80x86 are dreamers: compatibility rules (the Cell has *no* chance of replacing x86 in desktop PCs; for embedded use the situation is different).
Why not just use Java or .NET. That should take care of any programming problem you would have since the hardware has been abstracted…just kidding.
It all depends on Microsoft and on opening up the Power architecture. The day I can log on to my favorite hardware vendor’s site and buy my choice of PPC mobo, CPU, and a copy of Windows will be the day that PPC replaces x86. It’s not too far-fetched: Microsoft Windows has run on PPC hardware in the past, and of course the next Xbox will be PPC-based. I don’t see it happening soon, though; it’s not that easy to shift an entire software industry to a new platform overnight.
Oh God! I thought I was the last person on earth who still loves assembly. Thank goodness there are a few more maniacs around. 🙂
> The day i can log on to my favorite hardware vendor site and buy my choice of PPC mobo,CPU and a copy of Windows will be the day that PPC replaces x86.
Bah, Windows used to run on MIPS and Alpha; what good does it do if the programs you’re running on Windows are not available?
Also usually, the x86 has the best power/price ratio by far, not only for the CPU but also for the motherboard.
***
For the ‘real programmers do it in assembly’ comment: I myself dropped assembly because x86 makes me puke. Give me an orthogonal ISA any day over this kludge that is x86 (RISC or not: the Motorola 68k was nice too).