AMD has written some things lately, as Intel has in the past, pointing out that the difference between RISC and CISC no longer matters (in fact, modern x86 CPUs are largely RISC these days, except for the memory interface), and that CISC is catching up to and surpassing RISC. iGeek looks at the facts.
Sorry, but your statement “in fact, modern x86 CPUs are largely RISC these days, except the memory interface” is not true. You probably meant to say: modern x86 CPUs implement/copy much of the physical design found in RISC CPUs.
RISC means Reduced Instruction Set Computing. I don’t think Intel or AMD has REDUCED its instruction set since introducing its CISC CPUs.
For a fair comparison of RISC and CISC, and an introduction to what RISC means:
http://ctas.east.asu.edu/bgannod/CET520/Spring02/Projects/demone.ht…
http://www-compsci.swan.ac.uk/~csneal/HPM/risc.html
greetings max
I agree that CISC is a better commercial solution, and even that CISC programs can execute as fast as RISC programs.
But my interest is learning assembler, and CISC machines (PCs) have an enormous instruction set, with plenty of opportunities to spend my time learning five ways to do the same thing (four of them obsolete). If I could study an elegant, reduced instruction set like the PowerPC’s, I could learn how a computer works sooner and dedicate the time I saved to other things. So I decided some time ago to involve myself in studying the PowerPC instruction set, and at present I’m happy with my decision.
I could be wrong, but it was my understanding that modern x86 CPUs are really RISC CPUs that translate x86 instructions. The word emulation comes to mind, but it’s not really emulation, since the translation is done at such a low level in hardware.
What are we going to do when .NET takes over and hardware platforms don’t matter anymore?
First they point out that there is no longer a speed advantage between RISC and CISC, and that modern CISC CPUs are surpassing RISC, and then they admit that modern CISC CPUs use an internal RISC architecture. *LOL*
Looks like the x86 boys had to change the internal CPU design to RISC over the years to catch up.
Now tell me: “Is CISC really superior to RISC?”
They gave the answer themselves!
Ralf.
From John R. Mashey at SGI.
http://groups.google.com/groups?&hl=en&lr=&ie=UTF-8&selm=8p20b0%24dhh%243%40murrow.corp.sgi.com&rnum=1
I am not familiar with AMD’s 64-bit extension, so I think it could be an interesting exercise to update his tables with x86-64.
> RISC means Reduced Instruction Set Computing. I don’t think Intel or AMD has REDUCED
> its instruction set since introducing its CISC CPUs.
No, of course not. I think that she is referring to the microcode, as opposed to the x86 instruction set. By breaking the various x86 instructions into micro-ops, the chips can do the work more efficiently because of the increased parallelism.
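To make that concrete, here is a rough sketch in C (the instruction and the micro-op split are illustrative, not any vendor’s actual microarchitecture) of how one read-modify-write x86-style instruction can decompose into three independent micro-ops:

#include <stdint.h>
#include <stdio.h>

static uint32_t memory[1];

/* One CISC-style read-modify-write instruction, "add [mem], reg",
   modeled as the three RISC-like micro-ops a decoder might emit.
   Each micro-op either touches memory or does arithmetic, never
   both, which is what lets the core schedule them independently. */
void add_mem_reg(uint32_t reg)
{
    uint32_t tmp = memory[0];  /* micro-op 1: load  */
    tmp = tmp + reg;           /* micro-op 2: add   */
    memory[0] = tmp;           /* micro-op 3: store */
}

int main(void)
{
    memory[0] = 40;
    add_mem_reg(2);
    printf("%u\n", (unsigned)memory[0]);  /* prints 42 */
    return 0;
}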
> I could be wrong, but it was my understanding that modern x86 CPUs are really RISC CPUs
> that translate x86 instructions. The word emulation comes to mind, but it’s not really
> emulation, since the translation is done at such a low level in hardware.
(microcode)
> What are we going to do when .NET takes over and hardware platforms don’t matter anymore?
o_0
Intel’s Itanium 1 and 2 microprocessors are not RISC; they are a VLIW (very long instruction word) architecture.
IMHO IA-64 is not only not RISC: because of the upward (multiprocessing) and outward (clusters) scalability of Itanium servers, it is the technology that will KILL RISC-dependent vendors like Sun Microsystems.
We don’t know which OS will dominate, but we do know that Intel is a relentless industrial powerhouse that will soon be pushing a superior IA-64 architecture from a massive worldwide system of microprocessor fabrication plants.
IMHO scale + superior technology = victory over RISC.
IA-64 is not an excellent platform for IA-32 code, but it will run it at adequate speed during the transition period.
The problem with AMD’s evolutionary approach is that 64-bit technology will soon dominate on the server side, but it’s still two years away from mainstream desktop use.
Microsoft can’t support corporate and enterprise customers on the server side across two hardware architectures, because supporting NOSes on servers is far easier on a limited cross-section of higher-quality hardware. So Intel will IMHO get an important head start from Microsoft’s .NET server release in April 2003.
Linux on AMD’s 64-bit platform might give Intel and Microsoft some much-needed second-phase competition, though. 🙂
Not to mention Linux on IA-64 :->
But RISC is TOAST.
IA-32 is x86-32
IA-64 is not x86-64
I do not know whether you were thinking that they were the same thing or if it was simply unclear from your post, but Intel and AMD have very different approaches to 64-bit computing. IA-64 is for Intel’s Itanium line and AFAIK it is a radical departure from IA-32. AMD’s solution is named x86-64, and it is an extension of x86-32.
From what I have read, AMD’s 64-bit solution seems to outperform its own 32-bit processors running at the equivalent clock speed, but it remains to be seen how high AMD will be able to clock their processors. Apparently there is a 64-bit version of Unreal Tournament 2003 in development:
http://firingsquad.gamers.com/news/newsarticle.asp?searchid=4569
This is slightly OT, but anyway:
there are several projects to implement a processor which can handle Java bytecode directly in hardware. An example is Jazelle from ARM. AFAIK it is not very complex, since the Java VM emulates a stack processor.
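For anyone wondering what “stack processor” means here, this is a minimal sketch in C (the opcodes are invented; real JVM bytecode is encoded differently) of the operand-stack execution model such hardware implements:

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

/* Evaluate a program the stack-machine way: operands are pushed,
   and arithmetic ops pop two values and push the result. */
static int run(const int *code)
{
    int stack[16];
    int sp = 0;
    for (;;) {
        switch (*code++) {
        case OP_PUSH: stack[sp++] = *code++;            break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}

int main(void)
{
    /* (2 + 3) * 4 as stack code */
    const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                         OP_PUSH, 4, OP_MUL, OP_HALT };
    printf("(2 + 3) * 4 = %d\n", run(prog));  /* prints 20 */
    return 0;
}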
Are there any plans to create a processor which can handle CLI bytecode at the hardware level?
Greetings from Anton
Hi!
In my opinion, the IA-64 architecture from AMD is a real challenge for Intel, because it is only a linear extension of the Athlon design, without any conceptual changes!
I think reasons like compatibility, performance and so on don’t really matter; no, the main reason for its future success lies in human nature, which tends to choose the worst possible of two options if one of the two toys offers the chance to guarantee more joy = complexity to resolve.
IA-64 is NOT RISC
I couldn’t agree more.
IMHO IA-64 is not only not RISC: because of the upward (multiprocessing) and outward (clusters) scalability of Itanium servers, it is the technology that will KILL RISC-dependent vendors like Sun Microsystems. I couldn’t agree less…
If IA-64 could show a clean pair of heels to RISC processors you might have a point,
but IA-64 has singularly failed to do this thus far.
Itanium 2 outperforms the POWER4 (at least on SPEC marks), but not
exactly by a massive margin, and when the final Alpha 21364 turns up it’s
likely to outperform the Itanium 2.
On the question of scalability, UltraSPARC 3 was designed to go into
systems with 1024 processors; RISC vendors have been building big
multiprocessor boxes for the commercial market for years, and Intel has not.
Then there is the reason why x86 still dominates PCs – legacy.
RISC has its own legacy these days; will companies really want to
port their mission-critical systems over to a new, immature platform?
Reliability is king in the enterprise world – that’s why Sun still sells
big systems even though they often lag badly in performance.
—
One thing I think the article got wrong is saying that CISC is mainstream.
In the PC world, yes, but the embedded world is 50X the size of the PC
market, so if anything RISC is mainstream, not x86.
> So Intel will IMHO get an important head start from Microsoft’s .NET server release in April 2003.
If I run that sentence through the debullshittizer, I get April 2005. Could it be that MS is pulling a fast one, like with WinNT 5.0? (Allchin predicted a release in the first quarter of 1997, and then gradually pushed it back… and then came Windows 2000.)
CISC has only ever gotten ANYWHERE on desktop PCs.
RISC processors are used ALL AROUND YOU. HVAC systems use a lot of 68k and some 803x and 805x parts. 95% of fire alarms sold in the US use a RISC microcontroller (we are talking BILLIONS of units in fire alarms alone!). CISC is a bad idea; Intel is keeping it working by speeding up the clock cycles. Motorola and AMD will never catch up to Intel in terms of speed, but will greatly surpass them in terms of actual operations performed per clock cycle.
M68k processors are decidedly *not* RISC. They have a simpler instruction set than x86, but nowhere near the simplicity of MIPS, or even SPARC.
> In my opinion, the IA-64 architecture from AMD
There is no IA-64 architecture from AMD.
> I think reasons like compatibility, performance and so on don’t really matter; no,
> the main reason for its future success lies in human nature, which tends to choose
> the worst possible of two options if one of the two toys offers the chance
> to guarantee more joy = complexity to resolve.
*scratches head*
I am really sick and tired of these unnecessary comparisons. Everybody knows that a clean design can have advantages over old designs; it does not matter if it’s CISC or RISC or whatever the buzzword. RISC is only one strategy for processor design; there are many others, and they can even be mixed.
Legacy always bloats a product – see any OS around. What really matters for the customers and for Intel/AMD is the millions of lines of code written for the architecture. This is a huge asset. It will ensure that current customers can easily upgrade with the software they are using. We are not yet in a state where all software is released with source code. I doubt this will happen in the near future.
Even though Intel tries to push a new architecture, it can take decades for it to be adopted. Customers are very conservative when it comes to switching to new technologies (rightly so – chances are something will break). Meanwhile AMD can offer something which works, is safe and, most importantly, sells.
According to this article…
http://courses.ece.uiuc.edu/ece497esd/lec/jones2.pdf
…CISC has already matched and even surpassed RISC, though.
While we’re drifting off-topic with this AMD Hammer talk, here is an article with some interesting information:
http://www.anandtech.com/showdoc.html?i=1755&p=1
Transmeta processors are neither RISC nor CISC… They are RCISCs (Reduced and Complex Instruction Set CPUs – or whatever their instruction set is called in processor lingo).
Pity they don’t know how to market their processors, because they surely are innovators…
Cheers…
P.S. – Don’t the Russians have yet another class of processors???
Intel’s Grove, IIRC, used to say at various conferences that Intel made more x86 CISC chips in a lunch break than all the RISC vendors combined made in a year (around when SPARC was becoming prominent), but he was really comparing the PC market with the workstation market, which by then had rid itself of CISC (68Ks mostly).
But that was before System-on-Chip and the spread of RISC cores into most ASICs. ARM, MIPS and PPC cores are everywhere – not at the ridiculous speeds of x86, but faster, lower-power and lower-cost than x86 CISC would allow (Motorola’s ColdFire 68K may be an exception). I can’t think why anybody would embed an x86 core into any ASIC – too complex, too hot, and not available – but the ARMs etc. are pretty easy to deal with. Most embedded CPUs are so well hidden that most users will never know.
However, the real way to get massive (100s+) speedups for power users is not more cranking on the x86 clock, but recompiling SW to HW on FPGA boxes and reconfigurable computing.
How good is gcc’s PPC optimization?
How much room is there for improvement?
This sounds like Apple and AMD beating the ‘megahertz doesn’t matter’ drum, which, amazingly, I don’t think started until each company figured out that it couldn’t keep up with Intel in the megahertz race.
The ‘RISC doesn’t matter’ drum sounds all too familiar.
I’ve been told that RISC architectures make for code that’s twice the size, because complex instructions potentially make the code more compact.
Can anyone vouch for this?
AFAIK gcc is only starting to improve on PPC, largely because there has been no mainstream need for a good PPC compiler. The companies who have been using Unix on PPC up till now have all had their own highly optimized compilers (like IBM). Now that OS X uses it extensively, I would not be surprised to see the quality skyrocket (mainly from Apple’s contributions).
There are two problems in my mind: gcc is meant to compile for everything under the sun, and it has mainly been used on x86 up till now. The second reason is obvious: the generation and optimization of x86 code has been worked on the most. The first problem is intricately tied into the second, which makes things even weirder. Because all this work has gone into x86 forms of optimization, it fails to optimize correctly for architectures like PPC and IA-64. The IA-64 gcc guys have had to dramatically rewrite stuff, IIRC, just to get decent performance – and HP’s compilers still destroy them entirely. Same deal with PPC: gcc fails to use the extra registers in a way conducive to speed on the PPC, because it uses them like x86 registers.
It kinda sucks, because some of Itanium’s and PPC’s biggest advantages rely heavily on the compiler – Itanium even more so than PPC.
It seems that the current CISC speed advantage is in large part attributable to high clock speeds, so I would like to see an experiment at, say, IBM: a RISC chip with a huge pipeline, running at a blazing-fast clock with solid branch prediction (is a mispredicted branch what wipes out the pipeline? I forget :-P), to see what it can do when it meets Pentiums at an equal clock.
My question is answered near the end of the article.
> I’ve been told that RISC architectures make for code
> that’s twice the size, because complex instructions
> potentially make the code more compact.
> Can anyone vouch for this?
Coming from a BeOS perspective – PowerPC binaries are almost always a great deal smaller than Intel ones (e.g. 500k Intel is 300k PPC). This is more than likely down to the compiler, but it proves that it’s possible to say ‘PowerPC generates smaller binaries’ and have something to back it up with. Remember, this is the same code, same linking etc., just a different compiler: PPC uses Metrowerks, Intel uses gcc. Even stripping all symbols from the Intel binary will often leave a larger exe.
This of course proves nothing whatsoever and should be completely ignored by everyone…
In theory, yes. In reality, not really. Most CISCs have a smaller general-purpose register set plus dedicated-purpose registers, which causes more moves and spills.
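A toy illustration of that point in C (not a benchmark; the liveness counts are only indicative):

#include <stdio.h>

/* With the four products and several inputs live at the same time,
   more values are in flight than IA-32's eight general-purpose
   registers can hold, so some get spilled to the stack (extra moves
   and memory traffic); a 32-register RISC keeps them all in registers. */
static long many_live(long a, long b, long c, long d,
                      long e, long f, long g, long h)
{
    long p1 = a * b;
    long p2 = c * d;
    long p3 = e * f;
    long p4 = g * h;
    /* p1..p4 plus a, d, e, h are all still live here. */
    return (p1 + p2) * (p3 + p4) + (a + h) * (d + e);
}

int main(void)
{
    printf("%ld\n", many_live(1, 2, 3, 4, 5, 6, 7, 8));
    return 0;
}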
The clean pair of heels you are looking for is the 0.13-micron implementation of Itanium 2 (currently at 0.18 micron).
We can expect the 0.13-micron “Madison” Itanium 2 in approx. Q2 of 2003. Please see p. 3 of the six-page PDF Giga report on Intel’s website:
http://www.intel.com/ebusiness/pdf/prod/itanium/wp023901.pdf
The report was written in June 2002, and it does contain an “alternative view” – but I hold that Itanium 2 has convinced folks who were previously skeptical (i.e. Dell), and the “alternative” (essentially it’s your view, right?) is just not keeping up with current events.
Now bear in mind that .NET Server is scheduled for Q2 (currently April), and that the Linux 64 boys have been very busy too, and you will see the millions of troops massing to crush Scott McNealy’s last citadel.
Sun has nowhere to retreat to, as 64-bit computing will last at least until 2015, and IMHO a lot longer. As soon as the 0.13-micron Itanium 2 comes out, Sun will be chased down and sacked.
By the end of 2004, it’s “game over.”
RISC was more about having instructions that behave nicely in a pipeline than about reducing the instruction set. It worked great with pipelined, non-superscalar CPUs. Today most of the energy is spent looking for instruction-level parallelism to feed massively superscalar architectures (speculative execution, branch prediction, out-of-order execution etc. are needed when ILP is too weak). The IA-64 design recognizes that and breaks the implicit dependencies between sequential instructions, forcing dependencies to be explicit. In that respect IA-64 and RISC are really based on the same philosophy: change your ISA to accommodate the core architecture. Predicates, software pipelining, instruction templates with explicit parallelism – all of these features make life easier for the core.
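To make the predication idea concrete, here is a hedged C sketch (the compiler does this transformation at the instruction level, not in source code):

/* Branchy form: the CPU has to predict the condition. */
int max_branchy(int a, int b)
{
    if (a > b)
        return a;
    return b;
}

/* Branch-free form: both candidates are computed and a condition
   selects one; this is the shape an EPIC compiler maps onto
   predicate registers (or an x86 compiler onto cmov), so there is
   no branch left to mispredict. */
int max_branchless(int a, int b)
{
    int take_a = (a > b);   /* the "predicate" */
    return take_a ? a : b;  /* select, straight-line code */
}

int main(void)
{
    return max_branchy(1, 2) == max_branchless(1, 2) ? 0 : 1;
}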
Kobold wrote:
> AFAIK gcc is only starting to improve on PPC [snip]
>
> It kinda sucks, because some of Itanium’s and PPC’s
> biggest advantages rely heavily on the compiler. [snip]
This is good news. It means that there’s great promise in PPC optimization for gcc — and, of course, compiler optimization is RISC’s strong suit.
Search http://www.ultratechnology.com and colorforth.com
for “Minimal Instruction Set Computer”.
C. Moore has designed MISC CPUs with fewer than 32
instructions, packed 4 to a 21-bit word or 6 to a 32-bit word.
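To make the packing arithmetic concrete, a small C sketch (the slot layout is invented for illustration; Moore’s actual chips differ in detail): six 5-bit opcodes fit in one 32-bit word, since 6 × 5 = 30 bits, with 2 bits left over:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t word = 0;

    /* Pack six 5-bit opcodes (values 0..31) into bits 0..29. */
    for (int slot = 0; slot < 6; slot++)
        word |= (uint32_t)(slot + 10) << (slot * 5);

    /* Unpack them again. */
    for (int slot = 0; slot < 6; slot++)
        printf("slot %d: opcode %u\n",
               slot, (unsigned)((word >> (slot * 5)) & 0x1Fu));
    return 0;
}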
The internal structure (the core) of x86 CPUs (especially from AMD, Intel and VIA, but not from Transmeta, for obvious reasons) is more RISC-like than CISC. However, it isn’t pure RISC. Even architectures based on RISC (PowerPC immediately comes to mind) aren’t pure RISC.
Besides, what’s your point? The whole article concluded that there isn’t a big difference between CISC-based and RISC-based architectures, because they have evolved to meet each other halfway. Plus, CISC processor makers never said CISC was better than RISC. In fact, it was the other way around, but they still go on with CISC because more applications are based on it.
It seems some people here don’t understand the basics of processor design.
RISC:
Alpha, MIPS, SPARC, PA-RISC, PowerPC, 88k, i960, … and also the IA-64 (using a variation of VLIW, called EPIC)
CISC:
IA-32 (Intel and AMD), x86-64, 68k, 6502, VAX, …
RISC describes how the instruction set was designed: e.g. an orthogonal ISA, load & store architecture, few addressing modes, a minimal ISA. It doesn’t describe whether the CPU implements a superscalar, multi-pipeline architecture etc., although those were first found in RISC processors.
VLIW/EPIC is just a way in which several RISC instructions can be combined/connected into a “very long instruction word”. The instructions will be executed concurrently. RISC and VLIW are not mutually exclusive; I’m not even aware of VLIW designs NOT based on RISC.
RISC and CISC cannot be combined.
The CISC term was invented to distinguish non-RISC (traditional ISA designs) from RISC CPUs. And RISC just covers the instruction set, not the internal design of the processor!!!
Even if the Pentium 4 has a RISC subsystem inside (CISC is translated into RISC micro-ops, and a RISC engine executes them), it’s NOT a RISC CPU, since a software developer can’t take advantage of the RISC subsystem’s ISA. A programmer can use only the CISC instructions.
greetings max
Currently there isn’t a .NET CLI hardware CPU, but I did hear a rumor that Transmeta had an emulator partially implemented, as a proof-of-concept demo to try to get financing from MS.
This debate is obviously religious and pointless. The ALU, the heart of processing, is always “RISC”, since it can do very few things. Microcode constitutes a type of “macro” language implemented in hardware; a CISC processor simply does a table lookup in microcode, indexed by the object-code instruction. Whether you remove the microcode and make it part of your object code, or allow the macros to remain on-chip, does not make much difference. However, having macros on-chip reduces the main-memory bandwidth requirement *dramatically*, since one object-code instruction may initiate a dozen or more microcode instructions. A RISC processor with a large on-chip cache will achieve exactly the same result once the object code (containing the microcode) is copied into the cache. So you see, in practice there is not, nor would we expect to see, much difference.
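That “table lookup” view can be sketched in C (the opcodes and micro-ops are invented for illustration): each object-code opcode indexes a microcode ROM whose entry is a short sequence of micro-operations:

#include <stdio.h>

/* The micro-operations the simple core underneath actually knows. */
typedef enum { U_LOAD, U_ADD, U_STORE, U_END } uop;

/* Microcode ROM: each object-code opcode maps to its micro-program. */
static const uop microcode_rom[][4] = {
    { U_LOAD, U_ADD, U_STORE, U_END },  /* opcode 0: add [mem], reg */
    { U_LOAD, U_END,  U_END,  U_END },  /* opcode 1: plain load     */
};

static void dispatch(int opcode)
{
    for (const uop *u = microcode_rom[opcode]; *u != U_END; u++) {
        switch (*u) {
        case U_LOAD:  puts("  uop: load");  break;
        case U_ADD:   puts("  uop: add");   break;
        case U_STORE: puts("  uop: store"); break;
        default:      break;
        }
    }
}

int main(void)
{
    puts("opcode 0 expands to:");
    dispatch(0);  /* one CISC instruction -> three micro-ops */
    return 0;
}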
> The ALU, the heart of processing, is always “RISC”, since it can do very few things.
The ALU (Arithmetic Logic Unit) is, firstly, just a small part of the CPU. Yes, it just does add, sub, mul, div, cmp etc. But the MMU stuff, address operations and stack operations are also part of the instruction set of a processor and are implemented on the chip. Since CISC has more of them, additional logic (= more transistors, more complexity) is necessary – more than on a RISC CPU.
And RISC is not about “how many/which instructions are handled internally by the ALU”, but about “how many/which instructions the programmer can use altogether”.
Moreover, a typical RISC processor has many more registers. And this is also a major difference between RISC and CISC.
Assume you want to compute a formula like this: z = (a+b)*c + c + b + a.
On a CISC (since you have a limited set of registers you can use), optimized asm code is handled by the CPU something like this (extreme example):
1. load a in register1
2. load b in register2
3. sum R1,R2 => R1
4. load c in register2
5. mul R1,R2 => R1
6. sum R1,R2 => R1
7. load b in register2
8. sum R1,R2 => R1
9. load a in register2
10. sum R1,R2 => R1
11. store R1 in address z
On RISC:
1. load a in register2
2. load b in register3
3. load c in register4
4. sum R2,R3 => R1
5. mul R1,R4 => R1
6. sum R1,R4 => R1
7. sum R1,R3 => R1
8. sum R1,R2 => R1
9. store R1 in address z
So 11 steps on CISC vs. 9 on RISC. Moreover, the CISC code does 6 loads/stores to memory (which are slow), while the RISC code does just 4. Which one is faster (even if the actual assembler code may be shorter for CISC than for RISC)?
Okay, on a modern CISC the internal (RISC-like) engine can identify code that accesses previously used variables and keep them in additional temporary registers for later use. But if you access a variable that wasn’t used recently, or handle large amounts of data, this won’t work.
> Microcode constitutes a type of “macro” language implemented in hardware;
And you can use only the macros. This shows another limitation: RISC is more flexible, because as a programmer you can’t use the microcode (the RISC level) in CISC CPUs. I.e., it’s likely that you could optimize a task better by using the microcode directly than by only using the provided macros. That’s also the reason why good performance on RISC is very dependent on a good compiler (because the “macros” are also very optimized).
> However, having macros on-chip reduces the main-memory bandwidth requirement *dramatically*, since one object-code instruction may initiate a dozen or more microcode instructions.
That depends. Firstly, it’s nothing like “a dozen or more”, and secondly, the instruction size on RISC is usually 4 bytes, compared to up to 12 bytes on IA-32. The difference in size between RISC and CISC object code is usually NOT that significant.
> A RISC processor with a large on-chip cache will achieve exactly the same result once the object code (containing the microcode) is copied into the cache.
The Xeon also has a large on-chip cache, and that’s definitely not because the Xeon is RISC – it’s CISC. Server CPUs always have large caches. Look at the ARM
RISCs for the embedded market; they don’t have large caches…
RISC is more flexible and a cleaner design, but it depends on good compilers. I said in a previous post: “RISC and CISC cannot be combined”. I actually meant: RISC and CISC (as ISAs) are not converging. Neither is “better”; both have advantages and disadvantages. I just like the RISC idea more.
Especially considering that one of the most significant ways in which x86 differs from most other semi-modern designs is that it has extremely few registers. This is probably just as significant as the CISC/RISC difference, if not more so, especially when considering out-of-order execution etc. It is possible despite the small number of registers, but it requires much more logic.
And the FPU… well, x86 processors have never been very good in that respect anyhow, and the architecture is definitely an issue there.
http://ultratechnology.com/chips.htm
A RISC programming language _does_ exist.
It is Forth.
—-
References: http://www.forth.org , http://www.fig-uk.org , http://www.forth.org.ru , http://www.colorforth.com , http://www.ultratechnology.com …