Two slides of an AMD presentation were leaked to a Czech web site, and they show SPECint 2000 CPU benchmark results for the upcoming AMD 64-bit CPU, the Hammer. In the two charts you will see the Hammer scoring the best performance among Intel, HP, IBM, Alpha and other CPUs. The Hammer CPU is expected to be released at the end of 2002, after slipping from its original release date of March 2002.
Apple needs to get ready to dump the PowerPC for an AMD platform. I have been a long-time fan of the chip, but Motorola can’t keep up with Intel and AMD anymore :)
The reason I say this is because of the previous article: the IBM PowerPC is listed there in the chart. A PowerPC at 1.3 GHz? That’s cool, but a Motorola G4 is not likely to do that any time soon, because AltiVec is a dead weight on the clock speed.
Apple says that clock speed doesn’t matter, but when you are running at 1 GHz, and everyone else is at 3 GHz, there is something wrong.
Anyway, I can’t wait to build me a Hammer system one day :)
On the second chart it seems that the Hammer and the Athlon will soon perform about equally.
Looks like Sun and Intel are going to have their behinds handed to them on a plate.
It is just amazing that the geriatric x86 ISA keeps going.
Hammer just slaughters all the other processors.
It looks like x86 is going to be around for another 20 years.
Itanium at 1 GHz (Intel’s 64-bit flagship for the future) — 650
UltraSPARC III at 1 GHz — 610
Hammer at an unknown speed (guessed at 2 GHz) — 1350
The Itanium sells for several thousand dollars and only gets a 650.
If AMD sells Hammer for a reasonable price, say $400, while having double the performance, they will just kill Intel. I wonder what the power draw of Hammer is; it has to be less than the Itanium’s 130 watts!!!
Is this another one of those deliberate “leaks”?
End of 2002 instead of March? Ouch!
But Hammer seems to be promising. And if it runs old 32-bit x86 code much faster than the Itanium (in addition to the performance increase shown in this presentation), it will be the dark horse AMD needed.
I really need to find a reason to get one of these. I’m having a hard enough time figuring out why I need dual Athlons. Damn, I wish I had a use. Well, there is always SETI and distributed.net.
Maybe Microsoft will give me a reason, or a nice CAD program will fall out of the sky.
> Apple says that clock speed doesn’t matter, but when you are running at 1 GHz, and everyone else is at 3 GHz, there is something wrong.
And Apple is right, in my opinion. “The Clock Race” is an artificial thing created by Intel to undermine the competitors… do you know what the clock speeds of MIPS and the “Gekko” (the GameCube processor) are? Also, a Cray supercomputer does not go beyond 500 MHz.
If your processor can execute two instructions per clock cycle at 500 MHz, that is the same as another that executes just one instruction per clock cycle at 1 GHz.
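To put toy numbers on that (a made-up calculation in C, not measurements from any real chip):

    /* effective throughput = clock rate x instructions completed per cycle */
    #include <stdio.h>

    int main(void) {
        double mhz_a = 500.0,  ipc_a = 2.0;  /* hypothetical 2-way chip at 500 MHz */
        double mhz_b = 1000.0, ipc_b = 1.0;  /* hypothetical 1-way chip at 1 GHz   */
        printf("chip A: %.0f million instructions/sec\n", mhz_a * ipc_a); /* 1000 */
        printf("chip B: %.0f million instructions/sec\n", mhz_b * ipc_b); /* 1000 */
        return 0;
    }

Same work done per second, half the clock.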
Anyway, I want a Hammer beast too :)
P.S.: Sorry for my poor English; it is not my native language.
Have to say that while the performance certainly is impressive, it is not that interesting to see just SPECint. Good integer performance was very important a few years ago, but it has always lived in the shadow of floating-point performance, and does especially now that most pure integer tasks are performed quickly enough anyway. Give us a SPECfp along the same lines and we will all hail the Hammer as the new ruler of the high end, but as it stands I really don’t care that much.
It seems I read somewhere that when clock speeds reach a high enough level, the processor actually becomes less efficient. I can’t seem to recall where I heard that.
Anyway, hasn’t AMD been pushing the same idea, that performance != clock speed, with their XP chips, just as Apple has for some time?
http://www.anandtech.com/showdoc.html?i=1546&p=1
I hope AMD knows exactly what they are doing with this decision to have yet another “dirty” architecture, in the sense that it’s not purely 64-bit. The Hammer will have “R” registers (rax, rbx, rcx, rdx, etc.) which are 64-bit, but whose lower 32 bits can be accessed as the old Intel 32-bit registers (eax, ebx, ecx, etc.). It’s the same move Intel made when going from 16-bit to 32-bit. However, that move seems to have been a good one on Intel’s part, business-wise. I hope all goes well for AMD. If pure 64-bit architectures become /it/ in the near future, then the Itanium could steal the spotlight.
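For anyone who hasn’t dug into the x86-64 papers, here is a minimal C sketch of what that register aliasing means. The function names are made up to model the registers; the zero-extension behavior is what AMD’s published spec describes for 32-bit writes:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t reg64;  /* one 64-bit "R" register, e.g. rax */

    /* reading eax = the low 32 bits of rax */
    static uint32_t read_eax(reg64 rax) { return (uint32_t)rax; }

    /* writing eax zero-extends into the full 64-bit rax (per the spec) */
    static reg64 write_eax(uint32_t v) { return (reg64)v; }

    int main(void) {
        reg64 rax = 0x1122334455667788ULL;
        printf("eax = 0x%08x\n", read_eax(rax));              /* 0x55667788 */
        rax = write_eax(0xdeadbeefu);
        printf("rax = 0x%016llx\n", (unsigned long long)rax); /* 0x00000000deadbeef */
        return 0;
    }

Old 32-bit code keeps working because it only ever sees the e-registers.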
http://www.theregister.co.uk/content/archive/22328.html
So in SPECint AMD got 1350, at 2 GHz?
Motorola (G5) got 1340 at 1.6 GHz, with chips already sampling at 2.4-2.6 GHz…
What about FP? (Motorola at 1359, at 1.6 GHz.)
Even though the products are not available yet, this might become a very interesting year….
Eric Murphy:
>>Apple says that clock speed doesn’t matter, but when you are running at 1 GHz, and everyone else is at 3 GHz, there is something wrong.<<
MHz comparisons don’t matter when comparing different CPU architectures. Look at Sun Microsystems: they are still running clock speeds similar to Motorola/IBM/Apple and use the same ‘RISC’ design philosophy. So, for you PC users who don’t know the difference between ‘RISC’ and ‘CISC’: you should do some research, or maybe one of the other guys (or gals) here will give a good explanation?!
I think there is a valid criticism of Motorola in the clock-speed department. I don’t care so much about raw clock speed if there is a jump in architecture. If, for example, we went from an 800 MHz G4 to a 600 MHz G5, but the G5 ran code twice as fast, then the MHz issue would be a non-issue. The problem is that the G4 processor line is basically running only twice as fast now as it did back in mid-1999 when it was first released. Compare this to the equivalent speed increases in the x86 world. Basically, the PowerPC platform that at one point provided a 10x speed advantage (on some operations) now runs neck-and-neck or worse with the x86 platform. That is what is unacceptable. Unfortunately, benchmark after benchmark is showing that there is little to no performance advantage for the G4 processor anymore. Unless the G5 provides a radical jump in performance, there is little incentive to stay with Motorola.
If Apple does make the jump, they should do it by still making the computers require Apple-specific hardware to run. This would make it so that they can have the new processor architecture without worrying about clone makers cannibalizing their sales.
Well, CISC stands for Complex Instruction Set Computing, and RISC stands for Reduced Instruction Set Computing.
RISC won. Don’t listen to the stuff Apple and Motorola put out; all modern x86 processors are RISC at their core, with the x86 instructions translated into internal RISC-style instructions by a hardware decoder.
AMD calls them macro-ops. Yes, it makes an Athlon go a little slower than it could, but it provides the backwards compatibility people want.
Got to keep running Win 3.1 and DOS on that 2 GHz P4. :/
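A toy model of that decode step, for the curious. The three-op split below is illustrative only, not AMD’s actual macro-op encoding:

    /* one complex x86-style instruction, "add [mem], reg",
       cracked into simpler internal RISC-style ops */
    #include <stdio.h>

    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop;

    static const uop add_mem_reg[] = { UOP_LOAD, UOP_ADD, UOP_STORE };
    static const char *names[] = { "load", "add", "store" };

    int main(void) {
        puts("decode: add [mem], reg ->");
        for (size_t i = 0; i < sizeof add_mem_reg / sizeof add_mem_reg[0]; i++)
            printf("  uop %zu: %s\n", i, names[add_mem_reg[i]]);
        return 0;
    }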
The G-series processors from Motorola are slow because their main market is NOT the desktop. Apple is small change compared to the embedded space.
Embedded people want low power use, and they don’t need or want high clock speeds. Running a car’s ignition only takes a G3 at 250 MHz.
If it were not for Apple and a few IBM computers, the G-series processors would only be in embedded systems and some game consoles.
Apple is going to have to wake up and admit the PPC was the wrong choice. It looked good at the time, but there just is not enough market demand to push Motorola to keep up with Intel and AMD.
Maybe the G5 will save the day and the PPC arch can be kept alive on the desktop. Time will tell.
>>RISC won. Don’t listen to the stuff Apple and Motorola put out; all modern x86 processors are RISC at their core, with the x86 instructions translated into internal RISC-style instructions by a hardware decoder.<<
Where did you dream this up? And yes, I know exactly the definitions of ‘RISC’ and ‘CISC’! As for the PPC, it has more uses than just computers themselves. Though Intel has optimized its ‘CISC’ design, it is still inefficient for simple electronics and is hard to adapt to all sorts of devices ranging from TVs to toasters. Above all else, most electronics companies need something more efficient when power consumption is a priority. Take laptops, for example: power matters, so I’ll take PPC over x86 any day!
Intel can push MHz all they want, but until they design a more efficient CPU than the old ‘CISC’ architecture, they are not impressing me at all.
Oh, and one more thing… I work with and program on Sun Microsystems workstations and servers, Linux boxes, Windows PCs and my Ti-Book G4 (the fastest laptop in the building; our work laptops are IBM ThinkPads and Toshiba Satellites, in case you’re wondering). There is a difference in performance between ‘RISC’ and ‘CISC’, including between the CPU types SPARC/PPC/x86!!!
Don’t read too many stories until you work with all the different equipment yourself at a professional level!!!
At the end of the day software is another important factor… I can take BeOS and run it on the same PC as Windows and get better performance. So to be honest I don’t think about MHz until I know what OS I am running and what machine is optimized to run it!!!
Have you read ANYTHING about x86 chips in the last several years? They are mostly pure RISC cores. Here is a good PDF to get you started: http://www.epos.kiev.ua/mailserv/amdk7tb.pdf. Now that we’ve settled this stupid RISC vs. CISC business (btw: there are no “pure” RISC or CISC chips anymore; they’ve converged a lot), let’s settle this power consumption business. The reason x86 chips generally have huge power consumption is that they have more electronics. x86 compatibility is a huge burden. The architecture makes all sorts of guarantees that require hardware assistance. For example, x86 promises that all interrupts will be taken at instruction boundaries. With all of the x86 -> risc-op decoding that goes on, combined with speculative execution and whatnot, this is a very non-trivial promise. A lot of embedded processors don’t make such guarantees, and can thus ditch the associated electronics. Also, x86 processors have lots of functional units. A K7 has 3 integer pipes and 3 FP pipes. Meanwhile, a G4 has two integer units, one FPU, and one vector unit. ‘RISC’ processors with similarly large amounts of resources, like the Alpha (4 int, 2 FP), have similarly huge power requirements.
All modern processors are RISC at heart; ever since the 686 generation, new Intel/AMD chips have translated x86 instructions into internal RISC-style instructions, although they do have several big units bolted on, i.e. SSE/MMX/3DNow! and all, but then again Motorola has AltiVec.
I am willing to bet that if AMD or Intel were to strip the translation layer out of their chips and let people code for the internal ops, they would beat any “RISC” chip out there.
Then again, I may be totally wrong, of course.
And yes, maybe I should explain myself better; I do take the hardware seriously. I have read about this stuff in one of Robert X. Cringely’s articles. But the fact remains that if you were to take a Pentium III at 400 MHz and a G3 at 400 MHz (since it would be unfair to compare a G4, due to the Velocity Engine’s advantages), the performance difference is huge, but if you were to bump up that Pentium III to about 700 or 800 MHz, then you’d see a fair comparison. I have tested this with our machines at work, but then again, I don’t even take the MHz issue seriously until I know what OS I am using. BeOS performs better than any other OS on similar hardware, and I have tested this fact when comparing it with Linux and Windows (on my buddy’s home-built PC, and at work with our Compaq Deskpros using PIIs/PIIIs at 400 MHz).
The whole notion that Motorola/IBM/Sun have to match Intel/AMD in clock speed is totally misleading, and it makes normal PC users think that a Sun UltraSPARC III and/or a Motorola/IBM PPC/G4 has to match an Intel Pentium/Celeron and/or AMD Athlon/Duron clock-for-clock to perform the same. Heck, it is almost unfair either way, due to the software being run on it, whether that is Windows, Linux, BeOS, Mac OS and/or Solaris.
So why people keep comparing only MHz (clock speeds) when there are other factors involved, including the software, is beyond me! If you guys are up on all this hardware stuff, then you know that comparing only MHz is very immature and incorrect when other factors are just as important and need to be addressed!!!
Since we are comparing chips and marketing for the two… I will read your PDF file, and you can read this PDF file as well:
http://a1872.g.akamai.net/7/1872/51/fc3f3a53a0c596/www.apple.com/po…
This will give you a deeper understanding of the PPC technology and how AltiVec works!
If I understand this right, CISC means that complex instructions are performed by the processor over several clock cycles. RISC was invented for better pipelining and higher clock speeds. One demand RISC had to fulfill was that every operation takes only one cycle; otherwise pipelining wouldn’t function properly. So to execute something comparable to one CISC command, a RISC processor may eventually need more cycles than a CISC one. But due to pipelining and the possibility of higher clock rates, RISC is faster than CISC. Now we see that so-called new CISC processors like the Pentium have much higher clock rates than classic RISC processors. That means they are able to execute complex instructions faster than RISC processors execute reduced instructions. Or is there a point I don’t get?
Greetings from Anton
That is also correct; the bit width is important as well. The PPC chips process data in 128-bit chunks, compared to Intel’s x86 chips processing 32-bit (pseudo 64-bit) chunks, which creates a bottleneck effect that can drag down CPU performance. But one thing Intel can say that Apple/Motorola/IBM can’t is that they don’t just stop at the CPU; they also optimize the other components to further push their CPU performance, which I wish Apple would take into consideration! Though it may be true that ‘RISC’ and ‘CISC’ are a lot more similar than they used to be (and yes, I probably read the same tech news you guys do), there are still some differences. I did read that the G5 chips will be getting some more instructions added, so hopefully they will bump up the MHz to counter that (which we all know they will)!
Quote:
“The majority of today’s processors can’t rightfully be called completely RISC or completely CISC. The two textbook architectures have evolved towards each other to such an extent that there’s no longer a clear distinction between their respective approaches to increasing performance and efficiency. To be specific, chips that implement the x86 CISC ISA have come to look a lot like chips that implement various RISC ISAs; the instruction set architecture is the same, but under the hood it’s a whole different ball game. But this hasn’t been a one-way trend. Rather, the same goes for today’s so-called RISC CPUs. They’ve added more instructions and more complexity to the point where they’re every bit as complex as their CISC counterparts. Thus the “RISC vs. CISC” debate really exists only in the minds of marketing departments and platform advocates whose purpose in creating and perpetuating this fictitious conflict is to promote their pet product by means of name-calling and sloganeering.”
Ars Technica’s old RISC vs. CISC article: http://www.arstechnica.com/cpu/4q99/risc-cisc/rvc-1.html
I read that article, it’s pretty good and I like the other factors it pointed out like what I said above (software being one of them)!!!
“…the PPC chips process data in 128-bit chunks”
I don’t think this is correct. As far as I know, the G4 is a 32-bit processor just like a Pentium or Athlon. I think it’s only the AltiVec unit that can process data in 128-bit chunks, not the whole processor. If a G4 could process all data in 128-bit chunks, we would call it a 128-bit processor.
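To make that concrete: the 128 bits are the width of one AltiVec register, i.e. four 32-bit values handled at once. On a G4, a single vaddfp instruction would cover all four lanes; the C below is just the scalar equivalent of that one vector op:

    /* scalar equivalent of one 4-wide AltiVec float add */
    void vec_add4(const float a[4], const float b[4], float out[4]) {
        for (int i = 0; i < 4; i++)
            out[i] = a[i] + b[i];  /* AltiVec: all four adds in one instruction */
    }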
That is what I meant, thanks for the technical correction!
Let’s be honest, the Mac vs. PC war is DAFT and boils down to…
Mac: I have a pretty box!
PC: I have lots of silly numbers!
Mac: Your silly numbers are silly! Ha! I have a pretty box!
/me drinks even more caffeine and dances merrily about the office, dribbling…
Thanks for the article. However, you’re wrong about the whole 32-bit vs 128-bit thing. The x86 is a 32-bit chip with a 64-bit bus and a 128-bit vector unit (SSE). The G4 is a 32-bit chip (properly, a 32-bit subset of a 64-bit architecture) with a 64-bit bus and a 128-bit vector unit. If the G4 is faster per clock, it’s only because of a better architecture (fewer pipeline stages, etc.). Of course, it’s a moot point. As nice an architecture as the PPC might be (though I prefer the Alpha; it’s so clean to write OS code for), if the Intel chips have double the clock rate, the PPC chips will be hard-pressed to beat them. That’s why I’m hoping Motorola gets its act together with the high-clock-speed (and efficient) G5 chips.
>[..] if the Intel chips have double the clock rate, the PPC chips will be hard-pressed to beat them.
Exactly. A G4 at 800 MHz may be like a PIII at ~933 (or somewhere along those lines), but there is no way the current G4 offerings can compete with today’s big Athlons or P4s above 1.3 GHz. Motorola needs to catch up, badly. It is sad for Mac people indeed, but it is the truth.
>>Thanks for the article. However, you’re wrong about the whole 32-bit vs 128-bit thing. The x86 is a 32-bit chip with a 64-bit bus and a 128-bit vector unit (SSE).<<
Don’t get the other guy’s comment (buzzlightyear) confused with what I was talking about. I never said the PPC/G4 was a 128-bit CPU; I was referring only to AltiVec vs. MMX, and I should have been more precise in my wording, sorry. As for the above statement… is this with the Pentium 4?
I still haven’t finished reading the AMD PDF article yet (it’s been a busy night at work, sorry). If I do return to the PC for my home computing, I would go AMD, since most of my PC friends swear by them!
As for Motorola, it seems that ever since their huge failure with the Iridium project, they can’t get their act back together, so I can agree with you there, and I hope the G5 will be a success!
How can you say that when P4s can’t really compete with PIIIs of similar spec? And AMD comparing their CPUs clocked 500 MHz lower than Intel’s just proves that MHz alone doesn’t matter!!!
Check this out… it was a recent benchmark test of various G4 800 MHz CPUs vs. P4s. It’s a good article, and it points out the advantages and disadvantages of both the G4 and the P4!
http://www.barefeats.com/pentium4.html
>How can you say that when P4s can’t really compete with PIIIs of similar spec?
P4s run at 2 GHz and PIIIs at 1.2 GHz max. Code compiled for the P4 truly shows the power of the CPU.
>And AMD comparing their CPUs clocked 500 MHz lower than Intel’s just proves that MHz alone doesn’t matter!!!
First off, it is not 500 MHz, it is 300 MHz, I think. Second, I never said that the G4 is worse than PIIIs at the same MHz. I said that G4s are BETTER than PIIIs at the same MHz. But NOT by as much as you think they are. And *even* if you add a whopping 500 MHz in their advantage, an 800 MHz G4 will only be like a 1.3 GHz PIII (let alone the fact that they are more like a PIII 933 in reality, according to SPEC). So even if I add that 500 MHz, G4s are equivalent to a P4 or Athlon at 1.3 GHz. But today, P4s are running at 2.2 GHz!!! This is what we are trying to explain to you. EVEN with the G4 being BETTER than the equivalent x86 CPUs at the same MHz, they cannot compete *anymore* with the new offerings from Intel or AMD. G4s are stuck on last year’s “fashion”, performance-wise. The best G4 today is, performance-wise, 1 GHz behind the x86 offerings of today. Motorola needs to CATCH UP, badly!
Not always true; even the G3s of similar spec were faster than PIIIs at the same clock speed. I have already proven this point with the PCs we run at work that have PIIs/PIIIs at 400 MHz, comparing them to my iMac at home running a G3 at 400 MHz: in real-world performance there is a significant difference, from startup to launching applications to actual computing itself. And the last I read, AMD was comparing their 1.6 GHz chips to anything Intel was dishing out up to 2.2 GHz, so I have to say that comparing only clock speed is a flawed concept, and CPU architecture has to be taken into account before anything else is relevant. It’s like comparing horsepower in motor sports; that is not the way you compare engines (this is another hobby of mine). Other factors are involved, like torque and gear ratio; everything from the intake to the exhaust down to the wheels/tires is taken into account, not just horsepower, and people never seem to understand that (it took me a while to learn that too). Honda (and other car companies) has proven this to the ‘Big 3’ numerous times over. Most analysts have been comparing 500 MHz G4s to anything from AMD/Intel running over 1 GHz, with similar performance, and better performance in certain areas.
I do agree that Motorola needs to get their head out of their rear, get serious, and bring the PPC technology back to the top notch it was at a couple of years ago, because Intel will keep pushing along with MHz and tech improvements whether Motorola (or even IBM, for that matter) wants to keep up or not!
I just found this, actually. I had a read, and I think it gives a darn good explanation of what I have been saying all along:
Part I
http://maccentral.macworld.com/news/0104/09.megahertz.shtml
Part II
http://maccentral.macworld.com/news/0104/11.myth.shtml
Part III
http://maccentral.macworld.com/news/0104/17.myth.shtml
Though it’s from a Mac site, it is not as biased an article as you might assume!
Stop jabbering about clock frequency; asynchronous is the way to go.
Read this:
http://www.nytimes.com/2001/03/05/technology/05IVAN.html?pagewanted…
The only problem with async is that you get large amounts of latency as you begin to compute large amounts of data.
A) AltiVec (Velocity Engine) is 128-bit; x86 chips are largely a RISC/CISC mish-mash, but they still keep most of their code tied to CISC. This is why the Itanium, in and of itself, was to be a combination, allowing RISC/CISC processes to work in conjunction with one another. This might also be a big reason why the Itanium is God-awful expensive and hasn’t been able to be cranked up, due to its extensive complexity. Hammer? Still unsure, but I’m banking that AMD makes this processor more efficient and simpler. Simple is proving to be the way to go. After all, if Motorola had stuck with the G3, they could have been beating the pants off x86, but with the heft of AltiVec added, they’re only faster (although significantly so) when the application is heavily optimized.
B) The G4 running against the P4: the G4 is slower on average, until you factor in AltiVec. The problem with x86 platforms has been, and to my knowledge remains, that when the x86 part of the processor is operating, MMX or 3DNow! is shut off, and when the MMX or 3DNow! parts of those processors are running, the x86 instruction set is shut off. With that said, the PowerPC, using a 64-bit architecture with a 32-bit instruction set, has extra room to allow the PowerPC core and AltiVec/Velocity Engine to operate in “PARALLEL”. This means that on any graphics-intensive application optimized for AltiVec, the only way to beat it on the PC side is to add processors and split the load, which means that your PC eventually costs as much as a similar single-processor Mac with equal performance. When you factor in dual-processor G4s… well, you get the idea.
C) With the G5, things will change. You see, the G5 is designed from the ground up to support multiple processors on a single chip. This is vastly different from the G4, which was more or less a refined G3 with room for an AltiVec vector processor, plus optimizations to allow two chips to exist on a motherboard. The G5 will allow multiple processors (I believe up to 4) to exist on a single chip. It’s also known that the G5 is “NOT” an embedded chip in the traditional sense; the G5, more or less, is geared towards being both an embedded and a traditional chip. This “WILL” be achieved because the G5 can be shipped as a 32-bit or a 64-bit processor. The 64-bit processor is less energy-efficient, but also promises greater speeds, not so much in SPECint but more in SPECfp, which, as noted earlier in this thread, is where the “IMPORTANT” information lies. I’ve also heard that the IBM and Motorola architectures will be closely paralleled, and that IBM will unveil its “OWN” vector-processor architecture, which reportedly Apple will support in OS X upon IBM’s release of said chips. After the PowerPC debacle and Jobs’ issues with Motorola, along with Motorola’s floundering in “MANY” key markets, “EVERY” aspect of Motorola’s business plan is important, as are Apple’s sales. It’s sorta’ like Vidal Sassoon: if Apple doesn’t look good, Motorola doesn’t look good. You can bank on Apple working tightly with both, as IBM and Motorola will depend on it in regards to these processors (much as IBM’s G5 variant will be used in AIX boxes).
D) The G5, compared to the G4, is like comparing an Itanium or Hammer to an ARM or a StrongARM, rather than to a PIII or P4. The G5 (8xxx) will be much more closely related to the Power3/Power4 architectures, and therefore to the Motorola PowerPC 615 (a 64-bit RISC processor with 64-bit instructions), than to the 6xx, 7xx (G3), and 7xxx (G4) series of processors. The difference is… the Itanium is like a CISC/RISC fusion, intertwined fluidly. Hammer? Don’t know… it might be a CISC/RISC hybrid, more like the PIII and PII than the Itanium. The PowerPC G5? It’s strict RISC with an expanded interpretation of the instruction set. How will it fare? Hard to say ’til you get a real-world comparison and see how the SPECfp and SPECint numbers intertwine. My guess is it’ll probably be about even across the board.
I wanted to add my two cents to the RISC and CISC comparisons. There has been a discussion going on about which type of architecture is better. This is not an appropriate way of looking at the situation, because both types have their strengths, and the weaknesses of both are often overcome with technology improvements and ISA changes.
Basically, the RISC concept became popular in the mid ’90s. Many designers got the idea that the basic CISC processor was getting overloaded with specialized commands. Since each of these commands takes a circuit to implement, the processor had to grow as commands were added. As Intel added commands, “modes”, and bit levels to the processor, they had to add more and more chip circuitry. Often these instructions were very complex and required multiple clock cycles to compute a result (often up to 5 or 6). It became challenging to perform instructions in parallel, especially when some instructions required 3 clock cycles and others some other number. This is where RISC comes in. Designers figured that if they could design a processor with only very basic instructions (and a limited number of them), they could optimize the circuitry to make sure that all of the instructions could (remember this:) PRODUCE A RESULT on every clock cycle. Since all of the instructions were very basic, they could be performed in one clock cycle and could then be “PIPELINED”. What this really means is that while one instruction is being crunched to get a result, another instruction is simultaneously being pulled from memory (called “being fetched”). This way, you can crunch on a one-cycle instruction and have the next instruction queued up for the processor, so instructions are processed one clock cycle after the next, in a pipeline. The problem with the CISC style is that (traditionally) the next instruction is fetched only after the result is computed, meaning you waste clock cycles just fetching (though they get fancy on newer processors). The RISC designers believed that in doing this, not only would they effectively use every clock cycle, but they would also greatly reduce the amount of circuitry in the core of the processor, which gives great benefits: less heat dissipation (and longer battery life, since you use less power) and a smaller footprint. The RISC designers effectively made the processor simpler and more efficient, but it could do less with one instruction. They believed that programmers could combine instructions to perform larger tasks, and that in doing so, processor functionality could be improved through software rather than hardware.
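To make the pipelining win concrete, here is a back-of-the-envelope cycle count in C. The pipeline depth and cycles-per-instruction numbers are invented for illustration:

    #include <stdio.h>

    int main(void) {
        const long n = 1000;      /* instructions to execute */
        const long stages = 5;    /* assumed pipeline depth */
        const long cisc_cpi = 4;  /* assumed cycles/instruction, non-pipelined */

        long risc_cycles = stages + (n - 1); /* fill the pipe, then 1 result/cycle */
        long cisc_cycles = n * cisc_cpi;     /* fetch and execute serially */

        printf("pipelined RISC: %ld cycles\n", risc_cycles); /* 1004 */
        printf("serial CISC:    %ld cycles\n", cisc_cycles); /* 4000 */
        return 0;
    }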
There are some drawbacks to this way of doing things as well. For a practical example, some RISC processors do not have a command (opcode) to perform multiplication. This is especially true of more primitive processors like some in the Microchip PIC family. In this example, if a programmer wanted to, say, multiply a number by 100, the programmer would have to set up a loop to add the number to a register 100 times! On some primitive RISC processors, loops actually take 2 clock cycles per iteration, which means at least 200 cycles would be used to perform the calculation. In a CISC processor, since there is circuitry (or a module) to perform multiplication, the multiplication may take fewer clock cycles, like 5 or 6 on primitive CISC. That is a 20x improvement in speed for CISC running at the same clock speed. Of course, there are cases the other way around, where simpler commands take longer on CISC because of the one-cycle execution of code on RISC.
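Here is that multiply-by-100 example in code, with C standing in for PIC assembly (the cycle counts are the rough ones given above):

    #include <stdint.h>

    /* no hardware multiplier: repeated addition, ~100 iterations at
       ~2 cycles per loop on a primitive RISC core = ~200 cycles */
    uint32_t mul100_loop(uint32_t x) {
        uint32_t acc = 0;
        for (int i = 0; i < 100; i++)
            acc += x;
        return acc;
    }

    /* with a multiply circuit (CISC), this is a single instruction,
       maybe 5 or 6 cycles on a primitive part */
    uint32_t mul100_hw(uint32_t x) { return x * 100u; }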
Today, I have noticed that many processors use a combination of techniques to overcome the weaknesses. It is true that Intel implements larger instructions in the core as a bunch of smaller ones. This is good because it allows the instruction to be executed efficiently, reduces power consumption, and maintains backward compatibility <<<— a big deal. The CISC chips have also overcome power limitations in that Intel uses better processes (smaller trace sizes, down to .13 microns now) to make circuits run on less power and produce less heat. This is why you can get a Pentium 3 or 4 notebook and have a decent run time despite the fact that it’s a traditional CISC architecture. The RISC types are inherently good at saving power, and if programmed right can be dramatically faster in some cases. Just ask any electronics engineer about using a microcontroller in circuits. I have converted many circuits that I had built on CISC architectures to the RISC-based PIC microcontroller and have seen a 20x increase in speed in the areas that count in my application. I have seen other applications where the CISC types have been faster (like lookup tables, multiplying, dividing, computing lines on plots, etc.).
Lastly, one of the biggest problems I have seen with modern PCs is the fact that people buy the fastest processor available and then saddle it with little memory and a slow 5400 RPM IDE hard drive. The hard drive seems to be one of the biggest bottlenecks. I would rather use a 300 MHz AMD K6 with a 7200 RPM SCSI or Ultra ATA drive than a 1.4 GHz P4 with a slow 5400 RPM IDE drive; it’s just that noticeable in actual use. A lot of memory is also needed, especially under the Windows 9x family of OSs, because they handle memory so poorly and often don’t even unload program data (they just cache it for some reason). So the moral here is: get a good processor, but spend more on the memory and hard drive, which in real use will give you performance you can use, unless you are doing CAD or crunching numbers all day, which seems to be only for engineers these days.
Dano.
Forgot to mention that code size (memory footprint) is often significantly smaller for a CISC program, because it takes fewer commands to perform a given function. Often, a compiled program on a RISC machine is larger, even though it may have been compiled from the same source language (like C and the like). This code size is often important when using ROM environments to hold the operating system or control program…
Dano.
>>I wanted to add my two cents to the RISC and CISC comparisons. There has been a discussion going on about which type of architecture is better. This is not an appropriate way of looking at the situation, because both types have their strengths.<<
But you are also saying that comparing clock speed (MHz/GHz) is also a flawed concept, right?!
>>CattBeMac (IP: —.speed.planet.nl) wrote on 2002-01-24 14:53:13: Dano…
>>I wanted to add my two cents to the RISC and CISC comparisons. There has been a discussion going on about which type of architecture is better. This is not an appropriate way of looking at the situation, because both types have their strengths.<<
>>But you are also saying that comparing clock speed (MHz/GHz) is also a flawed concept, right?!<<
Not sure what you mean here. If the chips do different functions in different numbers of clock cycles, what does comparing MHz to MHz tell you?