Usually, x86 tutorials don’t spend much time explaining the historical perspective of design and naming decisions. When learning x86 assembly, you’re usually told something along the lines of: Here’s EAX. It’s a register. Use it.
So, what exactly do those letters stand for? E–A–X.
I’m afraid there’s no short answer! We’ll have to go back to 1972…
I love digital archeology.
Many of us lived through this period, so we have no need to be told things we already know. It’s the young whipper-snappers that need history lessons like this. It’s still fun to read.
Other 8086/x86 registers have similar meanings:
BX = Base register
CX = Counter register
DX = Data register
Certain instructions only work if you have data in the correct registers. For example, the string comparison/copy instructions take their pointers from the SI (Source Index) and DI (Destination Index) registers and, given a REP prefix, repeat exactly the number of times specified in the CX register, decrementing CX and advancing SI/DI as they go. The JCXZ (jump if CX is zero) and LOOP instructions can then be used to branch from there.
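Something like this, as a minimal sketch (NASM-style 16-bit syntax; the buffer names are made up for illustration):

    mov si, src_buf   ; SI = source pointer
    mov di, dst_buf   ; DI = destination pointer
    mov cx, 64        ; CX = number of bytes to copy
    cld               ; clear the direction flag so SI/DI increment
    rep movsb         ; repeat MOVSB, decrementing CX until it reaches zero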
I am genuinely surprised every time I learn a new fact about how backwards-compatible modern PCs are when it comes to hardware.
kurkosdr,
It’s by far the biggest selling point for x86. Every time a new PC architecture crops up, it’s judged not so much on its own merits as on how well it runs Wintel software. To this day x86 remains the king of desktop PCs, not because it was superior to the alternatives, but because most business and consumer software runs exclusively on Windows & x86, and that is by far the most profitable platform in terms of consumer investment. For better or worse, Intel failed to capture the new mobile markets (when they were new) because there was no need there to be backwards compatible with pre-existing work. It’s just really hard to change an established market’s momentum.
Intel on mobile had the problem that x86 wasn’t compatible with ARM, with several ARM-native apps crashing or running slowly in Intel’s translator. That was a problem, since most ARM-native apps were games and other “benchmark” apps.
x86 isn’t as bad as people think, btw. It has proven itself able to leverage new technologies and scale well. What’s the cost of JIT-ing those old instructions? 5% of the chip? More than worth it. The absolute worst instruction set is SPARC, which requires triplicating registers in the execution units for no performance gain, thanks to the useless “register windows” feature mandated by the SPARC instruction set. This proved to be a big problem when multiple execution units started going into a core and multiple cores into a CPU, eventually dooming SPARC.
Yeah, the register windows were also a PITA for out-of-order SPARC designs.
I think the register windows may have made sense for the original in-order, non-superscalar SPARC pipeline, and I assume the concept made lots of sense from a compiler-support standpoint as well.
But even in the original SPARC designs they ended up with huge register files, for little gain.
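For those who never met them: on SPARC, a procedure call can slide to a fresh register window, with the caller’s out registers (%o0–%o7) becoming the callee’s in registers (%i0–%i7). A rough sketch of the usual idiom (illustrative only, not from any real code):

    caller:
        mov  42, %o0          ! argument goes in the caller's out register
        call callee
        nop                   ! branch delay slot
        ! result is now in %o0

    callee:
        save %sp, -96, %sp    ! slide to a new window; caller's %o0 is now our %i0
        add  %i0, 1, %l0      ! %l0 is a fresh local register in this window
        ret                   ! return to the caller...
        restore %l0, 0, %o0   ! ...sliding the window back, result into caller's %o0

The hardware only implements a fixed number of windows, so deep call chains trap to spill them to memory – part of why they were such a pain for out-of-order designs.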
It’s funnier that the register name expands to Extended Accumulator eXtended.
If you follow the EMS/XMS convention, that would be Expanded Accumulator eXtended.
When I learnt assembler in the 90s it was on an ARM, with its clean numerical register naming and everything generic. Apart from the program counter, registers were special purely by convention. It felt very beautiful at the time, but ARM chips have become a lot more complex now, arguably more CISC than RISC.
It’s interesting to understand how both instruction sets are as much a product of history as of technology.
I always get a chuckle when people praise RISC assembly for being so much nicer to program for than CISC, when the whole initial point of RISC was for programmers to have as little contact as possible with their assembler.
I have a 16-row HTML table (excluding headers) laying out the format of general ARM instruction classes. Zeroes, ones, and fields. The general instruction classes (data ops, memory, co-processor, etc.) are expanded in other tables, including condition codes, shifter ops, and the traditional/modern register sets. All told, 7 screen pages.
The Thumb decoding charts fit on a single screen page. Instruction format, field key, and opcode breakdown for formats 1, 3, 4, and 5.
The only simpler opcode chart I’ve done is for the ZPU.
No, the whole initial point of RISC was to acknowledge that memory was far slower (at the time) than CPUs. This meant a load/store architecture would be faster, and load/store architecture is one of the key fundamentals of RISC design. Whether or not a programmer used assembly had nothing at all to do with it. I have written as much assembly for RISC as CISC, and don’t see any issue with doing so. Avoiding assembly took decades, as compilers had to advance quite a bit before they became remotely comparable to the speed decent hand-written assembly could reach.
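To illustrate the load/store point (a pseudo-assembly sketch; counter is just an illustrative variable):

    ; CISC style (x86): the memory operand is folded into the arithmetic
    add eax, [counter]

    ; RISC load/store style: memory is touched only by explicit loads and stores
    load r1, counter      ; fetch the value into a register first
    add  r0, r0, r1       ; arithmetic happens only between registers

Separating memory access from ALU work is what let the pipeline schedule around the memory latency.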
Well, the original point of RISC had more to do with transferring design complexity from hardware to software. RISC machines were designed with the assumption that compilers would do most of the heavy lifting. Which is why I think it is amusing that people ended up liking RISC ISAs better, when they’re, for most intents and purposes, the exposed micro-ops that CISC architectures shield the programmer from via microcode.
In the end, if you program in RISC you end up writing the same little instruction sequences (or macros) over and over, while in CISC the same work takes just a couple of instructions.
To be clear, I am not claiming either is better or worse. I am just amused that one of the original goals of RISC was to have the programmer touch the ISA directly as little as possible, and just let the compiler write most of the assembly.
Letting someone else do most of the work is a basic point of nearly every job since the beginning of time. 😀 Use of macros in assembly was as big for CISC as for RISC; that is more a point about reducing the amount of work you have to do than a CISC/RISC issue. Indeed, x86 being so non-orthogonal meant you were more likely to use macros or rely on a compiler to lessen your workload. That was another point of RISC – improving the orthogonality of instructions, which made assembly easier to program because instructions were easier to remember. More than half of programming in x86 was trying to remember which register did what, since they weren’t usually general purpose. That’s where the macros came in – the macro was written to put the arguments in the proper registers to do what you wanted.
javiercero1,
For me, I just find RISC a lot more consistent. With x86 I feel like it’s mostly designed around legacy factors. It may have been pragmatic at one point to reduce the number of transistors at the expense of having software shuffle data between limited, special-purpose registers, but these days it’s counterproductive, and we’re throwing a lot of transistors (and microcode) at making x86 software run faster. Arguably the complex pipelines of x86 were necessary to optimize away software inefficiency caused by having to design (and compile) around the severe limits of the architecture. These limits were especially problematic before x86-64: 32-bit software doesn’t have enough registers, and computation is often forced onto the stack even for intermediate values.
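One concrete symptom, as an illustrative sketch: the common 32-bit cdecl calling convention passes every argument through the stack, while the x86-64 System V convention keeps the first several integer arguments in registers (add_pair is a made-up function):

    ; 32-bit cdecl: arguments go through memory
    push dword [y]
    push dword [x]
    call add_pair
    add  esp, 8           ; caller cleans the arguments off the stack

    ; x86-64 System V: the same call, arguments stay in registers
    mov  edi, [x]
    mov  esi, [y]
    call add_pair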
Intel and AMD have done a great job of accelerating this, of course, but unfortunately some of the complexity that smooths out the architecture comes at the expense of greater heat output and power consumption. Obviously this varies by application, but I predict that overall x86 processors will continue to be at a disadvantage in terms of energy consumption.
The 8008 and the Z80 have a very similar instruction set. I believe the Z80 was actually an enhanced clone originally….
The Z80 was a kind-of clone of the 8080. It’s backwards compatible with the 8080, but extends the instruction set.
The iAPX432 delay has a lot to answer for! It would have been a totally different world. All stack machines. No registers called EAX or anything. Everybody programming in Forth. C forgotten. …. maybe a slight exaggeration.
The 432 was a pig. I don’t know what Forth has to do with it, as it had object-oriented primitives in HW!
I just remembered it as Extended AX: AX is a 16-bit register, and the two lesser halves AH/AL, 8 bits each, make up the whole of AX. EAX was the 32-bit version.
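A quick sketch of how the pieces nest (NASM-style):

    mov eax, 0x12345678   ; load the full 32-bit register
    ; AX is now 0x5678 (the low 16 bits of EAX)
    ; AH is now 0x56   (the high byte of AX)
    ; AL is now 0x78   (the low byte of AX)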