Looking inside the Intel 8087, an early floating point chip, I noticed an interesting feature on the die: the substrate bias generation circuit. In this article I explain how this circuit is implemented, using analog and digital circuitry to create a negative voltage.
Intel introduced the 8087 chip in 1980 to improve floating-point performance on 8086/8088 computers such as the original IBM PC. Since early microprocessors were designed to operate on integers, arithmetic on floating-point numbers was slow, and transcendental operations such as trigonometry or logarithms were even worse. But the 8087 co-processor greatly improved floating-point speed, making some operations up to 100 times faster. The 8087's architecture became part of later Intel processors, and the 8087's instructions are still a part of today's x86 desktop computers.
A detailed and very technical article.
missing link?
http://www.righto.com/2018/08/inside-die-of-intels-8087-coprocessor…
It was an XT-compatible Sanyo luggable. According to the benchmarks, the 8087 sped up floating-point over Borland’s FP library by 18x.
The 8087 was also easily the hottest-running chip in the entire case. But for the price at the time (free*), it easily justified the knowledge gained from hacking on it.
(*) I had a friend who worked for a NASA contractor, and they had an entire room of parts for prototyping and experimenting. Apparently, my friend convinced the owner that a small investment in an x87 would yield bigger payoffs for the world at large. I'm still working on that.
I've always been intrigued by how CPUs, and more specifically ALUs, are designed at the gate level.
Can anyone recommend a good book on the subject?
Do you have the fundamentals? Start with boolean (digital) logic, and work up to CPUs. Look for tutorials on boolean logic like this one:
https://learn.sparkfun.com/tutorials/digital-logic
Work up to things like adders and latches. CPUs (simple ones at any rate) are merely groups of adders and latches and other similar logic elements.
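To give a feel for how those pieces compose, here is a rough C sketch of a 1-bit full adder built from boolean operations and chained into a 4-bit ripple-carry adder (the function names are just my own illustration, not from any library):

#include <stdio.h>

/* A 1-bit full adder expressed with the same boolean operations
 * it would be wired from at the gate level. */
static void full_adder(int a, int b, int carry_in, int *sum, int *carry_out)
{
    *sum       = a ^ b ^ carry_in;               /* two XOR gates    */
    *carry_out = (a & b) | (carry_in & (a ^ b)); /* AND and OR gates */
}

/* Ripple-carry adder: chain four full adders, feeding each stage's
 * carry-out into the next stage's carry-in. */
static int add4(int x, int y)
{
    int sum = 0, carry = 0;
    for (int i = 0; i < 4; i++) {
        int s;
        full_adder((x >> i) & 1, (y >> i) & 1, carry, &s, &carry);
        sum |= s << i;
    }
    return sum; /* result modulo 16; the final carry is dropped */
}

int main(void)
{
    printf("%d\n", add4(9, 5)); /* prints 14 */
    return 0;
}

Real ALUs replace the ripple carry with carry-lookahead logic so the carry doesn't have to trickle through every stage, but the building block is the same.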
I designed and built my own 4-bit MCU from TTL chips back in college – that was a load of fun, and I recommend it to anyone interested in computers.
Thanks for the reply.
I know my way around boolean algebra and De Morgan's laws.
What I'm interested in is how they do things like division or multiplication (I'm pretty sure they won't just cascade a bunch of adders).
smashIt,
Consider something like A=9 * B=10.
Basically every bit in A determines whether or not to add the value in B at a corresponding bit offset.
A=0b001001;
B=0b001010;
A B
1 000001010 // add 10
0 000010100 // skip 20
0 000101000 // skip 40
1 001010000 // add 80
0 010100000 // skip 160
0 101000000 // skip 320
out = 10 + 80 = 90
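Here is the same loop as a rough C sketch, just to make the add/skip decisions above concrete (my own naming, not how any particular ALU is wired):

#include <stdio.h>

/* Shift-and-add multiplication: for each set bit of a, add b shifted
 * left by that bit's position, exactly as in the table above. */
static unsigned shift_and_add(unsigned a, unsigned b)
{
    unsigned product = 0;
    while (a != 0) {
        if (a & 1)    /* the low bit of a decides add vs. skip     */
            product += b;
        a >>= 1;      /* move on to the next bit of a              */
        b <<= 1;      /* b doubles each step: 10, 20, 40, 80, ...  */
    }
    return product;
}

int main(void)
{
    printf("%u\n", shift_and_add(9, 10)); /* prints 90 */
    return 0;
}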
More efficient algorithms are possible using more complex algebraic equivalents.
Hopefully that’s clear enough in osnews formatting.
Division can use a similar but opposite algorithm.
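For the opposite direction, here is a minimal restoring (shift-and-subtract) division sketch in the same spirit; it assumes a nonzero divisor and, again, the names are only illustrative:

#include <stdio.h>

/* Shift-and-subtract (restoring) division for 32-bit unsigned values:
 * walk the dividend's bits from the top, shift them into a running
 * remainder, and subtract the divisor whenever it fits. */
static void divide_u32(unsigned dividend, unsigned divisor,
                       unsigned *quotient, unsigned *remainder)
{
    unsigned q = 0, r = 0;
    for (int i = 31; i >= 0; i--) {
        r = (r << 1) | ((dividend >> i) & 1); /* bring down the next bit   */
        if (r >= divisor) {                   /* does the divisor fit?     */
            r -= divisor;
            q |= 1u << i;                     /* record a 1 in the quotient */
        }
    }
    *quotient = q;
    *remainder = r;
}

int main(void)
{
    unsigned q, r;
    divide_u32(90, 10, &q, &r);
    printf("%u remainder %u\n", q, r); /* prints 9 remainder 0 */
    return 0;
}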
I don't know how many CPU architectures use it, but the Amber project (based on the last ARM core that wasn't locked down by copyright/patents) uses Booth's algorithm internally:
https://en.wikipedia.org/wiki/Booth%27s_multiplication_algorithm
And what you're calling "cascading adders" is really a binary multiplier (read the whole thing):
https://en.wikipedia.org/wiki/Binary_multiplier
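To give a flavor of what Booth recoding buys you, here is a rough C sketch of the algorithm (my own code, not taken from Amber or any real design): runs of 1 bits in the multiplier collapse into one subtract at the start of the run and one add at the end, instead of one add per bit.

#include <stdio.h>
#include <stdint.h>

/* Booth's algorithm for signed multiplication: scan the multiplier's
 * bits in pairs (current bit, previous bit). 01 means add the shifted
 * multiplicand, 10 means subtract it, 00 and 11 do nothing.
 * The arithmetic is done in unsigned so the wraparound behaves like a
 * two's-complement adder in hardware. */
static int32_t booth_multiply(int16_t multiplicand, int16_t multiplier)
{
    uint32_t result = 0;
    uint32_t m = (uint32_t)(int32_t)multiplicand; /* sign-extend to 32 bits   */
    int prev = 0;                                 /* the implicit bit -1 is 0 */

    for (int i = 0; i < 16; i++) {
        int cur = ((uint16_t)multiplier >> i) & 1;
        if (cur == 0 && prev == 1)
            result += m << i;  /* end of a run of 1s: add        */
        else if (cur == 1 && prev == 0)
            result -= m << i;  /* start of a run of 1s: subtract */
        prev = cur;
    }
    return (int32_t)result;
}

int main(void)
{
    printf("%d\n", booth_multiply(9, 10));  /* prints 90  */
    printf("%d\n", booth_multiply(-7, 6));  /* prints -42 */
    return 0;
}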
Booth multipliers are pretty common. Division is an art form in itself, and you’ll have to hunt down technical papers online to get an idea. Multipliers can be fast, but division remains tricky. In cases where multiplication is fast, algorithms that use tables and recursive multiplication can generally do division faster than “normal” division routines.
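As a rough illustration of the table-plus-multiplication idea, here is a Newton-Raphson reciprocal sketch in C. The linear initial estimate stands in for the small lookup table a real unit would use, and the exponent handling that scales the divisor into [0.5, 1) is skipped, so treat it as a sketch of the iteration rather than a usable divider:

#include <stdio.h>

/* Newton-Raphson reciprocal: each iteration roughly doubles the number
 * of correct bits of 1/d, using only multiplies and subtracts.
 * d is assumed to be pre-scaled into [0.5, 1). */
static double reciprocal(double d)
{
    double x = 48.0 / 17.0 - (32.0 / 17.0) * d; /* standard linear first guess   */
    for (int i = 0; i < 5; i++)
        x = x * (2.0 - d * x);                  /* x(n+1) = x(n) * (2 - d*x(n))  */
    return x;
}

static double divide(double a, double b)
{
    return a * reciprocal(b); /* division becomes one extra multiply */
}

int main(void)
{
    /* 1.5 / 0.75: prints 2, accurate to the last bit or two */
    printf("%.17g\n", divide(1.5, 0.75));
    return 0;
}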
JLF65,
I concur, it’s pretty easy to come up with algorithms that work, but the optimizations can be a kind of black art.
I wrote an arbitrary-size math library a couple of years ago. My division function wasn't as fast in the general case, but I did manage to beat GNU's GMP math library at division in some cases. Anyway, their documentation is a good source of information…
https://gmplib.org/manual/Algorithms.html#Algorithms
You'll be wanting Charles Petzold's Code: The Hidden Language of Computer Hardware and Software.
I really liked this course/book. It goes from NAND gates and the constant false all the way up to a computer, a programming language, and an OS.
https://www.nand2tetris.org
Yet Microsoft locks out 80-bit (extended precision) compilation. They put it in there for a reason.