The Intel i960 was a remarkable 32-bit processor of the 1990s with a confusing set of versions. Although it is now mostly forgotten (outside the many people who used it as an embedded processor), it has a complex history. It had a shot at being Intel’s flagship processor until x86 overshadowed it. Later, it was the world’s best-selling RISC processor. One variant was a 33-bit processor with a decidedly non-RISC object-oriented instruction set; it became a military standard and was used in the F-22 fighter plane. Another version powered Intel’s short-lived Unix servers. In this blog post, I’ll take a look at the history of the i960, explain its different variants, and examine silicon dies. This chip has a lot of mythology and confusion (especially on Wikipedia), so I’ll try to clear things up.
Not even Intel can overcome x86 – and I can guarantee you: neither will ARM. The truth is that x86 simply cannot die.
The i960 shipped in enough volume to be somewhat of a success for Intel. It is also very counterintuitive for some folks that Intel was one of the first vendors to ship a commercial RISC processor, and at several points in time it was the largest RISC vendor by volume (both with the i960 and StrongARM).
But when it comes to replacing x86, in the end it is always the same: software sells processors, not the other way around.
The software library for x86 is just too huge at this point, and has such momentum that it will be very difficult to challenge its dominance in the sectors where it's entrenched. Plus, instruction encoding is no longer a first-order performance limiter, and x86 vendors have access to the same microarchitectural improvements as everyone else. So the ISA has little effect on performance, whereas it still retains a huge effect on software adoption.
Which is why people should not focus on the ISA as the main architectural differentiator. I don't think people realize this is one of the periods with the most diversity of microarchitectures in the consumer market.
I know more and more people who don't have a PC, just a phone, sometimes a tablet. More and more young people don't even know how to print documents on a PC at school. So I guess that in time PC software won't have much user share in homes. That doesn't include corporations and medium-sized businesses, of course.
“Software sells processors, not the other way around.” I completely agree with this. Deep wisdom. That said, I think it matters less than ever.
First, the “operating system” is likely to be available on whatever processor you are trying to run. Windows was so tied to x86 that people called it the “Wintel duopoly”. Linux runs on almost anything, and it picks up new architectures much earlier than past chips ever found themselves supported by anything interesting.
Second, you are getting your software online these days. You do not have to deal with whatever hardware support is available in the box of shrink-wrapped software on the shelf at your local computer store. You do not even have to wait for a SKU to be available from the distributor you use for your “computer stuff”. This is how things used to work. Now, it is fairly trivial to make software available for new architectures that the OS supports ( see above ) because you just build it and put it online. The “cost” of doing so is quite low.
Apple has made several chip transitions at this point, for both MacOS and iOS. They have been able to do that because they have done a great job of maintaining software continuity across these transitions.
You could easily transition the Android world from ARM to RISC-V in a similar way and nobody would bat an eye. Nobody is “installing” software on these devices really. It just comes down over the wire. Most Android consumers do not know or care what chip is in their device really.
The whole RaspberryPi ecosystem was possible because Linux made it easy to bring-up a universe of software on ARM simply by rebuilding it into a Linux distribution that targeted that hardware.
Making the universe of Linux software available on RISC-V is similarly trivial these days. The bigger problem is not the chip ( the SoC ) but the lack of a “platform”. The PC had a BIOS and a bunch of other stuff that you could target that had nothing to do with the ISA. The RaspberryPi brings some of that to ARM, but other Pi-like devices generally do not benefit from that compatibility. The same is true for RISC-V.
A single Haiku dev brought the entire Haiku OS and software universe to RISC-V fairly quickly but it only works on specific boards like the HiFive Unmatched that this one developer happened to have.
Windows has it harder transitioning from x86 to ARM as more people are trying to “run the installer” and expecting it to work. That is why MS really has to steal plays from Apple’s playbook.
Another thing that makes it easier these days, though, is that more and more of what we run is really just “the web”. If a hardware platform runs an OS that can host a modern web browser, there is already A LOT that you can do on that platform. For some people, that alone can be enough.
Absolutely, the main reason why x86 basically is never going to make it to mobile is because iOS and Android are already entrenched and they basically are ARMland OSs. That is also why Windows Phone failed.
I’d say that applications sell processors and OS (and everything else associated with running them), not the other way around.
FWIW Since Windows NT days, a few decades ago already, Windows has supported many architectures. But it did not matter, since most applications for Windows were for x86.
So likely x86 will continue to dominate desktop/datacenter/corporate stuff. Whereas ARM will be prevalent in Mobile/Tablet/Auto. And RISC-V will likely own a big chunk of IoT.
javiercero1,
I agree with you on desktop and corporate stuff, which are still monopolized by windows. But I feel “datacenter” is a different beast. Linux is more popular in the data center than windows and it isn’t tied to x86 in the same way that windows is.
While datacenters are still dominated by x86 today, I believe this comes down to easy availability of cheap commodity x86 servers rather than the market’s need to have x86. We can usually run the same programs on ARM without caring about CPU architecture and many of us are looking forward to when ARM servers become less niche and more accessible.
Datacenter is a lot more than LAMP: lots of custom backend apps (Oracle, PeopleSoft, SAS, etc.), which is what sells the HW. Plus Intel and AMD have a huge stronghold on the container virtualization market.
NVIDIA is moving their ARM CPUs closer to the GPU, so that is likely the main inroad for ARM in 3rd-party datacenter HW, as a lot of CUDA workloads will follow that. Most ARM server startups go the way of the dodo the minute they have to release their 2nd generation, because datacenters also require long-term roadmaps, which these startups can never deliver on. Amazon is also offering ARM instances, but that is because they have such high volume that it makes sense to make their own silicon.
Consumer ARM servers for local home use are not going to happen in all likelihood, unless it is at the very low end. For the mid/high end, they can't compete on price vs x86, as AMD and Intel have huge economies of scale there. Plus ARMland is still very fragmented, which is why only the Raspberry Pi has gotten somewhere in that DIY market.
javiercero1,
Well, yes it’s certainly true for websites as well, although I wasn’t talking about LAMP specifically. Many companies build out their proprietary back end systems on linux. Oracle supports linux. And while it’s in early stages, they are beginning to support ARM too. I don’t see oracle rejecting would-be ARM customers.
https://www.oracle.com/linux/downloads/linux-arm-downloads.html
Companies like amazon are fabricating their own ARM chips to run their enterprise applications. My opinion is that many linux-using companies could benefit from ARM processors as well though it needs to become more accessible. You want to disagree? That’s fine, but that’s what I think.
javiercero1,
Yes I am sure that will be the consensus. x86 windows software has been a huge driver for x86 demand, although I would also add that better standardization favors x86 over alternatives even for FOSS operating systems that are ISA agnostic.
It’s not the main differentiator, but ARM’s code density and simpler preprocessing does lend itself to better cache utilization and better energy efficiency. On the windows side this hasn’t been enough to overcome the x86 software advantage, as you’ve noted.
While the wintel monopoly is still a force to be reckoned with, I think they are vulnerable especially in new markets. As Marshal Jim Raynor mentions, windows software compatibility isn’t relevant on the mobile side, and efficiency is king. This gives ARM the advantage there.
You have it backwards; x86 has traditionally had better code density than ARM, although they are both comparable for “pure” 64bit object code. Unless you’re talking about Thumb.
The preprocessing differential is also minimal; ARM requires higher fetch BW to generate the same rate of internal uOps. So if anything AArch64 places slightly more pressure on the cache than x86_64. But again, both are comparable at the end of the day; most of the energy consumption/area comes from the out-of-order support, not fetch/decode. I think we keep going over this.
For the low-end/mobile stuff, ARM has an advantage, but it is mainly due to the licensing of the ISA, not because of its decoding. ARM SoC vendors have a much better culture in terms of low-power design techniques/libraries, and they had much faster roadmaps and execution; they have been able to leverage fabless design cycles and integrate lots of on-die IPs. AMD and Intel are pretty much SoL in that market: even though Android runs on x86, most apps that people want are already on the ARM side, and Apple is not planning on running iOS on x86 at all.
There are some interesting ARM SoCs coming to Windowsland, so that could do a number on AMD/Intel in terms of laptops/tablets. And maybe even consoles.
In any case, most of the excitement has been around the SoC vendors, so that is where the talent and investment flowed. AMD and Intel will likely go the way of IBM/DEC in the 90s: still relevant, but stagnant. Meanwhile the ARM SoCs will get more and more performant, to the point that most consumer stuff will run on them; there is already a generation of users whose concept of a “computer” is a phone/tablet.
javiercero1,
No not thumb, I am talking about 64bit software today.
You have it backwards actually. x86 requires a higher fetch bandwidth on average because it has worse code density on average.
read up
https://web.eece.maine.edu/~vweaver/papers/iccd09/iccd09_density.pdf
javiercero1,
First off, thank you for providing a source. However it is not representative of what I see in practice. At least when it comes to modern compilers ARM binaries are typically more compact. I see this not only with my own software, but with publicly available software too. For example, here are binary size comparisons that I did earlier with debian packages…
The AMD64 binaries for the postgres database are 3.4% larger than ARM64.
The AMD64 binaries for postfix email daemon are 5.6% larger than ARM64.
The AMD64 binaries for gimp graphics editor are 13.8% larger than ARM64.
ibb.co/Bt1t9Xj
ibb.co/3WF9Yk2
ibb.co/NyrNQH5
Here’s a comparison of binaries I did just now for the latest version of blender for macos which you can extract and see for yourself.
blender.org/download/
“MacOS Intel 264.7MB”
“MacOS Apple Silicon 238.3MB”
Note that the x86_64 binaries collectively are about 12.2% larger.
I’m willing to look at counterexamples if you have any, but like I said, these days ARM64 typically has better code density on average. And in conjunction with simplified instructions, that’s two advantages for ARM that enable the Apple M1 to prefetch denser instructions with less latency. I know what you are going to say: microarch caches can mitigate some of this x86 ISA overhead. And that’s fine, but those caches are still relatively small and they’re not unique to x86, so it doesn’t yield a corresponding advantage for x86 over ARM.
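For what it's worth, whole-file sizes are a blunt instrument, so a tighter version of this comparison is to measure just the .text sections of the same program built for each architecture, which strips out symbol tables, debug info, and data. A rough sketch of how that could be done, assuming pyelftools is installed and with placeholder paths for the extracted binaries:

from elftools.elf.elffile import ELFFile

def text_size(path):
    # Size in bytes of the .text (machine code) section of an ELF binary.
    with open(path, "rb") as f:
        section = ELFFile(f).get_section_by_name(".text")
        return section["sh_size"] if section else 0

# Placeholder paths: the same executable pulled out of the amd64 and arm64 packages.
amd64 = text_size("extracted/amd64/postgres")
arm64 = text_size("extracted/arm64/postgres")
print(f"amd64 .text: {amd64} bytes, arm64 .text: {arm64} bytes, "
      f"delta: {(amd64 - arm64) / arm64:+.1%}")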
I already provided you with a properly peer reviewed published study. I have no idea why you think your uneducated guesswork puts any sort of burden of proof on my end whatsoever.
To properly study code density and fetch/cache pressure, we must compare isolated execution kernels (or as isolated as possible). Different ABIs have different linkage requirements, and compilers produce different optimizations for different architectures. Comparing random executable image sizes, by themselves, is not a good metric for defining code density, since we’re comparing two runtimes out of context.
Which is why, again, I provided you with a more educated published work which tried to account for a lot of these issues.
javiercero1,
What that link says may once have been true, but the source is from 14 years ago; that is so old it even predates 64bit ARM by 2 years. The notion that it somehow overrules straightforward observations of actual modern-day 64bit software is kind of silly, if we’re being honest. Maybe you can make a better case using modern data, but so far you haven’t done it, and I don’t think you will, because like I said, modern 64bit programs compiled for ARM do tend to be more compact than when compiled for x86_64, on average.
Blah blah blah.
I’ll just go with the data, trends, and insights/conclusions from a peer-reviewed paper on the matter, from one of the top conferences in the field, rather than your random uneducated guesswork nonsense.
Cheers.
javiercero1,
You say that, but your actions here betray you, since you haven’t backed your claim with any data covering ARM64 at all. You don’t even have to take my examples at face value, since there’s no shortage of publicly available software with ARM64 and AMD64 binaries to compare side by side. I encourage you to look at the code density of modern real-world programs in practice. If you do that, you’ll find that I actually make valid points. Anyway, cheers to you too.
ARM dominance on mobile was solidified way before the iOS/Android era. There was a practical reason why everybody used ARM on mobile, including MS with their Windows Mobile, despite losing compatibility with DOS / Windows 3 that would have given them some advantage against contenders like Nokia. It had everything to do with the relative architectural advantages of ARM RISC vs x86.
Exactly that. My M1 Mac is great… But the inability to effectively run an X86 VM is more than a slight irritation. Then, building containers, suddenly I need to put in workarounds all over the place or make compromises. My experience has led me to promise myself my next laptop will be x86.
What do you mean by “inability to effectively run an X86”? I run 4+ x86 Docker containers daily on my M1 for work, without any issues at all.
The irony is that the worst case is running an x86 JVM + Java app container on an ARM64 machine. So we’ve got two approaches that were intended to improve runtime portability, and when they are combined they actually make things worse and hamper adoption of a new HW arch.
dsmogor,
What you say does make sense; JIT-compiled software does present an additional challenge for code translation because there are two levels of translation happening concurrently. Some other languages like PHP use JIT compilation too. To be fair, software translation is only meant as a stopgap measure until native software can be used. Translating x86 software to run on ARM adds inefficiency in both performance and energy consumption, which I’d argue negates the main benefit of switching to ARM. In this case one really should be running a JVM and other language runtimes targeting ARM. A heterogeneous software environment that mixes architectures is sub-optimal compared to native.
Edit: A question I have is how long Apple will stay committed to supporting Rosetta 2 and running x86 on ARM. Rosetta 1 wasn’t supported forever, and people running old software were eventually left unsupported. Is x86 software going to be supported longer?
Not only will Apple not support x86 code forever, I suspect it will not really be very long.
Apple is great at minimizing the disruption when changing architectures. They are also great at force-marching their customers to newer software. Even on x86, they fully got rid of 32-bit support, never mind PowerPC support.
Getting rid of x86 may be slightly harder because of containers. It may be that they support it there longer than they do the “native” application environment.
x86 will die if Intel succeeds in pushing the x86S architecture. The fact is that the original 8086 and x86S CPUs will have little in common, and 64bit x86 cleared of the legacy cruft is in fact not such a bad arch.
Factory floors with DOS machines will still custom order 286s
dark2,
x86 backwards compatibility is so good that they could probably run that DOS software on modern Xeon and Core processors, no legacy x86 CPUs required. I’ve worked with a company that did exactly this. They needed DOS-compatible hardware both for replacements and for provisioning new computers at industrial sites. Fortunately, modern off-the-shelf hardware still runs DOS and their software, which is an impressive feat. However, sourcing DOS-compatible network cards and packet drivers was a bigger problem as official supply channels dried up. They had to go sleuthing through secondhand markets like eBay in hopes that someone else was offloading equipment the company needed.
Anyway, I worked on the contingency plan of running their software inside of linux VMs for once their remaining stock was gone. DOSEMU and DOSBOX did not work well for them, but QEMU did with some tweaking.
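If anyone wants to try something similar, the core of that kind of setup is just a plain QEMU VM booting a DOS disk image with an emulated NE2000 NIC, since DOS packet drivers for that card are still easy to find. Here's a minimal sketch of a launcher; the image path and options are illustrative, not the exact configuration that company used:

import subprocess

DISK_IMAGE = "dos622.img"  # hypothetical disk image with DOS pre-installed

cmd = [
    "qemu-system-i386",
    "-m", "32",                       # 32 MB of RAM is plenty for DOS
    "-hda", DISK_IMAGE,               # boot from the DOS hard disk image
    "-boot", "c",
    "-rtc", "base=localtime",         # DOS expects the RTC in local time
    "-device", "ne2k_pci,netdev=n0",  # emulated NE2000 NIC with DOS packet drivers available
    "-netdev", "user,id=n0",          # user-mode networking, no host-side setup needed
]
subprocess.run(cmd, check=True)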