But what’s so compelling about RISC-V isn’t the technology – it’s the economics. The instruction set is open source. Anyone can download it and design a chip based on the architecture without paying a fee. If you wanted to do that with ARM, you’d have to pay its developer, Arm Holdings, a few million dollars for a license. If you wanted to use x86, you’re out of luck because Intel licenses its instruction set only to Advanced Micro Devices.
For manufacturers, the open-source approach could lower the risks associated with building custom chips. Already, Nvidia and Western Digital Corp. have decided to use RISC-V in their own internally developed silicon. Western Digital’s chief technology officer has said that in 2019 or 2020, the company will unveil a new RISC-V processor for the more than 1 billion cores the storage firm ships each year. Likewise, Nvidia is using RISC-V for a governing microcontroller that it places on the board to manage its massively multicore graphics processors.
This really explains why ARM is so scared of RISC-V. I mean, RISC-V might not make it to high-end smartphones for now, but if RISC-V takes off in the market for microcontrollers and other “invisible” processors, it could be a huge threat to ARM’s business model.
What is a bit “delicate” about RISC-V is that it was first designed as an educational instruction set, simple enough for students to implement, as a replacement for the textbook MIPS.
That makes RISC-V neither particularly innovative nor heavily optimised.
Worse instruction sets are, of course, still used for fast CPUs today (x86 or SPARC, for example), but it is hardly the “state of the art” of 2015. There is also some feature creep in the extensions, which may end up as competing implementations (like x87 vs SSE vs AVX on x86…)
If it were just an educational tool, that would not explain the number of members of the RISC-V Foundation:
https://riscv.org/risc-v-foundation/
How about OpenSPARC?
https://en.wikipedia.org/wiki/OpenSPARC
I would have gone the 68k or sh2 path instead.
It is better than other free alternatives such as SPARC (nobody wants register windows or delayed branches anymore) or OpenRISC (which is amateurish).
The basic instruction set has minimal addressing modes and is limited to 2R1W instructions (two register reads and one register write per instruction).
ARM64 is arguably better designed for high-performance implementations.
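To make that concrete, here is a rough sketch (assuming typical compiler output for the base ISAs, not any particular toolchain) of how a simple indexed load maps onto RV64 versus ARM64:

#include <stddef.h>

/* Hypothetical illustration: the indexed load below.
 * On RV64 (base ISA, 2R1W, no scaled addressing modes) a compiler
 * typically emits three instructions:
 *     slli t0, a1, 2      # scale the index by 4
 *     add  t0, a0, t0     # form the address
 *     lw   a0, 0(t0)      # load
 * On ARM64 the same load fits in a single instruction, because the
 * addressing mode does the scaling:
 *     ldr  w0, [x0, x1, lsl #2]
 */
int load_indexed(const int *a, long i) {
    return a[i];
}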
RISC-V has benefitted from 30 years of RISC CPUs and has avoided the most obvious errors, for example delayed branches. It is also designed for an era where the MMU and FPU could be fitted on the same die, without having to do memory accesses for FPU registers (68K, x87, SPARC) or pretending that the MMU is a coprocessor (MIPS, 68K).
I wish someone would also make a new CISC CPU (if only I had time…) which would similarly benefit from the last 30 years of CISC and be a “clean” 32/64-bit CISC without the x86, VAX and 68K errors (indirect memory accesses, not enough registers, prefix creep…)
Treza,
Clearly x86 registers have been a huge mess from the start. Likewise, x87 was a huge mess from the start. A lot of Intel’s architecture extensions since then have been ugly, but they are often the consequence of backwards compatibility, like how the MMX registers overlap with the floating point unit registers, or the A20 memory address line being disabled through the PC keyboard controller. Objectively awful, but “backwards compatible”.
AMD64 added new general purpose registers, which helped the register-starved platform, though it went down the path of “prefix creep”.
I’m curious what your new CISC would look like, though. Clearly you could avoid x86’s most egregious mistakes with a clean slate. But aren’t things like indirect memory accesses and opcode prefixes a staple of CISC? If you remove too many of CISC’s irregularities, doesn’t it start to become RISC? So I’m curious where you draw the line.
x86 doesn’t really have indirect accesses like the MC68020+. MC68020 memory indirection is awful because, in the presence of MMU faults, the CPU sometimes needs to continue the execution of partially executed instructions. Look at the MC68K’s different exception stack formats: terrible. x86 instructions are either executed or not executed, there is no “partially executed” nonsense (though IIRC the x87 coprocessor can generate memory incoherence, instruction sequences that cannot be resumed after a trap).
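To picture what that memory-indirect mode buys (and costs), here is a hedged C-level sketch; the 68020 syntax in the comment is written from memory, so treat it as illustrative only:

#include <stdint.h>

/* Fetching an entry through a table of pointers: a doubly indirect
 * access.  On a 32-bit 68020 (4-byte pointers, 4-byte int32_t) this
 * can be encoded in one instruction with a memory-indirect mode,
 * roughly:
 *     move.l ([a0,d0.l*4],8),d1
 * The CPU first loads the pointer at a0 + d0*4, then loads from that
 * pointer + 8.  If the second access faults, the instruction is only
 * partially executed, hence the complex exception stack frames.  x86
 * has no such mode: the compiler splits this into two plain loads,
 * each of which either completes or faults cleanly.
 */
int32_t fetch(int32_t **table, int32_t index) {
    return table[index][2];   /* second access at table[index] + 8 bytes */
}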
A few things that define a CISC:
– Complex addressing modes: something like [R1++ + 4*R2 + 100]
– Register-memory operations: ADD AX,[4*BX+100]. Combined with complex addressing, this reduces register pressure a bit compared to load-store architectures, and allows more explicit register use (what is a pointer, what is data…). PUSH/POP instructions.
– Complex instructions like memcopy and memfill (REP MOVS…), tuned for best performance on every implementation of the architecture (instead of different code paths or live patching of code). Copying, clearing and searching memory still represent a significant portion of CPU time. With explicit instructions, the microarchitecture can either be adapted for best performance (for example deferred copies) or fall back on simple microcode. In a RISC, the CPU must guess that, after many consecutive memory reads and writes, it is maybe doing a memcopy and should perhaps prefetch the next cache line. Another example: usually one of the registers is designated as the stack pointer in the ABI, so CPUs may have dedicated hardware for that register even if it is not different from any other register in the ISA. (See the sketch after this list.)
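Here is that sketch: a rough C illustration of the register-memory / complex-addressing case and of the REP MOVS case, using GCC/Clang extended inline asm for x86-64. The exact instructions a compiler emits will vary; the assembly in the comments is only indicative.

#include <stddef.h>

/* Register-memory operation with a complex addressing mode: on x86-64
 * the array access below typically folds into a single instruction,
 * something like
 *     add eax, DWORD PTR [rdi + rsi*4 + 400]
 * whereas a load-store RISC needs a shift and an add to form the
 * address, a load, and then the add into the accumulator.
 */
int add_indexed(const int *a, long i, int acc) {
    return acc + a[i + 100];
}

/* The "complex instruction" case: a memcopy expressed as a single
 * REP MOVSB, which each microarchitecture is free to implement with
 * whatever fast path it has (GCC/Clang extended asm, x86-64 only).
 */
void copy_rep_movsb(void *dst, const void *src, size_t n) {
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     :
                     : "memory");
}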
The general theory is that, by having more complex, higher-level instructions, the CPU may be able to extract more information and be more efficient. It’s not a revolutionary new idea, but it deserves fair treatment. Nowadays the reasoning goes: CISC is reduced to x86, x86 is ugly, therefore CISC sucks.
(of course, CISC and RISC are simple terms which cannot faithfully express many different aspects…)
The 68020+ addressing modes are a godsend: simple and sleek, allowing C-like array addressing even in assembly language without having to build it by hand from several instructions. That the MMU or FPU can break instruction sequences is a problem, though. And yeah, the stack frame formats…
If you look closely, all of these are basically addressed by newer architectures like ARMv7 or ARMv8.
But who is really bothered about the underlying ISA if it’s “good enough”? Most people don’t code in assembly anyway.
If RISC-V can perform as needed, within the required (or lower) power envelope – it will win.
What would you classify as a leading-edge ISA?
Anyone with a bit of low-level knowledge can come up with an instruction set. It’s the actual implementation of the instruction set with modern features like caching and branch prediction that makes it hard.
Exactly. The ISA, especially in RISC-V’s simplistic form, isn’t even the hard part.
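To give a flavour of what “implementation” means beyond the ISA: even one of the simplest microarchitectural structures, a bimodal (2-bit saturating counter) branch predictor, is its own little machine. Here is a minimal C sketch; the table size and PC indexing are arbitrary choices for illustration, not any real core’s design.

#include <stdbool.h>
#include <stdint.h>

/* A tiny bimodal branch predictor: one 2-bit saturating counter per
 * entry, indexed by the low bits of the branch PC.  Counter values
 * 0-1 predict "not taken", 2-3 predict "taken".
 */
#define BP_ENTRIES 1024

static uint8_t bp_table[BP_ENTRIES];   /* all counters start at 0: "strongly not taken" */

bool bp_predict(uint32_t pc) {
    return bp_table[(pc >> 2) % BP_ENTRIES] >= 2;
}

void bp_update(uint32_t pc, bool taken) {
    uint8_t *ctr = &bp_table[(pc >> 2) % BP_ENTRIES];
    if (taken && *ctr < 3)
        (*ctr)++;
    else if (!taken && *ctr > 0)
        (*ctr)--;
}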
Apart from lowRISC, there isn’t any other open-source RISC-V design out there. All the other RISC-V implementations are proprietary.
What it is competing against are the low-end ARM microcontroller implementations, the embedded Cortex-M series, and other ISAs like ARC.
And it only works for Western Digital because they ship on the order of a billion microcontrollers every year. Even at $0.10 per controller, that is roughly $100M per year, so for something so simple WD could well invest in making it themselves.
So unless you have huge volume, designing your own controller around RISC-V doesn’t make sense; buying a blueprint from ARM still saves you money and time. nanoMIPS is actually even better than what ARM has to offer.
Hi,
Can someone explain to me, or point me to an article about, how long a given architecture stays closed to reimplementation? I thought anything pre-SSE is now free to be reimplemented, and SSE is soon to be open to the public…
This is an old question!
There have been a few attempts at reimplementing instruction sets in the past. There were clones of the Data General Nova in the ’70s, as well as of DEC PDPs and IBM minis.
In the late ’90s, MIPS went after Lexra for implementing its instruction set without asking. In an alternative timeline, they could have collaborated, built a CPU-core licensing business, and been what ARM is today.
There is no risk in re-creating 20-year-old CPUs such as the original 32-bit i486 or Pentium instruction sets. Someone has already made a free i486SX in Verilog, albeit a very slow and inefficient implementation.
For more recent CPUs, the patents tend to be less on the instructions themselves than on how to implement those instructions. This matters because, so far, you don’t need a license to emulate an instruction set, and it angers Intel when Qualcomm and Microsoft try to emulate x86 for Windows-on-ARM.
Intel also licenses their instruction set to VIA; Nvidia also had a license, but I think it lapsed.
More correctly, VIA wound up buying Cyrix’s IP from National Semiconductor and thus received the license that IBM required Intel to grant back in the day to ensure they weren’t depending on a single supplier.
SiS also has a clean license from their purchase of Rise Technology and parts of NEC. They are still producing and developing their x86 chips (the Vortex series) for “DM&P”. The latest one, I think, is a dual-core chip that is fully DOS, TRON, Linux and Windows compatible.
However, as they are extremely low-power chips, they are mostly used in embedded systems.
Also, ZF Micro has a valid license for 486 CPUs, which are manufactured for them by IBM.
It seems odd that the Cyrix license was transferable – AFAIK, AMD’s isn’t; if someone buys AMD, the license terminates and AMD’s x86 business dies with them…