Recently, I’ve started to explore RISC-V. The journey has been pretty refreshing, particularly because I’ve been working on x86 low-level software almost exclusively for about 10 years.
In this post, I want to quickly go over some high-level stumbling blocks I noticed as I was starting out. I’m probably going to write about more technical differences in subsequent blog posts.
While reading the rest of this post, please keep in mind: RISC-V is simple! If you managed to do any low-level coding on x86, you will find your way around RISC-V with little effort. It’s easy to forget how crazy some parts of x86 are. Just see my past posts.
The more interest there is in RISC-V, the sooner we can expect a RISC-V laptop to run Linux on. I’m all for it.
RISC-V in laptops is even less likely than Cortex-A cores. But let’s wait and see what Apple shows this autumn 😉
DeepThought,
That’s what I was thinking too. The biggest challenge has nothing to do with technical merit; it’s getting hardware off the ground and priced competitively. I hate to say it, but part of the problem with open platforms is that they’re not as profitable for big manufacturers, who would rather invest in something proprietary that they can own exclusively.
From the article…
This is OK for researching RISC-V, but until we have real hardware that is both economical and performant, it’s just not going to be popular. Ideally we can focus on the benefits of openness, like not being dependent upon unverifiable & unaudited binary blobs from Intel or AMD that have contained vulnerabilities in the past and could even contain backdoors. Theoretically there’s value in that, but ordinary consumers don’t seem to care. Today’s consumers are content with devices that they know 3rd-party corporations have the keys to. We’re past the days when owners had an expectation of being in control of the technology they “own” 🙁
It’s really just too early to judge RISC-V on that right now.
I see companies shipping it instead of other cores as part of their own larger products. Just look at what NVIDIA and WD are doing. They take a RISC-V core and adapt it for their own product.
Lennie,
I looked this up, but I didn’t find much: a bunch of articles referencing the exact same IEEE “access denied” link, including this one
https://alterslash.org/day/20180728#article-12409620
and an NVIDIA PDF highlighting a successor to Falcon they’re calling “NV-RISCV”
https://riscv.org/wp-content/uploads/2016/07/Tue1100_Nvidia_RISCV_Story_V2.pdf
Perhaps it is too early to say much about it, but their wording makes me question their end goal: “Build our own implementation of RISCV core”…”Tool for flexibility –That users can easily customize ISA”
Do they intend to build proprietary CPU silicon using the RISC-V architecture and tools?
One of the commenters on the first link has similar concerns:
We’ll see where it goes; hopefully my cynicism is unwarranted! But boy would it suck if open RISC-V made a grand entrance into consumer devices only to be fully tivoized by corporations exploiting RISC-V’s zero licensing costs for themselves while denying us the benefits of an open hardware platform. Corporate greed is the reason I remain skeptical about the viability of open hardware.
The “Falcon” processor NVIDIA refers to is a design for auxiliary processors littered around their GPUs. They are responsible for all kinds of housekeeping (like power management), but if I’m not mistaken they also form the interface to other, more specialized blocks like video decoding.
The developers of the open source Nouveau driver managed to reverse engineer big parts of those processors, in some cases even creating their own firmware to run on those embedded cores. For newer models NVIDIA has made that impossible, though: the hardware now requires those firmwares to be cryptographically signed. That is the biggest reason we cannot expect functioning power management from the open source driver anytime soon, if ever.
NVIDIA switching from Falcon to RISC-V for those embedded cores is probably just a cost-saving measure. Falcon was based on the commercially available Xtensa design by Tensilica, for which they probably paid recurring or even per-unit licensing fees…
So yes, it seems the comment you quoted at the end is a nice illustration of this.
RISC-V is more likely to disrupt SoCs than discrete microprocessors. More like a RISC-V RPi than a laptop.
lapx432,
I can see this happening, although the RPi used commodity Android CPUs with huge economies of scale, which is why they’re so cheap. In order to achieve commodity pricing, RISC-V would need major commercial backing. Maybe Android manufacturers could pick up RISC-V? I’d be all for it if they managed to do away with those damned binary blobs!
You realize all AMD CPUs are SoCs… from the wimpiest Sempron to EPYC… SoC.
Basically, if you want to get into the workstation space with RISC-V, someone should look at making a RISC-V chip that plugs into an AM4 motherboard. From there on out it’s mostly I2C devices, PCIe, and xGMI between the chipset and the CPU.
The most interesting part to me is the SIMD (vector) extension. On x86/x64 we had MMX, SSE/SSE2/SSE3, and the various AVX flavors, each with its own vector size. In RISC-V the vector width isn’t baked into the instruction set: you use the same instructions whether the hardware works with 64-bit, 128-bit, 256-bit, or even bigger vectors. Better yet, you can ask the hardware for the biggest available size, so the same compiled code runs on a CPU with a 256-bit SIMD unit and on one with a 2048-bit unit, and each CPU will use its full width.
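To make that concrete, here is a minimal sketch of a vector-length-agnostic SAXPY loop using the RISC-V vector (RVV) C intrinsics. Treat it as an illustration rather than a finished recipe: the <riscv_vector.h> header and the __riscv_* intrinsic names assume a toolchain that implements the RVV intrinsics spec, which is still young.

    #include <stddef.h>
    #include <riscv_vector.h>

    /* y[i] = a*x[i] + y[i], without hard-coding any vector width. */
    void saxpy(size_t n, float a, const float *x, float *y) {
        for (size_t vl; n > 0; n -= vl, x += vl, y += vl) {
            vl = __riscv_vsetvl_e32m8(n);                    /* elements the HW grants this pass */
            vfloat32m8_t vx = __riscv_vle32_v_f32m8(x, vl);  /* load a strip of x */
            vfloat32m8_t vy = __riscv_vle32_v_f32m8(y, vl);  /* load a strip of y */
            vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);     /* vy += a * vx */
            __riscv_vse32_v_f32m8(y, vy, vl);                /* store the result back */
        }
    }

The vsetvl call is the key: the hardware answers “this many elements per pass”, so the same binary runs on a 128-bit implementation and on a 2048-bit one; only the number of loop iterations differs.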
jgfenix,
I still haven’t learned much about RISC-V. As interesting as it is, who knows when I’ll have practical hardware for it? I was interested in Itanium for its vector processing too, and that never happened, haha.
What you describe, though, is more or less how GPGPU programming works, just with much larger vectors (CUDA as well as OpenCL): the same code can often be distributed across many different execution-unit arrangements. You’d need very large block sizes to be competitive against GPUs, and although that could be feasible, GPUs and CPUs have somewhat competing engineering goals. CPUs typically use very deep speculative pipelines that don’t scale the way GPUs do, and furthermore Intel’s AVX extensions downclock the CPU.
http://composter.com.ua/documents/TurboBoostUpAVXClockDown.pdf
(slide 19)
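To illustrate the difference in programming model: with GPGPU you write only the per-element work, and the runtime decides how many lanes execute it at once. A plain-C sketch of the concept (this mirrors what a CUDA or OpenCL kernel expresses; it is not their actual API):

    #include <stddef.h>

    /* The programmer writes just the per-element kernel... */
    static void saxpy_kernel(size_t i, float a, const float *x, float *y) {
        y[i] = a * x[i] + y[i];
    }

    /* ...and a "launch" is conceptually this loop, except the device
     * runs many iterations concurrently across however many execution
     * units it happens to have, so the same kernel scales with the HW. */
    void saxpy_launch(size_t n, float a, const float *x, float *y) {
        for (size_t i = 0; i < n; i++)
            saxpy_kernel(i, a, x, y);
    }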
I’m not entirely sure whether a CPU that mixes vector and sequential code can really be optimal, compared to dedicating silicon specifically to long pipelines on the one hand and wide vectors on the other.
I wonder how well it would work if some day we had a computer built entirely with GPUs and no CPU. We could build a whole OS on GPU primitives: disks, networking, graphics, sound, etc. I imagine this might be technically possible, but not likely to happen so long as the innards of modern GPUs remain proprietary.
I think you could do something like what Larrabee tried to do, but better, with RISC-V.
I found this:
http://cseweb.ucsd.edu/~mbtaylor/papers/Davidson_IEEE_Micro_Celerity_2018.pdf