A group of enthusiasts is proposing a new set of graphics instructions designed for 3D graphics and media processing. These new instructions are built on the RISC-V base vector instruction set. They will add support for new graphics-specific data types as layered extensions, in the spirit of the core RISC-V instruction set architecture (ISA). Vector, transcendental math, pixel, texture, and Z/frame-buffer operations are supported. It can serve as a fused CPU-GPU ISA. The group is calling it RV64X, as instructions will be 64 bits long (32 bits would not be enough to support a robust ISA).
There’s a lot of activity around RISC-V, and with it being open and freely usable, many of the cheaper embedded use cases will be taken over by RISC-V first, hopefully followed by more performant use cases in the near future.
If it supports high-end graphics for animation and gaming, plus GPU compute and mining akin to OpenCL, I’m game.
I don’t think it will support OpenCL. At least this was what I understood from the article. But of course, I might be mistaken.
spiderdroid,
I’d find GPGPU very useful as well. Echoing sukru, the article doesn’t really say whether OpenCL is specifically planned.
Compute tasks are technically possible using only the Vulkan API (via compute shaders), but that’s only useful if you’re willing and able to code to Vulkan instead of OpenCL.
I’ve been acquiring ARM SBCs for embedded projects, but their proprietary nature creates ongoing friction, and I would drop them in a heartbeat if there were an open alternative capable of replacing ARM. Here’s hoping that RISC-V stays open and doesn’t get usurped by proprietary manufacturers (how realistic is my wish?). The article says it will be two years before hardware ships; we need a “review” article once it does!
Alfman,
RISC-V is in a curious position right now. The base ISA is sound, but has some shortcomings:
https://gist.github.com/erincandescent/8a10eeeea1918ee4f9d9982f7618ef68
On the other hand, there is an extension mechanism. So if a player like Apple comes along and usurps the movement by adding lots of proprietary extensions for a custom device, things might change. That said, I don’t think it would be easy, or even possible, for two reasons:
First, they would need good compilers, and the GCC or LLVM communities will not do the work for free for a closed system.
Second, CPU design is competitive, and those extensions would most likely be copied, possibly in a free version.
So overall, at least in the short term, RISC-V should stay open, albeit with some design issues to fix.
As for the GPU, the PS3 had the same idea with the CELL processor. It had seven (?) extra cores with no direct access to main RAM but very fast parallel execution to supplement the main CPU. If this GPU design is just using the RISC-V instruction set, in a more limited but much wider setting, it might actually work. After all, several actual supercomputers were powered by those CELL CPUs.
Fact 1. The author of the assessment worked for ARM so some bias should be accounted for.
Fact 2. The author worked for ARM, so definitely knows the subject.
I just hope to see a RISC-V counter to the M1. If a decent GPU with a unified memory architecture arises, we’re not far off. All that would remain is DSP, codecs, neural processing, and encryption. It’d be fun to try this out on an FPGA!
It looks like architectural changes are necessary. Once again, the previous post and the discussion at https://news.ycombinator.com/item?id=24958423 point out that some RISC-V design choices hinder its ability to scale.
For example, the decoder not being able to automatically resynchronize across differently sized opcodes would limit the decoder lookahead width.
https://news.ycombinator.com/item?id=25257932 (the M1 is cited as having a decode width of 8 instructions, i.e., it can decode up to eight instructions in parallel).
And they also mention limits with atomic operations, but that could possibly be overcome with extensions.
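The decode-width point above can be illustrated with a small sketch. In RISC-V, the low two bits of each instruction's first halfword encode its length: `0b11` means a standard 32-bit instruction, anything else a 16-bit compressed (C extension) one. This is a hypothetical illustration, not production decoder logic, showing why each instruction boundary depends on all the previous lengths:

```python
def instruction_boundaries(halfwords):
    """Return the byte offsets where instructions start in a RISC-V
    stream mixing 16-bit (compressed) and 32-bit encodings.

    Each boundary depends on the length of every preceding instruction,
    which is why a wide parallel decoder must guess alignments and
    discard wrong guesses; with a fixed 32-bit encoding, every boundary
    is known up front.
    """
    offsets = []
    i = 0
    while i < len(halfwords):
        offsets.append(i * 2)  # byte offset of this instruction
        if halfwords[i] & 0b11 == 0b11:
            i += 2  # 32-bit instruction occupies two halfwords
        else:
            i += 1  # 16-bit compressed instruction
    return offsets

# Example stream: 16-bit, 32-bit, 16-bit -> instructions start at bytes 0, 2, 6
print(instruction_boundaries([0x0001, 0x0003, 0x0000, 0x0001]))
```

The serial loop is the crux: hardware can decode many positions speculatively in parallel, but the extra circuitry and wasted work grow with lookahead width, which is the scaling concern raised in the linked discussions.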
sukru,
Conversely, a more rigid opcode size is less compact, but it simplifies decoding, making it faster and potentially allowing simpler and longer lookahead circuitry. Funnily enough, it’s the same old RISC vs. CISC debate. Everything is a tradeoff in this game. People continuously debate microarchitectural differences, but ultimately the end result is what matters most. Alas, it might be a long time before we see highly optimized RISC-V cores produced on high-end fabs. Consider that ARM was a lightweight for the longest time, and it took decades to get promoted to high-end specs.