We’ve talked about Chimera Linux a few times now on OSNews, so I won’t repeat what makes it unique. The project announced today that it will be shuttering its RISC-V architecture support, and considering RISC-V has been supported by Chimera Linux pretty much since the beginning, this is a big step. The reason is as sad as it is predictable: there’s simply no RISC-V hardware out there fit for the purpose of building a Linux distribution and all of its packages.
Up until this point, Chimera Linux built its RISC-V variant “on an x86_64 machine with qemu-user binfmt emulation coupled with transparent cbuild support”. This setup has several problems: serious reliability issues, no way to test packages, and poor performance. It was intended as a temporary solution until proper, performant RISC-V hardware became available, but this simply hasn’t happened, and it doesn’t seem like that is going to change soon.
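For readers unfamiliar with the mechanism: binfmt_misc is the kernel facility that routes foreign-architecture binaries to qemu-user transparently, so an x86_64 build host can execute riscv64 programs as if they were native. As a rough illustration (this is not Chimera’s cbuild code, and matching entries on “riscv” is an assumption, since distributions register the handler under varying names), a minimal check for such a handler might look like this:

    #!/usr/bin/env python3
    """Minimal sketch: check whether a qemu-user handler for riscv64 ELF
    binaries is registered via binfmt_misc. Matching on "riscv" in the entry
    name is an assumption; this is not Chimera's actual cbuild logic."""

    from pathlib import Path

    BINFMT_DIR = Path("/proc/sys/fs/binfmt_misc")

    def riscv64_interpreter() -> str | None:
        """Return the registered interpreter for riscv64 binaries, if any."""
        if not BINFMT_DIR.is_dir():
            return None  # binfmt_misc is not mounted on this kernel
        for entry in BINFMT_DIR.glob("*riscv*"):
            # Each binfmt entry lists its interpreter and the ELF magic it matches.
            for line in entry.read_text().splitlines():
                if line.startswith("interpreter"):
                    return line.split(None, 1)[1]
        return None

    if __name__ == "__main__":
        interp = riscv64_interpreter()
        if interp:
            print(f"riscv64 binaries run transparently via {interp}")
        else:
            print("no riscv64 handler registered; foreign binaries will not run")

If the handler was registered with the fix-binary (F) flag, the kernel keeps the qemu interpreter open from registration time, so emulation also works inside chroots that don’t ship the qemu binary, which is part of what makes a transparent cross-architecture build setup like this workable at all.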
Most of the existing RISC-V hardware options simply lack the performance to be used as build machines (think Raspberry Pi 3/4 levels of performance), which would make them even slower than the emulation setup currently in use. The only machine that in theory would be fast enough to serve as a build machine is the Milk-V Pioneer, but that board has other serious problems, as the project notes:
Milk-V Pioneer is a board with 64 out-of-order cores; it is the only of its kind, with the cores being supposedly similar to something like ARM Cortex-A72. This would be enough in theory, however these boards are hard to get here (especially with Sophgon having some trouble, new US sanctions, and Mouser pulling all the Milk-V products) and from the information that is available to me, it is rather unstable, receives very little support, and is ridden with various hardware problems.
↫ Chimera Linux website
So, not only is the Milk-V Pioneer difficult to get due to, among other things, US sanctions, it’s also not very stable and receives very little support. Aside from the Pioneer and the various slow, and therefore unsuitable, options, there’s also nothing in the pipeline for performant RISC-V hardware, which makes supporting the architecture quite difficult. Of course, this could always change in the future, but for now, supporting RISC-V is clearly not an option for Chimera Linux.
This is clearly sad news, especially for those of us hoping RISC-V becomes an open source hardware platform that we can use every day, and I wonder how many other projects are dealing with the same problem.
This is a very good reason. It’s very difficult for some open source OS projects to get decent build machines for more common hardware, let alone exotic hardware.
When I first started MidnightBSD, we had i386, amd64, and sparc64. I acquired a Sun desktop system to do builds on and it died rather quickly. Then I got two 1U Sun Netra servers. It took nearly 3 weeks to build packages then, and I had under 1000 packages at the time. I was living in an apartment and they were also loud. It just didn’t make sense to keep sparc64 support around. Then Sun killed off its desktops anyway.
I bought a RISC-V board with hopes of eventually adding support to MidnightBSD. It’s been flaky and I’ve had trouble reliably running Linux on it, let alone getting a BSD to boot. It’s also pretty obvious that a lot of packages won’t build due to its RAM and storage limitations.
Maintaining two architectures is hard enough. For a release, I need to build packages and need critical ports to build. It takes about 3 days to do a package build for one architecture with my current hardware.
Unfortunate, and a bit ironic given that Fedora just adopted RISC-V as a tier one platform last month, stating that “2025 is the year of Linux on RISC-V”.
The SiFive HiFive Premier P550 is available with 32 GB of RAM. I would have thought that it could handle a full build of Chimera Linux. It is indeed slow, though: at the stated Raspberry Pi 4-level performance, you would need 10 of them to equal an M4 Mac.
If RISC-V machines are too slow, though, it makes me wonder what they are using for 32-bit PPC (still supported) or what they are planning for ARMv7 (an arch they say they are considering).
PPC and (most) ARM chips are backwards compatible, so they can run a 32-bit userland on a modern 64-bit host. The only difficulty will be with certain things (Rust, for instance) that require a lot of memory during compilation and can fail to build with only a 32-bit address space.
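To make the address-space point concrete, a tiny sketch (not tied to Chimera or any particular toolchain): a 32-bit process tops out at 2^32 bytes of virtual address space, i.e. 4 GiB, and in practice the usable share is lower, which a large codegen or link step can exceed even when the 64-bit host has plenty of physical RAM.

    #!/usr/bin/env python3
    """Tiny sketch: pointer width sets a hard ceiling on a process's address
    space, regardless of how much physical RAM the host machine has."""

    for bits in (32, 64):
        ceiling_gib = 2 ** bits / 2 ** 30
        print(f"{bits}-bit address space: ~{ceiling_gib:,.0f} GiB ceiling")

    # The 32-bit ceiling (~4 GiB, with the usable portion often 2-3 GiB) is
    # what a big Rust or LLVM build step can run into on a 32-bit userland.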
How were compilations done before 64 bits existed?
Kochise,
Hmm, good question about compilation memory requirements over time. What are the memory requirements for compiling the same program today versus 10, 20, or 30 years ago?
Optimizers today may be doing more sophisticated analysis and require more memory; I’m curious how much.
I suspect a big reason more memory is needed today is simply because we have so many cores compiling stuff in parallel, ramping up memory requirements to keep those cores busy without swapping. We could do with a lot less memory if we had fewer compile jobs running in parallel.
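To put rough numbers on that multiplying effect, here is a back-of-the-envelope sketch; the per-job peak figures are assumptions for illustration, not measurements of any particular toolchain.

    #!/usr/bin/env python3
    """Back-of-the-envelope sketch: peak build memory scales with parallel jobs.
    The per-job figures below are illustrative assumptions, not measurements."""

    import os

    # Assumed peak resident memory for a single compile job, in GiB.
    PEAK_PER_JOB_GIB = {
        "plain C translation unit": 0.3,
        "heavily templated C++ unit": 2.0,
        "large Rust crate": 4.0,
    }

    def estimate(jobs: int) -> None:
        print(f"with -j{jobs}:")
        for kind, per_job in PEAK_PER_JOB_GIB.items():
            print(f"  {kind:28s} ~{per_job:.1f} GiB/job -> ~{per_job * jobs:.0f} GiB peak")

    if __name__ == "__main__":
        estimate(jobs=2)                    # roughly an older dual-core box
        estimate(jobs=os.cpu_count() or 1)  # one job per core on a modern machine

Even with conservative per-job figures, going from 2 jobs on an old dual-core to 16 or 32 jobs on a modern workstation multiplies the peak footprint accordingly, so the same source tree can need far more RAM just to keep the cores fed.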
I don’t know how much it affects compilation, but 64-bit binary targets are quite a bit bigger than 32-bit binaries. I guess this might affect compilation as well.
I’ve always sought to stomp out swapping on my computers. Commodity DDR RAM is fairly cheap compared to other components, so I’d over-provision RAM so it wouldn’t be a bottleneck for me. I’ve been compiling Linux kernels for decades and I think I was always good on RAM. I always wanted more CPU though, haha. Actually, I’m pretty spoiled nowadays; even though my CPUs are a few generations old, I have enough cores to tackle large compiles in much less time than it used to take when I started building Linux kernels.
Compilers are making different choices knowing that the RAM is out there. This includes running optimizations which use a lot of RAM. As you say, we are compiling on multiple cores at once, each of which is consuming as much or more RAM individually as we used to need overall. Even simple choices can have a big impact: in the past, intermediate steps would be written to disk, and reading from large files would avoid having to load the entire file at once; today, we just load everything into RAM.
The code bases we are compiling are far larger than in the past. You mention the Linux kernel. Linux 2.4 had about 3 million lines of code. It has almost 40 million now (a lot of which is drivers to be fair).
LLVM alone is over a gigabyte of code. It is also an example of code that is much more complex than in the past (e.g. C++ templates). It is going to take a lot of RAM to build.
Finally, we have compilers for languages which are now far more ambitious knowing that we have CPU and RAM to spare. The Rust compiler is doing far more work than a C compiler. Things like the borrow checker and macros are not coming for free.
It would be interesting to see how modern compilers fare vs older ones on the exact same code, though. How much RAM was required to build GCC 2.95 back in the day? How much would GCC 14 use compiling the same code today? I tried some simple Googling but the answers did not immediately appear. It would be an interesting experiment to run.