
Linux Archive

Chimera Linux drops RISC-V support because capable RISC-V hardware doesn’t exist

We’ve talked about Chimera Linux a few times now on OSNews, so I won’t be repeating what makes it unique once more. The project announced today that it will be shuttering its RISC-V architecture support, and considering RISC-V has been supported by Chimera Linux pretty much since the beginning, this is a big step. The reason is as sad as it is predictable: there’s simply no RISC-V hardware out there fit for the purpose of building a Linux distribution and all of its packages. Up until this point, Chimera Linux built its RISC-V variant “on an x86_64 machine with qemu-user binfmt emulation coupled with transparent cbuild support”. There are various problems with this setup, like serious reliability problems, not being able to test packages, and a lack of performance. The setup was intended to be a temporary solution until proper, performant RISC-V hardware became available, but this simply hasn’t happened, and it doesn’t seem like this is going to change soon. Most of the existing RISC-V hardware options simply lack the performance to be used as build machines (think Raspberry Pi 3/4 levels of performance), making them even slower than the emulation setup they’re currently using. The only machine that in theory would be performant enough to serve as a build machine is the Milk-V Pioneer, but this machine has other serious problems, as the project notes: Milk-V Pioneer is a board with 64 out-of-order cores; it is the only of its kind, with the cores being supposedly similar to something like ARM Cortex-A72. This would be enough in theory, however these boards are hard to get here (especially with Sophgon having some trouble, new US sanctions, and Mouser pulling all the Milk-V products) and from the information that is available to me, it is rather unstable, receives very little support, and is ridden with various hardware problems. ↫ Chimera Linux website So, not only is the Milk-V Pioneer difficult to get due to, among other things, US sanctions, it’s also not very stable and receives very little support. Aside from the Pioneer and the various slow and therefore unsuitable options, there’s nothing else in the pipeline either for performant RISC-V hardware, making it quite difficult to support the architecture. Of course, this could always change in the future, but for now, supporting RISC-V is clearly not an option for Chimera Linux. This is clearly sad news, especially for those of us hoping RISC-V becomes an open source hardware platform that we can use every day, and I wonder how many other projects are dealing with the same problem.
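
For the curious, the “qemu-user binfmt” setup they’re abandoning works by registering the RISC-V ELF magic with the kernel’s binfmt_misc facility, so that riscv64 binaries are handed transparently to a statically linked qemu-riscv64 interpreter. Below is a minimal sketch of such a registration; the interpreter path and the magic/mask bytes follow the pattern used by qemu’s own binfmt helper scripts and should be treated as illustrative rather than authoritative for any particular qemu version.

```c
/* Illustrative sketch only: register a binfmt_misc rule so the kernel
 * hands riscv64 ELF binaries to qemu-riscv64-static transparently.
 * The rule format is :name:type:offset:magic:mask:interpreter:flags,
 * and the \x escapes are passed through literally -- the kernel's
 * binfmt_misc parser decodes them itself. Run as root, with
 * binfmt_misc mounted at its usual location. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Magic/mask are meant to match the first 20 bytes of a riscv64 ELF
     * header (EM_RISCV = 0xf3), mirroring qemu's binfmt helper scripts;
     * double-check against qemu-binfmt-conf.sh for the qemu version you
     * actually use. */
    const char *rule =
        ":qemu-riscv64:M::"
        "\\x7fELF\\x02\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00"
        "\\x02\\x00\\xf3\\x00:"
        "\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\x00"
        "\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff"
        "\\xfe\\xff\\xff\\xff:"
        "/usr/bin/qemu-riscv64-static:F";

    FILE *f = fopen("/proc/sys/fs/binfmt_misc/register", "w");
    if (f == NULL) {
        perror("open /proc/sys/fs/binfmt_misc/register");
        return EXIT_FAILURE;
    }
    if (fputs(rule, f) == EOF || fclose(f) != 0) {
        perror("write binfmt_misc rule");
        return EXIT_FAILURE;
    }
    puts("riscv64 binaries will now run through qemu-user emulation");
    return EXIT_SUCCESS;
}
```

Registering the rule is the easy part; as Chimera’s experience shows, the reliability and speed of the resulting emulated builds are another matter entirely.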

A love letter to Void Linux

I installed Void on my current laptop on the 10th of December 2021, and there has never been any reinstall. The distro is absurdly stable. It’s a rolling release, and yet, the worst update I had in those years was one time, GTK 4 apps took a little longer to open on GNOME. Which was reverted after a few hours. Not only that, I sometimes spent months without any update, and yet, whenever I did update, absolutely nothing went wrong. Granted, I pretty much only did full upgrades, and never partial upgrades, which generally help a lot. Still. ↫ Sarah Mathey Void is love, Void is life. It’s such an absurdly good distribution, and if it wasn’t for the fact that I prefer Fedora KDE by a hair, I’d be using Void on all of my machines. The only reason I’m not is that I would set up Void very close to what I get from Fedora KDE out of the box anyway, so my laziness gets the better of me there. I used to run Void on my POWER9 hardware, but that architecture is no longer supported by Void. If you’re looking for a Linux distribution free of systemd, there’s little out there that can equal or top Void, and even if you don’t care much about init systems, Void still has a lot to offer. The documentation is decent, its package manager is a joy to use, the repositories are loaded and up-to-date, and it strikes a great balance between building an entire Linux system from scratch on the one hand, and complete desktop distributions like Fedora on the other. The best way I can describe it is that Void feels like the most BSD-y of Linux distributions. Void is my fallback, in case Fedora for whatever reason slips up and dies. IBM, Red Hat, “AI” – there’s a lot of pits Fedora can fall into, after all.

Flathub safety: a layered approach from source to user

About two weeks ago we talked about why Fedora manages its own Flatpak repository, and why that sometimes leads to problems with upstream projects. Most recently, Fedora’s own OBS Flatpak was broken, leading to legal threats from the OBS project, demanding Fedora remove any and all branding from its OBS Flatpak. In response, Fedora’s outgoing project leader Matthew Miller gave an interview on YouTube to Brodie Robertson, in which Miller made some contentious claims about a supposed lack of quality control, security, and safety checks in Flathub. These claims led to a storm of criticism directed at Miller, and since I follow quite a few people actively involved in the Flatpak and Flathub projects – despite my personal preference for traditional Linux packaging – I knew the criticism was warranted. As a more official response, Cassidy James Blaede penned an overview of all the steps Flathub takes and the processes it has in place to ensure the quality, security, and safety of Flathub and its packages. With thousands of apps and billions of downloads, Flathub has a responsibility to help ensure the safety of our millions of active users. We take this responsibility very seriously with a layered, in-depth approach including sandboxing, permissions, transparency, policy, human review, automation, reproducibility, auditability, verification, and user interface. Apps and updates can be fairly quickly published to Flathub, but behind the scenes each one takes a long journey full of safety nets to get from a developer’s source code to being used on someone’s device. While information about this process is available between various documentation pages and the Flathub source code, I thought it could be helpful to share a comprehensive look at that journey all in one place. ↫ Cassidy James Blaede Flathub implements a fairly rigorous set of tests, both manual and automated, on every submission. There are too many to mention, but reading through the article, I’m sure most of you will be surprised by just how solid and encompassing the processes are. There are a few applications from major, trusted sources – think applications from someone like Mozilla – that have their own comprehensive infrastructure and testing routines, but other than those few, Flathub performs extensive testing on all submissions. I’m not a particular fan of Flatpak for a variety of reasons, but I prefer to stick to facts and issues I verifiably experience when dealing with Flatpaks. I was definitely a bit taken aback by the callousness with which such a long-time, successful Fedora project leader like Miller threw Flathub under the bus, but at least one of the outcomes of all this is greater awareness of the steps Flathub takes to ensure the quality, security, and safety of the packages it hosts. Nothing is, or ever will be, perfect, and I’m sure issues will occasionally arise, but it definitely seems like Flathub has its ducks in a row.

Oasis: a small, statically-linked Linux system

You might think the world of Linux distributions is a rather boring, settled affair, but there’s actually a ton of interesting experimentation going on in the Linux world. From things like NixOS with its unique packaging framework, to the various immutable distributions out there like the Fedora Atomic editions, there’s enough uniqueness to go around to find a lid for every pot. Oasis Linux surely falls into this category. One of its main unique characteristics is that it’s entirely statically linked. All software in the base system is linked statically, including the display server (velox) and web browser (netsurf). Compared to dynamic linking, this is a simpler mechanism which eliminates problems with upgrading libraries, and results in completely self-contained binaries that can easily be copied to other systems. ↫ Oasis GitHub page That’s not all it has to offer, though. It also offers fast and 100% reproducible builds, it’s mostly ISO C conformant, and it has minimal bootstrap dependencies – all you need is a “POSIX system with git, lua, curl, a sha256 utility, standard compression utilities, and an x86_64-linux-musl cross compiler”. This ISO C conformance is a crucial part of one of Oasis’ goals: to be buildable with cproc, a small, very strict C11 compiler. It has no package manager, but any software outside of Oasis itself can be installed and managed with pkgsrc or Nix. Another important goal of the project is to be extremely easy to understand, and its /etc directory is honestly a sight to behold; as the project proudly claims, the most complex file in there is rc.init at a mere 16 lines. The configuration files are indeed incredibly easy to understand, which is a breath of fresh air compared to the archaic stuff in commercial UNIX or the complex stuff in modern Linux distributions that I normally deal with. I’m not sure if Oasis would make for a good, usable day-to-day operating system, but I definitely like what they’re putting down.
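
To make the static-linking angle concrete, here’s a trivial, strictly conforming C11 program of the sort the whole Oasis base system is made of; the build command in the comment is a generic musl cross-compiler invocation I’m assuming for illustration, not Oasis’ actual build tooling.

```c
/* hello-static.c -- a strictly conforming C11 program.
 * Built statically against musl it becomes a single self-contained
 * binary with no runtime .so dependencies, which is the property Oasis
 * relies on across its entire base system. A generic invocation:
 *
 *   x86_64-linux-musl-gcc -static -std=c11 -Wall -o hello hello-static.c
 *   file hello   # reports "statically linked"
 *   ldd hello    # reports "not a dynamic executable"
 */
#include <stdio.h>

int main(void)
{
    puts("a self-contained, statically linked binary");
    return 0;
}
```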

Three years of ephemeral NixOS: my experience resetting root on every boot

Fresh OS installs are bliss. But the joy fades quickly as installing and uninstalling programs leave behind a trail of digital debris. Even configuration management and declarative systems like NixOS miss crucial bits, like the contents of /var/lib or stray dotfiles. This debris isn’t just unsightly. It can be load-bearing, crucial to the functioning of your system, but outside of your control, and not preserved on rebuilds. Full system backups merely preserve this chaos. I wanted a clean slate, automatically, every boot. “Erase your darlings” inspired an idea in the NixOS community: allowlisting files and directories that persist across reboots. Anything not on the list gets wiped. The simplest implementation involves mounting / as a tmpfs (i.e. in RAM), and then bind-mounting or symlinking the allowlisted items to a disk-backed filesystem. ↫ Tuxes.uk I dabbled in NixOS over the past week or so, and while I find it intriguing and can definitely see a use for it, I also found it rather needlessly cumbersome and over-engineered for something as simple as a desktop system. I felt like I was taking a whole bunch of additional steps to do basic things, without needing any of the benefits Nix and NixOS bring. This doesn’t mean Nix and NixOS are bad – just that for me, personally, it doesn’t fill any need I have. Taking the Nix concept as far as starting with a completely fresh installation on every boot sounds absolutely insane to me. Of course, it’s not entirely fresh on every reboot, as several applications and important configuration elements do survive the reboot, but it’s still quite drastic compared to what everyone else is doing. Unsurprisingly, there are a few issues; it’s hard to know what really needs and doesn’t need saving, there might be some unexpected issues because software doesn’t expect to be wiped, and so on. Overall though, it seems to work surprisingly well, and for a specific type of person, this is definitely bliss.
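
The core trick is less exotic than it sounds, and can be sketched outside of Nix entirely: mount a tmpfs where the root will live, then bind-mount an allowlist of directories from persistent storage into it. The fragment below only illustrates that mechanism – the paths are made up, and real NixOS setups do this declaratively from the initrd (for instance via the community impermanence module) rather than with hand-rolled C.

```c
/* Conceptual sketch of an "ephemeral root": mount a tmpfs where the root
 * filesystem will live, then bind-mount only an allowlist of directories
 * from persistent storage into it. Everything else lives in RAM and is
 * gone after a reboot. Not the actual NixOS implementation; the paths
 * are made up, the target directories must already exist inside the
 * tmpfs, and this needs root on Linux. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>

static const char *allowlist[][2] = {
    { "/persist/etc/ssh",        "/mnt/root/etc/ssh"        },
    { "/persist/var/lib",        "/mnt/root/var/lib"        },
    { "/persist/home/user/code", "/mnt/root/home/user/code" },
};

int main(void)
{
    /* The future root: a tmpfs, so anything not bind-mounted below is
     * wiped simply by rebooting. */
    if (mount("none", "/mnt/root", "tmpfs", 0, "mode=0755,size=2G") != 0) {
        perror("mount tmpfs on /mnt/root");
        return EXIT_FAILURE;
    }

    for (size_t i = 0; i < sizeof(allowlist) / sizeof(allowlist[0]); i++) {
        if (mount(allowlist[i][0], allowlist[i][1], NULL, MS_BIND, NULL) != 0) {
            fprintf(stderr, "bind-mounting %s onto %s failed\n",
                    allowlist[i][0], allowlist[i][1]);
            return EXIT_FAILURE;
        }
    }

    puts("ephemeral root prepared: only allowlisted paths will persist");
    return EXIT_SUCCESS;
}
```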

TuxTape: a kernel livepatching solution

Geico, an American insurance company, is building a live-patching solution for the Linux kernel, called TuxTape. TuxTape is an in-development kernel livepatching ecosystem that aims to aid in the production and distribution of kpatch patches to vendor-independent kernels. This is done by scraping the Linux CNA mailing list, prioritizing CVEs by severity, and determining applicability of the patches to the configured kernel(s). Applicability of patches is determined by profiling kernel builds to record which files are included in the build process and ignoring CVEs that do not affect files included in kernel builds deployed on the managed fleet. ↫ Presentation by Grayson Guarino and Chris Townsend It seems to me that something like live-patching the Linux kernel should be a standardised framework that’s part of the Linux kernel, and not several random implementations by third parties, one of which is an insurance company. There’s been basic live-patching functionality in the Linux kernel since 4.0, released in 2015, but it’s extremely limited and requires most of the functionality to be implemented separately, through things like Red Hat’s kpatch and Oracle’s Ksplice. Geico is going to release TuxTape as open source, and is encouraging others to adopt and use it. There are various other solutions out there offering similar functionality, so you’re spoilt for choice, and I’m sure there are advantages and disadvantages to each. I would still prefer if functionality like this were a standard feature of the kernel, not something tied to a specific vendor or implementation.
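
Stripped of all the scraping and packaging machinery, the applicability check described above boils down to a set intersection: does the CVE’s fix touch any file that was actually compiled into your kernel? Here’s a toy version of that filter, with hard-coded lists standing in for the scraped CVE feed and the recorded build profile, and bearing no relation to TuxTape’s actual code.

```c
/* Toy version of the applicability filter described above: a CVE is only
 * interesting if its fix touches a file that was part of our kernel
 * build. The file lists are hard-coded stand-ins for the scraped CVE
 * data and the recorded build profile. */
#include <stdio.h>
#include <string.h>

/* Files recorded as compiled into the deployed kernel (from build profiling). */
static const char *built_files[] = {
    "net/ipv4/tcp_input.c",
    "fs/ext4/inode.c",
    "drivers/net/ethernet/intel/e1000e/netdev.c",
};

/* Files touched by a hypothetical CVE fix (from the CNA feed). */
static const char *cve_files[] = {
    "drivers/gpu/drm/nouveau/nouveau_bo.c",
    "fs/ext4/inode.c",
};

static int in_build(const char *path)
{
    for (size_t i = 0; i < sizeof(built_files) / sizeof(built_files[0]); i++)
        if (strcmp(built_files[i], path) == 0)
            return 1;
    return 0;
}

int main(void)
{
    int applicable = 0;
    for (size_t i = 0; i < sizeof(cve_files) / sizeof(cve_files[0]); i++) {
        if (in_build(cve_files[i])) {
            printf("CVE touches %s, which is in our build: patch needed\n",
                   cve_files[i]);
            applicable = 1;
        }
    }
    if (!applicable)
        puts("CVE does not affect any file in this kernel build: skip it");
    return 0;
}
```

TuxTape layers the CVE scraping, severity ranking, and kpatch generation on top of this kind of filter.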

Run Linux inside a PDF file via a RISC-V emulator

You might expect PDF files to only be comprised of static documents, but surprisingly, the PDF file format supports JavaScript with its own separate standard library. Modern browsers (Chromium, Firefox) implement this as part of their PDF engines. However, the APIs that are available in the browser are much more limited. The full specification for the JS in PDFs was only ever implemented by Adobe Acrobat, and it contains some ridiculous things like the ability to do 3D rendering, make HTTP requests, and detect every monitor connected to the user’s system. However, on Chromium and other browsers, only a tiny subset of this API was ever implemented, due to obvious security concerns. With this, we can do whatever computation we want, just with some very limited IO. ↫ LinuxPDF GitHub page I’m both impressed and concerned.

The GNU Guix System

GNU Guix is a package manager for GNU/Linux systems. It is designed to give users more control over their general-purpose and specialized computing environments, and make these easier to reproduce over time and deploy to one or many devices. ↫ GNU Guix website Guix is basically GNU’s approach to a reproducible, functional package manager, very similar to Nix because, well, it’s based on Nix. GNU also has a Linux distribution built around Guix, the GNU Guix System, which is fully ‘libre’ as all things GNU are, and also makes use of the GNU Shepherd init system. Both Shepherd and Guix make use of Guile. The last release of the GNU Guix System is a few years old already, but it’s a rolling release, so that’s not much of an issue. It uses the Linux kernel, but support for GNU Hurd is also being worked on, for whatever that’s worth. There’s also a third-party distribution that is built around the same projects, called rde. It focuses on being lightweight, ready for offline use, and keeping distractions to a minimum. It’s probably not suitable for most normal users, but if you’re a power user and you’re looking for something a little bit different, this could be for you. While it’s in active development, it’s considered usable and stable by its creators. I haven’t tried it yet, but I’m definitely intrigued by what it has to offer. Nix sucks up a lot of the attention in this space, so it’s interesting to see some of the alternatives that aim for similar goals.

Linux 6.14 with Rust: “We are almost at the ‘write a real driver in Rust’ stage now”

With the Linux 6.13 kernel, Greg Kroah-Hartman described the level of Rust support as a “tipping point” for Rust drivers with more of the Rust infrastructure having been merged. Now for the Linux 6.14 kernel, Greg describes the state of the Rust driver possibilities as “almost at the ‘write a real driver in rust’ stage now, depending on what you want to do.” ↫ Michael Larabel Excellent news, as there’s a lot of interest in Rust, and it seems that allowing developers to write drivers for Linux in Rust will make at least some new and upcoming drivers come with fewer memory safety issues than non-Rust drivers. I’m also quite sure this will anger absolutely nobody.

When a sole maintainer steps down, Linux drivers become orphans

The Linux kernel has become such an integral, core part of pretty much all aspects of the technology world, and corporate contributions to the kernel make up such a huge chunk of the kernel’s ongoing development, that it’s easy to forget some parts of the kernel are still maintained by some lone person in Jacksonville, Nebraska, or whatever. Sadly, we were reminded of this today when the sole maintainer of a few DRM (no, not the bad kind) drivers announced he can no longer maintain the gud, mi0283qt, panel-mipi-dbi, and repaper drivers. Remove myself as maintainer for gud, mi0283qt, panel-mipi-dbi and repaper. My fatigue illness has finally closed the door on doing development of even moderate complexity so it’s sad to let this go. ↫ Noralf Trønnes There must be quite a few obscure parts of the Linux kernel that are of no interest to the corporate world, and thus remain maintained by individuals in their free time, out of some personal need or perhaps a sense of duty. If one such person gives up their role as maintainer, for whatever reason, you better hope it’s not something your workflow relies on, because if no new maintainer is found, you will eventually run into trouble. I hope Trønnes gets better soon, and if not, that someone else can take over from him to maintain these drivers. The gud driver seems like a really neat tool for homebrew projects, and it’d be sad to see it languish as the years go by.

Linux 6.13 released

Linux 6.13 comes with the introduction of the AMD 3D V-Cache Optimizer driver for benefiting multi-CCD Ryzen X3D processors, the new AMD EPYC 9005 “Turin” server processors will now default to AMD P-State rather than ACPI CPUFreq for better power efficiency, the start of Intel Xe3 graphics bring-up, support for many older (pre-M1) Apple devices like numerous iPads and iPhones, NVMe 2.1 specification support, and AutoFDO and Propeller optimization support when compiling the Linux kernel with the LLVM Clang compiler. Linux 6.13 also brings more Rust programming language infrastructure and more. ↫ Michael Larabel A big release, with a ton of new features. It’ll make its way to your distribution soon enough.

A Microsoft change to the Linux kernel caused major breakage, but was caught in time

A change to the Linux 6.13 kernel contributed by a Microsoft engineer ended up changing Linux x86_64 code without proper authorization and in turn causing troubles for users and now set to be disabled ahead of the Linux 6.13 stable release expected next Sunday. ↫ Michael Larabel What I like about this story is that it seems to underline that the processes, checks, and balances in place in Linux kernel development are working – at least, this time. A breaking change was caught during the prerelease phase, and a fix has been merged to make sure the issue is resolved before the stable version of Linux 6.13 is released to the wider public. This all sounds great, but there is an element of this story that raises some serious questions. The change itself was related to EXECMEM_ROX, and was intended to improve performance of 64-bit AMD and Intel processors, but in turn, this new code broke Control Flow Integrity on some setups, causing some devices not to wake from hibernation while also breaking other features. What makes this spicy is that the code was merged without acknowledgement from any of the x86 kernel maintainers, which made a lot of people very unhappy – and understandably so. So while the processes and checks and balances worked here, something still went horribly wrong, as such changes should not be able to be merged without acknowledgement from maintainers. This now makes me wonder how many more times this has happened without causing any instantly discoverable issues. For now, some code has been added to revert the offending changes, and Linux 6.13 will ship with Microsoft’s bad code disabled.

Chimera Linux enters beta

We’ve talked about Chimera Linux before – it’s a unique Linux distribution that combines a BSD userland with the LLVM/Clang toolchain, and musl. Its init system is dinit, and it uses apk-tools from Alpine as its package manager. None of this has anything to do with being anti-anything; the choice of BSD’s tools and userland is mostly technical in nature. Chimera Linux is available for x86-64, AArch64, RISC-V, and POWER (both little and big endian). I am unreasonably excited for Chimera Linux, for a variety of reasons – first, I love the above set of choices they made, and second, Chimera Linux’ founder and lead developer, q66, is a well-known and respected name in this space. She not only founded Chimera Linux, but also used to maintain the POWER/PowerPC ports of Void Linux, which is the port of Void Linux I used on my POWER9 hardware. She apparently also contributed quite a bit to Enlightenment, and is currently employed by Igalia, through which she can work on Chimera. With the description out of the way, here’s the news: Chimera Linux has officially entered beta. Today we have updated apk-tools to an rc tag. With this, the project is now entering beta phase, after around a year and a half. In general, this does not actually mean much, as the project is rolling release and updates will simply keep coming. It is more of an acknowledgement of current status, though new images will be released in the coming days. ↫ Chimera Linux’s website Despite my excitement, I haven’t yet tried Chimera Linux myself, as I figured its pre-beta stage wasn’t meant for an idiot like me who can’t contribute anything meaningful, and I’d rather not clutter the airwaves. Now that it’s entered beta, I feel like the time is getting riper and riper for me to dive in, and perhaps write about it here. Since the goal of Chimera Linux is to be a general-purpose distribution, I think I’m right in the proper demographic of users. It helps that I’m about to set up my dual-processor POWER9 machine again, and I think I’ll be going with Chimera Linux. As a final note, you may have noticed I consistently refer to it as “Chimera Linux”. This is very much on purpose, as there’s also something called ChimeraOS, a more standard Linux distribution aimed at gaming. To avoid confusion, I figured I’d keep the naming clear and consistent.

T2 Linux takes weird architectures seriously, including my beloved PA-RISC

With more and more Linux distributions – as well as the kernel itself – dropping support for more exotic, often dead architectures, it’s a blessing T2 Linux exists. This unique, source-based Linux distribution focuses on making it as easy as possible to build a Linux installation tailored to your needs, and supports an absolutely insane number of architectures and platforms. In fact, calling T2 a “distribution” does it a bit of a disservice, since it’s much more than that. You may have noticed the banner at the top of OSNews, and if we somehow – unlikely! – manage to reach that goal before the two remaining new-in-box HP c8000 PA-RISC workstations on eBay are sold, my plan is indeed to run HP-UX as my only operating system for a week, because I like inflicting pain on myself. However, I also intend to use that machine to see just how far T2 Linux on PA-RISC can take me, and if it can make a machine like the c8000, which is plenty powerful with its two dual-core 1.0GHz PA-RISC processors, properly useful in 2024. T2 Linux 24.12 has just been released, and it brings with it the latest versions of the Linux kernel, gcc, LLVM/Clang, and so on. With T2 Linux, which describes itself as a System Development Environment, it’s very easy to spin up a heavily customised Linux installation fit for your purpose, targeting anything from absolutely resource-starved embedded systems to big hunks of, I don’t know, SPARC or POWER metal. If you’ve got hardware with a processor in it, you can most likely build T2 for it. The project also provides a large number of pre-built ISOs for a whole slew of supported architectures, sometimes further divided into glibc or musl, so you can quickly get started even without having to build something yourself. It’s an utterly unique project that deserves more attention than it’s getting, especially since it seems to be one of the last Linux “distributions” that takes supporting weird platforms out-of-the-box seriously. Think of it as the NetBSD of the Linux world, and I know for a fact that there’s a very particular type of person to whom that really appeals.

QEMU with VirtIO GPU Vulkan support

With its latest release, qemu added the Venus patches so that virtio-gpu now supports Venus encapsulation for Vulkan. This is one more piece of the puzzle towards full Vulkan support. An outdated blog post on Collabora described in 2021 how to enable 3D acceleration of Vulkan applications in QEMU through the Venus experimental Vulkan driver for VirtIO-GPU with a local development environment. Following up on the outdated write-up, this is how it’s done today. ↫ Pepper Gray A major milestone, and for the adventurous, you can get it working today. Give it a few more months, and many of the versions required will be part of your distribution’s package repositories, making the process a bit easier. On a related note, Linux kernel developers are considering removing 32-bit KVM host support for all architectures that still offer it – PowerPC, MIPS, RISC-V, and x86 – because nobody is using this functionality. This support was dropped from 32-bit ARM a few years ago, and the remaining architectures mentioned above have orders of magnitude fewer users still. If nobody is using this functionality, it really makes no sense to keep it around, hence the calls to remove it. In other words, if your custom workflow of opening your garage door through your fridge light’s flicker frequency and the alignment of the planets and custom scripts on a Raspberry Pi 2 requires this support, let the kernel developers know, or forever hold your peace.
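
If you do go down this road, one simple sanity check inside the guest is to enumerate the Vulkan physical devices and see whether something Venus/VirtIO-GPU shaped shows up instead of just a software rasterizer. The snippet below is plain, generic Vulkan API usage and assumes the guest has the Vulkan loader and headers installed; it isn’t specific to QEMU at all.

```c
/* List Vulkan physical devices inside the guest; with a working
 * virtio-gpu + Venus setup you would expect a device whose name mentions
 * Venus or Virtio-GPU rather than only a software rasterizer.
 * Assumed build: cc venus-check.c -lvulkan -o venus-check */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "venus-check",
        .apiVersion = VK_API_VERSION_1_1,
    };
    VkInstanceCreateInfo ici = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };

    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "no usable Vulkan implementation found\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    if (count == 0)
        puts("Vulkan loader present, but no physical devices reported");

    VkPhysicalDevice devices[16];
    if (count > 16)
        count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("device %u: %s\n", i, props.deviceName);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```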

Using (only) a Linux terminal for my personal computing in 2024

A month and a bit ago, I wondered if I could cope with a terminal-only computer. The only way to really find out was to give it a go. My goal was to see what it was like to use a terminal-only computer for my personal computing for two weeks, and more if I fancied it. ↫ Neil’s blog I tried to do this too, once. Once. Doing everything from the terminal just isn’t viable for me, mostly because I didn’t grow up with it. Our family’s first computer ran MS-DOS (with a Windows 3.1 installation we never used), and I’m pretty sure the experience of using MS-DOS as my first CLI ruined me for life. My mental model for computing didn’t start forming properly until Windows 95 came out, and as such, computing is inherently graphical for me, and no matter how many amazing CLI and TUI applications are out there – and there are many, many amazing ones – my brain just isn’t compatible with it. There are a few tasks I prefer doing with the command line, like updating my computers or editing system files using Nano, but for everything else I’m just faster and more comfortable with a graphical user interface. This comes down to not knowing most commands by heart, and often not even knowing the options and flags for the most basic of commands, meaning even very basic operations that people comfortable using the command line do without even thinking, take me ages. I’m glad any modern Linux distribution – I use Fedora KDE on all my computers – offers both paths for almost anything you could do on your computer, and unless I specifically opt to do so, I literally – literally literally – never have to touch the command line.

What’s in a Steam Deck kernel anyway?

Valve, entirely going against the popular definition of Vendor, is still actively working on improving and maintaining the kernel for their Steam Deck hardware. Let’s see what they’re up to in this 6.8 cycle. ↫ Samuel Dionne-Riel Just a quick look at what, exactly, Valve does with the Steam Deck Linux kernel – nothing more, nothing less. It’s nice to have simple, straightforward posts sometimes.

Linux to lose support for Apple and IBM’s failed PowerPC Common Hardware Reference Platform

Ah, the Common Hardware Reference Platform, IBM’s and Apple’s ill-fated attempt at taking on the PC market with a reference PowerPC platform anybody could build and expand upon while remaining (mostly) compatible with one another. Sadly, like so many other things Apple was trying to do before Steve Jobs returned, it never took off, and even Apple itself never implemented CHRP in any meaningful way. Only a few random IBM and Motorola computers ever fully implemented it; Apple didn’t get any further than basic CHRP support in Mac OS 8, and some PowerPC Macs were based on CHRP without actually being compatible with it. We’re roughly three decades down the line now, and pretty much everyone except weird nerds like us has forgotten CHRP was ever even a thing, but Linux has continued to support CHRP all this time. This support, too, is now coming to an end, as Michael Ellerman has informed the Linux kernel community that they’re thinking of getting rid of it. Only a very small number of machines are supported by CHRP in Linux: the IBM B50, bplan/Genesi’s Pegasos/Pegasos2 boards, the Total Impact briQ, and maybe some Motorola machines, and that’s it. Ellerman notes that these machines seem to have zero active users, and anyone wanting to bring CHRP support back can always go back in the git history. CHRP is one of the many, many footnotes in computing history, and with so few machines out there that supported it, and so few machines Linux’ CHRP support could even be used for, it makes perfect sense to remove this from the kernel, while obviously keeping it in git’s history in case anyone wants to work with it on their hardware in the future. Still, it’s always fun to see references to such old, obscure hardware and platforms in 2024, even if it’s technically sad news.

Linux 6.12 released

Version 6.12 of the Linux kernel has been released. The main feature consists of the merger of the real-time PREEMPT_RT scheduler, most likely one of the longest-running merger sagas in Linux’ history. This means that Linux now fully supports both soft and hard real-time capabilities natively, which is a major step forward for the platform, especially when looking at embedded development. It’s no longer necessary to pull in real-time support from outside the kernel. Linux 6.12 also brings a huge number of improvements for graphics drivers, for both Intel and AMD’s graphics cards. With 6.12, Linux now supports the Intel Xe2 integrated GPU as well as Intel’s upcoming discrete “Battlemage” GPUs by default, and it contains more AMD RDNA4 support for those upcoming GPUs. DRM panic messages in 6.12 will show a QR code you can scan for more information, a feature written in Rust, and initial support for the Raspberry Pi 5 finally hit mainline too. Of course, there’s a lot more in here, like the usual LoongArch and ARM improvements, new drivers, and so on. And if you’re a regular Linux user, you’ll see 6.12 make it to your distribution within a few weeks or months.
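
As for what the PREEMPT_RT merge means in practice: the opt-in on the userspace side hasn’t changed – a real-time task still locks its memory and requests a real-time scheduling class – but a stock kernel can now actually honour it with hard real-time behaviour instead of needing an out-of-tree patch set. A minimal sketch of that standard boilerplate follows, assuming the process has CAP_SYS_NICE or a suitable RLIMIT_RTPRIO.

```c
/* Typical setup for a userspace real-time task; with PREEMPT_RT merged,
 * a stock 6.12 kernel can honour this with deterministic latencies.
 * Needs CAP_SYS_NICE or an appropriate RLIMIT_RTPRIO. */
#include <stdio.h>
#include <sched.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Avoid page faults in the time-critical path. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* Ask for a fixed-priority real-time scheduling class. */
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 80;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler(SCHED_FIFO)");
        return 1;
    }

    puts("running under SCHED_FIFO; time-critical work would go here");
    return 0;
}
```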

Improving Steam Client stability on Linux: setenv and multithreaded environments

Speaking of Steam, the Linux version of Valve’s gaming platform has just received a pretty substantial set of fixes for crashes, and Timothee “TTimo” Besset, who works for Valve on Linux support, has published a blog post with more details about what kind of crashes they’ve been fixing. The Steam client update on November 5th mentions “Fixed some miscellaneous common crashes.” in the Linux notes, which I wanted to give a bit of background on. There’s more than one fix that made it in under the somewhat generic header, but the one change that made the most significant impact to Steam client stability on Linux has been a revamping of how we are approaching the setenv and getenv functions. One of my colleagues rightly dubbed setenv “the worst Linux API”. It’s such a simple, common API, available on all platforms, that it was a little difficult to convince ourselves just how bad it is. I highly encourage anyone who writes software that will run on Linux at some point to read through “RachelByTheBay”‘s very engaging post on the subject. ↫ Timothee “TTimo” Besset This indeed seems to be a specific Linux problem, and due to the variability in Linux systems – different distributions, extensive user customisation, and so on – debugging information was more difficult to parse than on Windows and macOS. After a lot of work grouping the debug information to try and make sense of it all, it turned out that the two functions in question were causing issues in threads other than those that used them. They had to resort to several solutions, from reducing the reliance on setenv and refactoring it with execvpe, to reducing the reliance on getenv through caching, to introducing “an ‘environment manager’ that pre-allocates large enough value buffers at startup for fixed environment variable names, before any threading has started”. It was especially this last one that had a major impact on reducing the number of crashes with Steam on Linux. Besset does note that these functions are still used far too often, but that at this point it’s out of their control because that usage comes from the libraries of the operating system, like x11, xcb, dbus, and so on. Besset also mentions that it would be much better if this issue could be addressed in glibc, and in the comments, a user by the name of Adhemerval reports that this is indeed something the glibc team is working on.
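
For anyone who hasn’t run into this class of bug before: the hazard is that setenv may rewrite or reallocate the environment while another thread is still holding a pointer it got from getenv, and glibc has historically not protected against that. The deliberately simplified sketch below (my own illustration, not Valve’s code) shows the pattern – it is undefined behaviour, so it may crash, print garbage, or appear to work, which is precisely what makes it so nasty to debug in the field.

```c
/* Demonstration of why setenv/getenv are dangerous in threaded programs:
 * one thread keeps mutating the environment while another reads it.
 * This is undefined behaviour -- it may crash, print garbage, or seem to
 * work fine for years. Simplified illustration, not Valve's actual code.
 * Build: cc setenv-race.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void *writer(void *arg)
{
    (void)arg;
    char value[64];
    for (int i = 0; ; i++) {
        /* Varying-length values keep the variable's storage churning. */
        snprintf(value, sizeof(value), "value-%d-%*d", i, i % 40, i);
        setenv("DEMO_VAR", value, 1);
    }
    return NULL;
}

static void *reader(void *arg)
{
    (void)arg;
    for (;;) {
        const char *v = getenv("DEMO_VAR");
        /* The pointer returned by getenv may be invalidated by the writer
         * between the call above and the strlen below. */
        if (v)
            (void)strlen(v);
    }
    return NULL;
}

int main(void)
{
    pthread_t w, r;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, NULL);
    /* Runs until interrupted; try it under helgrind or a sanitizer to see
     * the race flagged even when it doesn't crash outright. */
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}
```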