
Ironclad 0.6 released

It’s been a while, but there’s a new release of Ironclad, the formally verified, hard real-time capable kernel written in SPARK and Ada. Aside from the usual bugfixes, this release moves Ironclad from Multiboot to Limine, adds x86_64 ACPI support for poweroff and reboot, and brings improvements to PTY support, the VFS layer, and much more.

The easiest way to try out Ironclad is to download Gloire, a distribution that uses Ironclad and the GNU tools. It can be installed in both a virtual machine and on real hardware.

A look at Firefox forks

Mozilla’s actions have been rubbing many Firefox fans the wrong way as of late, and inspiring them to look for alternatives. There are many choices for users who are looking for a browser that isn’t part of the Chrome monoculture but is full-featured and suitable for day-to-day use. For those who are willing to stay in the Firefox “family” there are a number of good options that have taken vastly different approaches. This includes GNU IceCat, Floorp, LibreWolf, and Zen.

↫ Joe Brockmeier

It’s a tough situation, as we’re all aware. We don’t want the Chrome monoculture to get any worse, but with Mozilla’s ever-increasing number of dubious decisions – decisions some people have been warning about for years – it’s only natural for people to look elsewhere. Once you decide to drop Firefox, there’s really nowhere else to go but Chrome and Chrome skins, or the various Firefox skins. As an aside, I really don’t think these browsers should be called Firefox “forks”; all they really do is change some default settings, add in an extension or two, and make some small UI tweaks. They may qualify as forks in a technical sense, but I think that overstates the differentiation they offer.

Late last year, I tried my best to switch to KDE’s Falkon web browser, but after a few months the issues, niggles, and shortcomings just started to get under my skin. I switched back to Firefox for a little while, contemplating where to go from there. Recently, I decided to hop onto the Firefox skin train just to get rid of some of the Mozilla telemetry and useless ‘features’ they’ve been adding to Firefox, and after some careful consideration I decided to go with Waterfox.

Waterfox strikes a nice balance between the strict choices of LibreWolf – which most users of LibreWolf seem to undo, if my timeline is anything to go by – and the choices Mozilla itself makes. On top of that, Waterfox enables a few very nice KDE integrations Firefox itself and the other Firefox skins don’t have, making it a perfect choice for KDE users. Sadly, Waterfox isn’t packaged for most Linux distributions, so you’ll have to resort to a third-party packager.

In the end, none of the Firefox skins really address the core problem, as they’re all still just Firefox. The problem with Firefox is Mozilla, and no amount of skins is going to change that.

Google makes Vulkan the official graphics API for Android

Google’s biggest announcement today, at least as it pertains to Android, is that the Vulkan graphics API is now the official graphics API for Android. Vulkan is a modern, low-overhead, cross-platform 3D graphics and compute API that provides developers with more direct control over the GPU than older APIs like OpenGL. This increased control allows for significantly improved performance, especially in multi-threaded applications, by reducing CPU overhead. In contrast, OpenGL is an older, higher-level API that abstracts away many of the low-level details of the GPU, making it easier to use but potentially less efficient. Essentially, Vulkan prioritizes performance and explicit hardware control, while OpenGL emphasizes ease of use and cross-platform compatibility.

↫ Mishaal Rahman at Android Authority

Android has supported Vulkan since Android 7.0, released in 2016, so it’s not like we’re looking at something earth-shattering here. The issue has been, as always with Android, fragmentation: it’s taken this long for about 85% of Android devices currently in use to support Vulkan in the first place. In other words, Google might’ve wanted to standardise on Vulkan much sooner, but if only a relatively small number of Android devices support it, that’s going to be a hard sell.

In any event, from here on out, every application or game that wants to use the GPU on Android will have to do so through Vulkan, including everything inside Android. It’s still going to be a long process, though, as the requirement to use Vulkan will not fully come into effect until Android 17, and even then there will be exceptions for certain applications. Android tends to implement changes like this in phases, and the move to Vulkan is no different.

All of this does mean that older devices with GPUs that do not support Vulkan, or at least not properly, will not be able to be updated to the Vulkan-only releases of Android, but let’s be real here – those kinds of devices were never going to be updated anyway.
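To give a rough idea of what that “explicit hardware control” looks like in practice, here’s a minimal sketch – mine, not from the article – of bringing up a Vulkan instance in C and counting the GPUs it exposes. Even this very first step requires spelling out the application info and API version that OpenGL would handle implicitly, and it assumes the Vulkan loader and headers are installed.

```c
/* Minimal sketch (not from the article): create a Vulkan instance and
 * count the physical devices it exposes. Assumes the Vulkan loader and
 * headers are installed; error handling is kept to the bare minimum. */
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void)
{
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "hello-vulkan",
        .apiVersion = VK_API_VERSION_1_1,
    };
    VkInstanceCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };

    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "no Vulkan instance available\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    printf("Vulkan physical devices: %u\n", count);

    vkDestroyInstance(instance, NULL);
    return 0;
}
```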

A more robust raw OpenBSD syscall demo

Ted Unangst published dude, where are your syscalls? on flak yesterday, with a neat demonstration of OpenBSD’s pinsyscall security feature, whereby only pre-registered addresses are allowed to make system calls. Whether it strengthens or weakens security is up for debate, but regardless it’s an interesting, low-level programming challenge. The original demo is fragile for multiple reasons, and requires manually locating and entering addresses for each build. In this article I show how to fix it. To prove that it’s robust, I ported an entire, real application to use raw system calls on OpenBSD.

↫ Chris Wellons
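If you want a rough idea of what a “raw system call” even looks like before diving in, here’s a minimal sketch of my own – not taken from either article – of a write(2) issued directly with the syscall instruction on OpenBSD/amd64. On a current OpenBSD system, the kernel only accepts system calls from pre-registered addresses, so a call site like this outside libc would normally get the process killed; that restriction is exactly what Wellons’ more robust demo has to accommodate.

```c
/* Minimal sketch (assumption: OpenBSD/amd64, GCC or Clang inline asm;
 * not code from the linked articles): issue write(2) directly via the
 * syscall instruction. With pinned syscalls, a call site like this
 * outside libc is normally refused by the kernel. */
#include <sys/syscall.h>   /* SYS_write */
#include <stddef.h>

static long
raw_write(int fd, const void *buf, size_t len)
{
    register long rax __asm__("rax") = SYS_write;
    register long rdi __asm__("rdi") = fd;
    register long rsi __asm__("rsi") = (long)buf;
    register long rdx __asm__("rdx") = (long)len;

    __asm__ volatile ("syscall"
                      : "+r"(rax)
                      : "r"(rdi), "r"(rsi), "r"(rdx)
                      : "rcx", "r11", "memory");

    return rax;   /* errors are signalled via the carry flag; ignored here */
}

int
main(void)
{
    raw_write(1, "hello from a raw syscall\n", 25);
    return 0;
}
```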

Some light reading for the weekend.

Musk’s Tesla warns Trump’s tariffs and trade wars will harm Tesla

Elon Musk’s Tesla is waving a red flag, warning that Donald Trump’s trade war risks dooming US electric vehicle makers, triggering job losses, and hurting the economy.

In an unsigned letter to the US Trade Representative (USTR), Tesla cautioned that Trump’s tariffs could increase costs of manufacturing EVs in the US and forecast that any retaliatory tariffs from other nations could spike costs of exports.

↫ Ashley Belanger at Ars Technica

Back in 2020, scientists at the University of Twente, The Netherlands, created the smallest string instrument that can produce tones audible to human ears when amplified. Its strings were a mere micrometer thick – one millionth of a meter – and about half a millimeter to one millimeter long. Using a system of tiny weights and combs to produce minute vibrations, it can create tones.

And yet, this tiny violin still isn’t small enough for Tesla.

Haiku gets new malloc implementation, removes Gopher support from its browser

We’ve got the Haiku activity report covering February, and aside from the usual slew of bug fixes and minor improvements, there’s one massive improvement that deserves attention.

waddlesplash continued his ongoing memory management improvements, fixes, and cleanups, implementing more cases of resizing (expanding/shrinking) memory areas when there’s a virtual memory reservation adjacent to them (and writing tests for these cases) in the kernel. These changes were the last remaining piece needed before the new malloc implementation for userland (mostly based on OpenBSD’s malloc, but with a few additional optimizations and a Haiku-specific process-global cache added) could be merged and turned on by default. There were a number of followup fixes to the kernel and the new allocator’s “glue” and global caching logic since, but the allocator has been in use in the nightlies for a few weeks with no serious issues. It provides modest performance improvements over the old allocator in most cases, and in some cases that were pathological for the old allocator (GCC LTO appears to have been one), provides order-of-magnitude (or more) performance improvements.

↫ waddlesplash on the Haiku website

Haiku also continues replacing implementations of standard C functions with those from musl, can now be built on FreeBSD and on Linux distributions that use musl, and disabled the C5/C6 C-states on Intel Skylake to fix boot problems on that platform, among many, many other changes. There’s also bad news for fans of Gopher: support for the protocol was removed from WebPositive, Haiku’s native web browser.

WinRing0: why Windows is flagging your PC monitoring and fan control apps as a threat

When I checked where Windows Defender had actually detected the threat, it was in the Fan Control app I use to intelligently cool my PC. Windows Defender had broken it, and that’s why my fans were running amok. For others, the threat was detected in Razer Synapse, SteelSeries Engine, OpenRGB, Libre Hardware Monitor, CapFrameX, MSI Afterburner, OmenMon, FanCtrl, ZenTimings, and Panorama9, among many others.

“As of now, all third-party/open-source hardware monitoring softwares are screwed,” Fan Control developer Rémi Mercier tells me.

↫ Sean Hollister at The Verge

Anyone reading OSNews can probably solve this puzzle. Many fan control and hardware monitoring applications for Windows make use of the same open source driver: WinRing0. Uniquely, this kernel-level driver is signed, since it’s from back in the days when developers could self-sign these sorts of drivers, but the signed version has a known vulnerability that’s quite dangerous considering it’s a kernel-level driver. The vulnerability has been fixed, but signing this new version – and keeping it signed – is a big ordeal and quite expensive, since these days, drivers have to be signed by Microsoft.

And it just so happens that Windows Defender has started marking this driver, and thus any tool that uses it, as dangerous, sending it to quarantine. The result is failing hardware monitoring and fan control applications for quite a few Windows users. Some companies have invested in developing their own closed-source alternatives, but they’re not sharing them. Luckily, Windows OEM iBuyPower says it’s trying to get the patched version of WinRing0 signed, and if that happens, they will share it back with the community. Classy.

For now, though, hardware monitoring and fan control on Windows might be a bit of an ordeal.

KDE splits KWin into kwin_x11 and kwin_wayland

One of the biggest behind-the-scenes changes in the upcoming Plasma 6.4 release is the split of kwin_x11 and kwin_wayland codebases. With this blog post, I would like to delve in what led us to making such a decision and what it means for the future of kwin_x11.

↫ Vlad Zahorodnii

For the most part, this change won’t mean much for users of KWin on either Wayland or X11, at least for now. For the remainder of the Plasma 6.x life cycle, kwin_x11 will be maintained, and despite the split, you can continue to have both kwin_x11 and kwin_wayland installed and use them interchangeably. Don’t expect any new features, though; kwin_x11 will get the usual bug fixes, some backports, and they’ll make sure it keeps working with any new KDE frameworks introduced during the 6.x cycle, but that’s all you’re going to get if you’re using KDE on X11.

There’s one area where this split might cause problems, though, and that’s if you’re using a particular type of KWin extension. While KWin extensions written in JavaScript and QML are backend agnostic and can be used without issues on both variants of KWin, extensions written in C++ are not. These extensions need to be coded specifically for either kwin_x11 or kwin_wayland, and with Wayland being the default for KDE, this may mean some of these extensions will leave X11 users behind to reduce the maintenance burden.

It seems that very few people are still using KDE on X11, and kwin_x11 doesn’t receive much testing anymore, so it makes sense to start preparations for the inevitable deprecation. While I think the time of X11 on Linux has come and gone, it’s unclear what this will mean for KDE on the BSDs. While Wayland is available on all of the BSDs in varying states of maturity, I honestly don’t know if they’re ready for a Wayland-only KDE at this point in time.

Iconography of the PuTTY tools

Ah, PuTTY. Good old reliable PuTTY. This little tool is one of those cornerstone applications in the toolbox of most of us, without any fuss, without any upsells or anti-user nonsense – it just does its job, and it has been doing its job for 30 years. Have you ever wondered, though, where PuTTY’s icons come from, how they were made, and how they evolved over time?

PuTTY’s icon designs date from the late 1990s and early 2000s. They’ve never had a major stylistic redesign, but over the years, the icons have had to be re-rendered under various constraints, which made for a technical challenge as well.

↫ Simon Tatham

The icons have basically not changed since the late ’90s, and I think that’s incredibly fitting for the kind of tool PuTTY is. It turns out people actually offer to redesign all the icons in a modern style, but that’s not going to happen.

People sometimes object to the entire 1990s styling, and volunteer to design us a complete set of replacements in a different style. We’ve never liked any of them enough to adopt them. I think that’s probably because the 1990s styling is part of what makes PuTTY what it is – “reassuringly old-fashioned”. I don’t know if there’s any major redesign that we’d really be on board with.

↫ Simon Tatham

Amen.

Ubuntu to replace classic coreutils and more with new Rust-based alternatives

After so much terrible tech politics news, let’s focus on some nice, easy-going Linux news that’s not going to be controversial at all: Ubuntu intends to replace numerous core Linux utilities with newer Rust replacements, starting with the ubiquitous GNU Coreutils.

This package provides utilities which have become synonymous with Linux to many – the likes of ls, cp, and mv. In recent years, there has been an effort to reimplement this suite of tools in Rust, with the goal of reaching 100% compatibility with the existing tools. Similar projects, like sudo-rs, aim to replace key security-critical utilities with more modern, memory-safe alternatives.

Starting with Ubuntu 25.10, my goal is to adopt some of these modern implementations as the default. My immediate goal is to make uutils’ coreutils implementation the default in Ubuntu 25.10, and subsequently in our next Long Term Support (LTS) release, Ubuntu 26.04 LTS, if the conditions are right.

↫ Jon Seager

Obviously, this is a massive change for Ubuntu, and while performance is one of the cited reasons for undertaking this effort, the biggest reason is, of course, security. To aid in the testing effort, Seager created a tool called oxidizr, with which you can swap between the classic versions and the new Rust versions of various tools to try them out in a non-destructive way.

This is a massive vote of confidence in uutils, and I’m curious to see if it works out for Ubuntu. I doubt it’s going to take long before other prominent distributions follow suit.

Chimera Linux drops RISC-V support because capable RISC-V hardware doesn’t exist

We’ve talked about Chimera Linux a few times now on OSNews, so I won’t be repeating what makes it unique once more. The project announced today that it will be shuttering its RISC-V architecture support, and considering RISC-V has been supported by Chimera Linux pretty much since the beginning, this is a big step. The reason is as sad as it is predictable: there’s simply no RISC-V hardware out there fit for the purpose of building a Linux distribution and all of its packages.

Up until this point, Chimera Linux built its RISC-V variant “on an x86_64 machine with qemu-user binfmt emulation coupled with transparent cbuild support”. This setup has various problems: serious reliability issues, no way to test packages, and poor performance. It was intended as a temporary solution until proper, performant RISC-V hardware became available, but that simply hasn’t happened, and it doesn’t seem like it’s going to change soon.

Most of the existing RISC-V hardware options simply lack the performance to be used as build machines (think Raspberry Pi 3/4 levels of performance), making them even slower than the emulation setup currently in use. The only machine that would, in theory, be performant enough to serve as a build machine is the Milk-V Pioneer, but it has other serious problems, as the project notes:

Milk-V Pioneer is a board with 64 out-of-order cores; it is the only of its kind, with the cores being supposedly similar to something like ARM Cortex-A72. This would be enough in theory, however these boards are hard to get here (especially with Sophgon having some trouble, new US sanctions, and Mouser pulling all the Milk-V products) and from the information that is available to me, it is rather unstable, receives very little support, and is ridden with various hardware problems.

↫ Chimera Linux website

So, not only is the Milk-V Pioneer difficult to get due to, among other things, US sanctions, it’s also not very stable and receives very little support. Aside from the Pioneer and the various slow and therefore unsuitable options, there’s no performant RISC-V hardware in the pipeline either, making it quite difficult to support the architecture. Of course, this could always change in the future, but for now, supporting RISC-V is clearly not an option for Chimera Linux.

This is clearly sad news, especially for those of us hoping RISC-V becomes an open source hardware platform that we can use every day, and I wonder how many other projects are dealing with the same problem.

‘I feel utter anger’: from Canada to Europe, a movement to boycott US goods is spreading

In Canada, where the American national anthem has been booed during hockey matches with US teams, a slew of apps has emerged with names such as “buy beaver”, “maple scan” and “is this Canadian” to allow shoppers to scan QR barcodes and reject US produce from alcohol to pizza toppings.

[…]

In Sweden, more than 70,000 users have joined a Facebook group calling for a boycott of US companies – ironically including Facebook itself – which features alternatives to US consumer products.

[…]

In Denmark, where there has been widespread anger over Trump’s threat to bring the autonomous territory of Greenland under US control, the largest grocery company, the Salling group, has said it will tag European-made goods with a black star to allow consumers to choose them over products made in the US.

↫ Peter Beaumont at the Guardian

These are just a few of the examples of a growing interest in places like Canada and Europe to boycott American products to the best of one’s ability. It’s impossible to boycott everything coming from a certain country – good luck finding a computer without American software and/or hardware, for instance – but these small acts of disapproval and resistance allow people to vent their anger. It’s clearly already having an effect on Tesla, whose sales have completely collapsed in Europe, so much so that the president of the United States has to do his best Billy Mays impression in front of the White House to help his buddy sell cars.

Very classy.

With the United States threatening war on Canada, Greenland and Denmark, and Panama, it’s only natural for citizens of those countries, as well as those of their close friends, to want to do something, and being more mindful of what you spend your money on is a tried and true way to do that. Technology can definitely help here, as we’ve talked about before, and as shown in the linked article. While no tool to determine a product’s place of origin will ever be perfect, it can certainly help you avoid products you don’t want to buy.

I can only hope this doesn’t get even more out of hand than it already has. The United States started a trade war with the European Union today as well, and of course, the EU retaliated. I doubt the average person has any clue just how intertwined the global economy and supply chains are, and that the only people paying for this are people like you and me. The tech billionaires and career politicians won’t be the ones screwed over by surging prices of basic necessities because of tariffs, and it won’t be the children of the rich and powerful being sent to war with Canada or Panama or whatever.

The very companies that OSNews has reported on for almost 30 years are the ones pushing and enabling most of this vile nonsense, so yes, you will be seeing items about this here, whether you and I like it or not. Only cowards and the privileged have the luxury of ignoring what the United States is doing right now.

Tech execs are pushing Trump to build ‘Freedom Cities’ run by corporations

A new lobbying group, dubbed the Freedom Cities Coalition, wants to convince President Trump and Congress to authorize the creation of new special development zones within the U.S. These zones would allow wealthy investors to write their own laws and set up their own governance structures which would be corporately controlled and wouldn’t involve a traditional bureaucracy. The new zones could also serve as a testbed for weird new technologies without the need for government oversight.

↫ Lucas Ropek

I mean, just in case you weren’t convinced yet these people are utterly insane.

This is the kind of nonsensical libertarian Ayn Rand-inspired wank material dystopian fiction draws a lot of inspiration from, and it never ever ends well for anyone involved, especially not for the poor and lower classes inhabiting such places, because they’re supposed to be warnings, not instruction manuals. The fact that this insipid brand of utter stupidity is even considered by a president of the United States in this day and age should be all the proof you need that he and those around him have the moral compass of the rotting carcass of Margaret Thatcher.

I can’t believe we have to tell these Silicon Valley “geniuses” that lawless corporate towns are bad. In 2025.

The fascist tech bro takeover is here

The future of the United States is no longer decided in Washington. That ship has sailed. It’s now dictated in the bunkers, private jets, and compounds of an ideological Silicon Valley, by billionaires and wealth extremists intent on treating democracy as a nuisance that must be swatted away. These men – raised on a rabid press that mythologized their existence in their lifetimes, called them Wunderkind and treated them as something above and beyond mere mortality – have consumed a steady diet of libertarian and authoritarian fan fiction and conceived a new order, designed to elevate their lofty egos at any and all cost.

The Internet was supposed to be the great equalizer. It was meant to be a force that shattered hierarchies and gave power to ordinary people. Instead, it enabled the wealth extraction and avarice of a cartel of overfed, over-pampered despots who enriched themselves in the name of innovation, bled the world to the point of near-total collapse, intellectualized their power fetish and now view public institutions as the final obstacles to be dismantled in their megalomanic pursuit of More.

↫ Joan Westenberg

The US has only itself to blame. Let’s hope they don’t drag the rest of us with them.

EU-US rift triggers call for made-in-Europe tech

The utter chaos in the United States and the country’s antagonistic, erratic, and often downright hostile approach to what used to be its allies has not gone unnoticed, and it seems it’s finally creating some urgency in an area in which people have been fruitlessly advocating for urgency for years: digital independence from US tech giants.

Efforts to make Europe more technologically “sovereign” have gone mainstream. The European Commission now has its first-ever “technology sovereignty” chief, Henna Virkkunen. Germany’s incoming ruling party, the center-right Christian Democratic Union, called for “sovereign” tech in its program for the February election.

“Mounting friction across the Atlantic makes it clearer than ever that Europe must control its own technological destiny,” said Francesca Bria, an innovation professor at University College London and former president of Italy’s National Innovation Fund.

↫ Pieter Haeck at Politico

This should’ve been a primary concern for decades, as many have long argued. Those calls usually fell on deaf ears, as relying on Google, Microsoft, Amazon, and other US tech giants was simply the cheapest option for EU governments and corporations alike. However, now that the US is suffering under a deeply dysfunctional, anti-EU regime, the chickens are coming home to roost, and it’s dawning on European politicians and business leaders alike that relying on US corporations that openly and brazenly cheer on the Trump/Elon regime might’ve been a bad idea.

To the surprise of nobody with more than two brain cells.

It’s going to take a long, long time for this situation to get any better. Europe simply doesn’t have any equivalents to the services offered by companies like Google, Amazon, and Microsoft, and even if it does, certainly not at their scale. Building up the resources these US companies offer is going to take a long time, and it won’t be cheap, making it hard to sell such moves to voters and shareholders alike, neither of whom is exactly known for their long-term views on such complex matters.

Still, it seems consumers in the EU might be more receptive to messages of digital independence from the US than ever before. Just look at how hard Tesla is tanking all over Europe, part of which can definitely be attributed to Europeans not wanting to buy any products from a man openly insulting and lying about European elected officials. If this groundswell of sentiment spreads, I can definitely see European politicians tapping into it to sell massive investments in digital independence.

Personally, I think banning Twitter and Facebook from operating in the EU should be step one, as their owners have made it very clear they have no issue with illegal election interference and nazi propaganda, followed by massive investments in alternatives to the services offered by the US big tech companies. China has been doing this for a long time now, and Europe should follow in its footsteps. There are enough bases to work from – from open source non-Google Android smartphones to EU-based Linux distributions for everything from desktops to server farms, and countless other open source services – so it’s not like we have to start from nothing.

If we can spend €800 billion to finally get EU defense up to snuff, we should be able to spare something for digital independence, too.

A 10x Faster TypeScript

To meet those goals, we’ve begun work on a native port of the TypeScript compiler and tools. The native implementation will drastically improve editor startup, reduce most build times by 10x, and substantially reduce memory usage. By porting the current codebase, we expect to be able to preview a native implementation of tsc capable of command-line typechecking by mid-2025, with a feature-complete solution for project builds and a language service by the end of the year.

↫ Anders Hejlsberg

It seems Microsoft is porting TypeScript to Go, and will eventually offer both “TypeScript (JS)” and “TypeScript (native)” alongside one another during a transition period. TypeScript 6.x will be the JavaScript-based one and will continue to be developed until TypeScript 7.0, the Go-based one, is mature enough. During the 6.x release cycle, however, there will be breaking changes and deprecations in preparation for 7.0.

Those are some serious performance improvements, but I’m sure quite a few projects are going to run into issues during the transition period. I hope, for their sake, that the 6.x branch remains maintained long enough to reasonably get everyone on board with the new Go version.

Notes from setting up GlobalTalk using QEMU on Ubuntu

I signed up for GlobalTalk in 2024, but never found the time to get a machine set up. Fast-forward to MARCHintosh 2025 and I wasn’t going to let another year go by. This is a series of notes from my experience getting System 7.6 up and running on QEMU 68k on Ubuntu. Hopefully this will help others that might be hitting a roadblock. I certainly hit several!

↫ Cale Mooth

A short and to-the-point guide for those of us who want to partake in GlobalTalk but can’t due to the lack of compatible hardware.

Exploring the (discontinued) hybrid Debian GNU/kFreeBSD distribution

For decades, Linux and BSD have stood as two dominant yet fundamentally different branches of the Unix-like operating system world. While Linux distributions, such as Debian, Ubuntu, and Fedora, have grown to dominate the open-source ecosystem, BSD-based systems like FreeBSD, OpenBSD, and NetBSD have remained the preferred choice for those seeking security, performance, and licensing flexibility. But what if you could combine the best of both worlds—Debian’s vast package ecosystem with FreeBSD’s robust and efficient kernel?

Enter Debian GNU/kFreeBSD, a unique experiment that merges Debian’s familiar userland with the FreeBSD kernel, offering a hybrid system that takes advantage of FreeBSD’s technical prowess while maintaining the ease of use associated with Debian. This article dives into the world of Debian GNU/kFreeBSD, exploring its architecture, installation, benefits, challenges, and real-world applications.

↫ George Whittaker

More of a list of upsides and downsides than an actual in-depth article, but that doesn’t make it any less interesting. There’s a variety of attempts out there to somehow marry the Linux and BSD worlds, and each of them takes a unique approach. I’m not sure the Debian userland with a FreeBSD kernel is the way to go, though, and it seems I’m not alone – Debian GNU/kFreeBSD was officially dropped from Debian in 2015 or so, and after a flurry of unofficial activity in 2019, it was discontinued completely in 2023 due to a lack of activity and developer interest. Odd that the source article doesn’t mention that.

If you’re still interested in a combination of Linux and BSD, I’d keep an eye on Chimera Linux instead. It’s very actively developed, focuses on portable code by supporting many architectures, and its developers are veterans in this space. I have my eye on Chimera Linux as my future distribution of choice.

Brother denies using firmware updates to brick printers with third-party ink

Brother laser printers are popular recommendations for people seeking a printer with none of the nonsense. By nonsense, we mean printers suddenly bricking features, like scanning or printing, if users install third-party cartridges. Some printer firms outright block third-party toner and ink, despite customer blowback and lawsuits. Brother’s laser printers have historically worked fine with non-Brother accessories. A YouTube video posted this week, though, as well as older social media posts, claim that Brother has gone to the dark side and degraded laser printer functionality with third-party cartridges. Brother tells Ars that this isn’t true.

↫ Scharon Harding at Ars Technica

I find this an incredibly interesting story. We all know the printer space is a cursed hellhole of the very worst types of enshittification, but Brother seemed like an island of relative calm in a sea of bullshit. In turn, people are so used to printers being shit that any problem that comes up is automatically explained by malice, which is not entirely unreasonable. Brother insists, though, that it does not break printers using third-party toner or ink through firmware.

Brother does make it very clear that it is standard procedure to only perform troubleshooting on Brother printers using ‘genuine’ Brother ink and toner, which is fair enough in my book. There’s no telling what kind of effects third-party cartridges – which do contain electronics – have on the rest of the printer, and I don’t think it’s fair to expect Brother to be able to document all of those possible issues. As long as using third-party toner and ink cartridges doesn’t invalidate any warranties, and as long as Brother doesn’t intentionally break printers for using third-party toner and ink, I think Brother meets its obligations to consumers.

If you choose to use third-party ink and toner cartridges in Brother printers, I think it’s only reasonable you remove those during the troubleshooting process to ensure they’re not the cause of any problems you’re experiencing.

Porting the curl command-line tool and library with Goa

For more than a decade, we have a port of the curl library for Genode available. With the use of Sculpt OS as a daily driver as well as the plan to run Goa natively on Sculpt OS by the end of the year, the itch to also port the curl command-line tool became irresistible. Of course this is a perfect territory for using Goa.

In this article, I will share the process of porting the curl command-line tool and shared library in order to guide future porting efforts of other projects.

↫ Johannes Schlatow

A detailed, step-by-step retelling of porting the curl command-line tool and associated libraries to Genode/Sculpt OS. Articles like these are invaluable to anyone trying to port software to Genode and Sculpt OS, as they point to directions you can explore when encountering errors and hurdles of your own.
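If you have never used libcurl directly, the snippet below is roughly the kind of consumer code a finished port needs to be able to run: a minimal sketch of the easy API, written by me rather than taken from the article, and assuming libcurl headers and a TLS backend are available on the target.

```c
/* Minimal sketch (not from the article): fetch a URL with libcurl's easy
 * API and write the body to stdout (libcurl's default behaviour).
 * Assumes libcurl headers and a TLS backend are available. */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *h = curl_easy_init();
    if (!h)
        return 1;

    curl_easy_setopt(h, CURLOPT_URL, "https://example.org/");
    curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);

    CURLcode res = curl_easy_perform(h);
    if (res != CURLE_OK)
        fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(h);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```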