Nvidia Linux GPU driver ported to Haiku

Nvidia releasing its Linux graphics driver as open source is already bearing fruit for alternative operating systems.

As many people already know, Nvidia published their kernel driver under the MIT license: GitHub – NVIDIA/open-gpu-kernel-modules: NVIDIA Linux open GPU kernel module source (I will call it NVRM). This driver is very portable, and its platform-independent part can be compiled for Haiku with minor effort (but OS-specific binding code needs to be implemented for it to be actually useful). This is very valuable for Haiku, because Linux kernel GPU drivers are very hard to port and depend heavily on Linux kernel internals. Unfortunately, the userland OpenGL/Vulkan driver source code is not published. But as part of the Mesa 3D project, a new Vulkan driver, “NVK”, is being developed and is already functional. The Mesa NVK driver uses Nouveau as its kernel driver, so it can’t be used directly with the NVRM kernel driver. The NVK source code provides a platform abstraction that allows support for other kernel drivers, such as NVRM, to be implemented.

I finally managed to make an initial port of the NVRM kernel driver to Haiku, and added initial NVRM API support to the Mesa NVK Vulkan driver, so NVRM and NVK can work together. Some simple Vulkan tests are working.

↫ X512 on the Haiku forums

Incredibly impressive, and a huge milestone for the Haiku operating system. It supports any Nvidia GPU from the Turing architecture onwards – which I think means Nvidia RTX 20xx and newer – since those cards have a required microcontroller older GPUs do not have. Of course, this is an early port and a lot of work remains to be done, but it could lead to huge things for Haiku.
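For the curious: a “simple Vulkan test” of the kind X512 mentions doesn’t need to be elaborate. Here’s a minimal sketch in C – my own illustration, not code from the port – that creates an instance and lists whatever physical devices the driver exposes, which is roughly the first smoke test you’d run after bringing up a new Vulkan stack. If NVK-on-NVRM is wired up correctly, the Nvidia GPU should appear in the list.

```c
/* Minimal Vulkan smoke test: create an instance and list physical devices. */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    const VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "vulkan-smoke-test",
        .apiVersion = VK_API_VERSION_1_2,
    };
    const VkInstanceCreateInfo ci = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "failed to create a Vulkan instance\n");
        return 1;
    }

    /* First call gets the device count, second fills the array. */
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[16];
    if (count > 16)
        count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("GPU %u: %s\n", i, props.deviceName);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```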

SoftBank acquires Ampere Computing

SoftBank Group Corp. today announced that it will acquire Ampere Computing, a leading independent silicon design company, in an all-cash transaction valued at $6.5 billion. Under the terms of the agreement, Ampere will operate as a wholly owned subsidiary of SoftBank Group and retain its name. As part of the transaction, Ampere’s lead investors – Carlyle and Oracle – are selling their respective positions in Ampere.

↫ SoftBank and Ampere Computing press release

Despite not really knowing what SoftBank does and what their long-term goals are – I doubt anyone does – I hope this at the very least provides Ampere with the funds needed to expand its business. At this point, the only serious options for Arm-based hardware are either Apple or Qualcomm, and we could really use more players. Ampere’s hardware is impressive, but difficult to buy and expensive, and graphics card support is patchy, at best.

What Ampere needs is more investment, and more OEMs picking up their chips. An Ampere workstation is incredibly high on my list of machines to test for OSNews (perhaps a System76 model?), and it’d be great if economies of scale worked to bring prices down, possibly allowing Ampere to develop cheaper, more affordable variants for us mere mortals, too. I would love to build an Arm workstation in much the same way we build regular x86 PCs today, but I feel like that’s still far off.

I have no idea if SoftBank is the right kind of company to make this possible, but one can dream.

FOSS infrastructure is under attack by AI companies

What do SourceHut, GNOME’s GitLab, and KDE’s GitLab have in common, other than all three of them being forges? Well, it turns out all three of them have been dealing with immense amounts of traffic from “AI” scrapers, who are effectively performing DDoS attacks with such ferocity it’s bringing down the infrastructures of these major open source projects. Being open source, and thus publicly accessible, means these scrapers have unlimited access, unlike with proprietary projects.

These “AI” scrapers do not respect robots.txt, and they hammer so many expensive endpoints that it puts insane amounts of pressure on infrastructure. Of course, they use random user agents from an effectively infinite number of IP addresses. Blocking them is a game of whack-a-mole you can’t win, and so the GNOME project is now using a rather nuclear option called Anubis, which aims to block “AI” scrapers with a heavy-handed approach that sometimes blocks real, genuine users as well.
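At its core, Anubis is essentially a hash-based proof-of-work gate: the browser must find a nonce such that hashing it together with a server-issued challenge produces a digest with a required number of leading zero bits – trivial for one genuine visitor, expensive at scraper scale. A generic C illustration of that scheme – not Anubis’s actual protocol or wire format – looks something like this (link with -lcrypto):

```c
/* Toy hash-based proof of work: find a nonce such that
 * SHA-256(challenge || nonce) starts with `difficulty` zero bits.
 * Generic illustration of the scheme, not Anubis's real protocol. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

static int leading_zero_bits(const unsigned char *hash) {
    int bits = 0;
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++) {
        if (hash[i] == 0) { bits += 8; continue; }
        for (unsigned char b = hash[i]; !(b & 0x80); b <<= 1)
            bits++;
        break;
    }
    return bits;
}

int main(void) {
    const char *challenge = "example-challenge";  /* issued by the server */
    const int difficulty = 16;                    /* required zero bits   */
    unsigned char hash[SHA256_DIGEST_LENGTH];
    char buf[256];

    for (uint64_t nonce = 0;; nonce++) {
        int len = snprintf(buf, sizeof(buf), "%s%llu",
                           challenge, (unsigned long long)nonce);
        SHA256((const unsigned char *)buf, (size_t)len, hash);
        if (leading_zero_bits(hash) >= difficulty) {
            /* Client submits the nonce; the server verifies with one hash. */
            printf("solved with nonce %llu\n", (unsigned long long)nonce);
            return 0;
        }
    }
}
```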

The numbers are insane, as Niccolò Venerandi at Libre News details.

Over on Mastodon, one GNOME sysadmin, Bart Piotrowski, kindly shared some numbers to let people fully understand the scope of the problem. According to him, in around two and a half hours they received 81k total requests, and out of those only 3% passed Anubis’ proof of work, hinting at 97% of the traffic being bots – an insane number!

↫ Niccolò Venerandi at Libre News

Fedora is another project dealing with these attacks, with infrastructure sometimes being down for weeks as a result. Inkscape, LWN, Frama Software, Diaspora, and many more – they’re all dealing with the same problem: the vast majority of the traffic to their websites and infrastructure now comes from attacks by “AI” scrapers. Sadly, there doesn’t seem to be a reliable way to defend against these attacks just yet, so sysadmins and webmasters are wasting a ton of time, money, and resources fending off the hungry “AI” hordes.

These “AI” companies are raking in billions and billions of dollars from investors and governments the world over, trying to build dead-end text generators while sucking up huge amounts of data and wasting massive amounts of resources from, in this case, open source projects. If no other solutions can be found, the end game here could be that open source projects will start to make their bug reporting tools and code repositories much harder and potentially even impossible to access without jumping through a massive amount of hoops.

Everything about this “AI” bubble is gross, and I can’t wait for this bubble to pop so a semblance of sanity can return to the technology world. Until the next hype train rolls into the station, of course.

As is tradition.

Memory safety for web fonts in Chrome: Google replaces FreeType with Rust-based alternative

There’s no escaping Rust, and the language is leaving its mark everywhere. This time around, Chrome has replaced its use of FreeType with Skrifa, a Rust-based replacement.

Skrifa is written in Rust, and created as a replacement for FreeType to make font processing in Chrome secure for all our users. Skrifa takes advantage of Rust’s memory safety, and lets us iterate faster on font technology improvements in Chrome. Moving from FreeType to Skrifa allows us to be both agile and fearless when making changes to our font code. We now spend far less time fixing security bugs, resulting in faster updates, and better code quality.

↫ Dominik Röttsches, Rod Sheeter, and Chad Brokaw

The move to Skrifa is already complete, and it’s now being used by Chrome users on Linux, Android, and ChromeOS, and as a fallback for users on Windows and macOS. The reasons for this change are the same as they always are for replacing existing tools with new tools written in Rust: security. FreeType is a security risk for Chrome, and by replacing it with something written in a memory-safe language like Rust, Google was able to eliminate whole classes of security issues.

To ensure rendering correctness, Google performed a ton of pixel comparison tests to compare FreeType output to Skrifa output. On top of that, Google is continuously running similar tests to ensure no quality degradation sneaks into Skrifa as time progresses.
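Google hasn’t published the harness itself, but the shape of such a pixel-comparison test is easy to picture: rasterize the same glyph with both engines, then count pixels that differ beyond a small tolerance, so benign antialiasing deltas don’t fail the build. A self-contained toy version in C, with the two rasterizers stubbed out as simple fills:

```c
/* Toy pixel-comparison harness in the spirit of Google's FreeType-vs-Skrifa
 * tests. The two "renderers" are stand-ins; a real harness would rasterize
 * the same glyph with FreeType and with Skrifa. */
#include <stdio.h>
#include <stdlib.h>

#define W 64
#define H 64

static void render_engine_a(unsigned char out[H][W]) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            out[y][x] = (unsigned char)((x + y) * 2);            /* fake coverage */
}

static void render_engine_b(unsigned char out[H][W]) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            out[y][x] = (unsigned char)((x + y) * 2 + (x == 7)); /* tiny delta */
}

/* Count pixels whose 8-bit coverage differs by more than `tolerance`,
 * so minor antialiasing differences don't register as failures. */
static int compare_bitmaps(unsigned char a[H][W], unsigned char b[H][W],
                           int tolerance) {
    int mismatches = 0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (abs((int)a[y][x] - (int)b[y][x]) > tolerance)
                mismatches++;
    return mismatches;
}

int main(void) {
    static unsigned char a[H][W], b[H][W];
    render_engine_a(a);
    render_engine_b(b);
    int bad = compare_bitmaps(a, b, 1);
    printf("%d pixels differ beyond tolerance\n", bad);
    return bad != 0;   /* non-zero exit fails the test run */
}
```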

Whether anyone likes Rust or not, the reality of the matter is that using Rust provides tangible benefits that reduce cost and lower security risks, and as such, its use will keep increasing, and tried and true tools will continue to be replaced by Rust counterparts.

I think we need a bigger boot partition

Long ago, during the time of creation, I confidently waved my hand and allocated a 1GB ESP partition and a 1GB boot partition, thinking to myself with a confident smile that this would surely be more than enough for the foreseeable future. However, this foreseeable future quickly vanished along with my smile. What was bound to happen eventually came, but I didn’t expect it to arrive so soon. What could possibly require such a large boot partition? And how should we resolve this? Here, I would like to introduce the boot partition issue I encountered, as well as temporary coping methods and final solutions, mentioning the problems encountered along the way for reference.

↫ fernvenue

Some of us will definitely run into this issue at some point, so if you’re doing a fresh installation it might make sense to allocate a bit more space to your boot partition. If you have a running system and are bumping into the limitations of your boot partition and don’t want to reinstall, the linked article provides some possible solutions.

GNOME 48 released

One of the two major open source desktop environments, GNOME, just released version 48, and it’s got some very big and welcome improvements. First and foremost there’s dynamic triple-buffering, a feature that took over five years of extensive testing to get ready. It will improve the smoothness and fluidity of animations and other movements on the screen, as it did for KDE when it landed there in the middle of last year.
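As a rough intuition for what the “dynamic” part means – this is a toy policy sketch of my own, not Mutter’s actual implementation – the compositor grows to a third buffer when frames start missing their deadlines, so work on the next frame can begin while two others are still in flight, and falls back to double buffering once things are smooth again, to keep latency and memory use down:

```c
/* Toy sketch of a dynamic triple-buffering policy (not Mutter's real code):
 * grow to three buffers when frames run late, shrink back when smooth. */
#include <stdbool.h>

typedef struct {
    int buffer_count;   /* 2 = double buffering, 3 = triple buffering */
    int smooth_frames;  /* consecutive frames that hit their deadline */
} SwapchainPolicy;

void frame_completed(SwapchainPolicy *p, bool missed_deadline) {
    if (missed_deadline) {
        p->smooth_frames = 0;
        if (p->buffer_count < 3)
            p->buffer_count = 3;   /* let rendering run one frame ahead */
    } else if (++p->smooth_frames > 60 && p->buffer_count > 2) {
        p->buffer_count = 2;       /* drop back: lower latency, less memory */
        p->smooth_frames = 0;
    }
}
```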

GNOME 48 also brings notification stacking, combining notifications from the same source, improvements to the new default image viewer such as image editing features, a number of digital well-being options, as well as the introduction of a new, basic audio player designed explicitly for quickly playing individual audio files. There are also a few changes to GNOME’s text editor, and following in KDE’s recent footsteps, GNOME 48 also brings HDR support.

Another major change concerns the new default fonts. Finally, Cantarell is gone, replaced by slightly modified versions of Inter and Iosevka. Considering I absolutely adore Inter, and installing and setting it as my main font is literally the first thing I do on any system that allows me to, I’m fully behind this change. Inter is exceptional in that it renders great in both high and low DPI environments, and its readability is outstanding.

GNOME 48 will make its way to your distribution’s repositories soon enough.

Java 24 released

Oracle, the company owned by a guy who purchased a huge chunk of the Kingdom of Hawaii from the Americans, has released Java 24. I’ll be honest and upfront: I just don’t care very much at all about this, as the only interaction I’ve had with Java over the past, I don’t know, 15 years or so, is either because of Minecraft, or because of my obsession with ancient UNIX workstations where Java programs pop up in the weirdest of places. I know Java is massive and used everywhere, but going through the list of changes and improvements does not spark any joy in me at all, and just makes me want to stick my pinky in an electrical socket to make something interesting happen.

If you work with Java, you know all of this stuff already anyway, as you’ve been excitedly trying to impress Nick from accounting with your knowledge of Flexible Constructor Bodies and Quantum-Resistant Module-Lattice-Based Key Encapsulation Mechanisms because he’s just so dreamy and you desperately want to ask him out for a hot cup of coffee, but you’re not sure if he’s married or has a boy or girlfriend so you’re just kind of scoping things out a bit too excitedly and now you’re worried you might be coming off as too desperate for his attention.

Anyway, that’s how offices work, right? I’ve never worked for anyone but myself and office settings induce a deep sense of existential dread in me, so my knowledge of office work, and Java if we’re honest, may be based a bit too much on ’90s sitcoms and dramas. Whatever, Java 24 is here. Do a happy dance.

After 47 years, OpenVMS gets a package manager

As of the 18th of February, OpenVMS, known for its stability and high availability, 47 years old and ported to 4 different CPU architectures, has a package manager! This article shows you how to use the package manager and talks about a few of its quirks. It’s an early beta version, and you do notice that when using it. A small list of things I noticed, coming from a Linux (apt/yum/dnf) background:

- There seems to be no automatic dependency resolution, and the dependencies it does list are incomplete.
- There is no update management yet, no removal of packages, and no support for your own package repository, only the official VSI one.
- Service startup or login script changes are not done automatically.
- Packages with multiple installer files fail and require manual intervention.

It does correctly identify the architectures, has search support, and makes it way easier to install software. The time saved on downloading, manually copying, and starting installations is huge, so even this early beta is a very welcome addition to OpenVMS.

↫ Remy van Elst

Obviously, a way to install software packages without having to manually download them is a huge step forward for OpenVMS. The listed shortcomings might raise some eyebrows considering most of us are used to package management on Linux/BSD, which is far more advanced. Bear in mind, however, that this is a beta product, and it’s quite obvious these missing essential features will be added over time. Luckily, it at least lists dependencies, so let’s hope automating their installation is in the works and will be available soon.

I actually have an OpenVMS virtual machine set up and running, but I find using it incredibly difficult – purely because of my own lack of experience with and knowledge about OpenVMS, of course. Any experience or knowledge rooted in UNIX-based and Windows operating systems is useless here, even for the most basic of CLI tasks. If I find the time, I’d love to spend more time with it and get better acquainted with the way it works, including this new package manager.

Pebble unveils new devices, and strongly suggests you dump iOS for Android

It’s barely been two months since the announcement that Pebble would return with new watches, and they’re already here – well, sort of. Pebble has announced two new watches for preorder, the Core 2 Duo and the Core Time 2. The former is effectively a Pebble 2 upgraded with new internals, while the Core Time 2 is very similar, but comes with a colour e-ink display and a metal case. They’re up for preorder now at $149 and $225, respectively, with the Core 2 Duo shipping in July, and the Core Time 2 shipping in December.

Alongside this unveiling, Eric Migicovsky, the creator of Pebble, also published a blog post detailing the trouble Pebble is having, and will continue to have, with making smartwatches for iOS users. Apple effectively makes it impossible for third parties to make a proper smartwatch for iOS, since access to basic functionality you’d expect from such a device is locked down by Apple, reserved only for its own Apple Watch. As such, Migicovsky makes it explicitly clear that iOS users who want to buy one of these new Pebbles are going to have a very degraded experience compared to Android users.

Not only will Android users with a Pebble have access to a ton more functionality, any Pebble features that can exist for both Android and iOS will always come to Android first, and possibly iOS later. In fact, Migicovsky goes as far as suggesting that if you want a Pebble, you should buy an Android phone.

I don’t want to see any tweets or blog posts or complaints or whatever later on about this. I’m publishing this now so you can make an informed decision about whether to buy a new watch or not. If you’re worried about this, the easiest solution is to buy an Android phone.

↫ Eric Migicovsky

I have to hand it to Migicovsky – I love the openness about all of this, and the fact that he’s making it explicitly clear to any prospective buyers. There’s no sugarcoating or PR speak to try and please Tim Cook – he’s putting the blame squarely where it belongs: on Apple. It’s kind of unreal to see such directness around a new product, but as a Dutch person, it feels quite natural to me. We need more of this style of communication in the technology world, as it makes it much clearer what you’re getting – and not getting.

I do hope that Pebble’s Android support functions without the need for Google Play Services or other proprietary Google code, since it would be great to have a proper, open source smartwatch fully supported by de-Googled Android.

GIMP 3.0 released

It’s taken a Herculean seven-year effort, but GIMP 3.0 has finally been released. There are so many new features, changes, and improvements in this release that it’s impossible to highlight all of them. First and foremost, GIMP 3.0 marks the shift to GTK3 – this may be surprising considering GTK4 has been out for a while, but major applications such as GIMP tend to stick to more tried and true toolkit versions. GTK4 also brings with it the prickly discussion concerning a possible adoption of libadwaita, the GNOME-specific augmentations on top of GTK4. The other major change is full support for Wayland, but users of the legacy X11 windowing system don’t have to worry just yet, since GIMP 3.0 supports that, too.

As far as actual features go, there’s a ton here. Non-destructive layer effects are one of the biggest improvements.

Another big change introduced in GIMP 3.0 is non-destructive (NDE) filters. In GIMP 2.10, filters were automatically merged onto the layer, which prevented you from making further edits without repeatedly undoing your changes. Now by default, filters stay active once committed. This means you can re-edit most GEGL filters in the Fx menu on the layer dockable without having to revert your work. You can also toggle them on or off, selectively delete them, or even merge them all down destructively. If you prefer the original GIMP 2.10 workflow, you can select the “Merge Filters” option when applying a filter instead.

↫ GIMP 3.0 release notes

There’s also much better color space management, better layer management and control, the user interface has been improved across the board, and support for a ton of file formats has been added, from macOS icons to Amiga ILBM/IFF formats, and much more. GIMP 3.0 also improves compatibility with Photoshop files, and it can import more palette formats, including proprietary ones like Adobe Color Book (ACB) and Adobe Swatch Exchange (ASE).

This is just a small selection, as GIMP 3.0 truly is a massive update. It’s available for Linux, Windows, and macOS, and if you wait for a few days it’ll probably show up in your distribution’s package repositories.

More pro for the DEC Professional 380 (featuring PRO/VENIX)

Settle down, children, it’s time for another great article by Cameron Kaiser. This time, they’re going to tell us about the DEC Professional 380 running PRO/VENIX.

The Pro 380 upgraded to the beefier J-11 (“Jaws”) CPU from the PDP-11/73, running two to three times faster than the 325 and 350. It had faster RAM and came with more of it, and boasted quicker graphics with double the vertical resolution built right into the logic board. The 380 still has its faults, notably being two-thirds the speed of the 11/73 and having no cache, plus all of the 325/350’s incompatibilities. Taken on its merits, though, it’s a tank of a machine, a reasonably powerful workstation, and the most practical PDP-adjacent thing you can actually slap on a (large) desk.

This particular unit is one of the few artifacts I have left from a massive DEC haul almost twelve years ago. It runs PRO/VENIX, the only official DEC Unix option for the Pros, but in its less common final release (we’ll talk about versions of Venix). I don’t trust the clanky ST-506 hard drive anymore, so today we’ll convert it to solid state and double its base RAM to make it even more professional, and then play around in VENIX some for a taste of old-school classic Unix — after, of course, some history.

↫ Cameron Kaiser

Detailed, interesting, fascinating, and full of photos as always.

Apple’s long-lost hidden recovery partition from 1994 has been found

In 1994, a single Macintosh Performa model, the 550, came from the factory with a dedicated, hidden recovery partition that contained a System 7 system folder and a small application that would be set as bootable if the main operating system failed to boot. This application would then run, allowing you to recover your Mac using the system folder inside the recovery partition. This feature was apparently so obscure, few people knew it existed, and nobody had access to the original contents of the recovery partition anymore.

It took Doug Brown a lot of searching to find a copy of this recovery partition. The issue is that nobody really knows how this partition is populated with the recovery data, so the only way to explore its contents was to somehow find a Performa 550 hard drive with a specific version of Mac OS that had never been reformatted after leaving the factory.

The thing is, this whole functionality was super obscure. It’s understandable that people weren’t familiar with it. Apple publicly stated it was only included with this one specific Performa model. Their own documentation also said that it would be lost if you reformatted the hard drive. It was hiding in the background, so nobody really knew it was there, let alone thought about saving it. Also, I can say that the first thing a lot of people do when they obtain a classic computer is erase it in order to restore it to the factory state. Little did anyone know, if they reformatted the hard drive on a Performa 550, they could have been wiping out rare data that hadn’t been preserved!

↫ Doug Brown

Brown found a copy, and managed to get the whole original functionality working again. It’s a fairly basic way of doing this, but we shouldn’t forget we’re talking 1994 here, and I don’t think any other operating system at the time had the ability to recover from an unbootable state like this. Like Brown, I wonder why it was abandoned so quickly. Perhaps Apple was unwilling to sacrifice the hard drive space?

Groundbreaking or not, it’s still great to have this recovered and preserved for the ages.

Microsoft accidentally cares about its users, releases update that unintentionally deletes Copilot from Windows

It’s rare in this day and age that proprietary operating system vendors like Microsoft and Apple release updates you’re more than happy to install, but considering even a broken clock is right twice a day, we’ve got one for you today. Microsoft released KB5053598 (OS Build 26100.3476), which “addresses security issues for your Windows operating system”. One of the “security issues” this update addresses is Microsoft’s “AI” text generator, Copilot. To address this glaring security issue, the update removes Copilot from your Windows installation altogether.

Sadly, it’s only by mistake, and not by design.

We’re aware of an issue with the Microsoft Copilot app affecting some devices. The app is unintentionally uninstalled and unpinned from the taskbar.

[…]

Microsoft is working on a resolution to address this issue.

In the meantime, affected users can reinstall the app from the Microsoft Store and manually pin it to the taskbar.

↫ Microsoft Support

Well, at least until Microsoft “fixes” this “issue” with KB5053598, consider this update a simple way to get rid of Copilot. Microsoft accidentally cared about its users for once, so cherish this moment – it won’t happen again.

Ironclad 0.6 released

It’s been a while, but there’s a new release of Ironclad, the formally verified, hard real-time capable kernel written in SPARK and Ada. Aside from the usual bugfixes, this release moves Ironclad from multiboot to Limine, adds x86_64 ACPI support for poweroff and reboot, and brings improvements to PTY support, the VFS layer, and much more.

The easiest way to try out Ironclad is to download Gloire, a distribution that uses Ironclad and the GNU tools. It can be installed in both a virtual machine and on real hardware.

A look at Firefox forks

Mozilla’s actions have been rubbing many Firefox fans the wrong way as of late, and inspiring them to look for alternatives. There are many choices for users who are looking for a browser that isn’t part of the Chrome monoculture but is full-featured and suitable for day-to-day use. For those who are willing to stay in the Firefox “family” there are a number of good options that have taken vastly different approaches. This includes GNU IceCat, Floorp, LibreWolf, and Zen.

↫ Joe Brockmeier

It’s a tough situation, as we’re all aware. We don’t want the Chrome monoculture to get any worse, but with Mozilla’s ever-growing list of dubious decisions – the kind some people have been warning about for years – it’s only natural for people to look elsewhere. Once you decide to drop Firefox, there’s really nowhere else to go but Chrome and Chrome skins, or the various Firefox skins. As an aside, I really don’t think these browsers should be called Firefox “forks”; all they really do is change some default settings, add an extension or two, and make some small UI tweaks. They may qualify as forks in a technical sense, but I think that overstates the differentiation they offer.

Late last year, I tried my best to switch to KDE’s Falkon web browser, but after a few months the issues, niggles, and shortcomings just started to get under my skin. I switched back to Firefox for a little while, contemplating where to go from there. Recently, I decided to hop onto the Firefox skin train just to get rid of some of the Mozilla telemetry and useless ‘features’ they’ve been adding to Firefox, and after some careful consideration I decided to go with Waterfox.

Waterfox strikes a nice balance between the strict choices of LibreWolf – which most users of LibreWolf seem to undo, if my timeline is anything to go by – and the choices Mozilla itself makes. On top of that, Waterfox enables a few very nice KDE integrations Firefox itself and the other Firefox skins don’t have, making it a perfect choice for KDE users. Sadly, Waterfox isn’t packaged for most Linux distributions, so you’ll have to resort to a third-party packager.

In the end, none of the Firefox skins really address the core problem, as they’re all still just Firefox. The problem with Firefox is Mozilla, and no amount of skins is going to change that.

Google makes Vulkan the official graphics API for Android

Google’s biggest announcement today, at least as it pertains to Android, is that the Vulkan graphics API is now the official graphics API for Android. Vulkan is a modern, low-overhead, cross-platform 3D graphics and compute API that provides developers with more direct control over the GPU than older APIs like OpenGL. This increased control allows for significantly improved performance, especially in multi-threaded applications, by reducing CPU overhead. In contrast, OpenGL is an older, higher-level API that abstracts away many of the low-level details of the GPU, making it easier to use but potentially less efficient. Essentially, Vulkan prioritizes performance and explicit hardware control, while OpenGL emphasizes ease of use and cross-platform compatibility.

↫ Mishaal Rahman at Android Authority
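To make the quoted contrast a little more concrete: where OpenGL has the driver infer GPU work from a pile of global state behind your back, a Vulkan application explicitly records its work into command buffers, which is what makes multi-threaded submission and low CPU overhead possible. A small C sketch of recording a compute dispatch – the pipeline, layout, and descriptor set handles are assumed to have been created elsewhere during application setup:

```c
/* Recording GPU work explicitly into a Vulkan command buffer. All handles
 * passed in are assumed to be valid, created during application setup. */
#include <vulkan/vulkan.h>

VkResult record_compute(VkCommandBuffer cmd, VkPipeline pipeline,
                        VkPipelineLayout layout, VkDescriptorSet set,
                        uint32_t group_count) {
    const VkCommandBufferBeginInfo begin = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
    };
    VkResult r = vkBeginCommandBuffer(cmd, &begin);
    if (r != VK_SUCCESS)
        return r;

    /* Every piece of state is bound explicitly; nothing is implicit. */
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, layout,
                            0, 1, &set, 0, NULL);
    vkCmdDispatch(cmd, group_count, 1, 1);

    /* The finished buffer can later be submitted from any thread. */
    return vkEndCommandBuffer(cmd);
}
```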

Android has supported Vulkan since Android 7.0, released in 2016, so it’s not like we’re looking at something earth-shattering here. The issue has been, as always with Android, fragmentation: it’s taken this long for about 85% of Android devices currently in use to support Vulkan in the first place. In other words, Google might’ve wanted to standardise on Vulkan much sooner, but if only a relatively small number of Android devices support it, that’s going to be a hard sell.

In any event, from here on out, every application or game that wants to use the GPU on Android will have to do so through Vulkan, including everything inside Android. It’s still going to be a long process, though, as the requirement to use Vulkan will not fully come into effect until Android 17, and even then there will be exceptions for certain applications. Android tends to implement changes like this in phases, and the move to Vulkan is no different.

All of this does mean that older devices with GPUs that do not support Vulkan, or at least not properly, will not be able to be updated to the Vulkan-only releases of Android, but let’s be real here – those kinds of devices were never going to be updated anyway.

A more robust raw OpenBSD syscall demo

Ted Unangst published dude, where are your syscalls? on flak yesterday, with a neat demonstration of OpenBSD’s pinsyscall security feature, whereby only pre-registered addresses are allowed to make system calls. Whether it strengthens or weakens security is up for debate, but regardless it’s an interesting, low-level programming challenge. The original demo is fragile for multiple reasons, and requires manually locating and entering addresses for each build. In this article I show how to fix it. To prove that it’s robust, I ported an entire, real application to use raw system calls on OpenBSD.

↫ Chris Wellons
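For flavor, here’s what the naive version looks like on OpenBSD/amd64 – a minimal sketch of my own, not code from either article: the syscall number goes in rax, the arguments in rdi/rsi/rdx, and the syscall instruction does the rest. The catch, and the whole point of the exercise, is that modern OpenBSD only allows syscalls from pre-registered addresses, so a build of this will simply be killed unless that registration is dealt with – which is what Wellons’ article works through.

```c
/* Naive raw write(2) on OpenBSD/amd64: number in rax, args in rdi/rsi/rdx.
 * Illustrative sketch only: carry-flag error handling is omitted, and a
 * recent OpenBSD kernel will kill a process for making a syscall from an
 * unregistered address -- exactly the hurdle the article addresses. */
#include <sys/syscall.h>   /* SYS_write */

static long raw_write(int fd, const void *buf, unsigned long len) {
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(SYS_write), "D"((long)fd), "S"(buf), "d"(len)
                      : "rcx", "r11", "memory");  /* syscall clobbers rcx/r11 */
    return ret;
}

int main(void) {
    static const char msg[] = "hello from a raw syscall\n";
    raw_write(1, msg, sizeof(msg) - 1);
    return 0;
}
```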

Some light reading for the weekend.

Musk’s Tesla warns Trump’s tariffs and trade wars will harm Tesla

Elon Musk’s Tesla is waving a red flag, warning that Donald Trump’s trade war risks dooming US electric vehicle makers, triggering job losses, and hurting the economy.

In an unsigned letter to the US Trade Representative (USTR), Tesla cautioned that Trump’s tariffs could increase costs of manufacturing EVs in the US and forecast that any retaliatory tariffs from other nations could spike costs of exports.

↫ Ashley Belanger at Ars Technica

Back in 2020, scientists at the University of Twente, The Netherlands, created the smallest string instrument that can produce tones audible to human ears when amplified. Its strings were a mere micrometer thick – one millionth of a meter – and about half to one millimeter long. Using a system of tiny weights and combs producing tiny vibrations, tones can be created.

And yet, this tiny violin still isn’t small enough for Tesla.

Haiku gets new malloc implementation, removes Gopher support from its browser

We’ve got the Haiku activity report covering February, and aside from the usual slew of bug fixes and minor improvements, there’s one massive improvement that deserves attention.

waddlesplash continued his ongoing memory management improvements, fixes, and cleanups, implementing more cases of resizing (expanding/shrinking) memory areas when there’s a virtual memory reservation adjacent to them (and writing tests for these cases) in the kernel. These changes were the last remaining piece needed before the new malloc implementation for userland (mostly based on OpenBSD’s malloc, but with a few additional optimizations and a Haiku-specific process-global cache added) could be merged and turned on by default. There were a number of followup fixes to the kernel and the new allocator’s “glue” and global caching logic since, but the allocator has been in use in the nightlies for a few weeks with no serious issues. It provides modest performance improvements over the old allocator in most cases, and in some cases that were pathological for the old allocator (GCC LTO appears to have been one), provides order-of-magnitude (or more) performance improvements.

↫ waddlesplash on the Haiku website
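To give a sense of what a process-global cache in an allocator buys you – this is a deliberately simplified toy, not Haiku’s or OpenBSD’s actual code – the idea is that freed blocks of common size classes are parked in a pool shared by all threads, so the next allocation of that size can be satisfied from the pool instead of taking the slower general-purpose path:

```c
/* Toy process-global free-block cache in front of a general allocator.
 * Simplified illustration of the concept only; real allocators shard
 * and size these structures far more carefully. */
#include <pthread.h>
#include <stdlib.h>

#define NCLASSES    8     /* size classes: 16, 32, 64, ... 2048 bytes */
#define CACHE_DEPTH 64    /* cached free blocks per size class        */

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static void *cache[NCLASSES][CACHE_DEPTH];
static int cached[NCLASSES];

static int size_class(size_t size) {
    size_t limit = 16;
    for (int i = 0; i < NCLASSES; i++, limit <<= 1)
        if (size <= limit)
            return i;
    return -1;            /* too large: bypass the cache entirely */
}

void *cached_alloc(size_t size) {
    int cls = size_class(size);
    if (cls >= 0) {
        pthread_mutex_lock(&cache_lock);
        if (cached[cls] > 0) {
            void *p = cache[cls][--cached[cls]];
            pthread_mutex_unlock(&cache_lock);
            return p;     /* fast path: reuse a cached block */
        }
        pthread_mutex_unlock(&cache_lock);
        size = (size_t)16 << cls;   /* round up so the block is reusable */
    }
    return malloc(size);
}

void cached_free(void *p, size_t size) {
    int cls = size_class(size);
    if (cls >= 0) {
        pthread_mutex_lock(&cache_lock);
        if (cached[cls] < CACHE_DEPTH) {
            cache[cls][cached[cls]++] = p;   /* park it for the next caller */
            pthread_mutex_unlock(&cache_lock);
            return;
        }
        pthread_mutex_unlock(&cache_lock);
    }
    free(p);
}
```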

Haiku also continues replacing implementations of standard C functions with those from musl; Haiku can now be built on FreeBSD and on Linux distributions that use musl; C5/C6 C-states were disabled for Intel Skylake to fix boot problems on that platform; and many, many more changes landed. There’s also bad news for fans of Gopher: support for the protocol was removed from WebPositive, Haiku’s native web browser.