POSIX conformance testing for the Redox signals project

The Redox team has received a grant from NLnet to develop Redox OS Unix-style Signals, moving the bulk of signal management to userspace, and making signals more consistent with the POSIX concepts of signaling for processes and threads. It also includes Process Lifecycle and Process Management aspects. As a part of that project, we are developing tests to verify that the new functionality is in reasonable compliance with the POSIX.1-2024 standard. This report describes the state of POSIX conformance testing, specifically in the context of Signals. ↫ Ron Williams This is the kind of dry, but important matter a select few of you will fawn over. Consider it my Christmas present for you. There’s also a shorter update on the dynamic linker in Redox, which also goes into some considerable detail about how it works, and what progress has been made.
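To give a flavour of what such conformance tests have to pin down, here is my own illustrative sketch in Python — not code from the Redox test suite — of two POSIX signal behaviours any implementation must get right: an installed handler runs on delivery, and a blocked signal stays pending until the mask is cleared.

```python
import os
import signal

received = []

def handler(signum, frame):
    # record every delivery so we can check ordering and count
    received.append(signum)

# POSIX: an installed handler must run when the signal is delivered
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)

# POSIX: a blocked signal must stay pending until it is unblocked
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)
pending_while_blocked = signal.SIGUSR1 in signal.sigpending()
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
```

A real test suite does this against `sigaction`, `sigprocmask`, and friends in C, across processes and threads; the point is only that the contract under test is small, precise, and mercilessly easy to get subtly wrong.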

How to make an Apple Watch work with Android

What if you have an Android phone, but consider the Apple Watch superior to other smartwatches? Well, you could switch to iOS, or, you know, you could hack your way into making an Apple Watch work with Android, like Abishek Muthian did. So I decided to make Apple Watch work with my Android phone using open-source applications, interoperable protocols and 3rd party services. If you just want to use my code and techniques and not read my commentary on it then feel free to checkout my GitHub for sources. ↫ Abishek Muthian Getting notifications to work, so that notifications from the Android phone would show up on the Apple Watch, was the hardest part. Muthian had to write a Python script to read the notifications on the Android device using Termux, and then use Pushover to send them to the Apple Watch. For things like contacts and calendar, he relied on *DAV, which isn’t exactly difficult to set up, so pretty much anyone who’s reading this can do that. Sadly, initial setup of the watch did require the use of an iPhone, using the same SIM as is in the Android phone. This way, it’s possible to set up mobile data as well as calling, and with the SIM back in the Android phone, a call will show up on both the Apple Watch and the Android device. Of course, this initial setup makes the process a bit more cumbersome than just buying a used Apple Watch off eBay or whatever, but I’m honestly surprised everything’s working as well as it does. This goes to show that the Apple Watch is not nearly as “deeply integrated” with the iPhone as Apple so loves to claim, and making the Apple Watch work with Android in a more official manner certainly doesn’t look to be as impossible as Apple makes it out to be when dealing with antitrust regulators. Of course, any official support would be much more involved, especially in the testing department, but it would be absolute peanuts, financially, for a company with Apple’s disgusting level of wealth. 
Anyway, if you want to set up an Apple Watch with Android, Muthian has put the code on GitHub.
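The moving parts of the notification bridge are simple enough to sketch. What follows is my own minimal Python approximation of the approach, not Muthian's actual script: Termux:API's `termux-notification-list` really does dump the Android notification shade as JSON, and Pushover's `/1/messages.json` endpoint is its documented API, but the token values are placeholders and the deduplication logic is kept as a pure function so it can be tested on its own.

```python
import json
import subprocess
import urllib.parse
import urllib.request

PUSHOVER_TOKEN = "your-app-token"  # placeholders, not real credentials
PUSHOVER_USER = "your-user-key"

def current_notifications():
    # Termux:API prints the Android notification shade as a JSON array
    out = subprocess.run(["termux-notification-list"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

def push(title, message):
    # Pushover's documented message endpoint
    data = urllib.parse.urlencode({
        "token": PUSHOVER_TOKEN, "user": PUSHOVER_USER,
        "title": title, "message": message,
    }).encode()
    urllib.request.urlopen("https://api.pushover.net/1/messages.json", data)

def new_notifications(notifications, seen):
    # Forward each (app, id) pair only once; `seen` persists across polls
    fresh = []
    for n in notifications:
        key = (n.get("packageName"), n.get("id"))
        if key not in seen:
            seen.add(key)
            fresh.append(n)
    return fresh

# Polling loop (requires Termux and a Pushover account, so left commented):
# seen = set()
# while True:
#     for n in new_notifications(current_notifications(), seen):
#         push(n.get("title", ""), n.get("content", ""))
#     time.sleep(5)
```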

A quick look at OS/2’s built-in virtualisation

Most of us are aware that IBM’s OS/2 has excellent compatibility with DOS and Windows 3.x programs, to the point where OS/2 just ships with an entire installation of Windows 3.x built-in that you can run multiple instances of. In fact, to this day, ArcaOS, the current incarnation of the maintained and slightly modernised OS/2 codebase, still comes with an entire copy of Windows 3.x, making ArcaOS one of the very best ways to run DOS and Windows 3.x programs on a modern machine, without resorting to VMware or VirtualBox. Peter Hofmann took a look at one of the earlier versions of OS/2 – version 2.1 from 1993 – to see how its DOS compatibility actually works, or more specifically, the feature “DOS from drive A:”. You can insert a bootable DOS floppy and then run that DOS in a new window. Since this is called “DOS from drive A:”, surely this is something DOS-specific, right? Maybe only supports MS-DOS or even only PC DOS? Far from it, apparently. ↫ Peter Hofmann Hofmann wrote a little test program using nothing but BIOS system calls, meaning it doesn’t use any DOS system calls. This “real mode BIOS program” can run from the boot sector, if you want to, so after combining his test program with a floppy disk boot record, you end up with a bootable floppy that runs the test program, for instance in QEMU. After a bit of work, the test program on the bootable floppy will work just fine using OS/2’s “DOS from drive A:” feature, even though it shouldn’t. What this seems to imply is that this functionality in OS/2 2.1 looks a lot like a hypervisor, or as Hofmann puts it, “basically a builtin QEMU that anybody with a 386 could use”. That’s pretty advanced for the time, and raises a whole bunch of questions about just how much you can do with this.
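The trick that makes such a floppy bootable is minimal: the BIOS loads the first 512-byte sector into memory and jumps to it, provided the sector ends with the 0x55AA signature. A sketch in Python of padding a raw real-mode payload into a valid boot sector — the two-byte payload here is just a spin-forever stub standing in for Hofmann’s actual test program:

```python
# x86 "jmp $" (EB FE) — an infinite loop; a stand-in for a real test program
payload = b"\xeb\xfe"

assert len(payload) <= 510, "payload must leave room for the boot signature"
sector = payload + b"\x00" * (510 - len(payload)) + b"\x55\xaa"
assert len(sector) == 512

with open("boot.img", "wb") as f:
    f.write(sector)

# Try it with: qemu-system-i386 -fda boot.img
```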

Fedora proposes dropping Atomic desktops for PPC64LE

Fedora is proposing to stop building their Atomic desktop versions for PPC64LE. PowerPC 64 LE basically comes down to IBM’s POWER architecture, and as far as desktop use goes, that exclusively means the POWER9 machines from Raptor Computing Systems. I reviewed their small single-socket Blackbird machine in 2021, and I also have their dual-socket Talos II workstation. I can tell you from experience that nobody who owns one of these is opting for an immutable Fedora variant, and on top of that, these machines are getting long in the tooth. Raptor passed on POWER10 because it required proprietary firmware, so we’ve been without new machines for years now. As such, it makes sense for Fedora to stop building Atomic desktops for this architecture. We will stop building the Fedora Atomic Desktops for the PowerPC 64 LE architecture. According to the count me statistics, we don’t have any Atomic Desktops users on PPC64LE. Users of Atomic Desktops on PPC64LE will have to either switch back to a Fedora package mode installation or build their own images using Bootable Containers which are available for PPC64LE. ↫ Timothée Ravier I’ve never written much about the Talos II, surmising that most of my Blackbird review applies to the Talos II, as well. If there’s interest, I can check to see what the current state of Fedora and/or other distributions on POWER9 is, and write a short review about the experience. I honestly don’t know if there’s much interest at this point in POWER9, but if there is, here’s your chance to get your questions answered.

Microsoft Recall screenshots credit cards and Social Security numbers, even with the “sensitive information” filter enabled

Microsoft’s Recall feature recently made its way back to Windows Insiders after having been pulled from test builds back in June, due to security and privacy concerns. The new version of Recall encrypts the screens it captures and, by default, it has a “Filter sensitive information” setting enabled, which is supposed to prevent it from recording any app or website that is showing credit card numbers, social security numbers, or other important financial / personal info. In my tests, however, this filter only worked in some situations (on two e-commerce sites), leaving a gaping hole in the protection it promises. ↫ Avram Piltch at Tom’s Hardware Recall might be one of the biggest own goals I have seen in recent technology history. In fact, it’s more of a series of own goals that just keep on coming, and I honestly have no idea why Microsoft keeps making them, other than the fact that they’re so high on their own “AI” supply that they just lost all touch with reality at this point. There’s some serious Longhorn-esque tunnel vision here, a project during which the company also kind of forgot the outside world existed beyond the walls of Microsoft’s Redmond headquarters. It’s clear by now that just like many other tech companies, Microsoft is so utterly convinced it needs to shove “AI” into every corner of its products, that it no longer seems to be asking the most important question during product development: do people actually want this? The response to Windows Recall has been particularly negative, yet Microsoft keeps pushing and pushing it, making all the mistakes along the way everybody has warned them about. It’s astonishing just how dedicated they are to a feature nobody seems to want and everybody warns them about. It’s like we’re all Cassandra. The issue in question here is exactly as dumb as you expect it to be. 
The “Filter sensitive information” setting is so absurdly basic and dumb it basically only seems to work on shopping sites, not anywhere else where credit card or other sensitive information might be shown. This shortcoming is obvious to anyone who thinks about what Recall does for more than one nanosecond, but Microsoft clearly didn’t take a few moments to think about this, because their response is to ask you to let them know through the Feedback Hub any time Recall fails to detect sensitive information. They’re basically asking you, the consumer, to be the filter. Unpaid, of course. After the damage has already been done. Wild. If you can ditch Windows, you should. Windows is not a place of honour.
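For contrast: the baseline technique for spotting card numbers in arbitrary text is decades old and fits in a dozen lines. This is a generic sketch — digit-run matching plus the Luhn checksum, nothing to do with whatever Recall actually uses — just to show how low the bar is:

```python
import re

def luhn_valid(number: str) -> bool:
    # Luhn checksum: double every second digit from the right, sum, mod 10
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    # candidate runs of 13–19 digits, optionally grouped by spaces or dashes
    hits = []
    for candidate in re.findall(r"(?:\d[ -]?){13,19}", text):
        digits = re.sub(r"[ -]", "", candidate)
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

Running `find_card_numbers("pay with 4111 1111 1111 1111 today")` flags the well-known Visa test number wherever it appears — in a PDF, a support chat, a banking app — not just on a shopping site.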

Fedora’s new Btrfs SIG should focus on making Btrfs’ features more accessible

As Michel Lind mentioned back in August, we wanted to form a Special Interest Group to further the development and adoption of Btrfs in Fedora. As of yesterday, the SIG is now formed. ↫ Neal Gompa Since I’ve been using Fedora on all my machines for a while now, I’ve also been using Btrfs as my one and only file system for just as much time, without ever experiencing any issues. In fact, I recently ordered four used 4TB enterprise hard drives (used, yes, but zero SMART issues) to set up a storage pool where I can download my favourite YouTube playlists so I don’t have to rely on internet connectivity and YouTube not being shit. I combined the four drives into a single 16TB Btrfs volume, and it’s working flawlessly. Of course, not having any redundancy is a terrible idea, but I didn’t care much since it’s just downloaded YouTube videos. However, it’s all working so flawlessly, and the four drives were so cheap, I’m going to order another four drives and turn the whole thing into a 16TB Btrfs volume using one of the Btrfs RAID profiles for proper redundancy, even if it “costs” me half of the 32TB of total storage. This way, I can also use it as an additional backup for more sensitive data, which is never a bad thing. The one big downside here is that all of this has to be set up and configured using the command line. While that makes sense in a server environment and I had no issues doing so, I think a product that calls itself Fedora Workstation (or, in my case, Fedora KDE, but the point stands) should have proper graphical tools for managing the file system it uses. Fedora should come with a graphical utility to set up, manage, and maintain Btrfs volumes, so you don’t have to memorise a bunch of arcane commands. I know a lot of people get very upset when you even suggest something like this, but that’s just elitist nonsense. 
Btrfs has various incredibly useful features that should be exposed to users of all kinds, not just sysadmins and weird nerds – and graphical tools are a great way to do this. I don’t know exactly what the long-term plans of the new Btrfs SIG are going to be, but I think making the useful features of Btrfs more accessible should definitely be on the list. You shouldn’t need to be a CLI expert to set up resilient, redundant local storage on your machine, especially now that the interest in digital self-sufficiency is increasing.
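For the record, the arcane commands in question are short – which is exactly why they would be easy to put behind a GUI. A sketch of both setups, with hypothetical device names (do not paste this anywhere as-is; mkfs destroys whatever is on the drives):

```shell
# Four drives pooled into one big volume, no data redundancy ("single" profile)
mkfs.btrfs -d single -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Eight drives with full redundancy: RAID1 for both data and metadata,
# so 32TB raw becomes roughly 16TB usable
mkfs.btrfs -d raid1 -m raid1 /dev/sd[a-h]

# An existing volume can also be converted in place after adding drives:
# btrfs device add /dev/sde /dev/sdf /dev/sdg /dev/sdh /mnt/pool
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```

The in-place conversion is the interesting one for my situation: no need to copy 16TB of videos off and back on again.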

There’s a market out there for a modern X11/Motif-based desktop distribution

EMWM is a fork of the Motif Window Manager with fixes and enhancements. The idea behind this is to provide compatibility with current xorg extensions and applications, without changing the way the window manager looks and behaves. This includes support for multi-monitor setups through Xinerama/Xrandr, UTF-8 support with Xft fonts, and overall better compatibility with software that requires Extended Window Manager Hints. Additionally a couple of goodies are available in the separate utilities package: XmToolbox, a toolchest-like application launcher, which reads its multi-level menu structure from a simple plain-text file ~/.toolboxrc, and XmSm, a simple session manager that provides session configuration, locking and shutdown/suspend options. ↫ EMWM homepage I had never heard of EMWM, but I immediately like it. This same developer, Alexander Pampuchin, also develops XFile, a file manager for X11 which presents the file system as it actually is, instead of using a bunch of “imaginary” locations to hide the truth, if you will. On top of that, they also develop XImaging, a comprehensive image viewer for X11. All of these use the Motif widget toolkit, focus on plain X11, and run on most Linux distributions and BSDs. They need to be compiled by the user, most likely. I am convinced that there is a small but sustainable audience for a modern, up-to-date Linux distribution (although a BSD would work just as well) that, instead of offering GNOME, KDE, Xfce, or whatever, focuses on delivering a traditional, yet modernised and maintained, desktop environment and applications using not much more than X11 and Motif, eschewing more complex stuff like GTK, Qt, systemd, Wayland, and so on. I would use the hell out of a system that gives me a version of the Motif-based desktops like CDE from the ’90s, but with some modern amenities, current hardware support, support for high-resolution displays, and so on. 
You can certainly grab bits and bobs left and right from the web and build something like this from scratch, but not everyone has the skills and time to do so, yet I think there are enough people out there who are craving something like this. There’s tons of maintained X11/Motif software out there – it’s just all spread out, disorganised, and difficult to assemble because it almost always means compiling it all from scratch, and most people simply don’t have the time and energy for that. Package this up on a solid Debian, Fedora, or FreeBSD base, and I think you’ve got quite a few users lining up.

Xfce 4.20 with experimental Wayland support released

After two years of intense development, the third major Linux desktop environment has released a new version: Xfce 4.20 is here. The major focus of this release cycle was getting Xfce ready for Wayland, and they’ve achieved quite a bit of that goal, but support for it is still experimental. Thanks to Brian and Gaël almost all Xfce components are able to run on Wayland windowing, while still keeping support for X11 windowing. This major effort was achieved by abstracting away any X11/Wayland windowing specific calls and making use of Wayland/Wlroots protocols. A whole new Xfce library, “libxfce4windowing” was introduced during that process. XWayland will not be required to run any of the ported Xfce components. ↫ Xfce development team A major gap in Xfce’s Wayland support is the fact that Xfwm4 has not been ported to Wayland yet, so the team suggests using Labwc or Wayfire instead if you want to dive into using Xfce on Wayland. While there are plans to port Xfwm4 over to Wayland, this requires a major restructuring and they’re not going to set any timelines or expectations for when this will be completed. Regardless, this is an excellent achievement and solid progress for Xfce on Wayland, which is pretty much a requirement for Xfce (and other desktop environments) to remain relevant going forward. Of course, while Wayland is a major focus this release, there’s a lot more here, too – and that’s not doing the Xfce developers justice. Xfce 4.20 comes packed with so many new features, enhancements, and bug fixes across the board that I have no idea where to start. I like the large number of changes to Thunar, like the ability to use symbolic icons in the sidebar, optimising it for small window sizes, automatically opening folders when dragging and dropping, and so much more. They’ve also done another pass to update any remaining icons not working well on HiDPI displays, removing any instances where you’d encounter fuzzy icons. 
I can’t wait to give Xfce 4.20 a go once it lands in Fedora Xfce.

“Firefox” ported to Haiku

Haiku is already awash with browsers to choose from, with Falkon (yes, the same one) being the primary choice for most Haiku users, since it offers the best overall experience. We’ve got a new addition to the team, however, as Firefox – in the form of Iceweasel, because trademark stuff and so on – has been ported to Haiku. Jules Enriquez provides some more background in a post on Mastodon: An experimental port of Firefox Iceweasel is now available on HaikuDepot! So far, most sites are working fine. YouTube video playback is fine and Discord just works, however the web browser does occasionally take itself down. Still rather usable, though! If @ActionRetro thought that Haiku was ready for daily driving with Falkon (see first screenshot), then rebranded Firefox surely has to make it even more viable by those standards! It should be noted though that just like with Falkon, some crash dialogs can be ignored (drag them to another workspace) and the web browser can still be used. ↫ Jules Enriquez It’s not actually called Firefox at the moment because of the various trademark restrictions Mozilla places on the Firefox branding, which I think is fair just to make sure not every half-assed barely-working port can slap the Firefox logo and name on itself and call it a day. As noted, this port is experimental and needs more work to bring it up to snuff and eligible for using the name Firefox, but this is still an awesome achievement and a great addition to the pool of applications that are already making Haiku quite daily-drivable for some people. Speaking of which, are there any people in our audience who use Haiku as their main operating system? There’s a lot of chatter out there about just how realistic of an option this has become, but I’m curious if any of you have made the jump and are leading the way for the rest of us. 
Action Retro‘s videos about Haiku have done a lot to spread the word, and I’m noticing more and more people from far outside the usual operating system circles talking about Haiku. Which is great, and hopefully leads to more people also picking up Haiku development, because I’m sure the team can always use fresh blood.

Google unveils Android XR for headsets and glasses

It was only a matter of time before Google would jump into the virtual/augmented reality fray once again with Android, after their several previous attempts failed to catch on. This time, it’s called Android XR, and it’s aimed at both the big clunky headsets like Apple’s Vision Pro as well as basic glasses that overlay information onto the world. Google has been working on this with Samsung, apparently, and of course, this new Android variant is drenched in “AI” slop. We’re working to create a vibrant ecosystem of developers and device makers for Android XR, building on the foundation that brought Android to billions. Today’s release is a preview for developers, and by supporting tools like ARCore, Android Studio, Jetpack Compose, Unity, and OpenXR from the beginning, developers can easily start building apps and games for upcoming Android XR devices. For Qualcomm partners like Lynx, Sony and XREAL, we are opening a path for the development of a wide array of Android XR devices to meet the diverse needs of people and businesses. And, we are continuing to collaborate with Magic Leap on XR technology and future products with AR and AI. ↫ Shahram Izadi at Google’s blog What they’ve shown of Android XR so far looks a lot like the kind of things Facebook and Apple are doing with their headsets, as far as user interface and interactions go. As for the developer story, Google is making it possible for regular Android applications to run on XR headsets, and for proper XR applications you’ll need to use Jetpack Compose and various new additions to it, and the 3D engine Google opted for is Unity, with whom they’ve been collaborating on this. For now, it’s just an announcement of the new platform and the availability of the development tools, but for actual devices that ship with Android XR you’ll have to wait until next year. 
Other than the potential for exercise, I’m personally not that interested in VR/AR, and I doubt Google’s Android-based me-too will change much in that regard.

Support my attempt to find out if you can do NFC tap-to-pay without big tech

I’ve been dropping a lot of hints about my journey to rid myself of Google’s Android on my Pixel 8 Pro lately, a quest which grew in scope until it covered everything from moving to GrapheneOS to dropping Gmail, from moving to open source “stock” Android application replacements to reconsidering my use of Google Photos, from dropping my dependency on Google Keep to setting up Home Assistant, and much, much more. You get the idea: this has turned into a very complex process where I evaluated my every remaining use of big tech, replacing them with alternatives where possible, leaving only a few cases where I’m sticking with what I was using. And yes, this whole process will turn into an article detailing my quest, because I think recent events have made removing big tech from your life a lot more important than it already was. Anyway, one of the few things I couldn’t find an alternative for was Google Pay’s tap-to-pay functionality in stores. I don’t like using cash – I haven’t held paper money in my hands in like 15 years – and I’d rather keep my bank cards, credit card, and other important documents at home instead of carrying them around and losing them (or worse). As such, I had completely embraced the tap-to-pay lifestyle, with my phone and my Pixel Watch II. Sadly, Google Pay tap-to-pay NFC payments are simply not possible on GrapheneOS (or other de-Googled ROMs, for that matter), because of Google’s stringent certification requirements. Some banks do offer NFC payments through their own applications, but mine does not. I thought this is where the story ended, but as it turns out, there is actually a way to get tap-to-pay NFC payments in stores back: Garmin Pay. Garmin offers this functionality on a number of its watches, and it pretty much works wherever Google Pay or Apple Pay is accepted, too. And best of all: it works just fine on de-Googled Android ROMs. 
People have been asking me to check this out and make it part of my quest, and ever the people-pleaser, I would love to oblige. Sadly, it does require owning a supported Garmin watch, which I don’t have. To gauge interest in me testing this, I’ve set up a Ko-Fi goal of €400 you can contribute to. Obviously, this is by no means a must, but if you’re interested in finding out if you can ditch big tech, but keep enjoying the convenience of tap-to-pay NFC payments – this is your chance.

QEMU with VirtIO GPU Vulkan support

With its latest release, QEMU added the Venus patches, so that virtio-gpu now supports Venus encapsulation for Vulkan. This is one more piece of the puzzle towards full Vulkan support. An outdated blog post on Collabora described in 2021 how to enable 3D acceleration of Vulkan applications in QEMU through the Venus experimental Vulkan driver for VirtIO-GPU with a local development environment. Following up on the outdated write-up, this is how it’s done today. ↫ Pepper Gray A major milestone, and for the adventurous, you can get it working today. Give it a few more months, and many of the versions required will be part of your distribution’s package repositories, making the process a bit easier. On a related note, Linux kernel developers are considering removing 32-bit x86 KVM host support for all architectures that support it – PowerPC, MIPS, RISC-V, and x86-64 – because nobody is using this functionality. This support was dropped from 32-bit ARM a few years ago, and the remaining architectures mentioned above have orders of magnitude fewer users still. If nobody is using this functionality, it really makes no sense to keep it around, and as such, the calls to remove it. In other words, if your custom workflow of opening your garage door through your fridge light’s flicker frequency and the alignment of the planets and custom scripts on a Raspberry Pi 2 requires this support, let the kernel developers know, or forever hold your peace.
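For the adventurous, the invocation boils down to exposing a GL-capable virtio-gpu device with Venus enabled, and backing guest RAM with a memfd so blob resources can be shared with the host. Roughly along these lines – a sketch based on the upstream virtio-gpu documentation, so check the exact flag spellings against your QEMU 9.2+ build rather than trusting me:

```shell
qemu-system-x86_64 \
  -machine q35,memory-backend=mem1 \
  -object memory-backend-memfd,id=mem1,size=8G \
  -device virtio-vga-gl,hostmem=8G,blob=true,venus=true \
  -display sdl,gl=on \
  -drive file=guest.qcow2
```

Inside the guest you then need a Mesa build with the Venus driver, at which point `vulkaninfo` should report the virtio-gpu Venus device.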

Turning off Zen 4’s op cache for curiosity and giggles

CPUs start executing instructions by fetching those instruction bytes from memory and decoding them into internal operations (micro-ops). Getting data from memory and operating on it consumes power and incurs latency. Micro-op caching is a popular technique to improve on both fronts, and involves caching micro-ops that correspond to frequently executed instructions. AMD’s recent CPUs have particularly large micro-op caches, or op caches for short. Zen 4’s op cache can hold 6.75K micro-ops, and has the highest capacity of any op cache across the past few generations of CPUs. This huge op cache enjoys high hitrates, and gives the feeling AMD is leaning harder on micro-op caching than Intel or Arm. That begs the question of how the core would handle if its big, beautiful op cache stepped out for lunch. ↫ Chester Lam at Chips and Cheese The results of turning off the op cache were far less dramatic than one would expect, and this mostly comes down to the processor having to wait on other bottlenecks anyway, like memory, and to a lot of tasks consisting of multiple types of operations, not all of which make use of the op cache. While it definitely contributes to making Zen 4 cores faster overall, even without it, it’s still an amazing core that outperforms its Intel competition. As a sidenote, this is such a fun and weird thing to do and benchmark. It doesn’t serve much of a purpose, and the information gained isn’t very practical, but turning off specific parts of a processor and observing the consequences does create some insight into exactly how a modern processor works. There are so many different elements that make up a modern processor now, and just gigahertz or even the number of cores barely tells even half the story. Anyway, we need more of these weird benchmarks.

A twenty-five year old curl bug

When we announced the security flaw CVE-2024-11053 on December 11, 2024 together with the release of curl 8.11.1 we fixed a security bug that was introduced in a curl release 9039 days ago. That is close to twenty-five years. The previous record holder was CVE-2022-35252 at 8729 days. ↫ Daniel Stenberg It’s really quite fascinating to see details like this about such a widespread and widely used tool like curl. The bug in question was a logic error, which led Stenberg to detail how even a modern language like Rust, instead of C, would not have prevented this particular issue. Still, about 40% of all security issues in curl stem from not using a memory-safe language, or about 50% of all high/critical severity ones. I understand that jumping on every bandwagon and rewriting everything in a memory-safe language is a lot harder than it sounds, but I also feel like it’s getting harder and harder to keep justifying using old languages like C. I really don’t know why people get so incredibly upset at the cold, hard data about this. Anyway, the issue that sparked this post is fixed in curl 8.11.1.
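For those who like to see the arithmetic: counting 9,039 days back from the 8.11.1 release lands the bug’s introduction in March 2000.

```python
from datetime import date, timedelta

disclosure = date(2024, 12, 11)        # curl 8.11.1 / CVE-2024-11053 announced
introduced = disclosure - timedelta(days=9039)

print(introduced)                      # 2000-03-13
print(round(9039 / 365.25, 2))         # 24.75 — indeed close to twenty-five years
```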

HP-RT: HP’s real-time operating system from the ’90s

Every now and then I load OpenPA and browse around. Its creator and maintainer, Paul Weissmann, has been very active lately updating the site with new articles, even more information, and tons of other things, and it’s usually a joy to stumble upon something I haven’t read yet, or just didn’t know anything about. This time it’s something called HP-RT, a real-time operating system developed and sold by HP for a number of its PA-RISC workstations back in the ’90s. HP-RT is derived from the real-time operating system LynxOS and was built as a real-time operating system from scratch with native POSIX API and Unix features like protected address spaces, multiprocessing, and standard GUI. Real-time scheduling is part of the kernel with response times under 200 µs, later improved to sub-100 µs for uses such as a hospital system tied to a heart monitor, or a missile tracking system. For programming, HP-RT supported dynamic shared libraries, ANSI C, Softbench (5.2), FORTRAN, ADA, C++ and PA-RISC assembly. From HP-RT 3.0, GUI-based debugging environment (DDErt) and Event Logging library (ELOG) were included. POSIX 1003.1, 1003.1b and POSIX 1003.4a draft 4 were supported. On the software side, HP-RT supported fast file system, X and Motif clients, X11 SERVERrt, STREAMSrt (SVR 3.2), NFS, and others. ↫ Paul Weissmann at OpenPA I had no idea HP-RT existed, and looking at the feature list, it seems like it was actually a pretty impressive operating system and wider ecosystem back in the ’90s when it was current. HP released several versions of its real-time operating system, with 1997’s 3.0 and 3.01 being the final version. Support for it ended in the early 2000s alongside the end of the line for PA-RISC. I’d absolutely love to try it out today, but sadly, my PA-RISC workstation – an HP Visualise c3750 – is way too “new” to be supported by HP-RT, and in the wrong product category at that. 
HP-RT required both a regular HP 9000 700 HP-UX workstation and one of HP’s VME machines with a module carrying the specific “rt” affix in the model number. On top of that you obviously needed the actual HP-RT operating system, which was part of the HP-RT Development Environment. The process entailed using the HP-UX machine to compile HP-RT, which was then downloaded to the VME machine. The odds of not only finding all the right parts to complete the setup, but also to get it all working with what is undoubtedly going to be spotty documentation and next to nobody to talk to about how to do it, are very, very slim. I’m open to suggestions, of course, but considering the probable crazy rarity of the specific hardware, the price-gouging going on in the retrocomputing world, the difficulty of shipping to the Swedish Arctic, and the knowledge required, I don’t think I’ll be the one to get this to work and show it off. But man do I want to.

Maker of emotional support robots for kids abruptly shuts down, kills all the robots in the process

Some news is both sad and dystopian at the same time, and this is one of those cases. Moxie, a start-up selling $800 emotional support robots intended to help children, is shutting down operations since it can’t find enough money, and since their robots require constant connectivity to servers to operate, all of the children’s robots will cease functioning within days. They’re not offering refunds, but they will send out a letter to help parents tell their children “in an age-appropriate way” that their lovable robot is going to die. If you have kids yourself, you know how easily they can sometimes get attached to the weirdest things, from fluffy stuffed animals designed to be cute, to random inanimate objects us adults would never consider to be even remotely interesting. I can definitely see how my own kids would be devastated if one of their favourite “emotional” toys were to suddenly stop working or disappear, and we don’t even have anything that pretends to have a personality or that actively interacts with our kids like this robot thing does. We can talk about how it’s insane that no refunds will be given, or how a company can just remotely kill a product like this without any repercussions, but most of all I’m just sad for the kids who use one and are truly attached to it, who now have to deal with their little friend going away. That’s just heartbreaking, and surely a sign of things to come as more and more companies start stuffing “AI” into their toys. The only thing I can say is that we as parents should think long and hard about what kind of toys we give our children, and that we should maybe try to avoid anything tied to a cloud service that can go away at any time.

A brief history of Mac servers

Although there’s little evidence of them today, Apple made a long succession of Mac servers and servers for Macs from 1988 to 2014, and only discontinued support for the last release of macOS Server in April 2022. Its first entry into the market was a special version of the Macintosh II running Apple’s own port of Unix way back in 1988. ↫ Howard Oakley These days, you can nab Xserves for pretty cheap on eBay, but since Apple doesn’t properly support them anymore, they’re mostly a curiosity for people who are into retro homelab stuff and the odd Apple enthusiast who doesn’t quite know what to do with them. It always felt like Apple’s head was never really in the game when it came to its servers, despite the fact that both its hardware and software were quite interesting and user friendly compared to the competition. Regardless, if my wife and I ever manage to buy our own house, the basement’s definitely getting a nice homelab rack with old – mostly non-x86 Sun and HP – servers, and I think an Xserve would be a fun addition, too. Living in the Arctic means any heat they generate is useful for like 9 or so months of the year to help warm the house, and since our electricity is generated from hydropower they wouldn’t be generating a massive excess of pollution, either. I have to figure out what to do with the excess heat during the few months of the year when it’s warm outside, though.

Meet Willow, our state-of-the-art quantum chip

Today I’m delighted to announce Willow, our latest quantum chip. Willow has state-of-the-art performance across a number of metrics, enabling two major achievements. The consensus seems to be that this is a major achievement and milestone in quantum computing, and that it’s come faster than everyone expected. This topic is obviously far more complicated than most people can handle, so we have to rely on the verdicts and opinions of independent experts to gain some sense of just how significant an announcement this really is. The paper’s published in Nature for those few of us possessing the right amount of skill and knowledge to digest this information.

Thank you to our weekly sponsor, OS-SCi

We’re grateful for our weekly sponsor, OpenSource Science B.V., an educational institution focused on Open Source software. OS-SCi is training the next generation of FOSS engineers, by using Open Source technologies and philosophy in a project learning environment. OS-SCi is offering OSNews readers a free / gratis online masterclass by Prof. Ir. Erik Mols on how the proprietary ecosystem is killing itself. This is a live event, on January 9, 2025 at 17:00 CET. Sign up here.

The state of Falkon: KDE’s browser is much better than you know

It’s no secret that I am very worried about the future of Firefox, and the future of Firefox on Linux in particular. I’m not going to rehash these worries here, but suffice to say that with Mozilla increasingly focusing on advertising, Firefox’s negligible market share, and the increasing likelihood that the Google Search deal, which accounts for 85% of Mozilla’s revenue, will come to an end, I have little faith in Firefox for Linux remaining a priority for Mozilla. On top of that, as more and more advertising nonsense, in collaboration with Facebook, makes its way into Firefox, we may soon arrive at a point where Firefox can’t be shipped by Linux distributions at all anymore, due to licensing and/or ideological reasons. I’ve been warning the Linux community, and distributions in particular, for years now that they’re going to need an alternative default browser once the inevitable day Firefox truly shits the bed is upon us. Since I’m in the middle of removing the last few remaining bits of big tech from my life, I figured I might as well put my money where my mouth is and go on a small side quest to change my browser, too. Since I use Fedora KDE on all my machines and prefer to have as many native applications as possible, I made the switch to KDE’s own browser: Falkon.

What is Falkon?

Falkon started out as an independent project called QupZilla, but in 2017 it joined the KDE project and was renamed to Falkon. It uses QtWebEngine as its engine, which is Qt’s version of the Chromium engine, with all the services that talk to Google stripped out. This effectively makes it similar to using de-Googled Chromium. The downside is that QtWebEngine does lag behind the current Chromium version; QtWebEngine 6.8.0, the current version, is based on Chromium 122, while Chromium 133 is current at the time of writing. 
The fact that Falkon uses a variant of the Chromium engine means websites just work, and there’s really nothing to worry about when it comes to compatibility. Another advantage of using QtWebEngine is that the engine is updated independently from the browser, so even if it seems Falkon isn’t getting much development, the engine it uses is updated regularly as part of your distribution’s and KDE’s Qt upgrades. The downside, of course, is that you’re using a variant of Chromium, but at least it’s de-Googled and entirely invisible to the user. It’s definitely not great, and it contributes to the Chromium monoculture, but I can also understand that a project like Qt isn’t going to develop its own browser engine, and in turn, it makes perfect sense for KDE, as a flagship Qt product, to use it as well. It’s the practical choice, and I don’t blame either of them for opting for what works, and what works now – the reality is that no matter what browser you choose, you’re either using a browser made by Google, or one kept afloat by Google. Pick your poison. It’s not realistic for Qt or KDE to develop their own browser engine from scratch, so opting for the most popular and very well funded browser engine and stripping out all of its nasty Google bits makes the most sense. Yes, we’d all like to have more capable browser engines and thus more competition, but we have to be realistic and understand that’s not going to happen while developing a browser engine is as complex as developing an entire operating system.

Falkon’s issues and strengths

While rendering websites, compatibility, and even performance are excellent – as a normal user I don’t notice any difference between running Chrome, Firefox, or Falkon on my machines – the user interface and feature set is where Falkon stumbles a bit. 
There are a few things users have come to expect from their browser that Falkon simply doesn’t offer yet, and those things need to be addressed if the KDE project wants Falkon to be a viable alternative to Firefox and Chrome, instead of just a languishing side project nobody uses. The biggest thing you’ll miss is without a doubt support for modern extensions. Falkon does have support for the deprecated PPAPI plugin interface and its own extensions system, but there’s no support for the modern extensions API Firefox, Chrome, and other browsers use. What this means for you as a user is that there are effectively no extensions available for Falkon, and that’s a huge thing to suddenly have to do without. Luckily, Falkon does have adblock built-in, including support for custom block lists, so the most important extension is there, but that’s it. There’s a very old bug report/feature request about adding support for Firefox/Chrome extensions, which in turn points to a similar feature request for QtWebEngine to adopt support for such extensions. The gist is that for Falkon to get support for modern Firefox and Chrome extensions, it will have to go through QtWebEngine and thus the Qt project. While I personally can just about get by with using the BitWarden application (instead of the extension) and the built-in adblock, I think this is an absolute must for most people to adopt Falkon in any serious numbers. Most people who would consider switching to a browser other than Chrome or Firefox are going to need extensions support. The second major thing you’ll miss is any form of synchronisation support. You won’t be synchronising your bookmarks across different machines, let alone open tabs. Of course, this extends to mobile, where Falkon has no presence, so don’t expect to send your open tabs from your phone to your desktop as you get home. 
While I don’t think this is as big of an issue as the lack of modern extensions, it’s something I use a lot when working on OSNews – I find stories to link to while browsing on my phone, and then open them on my desktop to post them through the tab sharing feature of Firefox, and