Monthly Archive: November 2024
More bad news from Mozilla.

The Mozilla Foundation, the nonprofit arm of the Firefox browser maker Mozilla, has laid off 30% of its employees as the organization says it faces a “relentless onslaught of change.” Announcing the layoffs in an email to all employees on October 30, the Mozilla Foundation’s executive director Nabiha Syed confirmed that two of the foundation’s major divisions — advocacy and global programs — are “no longer a part of our structure.”
↫ Zack Whittaker at TechCrunch

This means Mozilla will no longer be advocating for an open web, privacy, and related ideals, which fits right in with the organisation’s steady decline into an ad-driven effort that also happens to be making a web browser used by, I’m sorry to say, effectively nobody. I just don’t know how many more signs people need to see before realising that the future of Firefox is very much at stake, and that we’re probably only a few years away from losing the only non-big tech browser out there. This should be a much bigger concern than it currently seems to be, especially for the Linux and BSD world, which relies heavily on Firefox and has no valid alternative to shift to once the browser’s no longer compatible with the various open source requirements enforced by Linux distributions and the BSDs.

What this could also signal is that the sword of Damocles dangling above Mozilla’s head is about to come down, and that the people involved know more than we do. Google is effectively bankrolling Mozilla – to the tune of about 80% of its revenue – but that deal has come under increasing scrutiny from regulators, and Google itself must be wondering why it’s wasting money supporting a browser nobody’s using. We’re very close to a web ruled by Google and Apple. If that prospect doesn’t utterly terrify you, I honestly wonder what you’re doing here, reading this.
Earlier this year, a proposal was made to change the primary edition of Fedora from the GNOME variant to the KDE variant. This proposal, while serious, was mostly intended to stir up discussion about the position of the Fedora KDE spin within the larger Fedora community, and it seems this has had its intended effect. A different, but related proposal, to make Fedora KDE equal in status to the Fedora GNOME variant, has been accepted.

After a few months of being live, the proposal has now been unanimously accepted, which means that starting with Fedora 42, the GNOME and KDE versions will have equal status, and thus will receive equal marketing and positioning on the website. Considering how many people really enjoy Fedora KDE, this is a great outcome, and probably the fairest way to handle the situation for a distribution as popular as Fedora. I use Fedora KDE on all my machines, so for me, this is great news.
LXQt, the desktop environment that is to KDE what Xfce is to GNOME, has released version 2.1.0, and while the version number bump seems modest, it’s got a big ace up its sleeve: you can now run LXQt in a Wayland session, and they claim it works quite well, too, with support for a wide variety of compositors.

Through its new component lxqt-wayland-session, LXQt 2.1.0 supports 7 Wayland sessions (with Labwc, KWin, Wayfire, Hyprland, Sway, River and Niri), has two Wayland back-ends in lxqt-panel (one for kwin_wayland and the other general), and will add more later. All LXQt components that are not limited to X11 — i.e., most components — work fine on Wayland. The sessions are available in the new section Wayland Settings inside LXQt Session Settings. At least one supported Wayland compositor should be installed in addition to lxqt-wayland-session for it to be used. There is still hard work to do, but all of the current LXQt Wayland sessions are quite usable; their differences are about what the supported Wayland compositors provide.
↫ LXQt 2.1.0 release announcement

This is great news for LXQt, as it ensures the desktop environment is ready to keep up with what modern Linux distributions provide. Crucially, and in line with what we’ve come to expect from LXQt, X11 support remains a core part of the project; they even go so far as to say “the X11 session will be supported indefinitely”, which should set people who prefer to stay on X11 at ease. I personally may have gleefully left X11 in the dustbin of history, but many among us haven’t, and it’s welcome to see LXQt’s clear promise here. Many of the other improvements in this release are tied to Wayland, making sure the various components work and Wayland settings can be adjusted. On top of that, there’s the usual list of bug fixes and smaller changes, too.
The current version of Windows on ARM contains Prism, Microsoft’s emulator that allows x86-64 code to run on ARM processors. While it was already relatively decent on the recent Snapdragon X platform, it could still be very hit-or-miss with what applications it would run, and especially games seemed to be problematic. As such, Microsoft has pushed out a major update to Prism that adds support for a whole bunch of extensions to the x86 architecture.

This new support in Prism is already in limited use today in the retail version of Windows 11, version 24H2, where it enables the ability to run Adobe Premiere Pro 25 on Arm. Starting with Build 27744, the support is being opened to any x64 application under emulation. You may find some games or creative apps that were blocked due to CPU requirements before will be able to run using Prism on this build of Windows. At a technical level, the virtual CPU used by x64 emulated applications through Prism will now have support for additional extensions to the x86 instruction set architecture. These extensions include AVX and AVX2, as well as BMI, FMA, F16C, and others, that are not required to run Windows but have become sufficiently commonplace that some apps expect them to be present. You can see some of the new features in the output of a tool like Coreinfo64.exe.
↫ Amanda Langowski and Brandon LeBlanc on the Windows Blog

Hopefully this makes running existing x86 applications that don’t yet have an ARM version a more reliable affair for Windows on ARM users.
A long, long time ago, back when running BeOS as my main operating system had finally become impossible, I had a short stint running QNX as my one and only operating system. In 2004, before I joined OSNews and became its managing editor, I also wrote and published an article about QNX on OSNews, which is cringe-inducing to read over two decades later (although I was only 20 when I wrote that – I should be kind to my young self). Sadly, the included screenshots have not survived the several transitions OSNews has gone through since 2004.

Anyway, back in those days, it was entirely possible to use QNX as a general purpose desktop operating system, mostly because of two things. First, the incredible Photon microGUI, an excellent and unique graphical environment that was a joy to use, and second, a small but dedicated community of enthusiasts, some of whom were QNX employees, who ported a ton of open source applications, from basic open source tools to behemoths like Thunderbird, the Mozilla Suite, and Firefox, to QNX. It even came with an easy-to-use package manager and associated GUI to install all of these applications without much hassle. Using QNX like this was a joy. It really felt like a tightly controlled, carefully crafted user experience, despite desktop use being so low on the priority list for the company that it might as well have not been on there at all.

Not long after, I think a few of the people inside QNX involved with the QNX desktop community left the company, and the entire thing just fizzled out afterwards when the company was acquired by Harman International. Not long after, it became clear the company had lost all interest, a feeling only solidified once BlackBerry acquired it. Somewhere in between, the company released some of its code under a not-quite-open-source license, accompanied by a rather lacklustre push to get the community interested again. This, too, fizzled out.
Well, it seems the company is trying to reverse course, and has started courting the enthusiast community once again. This time, the effort is called QNX Everywhere, and it involves making QNX available for non-commercial use for anyone who wants it. No, it’s not open source, and yes, it still requires jumping through some hoops, but it’s better than nothing. In addition, QNX also put a bunch of open source demos, applications, frameworks, and libraries on GitLab.

One of the most welcome new efforts is a bootable QNX image for the Raspberry Pi 4 (and only the 4, sadly, which I don’t own). It comes with a basic set of demo applications you can run from the command line, including a graphical web browser, but sadly, it does not seem to come with the Photon microGUI or any modern equivalent. I’m guessing Photon hasn’t seen a ton of work since its golden days two decades ago, which might explain why it’s not here. There’s also a list of current open source ports, which includes chunks of toolkits like GTK and Qt, and a whole bunch of other stuff.

Honestly, as cool as this is, it seems mostly aimed at embedded developers instead of weird people who want to use QNX as a general purpose operating system, which makes total sense from QNX’s perspective. I hope the Photon microGUI will make a return at some point, and it would be awesome – though unlikely, I expect – if QNX could be released as open source, making it more likely that a community of enthusiasts could spring up around it. For now, without much for a non-developer like me to do with it, it’s not making me run out to buy a Raspberry Pi 4 just yet.
Old-school Apple fans probably remember a time, just before the iPhone became a massive gaming platform in its own right, when Apple released a wide range of games designed for late-model clickwheel iPods. While those clickwheel-controlled titles didn’t exactly set the gaming world on fire, they represent an important historical stepping stone in Apple’s long journey through the game industry. Today, though, these clickwheel iPod games are on the verge of becoming lost media—impossible to buy or redownload from iTunes and protected on existing devices by incredibly strong Apple DRM. Now, the classic iPod community is engaged in a quest to preserve these games in a way that will let enthusiasts enjoy these titles on real hardware for years to come.
↫ Kyle Orland at Ars Technica

A nice effort, of course, and I’m glad someone is putting time and energy into preserving these games and making them accessible to a wider audience. As is usual with Apple, these small games were heavily encumbered with DRM, being locked both to the original iTunes account that bought them, and to the specific hardware identifier of the iPod they were initially synchronised to using iTunes. A clever way around this DRM exists: collectors and enthusiasts authorise their iTunes accounts on the same iTunes installation, thus adding their respective iPod games to that single installation. Any other iPods can then be synced to that master account. The iPod Clickwheel Games Preservation Project takes this approach to the next level by setting up a Windows virtual machine with iTunes installed in it, which can then be shared freely around the web for people to add the games to their collection. This is a remarkably clever method of ensuring these games remain accessible, but it obviously does require knowledge of setting up Qemu and USB passthrough.
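For readers curious what the QEMU side of that setup involves, here is a minimal, hedged sketch of the kind of invocation required to pass a physical iPod through to the shared Windows/iTunes virtual machine. The disk image name and the iPod product ID below are hypothetical stand-ins, not values from the project; 0x05ac is Apple’s USB vendor ID, and `lsusb` will list the actual product ID of a connected iPod.

```shell
# Sketch only: pass a physical iPod through to a Windows VM running iTunes.
# The image name and product ID are placeholders, not project-provided values.
VID=05ac              # Apple's USB vendor ID
PID=1261              # hypothetical iPod product ID; check yours with `lsusb`
DISK=itunes-vm.qcow2  # the shared Windows-plus-iTunes disk image

# Build the command as a string instead of executing it, so the sketch can be
# inspected without QEMU installed or an iPod attached.
CMD="qemu-system-x86_64 -m 4G -enable-kvm \
  -drive file=${DISK},format=qcow2 \
  -usb -device usb-host,vendorid=0x${VID},productid=0x${PID}"

echo "$CMD"
```

The `usb-host` device hands the physical iPod to the guest, so iTunes inside the VM sees it as if it were plugged into a real Windows machine.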
I personally never owned an iPod – I was a MiniDisc fanatic until my Android phone took over the role of music player – so I also had no clue these games even existed. I assume most of them weren’t exactly great to control with the limited input method of the iPod, but that doesn’t mean there won’t be huge numbers of people who have fond memories of playing these games when they were younger – and thus, they are worth preserving. We can only hope that one day, someone will create a virtual machine that can run the actual iPod operating system, called Pixo OS.
Nothing is sacred.

With this update, we are introducing the ability to rewrite content in Notepad with the help of generative AI. You can rephrase sentences, adjust the tone, and modify the length of your content based on your preferences to refine your text.
↫ Dave Grochocki at the Windows Insider Blog

This is the reason everything is going to shit.
Today, Microsoft announced the general availability of Windows Server IoT 2025.

This new release includes several improvements, including advanced multilayer security, hybrid cloud agility, AI, performance enhancements, and more. Microsoft claims that Windows Server IoT 2025 will be able to handle the most demanding workloads, including AI and machine learning. It now has built-in support for GPU partitioning and the ability to process large datasets across distributed environments. With Live Migration and High Availability, it also offers a high-performance platform for both traditional applications and advanced AI workloads.
↫ Pradeep Viswanathan at Neowin

Windows Server IoT 2025 brings the same benefits, new features, and improvements as the just-released regular Windows Server 2025. I must admit I’m a little unclear as to what Windows Server IoT has to offer over the regular edition, and reading the various Microsoft marketing materials and documents doesn’t really make it any clearer for me either, since I’m not particularly well-versed in all that enterprise networking lingo.
NetBSD is an open-source, Unix-like operating system known for its portability, lightweight design, and robustness across a wide array of hardware platforms. Initially released in 1993, NetBSD was one of the first open-source operating systems based on the Berkeley Software Distribution (BSD) lineage, alongside FreeBSD and OpenBSD. NetBSD’s development has been led by a collaborative community and is particularly recognized for its “clean” and well-documented codebase, a factor that has made it a popular choice among users interested in systems programming and cross-platform compatibility.
↫ André Machado

I’m not really sure what to make of this article, since it mostly reads like an advertisement for NetBSD, but considering NetBSD is one of the lesser-talked-about variants of an operating system family that already sadly plays second fiddle to the Linux behemoth, I don’t think giving it some additional attention is really hurting anybody. The article still gives a solid overview of the history and strengths of NetBSD, which makes it a good introduction. I have personally never tried NetBSD, but it’s on my list of systems to try out on my PA-RISC workstation, since from what I’ve heard it’s the only BSD which can possibly load up X11 on the Visualize FX10pro graphics card it has (OpenBSD can only boot to a console on this GPU). While I could probably coax some cobbled-together Linux installation into booting X11 on it, where’s the fun in that?

Do any of you lovely readers use NetBSD for anything? FreeBSD and even OpenBSD are quite well represented as general purpose operating systems in the kinds of circles we all frequent, but I rarely hear about people using NetBSD other than explicitly because it supports some outdated, arcane architecture in 2024.
Another month lies behind us, so another monthly update from Redox is upon us. The biggest piece of news this time is undoubtedly that Redox now runs on RISC-V – a major achievement.

Andrey Turkin has done extensive work on RISC-V support in the kernel, toolchain and elsewhere. Thanks very much Andrey for the excellent work! Jeremy Soller has incorporated RISC-V support into the toolchain and build process, has begun some refactoring of the kernel and device drivers to better handle all the supported architectures, and has gotten the Orbital Desktop working when running in QEMU.
↫ Ribbon and Ron Williams

That’s not all, though. Redox on the Raspberry Pi 4 boots to the GUI login screen, but needs more work, especially on USB support, to become a fully usable target. The application store from the COSMIC desktop environment has been ported, and as part of this effort, Redox also adopted FreeDesktop standards to make package installation easier – which just makes sense, with more and more of COSMIC making its way to Redox. Of course, there’s also a slew of smaller improvements to the kernel, various drivers including the ACPI driver, RedoxFS, Relibc, and a lot more.

The progress Redox is making is astounding, and while that’s partly because it’s easier to make progress when there’s a lot of low-hanging fruit, as there inevitably will be in a relatively new operating system, it’s still quite an achievement. I feel very positive about the future of Redox, and I can’t wait until it reaches a point where more general purpose use becomes viable.
Microsoft has confirmed the general availability of Windows Server 2025, which, as a long-term servicing channel (LTSC) release, will be supported for almost ten years.

This article describes some of the newest developments in Windows Server 2025, which boasts advanced features that improve security, performance, and flexibility. With faster storage options and the ability to integrate with hybrid cloud environments, managing your infrastructure is now more streamlined. Windows Server 2025 builds on the strong foundation of its predecessor while introducing a range of innovative enhancements to adapt to your needs.
↫ What’s new in Windows Server 2025 article

It should come as no surprise that Windows Server 2025 comes loaded with a ton of new features and improvements. I already covered some of those, such as DTrace by default, NVMe and storage improvements, hotpatching, and more. Other new features we haven’t discussed yet are a massive list of changes and improvements to Active Directory, a feature-on-demand for Azure Arc, support for Bluetooth keyboards, mice, and other peripherals, and tons of Hyper-V improvements. SMB is also seeing so many improvements it’s hard to pick just a few to highlight, and software-defined networking is also touted as a major aspect of Server 2025. With SDN, you can separate the network control plane from the data plane, giving administrators more flexibility in managing their network.

I could keep going listing all of the changes, but you get the idea – there’s a lot here. You can try Windows Server 2025 for free for 180 days, as a VM in Azure, a local virtual machine image, or installed locally through an ISO image.
Some months ago, I got really fed up with C. Like, I don’t hate C. Hating programming languages is silly. But it was way too much effort to do simple things like lists/hashmaps and other simple data structures and such. I decided to try this language called Odin, which is one of these “Better C” languages. And I ended up liking it so much that I moved my game Artificial Rage from C to Odin. Since Odin has support for Raylib too (like everything really), it was very easy to move things around. Here’s how it all went.. Well, what I remember the very least.
↫ Akseli Lahtinen

You programmers might’ve thought you escaped the wrath of Monday on OSNews, but after putting the IT administrators to work in my previous post, it’s now time for you to get to work. If you have a C codebase and want to move it to something else, in this case Odin, Lahtinen’s article will send you on your way. As someone who barely knows how to write HTML, it’s difficult for me to say anything meaningful about the technical details, but I feel like there’s a lot of useful, first-hand info here.
It’s the start of the work week, so for the IT administrators among us, I have another great article by friend of the website, Stefano Marinelli. This article covers migrating a Proxmox-based setup to FreeBSD with bhyve.

The load is not particularly high, and the machines have good performance. Suddenly, however, I received a notification: one of the NVMe drives died abruptly, and the server rebooted. ZFS did its job, and everything remained sufficiently secure, but since it’s a leased server and already several years old, I spoke with the client and proposed getting more recent hardware and redoing the setup based on a FreeBSD host.
↫ Stefano Marinelli

If you’re interested in moving one of your own setups, or one of your clients’ setups, from Linux to FreeBSD, this is a great place to start and get some ideas, tips, and tricks. Like I said, it’s Monday, and you need to get to work.
It’s been less than a week, and late Friday night we reached the fundraiser goal of €2500 (it sat at 102% when I closed it) on Ko-Fi! I’m incredibly grateful for each and every donation, big or small, and every new Patreon that joined our ranks. It’s incredible how many of you are willing to support OSNews to keep it going, and it means the absolute world to me. Hopefully we’ll eventually reach a point where monthly Patreon income is high enough so we can turn off ads for everyone, and be fully free from any outside dependencies. Of course, it’s not just those that choose to support us financially – every reader matters, and I’m very thankful for each and every one of you, donor/Patreon or not. The weekend’s almost over, so back to regular posting business tomorrow. I wish y’all an awesome Sunday evening.
Many MacOS users are probably used by now to the annoyance that comes with unsigned applications, as they require a few extra steps to launch them. This feature is called Gatekeeper and checks for an Apple Developer ID certificate. Starting with MacOS Sequoia 15, the easy bypassing of this feature with e.g. holding Control when clicking the application icon is now no longer an option, with version 15.1 disabling ways to bypass this completely. Not unsurprisingly, this change has caught especially users of open source software like OpenSCAD by surprise, as evidenced by a range of forum posts and GitHub tickets.
↫ Maya Posch at Hackaday

It seems Apple has disabled the ability for users to bypass application signing entirely, which would be just the next step in the company’s long-standing effort to turn macOS into iOS, with the same, or at least similar, lockdowns and restrictive policies. This would force everyone developing software for macOS to spend €99 per year to get their software signed, which may not be a realistic option for a lot of open source software.

Before macOS 15.0, you could Control-click an unsigned application and force it to run. In macOS 15.0, Apple removed the ability to do this; instead, you had to try to open the application (which would fail), and then open System Settings, go to Privacy & Security, and click the “Open Anyway” button to run the application. Stupidly convoluted, but at least it was possible to run unsigned applications. In macOS 15.1, however, even this convoluted method no longer seems to be working. When you try to launch an unsigned application in macOS 15.1, you get a dialog that reads The application “Finder” does not have permission to open “(null)”, and no button to open the application anyway appears under Privacy & Security. The wording of the dialog would seem to imply this is a bug, but Apple’s lack of attention to UI detail in recent years means I wouldn’t be surprised if this is intentional.
This means that the only way to run unsigned applications on macOS 15.1 is to completely disable System Integrity Protection and Gatekeeper. To do this, you have to boot into recovery mode, open the terminal, run the command sudo spctl --master-disable, and reboot. However, I do not consider this a valid option for 99.9% of macOS users, and having to disable complex stuff like this through recovery mode and several reboots just to launch an application is utterly bizarre. For those of you still stuck on macOS, I can only hope this is a bug, and not a feature.
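For reference, that recovery-mode sequence can be sketched as follows. Note that `csrutil disable` is macOS’s standard switch for turning off System Integrity Protection and is my addition here; the post itself names only `spctl`. The sketch prints the commands rather than running them, since they are only meaningful from the Terminal inside macOS Recovery.

```shell
# Print the recovery-mode sequence instead of executing it: these commands
# are destructive and only make sense inside macOS Recovery.
SEQUENCE='csrutil disable         # turn off System Integrity Protection
spctl --master-disable  # turn off Gatekeeper assessments entirely
reboot                  # changes take effect after a restart'

printf '%s\n' "$SEQUENCE"
```

Keep in mind this trades away significant platform security just to be able to launch unsigned applications, which is exactly why it’s not an option for most users.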
In a major shift of its release cycle, Google has revealed that Android 16 will be released in Q2 of 2025, confirming my report from late last month.

Android 16 is the name of the next major release of the Android operating system, and its release in Q2 marks a significant departure from the norm. Google typically pushes out a new major release of Android in Q3 or Q4, but the company has decided to move next year’s major release up by a few months so more devices will get the update sooner.
↫ Mishaal Rahman at Android Authority

That’s a considerable shake-up of Android’s long-lasting release cadence. The change includes more than just moving up the major Android release, as Google also intends to ship more minor releases of Android throughout the year. The company has already unveiled a rough schedule for Android 16, only weeks after releasing Android 15, with the major Android 16 release coming in the second quarter of 2025, followed by a minor release in the fourth quarter of 2025.

There are two reasons Google is doing this. First, this new release schedule better aligns with when new flagship Android devices are released, so that from next year onwards, they can ship with that year’s latest version of Android preinstalled, instead of last year’s release. This should help bump up the number of users on the latest release. Second, this will allow Google to push out SDK releases more often, allowing for faster bug fixing.

I honestly feel like most users will barely notice this change. Not only is the Android update situation still quite messy compared to its main rival iOS, but the smartphone operating system market has also matured quite a bit, and the changes between releases are no longer even remotely as massive as they used to be. Other than Pixel users, I don’t think most people will even realise they’re on a faster release schedule.
Genode’s rapid development carries on apace. Whilst Genode itself is a so-called OS framework – the computing version of a rolling chassis that can accept various engines (microkernels) and coachwork of the customer’s choice – they also have an in-house PC desktop system. This flagship product, Sculpt OS, comes out on a biannual schedule, and autumn brings us the second release of the year, with what has become an almost customary big advance:

Among the many usability-related topics on our road map, multi-monitor support is certainly the most anticipated feature. It motivated a holistic modernization of Genode’s GUI stack over several months, encompassing drivers, the GUI multiplexer, inter-component interfaces, up to widget toolkits. Sculpt OS 24.10 combines these new foundations with a convenient user interface for controlling monitor modes, making brightness adjustments, and setting up mirrored and panoramic monitor configurations.
↫ Genode website

Sculpt OS 24.10 is available as a ready-to-use system image for PC hardware, the PinePhone, and the MNT Reform laptop.
Another day, another Windows Recall problem. Microsoft is delaying the feature yet again, this time from October to December.

“We are committed to delivering a secure and trusted experience with Recall. To ensure we deliver on these important updates, we’re taking additional time to refine the experience before previewing it with Windows Insiders,” says Brandon LeBlanc, senior product manager of Windows, in a statement to The Verge. “Originally planned for October, Recall will now be available for preview with Windows Insiders on Copilot Plus PCs by December.”
↫ Tom Warren at The Verge

Making Recall secure, opt-in, and uninstallable is apparently taking more time than the company originally planned. When security, opt-in consent, and uninstallability are not keywords during your design and implementation process for new features, this is the ungodly mess you end up with. This could’ve all been prevented if Microsoft wasn’t high on its own “AI” supply.
Torvalds said that the current state of AI technology is 90 percent marketing and 10 percent factual reality. The developer, who won Finland’s Millennium Technology Prize for the creation of the Linux kernel, was interviewed during the Open Source Summit held in Vienna, where he had the chance to talk about both the open-source world and the latest technology trends.
↫ Alfonso Maruccia at Techspot

Well, he’s not wrong. “AI” definitely feels like a bubble at the moment, and while there will probably eventually be useful implementations people might actually want to use to produce quality content, most “AI” features today produce a stream of obviously fake diarrhea full of malformed hands, lies, and misinformation. Maybe we’ll eventually work out these serious kinks, but for now, it’s mostly just a gimmick providing us with an endless source of memes. Which is fun, but not exactly what we’re being sold, and not something worth destroying the planet for even faster. Meanwhile, Google is going utterly bananas with its use of “AI” inside the company, with Sundar Pichai claiming 25% of code inside Google is now “AI”-generated.

We’re also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.
↫ Sundar Pichai

So much here feels wrong. First, who wants to bet those engineers care a whole lot less about the generated code than about code they write themselves? Second, who wants to bet that generated code is entirely undocumented? Third, who wants to bet what the additional costs will be a few years from now, when the next batch of engineers tries to make sense of that undocumented generated code? Sure, Google might save a bit on engineers’ salaries now, but how much extra will they have to spend to unspaghettify that diarrhea code in the future?
It will be very interesting to keep an eye on this, and check back in, say, five years, and hear from the Google engineers of the future how much of their time is spent fixing undocumented “AI”-generated code. I can’t wait.