
Understanding surrogate pairs: why some Windows filenames can’t be read

Windows was an early adopter of Unicode, and its file APIs have used UTF‑16 internally since Windows 2000 (in the Windows 95 era it was UCS-2, back when the Unicode standard was only a draft on paper, but that's another topic). Using UTF-16 means that filenames, text strings, and other data are stored as sequences of 16‑bit units. For Windows, a properly formed surrogate pair is perfectly acceptable. However, issues arise when string manipulation produces isolated or malformed surrogates. Such errors can lead to unreadable filenames and display glitches, even though the operating system itself can execute files correctly. But we can also create them deliberately, as we can see below.

↫ Zafer Balkan

What a wild ride and an odd corner case. I wonder what kind of odd and fun shenanigans this could be used for.
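The failure mode Balkan describes is easy to reproduce in any language that exposes Unicode encoding machinery. Here's a small Python sketch (my own illustration, not from the article) showing that a matched surrogate pair round-trips fine, while a lone surrogate isn't a valid Unicode scalar value and can't even be encoded:

```python
# A matched surrogate pair (0xD83D 0xDE00) decodes to a single code
# point, U+1F600 -- exactly what Windows expects in a filename.
pair = b"\xd8\x3d\xde\x00".decode("utf-16-be")
print(pair == "\U0001F600")  # True

# A lone high surrogate is not a valid Unicode scalar value, so strict
# encoders reject it -- the same reason tools choke on filenames that
# string manipulation has left with an unpaired surrogate.
lone = "\ud800"
try:
    lone.encode("utf-8")
except UnicodeEncodeError:
    print("lone surrogate cannot be encoded")
```

The asymmetry is the whole story: Windows happily stores the raw 16-bit units, but anything downstream that has to convert them to a valid Unicode encoding falls over.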

Mozilla is going to collect a lot more data from Firefox users

I guess my praise for Mozilla's and Firefox's continued support for Manifest v2 had to be balanced out by Mozilla doing something stupid. Mozilla just published Terms of Use for Firefox for the first time, as well as an updated Privacy Notice; both come into effect immediately and include some questionable terms.

The Terms of Use state:

When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.

↫ Firefox's new Terms of Use

That's incredibly broad, and it could easily be argued that this gives Mozilla the right to use whatever you post or upload online through Firefox, which is clearly insane. It might also just be standard, wholly unenforceable legalese, but the fact that it's now in the Firefox Terms of Use you have to accept is disconcerting. Does this mean that if an artist uses Firefox to upload a new song they made, Mozilla now has a license to use that song in whatever way they deem fit? You'd hope not, but that does seem to be what the terms are stating here.

Moving on to the new Privacy Notice, it seems Mozilla intends to collect more data in more situations. For instance, Mozilla is going to collect things such as “Unique identifiers” and “Browsing data” to “Market [their] services”, consent for which Mozilla will only ask in jurisdictions where such consent is required – and it's opt-out, not opt-in. I would hazard a guess that even in places with strict privacy regulations, the wording of such consent will probably be obtuse, and the opt-out checkbox hidden somewhere deep in the settings.

Mozilla also intends to collect “all data types” to “comply with applicable laws, and identify and prevent harmful, unauthorized or illegal activity”. Considering how fast, I don’t know, being trans or women’s health care is criminalised in the US, “illegal activity” can cover a lot of damn things once you have totalitarians like Musk and Trump in power. An organisation like Mozilla shouldn’t be collecting any data types, let alone all of them, and especially not in places where such data types can lead to real harm to innocent people.

The backlash to the new Terms of Use and updated Privacy Notice is already growing, and it further cements my worries that Mozilla is intending to invest more and more into becoming an advertising company first, browser maker second. The kinds of data they’re going to collect now from Firefox users are exactly the kinds of data that are incredibly useful to advertisers, and it doesn’t take a genius to see where this is going.

PowerPC Windows NT made to run on GameCube and Wii

Remember about half a year ago, when the PowerPC versions of Windows NT were made to run on certain models of PowerPC Macs? The same developer responsible for that work, Rairii, took all of this to the next level, and it’s now possible to run the PowerPC version of Windows NT on the GameCube, Wii, Wii U, and a few related development boards.

NT 3.51 RTM and higher. NT 3.51 betas (build 944 and below) will need kernel patches to run due to processor detection bugs. NT 3.5 will never be compatible, as it only supports PowerPC 601. (The additional suspend/hibernation features in NT 3.51 PMZ could be made compatible in theory, but in practice would require all of the additional drivers for that to be reimplemented.)

↫ Windows NT for GameCube/Wii GitHub page

As you may have expected, there are some issues, such as instability and random reboots, broken USB hotplugging, and some other, smaller problems, but none of that takes away from just how awesome and impressive this really is. There's framebuffer support for the Flipper GPU, and full support for the controller ports and a ton of compatible controllers and related input devices, including support for the N64 mouse and keyboard, although said support is untested.

The GameCube and Wii (U) are PowerPC computers, after all, running IBM processors, so it shouldn’t be surprising that running Windows NT on them is possible. Still, it’s an impressive feat of engineering to get this to work at all, let alone in as complete a state as it appears to be.

zlib-rs is faster than C

I’m sure we can all have a calm, rational discussion about this, so here it goes: zlib-rs, the Rust re-implementation of the zlib library, is now faster than its C counterparts in both decompression and compression.

We’ve released version 0.4.2 of zlib-rs, featuring a number of substantial performance improvements. We are now (to our knowledge) the fastest api-compatible zlib implementation for decompression, and beat the competition in the most important compression cases too.

↫ Folkert de Vries
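"API-compatible" here means compatible at the level of zlib's C API and wire format, so any conforming implementation has to produce streams every other one can decompress. A quick round trip through Python's own zlib bindings (which wrap whichever zlib-compatible library sits underneath) illustrates the contract being benchmarked:

```python
import zlib

data = b"OSNews " * 1000

# Compress at the highest level; the output is a standard zlib stream
# that any API-compatible implementation (stock C zlib, zlib-ng, or
# zlib-rs via its C API) must be able to decompress byte-for-byte.
compressed = zlib.compress(data, level=9)
assert zlib.decompress(compressed) == data
print(f"{len(data)} bytes -> {len(compressed)} bytes")
```

Because the format is fixed, implementations can only compete on speed (and, for compression, on ratio at a given level) — which is why a faster drop-in replacement is such a direct win.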

As someone who isn't a programmer, looking at all the controversies and fallout around anything related to Rust is both fascinating and worrying. Fascinating because Rust clearly brings a whole slew of improvements over established and older languages, and worrying because the backlash from the establishment has been wildly irrational and bordering on the childish, complete with temper tantrums and people taking their ball and going home. It shouldn't surprise me that people get attached to programming languages the same way people get attached to operating systems, but surprisingly, it still does.

If Rust not only provides valuable benefits like memory safety, but can also be used to create implementations that are faster than those written in, say, C, it's really only going to be a matter of time before blocking Rust from, say, the Linux kernel becomes an untenable position. Progress has a tendency to find a way, especially as the benefits become more substantial, and as studies show, even writing just new code in memory-safe languages provides substantial benefits. In other words, more and more projects will simply switch over to Rust for new code where it makes sense, whether Rust haters want it or not.

There will be enough non-Rust code to write and maintain, though, so I don’t think people will be out of a job any time soon because they refuse to learn Rust, but to me as an outsider, the Rust hate seems to grow more and more irrational by the day.

Mozilla reaffirms it won’t remove Manifest v2 support from Firefox

Mozilla has officially reiterated that it’s going to keep offering support for both Manifest v2 and Manifest v3 extensions in Firefox. Google is removing support for Manifest v2 from Chrome, and with it a feature called blockingWebRequest that is used by ad blockers like uBlock Origin. Google’s replacement for that feature is more restrictive and less capable, and as such, uBlock Origin no longer works on Chrome.

Firefox, however, will continue supporting both blockingWebRequest and declarativeNetRequest — giving developers more flexibility and keeping powerful privacy tools available to users.

↫ Scott DeVaney and Ed Sullivan

There's a lot to be worried about when it comes to Mozilla's future, but on this matter, at least, they're taking the correct stance that genuinely puts users first. It's no surprise Google is using Manifest v3 as an excuse to nerf adblocking in Chrome, since adblocking cuts into Google's most important source of revenue. If you're still using Chrome, this alone should be more than enough reason to switch to Firefox, so you can retain the most capable form of adblocking.

The various Chromium skins will most likely all lose support for Manifest v2 as well once the code is actually removed from Chromium in June 2025. Vivaldi has announced as much, and unless any of the other Chromium skins out there decide to fork Chromium and maintain their own version, you can expect all of them to lose support for Manifest v2 around the same date. Safari is harder to pin down, since Apple doesn't make statements about future products. For now, it supports both Manifest v2 and v3, and I don't really see a reason why Apple would remove v2 support.

12 years of incubating Wayland color management

The Wayland color-management protocol extension has landed on Feb 13th, 2025, in upstream wayland-protocols repository in the staging directory. It was released with wayland-protocols 1.41. The extension enables proper interactions between traditional (sRGB), Wide Color Gamut (WCG), and High Dynamic Range (HDR) image sources and displays once implemented in Wayland compositors and used in applications.

[…]

Of course, a protocol is just a language. Two participants need to speak the same language for the language to be of any use: Wayland compositors and a component on the application side (e.g., a toolkit library). Major efforts have been going on in various projects to prove and take advantage of the protocol, including KWin, Mutter, Weston, wlroots, GStreamer, GTK, Qt, SDL, Mesa, and mpv. Support in Mesa means that applications will be able to render and display in HDR by using the relevant EGL and Vulkan features.

↫ Pekka Paalanen

Colour management has been an important missing piece of the Wayland puzzle, so it’s good to see this finally released and added to Wayland as a new protocol after so many years of work. It’s important to note that the work done so far focuses almost entirely on the entertainment side of things, like watching video or playing games. The other important side, professional colour management for things like photo editing or desktop publishing, is still missing, with the major holdup being measuring physical monitor response (measuring a reference image displayed on a monitor with a hardware device).

Xcode phones home a lot, and that should worry you

I’ve saved the worst for last. For some reason, Xcode phones home to appstoreconnect.apple.com every time I open an Xcode project. This also appears to be unnecessary, and I experience no problems after denying the connections in Little Snitch, so I do! I assume that the connections send identifying information about the Xcode project to Apple, otherwise why even make the connections when opening a project? And all of these connections from Xcode, to every domain, require login to your Apple Developer account, so Apple is definitely receiving identifying information about you in any case.

In effect, Xcode is a developer analytics collection mechanism, whether you like it or not, which I don’t.

↫ Jeff Johnson

If, at this point in time, you’re still surprised Apple doesn’t practice what it preaches, the fault lies pretty much entirely with you.

Anyway, it seems Xcode phones home to Apple quite a bit, which I doubt is all that unique in the world of commercial development environments. I honestly don't think Apple itself is doing anything particularly nefarious with this data, but the fact that it's collecting it in the first place should still make you think twice about using Xcode, especially if you're developing anything even remotely sensitive. What should really worry you is the fact that Tim Cook and Apple are close allies of Trump and his regime.

Xcode is required for iOS/iPadOS/etc. development, because the App Store requires applications be built and submitted with it. As such, every iOS developer is sending substantial amounts of data to Apple during development, which should be especially concerning for people outside of the US and people who aren’t straight white males; using Xcode requires an Apple Account, so Apple knows quite a bit about who is using it. With the breakdown of the rule of law in the US, all of this data is basically freely accessible to US authorities, and we’ve seen by now that people like self-styled genius Elon Musk don’t worry too much about pesky things like the rule of law.

If Musk wants this data, Apple will hand it over.

If you’re an Apple developer, you should stop and think every time you open Xcode. You’re sending your data straight to a hostile entity. If you’re claiming to use Apple products because of Apple’s privacy “promises”, Xcode’s data collection should be a huge worry for you.

Qualcomm gives OEMs the option of 8 years of Android updates

Starting with Android smartphones running on the Snapdragon 8 Elite Mobile Platform, Qualcomm Technologies now offers device manufacturers the ability to provide support for up to eight consecutive years of Android software and security updates. Smartphones launching on new Snapdragon 8 and 7-series mobile platforms will also be eligible to receive this extended support.

↫ Mike Genewich

I mean, good news of course, but Qualcomm has a history of making empty promises, so I'll believe it when I see it. Also note that this news doesn't mean every Snapdragon 8 Elite Android device will get eight years of updates – it just means OEMs are now able to offer such support, not that they'll actually do it. Considering it's usually the OEMs refusing to offer updates, I wonder just how big the actual impact of this news will be.

In any event, this includes both regular Android updates and two Android Common Kernel upgrades, which are required to meet this eight-year window. If you want to get into the nitty-gritty of Android and the Android Common Kernels, the official Android documentation has more details.

It is no longer safe to move our governments and societies to US clouds

We now have the bizarre situation that anyone with any sense can see that America is no longer a reliable partner, and that the entire US business world bows to Trump’s dictatorial will, but we STILL are doing everything we can to transfer entire governments and most of our own businesses to their clouds.

Not only is it scary to have all your data available to US spying, it is also a huge risk for your business/government continuity. From now on, all our business processes can be brought to a halt with the push of a button in the US. And not only will everything then stop, will we ever get our data back? Or are we being held hostage? This is not a theoretical scenario, something like this has already happened.

↫ Bert Hubert

The cold and harsh reality is that the alliance between the United States and Europe, the single most powerful alliance in human history, is over. Voters in the United States prefer that their country ally itself with the brutal and genocidal dictator of Russia instead of with the democratic and free nations of Europe. That's their choice to make, their consequences to face, and inevitably, their cross to bear.

Governments in Europe have not yet fully accepted that they can no longer rely on the United States for, well, anything. Whether it be existential, like needing to shore up defense spending and possibly unifying European militaries, or something more mundane, like which computer systems European governments use, the United States should be treated in much the same way as Russia or China. Europe has to fend for itself, spend on itself, and build for itself, instead of assuming that the Americans will come through on any “promise” they make. An unreliable partner like the US is a massive liability.

Bert Hubert is exactly right. European data needs to be stored within European borders. Just as we wouldn’t store our data on servers owned or controlled by the Chinese government, we shouldn’t be storing our data on servers owned or controlled by the US government. The general European public is already changing its buying habits – it’s time our governments do so too.

Sailfish OS 5.0 released for all supported devices

Sailfish OS 5.0, originally released late last year as part of the new Jolla C2 Community Phone, will now be pushed to all Sailfish OS devices. There have been several other minor releases since the original release, so if you’re running Sailfish OS on something other than the C2, you’re getting a release with some more bugfixes and improvements.

The main improvement is an upgrade to Gecko ESR91, with work underway to move to ESR102 – this is far from the latest release, but sticking to ESR releases seems like a wise idea for a smaller team. This release also upgrades the Android application support to Android 13 (API level 33), and adds the microG 0.3.6 enablers. There's WireGuard support now, call blocking, and a new landscape view for a variety of applications.

Incidentally, I was one of the first people to publish a review of the original Jolla Phone, exactly 11 years ago in 2014. Since I was such an early adopter, I have The First One version, and it just so happens I'm also one of the very few people who actually received the Jolla Tablet, after being an extremely early backer of that device, too. I still have both of them, and the Jolla Phone in particular I used as my main device for quite a while – half a year to a year, or so – before going back to Android.

I’m glad Sailfish OS is still going, and I’m definitely interested in giving this new release a go. I would need to buy the Jolla C2 Community Phone, and if finances allow, I may actually do so. In case you want to help, feel free to become an OSNews Patreon or make a one-time donation through Ko-Fi.

Microsoft improves Windows 11’s Start menu somewhat

Microsoft seems to be addressing some of the oddities with the Windows 11 Start menu, finally adding basic views that should’ve been in Windows 11 since the very start.

We’re introducing two new views to the “All” page in the Start menu: grid and category view. Grid and list view shows your apps in alphabetical order and category view groups all your apps into categories, ordered by usage. This change is gradually rolling out so you may not see it right away. We plan to begin rolling this out to Windows Insiders who are receiving updates based on Windows 11, version 24H2 in the Dev and Beta Channels soon.

↫ Amanda Langowski and Brandon LeBlanc

These new views are very welcome, but sadly, you still can't set them as the default view in the Start menu. You're still forced to use whatever the default view is and click on “All” to get to these new views, instead of having them available right as you open the Start menu. I messed around with Windows 11 on my XPS 13 9370 for a few weeks as I waited for a review laptop to arrive, and I couldn't last even a few hours before buying a replacement for the Start menu that allowed me to have a working, non-terrible menu I could configure to my own needs.

It's wild to me that such an iconic element of the Windows user interface is in such a dire, unliked state. We all know Windows seems to be in a bit of a rut, with Microsoft investing more in nonsense like “AI” and ads in the operating system than in actually listening to users and improving their experience. It's been roughly thirty years since the introduction of the Start menu, and the original one from Windows 95 is still superior to whatever's in Windows now.

Wild.

The DOS 3.3 SYS.COM bug hunt!

Last year somebody reported a problem with the DOS 3.3 SYS.COM command when used with NetDrive. They started with a valid FAT12 image, ran SYS.COM to make it bootable, and then they were not able to mount the image using NetDrive again. Running SYS.COM against the image had broken something.

Besides copying the operating system’s hidden files to the target drive letter, SYS.COM also copies some boot code into the first sector of the disk. In general it does not make sense to run it against a NetDrive image because you already had to boot DOS to mount the image, but it should not hurt anything. So I decided to have a look at what was going on.

↫ Michael Brutman

A good old classic bug hunt in some retro DOS code from roughly 1987. This one’s a bit more technical and in-depth than these things usually are, and quite a bit of it goes over my head, but I’m sure since most of you are much smarter than I am, you’ll do a better job understanding what’s going on.
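To make the “first sector” concrete: on a FAT12 disk, that sector opens with an x86 jump instruction, an OEM name, and the BIOS Parameter Block, followed by the boot code – the region SYS.COM rewrites. Here's a rough Python sketch of the standard layout (field offsets come from the well-known FAT boot sector format, not from Brutman's article, and the sample sector is fabricated for illustration):

```python
import struct

def parse_boot_sector(sector: bytes) -> dict:
    # Classic FAT12 boot sector / BIOS Parameter Block layout (DOS 3.x era).
    assert len(sector) >= 62
    return {
        "jump": sector[0:3].hex(),              # x86 jump over the BPB to boot code
        "oem_name": sector[3:11].decode("ascii", "replace").strip(),
        "bytes_per_sector": struct.unpack_from("<H", sector, 11)[0],
        "sectors_per_cluster": sector[13],
        "reserved_sectors": struct.unpack_from("<H", sector, 14)[0],
        "fat_count": sector[16],
        "root_entries": struct.unpack_from("<H", sector, 17)[0],
    }

# Build a fabricated 512-byte boot sector with plausible DOS 3.3-era values.
sector = bytearray(512)
sector[0:3] = b"\xeb\x3c\x90"            # JMP short + NOP
sector[3:11] = b"IBM  3.3"               # OEM name
struct.pack_into("<H", sector, 11, 512)  # bytes per sector
sector[13] = 1                           # sectors per cluster
struct.pack_into("<H", sector, 14, 1)    # reserved sectors
sector[16] = 2                           # number of FATs
struct.pack_into("<H", sector, 17, 224)  # root directory entries
print(parse_boot_sector(bytes(sector)))
```

If a tool writing boot code gets any of these fields or offsets wrong, the filesystem metadata no longer matches the disk – which is precisely the kind of corruption a bug hunt like this has to track down.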

Illumos on SPARC: possible, but problematic

While SPARC may no longer be supported by the main Illumos project, it still works and is still viable. This page brings together a variety of information regarding Illumos on SPARC, not necessarily limited to Tribblix.

↫ Tribblix website

It seems running Tribblix – and other Illumos-based distributions – on SPARC is still possible, but there are some serious limitations anyone who has tried to use even slightly older operating systems will be fairly familiar with. For instance, since there's no Rust for Illumos on SPARC, Firefox and other applications that use it are not available, and Tribblix in particular no longer builds Pale Moon (or LibreOffice). Rust is available on Solaris 11, though, so it may be possible to bring it to Illumos. In a similar vein, Go isn't available for SPARC either.

As far as hardware support goes, it's a bit of a mixed bag, as some systems that should work do, in fact, not, and even systems that do work run into a very familiar problem: graphics card support is a big issue. This is a problem plaguing X.org on any outdated or sidelined architecture, and it seems Illumos is also affected. Obviously, this greatly reduces the usefulness of Illumos on workstations, but it is less of an issue on servers. You'll run into the same problem when trying to run NetBSD, OpenBSD, or Linux on, say, PA-RISC hardware.

Of course, the problem is both a lack of people interested in and capable of contributing to keeping stuff running on older architectures, further spurred on by a dwindling supply of hardware available at reasonable prices. Sad, but there isn’t much that can be done about it.

Flathub safety: a layered approach from source to user

About two weeks ago we talked about why Fedora manages its own Flatpak repository, and why that sometimes leads to problems with upstream projects. Most recently, Fedora’s own OBS Flatpak was broken, leading to legal threats from the OBS project, demanding Fedora remove any and all branding from its OBS Flatpak. In response, Fedora’s outgoing project leader Matthew Miller gave an interview on YouTube to Brodie Robertson, in which Miller made some contentious claims about a supposed lack of quality control, security, and safety checks in Flathub.

These claims led to a storm of criticism directed at Miller, and since I follow quite a few people actively involved in the Flatpak and Flathub projects – despite my personal preference for traditional Linux packaging – I knew the criticism was warranted. As a more official response, Cassidy James Blaede penned an overview of all the steps Flathub takes and the processes it has in place to ensure the quality, security, and safety of Flathub and its packages.

With thousands of apps and billions of downloads, Flathub has a responsibility to help ensure the safety of our millions of active users. We take this responsibility very seriously with a layered, in-depth approach including sandboxing, permissions, transparency, policy, human review, automation, reproducibility, auditability, verification, and user interface.

Apps and updates can be fairly quickly published to Flathub, but behind the scenes each one takes a long journey full of safety nets to get from a developer’s source code to being used on someone’s device. While information about this process is available between various documentation pages and the Flathub source code, I thought it could be helpful to share a comprehensive look at that journey all in one place.

↫ Cassidy James Blaede

Flathub implements a fairly rigorous set of tests, both manual and automated, on every submission. There are too many to mention, but reading through the article, I'm sure most of you will be surprised by just how solid and encompassing the processes are. There are a few applications from major, trusted sources – think applications from someone like Mozilla – which have their own comprehensive infrastructure and testing routines, but other than those few, Flathub performs extensive testing on all submissions.

I’m not a particular fan of Flatpak for a variety of reasons, but I prefer to stick to facts and issues I verifiably experience when dealing with Flatpaks. I was definitely a bit taken aback by the callousness with which such a long-time, successful Fedora project leader like Miller threw Flathub under the bus, but at least one of the outcomes of all this is greater awareness of the steps Flathub takes to ensure the quality, security, and safety of the packages it hosts.

Nothing is or ever will be perfect, and I'm sure issues will occasionally arise, but it definitely seems like Flathub has its ducks in a row.

Microsoft is paywalling features in Notepad and Paint

There’s some bad news for Windows users who want to use all of the built-in features of the operating system and its integrated apps. Going forward, Microsoft is restricting features in two iconic apps, which you’ll need to unlock with a paid subscription.

The two apps in question? Notepad and Paint. […]

Windows Insiders were previously able to use these app features free of charge. However, Microsoft is now making it necessary to have a Microsoft 365 subscription for full use of these apps. You’ll see a new overlay that informs you of this before use. In our case, however, the respective features were simply grayed out.

↫ Laura Pippig at PCWorld

It's only the “AI” features that are being paywalled here, so I doubt many people will care. What does feel unpleasant, though, is that the features are visible but greyed out, instead of being absent entirely until you log into Windows with an account that has a Microsoft 365 subscription with the “AI” stuff enabled. Now it just feels like the operating system you paid good money for – and yes, you do actually pay for Windows – is incomplete and badgering you for in-app purchases. The gamification of Windows continues.

There's also a y in the day, so we have another Ars Technica article detailing the long list of steps you need to take to make Windows suck just a little less. The article is long, and seems to grow longer every time Ars, or any other site for that matter, posts an updated version. I installed Windows 11 on my XPS 13 9370 a few weeks ago to see just how bad things had gotten, and the amount of work I had to do to make Windows 11 even remotely usable was insane. Even the installation alone – including all the updates – took several hours, compared to a full installation of, say, Fedora KDE, which, including updates, takes like 10 minutes to install on the same machine.

I personally used WinScript to make the process of unfucking Windows 11 less cumbersome, and I can wholeheartedly recommend it to anyone else forced to use Windows 11. Luckily for me, a brand new laptop is being delivered today, without an operating system preinstalled. Can't wait to install Fedora KDE and be good to go in like 20 minutes after unboxing the thing.

Chromium Ozone/Wayland: the last mile stretch

Let's start with some context: the project consists of implementing, shipping, and maintaining native Wayland support in the Chromium project. Our team at Igalia has been leading the effort since it was first merged upstream back in 2016. For more historical context, there are a few blog posts and this amazing talk, by my colleagues Antonio Gomes and Max Ihlenfeldt, presented at last year's Web Engines Hackfest.

Especially due to the Lacros project, progress on Linux Desktop has been slower over the last few years. Fortunately, the scenario changed since last year, when a new sponsor came up and made it possible to address most of the outstanding missing features and issues required to move Ozone Wayland to the finish line.

↫ Nick Yamane

There's still quite a bit of work left to do, but a lot of progress has been made. As usual, Nvidia setups are problematic, which is a recurring theme for pretty much anything Wayland-related. Aside from the usual Nvidia problems, a lot of work has been done on improving and fixing fractional scaling, adding support for the text-input-v3 protocol, reimplementing tab dragging using the proper Wayland protocol, and a lot more.

They're also working on session management, which is very welcome for Chrome/Chromium users, as it will allow the browser to remember window positions properly between restarts. Work is also being done to get Chromium's interactive UI test infrastructure and code working with Wayland compositors, with a focus on GNOME/Mutter – no word on KDE's KWin, though.

I hope they get the last wrinkles worked out quickly. The most popular browser needs to support Wayland out of the box.

1972 UNIX V2 “beta” resurrected from old tapes

There are a number of backups of old DECtapes from Dennis Ritchie, which he gave to Warren Toomey in 1997. The tapes were eventually uploaded, and through analysis performed by Yufeng Gao, a lot of additional details, code, and software were recovered from them. A few days ago, Gao came back with the results of their analysis of two more tapes, and on them, they found something quite special.

Here’s an update on my work with the s1/s2 tapes – I’ve managed to get a working system out of them. The s1 tape is a UNIX INIT DECtape containing the kernel, while s2 includes most of the distribution files.

The s1 kernel is, to date, the earliest machine-readable UNIX kernel, sitting between V1 and V2. It differs from the unix-jun72 kernel in the following ways:

  • It supports both V1 and V2 a.outs out of the box, whereas the unmodified unix-jun72 kernel supports only V1.
  • The core size has been increased to 16 KiB (8K words), while the unmodified unix-jun72 kernel has an 8 KiB (4K word) user core.

On the other hand, its syscall table matches that of V1 and the unix-jun72 kernel, lacking all V2 syscalls. Since it aligns with V1 in terms of syscalls, has the V2 core size and can run V2 binaries, I consider it a “V2 beta”.

↫ Yufeng Gao

Getting this recovered version to run was a bit of a challenge, and only aap's PDP-11 emulator is capable of running it. To even get it to run in the first place, Gao had to perform some quite intricate steps, but eventually he managed to build an image that can be downloaded and booted on aap's PDP-11 emulator. The image in question, as well as some more details, can be found on the GitHub page.
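The core sizes in the quote are given in both bytes and words, and the two sets of figures line up: the PDP-11 is a 16-bit machine, so a word is two bytes, and 8K words is exactly 16 KiB. A trivial check:

```python
WORD_BYTES = 2  # the PDP-11 is a 16-bit machine

def words_to_kib(words: int) -> int:
    # Convert a PDP-11 word count to kibibytes.
    return words * WORD_BYTES // 1024

print(words_to_kib(8 * 1024))  # "V2 beta" user core: 16 KiB
print(words_to_kib(4 * 1024))  # unmodified unix-jun72 user core: 8 KiB
```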

Mozilla once again confirms it’s all about ads and “AI” now

We’ve recognized that Mozilla faces major headwinds in terms of both financial growth and mission impact. While Firefox remains the core of what we do, we also need to take steps to diversify: investing in privacy-respecting advertising to grow new revenue in the near term; developing trustworthy, open source AI to ensure technical and product relevance in the mid term; and creating online fundraising campaigns that will draw a bigger circle of supporters over the long run. Mozilla’s impact and survival depend on us simultaneously strengthening Firefox AND finding new sources of revenue AND manifesting our mission in fresh ways. That is why we’re working hard on all of these fronts.

↫ Mark Surman on the Mozilla blog

None of this is new to anyone reading OSNews. I’ve been quite vocal about Mozilla’s troubles and how it intends to address those troubles, and I’m incredibly worried and concerned about the increasing efforts by Mozilla to push advertising and “AI” to somehow find more revenue streams. I think this is the wrong direction to take, and will not make up for the seemingly inevitable loss of the Google search deal – and my biggest fear is that Firefox will get a lot worse before Mozilla realises advertising and “AI” just aren’t compatible with their mission and the morals and values of the last few remaining Firefox users.

I don’t have any answers either, of course. Making a competitive browser is hard, and clearly requires a lot of people and a lot of time. Donations are fickle, nobody will pay for a browser, and relying on corporate sponsoring in other forms than the Google search deal will just mean Firefox will become like Chrome even faster, with more and more exceptions for “allowed” ads and additional roadblocks for adblockers to try and work around. In essence, I strongly believe that it is impossible to both earn money from online ads and make a good browser. It’s one or the other – not both.

There’s basically no competition in the browser space, and if we lose Firefox, the only other option is Chrome and its various skins. Not a future I’m looking forward to.

NES86: x86 emulation on the NES

The goal of this project is to emulate an Intel 8086 processor and supporting PC hardware well enough to run the Embeddable Linux Kernel Subset (ELKS), including a shell and utilities. It should be possible to run other x86 software as long as it doesn’t require more than a simple serial terminal.

↫ NES86 GitHub page

Is this useful in any meaningful sense? No. Will this change the world? No. Does it have any other purpose than just being fun and cool? Nope.

None of that matters.

The generative AI con

Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they’re building “advanced artificial intelligence” that can take “human-like actions,” but when you look at any of this shit for more than two seconds it’s abundantly clear that it absolutely isn’t and absolutely can’t.

Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble. People like Marc Benioff claiming that “today’s CEOs are the last to manage all-human workforces” are doing so to pump up their stocks rather than build anything approaching a real product. These men are constantly lying as a means of sustaining hype, never actually discussing the products they sell in the year 2025, because then they’d have to say “what if a chatbot, a thing you already have, was more expensive?”

↫ Edward Zitron

Looking at the data and numbers, as Zitron did for this article, the conclusions are sobering and harsh for anyone still pushing the “AI” bubble. Products aren’t really getting any better, they’re not making any money because very few people are paying for them, conversion rates are abysmal, the reported user numbers don’t add up, the projections from “AI” companies are batshit insane, new products they’re releasing are shit, and the media are eating it up because they stand to benefit from the empty promises.

Generative AI is a financial, ecological and social time bomb, and I believe that it’s fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we’re all fucking morons.

This entire bubble has been inflated by hype, and by outright lies by people like Sam Altman and Dario Amodei, their lies perpetuated by a tech media that’s incapable of writing down what’s happening in front of their faces. Altman and Amodei are raising billions and burning our planet based on the idea that their mediocre cloud software products will somehow wake up and automate our entire lives.

↫ Edward Zitron

In a just world, these 21st century snake oil salesmen would be in prison.