
A new PowerPC board with support for Amiga OS 4 and MorphOS is on its way

The Amiga, a once-dominant force in the personal computer world, continues to hold a special place in the hearts of many. But with limited next-gen hardware available and dwindling AmigaOS4 support, the future of this beloved platform seemed uncertain. That is, until four passionate Dutch individuals, Dave, Harald, Paul, and Marco, decided to take matters into their own hands.

Driven by a shared love for the Amiga and a desire to see it thrive, they embarked on an ambitious project: to create a new, low-cost next-gen Amiga mainboard.

↫ Mirari’s Our Story page

Experience has taught me to be… Careful with news of new hardware from the Amiga world, but for once I have strong reasons to believe this one is actually the real deal. The development story – from the initial KiCad renders to the first five fully functional prototype boards – seems to be on track, software support for Amiga OS is in development, Linux is already working great, and as of today, MorphOS also boots on the board. It’s called the Mirari, and it’s very Dutch.

So, what are we looking at here? The Mirari is a micro-ATX board, sporting either a PowerPC T10x2 processor (2-4 e5500 cores) up to 1.5GHz or a PowerPC T2081 processor (4 dual-threaded e6500 cores with Altivec 2.0) up to 1.8GHz, both designed by NXP in The Netherlands. It supports DDR3 memory, PCIe 2.0 (3.0 for the 4x slot when using the T2081), SATA and NVMe, the usual array of USB 2.0 and 3.2 ports, audio jacks, Ethernet, and so on. No, this is not a massive powerhouse that can take on the latest x86 or ARM machines, but it’s more than enough to power Amiga OS 4 or MorphOS, and aims to be actually affordable.

Being at the prototype stage means they’re not for sale quite yet, but the fact they have a 100% yield so far and are comfortable enough to send one of the prototypes to a MorphOS developer, who then got MorphOS booting rather quickly, is a good sign. I also like the focus on affordability, which is often a problem in the Amiga world. I hope they make it to production, because I want one real bad.

Google’s “AI” is convinced Solaris uses systemd

Who doesn’t love a bug bounty program? Fix some bugs, get some money – you scratch my back, I pay you for it. The CycloneDX Rust (Cargo) Plugin decided to run one, funded by the Bug Resilience Program run by the Sovereign Tech Fund. That is, until “AI” killed it.

We received almost entirely AI slop reports that are irrelevant to our tool. It’s a library and most reporters didn’t even bother to read the rules or even look at what the intended purpose of the tool is/was.

This caused a lot of extra work which is why we decided to abandon the program. Thanks AI.

↫ Lars Francke

On a slightly related note, I had to search the web today because I’m having some issues getting OpenIndiana to boot properly on my mini PC. For whatever reason, starting LightDM fails when booting the live USB, and LightDM’s log is giving some helpful error messages. So, I searched for "failed to get list of logind seats" openindiana, and Google’s automatic “AI Overview” ‘feature’, which takes up everything above the fold and is therefore impossible to miss, confidently told me to check the status of the logind service… With systemctl.

We’ve automated stupidity.

Home Assistant deprecates Core and Supervised installation methods and 32bit systems

We are today officially deprecating two installation methods and three legacy CPU architectures. We always strive to have Home Assistant run on almost anything, but sometimes we must make difficult decisions to keep the project moving forward. Though these changes will only affect a small percentage of Home Assistant users, we want to do everything in our power to make this easy for those who may need to migrate.

↫ Franck Nijhof on the Home Assistant blog

Home Assistant is quite popular among the kind of people who read OSNews, and this news might actually hit our little demographic particularly hard. The legacy CPU architectures they’re removing support for won’t make much of a difference, as we’re talking 32bit x86 and 32bit ARM, although that last one does include versions 1 and 2 of the Raspberry Pi, which were quite popular at the time. Do check to make sure you’re not running your Home Assistant installation on one of those.

The bigger hit is the deprecation of two installation methods: Home Assistant Core and Home Assistant’s Supervised installation method. In Core, you’re running it in a Python environment, and with Supervised, you’re installing the various components that make up Home Assistant manually. Supervised is used to install Home Assistant on unsupported operating systems, like the various flavours of BSD. What this means is that if you are running Home Assistant on, say, OpenBSD, you’re going to have to migrate soon.

Apparently, these installation methods are not used very often, and are difficult for Home Assistant to support. These changes do not mean you can no longer perform these installation methods; it just means they are not supported, will be removed from the documentation, and new issues with these methods will not be accepted. Of course, anyone is free to take over hosting any documentation and guides, as Home Assistant is open source.

Home Assistant generally wants you to use Home Assistant OS, which is basically a Linux distribution designed to run Home Assistant, either on real hardware (which is what I do, on an x86 thin client) or in a container.

TrueNAS uses “AI” for customer support, and of course it goes horribly wrong

Let’s check in on TrueNAS, who apparently employ “AI” to handle customer service tickets. Kyle Kingsbury had to have dealings with TrueNAS’ customer support, and it was a complete trashfire of irrelevance and obviously wrong answers, spiraling all the way into utter lies. The “AI” couldn’t generate its way out of a paper bag, and for a paying customer who is entitled to support, that’s not a great experience.

Kingsbury concludes:

I get it. Support is often viewed as a cost center, and agents are often working against a brutal, endlessly increasing backlog of tickets. There is pressure at every level to clear those tickets in as little time as possible. Large Language Models create plausible support responses with incredible speed, but their output must still be reviewed by humans. Reviewing large volumes of plausible, syntactically valid text for factual errors is exhausting, time-consuming work, and every few minutes a new ticket arrives.

Companies must do more with less; what was once a team of five support engineers becomes three. Pressure builds, and the time allocated to review the LLM’s output becomes shorter and shorter. Five minutes per ticket becomes three. The LLM gets it mostly right. Two minutes. Looks good. Sixty seconds. Click submit. There are one hundred eighty tickets still in queue, and behind every one is a disappointed customer, and behind that is the risk of losing one’s job. Thirty seconds. Submit. Submit. The metrics do not measure how many times the system has lied to customers.

↫ Kyle Kingsbury

This time, it’s just about an upgrade process for a NAS, and the worst possible outcome “AI”-generated bullshit could lead to is a few lost files. Potentially disastrous on a personal level for the customer involved, but not exactly a massive problem. However, once we’re talking support for medical devices, medication, dangerous power tools, and worse, this could – and trust me, will – lead to injury and death.

TrueNAS, for its part, contacted Kingsbury after his blog post blew up, and assured him that “their support process does not normally incorporate LLMs”, and that they would investigate internally what, exactly, happened. I hope the popularity of Kingsbury’s post has jolted whoever is responsible for customer service at TrueNAS into realising that farming out customer service to text generators is a surefire way to damage your reputation.

Linux Mint forks GNOME’s Libadwaita to add theme support

On numerous occasions, we’ve talked about the issue facing non-GNOME GTK desktops, like Xfce, MATE, and Cinnamon: the popularity of Libadwaita. With more and more application developers opting for GNOME’s Libadwaita because of the desktop environment’s popularity, many popular GTK applications now look like GNOME applications instead of GTK applications, and they just don’t mesh well with traditional GTK desktops. Since Libadwaita is not themeable, applications that use it can’t really be made to feel at home on non-GNOME GTK desktops, unless said desktops adopt the entire GNOME design language, handing control over their GUI design to outsiders in the process.

The developers of Libadwaita, as well as the people behind GNOME, have made it very clear they do not intend to make Libadwaita themeable, and they are well within their rights to make that decision. I think it’s a bad decision – theming is a crucial accessibility feature – but it’s their project, their code, and their time, and I fully respect their decision, since it’s really not up to GNOME to worry about the other GTK desktops. So, what are the developers of Xfce, MATE, and Cinnamon supposed to do?

Well, how about taking matters into their own hands? Clement Lefebvre, the lead developer of Linux Mint and its Cinnamon desktop environment, has soft-forked Libadwaita to add theme support to the library. They’re calling it libAdapta.

libAdapta is libAdwaita with theme support and a few extra.

It provides the same features and the same look as libAdwaita by default.

In desktop environments which provide theme selection, libAdapta apps follow the theme and use the proper window controls.

↫ libAdapta’s GitHub page

The reason they consider libAdapta a “soft-fork” is that all it does is add theme support; they do not intend to deviate from Libadwaita in any other way, and will follow Libadwaita’s releases. It will use the current GTK3 theme, and will fall back to the default Libadwaita look and feel if the GTK3 theme in question doesn’t have a libadapta-1.0 directory. This seems like a transparent and smart way to handle it.

I doubt it will be long before libAdapta becomes a standard part of a lot of user guides online, GTK theme developers will probably add support for it pretty quickly, and perhaps a lot of non-GNOME GTK desktop environments will even ship it by default. It will make it a lot easier for, say, the developers of MATE to make use of the latest Libadwaita applications, without having to either accept a disjointed, inconsistent user experience, or adopt the GNOME design language hook, line, and sinker and lose all control over the user experience they wish to offer to their users.

I’m glad this exists now, and hope it will prove to be popular. I appreciate the pragmatic approach taken here – a relatively simple fork that doesn’t burden upstream, without long feature request threads where everybody shouts at each other and which needlessly spill over onto Fedi. This is how open source is supposed to work.

GhostBSD: from usability to struggle and renewal

This article isn’t meant to be technical. Instead, it offers a high-level view of what happened through the years with GhostBSD, where the project stands today, and where we want to take it next. As you may know, GhostBSD is a user-friendly desktop BSD operating system built with FreeBSD. Its mission is to deliver a simple, stable, and accessible desktop experience for users who want FreeBSD’s power without the complexity of manual setup. I started this journey as a non-technical user. I dreamed of a BSD that anyone could use.

↫ Eric Turgeon at the FreeBSD Foundation’s website

I’m very glad to see this article published on the website of the FreeBSD Foundation. I firmly believe that FreeBSD in particular has all the components to become an excellent alternative to desktop Linux distributions, especially now that the Linux world is moving fast with certain features and components not everyone likes. For those users, FreeBSD could serve as a valid alternative.

GhostBSD plays an important role in this. It offers not just an easily installable FreeBSD desktop, but also several tools to make managing such an installation easier, like in-house graphical user interfaces for managing Wi-Fi and other networks, backups, updates, installing software, and more. They also recently moved from UFS to ZFS, and intend to develop graphical tools to expose ZFS’s features to users.

GhostBSD can always use more contributors, so if you have the skills, interest, and time, do give it a go.

You are not needed

You want more “AI”? No? Well, too damn bad, here’s “AI” in your file manager.

With AI actions in File Explorer, you can interact more deeply with your files by right-clicking to quickly take actions like editing images or summarizing documents. Like with Click to Do, AI actions in File Explorer allow you to stay in your flow while leveraging the power of AI to take advantage of editing tools in apps or Copilot functionality without having to open your file. AI actions in File Explorer are easily accessible – to try out AI actions in File Explorer, just right-click on a file and you will see a new AI actions entry on the context menu that allows you to choose from available options for your file.

↫ Amanda Langowski and Brandon LeBlanc at the Windows Blogs

What, you don’t like it? There, “AI” that reads all your email and sifts through your Google Drive to barf up stunted, soulless replies.

Gmail’s smart replies, which suggest potential replies to your emails, will be able to pull information from your Gmail inbox and from your Google Drive and better match your tone and style, all with help from Gemini, the company announced at I/O.

↫ Jay Peters at The Verge

Ready to submit? No? Your browser now has “AI” integrated and will do your browsing for you.

Starting tomorrow, Gemini in Chrome will begin rolling out on desktop to Google AI Pro and Google AI Ultra subscribers in the U.S. who use English as their Chrome language on Windows and macOS. This first version allows you to easily ask Gemini to clarify complex information on any webpage you’re reading or summarize information. In the future, Gemini will be able to work across multiple tabs and navigate websites on your behalf.

↫ Josh Woodward

Mercy? You want mercy? You sure give up easily, but we’re not done yet. We destroyed internet search and now we’re replacing it with “AI”, and you will like it.

Announced today at Google I/O, AI Mode is now available to all US users. The focused version of Google Search distills results into AI-generated summaries with links to certain topics. Unlike AI Overviews, which appear above traditional search results, AI Mode is a dedicated interface where you interact almost exclusively with AI.

↫ Ben Schoon at 9To5Google

We’re going to assume control of your phone, too.

The technology powering Gemini Live’s camera and screen sharing is called Project Astra. It’s available as an Android app for trusted testers, and Google today unveiled agentic capabilities for Project Astra, including how it can control your Android phone.

↫ Abner Li at 9To5Google

And just to make sure our “AI” can control your phone, we’ll let it instruct developers how to make applications, too.

That’s precisely the problem Stitch aims to solve – Stitch is a new experiment from Google Labs that allows you to turn simple prompt and image inputs into complex UI designs and frontend code in minutes.

↫ Vincent Nallatamby, Arnaud Benard, and Sam El-Husseini

You are not needed. You will be replaced. Submit.

Jwno: a highly customisable tiling WM for Windows built with Janet

Jwno is a highly customizable tiling window manager for Windows 10/11, built with Janet and ❤️. It brings to your desktop magical parentheses power, which, I assure you, is not suspicious at all, and totally controllable.

↫ Jwno documentation

Yes, it’s a Lisp system, so open your bag of spare parentheses and start configuring and customising it, because you’re going to need it if you want to use Jwno to its fullest.

In general, Jwno works as a keyboard driven tiling window manager. When a new window shows up, it tries to transform the window so it fits in the layout you defined. You can then use customized key bindings to modify the layout or manipulate your windows, rather than drag things around using the mouse. But, since a powerful generic scripting engine is built-in, you can literally do anything with it.

↫ Jwno documentation

It’s incredibly lightweight, comes as a single executable, integrates perfectly with Windows’ native virtual desktop and window management features, has REPL support, and much more.

Making video games in 2025 (without an engine)

I genuinely believe making games without a big “do everything” engine can be easier, more fun, and often less overhead. I am not making a “do everything” game and I do not need 90% of the features these engines provide. I am very particular about how my games feel and look, and how I interact with my tools. I often find the default feature implementations in large engines like Unity so lacking I end up writing my own anyway. Eventually, my projects end up being mostly my own tools and systems, and the engine becomes just a vehicle for a nice UI and some rendering…

At which point, why am I using this engine? What is it providing me? Why am I letting a tool potentially destroy my ability to work when they suddenly make unethical and terrible business decisions? Or push out an update that they require to run my game on consoles, that also happens to break an entire system in my game, forcing me to rewrite it? Why am I fighting this thing daily for what essentially becomes a glorified asset loader and editor UI framework, by the time I’m done working around their default systems?

↫ Noel Berry

Interesting and definitely unique perspective, as I feel most game developers just pick one of the existing big engines and work from there. I’m not saying either option is wrong, but I do feel like the dependence on the popular engines can potentially harm the game industry as a whole, as it reduces diversity, drains valuable knowledge and expertise, and leaves developers – especially smaller ones – at the mercy of a few big players.

Perhaps not every game needs to be made in Unity or Unreal.

On the relationship between Qt and KDE

Volker Hilsheimer, chief maintainer of the Qt project, says he has learned lessons from the painful Qt 5 to Qt 6 transition, the importance of Qt Bridges for using Qt from any language, and the significance of the relationship with the Linux KDE desktop.

↫ Tim Anderson at Dev Class

Qt plays a significant role in the open source desktop world in particular, because it’s the framework KDE uses. Hilsheimer notes that KDE’s role in the Qt community is actually quite important: not only is it a source of people who learn how to use Qt and can thus contribute to the project, but KDE also tends to use the latest Qt versions, creating a lot of confidence among the wider Qt community to also adopt the latest versions.

The relationship between KDE and Qt is an interesting one, and sometimes leads to questions about the future availability of the open source edition of Qt, since the Qt Company licenses Qt under a dual-license structure (both open and proprietary). To avoid any uncertainty, KDE and Qt have an agreement that covers pretty much every possible scenario and which is worded to ensure the availability of Qt as an open source framework.

KDE, through the KDE Free Qt Foundation, has a number of rights and options to ensure the availability of Qt as an open source framework. I’m no lawyer, so I might get some of the details wrong, but the main points are that if the Qt Company ever decides to discontinue the open source edition of Qt, the KDE Free Qt Foundation has the right to release Qt under a BSD-style license within 12 months. The same applies to any additions to Qt which are not released as open source; they must be released under an open source license within 12 months of initial release. This agreement remains valid in the case of buyouts, mergers, or bankruptcies.

This agreement has existed in one form or another since the late ’90s, and has survived Qt being owned by Nokia and Digia, as well as various other organisational changes. Despite the issue of Qt’s ownership coming up every now and then, the agreement is pretty airtight, and considering its longevity there’s no reason to be worried about it at all.

Still, this structure is clearly more complex and less straightforward than, say, the status of GTK and its relationship to GNOME, so it’s not entirely unreasonable the issue comes up every now and then. I wonder if we’ll ever see this situation become less complex, without the need for special agreements. While it wouldn’t make a practical difference, it would make things less… Legalese.

Telum II at Hot Chips 2024: mainframe with a unique caching strategy

Mainframes still play a vital role today, providing extremely high uptime and low latency for financial transactions. Telum II is IBM’s latest mainframe processor, and is designed unlike any other server CPU. It only has eight cores, but runs them at a very high 5.5 GHz and feeds them with 360 MB of on-chip cache. IBM also includes a DPU for accelerating IO, along with an on-board AI accelerator. Telum II is implemented on Samsung’s leading edge 5 nm process node.

IBM’s presentation has already been covered by other outlets. Therefore I’ll focus on what I feel are Telum (II)’s most interesting features. DRAM latency and bandwidth limitations often mean good caching is critical to performance, and IBM has often deployed interesting caching solutions. Telum II is no exception, carrying forward a virtual L3 and virtual L4 strategy from prior IBM chips.

↫ Chester Lam at Chips and Cheese

If you’ve been keeping track, you can possibly deduce that I’m a bit of a sucker for IBM’s mainframes and big POWER machines. These Telum II processors are absolutely wild.

Two weeks with AR glasses and Linux on Android

I recently learned something that blew my mind; you can run a full desktop Linux environment on your phone.

[…]

That’s a graphical environment via X11 with real window management and compositing, Firefox comfortably playing YouTube (including working audio), and a status bar with system stats. It launches in less than a second and feels snappy.

↫ Hold the Robot

In and of itself, this is a neat trick most of us are probably aware of. Running a full Linux distribution on an Android phone using chroot is an awesome party trick, but I doubt many people take this concept to its logical conclusion by connecting it to a display, keyboard, and mouse and using it as their mobile workstation. Well, the author of this article did, and he took it one step further by replacing the display part of that setup with AR glasses.

The AR glasses in question were a pair of Xreal Air 2 Pro, which put a 120Hz 1080p display in front of your eyes using Sony micro-OLED panels. This will create the illusion of a 130″ screen with a 46° field of view, from a pair of glasses that honestly do not feel that much more massive than regular sunglasses or some of the thicker glasses frames some people like. I’m honestly kind of impressed this is possible these days.
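To get a rough sense of how the 130″ and 46° figures relate, here’s the basic viewing geometry – note that the roughly four-metre virtual viewing distance is my own assumption for illustration, not a number from the article:

\[ \text{diagonal} = 2d \tan\!\left(\tfrac{\theta}{2}\right) \approx 2 \times 4\,\mathrm{m} \times \tan(23^\circ) \approx 3.4\,\mathrm{m} \approx 134'' \]

which lands right around the quoted 130″ screen, treating the 46° figure as the diagonal field of view.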

Add in a keyboard and mouse, and you’ve got a mobile workstation that takes up very little space, especially since you’re carrying your phone with you at all times anyway. Of course, you have to be comfortable with using Linux – no Windows or macOS here – and the software side of the equation requires more setup and fiddling than I thought it would, but the end result is exactly like using a regular Linux desktop, but on your phone and a pair of AR glasses instead of on a laptop or desktop.

If I had the cash to throw around on fun side projects like this (you can help with that, actually, through Ko-Fi donations), I would totally order a pair of these Xreal glasses to try this out.

Microsoft releases WSL as open source, announces CLI text editor to replace the MS-DOS Editor

Today we’re very excited to announce the open-source release of the Windows Subsystem for Linux. This is the result of a multiyear effort to prepare for this, and a great closure to the first ever issue raised on the Microsoft/WSL repo: Will this be Open Source? · Issue #1 · microsoft/WSL.

That means that the code that powers WSL is now available on GitHub at Microsoft/WSL and open sourced to the community! You can download WSL and build it from source, add new fixes and features and participate in WSL’s active development.

↫ Pierre Boulay at the Windows Blogs

Windows Subsystem for Linux seems like a relatively popular choice for people who want a modern, Linux-based development environment but are stuck using Windows. I’m happy to see Microsoft releasing it as open source, which is no longer something to be surprised by at this point in time. It does leave one wondering how long it’s going to be before more parts of Windows are released as open source, since it could allow Microsoft’s leadership to justify some serious job cuts.

I honestly have no idea how close to the real thing Windows Subsystem for Linux is, and if it can actually fully replace a proper Linux installation, with all the functionality and performance that entails. I’m no developer and have no interest in Windows, so I’ve never actually tried it. I’d love to hear some experiences from all of you.

Aside from releasing WSL as open source, Microsoft also released a new command-line text editor – simply called Edit. It’s also open source, in its early stages, and is basically the equivalent of Nano. It turns out 32bit versions of Windows up until Windows 10 still shipped with the MS-DOS Editor, but obviously that one needed a replacement. It already has support for multiple documents, mouse support, and a few more basic features.

With how user-hostile Windows and macOS are, is it any wonder people long for computers from the ’80s and ’90s?

Every so often people yearn for a lost (1980s or so) era of ‘single user computers’, whether these are simple personal computers or high end things like Lisp machines and Smalltalk workstations. It’s my view that the whole idea of a 1980s style “single user computer” is not what we actually want and has some significant flaws in practice.

↫ Chris Siebenmann

I think the premise of this entire article is flawed, and borders on being a strawman argument. I honestly don’t think there are many people out there who genuinely and seriously want to use an ’80s home computer for all their computing tasks, but this article seems to think that there are. Virtually every single person expressing interest in and a desire for classic computers does so from a place of nostalgia, as a learning experience, or as a hobby. They’re definitely not interested in using any of those ’80s machines to do their banking or to collaborate with their colleagues.

Additionally, the problems and issues people have with modern computing platforms are not that they are too complex, but that they are no longer designed with the user in mind. Windows, macOS, iOS; they’re all first and foremost designed to extract money from you through ads, upsells, nag screens, and similar anti-user features, and it’s those things that people are sick of. Coincidentally, they are all things we didn’t have to deal with back in the ’80s and ’90s. In other words, remove the user-hostility from modern operating systems, and people wouldn’t complain about them so much.

Which seems rather obvious, doesn’t it?

It’s why using a Linux desktop like Fedora is such a breath of fresh air. There are no upsells for cloud storage or streaming services, no restrictions on what I can and cannot install to protect some multitrillion euro company’s revenue streams, no ads and nag screens infesting my operating system – it’s just an operating system waiting for me to tell it what to do, and then it does it. It’s wild how increasingly revolutionary that’s becoming.

Whenever I am forced to interact with Windows 11 or whatever the current version of macOS is, I feel such a profound and deep sadness for what they’ve become, and it seems only natural to me that this sadness is fueling a longing for back when these systems weren’t so user-hostile.

Render a Guitar Pro score in real time on Linux

Tuxguitar is a quite powerful application written in a mixture of Java / C. It is able to render a score in real time either via Fluidsynth or via pure MIDI. The development of Tuxguitar started in 2008 on SourceForge and after a halt in 2022, the project restarted on Github and is still actively developed.

The goal of this article is to try to render a score via Tuxguitar, and various other applications connected to Tuxguitar, via Jack or Pipewire-Jack. The score used throughout this article will be The Pursuit Of Vikings by the band Amon Amarth. It has 2 guitars, a bass and a drum track.

↫ Yann Collette at Fedora Magazine

If you’re into audio production and are considering using Linux for your audio needs, this article is a good starting point.

What were the MS-DOS programs that the moricons.dll icons were intended for?

Last time, we looked at the legacy icons in progman.exe. But what about moricons.dll?

Here’s a table of the icons that were present in the original Windows 3.1 moricons.dll file (in file order) and the programs that Windows used the icons for. As with the icons in progman.exe, these icons are mapped from executables according to the information in the APPS.INF file.

↫ Raymond Chen

These icons age like a fine wine. They’re clear, well-designed, easy to read, and make extraordinarily good use of the limited number of pixels available. Icons from Mac OS, BeOS, OS/2, and a few others from the same era also look timeless, and I wish modern designers would learn a thing or two from them.

“TCF” cookie consent popups violate GDPR; OSNews wants to stop using cookie popups too once we get enough Patreons

You may not have heard of the “Transparency & Consent Framework”, but you’ve most likely interacted with it, probably on a daily basis. The TCF is used by 80% of the internet to obtain “consent” from users to collect their data and share it among advertisers – you know, the cookie popups. In a landmark EU ruling yesterday, the TCF was declared to violate the GDPR, making it illegal.

For seven years, the tracking industry has used the TCF as a legal cover for Real-Time Bidding (RTB), the vast advertising auction system that operates behind the scenes on websites and apps. RTB tracks what Internet users look at and where they go in the real world. It then continuously broadcasts this data to a host of companies, enabling them to keep dossiers on every Internet user. Because there is no security in the RTB system it is impossible to know what then happens to the data. As a result, it is also impossible to provide the necessary information that must accompany a consent request.

↫ Irish Council for Civil Liberties

It’s no secret that cookie consent popups do not actually comply with the GDPR, and that they are not even necessary if you simply don’t do any cross-site sharing of personal information. It seems that this ruling confirms this in a legal sense, forcing the advertising industry to come up with a new, better system. On top of that, every individual company that participated in this scheme is now liable for fines and damages.

Complaints coordinated by Johnny Ryan, Director of Enforce at the Irish Council for Civil Liberties, prompted the ruling. He said:

Today’s court’s decision shows that the consent system used by Google, Amazon, X, Microsoft, deceives hundreds of millions of Europeans. The tech industry has sought to hide its vast data breach behind sham consent popups. Tech companies turned the GDPR into a daily nuisance rather than a shield for people.

↫ Irish Council for Civil Liberties

The problem here is not so much the clarity of applicable laws and regulations, but the cost and effectiveness of enforcement. If it takes years of expensive and complex legal proceedings to bring a company that violates the GDPR to heel, is it really an effective legal framework? Especially when you take into account just how many companies, big and small, there are that violate the GDPR?

OSNews uses a cookie popup and displays advertising, something we have to do to gain a little bit of extra income – but I’m not happy about it. Our ads don’t provide us with much income, perhaps about €150-200, but that’s still a decent enough chunk of our income pie that we need it. I would greatly prefer we turn off these ads altogether, but in order to be able to afford that, we’d need to up our Patreon income. OSNews Patreons get an ad-free version of OSNews.

That’s a long and slow process, especially with the current economic uncertainty making people reconsider their expenses. Disabling our ads altogether for everyone once we’re fully reader-funded is still my end goal, but until the world around us settles down a bit, that’s a little while off. If you want to speed this process up, you can become an OSNews Patreon and enjoy an ad-free OSNews today.

Rust celebrates tenth anniversary with Rust 1.87.0 release

I generally don’t pay attention to the releases of programming languages unless they’re notable for some reason or another, and I think this one qualifies. Rust is celebrating its tenth anniversary with a brand new release, Rust 1.87.0. This release adds anonymous pipes to the standard library, allows inline assembly to jump to labeled blocks in Rust code, and removes support for the i586 Windows target. Considering Windows 7 was the last Windows version to support i586, I’d say this is fair.
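For the curious, here’s a minimal sketch of what the newly stabilised anonymous pipe API looks like, assuming the std::io::pipe function from this release; treat it as an illustration rather than code taken from the Rust announcement:

```rust
use std::io::{pipe, Read, Write};

fn main() -> std::io::Result<()> {
    // std::io::pipe() returns a connected (reader, writer) pair,
    // much like the POSIX pipe(2) call, but safe and portable.
    let (mut reader, mut writer) = pipe()?;

    // Push some bytes into the write end...
    writer.write_all(b"hello from Rust 1.87")?;
    // ...and drop it so the read end sees end-of-file.
    drop(writer);

    // Read everything back out of the read end.
    let mut message = String::new();
    reader.read_to_string(&mut message)?;
    println!("{message}");

    Ok(())
}
```

The obvious use for something like this is plumbing data to and from child processes without reaching for platform-specific APIs.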

You can update to the new version using the rustup command, or wait until your operating system adds it to its repository if you’re using a modern operating system.

Accessibility on Linux sucks, but GNOME and KDE are making progress

Accessibility in the software world is a problem in general, but it’s an even bigger problem on open source desktops, as painfully highlighted by this excellent article detailing the utterly broken state of accessibility on Linux. Reading the article is soul-crushing as it starts to dawn on you just how bad the situation really is for those among us who require accessibility features, making it virtually impossible for them to switch to Linux.

This obviously has to change, and it just so happens that on both the GTK/GNOME and KDE sides, recent work on accessibility has delivered some valuable results. Starting with GTK and GNOME: GTK recently merged an AccessKit backend, which ships in GTK 4.18 and enables accessibility features when running GTK applications on Windows and macOS. On Linux, GTK still defaults to at-spi, but I’m sure this will change eventually too.

Another major improvement is the set of special keyboard shortcuts normally provided by the screen reader Orca. Support for these had been in the works for a while but remained incomplete; that work has now been finished, and the new shortcuts ship as part of GNOME 48. Accessibility support for GNOME Web has been greatly improved as well, and Elevado is a new tool that shows you what applications expose on the a11y bus. There’s a ton of additional, smaller changes too.

On the KDE side, a number of accessibility improvements have been implemented as part of the project’s goal of improving input handling. You can now use the numerical pad’s arrow keys to move the mouse cursor, there’s a new 3-finger gesture to invoke the desktop zoom accessibility feature, keyboard navigation in general has been improved in a wide variety of places in KDE, and there’s a whole bunch more. In addition, a number of financial grants have been given to developers working on accessibility in KDE, such as a project to make file management-related features – think open/save dialogs, Dolphin, and so on – fully accessible, and projects to make touchpad and screen gestures fully customisable.

Accessibility is never really “done” or “perfect”, but there’s definitely an increasing awareness among the two major open source desktops of just how important it is. A few confounding factors – like the switch to Wayland or the complicated history of audio on Linux – have actually hurt accessibility, and it’s only now that things are starting to look up again. However, as anyone with reduced vision or auditory problems can tell you, Linux and the open source desktop still have a very long way to go.

Xiaomi joins Google Pixel in making its own smartphone chip

Following rumors, Xiaomi today announced that it will launch its very own chip for smartphones later this month. The “XRING 01” is a chip that the company has apparently been working on for over 10 years now.

Details about the chip are scarce so far, but GizmoChina points to recent leaks that suggest the chip is built on a 4nm process through TSMC. The chip supposedly has a 1+3+4 layout and should lag just a bit behind Snapdragon 8 Elite and Dimensity 9400 in terms of raw horsepower, which sounds similar to Google’s work with its Tensor chips.

↫ Ben Schoon at 9To5Google

I like this. Having almost every Android device use Qualcomm’s chips is not good for fostering competition, and weakens Android OEMs’ bargaining position. If we have more successful SoC makers, consumers will not only gain access to a wider variety of chips that may better suit their needs, but it will also force Qualcomm to lower its prices, compete better, or both. Everybody wins.

Well, except Qualcomm, I guess.