Sony’s NEWS UNIX workstations

The first prototype was ready in just six months. By October 1986, the project was announced, and in January 1987, the first NEWS workstation, the NWS-800 series, officially launched. It ran 4.2BSD UNIX and featured a Motorola 68020 CPU. Its performance rivaled that of traditional superminicomputers, but with a dramatically lower price point ranging from ¥950,000 to ¥2.75 million (approximately $6,555 to $18,975 USD in 1987). Competing UNIX workstations typically cost closer to ¥10 million (around $69,000 USD). NEWS caught on quickly in universities and R&D labs, where cost-sensitive researchers needed real performance. The venture team had invested ¥400 million into development (about $2.76 million USD), and remarkably, they recouped those costs within just two months of launch.

That same year, Sony introduced a lower-cost version called POP NEWS (PWS-1550). With a GUI shell named NEWS Desk, a document sharing format called CDFF (Common Document File Format), and a focus on Japanese-language desktop publishing, POP NEWS aimed to make UNIX more accessible to general business users. Targeted at the desktop publishing market, it showed Sony’s desire to bridge consumer and professional segments in ways no other UNIX vendor was trying at the time.

↫ Obsolete Sony’s Newsletter

I’ve been fascinated by Sony’s NEWS workstations, and especially the NEWS-OS operating system, for a long time now. Real hardware is hard to find and prohibitively expensive, but some of these Sony NEWS workstations can be emulated through MAME. Sadly, as far as I can tell, you can only emulate NEWS-OS up to version 4.x, as I haven’t been able to find any information about emulating version 5.x and the final version, 6.x. If anyone knows anything about how to emulate these, if at all possible, please do share with the rest of us.

What’s interesting about Sony’s UNIX workstation efforts from the ’80s and ’90s is that they played an important role in the early development of the PlayStation. The early development kits for the PlayStation were modified NEWS workstations with added PlayStation hardware. To further add to the importance of the NEWS line for gaming, Nintendo used these machines to develop several influential and popular first-party SNES titles. That’s not surprising, considering Nintendo and Sony originally worked together on bringing a CD-ROM drive to the SNES – a project that would later morph into the PlayStation after Nintendo cancelled the agreement at the last second.

That time “AI” translation almost caused a fight between a doctor and my parents

What if you want to find out more about the PS/2 Model 280? You head out to Google, type it in as a query, and realise the little “AI” summary that’s above the fold is clearly wrong. Then you run the same query again, multiple times, and notice that each time, the “AI” overview gives a different wrong answer, with made-up details it’s pulling out of its metaphorical ass. Eventually, after endless tries, Google does stumble upon the right answer: there never was a PS/2 Model 280, and every time the “AI” pretended that there was, it made up the whole thing.

Google’s “AI” is making up a different type of computer out of thin air every time you ask it about the PS/2 Model 280, including entirely bonkers claims that it had a 286 with memory expandable up to 128MB of RAM (the 286 can’t address more than 16MB). Only about 1 in 10 times does the query yield the correct answer that there is no Model 280 at all.
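
For those wondering where that ceiling comes from: the 80286 has a 24-bit address bus, so its physical address space tops out at

$$2^{24}\ \text{bytes} = 16{,}777{,}216\ \text{bytes} = 16\ \text{MB},$$

nowhere near the 128MB the “AI” overview conjured up.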

An expert will immediately notice discrepancies in the hallucinated answers, and will, for example, follow the List of IBM PS/2 Models article on Wikipedia, which will very quickly establish that there is no Model 280.

The (non-expert) users who would most benefit from an AI search summary will be the ones most likely misled by it.

How much would you value a research assistant who gives you a different answer every time you ask, and although sometimes the answer may be correct, the incorrect answers look, if anything, more “real” than the correct ones?

↫ Michal Necasek at the OS/2 Museum

This is only about a non-existent model of PS/2, which doesn’t matter much in the grand scheme of things. However, what if someone is trying to find information about how to use a dangerous power tool? What if someone asks the Google “AI” about how to perform a certain home improvement procedure involving electricity? What if you try to repair your car following the instructions provided by “AI”? What if your mother follows the instructions listed in the leaflet that came with her new medication, which was “translated” using “AI”, and contains dangerous errors?

My father is currently undertaking a long diagnostic process to figure out what kind of age-related condition he has, which happens to involve a ton of tests and interviews by specialists. Since my parents are Dutch and moved to Sweden a few years ago, language is an issue, and as such, they rely on interpreters and my Swedish wife’s presence to overcome that barrier. A few months ago, though, they received the Swedish readout of an interview with a specialist, and pasted it into Google Translate to translate it to Dutch, since my wife and I were not available to translate it properly.

Reading through the translation, it all seemed perfectly fine; exactly the kind of fact-based, point-by-point readout doctors and medical specialists make to be shared with the patient, other involved specialists, and for future reference. However, somewhere halfway through, the translation suddenly said, completely out of nowhere: “The patient was combative and non-cooperative” (translated into English).

My parents, who can’t read Swedish and couldn’t double-check this, were obviously taken aback and very upset, since this weird interjection had absolutely no basis in reality. This readout covered a basic question-and-answer interview about symptoms, and at no point during the conversation with the friendly and kind doctor was there any strife or even a modicum of disagreement. Still, being in their ’70s and going through a complex and stressful diagnostic process in a foreign healthcare system, it’s not surprising my parents got upset.

When they shared this with the rest of our family, I immediately thought there must’ve been some sort of translation error introduced by Google Translate, because not only does the sentence in question not match my parents or the doctor at all, it would also be incredibly unprofessional. Even if the sentence were an accurate description of the patient-doctor interaction, it would never be shared with the patient in such a manner.

So, trying to calm everyone down by suggesting it was most likely a Google Translate error, I asked my parents to send me the source text so my wife and I could pore over it to discover where Google Translate went wrong, and if, perhaps, there was a spelling error in the source, or maybe some Swedish turn of phrase that could easily be misinterpreted even by a human translator. After poring over the documents for a while, we came to a startling conclusion that was so, so much worse.

Google Translate made up the sentence out of thin air.

This wasn’t Google Translate taking a sentence and mangling it into something that didn’t make any sense. This wasn’t a spelling error that tripped up the numbskull “AI”. This wasn’t a case of a weird Swedish expression that requires a human translator to properly interpret and localise into Dutch. None of the usual Google Translate limitations were at play here. It just made up a very confrontational sentence out of thin air, and dumped it in between two other sentences that were actually present in the source text.

Now, I can only guess at what happened here, but my guess is that the preceding sentence in the source readout was very similar to a ton of other sentences in medical texts ingested by Google’s “AI”, and in some of the training material, that sentence was followed by some variation of “patient was combative and non-cooperative”. Since “AI” here is really just glorified autocomplete, it did exactly what autocomplete does: it made shit up that wasn’t there, thereby almost causing a major disagreement between a licensed medical professional and a patient.

Luckily for the medical professional and the patient in question, we caught it in time, and my family had a good laugh about it, but the next person this happens to might not be so lucky. Someone visiting a foreign country and getting medicine prescribed there after an incident might run instructions through Google Translate, only for Google to add a bunch of nonsense to the translation that causes the patient to misuse the medication – with potentially lethal consequences.

And you don’t even need to add “AI” translation into the mix, as the IBM PS/2 Model 280 queries show – Google’s “AI” is entirely capable of making shit up even without having to overcome a language barrier. People are going to trust what Google’s “AI” tells them above the fold, and it’s unquestionably going to lead to injury and most likely death.

And who will be held responsible?

GNOME OS ready for more extensive testing

While it’s still early days and it’s not recommended for non-technical audiences, GNOME OS is now ready for developers and early adopters who know how to deal with occasional bugs (and importantly, file those bugs when they occur).

↫ Tobias Bernard

This is great news, and means GNOME OS is progressing nicely. I’m a proponent of this and KDE’s equivalent project, because it allows the people working on GNOME and KDE to really showcase their work in optimal, controlled conditions. While I don’t see myself switching to a Flatpak-based, immutable distribution because they tend to not align with what I want out of an operating system, they’ll serve as great showcases.

There is a risk associated with these projects, though, as I highlighted the last time we talked about them.

Once such “official” GNOME and KDE Linux distributions exist, the projects run a real risk of only really caring about how well GNOME and KDE work there, while not caring as much, or even at all, how well they run everywhere else. I’m not sure how they intend to prevent this from happening, but from here, I can already see the drama erupting. I hope this is something they take into consideration.

We’ll have to wait and see if my worries are founded or not.

Harpoom: of course the Apple Network Server can be hacked into running Doom

Of course you can run Doom on a $10,000+ Apple server running IBM AIX. Of course you can. Well, you can now.

Now, let’s go ahead and get the grumbling out of the way. No, the ANS is not running Linux or NetBSD. No, this is not a backport of NCommander’s AIX Doom, because that runs on AIX 4.3. The Apple Network Server could run no version of AIX later than 4.1.5 and there are substantial technical differences. (As it happens, the very fact it won’t run on an ANS was what prompted me to embark on this port in the first place.) And no, this is not merely an exercise in flogging a geriatric compiler into building Doom Generic, though we’ll necessarily do that as part of the conversion. There’s no AIX sound driver for ANS audio, so this port is mute, but at the end we’ll have a Doom executable that runs well on the ANS console under CDE and has no other system prerequisites. We’ll even test it on one of IBM’s PowerPC AIX laptops as well. Because we should.

↫ Cameron Kaiser

Excellent reading, as always, from Cameron Kaiser.

“My experience with Canonical’s interview process”

A short while ago, we talked about the hellish hiring process at a Silicon Valley startup, and today we’ve got another one. Apparently, it’s an open secret that the hiring process at Canonical is a complete dumpster fire.

I left Google in April 2024, and have thus been casually looking for a new job during 2024. A good friend of mine is currently working at Canonical, and he told me that it’s quite a nice company with a great working environment. Unfortunately, the internet is full of people who had a poor experience: Glassdoor shows that only 15% had a positive interview experience, famous internet denizens like sara rambled on the topic, reddit, hackernews, indeed and blind all say it’s terrible, … but the idea of being decently paid to do security work on a popular Linux distribution was really appealing to me.

↫ Julien Voisin

What follows is Byzantine and ridiculous, and all ultimately unnecessary since it turns out Mark Shuttleworth interviews applicants at the end of this horrid process and yays or nays people on vibes alone. You have to read it to believe it.

One interesting note that I do appreciate is that Voisin used their rights under the GDPR to force Canonical to hand over the feedback about their application, since the GDPR considers it personal information. Delicious.

Flatpak “not being actively developed anymore”

At the Linux Application Summit (LAS) in April, Sebastian Wick said that, by many metrics, Flatpak is doing great. The Flatpak application-packaging format is popular with upstream developers, and with many users. More and more applications are being published in the Flathub application store, and the format is even being adopted by Linux distributions like Fedora. However, he worried that work on the Flatpak project itself had stagnated, and that there were too few developers able to review and merge code beyond basic maintenance.

↫ Joe Brockmeier at LWN

After reading this article and the long list of problems the Flatpak project is facing, I can’t really agree that “Flatpak is doing great”. Apparently, Flatpak is in maintenance mode, with major problems left untouched because nobody is working on the big-ticket items anymore – a worrying state for a project that’s still facing a myriad of major issues.

For instance, Flatpak still uses PulseAudio instead of PipeWire, which means that if a Flatpak application needs permission to play audio, it also automatically gets permission to use the microphone. NVIDIA drivers also pose a big problem, network namespacing in Flatpak is “kind of ugly”, you can’t specify backwards-compatible permissions, and there are tons more problems. There are plenty of ideas and proposed solutions, but nobody to implement them, leaving Flatpak stagnant.

Now that Flatpak has been adopted by quite a few popular desktop Linux distributions, it doesn’t seem particularly great that it’s having such issues finding enough manpower to keep improving it. There’s a clear push, especially among developers of end-user focused applications, for everyone to use Flatpak, but is that push really a wise idea if the project has stagnated? Go into any thread where people discuss the use of Flatpaks, and there are bound to be people experiencing problems, inevitably followed by suggestions to use third-party tools to break the already rather porous sandbox.

Flatpak feels like a project that’s far from done or feature-complete, causing normal, everyday users to experience countless problems and issues. Reading straight from the horse’s mouth that the project has stagnated and isn’t being actively developed anymore is incredibly worrying.

The Copilot delusion

And the “copilot” branding. A real copilot? That’s a peer. That’s a certified operator who can fly the bird if you pass out from bad taco bell. They train. They practice. They review checklists with you. GitHub Copilot is more like some guy who played Arma 3 for 200 hours and thinks he can land a 747. He read the manual once. In Mandarin. Backwards. And now he’s shouting over your shoulder, “Let me code that bit real quick, I saw it in a Slashdot comment!”

At that point, you’re not working with a copilot. You’re playing Russian roulette with a loaded dependency graph.

You want to be a real programmer? Use your head. Respect the machine. Or get out of the cockpit.

↫ Jj at Blogmobly

The world has no clue yet that we’re about to enter a period of incredible decline in software quality. “AI” is going to do more damage to this industry than ten Electron frameworks and 100 managers combined.

The flip phone web: browsing with the original Opera Mini

Opera Mini was first released in 2005 as a web browser for mobile phones, with the ability to load full websites by sending most of the work to an external server. It was a massive hit, but it started to fade out of relevance once smartphones entered mainstream use.

Opera Mini still exists today as a web browser for iPhone and Android—it’s now just a tweaked version of the regular Opera mobile browser, and you shouldn’t use Opera browsers. However, the original Java ME-based version is still functional, and you can even use it on modern computers.

↫ Corbin Davenport

I remember using Opera Mini back in the day on my PocketPC and Palm devices. It wasn’t my main browser on those devices, but if some site I really needed was acting up, Opera Mini could be a lifesaver – as we all remember, the mobile web before the arrival of the iPhone was a trashfire. Interestingly enough, we’ve circled back to the mobile web being a trashfire, but at least we can block ads now to make it bearable.

Since Opera Mini is just a Java application, the client part of the equation will probably remain executable for a long time, but once Opera decides to close the server side of things, it will stop being useful. Perhaps one day someone will reverse-engineer the protocol and APIs, paving the way for a custom server we can all run as part of the retrocomputing hobby.

There’s always someone crazy and dedicated enough.

Apple said to switch to year to identify releases of its operating systems

The next Apple operating systems will be identified by year, rather than with a version number, according to people with knowledge of the matter. That means the current iOS 18 will give way to “iOS 26,” said the people, who asked not to be identified because the plan is still private. Other updates will be known as iPadOS 26, macOS 26, watchOS 26, tvOS 26 and visionOS 26.

Apple is making the change to bring consistency to its branding and move away from an approach that can be confusing to customers and developers. Today’s operating systems — including iOS 18, watchOS 12, macOS 15 and visionOS 2 — use different numbers because their initial versions didn’t debut at the same time.

↫ Mark Gurman at Bloomberg

OK.

The length of file names in early Unix

If you use Unix today, you can enjoy relatively long file names on more or less any filesystem that you care to name. But it wasn’t always this way. Research V7 had 14-byte filenames, and the System III/System V lineage continued this restriction until it merged with BSD Unix, which had significantly increased this limit as part of moving to a new filesystem (initially called the ‘Fast File System’, for good reasons). You might wonder where this unusual number came from, and for that matter, what the file name limit was on very early Unixes (it was 8 bytes, which surprised me; I vaguely assumed that it had been 14 from the start).

↫ Chris Siebenmann

I love these historical explanations for seemingly arbitrary limitations.
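
For the curious: the 14-byte limit falls straight out of the V7 on-disk directory format. A directory was just an array of fixed 16-byte entries, a 2-byte inode number followed by the name, so the name got the remaining 14 bytes and 32 entries packed neatly into a 512-byte disk block. Paraphrased from V7’s sys/dir.h (not a verbatim copy):

#define DIRSIZ 14                  /* maximum file name length */

struct direct {                    /* one 16-byte directory entry */
    unsigned short d_ino;          /* 2-byte inode number; 0 marks a free slot */
    char           d_name[DIRSIZ]; /* file name, not NUL-terminated when exactly 14 bytes long */
};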

Microsoft unveils Microsoft’s competitor to Microsoft’s winget

One of the ways in which Windows (and macOS) trails behind the Linux and BSD world is the complete lack of centralised, standardised application management. Windows users still have to scour the web to download sketchy installers straight from the Windows 95 days, amassing a veritable collection of updaters in the process, which either continuously run in the background, or annoy you with update pop-ups when you launch an application. It’s an archaic nightmare users of supposedly modern computers should not have to be dealing with.

Microsoft has tried to remedy this, but in true Microsoft fashion, it did so halfheartedly, for instance with the Windows Package Manager, better known as winget. Instead of building an actual package manager, Microsoft basically just created a glorified script that downloads the same installers you download manually, and runs them in unattended mode in the background – it’s a download manager masquerading as a proper application management framework.

To complicate matters, winget is only available as a command-line tool, meaning 99% of Windows users won’t be using it. There’s no graphical frontend in Windows, and it’s not integrated into Windows Update, so even if you strictly use winget to install your applications – which will be hard, as there are only about 1,400 applications that use it – you still don’t have a centralised place to upgrade your entire operating system and all of its applications.

It’s a mess, and Microsoft intends to address it. Again. This time, they’re finally doing what should have been the goal from the start: allowing applications to be updated through Windows Update.

Built on the Windows Update stack, the orchestration platform aims to provide developers and product teams building apps and management tools with an API for onboarding their update(s) that supports the needs of their installers. The orchestrator will coordinate across all onboarded products that are updated on Windows 11, in addition to Windows Update, to provide IT admins and users with a consistent management plane and experience, respectively.

↫ Angie Chen on the Windows IT Pro Blog

Sounds good, but hold on a minute – “orchestration platform”? So this isn’t the existing winget, but integrated into Windows Update, where it should’ve been all along? No, what we’re looking at here is Microsoft’s competitor to Microsoft’s winget inside Microsoft’s Windows Update, oh and there’s also the Windows Store. In other words, once this rolls out, it’ll be yet another way to manage applications, existing inside Windows Update, and alongside winget (and the Windows Store).

The way it works is surprisingly similar to winget: application developers can register an update executable with the orchestrator, and the orchestrator will periodically run this update executable to check for updates. In other words, this looks a hell of a lot like a mere download manager for existing updaters. What it’s definitively not, however, is winget – so if you’re a Windows application developer, you now not only have to register your application to work with winget, but also register it with this new orchestrator to work with Windows Update.
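
To make concrete why I keep calling this a download manager for existing updaters: conceptually, the orchestrator is little more than a registry of per-application update executables that get run on a schedule. Here is a deliberately simplified, entirely hypothetical sketch of that pattern – none of these names correspond to Microsoft’s actual APIs:

/* Hypothetical sketch of the "register an updater, run it periodically"
   pattern described above -- not Microsoft's actual orchestration API. */
#include <chrono>
#include <cstdlib>
#include <string>
#include <thread>
#include <vector>

struct RegisteredApp {
    std::string name;
    std::string updater_command;   /* the vendor's own updater, e.g. "exampleapp-updater.exe /check" */
};

int main()
{
    std::vector<RegisteredApp> registry = {
        { "ExampleApp", "exampleapp-updater.exe /silent" },   /* placeholder entry */
    };

    for (;;) {
        for (const auto& app : registry)
            std::system(app.updater_command.c_str());          /* just runs the vendor's own updater */
        std::this_thread::sleep_for(std::chrono::hours(24));   /* check again tomorrow */
    }
}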

This thing is so incredibly Microsoft.

Genode OS Framework 25.05 released

It’s been 9 years since we disrupted Genode’s API. Back then, we changed the execution model of components, consistently applied the dependency-injection pattern to shun global side effects, and largely removed C-isms like format strings and pointers. These changes ultimately paved the ground for sophisticated systems like Sculpt OS.

Since then, we identified several potential areas for further safety improvements, unlocked by the evolution of the C++ core language and inspired by the popularization of sum types for error propagation by the Rust community. With the current release, we uplift the framework API to foster a programming style that leaves no possible error condition unconsidered, reaching for a new level of rock-solidness of the framework. Section The Great API hardening explains how we achieved that. The revisited framework API comes in tandem with a new tool chain based on GCC 14 and binutils 2.44.

↫ Genode OS Framework 25.05 release notes

This new release also brings a lot of progress on the integration of the TCP/IP stacks ported from Linux and lwIP, improvements to the Intel and VESA drivers, better power management for their Intel GPU multiplexer, and more. They’ve also added support for touchscreen gestures, file modification times now have millisecond resolution, and support for the seL4 kernel has been improved. Many of these changes will find their way into the next Sculpt OS release, or, in some cases, have already been added.
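
For those unfamiliar with the “sum types for error propagation” idea the release notes mention: instead of exceptions or error codes that are easy to ignore, a function returns a value that is either a result or an error, and the caller has to check which one it got before it can use the result. A tiny, generic C++23 illustration of the pattern using std::expected – this is not Genode’s actual API, which defines its own types:

#include <cstdio>
#include <expected>
#include <string>

enum class Read_error { file_missing, permission_denied };

/* Returns either the configuration string or a Read_error --
   both possible outcomes are visible in the function's type. */
std::expected<std::string, Read_error> read_config(bool exists)
{
    if (!exists)
        return std::unexpected(Read_error::file_missing);
    return std::string("verbose=yes");
}

int main()
{
    auto config = read_config(false);
    if (!config) {                      /* the error case must be handled before the value can be used */
        std::puts("falling back to defaults");
        return 0;
    }
    std::printf("config: %s\n", config->c_str());
}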

10biForthOS: a full 8086 OS in 46 bytes

An incredibly primitive operating system, with just two instructions: compile (1) and execute (0).

It is heavily inspired by Frank Sergeant’s 3-Instruction Forth and is a strip-down exercise following up on SectorForth, SectorLisp, SectorC (the C compiler used here), and milliForth.

Here is the full OS code in 46 bytes of 8086 assembly opcodes.

↫ 10biForthOS sourcehut page

Yes, the entire operating system easily fits right here, inside an OSNews quote block:

50b8 8e00 31d8 e8ff 0017 003c 0575 00ea
5000 3c00 7401 eb02 e8ee 0005 0588 eb47
b8e6 0200 d231 14cd e480 7580 c3f4

↫ 10biForthOS sourcehut page

How do you actually use this operating system? Once the operating system is loaded at boot, it listens on the serial port for instructions. You can then send the instruction 1 followed by a byte of an assembly opcode which will be compiled into a fixed location in memory. The instruction 0 will then execute the program. There’s also a version with keyboard support, as well as a much bigger version compiled for x86-64.
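
To make that a bit more concrete, here’s a rough host-side sketch of what feeding it a program over the serial line could look like, going purely by the description above. The device path and opcode bytes are placeholders, serial-port setup (baud rate, raw mode) is omitted, and I haven’t tested this against the real thing:

/* Push a few opcodes to 10biForthOS over a serial line.
   Per the project's description: 0x01 <byte> compiles one byte into a
   fixed memory location, 0x00 executes the compiled program. */
#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

int main()
{
    int fd = open("/dev/ttyUSB0", O_WRONLY | O_NOCTTY);    /* placeholder device path */
    if (fd < 0)
        return 1;

    std::vector<uint8_t> program = { 0x90, 0x90, 0xC3 };    /* nop, nop, ret -- a do-nothing example */

    for (uint8_t opcode : program) {
        uint8_t compile[2] = { 0x01, opcode };              /* "compile" instruction plus one opcode byte */
        if (write(fd, compile, 2) != 2)
            return 1;
    }

    uint8_t execute = 0x00;                                 /* "execute" instruction */
    if (write(fd, &execute, 1) != 1)
        return 1;

    close(fd);
    return 0;
}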

Something like this inevitably raises the question of what an operating system really is, and whether this extremely limited and minimalist thing can be considered one. I’m not going to go too deep into this existential discussion, mostly because I land firmly on the side that this is indeed just as much an operating system as, say, Windows or MorphOS. This bit of code, when booted, allows you to operate the system.

It’s an operating system.

Signal uses Windows’ DRM to counter Recall snooping

Microsoft’s Recall feature, which takes screenshots of the contents of your screen every few seconds, saves them, and then runs text and image recognition to extract information from them, has had a rocky start. Even now that it’s out there and Microsoft deems it ready for everyone to use, it has huge security and privacy gaps, and one of them is that applications that contain sensitive information, such as the Windows Signal application, cannot ‘opt out’ of having their contents scraped.

Signal was rather unhappy with this massive privacy risk, and decided to do something about it. It’s called screen security, and is Windows-only because it’s specifically designed to counter Windows Recall.

If you attempt to take a screenshot of Signal Desktop when screen security is enabled, nothing will appear. This limitation can be frustrating, but it might look familiar to you if you’ve ever had the audacity to try and take a screenshot of a movie or TV show on Windows. According to Microsoft’s official developer documentation, setting the correct Digital Rights Management (DRM) flag on the application window will ensure that “content won’t show up in Recall or any other screenshot application.” So that’s exactly what Signal Desktop is now doing on Windows 11 by default.

↫ Joshua Lund on the Signal blog
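
For reference, the mechanism Signal is leaning on here is the window display affinity setting in Win32. Signal Desktop is an Electron application, so it presumably flips this through Electron’s content protection option, but underneath it comes down to a single API call, roughly like this – a bare-bones sketch, not Signal’s actual code:

/* Exclude a window from screenshots, screen recording, and Recall by
   setting its display affinity (Windows 10 2004 or later). */
#include <windows.h>

void enable_screen_security(HWND window)
{
    /* WDA_EXCLUDEFROMCAPTURE: the window renders normally on the monitor,
       but is left out of any capture. WDA_NONE switches protection off again. */
    if (!SetWindowDisplayAffinity(window, WDA_EXCLUDEFROMCAPTURE)) {
        /* Older Windows releases only support WDA_MONITOR, which blacks the
           window out in captures instead of omitting it. */
        SetWindowDisplayAffinity(window, WDA_MONITOR);
    }
}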

Microsoft cares more about enforcing the rights of massive corporations than it does about respecting the privacy of its users. As such, everything is in place in Windows to ensure neither you nor Recall can take screenshots of, I don’t know, the Bee Movie, but nothing has been put in place to protect your private and sensitive messages in a service like Signal. This really tells you all you need to know about who Microsoft truly cares about, and it sure as hell isn’t you, the user.

What Signal is doing is absolutely brilliant. By turning Windows’ digital rights management features against Recall to protect the privacy of Signal users, Signal has made it impossible – or at least very hard – for Microsoft to address this. Of course, this also means that taking screenshots of the Signal application on Windows for legitimate purposes is more cumbersome now, but since you can temporarily turn screen security off to take a screenshot, it’s not impossible.

I almost want other Windows developers to employ this same trick, just to make Recall less valuable, but that’s probably not a great idea considering how much it would annoy users just trying to take legitimate screenshots. My uneducated guess is that this is exactly why Microsoft isn’t providing developers with the kind of fine-grained controls to let Recall know what it can and cannot take screenshots of: Microsoft must know Recall is a feature for shareholders, not for users, and that users would ask developers to opt out of any Recall snooping if such APIs were officially available.

Microsoft wants to make it as hard as possible for applications to opt out of being sucked into the privacy black hole that is Recall, but in doing so, it might be pushing developers to use DRM to achieve the same goal. Just delicious.

Signal also signed off with a scathing indictment of “AI” as a whole.

“Take a screenshot every few seconds” legitimately sounds like a suggestion from a low-parameter LLM that was given a prompt like “How do I add an arbitrary AI feature to my operating system as quickly as possible in order to make investors happy?” — but more sophisticated threats are on the horizon.

The integration of AI agents with pervasive permissions, questionable security hygiene, and an insatiable hunger for data has the potential to break the blood-brain barrier between applications and operating systems. This poses a significant threat to Signal, and to every privacy-preserving application in general.

↫ Joshua Lund on the Signal blog

Heed this warning.

plwm: X11 window manager written in Prolog

plwm is a highly customizable X11 dynamic tiling window manager written in Prolog.

Main goals of the project are: high code & documentation quality; powerful yet easy customization; covering most common needs of tiling WM users; and to stay small, easy to use and hack on.

↫ plwm GitHub page

Tiling window managers are a dime a dozen, but the ones using a unique or uncommon programming language do tend to stand out.

Linux 6.15 released

Highlights of Linux 6.15 include Rust support for hrtimer and ARMv7, a new setcpuid= boot parameter for x86 CPUs, support for sched_ext to count and report internal events, x86 Intel and AMD PMU enhancements, nested virtualization support for VGICv3 on ARM, and support for emulating FEAT_PMUv3 on Apple Silicon.

↫ Marius Nestor at 9To5Linux

On top of these highlights, there’s also a ton of other changes, from the usual additions of new drivers, to better support for RISC-V, and so much more.

A new PowerPC board with support for Amiga OS 4 and MorphOS is on its way

The Amiga, a once-dominant force in the personal computer world, continues to hold a special place in the hearts of many. But with limited next-gen hardware available and dwindling AmigaOS4 support, the future of this beloved platform seemed uncertain. That is, until four Dutch passionate individuals, Dave, Harald, Paul, and Marco, decided to take matters into their own hands.

Driven by a shared love for the Amiga and a desire to see it thrive, they embarked on an ambitious project: to create a new, low-cost next-gen Amiga mainboard.

↫ Mirari’s Our Story page

Experience has taught me to be… careful with news of new hardware from the Amiga world, but for once I have strong reasons to believe this one is actually the real deal. The development story – from the initial KiCad renders to the first five, fully functional prototype boards – seems to be on track, software support for Amiga OS is in development, Linux is working great already, and as of today, MorphOS also boots on the board. It’s called the Mirari, and it’s very Dutch.

So, what are we looking at here? The Mirari is a micro-ATX board, sporting either a PowerPC T10x2 processor (2-4 e5500 cores) up to 1.5GHz or a PowerPC T2081 processor (4 dual-threaded e6500 cores with Altivec 2.0) up to 1.8GHz, both designed by NXP in The Netherlands. It supports DDR3 memory, PCIe 2.0 (3.0 for the 4x slot when using the T2081), SATA and NVMe, the usual array of USB 2.0 and 3.2 ports, audio jacks, Ethernet, and so on. No, this is not a massive powerhouse that can take on the latest x86 or ARM machines, but it’s more than enough to power Amiga OS 4 or MorphOS, and aims to be actually affordable.

Being at the prototype stage means they’re not for sale quite yet, but the fact they have a 100% yield so far and are comfortable enough to send one of the prototypes to a MorphOS developer, who then got MorphOS booting rather quickly, is a good sign. I also like the focus on affordability, which is often a problem in the Amiga world. I hope they make it to production, because I want one real bad.

Google’s “AI” is convinced Solaris uses systemd

Who doesn’t love a bug bounty program? Fix some bugs, get some money – you scratch my back, I pay you for it. The CycloneDX Rust (Cargo) Plugin decided to run one, funded by the Bug Resilience Program run by the Sovereign Tech Fund. That is, until “AI” killed it.

We received almost entirely AI slop reports that are irrelevant to our tool. It’s a library and most reporters didn’t even bother to read the rules or even look at what the intended purpose of the tool is/was.

This caused a lot of extra work which is why we decided to abandon the program. Thanks AI.

↫ Lars Francke

On a slightly related note, I had to search the web today because I’m having some issues getting OpenIndiana to boot properly on my mini PC. For whatever reason, starting LightDM fails when booting the live USB, and LightDM’s log is giving some helpful error messages. So, I searched for "failed to get list of logind seats" openindiana, and Google’s automatic “AI Overview” ‘feature’, which takes up everything above the fold and is thus impossible to miss, confidently told me to check the status of the logind service… With systemctl.

We’ve automated stupidity.