Zig’s lovely syntax

It’s a bit of a silly post, because syntax is the least interesting detail about the language, but, still, I can’t stop thinking how Zig gets this detail just right for the class of curly-braced languages, and, well, now you’ll have to think about that too.

On the first glance, Zig looks almost exactly like Rust, because Zig borrows from Rust liberally. And I think that Rust has great syntax, considering all the semantics it needs to express (see “Rust’s Ugly Syntax”). But Zig improves on that, mostly by leveraging simpler language semantics, but also through some purely syntactical tasteful decisions.

↫ Alex Kladov

Y’all know full well I know very little about programming, so there isn’t much interesting stuff I can add here. The only slightly related frame of reference I have is how some languages – as in, the ones we speak – have a pleasing grammar or spelling, and how even when you can’t actually speak a language, some of them intrinsically look attractive and pleasing when you see them in written form.

I mean, you can’t look at Scottish Gaelic and not notice it just looks pleasing:

Dh’ éirich mi moch air mhaduinn an-dé
‘S gun ghearr mi’n ear-thalmhainn do bhrìgh mo sgéil
An dùil gu ‘m faicinn fhéin rùn mo chléibh
Och òin gu ‘m faca ‘s a cùl rium féin.

↫ Mo Shùil Ad Dhèidh by Donald MacNicol

I have no idea if programmers can look at programming languages the same way, but I’ve often been told there’s more overlap between programming languages and regular human languages than many people think. As such, it wouldn’t surprise me if some programming languages look really pleasing to programmers, even if they can’t use them because they haven’t really learned them yet.

Debian 13 released

Debian has released its latest version, Debian 13 “trixie”.

This release contains over 14,100 new packages for a total count of 69,830 packages, while over 8,840 packages have been removed as obsolete. 44,326 packages were updated in this release. The overall disk usage for trixie is 403,854,660 kB (403 GB), and is made up of 1,463,291,186 lines of code.

↫ Debian 13 release announcement

I’m never quite sure what to say about new Debian releases, as Debian isn’t exactly the kind of distribution to make massive, sweeping changes or introduce brand new technologies before anyone else. That being said, Debian is a massively important cornerstone of the Linux world, forming the base for many of the most popular Linux distributions.

At some point, you’re going to deal with Debian 13.

AOL announces it’s ending its dial-up internet service

AOL routinely evaluates its products and services and has decided to discontinue Dial-up Internet. This service will no longer be available in AOL plans. As a result, on September 30, 2025 this service and the associated software, the AOL Dialer software and AOL Shield browser, which are optimized for older operating systems and dial-up internet connections, will be discontinued.

↫ AOL support document

I’ve seen a few publications writing derisively about this, surprised dial-up internet is still a thing, but I think that’s misguided and definitely a bit elitist. In a country as large as the United States, there’s bound to be quite a few very remote and isolated places where dial-up might be the best or even only option to get online. On top of that, I’m sure there are people out there who use the internet so sparingly that dial-up may suit their needs just fine.

I genuinely hope this move by AOL doesn’t cut a bunch of people off of the internet without any recourse, especially if it involves, say, isolated and lonely seniors for whom such a change may be too difficult to handle. Access to the internet is quite crucial in the modern world, and we shouldn’t be ridiculing people just because they don’t have access to super high-speed broadband.

Windows Settings and Control Panel: 13 years and counting

Remember the old Windows Control Panel? It’s still there, in your up-to-date Windows 11 installation, as a number of settings still cannot be changed in the “new” Settings application. In the latest Insider Preview for Windows 11 in the Dev Channel, Microsoft moved another long list of settings from the Control Panel to Settings.

The focus is very much on time and language this time around. A whole slew of more niche features related to the clock, such as adding additional clocks to the Notification Center or changing your time synchronisation server, can now be done in Settings. Format settings for time and date have also been moved into Settings, which is a welcome change for anyone dealing with mysterious cases where Windows somehow insists on using anything but the sane 24-hour clock.

As for language settings, things like enabling Unicode UTF-8 support are now available in Settings as well, and you can now copy existing language and region settings from one user to another, and to the welcome screen. Lastly, keyboard settings like the character repeat/delay rate and the cursor blink rate are now also in Settings.

It’s absolutely wild to me that Windows still has two separate places to change settings, and that countless settings dialogs still look like they came straight from Windows 95. It’s a really fractured user experience, and one that’s been in place since the release of Settings in Windows 8, 13 years ago.

The curve Windows is graded on compared to its competitors has basically become a circle. People write entire treatises about how Linux is not ready for the desktop because of some entirely arbitrary and nebulous reasons, while at the same time Windows users are served a hodgepodge of 30 years of random cruft without anyone even so much as raising an eyebrow.

I’ve long argued that if you truly take a step back and look at the landscape of desktop operating systems today, and you were to apply the same standards to all of them, there’s no chance in hell Windows can be considered “ready for the desktop”. The fact that Windows has had two competing settings applications for 13 years now with no end in sight is just one facet of that conclusion, but definitely an emblematic one.

LVFS to nudge large corporations to fund and contribute to the project

The Linux Vendor Firmware Service (LVFS), which provides device makers and OEMs with the infrastructure to upload and distribute firmware files to Linux users, as well as support during this process, is taking bold steps to ensure large companies contribute to the project. LVFS is the infrastructure behind fwupd, the tool users actually use to download and install firmware updates.

While Richard Hughes, the maintainer of LVFS, is employed by Red Hat to work on the project, and the Linux Foundation covers the hosting costs, there just aren’t enough people and resources dedicated to the project. They’re now taking measures to address this.

This year there will be a fair-use quota introduced, with different sponsorship levels having a different quota allowance. Nothing currently happens if the quota is exceeded, although there will be additional warnings asking the vendor to contribute. The “associate” (free) quota is also generous, with 50,000 monthly downloads and 50 monthly uploads. This means that almost all the 140 vendors on the LVFS should expect no changes.

Vendors providing millions of firmware files to end users (and deriving tremendous value from the LVFS…) should really either be providing a developer to help write shared code, design abstractions and review patches (like AMD does) or allocate some funding so that we can pay for resources to take action for them. So far no OEMs provide any financial help for the infrastructure itself, although two have recently offered — and we’re now in a position to “say yes” to the offers of help.

↫ Richard Hughes

In other words, functionality is going to be reduced for vendors who make extensive use of LVFS, but who don’t provide any financial or development support. I think this is an excellent incentive to get corporations who effectively freeload off a free infrastructure without providing anything in return to step up. It seems the measures are explicitly designed to target only the very few major users of LVFS, leaving the smaller companies unaffected.

Funding in open source is a major issue, and as open source becomes ever more popular and used by more and more large companies with excessive amounts of revenue, the strain on maintainers and developers is going to keep increasing. I’m entirely on board with efforts to encourage funding and contributions, as long as they fall within the confines of the terms of the open source licenses in use.

“Why I prefer human-readable file formats”

Choosing human-readable file formats is an act of technological sovereignty. It’s about maintaining control over your data, ensuring long-term accessibility, and building systems that remain comprehensible and maintainable over time. The slight overhead of human readability pays dividends in flexibility, durability, and peace of mind.

These formats also represent a philosophy: that technology should serve human understanding rather than obscure it. In choosing transparency over convenience, we build more resilient, more maintainable, and ultimately more trustworthy systems.

↫ Adële

It’s hard not to agree with this sentiment. I definitely prefer being able to just open and read things like configuration files as if they’re text files, for all the same reasons Adële lists in their article. It just makes managing your system a lot easier, since it means you won’t have to rely on the applications the files belong to to make any changes.
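Just to make that concrete, here’s a tiny sketch of what this looks like in practice, using nothing but Python’s standard library and a hypothetical app.ini (the file name, section, and keys are made up for the example):

```python
import configparser

# A hypothetical, human-readable configuration file; any text editor can open it.
config = configparser.ConfigParser()
config.read("app.ini")          # silently yields an empty config if the file is missing

# Make sure the section exists, then inspect and change a setting without ever
# launching the application that owns the file.
if not config.has_section("display"):
    config.add_section("display")
print(config.get("display", "theme", fallback="default"))
config["display"]["theme"] = "dark"

# The result is still plain text: diffable, greppable, and trivially backed up.
with open("app.ini", "w") as f:
    config.write(f)
```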

I think this also extends to other areas. When I’m dealing with photo or music library tools, I want them to use the file system and directories in a human-readable way. Having to load up an entire photo management application just to sort some photos seems backwards to me; why can’t I use my much leaner file manager to do this instead? I also want emails to be stored as individual files in directories matching mailboxes inside my email client, just like BeOS used to do back in the day (note that this is far from exclusive to BeOS). If I load up my file manager, and create a new directory inside the root mail directory I designated and copy a few email files into it, my email client should reflect that.
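Here’s a rough sketch of that mailbox-as-directory model, again using only Python’s standard library; the ~/Mail root is a made-up example, and no real client necessarily works exactly this way:

```python
from pathlib import Path
from email import message_from_bytes, policy

# Hypothetical layout: one subdirectory per mailbox, one file per message.
MAIL_ROOT = Path.home() / "Mail"

def list_mailboxes(root: Path = MAIL_ROOT) -> list[str]:
    """A mailbox is nothing more than a directory under the mail root."""
    return sorted(p.name for p in root.iterdir() if p.is_dir())

def list_messages(mailbox: str, root: Path = MAIL_ROOT):
    """Every file in a mailbox directory is a single message; parse just its headers."""
    for msg_file in sorted((root / mailbox).iterdir()):
        if msg_file.is_file():
            msg = message_from_bytes(msg_file.read_bytes(), policy=policy.default)
            yield msg_file.name, msg["Subject"], msg["From"]

# Creating a mailbox is a plain mkdir, and moving a message is a plain file move;
# a client built on this model only has to rescan the tree to pick up the change.
```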

As operating systems get ever more locked down, we’re losing the human-readability of our systems, and that’s not a good development.

If you don’t like current macOS, why not keep using Mac OS X 10.9 Mavericks?

With Apple’s desktop operating systems straying ever further from what some of us consider its heyday, it’s no surprise people long for the days before Apple started relentlessly focusing on services revenue, bringing iOS paradigms to macOS, and dropping its Aqua design language for whatever they’re doing now. Some people take this longing and channel it into something a bit more concrete, and an example of this is a website I stumbled upon on Fedi: Mavericks Forever.

Mavericks Forever is, as the name implies, a detailed guide to keeping Mac OS X 10.9 Mavericks going. It covers everything from hardware options to security patches, browser choices, and so, so much more. It even goes as far as adding more recent emoji releases, custom security patches, and visual customisations. There’s a ton to go over here, and of course, you don’t have to implement every single suggestion.

I apparently like pain, because I’ve had a soft spot for the trash can Mac Pro ever since they came out. Now that they are wholly and completely outdated by Apple standards, their prices are probably dropping rapidly, so I may have to grab one from eBay or whatever and follow this guide for a modern-ish Mavericks setup. I do actually like the Mac OS X of old quite a bit, and I would love to have a usable version of it that I can use whenever I feel like it.

If only to remember the good old days.

Google ends Steam for Chromebook effort

In 2022, Google launched a major push for gaming Chromebooks, including a version of Steam for ChromeOS. Steam for ChromeOS has remained in Google’s nebulous “beta” state ever since, however, and today Google is doing a Google by killing Steam for ChromeOS altogether.

Entering “Steam” into the ChromeOS Launcher starts the install process like before, but there’s now an intermediary message: “The Steam for Chromebook Beta program will conclude on January 1st, 2026. After this date, games installed as part of the Beta will no longer be available to play on your device. We appreciate your participation in and contribution to learnings from the beta program, which will inform the future of Chromebook gaming.”

↫ Abner Li at 9To5Google

Chromebooks are cheap devices for students, and while there are expensive, powerful Chromebooks, I doubt they sell in any meaningful numbers to justify spending any time on maintaining Steam for ChromeOS. Of course, Steam for ChromeOS is just the Linux version of Steam, but Google did maintain a list of “compatible” games, so the company was at least doing something. The list consists of 99 games, by the way.

It’s just another example of Google seemingly having no idea what it wants to do with its operating systems, made worse in this case because Google actually had OEMs make and sell Chromebooks with gaming features. Sure, Android games still exist and can be run on ChromeOS, but I doubt that’s what the six people who bought a gaming Chromebook for actual gaming had in mind.

Developing your first KDE application

Akseli Lahtinen, a KDE developer who works on various components of the KDE Plasma desktop environment, had never actually made his own KDE application from scratch – until now. He created a to-do application, called KomoDo (available on Flathub), that makes use of the todo.txt format, and penned a blog post detailing his experiences. Of course, as a KDE developer, he’s got a head start and access to people who know their stuff, but that doesn’t mean it was a walk in the park.

If you’re thinking of developing a KDE application, Lahtinen’s blog post is a great place to start.

Age verification: what’s the harm?

Welcome, friends, to my grubby little corner of the internet. A corner so strewn with obscenity that the UK government has decided you must prove you’re a grown-up before you can access certain parts of it. The UK’s new Online Safety Act has come into force, so UK people might have noticed a bunch of websites suddenly demanding you take a selfie, share your credit card details, or jump through another hoop to prove that you’re over 18. Quite a few of my friends have been discussing this in the pub, because for understandable reasons people who aren’t embedded in the world of online pornography or internet law are suddenly curious about why the internet is now so very broken. They’re also often convinced that the government will change its mind and therefore no one really needs to worry. I’ve had this conversation so many times now that I reckon I’ve got the basis for a fairly solid layperson’s guide to age verification: what it is, how it affects you, and why we absolutely, genuinely do need to worry.

↫ Girl on the Net

Girl on the Net basically published the definitive guide on why age verification online, as currently implemented in the United Kingdom, and explored by the United States and the European Union, is such a terrible idea. It’s a privacy disaster, a clear onramp for Christian extremists to go after LGBTQ+ content, it doesn’t “protect the children”, it’s easily circumvented, breaks accessibility, casts such a wide net that it even hits sites like Wikipedia, and so, so much more.

Whenever anyone online tries to sell you on age verification as a means to “think of the children”, you can just point them to the linked article. If, after reading it, they still believe this is the way to protect children from seeing naked people (while leaving the door to the most brutal forms of violent content wide open, of course, as is tradition), they either have ulterior motives or are some form of extremist you can’t argue with anyway.

The demonization of sexual content and the sex workers who produce it as a means to introduce strict authoritarian control over the internet is something that will never go away. “Think of the children” is an incredibly powerful rallying cry for authoritarians to scare sheltered boomers into accepting pretty much any draconian measure, regardless of efficacy, and I doubt we will ever definitively win this fight.

But that doesn’t mean we have to sit down and accept it.

That time Microsoft forgot the southern hemisphere’s seasons are opposite to the northern hemisphere’s

Whether you like Microsoft and its products or not, the one thing we can all agree on is that the company is absolutely terrible at naming things. Sometimes I feel like managers at Microsoft get their bonuses based on how many times they can rename products, because I find it hard to accept that they’re really that inept at product naming in Redmond. I mean, just look at my recent article about the most Microsoft support document of all time. Bonkers.

While the list of examples of confusing, weird, unclear, and strange Microsoft product names is long, let’s go back to that weird moment in time where Windows updates were suddenly given names like the “Fall Creators Update”. As with every naming scheme Microsoft introduces, this one was short-lived, but for once, we have an explanation. Raymond Chen explains:

It was during an all-hands meeting that a senior executive asked if the organization had any unconscious biases. One of my colleagues raised his hand. He grew up in the Southern Hemisphere, where the seasons are opposite from those in the Northern Hemisphere. He pointed out that naming the updates Spring and Fall shows a Northern Hemisphere bias and is not inclusive of our customers in the Southern Hemisphere.

The names of the semiannual releases were changed the next day to be hemisphere-neutral.

↫ Raymond Chen

If you live in the northern hemisphere – and you can’t live much further north than I do – you don’t often have to think about how the seasons in the southern hemisphere are reversed. We all know it – I assume, at least – but it’s not something that we’re confronted with very often, as our media, movies, books, and so on, all tend to be made in and for consumers in the northern hemisphere. I’m assuming that people in the southern hemisphere are much more acutely aware of this issue, because their media is probably dominated by stories set in the northern hemisphere, too.

It’s wild that Microsoft ever went with a seasonal naming scheme to begin with, and that it somehow slipped through the cracks for a while before anyone spoke up.

Proxmox Virtual Environment 9.0 with Debian 13 released

Main highlight of this update is a modernized core built upon Debian 13 “Trixie”, ensuring a robust foundation for the platform.

Proxmox VE 9.0 further introduces significant advancements in both storage and networking capabilities, addressing critical enterprise demands. A highlight is the long-awaited support for snapshots on thick-provisioned LVM shared storage, improving storage management capabilities especially for enterprise users with Fibre Channel (FC) or iSCSI SAN environments. With newly added “fabric” support for Software-Defined Networking (SDN), administrators can construct highly complex and scalable network architectures.

↫ Proxmox press release

I’ve only very recently accepted the gospel of Proxmox, and I now have a little mini PC running Proxmox, hosting a Debian Pi-hole container, a 9front virtual machine, and a Windows 7 retro virtual machine. I’m intending to use it as an easy shortcut for running retro stuff, as well as any fun tools I might run into that work best in a container. I haven’t yet updated to this new release, but I’m interested to see how easy the upgrade process will be. Considering it’s just Debian, it can’t be too involved.

I’m curious if anyone else here is using Proxmox or similar tools at home, or at work for more complex use cases.

Writing a Rust GPU kernel driver: a brief introduction on how GPU drivers work

As promised in the first iteration, we will now explore how GPU drivers work in more detail by exploring an application known as VkCube. As the program name implies, this application uses the Vulkan API to render a rotating cube on the screen. Its simplicity makes it a prime candidate to be used as a learning aid in our journey through GPU drivers.

This article will first introduce the concept of User Mode Drivers (UMDs) and Kernel Mode Drivers (KMDs), breaking down the steps needed to actually describe VkCube‘s workload to the GPU. This will be done in a more compact way for brevity as it’s a rather extensive topic that has been detailed in several books.

We will wrap up with an overview of the actual API offered by Tyr. As previously stated, this is the same API offered by Panthor, which is the C driver for the same hardware.

↫ Daniel Almeida

There isn’t much to add here, except maybe this kitten.

Windows NT 4.0 set us on the path to Windows NT desktop dominance

The most popular desktop operating system today is still Windows, with its userbase roughly equally divided between Windows 10 and Windows 11. While we tend to focus on the marketing names used by Microsoft, like Windows XP, Windows 7, or Windows 11, their real name is still, to this day, Windows NT. Underneath all the marketing names, there’s still the Windows NT version number corresponding to the marketing name; Windows XP was Windows NT 5.1 (or 5.2 for the 64-bit version), Windows 7 was Windows NT 6.1, and the current latest version, Windows 11, is Windows NT 10.0, a version number that’s been static since 2015.
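Laid out explicitly, the mapping between marketing names and NT version numbers mentioned above looks something like this (a trivial illustrative snippet; Windows 10 is added because it shares the frozen 10.0 number):

```python
# Marketing name to underlying Windows NT version, for the releases mentioned above.
NT_VERSIONS = {
    "Windows XP": "NT 5.1",      # NT 5.2 for the 64-bit edition
    "Windows 7": "NT 6.1",
    "Windows 10": "NT 10.0",     # the number has been frozen at 10.0 since 2015
    "Windows 11": "NT 10.0",
}

for marketing_name, nt_version in NT_VERSIONS.items():
    print(f"{marketing_name:<12} -> {nt_version}")
```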

Of course, version numbers don’t really mean anything, but it does highlight that yes, the Windows you’re using is still Windows NT, and thus, the operating system you’re using isn’t a part of the Windows 3.x/9x line, but of the NT line. And probably the first version of Windows NT that set us on this path was Windows NT 4.0 – with Windows 2000 sealing the deal, and Windows XP delivering the obvious knock-out punch.

Since Windows NT 4.0 turned 29 years old a few days ago, Dave Farquhar published a retrospective of this release, highlighting many important changes in Windows NT 4.0 that in my mind mark it as the true beginning of the shift from Windows 9x to Windows NT as Microsoft’s consumer operating system.

First, Windows NT 4.0 was the first version of Windows NT that shipped with the user interface from Windows 95. It brought over the Start menu, taskbar, and everything else introduced with Windows 95 to the Windows NT line, which up until that point had been using the same user interface as Windows 3.x. A default Windows NT 4.0 desktop basically looks indistinguishable from a Windows 95 desktop, and like the earlier versions of NT, it came in a workstation edition for desktop use.

Second, another massive, at the time controversial, change came with the graphics subsystem, as Farquhar notes:

And one change, easily forgotten today, regarded graphics drivers. Microsoft moved the video subsystem from user space, ring 3, to kernel space, ring 0. There was a lot of talk about Ring 0 versus ring 3 on July 19, 2024 thanks to the large computer outage on that day. In 1996, this move was controversial, for the same reasons. The fear was that a malfunction in the graphics driver would now be able to take down the entire system. But the trade-off was much improved performance. It meant Windows NT 4.0 could be used for serious graphics work.

↫ Dave Farquhar

Windows NT 4.0 delivered more than what’s highlighted by Farquhar, of course. A major new feature in Windows NT 4.0 was DirectX, as it was the first version of Windows NT to ship with it preinstalled. DirectX support remained limited in NT 4.0, though, so Windows 9x remained the better option for most people playing video games. Other new features were the System Policy Editor and system policies, Sysprep, and, of course, a whole slew of low-level improvements to both the operating system itself as well as its various server-oriented features.

Windows NT 4.0 also happened to be the last version of Windows NT to support the Alpha, MIPS, and PowerPC architectures, although Windows 2000 retained support for Alpha in its alpha, beta, and release candidate versions. Of course, Windows would later expand its architecture support again, first with Itanium, and more recently, ARM.

As someone who was selling and managing computer systems at the time, Farquhar has some great insights into why NT 4.0 was such a big deal, and why it seemed to fare better in the market than previous versions of Windows NT did. He also highlights one particular oddity from NT 4.0 that’s still lurking around today, an oddity you really don’t want to run into.

Replacing an Amiga’s brain with Doom

There’s a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You’re still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you’re supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don’t? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

↫ Matthew Garrett

This is so cursed. I love it.

Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives

We are observing stealth crawling behavior from Perplexity, an AI-powered answer engine. Although Perplexity initially crawls from their declared user agent, when they are presented with a network block, they appear to obscure their crawling identity in an attempt to circumvent the website’s preferences. We see continued evidence that Perplexity is repeatedly modifying their user agent and changing their source ASNs to hide their crawling activity, as well as ignoring — or sometimes failing to even fetch — robots.txt files.

The Internet as we have known it for the past three decades is rapidly changing, but one thing remains constant: it is built on trust. There are clear preferences that crawlers should be transparent, serve a clear purpose, perform a specific activity, and, most importantly, follow website directives and preferences. Based on Perplexity’s observed behavior, which is incompatible with those preferences, we have de-listed them as a verified bot and added heuristics to our managed rules that block this stealth crawling.

↫ The CloudFlare Blog
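For what it’s worth, honouring those preferences is trivial from a technical standpoint; here’s a minimal sketch of a well-behaved fetcher using only Python’s standard library, with a placeholder user agent and URL:

```python
from urllib import robotparser, request

USER_AGENT = "ExampleBot/1.0"                      # a declared, stable identity
TARGET = "https://example.com/some/page"           # placeholder URL

# Fetch and parse the site's robots.txt before requesting anything else.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

if rp.can_fetch(USER_AGENT, TARGET):
    req = request.Request(TARGET, headers={"User-Agent": USER_AGENT})
    with request.urlopen(req) as response:
        body = response.read()
else:
    # The site said no; a crawler operating on trust simply stops here.
    body = None
```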

Never forget they destroyed Aaron Swartz’s life – literally – for downloading a few JSTOR articles.

Orbitiny Desktop 1.0 Pilot 4 released

It’s not every day you stumble upon an X11 desktop environment you’ve never heard of, but today’s one of those days. The Orbitiny Desktop Environment is a one-person project, consisting of an entirely custom desktop environment written in Qt. Version 1.0 Pilot 4 was just released.

Built from the ground up using Qt and coded in C++, Orbitiny Desktop is a new, 100% portable, innovative and traditional but modern looking desktop environment for Linux. Innovative because it has features not seen in any other desktop environment before while keeping traditional aspects of computing alive (desktop icons, menus etc).

Portable because you can run it on any distro and on any live CD and that’s because everything gets saved inside the directory that gets created when the archive is extracted (this can be changed so that the settings go to $HOME/.config/orbitiny).

↫ Orbitiny Desktop Environment Gitea page

It’s got all the usual amenities like a desktop, panels, and so on, and a custom file manager. It’s also replete with a ton of small features that you don’t see very often, like full mouse gesture support on the desktop and a device manager that can enable/disable devices without blacklisting kernel modules. When you cut or copy a file, its icon will get a little emblem to indicate it’s on the clipboard, you can append and prepend files using simple drag-and-drop operations, you can set individual desktop directories for each virtual desktop, and much more.

Now, it’s technically not a full desktop environment, because it doesn’t have things like a session manager, power manager, various hardware configuration panels, and so on, but it can be run on top of existing desktop environments. While it has basic Wayland support, not all components work there, so X11 is the main focus for now.

Considering it’s a one-person project, you can’t expect a bug- or issue-free experience, but that doesn’t make it any less damn impressive. I honestly feel like there’s something valuable and interesting here, and I’d love for more people to get involved to see where this can go. There’s clearly a ton of love and dedication on display, and the various unique features clearly set it apart from everything else.

If you have the skills, consider helping out.

Introduction to Qubes OS when you do not know what it is

Solène Rapenne, who writes a lot about and contributes to operating systems like OpenBSD and Qubes OS, has published a primer about what, exactly, Qubes OS is.

I like to call Qubes OS a meta operating system, because it is not a Linux / BSD / Windows based OS: its core is Xen (some kind of virtualization enabled kernel). Not only it’s Xen based, but by design it is meant to run virtual machines, hence the name “meta operating system” which is an OS meant to run many OSes make sense to me.

↫ Solène Rapenne

Rapenne explains the various ways in which isolated virtual machines are used in Qubes OS, and it’s easy to see just how secure Qubes OS’ way of doing things is. At the same time, it seems quite cumbersome to me as a regular user, and I don’t think I’m up for dealing with all of that. If you do security research, handle private or classified data, or are a whistleblower or an investigative journalist, though, Qubes seems like a natural choice.

Interesting to note is that Rapenne used to use OpenBSD for her security work, but moved to Qubes OS because its virtual machine infrastructure is far more robust, and hardware support is better, as well.

A real PowerBook: the Macintosh Application Environment on a PA-RISC laptop

In October 1997 you could have bought a PowerBook 3400c running up to a 240MHz PowerPC 603e for $6500 [about $13,000 in 2025 dollars], which was briefly billed as the world’s fastest laptop, or you could have bought this monster new to the market, the RDI PrecisionBook running up to a 160MHz (later 180MHz) PA-7300LC starting at $12,000 [$24,000]. Both provided onboard Ethernet, SCSI and CardBus PCMCIA slots. On the other hand, while the 3400c had an internal media bay for either a floppy or CD-ROM, both external options on the PrecisionBook, the PrecisionBook gave you a 1024×768 LCD (versus 800×600 on the 3400c), a bigger keyboard, at least two 2.5″ hard disk bays and up to 512MB of RAM (versus 144MB) — and HP-UX.

And, through the magic of Apple’s official Macintosh Application Environment, you could do anything on it an HP PA-RISC workstation could do and run 68K Mac software on it at the same time. Look at the photograph and see: on our 160MHz unit we’ve got HP-UX 11.00 CDE running simultaneously with a full Macintosh System 7.5.3 desktop. Yes, only a real Power Mac could run PowerPC software back then, but 68K software was still plentiful and functional. Might this have been a viable option to have your expensive cake and eat it too? We’ll find out and run some real apps on it (including that game we must all try running), analyze its performance and technical underpinnings, and uncover an unusual artifact of its history hidden in the executable.

↫ Cameron Kaiser at Old Vintage Computing Research

I actually have Apple’s Macintosh Application Environment installed and running on my PA-RISC machines, and it’s incredible just how well-made and complete it really is. You get a full Mac desktop and its applications, excellent integration with the host, file sharing between host and client, and so much more. Running it on newer versions of HP-UX than it was originally intended for does lead to the odd issue here and there, but due to HP-UX’s excellent backwards compatibility, it all just works.

It has created this odd situation that my 2004 HP c8000 machine, with two of the fastest dual-core PA-RISC processors ever made, will most likely be the fastest machine I’ll ever officially run classic Mac OS on. Sure, you can use other emulators not created and blessed by Apple and run classic Mac OS on much faster hardware, but if you want to stick to official, supported methods of running the classic Mac OS, it doesn’t get much faster than this.