Linux 5.9 is out as the 2020 autumn kernel update. Linux 5.9 brings a number of exciting improvements, including initial support for the upcoming Radeon RX 6000 “RDNA 2” graphics cards, initial Intel Rocket Lake graphics, NVMe zoned namespaces (ZNS) support, various storage improvements, IBM’s initial work on POWER10 CPU bring-up, use of the FSGSBASE instructions, 32-bit x86 Clang build support, and more.
It will make its way to your distribution eventually, to a separate kernel repository, or, for the brave ones, straight to your own compile command.
It’s quite clear that the main reason to use x86 comes down to network effects, myself included, but man, the architecture is full of legacy hacks and caveats these days…
https://software.intel.com/security-software-guidance/best-practices/guidance-enabling-fsgsbase
This is the kind of feature that will inevitably confuse new engineers for decades to come about why the hell it was done this way. And, as is tradition with x86, these CPU quirks will probably have to stay on x86 dies for a long time. I realize the aforementioned network effects keep x86 dominant, but it makes me wonder just how convoluted we’re going to allow our dominant architecture to become before finally choosing an alternative with less legacy baggage to take its place.
I’m sure x86 (the thirty-two-bit version) will die eventually; Fedora 33 considered removing BIOS support, given that Intel is doing the same this year. (I realise that doesn’t mean the 80×86 is dead, but it does mean that even the most fundamental technologies end up becoming obsolete, at least where electronics and software are concerned.)
The main question for me is whether 32-bit x86 will die before Intel switches over to producing ARM and/or RISC-V processors. (And I read somewhere this week that ARM is already dropping support for 32-bit ARM.) They may not, of course, but I suspect that (like Microsoft, and unlike, say, Digital Equipment Corporation and Data General) they will choose survival over obsolescence. And somewhere along the line, chips will presumably become powerful enough that even if 32-bit apps are still around, they can run in software emulation rather than on silicon – like Intel tried to do with Itanic, but that was too early and too slow.
JeffR,
I get where you’re coming from, but it’s not just 32-bit that I’m talking about. Even 64-bit x86 baked in compromises in order to stay backwards compatible. Suppose we do remove 32-bit support: you can’t simply “uncompromise” the 64-bit architecture, because now there’s a lot of 64-bit code and operating systems to remain compatible with, and this FSGSBASE feature is an example of that. The new instructions let user-space code read and write the FS/GS base registers directly, effectively turning them into more generic registers, but due to legacy use cases that stem back decades they still have to support the old usage model for these registers regardless of whether they support 32-bit or not.
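To make that concrete, here’s a rough sketch of what the new instructions allow from user space, assuming GCC or Clang with -mfsgsbase on a kernel that has actually enabled CR4.FSGSBASE (which 5.9 finally does); on older kernels these instructions simply fault:

    /* Sketch only: read the FS/GS base registers directly from user space
     * via the FSGSBASE intrinsics (RDFSBASE/RDGSBASE under the hood).
     * Build with: gcc -mfsgsbase fsgsbase_demo.c
     * On kernels before 5.9, CR4.FSGSBASE is left clear and the CPU raises
     * an invalid-opcode fault, so real code must check for support first. */
    #include <stdio.h>
    #include <immintrin.h>

    int main(void)
    {
        unsigned long long fs = _readfsbase_u64();  /* TLS base on x86-64 Linux */
        unsigned long long gs = _readgsbase_u64();  /* normally 0 in user space */
        printf("FS base = %#llx, GS base = %#llx\n", fs, gs);
        return 0;
    }

Before FSGSBASE, getting or setting these bases from user space meant going through arch_prctl(), i.e. a system call, which is part of why the old model has to stick around alongside the new one.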
I’d like to see RISC-V computers become a commodity, but who knows. I wanted to see more ARM computers as well, but to be honest I’m disappointed with the way ARM is evolving. It’s not nearly as open & interchangeable as I’d like, and I’m wary of dumping the admittedly crusty x86 architecture for another one that tends to be more vendor-locked and makes it harder for independent developers to gain access to the hardware. 🙁
I agree with you on the last point, but on the other hand, suppose ARM or RISC-V or some as-yet-unknown architecture takes over and becomes as ubiquitous as x86 has been over the last 35 years; I’m sure there’ll be cruft in it by 2055. The successor to IBM’s original System/360 architecture (I forget whether it was the 370, the 390 or the 3090) has similar oddities: they extended it with a 31-bit architecture because, IIRC, they used bit 31 (counting from zero) as a mode bit. Of course, now it’s 64-bit, and I don’t know enough about the architecture to say whether there are other oddities; but it seems likely that if they haven’t dropped compatibility altogether they’ll still have that weird 31-bit mode.
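Roughly, the effect is that only 31 of the 32 address bits are usable, so you get a 2 GiB address space instead of 4 GiB; something like this illustrative sketch (not actual mainframe code, just the masking idea):

    /* Illustration only: with one bit of a 32-bit address word reserved as a
     * mode indicator, the usable address is the low 31 bits (2 GiB, not 4 GiB). */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t word = 0x80001000u;             /* top bit set as the mode flag */
        uint32_t addr = word & 0x7FFFFFFFu;      /* the 31-bit address itself */
        printf("mode bit = %u, address = %#x\n", word >> 31, addr);
        return 0;
    }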
It’s like the Y2K problem or Unix’s Y2K38 problem: people just weren’t looking that far ahead. (It’s unlikely the people who coded the VMS epoch seriously thought there’d be VAX-11/780s around in the year 31,086 by the Western calendar.)
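For what it’s worth, the Y2K38 rollover is easy to demonstrate: a signed 32-bit time_t runs out one second after 03:14:07 UTC on 19 January 2038 and wraps back to December 1901. A minimal sketch, using int32_t to stand in for the old 32-bit time_t:

    /* Minimal sketch of the Y2038 problem, run on a system with 64-bit time_t. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t last = (time_t)INT32_MAX;   /* 2038-01-19 03:14:07 UTC */
        time_t next = (time_t)INT32_MIN;   /* what the next tick wraps to */
        printf("last 32-bit second: %s", asctime(gmtime(&last)));
        printf("wraps around to:    %s", asctime(gmtime(&next)));  /* back in 1901 */
        return 0;
    }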
JeffR,
Yes, to a degree. However, the industry is also more mature, and one would hope we don’t make as many mistakes as in the past. I don’t think anybody who originally designed our early CPUs, network protocols, etc. really planned ahead. We were guilty of not planning for progress, and as a result nearly every generation produced new architectural limitations with memory, disk interfaces, PCI, and so on, requiring re-engineering of the hardware/BIOS/OS/etc. Unlike our predecessors, we can now draw on decades of experience to avoid those mistakes and plan further ahead.
So I accept your point, but I just wanted to add that the next 30 years should have less cruft than the last 30 years simply due to maturity.
Yes, exactly. These were clearly foreseeable problems even without the benefit of hindsight, but at the time they were more interested in saving a few bits than in making things future-proof. Not to place blame; they did what they had to do. But we’re in a much better position to future-proof today, at least if we’re willing to do it.