Chips and Cheese has an excellent deep dive into Arm’s latest core design, and I have thoughts.
Arm now has a core with enough performance to take on not only laptop, but also desktop use cases. They’ve also shown it’s possible to deliver that performance at a modest 4 GHz clock speed. Arm achieved that by executing well on the fundamentals throughout the core pipeline. X925’s branch predictor is fast and state-of-the-art. Its out-of-order execution engine is truly gargantuan. Penalties are few, and tradeoffs appear well considered. There aren’t a lot of companies out there capable of building a core with this level of performance, so Arm has plenty to be proud of.
That said, getting a high performance core is only one piece of the puzzle. Gaming workloads are very important in the consumer space, and benefit more from a strong memory subsystem than high core throughput. A DSU variant with L3 capacity options greater than 32 MB could help in that area. X86-64’s strong software ecosystem is another challenge to tackle. And finally, Arm still relies on its partners to carry out its vision. I look forward to seeing Arm take on all of these challenges, while also iterating on their core line to keep pace as AMD and Intel improve their cores. Hopefully, extra competition will make better, more affordable CPUs for all of us.
↫ Chester Lam at Chips and Cheese
The problem with Arm processors in the desktop (and laptop) space certainly isn’t one of performance – as this latest design by Arm once again shows. No, the real problem is a complete and utter lack of standardisation, with every chip and every device in the Arm space requiring dedicated, device-specific operating system images that someone has to create, maintain, and update. This isn’t just a Linux or BSD problem, as even Microsoft has had numerous problems with this, despite Windows on Arm only supporting a very small number of Qualcomm processors.
A law or rule that has held fast since the original 8086: never bet against x86. The number of competing architectures that were all surely going to kill x86 is staggeringly big – PowerPC, Alpha, PA-RISC, Sparc, Itanium, and many more – and even when those chips were either cheaper, faster, or both, they just couldn’t compete with x86’s unique strength: its ecosystem. When I buy an x86 computer, either in parts or from an OEM, either Intel or AMD, I don’t have to worry for one second if Windows, Linux, one of the BSDs, or goddamn FreeDOS, and all of their applications, are going to run on it. They just will. Everything is standardised, for better or worse, from peripheral interconnects to the extremely crucial boot process.
On the Arm side, though? It’s a crapshoot. That’s why whenever anyone recommends a certain cool Arm motherboard or mini PC, the first thing you have to figure out is what its software support situation is like. Does the OEM provide blessed Linux images? If so, do they offer more than an outdated Ubuntu build? Have they made any update promises? Will Windows boot on this thing? Does it work with any GPUs I might already own? There are so many unknowns and uncertainties you just don’t have to deal with when opting for x86. For its big splashy foray into general purpose laptops with its Snapdragon Elite chips, Qualcomm promised Linux support on par with Windows from day one.
We’re several years down the line, and it’s still a complete mess. And that’s just one chip line, of one generation!
As long as every individual Arm SoC and Arm board remains a little isolated island with unknown software and hardware support status, x86 will continue to survive, even if x86 laptops use more power, even if x86 chips end up being slower. Without the incredible ecosystem x86 has, Arm will never achieve its full potential, and eventually, as has happened to every single other x86 competitor, x86 will catch up to and surpass Arm’s strong points, at lower prices.
Never bet against x86.

I was going to say, no problem for most of us Linux users, even for games…and then I realized even for amdgpu gamers on Linux, Proton/Wine is a translation layer for calls, not virtualization of ISA, which is what would be needed. Blast. That could have been SOO attractive.
However, for a home server, that becomes much more interesting. Depending on pricing, of course. And with current RAM prices, nothing is going to make me switch over from what I have anytime soon.
@Drizzt321
ISA translation is not as big a problem as you might imagine for games. On Windows, there is Prism and on Linux we have FEX which allows x86 applications to run on ARM.
https://github.com/FEX-Emu/FEX
If RAM prices do not stop Valve from launching the Steam Frame, you will get to see how well this works. Frame will run Windows games (x86-64) on an ARM processor.
https://store.steampowered.com/sale/steamframe
When running games, most of the performance is coming from the GPU which is not running x86. Having to translate the x86-64 instructions in the application is not going to make a game that much slower. And FEX is smart enough to dispatch Vulkan and OpenGL calls to native libraries.
FEX is a lot like Rosetta from Apple. And we know Rosetta worked well enough to let Apple move from Intel to Apple Silicon on their products with very little disruption.
The same kind of thing already exists for RISC-V as well with Felix86 which just recently added the ability to translate AVX and AES calls to RISC-V:
https://felix86.com/
LeFantome,
I gave this a shot with RPI 5 to run x86 titles in the living room maybe a year ago, but I was very disappointed. Of course it’s not the fastest ARM CPU around, but emulation is going to be felt unless you stick to less demanding software. Of course you’re right to point out that many games are actually GPU bound rather than CPU bound (depending on title), so that could mitigate things somewhat. It’s still not ideal though because emulation also costs more power consumption and memory overhead compared to native.
Another issue is that most ARM SoCs have a meager iGPU rather than a powerful discrete GPU. That said, it is technically possible to support a discrete GPU on ARM. I found it pretty cool to see someone running a Pi sporting an RX 6700 XT GPU!
“A GPU-powered Pi for more efficient AI?”
https://www.youtube.com/watch?v=AyR7iCS7gNI
Are you sure it’s going to run those titles on-device? Several paragraphs in your link suggest it’s using “streaming”. My guess is that stand-alone play won’t have the same technical capabilities…
(my emphasis).
Either way it will be interesting to see how this evolves.
Apple’s emulation, though well executed, wasn’t meant as a long-term solution. Users still expected macOS publishers to quickly make native versions available; only unsupported software would need to rely on the emulation. I think Steam has it harder than Apple, because in most cases native ports of Windows games will not be forthcoming.
It would be a long term disadvantage for ARM hardware to be permanently dependent on emulating windows x86 games.
Alfman,
For x86 emulation, ARM chips need hardware support.
On chips that have it (like recent Snapdragons), you get much better results, as some of the expensive work (like memory barriers and unaligned reads) is handled in hardware. Otherwise you basically have an “interpreter” even in JIT mode.
Raspberry Pi has not invested in this, but with their recent commercial shift this might change a bit in the future, who knows?
Follow up…
What are memory barriers?
x86 guarantees strict store ordering (TSO, total store order), while ARM cores can reorder memory accesses. If you need x86’s ordering on ARM, you have to insert memory barrier instructions, which defeats many of the core’s reordering optimizations (far less scope for pipelined and speculative memory access).
What are aligned / unaligned reads?
Intel can read a word at address 0x0101, which would require two memory fetches. ARM, and most sane architectures, refuse to do this. It is inefficient, but Intel allows it. ARM is much stricter, and it also has different alignment requirements for different operations.
This means the emulator has to load the pieces separately and stitch them together by hand.
Snapdragon has hardware emulation for both (and more)
Now, we can discuss which model is better… but at the end of the day emulation is not just translating “mov ax, bx” into “mov w0, w1”
“I gave this a shot with RPI 5”
So you chose the wrong hardware and set yourself up for disappointment. You’re never going to get the kind of performance you want from an SBC CPU.
0brad0,
I think it depends. I suspect a lot of performance issues come down to poor / incomplete support for hardware acceleration.
https://forums.raspberrypi.com/viewtopic.php?t=386127
https://gist.github.com/Cdaprod/416f37bf32aae327e8d20fd3a07f1923
It can be a con for FOSS operating systems when manufacturers won’t provide much software support.
Alfman,
The lack of hardware technical documentation is a true issue.
However, writing drivers optimized for gaming is a full-time task by itself. Why do you think Nvidia offers much better performance with the 3070, for example, while Intel’s Arc A770, with roughly similar raw specs (~20 TFLOPS), fails to meet expectations?
sukru,
Maybe, but I am impressed by what FOSS devs are able to do independently….it’s just extremely tedious and inefficient to write drivers via the trial and error process that often accompanies proprietary hardware. You have to rely on luck, intuition, and reverse engineering to discover hardware features that aren’t documented. Not only does this take a great deal of patience but FOSS devs likely need to be even more skilled if they want to succeed without documentation.
I’m not familiar with Arc GPUs. I wrote benchmarking software for CUDA to help me understand the strengths and weaknesses of Nvidia cards; I’d probably need to do the same with Intel GPUs to really answer a question like this.
@Alfman
As I understand it, the Steam Frame is intended primarily for streaming but will also allow running games directly on the hardware. It is not high-end hardware so that is not going to be AAA stuff. The text you quote seems to agree with this.
I do not think that ISA emulation is a good long-term solution in any case. It is meant to address the period of time when native application availability is spotty or to support the small number of apps that never migrate. In Apple’s case, they will take the emulation away as they want to keep driving the ecosystem forward. For Windows and/or Linux, I think it makes sense to keep ISA emulation in place.
Even for Steam, they are ok to use the Win32 API for games long term but want the games themselves to be platform native in terms of hardware. Emulation is a stop-gap.
LeFantome,
My interpretation is that Steam Frame will come with on-device games in the same sense that some smart TVs come with on-device games…technically true, but not really meant to compete with or replace a real game console. We’ll find out soon enough what on-device gaming actually means 🙂
I agree with these points, emulation should be considered a stop gap measure; software should get ported to native. However I am just skeptical of windows publishers actually doing it. In the steam discussion forums we regularly find gamers asking for official linux builds instead of proton, but most publishers keep disregarding these requests. Requests for ARM versions may end up being similarly disregarded by publishers. If so, ARM gaming consoles running windows games could end up disadvantaged by emulation penalties over the long term.
Alfman,
It should be noted that Valve’s is the story of one man’s passion project to free a $200B industry from the shackles of a $3T company’s absolute platform dominance. And while emulation won’t be going away anytime soon, the fact that you can’t count the number of Linux ports on two hands anymore is testament to his success.
@Alfman
> I gave this a shot with RPI 5 to run x86 titles in the living room maybe a year ago
The RPi 5 is not the strongest candidate for ISA emulation obviously. I am not sure what you were running but my point is not that ISA emulation is fast but that the performance critical code in games is mostly running on the GPU (which does not have to be emulated). I do not have a RPi 5 but I expect the GPU on it is pretty bad for gaming.
If you are running entire applications through ISA emulation, you are going to take a hit. On high-end laptops and the like, the fact that the hardware is so fast makes this a reasonable proposition. But you are still going to want native apps where possible. ISA emulation is a stop-gap. But it does address the “app compatibility” problem in the short-term (and the long-tail).
Ah, I was unaware of those things. Very interesting. It will be worth keeping an eye on this space then.
> Never bet against x86
Does this mean “never bet against Intel”?
Because the only reason Alpha lost against Itanium, for example, is that Compaq believed in “never bet against Intel” and decided that Itanium would crush Alpha. Except it turns out that Itanium sucked, and Alpha would clearly have crushed Itanium if Compaq had not killed Alpha first, preemptively assuming defeat.
Itanium lost massively and completely to AMD64 (x86-64) which was not invented by Intel. Intel had to hastily adopt AMD64 to keep from losing the market completely. AMD64 killed both Itanium and x86. By “never bet against x86”, you must mean “never bet against x86-64”. But that does not work either.
ARM completely crushed x86-64 in the mobile market. We are all using ARM phones and tablets now, and it is not because Intel or AMD wanted it that way.
I would say the only reason Apple Silicon has not done more damage to x86-64 is because Apple does not let other people use it. It is why Qualcomm and now ARM themselves are having to fill that same niche in the broader market.
The man behind many of the winning chips in the story above is Jim Keller. Maybe we should say “never bet against Jim Keller”. He is currently making RISC-V chips (look for the word “laptop” at the link below):
https://tenstorrent.com/en/ip/tt-ascalon
I certainly agree that application compatibility matters a lot. But as Apple has shown and Valve is showing, that can be overcome.
LeFantome,
I also think Alpha was a promising architecture, and could have had a good chance of winning on a level playing field. But given that the market had many billions invested in x86, it was practically untouchable. Intel might have been able to sabotage x86 (ie by refusing to upgrade it beyond 32bit). But AMD had their own license and built a 64bit bridge while retaining 32bit compatibility, so it never ceded territory to alternatives.
x86 is weakest in completely new markets where compatibility advantages are nil. ARM managed to get ahead on mobile and embedded devices because x86 had no software compatibility advantages there.
I agree with all of that. Alpha was already doing pretty well running Windows software though. Need a bigger Exchange Server? Alpha. Need more oomph for your CAD? Alpha. And they had their own Rosetta as well (FX!32). You could run a surprising amount of Windows NT 4.0 software on Alpha back in the day. And you could mostly use commodity PC hardware on many machines as well which made the driver situation much less of a problem. Alpha was a better extension of the X86 world than what we have seen since in my view.
All that said, AMD64 was an even better extension and may indeed have displaced Alpha even if Alpha had killed Itanium first. But I think Alpha would have made a LOT of headway in the years between 1995 and 2003.
LeFantome,
I would suggest AMD64 is what saved Intel. Their own 64 bit offerings were going nowhere, and people were really looking into alternatives like PowerPC.
(The original Xbox was for example using an Intel chip, they never did that afterwards. Many other vendors were also looking at alternatives)
sukru
Did Intel’s 64-bit offerings really struggle from a performance perspective? They were highly sought after.
They did struggle in production, delivery, and in being price competitive against AMD64; even the Power architecture was cheaper. But if you ignore cost relative to the performance offered, Intel seemed to have some pretty reasonable solutions that just could not be justified economically in most applications.
cpcf,
Intel’s IA-64 (Itanium) was a “green field” design which was optimized for execution speed, yes.
But it also lacked proper hardware backwards compatibility with x86. Even though for a server system this might not be as large a concern as for a gamer desktop…
It opened up the customers to looking into alternatives. If IBM’s DB2, or Oracle’s database is optimized for a different architecture, they no longer had Intel as the de facto choice.
Green field also meant level playing field.
sukru,
Itanium 1 did have hardware support for x86; however, Itanium’s architecture was radically different and required explicit parallelism that x86 software lacked. Itanium’s parallelism was lost on x86 software, and it couldn’t keep up with superscalar architectures optimized around x86.
With Itanium 2, Intel used software x86 translation to make better use of Itanium’s parallelism than Itanium 1 could manage executing x86 directly. However, Itanium really needed specially compiled software to truly exploit VLIW’s potential. Unfortunately for Itanium, most software would never get optimized for it. I always thought it would be interesting to write software targeting Itanium’s strengths, but the hardware was so darn expensive that Intel priced it out of reach of developers, essentially guaranteeing that no mainstream software devs would ever support it.
https://wiki.osdev.org/IA-64
AMD64 is absolutely what saved Intel.
Itanium was a disaster. Even without the compatibility problems, it was a performance pig in practice. It expected the compiler to save it and that never happened.
The irony is that AMD64 was not even their technology though. The same agreement that allowed AMD to make x86 chips allowed Intel to make their own chips with AMD64.
Intel would have been left behind if AMD had not created what we now call x86-64.
Itanium was performant and the compilers were just fine.
What killed Itanium was the lack of software compatibility, plus power consumption that made it a non-scalable solution beyond workstation and server applications. So it missed out on economies of scale, the same thing that killed other architectures.
Alfman,
I did not know that. But at the end of the day, it was a slower x86 than a comparable “Pentium”
Xanady Asem,
Yes, they were. But being “just fine” and not having a direct lineage from x86 opens the door to competition to other “just fine” architectures like SPARC or PowerPC (which sold a lot during that period)
There is a reason it was called Itanic
sukru,
It was slower at running x86 code than a contemporary Pentium, but it doesn’t necessarily follow that code running natively was slower. The problem is Itanium had very little software.
It would not have been justified, but how interesting it would have been if there had been a proficient port of Quake or Half-Life on Itanium, just to see how well it could handle the task natively.
@sukru
FWIW, neither PowerPC nor SPARC sold that much in that period either, which is why they pretty much suffered the same fate as Itanium.
POWER is the only one of those high-performance RISC architectures still (barely) around, mostly because of AIX support contracts and because it shares its execution engines with the mainframe cores.
When it comes to processor performance, economies of scale always seem to win. Ironically, the same thing is happening now to x86: ARM, where the bigger volume moved, is cleaning its clock now.
Alpha lost to X86 on cost even though it had a significant performance advantage. Plus, although it had vendor software support, it didn’t have as much software as Intel.
Same with Itanium. It had a substantial performance benefit for native software but by the time there was Windows and more server software, AMD was catching up and was a cheaper option.
FWIW Alpha killed DIGITAL. And when HP bought Compaq, they had little incentive to continue AXP, since Itanium was their baby.
Itanium turned out to be a dead end.
But Alpha and MIPS had priced themselves out of existence, with exponential design costs not matched by a similar increase in revenue for their parent organizations.
If it makes you feel any better, a bunch of modern Intel x86 uarchs started life as Alphas in the initial performance simulations during their development cycles.
To me this is again a case of perspective: nearly all the architectures discussed started as task-specific and evolved to become general purpose, but we talk about them as if they were general purpose from the beginning. It’s the evolution from task-specific to general purpose that leads to a lot of the issues discussed, because different vendors have different priorities, which causes fragmentation. Has anybody come cold into the current market and introduced a desktop processor to go head to head with the likes of AMD and Intel? I can’t think of any, and before we get a booster comment: Apple didn’t do it either!
I’m not claiming it can’t or won’t happen in the future.
“specific operating system images people need to create, maintain, and update.”
This is not an issue with OpenBSD.
The Linux space is an absolute disaster, but that’s typical of Linux.
https://www.openbsd.org/amd64.html
> Supported hardware: All versions of the AMD Athlon 64 processors and their clones are supported.
(i.e. all x86_64 processors)
https://www.openbsd.org/arm64.html
>Supported hardware: OpenBSD/arm64 runs on the following hardware: (list of specific hardware)
In other words, OpenBSD has the same problem as Linux. Every piece of ARM hardware needs to be specifically supported.
One single kernel runs on them all, unlike Linux.
Same as x86_64. There is no magic x86_64 kernel that works on everything.
0brad0,
There might be obscure systems that don’t work, but x86 standards mean that hardware specific images aren’t typically needed with linux distros on x86: a single linux kernel can support nearly all x86_64 hardware. And because of BIOS/UEFI standardization you can typically boot into a generic VGA console and interact with it even when you lack hardware specific drivers.
If your criticism is that a linux distro might be missing drivers for your specific hardware, that’s true but is also true of OpenBSD. So I don’t understand why you disagreed with Unopposed0108?
Well, the problem is also that pricing for ARM boards is not great, to say the least. The Orion O6 (possibly the best/most powerful consumer-grade ARM board available) costs around €600. For that price you can almost get a 14900K with a motherboard, and that combo trounces the O6 in every possible situation except power consumption.
I would like to try one of these boards, but except for the Raspberry Pi (and Apple), there is nothing really affordable or powerful (unless, in this case, you go to server boards at a much higher price).
IMO, the Apple Mac mini has such brutally good price/performance that any ARM SFF competitor is going to have a hard time.
You are right, but Linux (or any OS other than macOS) on Apple Silicon is still immature: there’s no support for the M4, and many important features are not supported on the M3. If macOS is all you need, fine, but if that’s not the case, the otherwise great minis are not an answer.
Sure, if Windows or Linux is a must, DIY ARM still looks like a wasteland.
For me macOS is Unix enough, and the mini was a no-brainer. Not even among x86 SFF machines could I find similar performance/value.
But given how many great SoCs are coming out of the ARM field (Apple, Qualcomm, MediaTek, etc.), it is a shame there is not a plethora of DIY or desktop options built on them. Right now all three of those vendors have single-core performance much better than almost anything Intel or AMD can produce.
Do new competitors like RISC-V or LoongArch fare any better? Do they have an equivalent of BIOS or UEFI, where one OS image runs on all machines with that ISA?
No, and there’s little incentive for manufacturers to standardise on one. x86 became a standard platform as IBM PC clone makers targeted compatibility with a known system. With no market-dominating platform for most other ISAs (maaaybe Samsung and its Android devices are the best example outside of PCs?) there’s little incentive to standardize on a compatible BIOS and guarantee OS portability between devices.
If these companies had half a brain they would realize that trying to win without better economies of scale is futile. Joining together has risks but not joining together is certain doom.
The other reason x86 became so standardised was because of MS-DOS and later Windows. You can’t alter the OS (by virtue of it being proprietary, though easily licensed), and the OS was aimed at a specific computer architecture. As a systems designer, if you wanted to run MS-DOS or Windows, you had to be compatible with the hardware platform. And MS-DOS ran on the IBM PC. Ergo, as a manufacturer, you had to build PC clones.
With most other non-x86 platforms, systems manufacturers (looking particularly at Android here) have the source code and can bend the software to work with the hardware. You can’t really do that with Windows/MS-DOS. So since Linux is so plastic, vendors can get away with manufacturing incompatible systems, as they can just write some glue code to make Linux run happily on their hardware.
If RISC-V, ARM, Loongson etc, had a solid, market leader platform with a market leader proprietary OS (for example, Apple Silicon systems), maybe a defacto standard would emerge that everyone else would follow.
The123king,
I think this gets overlooked, but it’s very logical when you think about it. Back in those days making hardware compatible wasn’t really optional if you wanted to be compatible with existing commercial software.
I agree. ARM hardware isn’t expected to be compatible anymore because manufacturers are expecting it to be used with their own custom OS. For them hardware compatibility isn’t a goal (to my chagrin).
@Unopposed0108
It depends on the vendor.
For example, here is a RISC-V board that has both UEFI and ACPI:
https://milkv.io/titan
The popular VisionFive 2 SBC ships with UEFI and even supports the open source TianoCore EDK II firmware now.
And the RISC-V server profile specification mandates both UEFI and ACPI for server class hardware. So that will be the norm if you are installing something like RHEL on your RISC-V server.
But most RISC-V SBC offerings rely on device trees much like ARM boards do (including Apple Silicon). This means that the device tree has to be in your kernel for the system to work. While it depends again on the specific supplier, there is at least a developing culture of getting device trees and drivers into the mainline Linux kernel so that you do not have to rely on vendor supplied distros.
For example, the SpacemiT K3 starts shipping next month, support for it has already been added into the Linux 7.0 kernel, and both Ubuntu 26.04 and Fedora 43 will support it out-of-the-box.
I don’t know about LoongSon. But RISC-V can be worse than ARM in that regard.
The tiering goes something like this:
– x86 standardized programmer and platform interfaces
– ARM standardized programmer interface, vendor-specific platform interfaces
– RISC-V vendor-specific programmer and platform interfaces
The open and extensible nature of RISC-V comes with great benefits, but also with some massive headaches. They are trying to rein in the chaos with the RV profiles and system architecture definitions; e.g., they just introduced standard IOMMU, interrupt architecture, etc. definitions in the past couple of years. So a lot of the system-level stuff has been a bit of a wild west in RISC-V land…
“Never bet against x86?” Maybe.
One of the world’s largest tech companies (Apple) has been betting against x86 for half a decade and seems to be winning pretty hard.
I don’t like that because it seems like the beginning of the end for the standardized architectures that have defined x86 as a platform and made alternative OSs practical for home users.
Then again, I don’t know that there is another tech company competent enough to build decent machines around their own bespoke architectures. nVidia might be able to pull it off, but nobody else even seems close.
It could be restated as “never bet against UEFI”.
The answer has been already provided by ARM, and is called Arm SystemReady band:
https://www.arm.com/architecture/system-architectures/systemready-compliance-program/systemready-band
And it’s not something new; this effort is at least 10 years old:
https://lwn.net/Articles/584123/
“It helps ensure operating system interoperability for advanced configuration and power interface (ACPI) environments where generic operating systems (OS) can be installed on either new or old hardware without modification. Old OSs can run on new hardware, and new OSs can run on old hardware, without customization.”
So all the standards are there; the point is that lazy hardware manufacturers don’t do their homework, plus buyers believe that ARM SBC crap is a full PC, while those boards are aimed at embedded use cases, not general-purpose computing.
The list of the ARM SystemReady partners is public:
https://www.arm.com/architecture/system-architectures/systemready-compliance-program/partners
Just buy ARM hardware (motherboards or SBCs) from those hardware vendors, choose the SystemReady-compliant ones, and you’ll have the same PC-like experience.
Easy 😉
andreamtp,
Yes, but I only wish that all commodity ARM hardware were compliant. Almost any random x86 hardware you find can boot a standard Linux distro; even weird, esoteric x86 embedded devices support the same boot standards. This is such a nice property of x86 for DIY. The SystemReady ARM computers for sale haven’t reached commodity status, which makes them awfully expensive and specialized compared to x86. ARM SBCs are everywhere, cheap ones even, but they are typically not compliant.
ARM can’t, or doesn’t want to, enforce it.
The only reason people love SBCs is that they’re cheap.
But the SBC is a product for industry, for embedded projects run by people who know they will have to fight devicetrees and outdated “SDKs” based on unsupported Linux distros with downstream drivers of abysmal quality.
ACPI/SMBIOS/UEFI are a commodity nowadays, but they are a personal computer thing that was later adopted for bigger machines, once the industry started believing that x86 could be good for what SPARC/PA-RISC/POWER had been used for before.
ARM has always had its niche in embedded solutions for industry.
Geeks started using devicetree SBC crapware because they were attracted by the low power, small size, and good-enough platform.
Geeks/prosumers are not the SBC target, so vendors don’t care about being SystemReady compliant, partly because it would add cost, and partly because, unlike a PC, an SBC will never change, so a devicetree is enough. But cost is everything; just look at the terrible Linux support these boards have at launch: SBC hardware vendors don’t give a f*** about proper Linux support, let alone SystemReady compliance. Any effort adds cost.
But if you look at products for the datacenter (servers) or real developer machines, you get SystemReady and your life is easy. So the industry is capable of doing it; you get SystemReady where SystemReady matters.
But things are slowly changing: the recent Radxa Orion O6 has “Full UEFI support via EDKII”:
https://radxa.com/products/orion/o6/#techspec
Vote with your money
andreamtp,
It makes a lot of sense why “geeks/prosumers” want to use ARM devices, but it’s obvious that manufacturers are not catering to us with DIY-friendly products. The type of ARM systems that support SystemReady are generally enterprise servers (which, ironically, are often more expensive than commodity x86 servers). For DIY purposes it’s regrettable that we can’t re-purpose all the free ARM hardware lying around us, destined to become e-waste. It’s just so unfortunate, because the hardware works great, but the software barriers…ugh! After multiple decades it’s obvious that manufacturers are never going to fix this across the board.
I don’t think the cost is a big impediment to standardization. If anything better standards would help reduce everyone’s costs like it does on x86. It’s the non-standard approach that’s more work and more expensive to support. Not only this but it creates a number of other issues for OEMs, like products getting stuck on unsupported kernels. I appreciate ARM’s hardware capabilities, but it’s so unfortunate that standards for ARM are so poor. Standards solve so many problems and x86 is a good testament to that.