Remember x86S, Intel’s initiative to create a 64-bit-only x86 instruction set, with the goal of removing some of the bloat the venerable architecture has accumulated over the decades? Well, this initiative is now dead, and more or less replaced with the x86 Ecosystem Advisory Group, a collection of companies with a stake in keeping x86 going. Most notably, this includes Intel and AMD, but also other tech giants like Google.
In the first sign of changes to come after the formation of a new industry group, Intel has confirmed to Tom’s Hardware that it is no longer working on the x86S specification. The decision comes after Intel announced the formation of the x86 Ecosystem Advisory Group, which brings together Intel, AMD, Google, and numerous other industry stalwarts to define the future of the x86 instruction set.
Intel originally announced its intentions to de-bloat the x86 instruction set by developing a simplified 64-bit mode-only x86S version, publishing a draft specification in May 2023, and then updating it to a 1.2 revision in June of this year. Now, the company says it has officially ended that initiative.
↫ Paul Alcorn
This seems like an acknowledgement of the reality that Intel is no longer in the position it once was when it comes to steering the direction of x86. It’s AMD that’s doing most of the heavy lifting for the architecture at the moment, and it’s been doing that for a while now, with little sign that’s going to change. I doubt Intel had enough clout left to push something as relatively drastic as x86S, and it now has to rely on building consensus with other companies invested in x86.
It may seem like a small thing, and I doubt many larger tech outlets will care, but this story is definitely the biggest sign yet that Intel is in a lot more trouble than people seem to think based on its products and market performance alone. What we have here is a full admission by Intel that they no longer control the direction of x86, and have to rely on the rest of the industry to help them. That’s absolutely wild.
Well, given AMD already did the x86-64 extension over 20 years ago, which Intel only had to copy later because their proprietary IA-64 failed so EPICly, I guess AMD basically already controlled all the state-of-the-art x86 ISA development for a long, looong time 😉
Exactly, Intel hasn’t been controlling the direction of x86 ever since AMD64 aka x86-64 won the race to bring 64-bit computing to PCs (it seems weird now, but Intel expected Itanium to replace the x86 architecture in PCs, with 32-bit x86 software being executed using some kind of emulation, which btw would have been horrible because Itanium was a monopoly fully owned by Intel).
IA-64’s x86 emulation was only meant as a stopgap for a year or two, until of course everyone would have migrated to native IA-64 code… 😉 The x86 hardware silicon was also removed later in the Itanium 2 line (with “Montecito”), so only QEMU-like software (Intel’s IA-32 Execution Layer) would emulate it.
rene,
I was someone who actually wanted an IA-64 machine for myself. But running IA-64 was financially out of reach for ordinary consumers, and it’s crystal clear that intel never intended it for us. Rather, they intended to split the market between enterprise and consumers. In hindsight it was a bad strategy that left their enterprise IA-64 CPUs without any software. It was simply priced out of reach for the hackers and students who were best suited to exploit the novel aspects of the architecture and create new games/demos/software to drive up interest. If the hardware had been more accessible to us, there would have been more innovation on the software front around intel’s VLIW architecture. Instead, there was no ecosystem, and the platform suffered for it. Without native software, Itanium could never be more than a very expensive and inefficient x86 emulator.
I don’t know if it would have been feasible for intel to price these affordably for consumers, but a killer game, at a time when parallel computation via hardware acceleration was on the rise, would have given intel a good market opportunity. Obviously, they didn’t take it.
There is nothing in the IA-64 instruction set that makes it unimplementable in consumer-level CPUs costing around what a high-end Pentium 4 cost at the time (even if we assume performance wouldn’t be as good, it would still be possible). You see, Intel’s plan wasn’t to compete on technical superiority (not after Itanium failed to live up to expectations); Intel’s plan was to wait until the 4GB limit became more and more restrictive for PC users and then present Itanium as the saviour of the PC ecosystem, despite its inadequate performance at running 32-bit x86 code (remember: back then, the PC ecosystem competed with PowerPC Macs and RISC workstations, which were all 64-bit). After all, Itanium ran 32-bit x86 code better than those other RISC CPUs did, even with software emulation. And this would also conveniently eliminate that pesky competitor AMD. Intel never expected AMD to develop x86-64, and more importantly, to market it successfully. Itanium finding a niche in HP-UX (and Intel focusing Itanium designs exclusively on the enterprise) was more of a consolation prize; they expected IA-64 to become the standard for both enterprise and consumer.
Also see the comment below by jgfenix, the link is an interesting read.
kurkosdr,
It’s not just the instruction set. Going by the listings here, itanium dies were roughly 4X bigger in size and transistor count than contemporary x86 CPUs of the day.
https://en.wikipedia.org/wiki/Transistor_count
I guess you might be suggesting lower-transistor-count models for consumers? Maybe, but there are also R&D costs to consider, and it’s not clear to me whether intel could have sold them for drastically less. Theoretically it might have been possible, but in any case we know intel wasn’t interested in marketing IA-64 to non-enterprise demographics. Still, it would have been interesting to see what could have been done with the architecture in the hands of demo-scene hackers and devs, who are known for pushing cutting-edge software techniques.
It was too cost-prohibitive even for those who were interested. I don’t believe intel intended it for consumers. As for leaving customers on 32-bit CPUs, PAE on x86-32 still offered something like 64GB of total memory capacity, which is still more than most computers are sold with decades later. The 32-bit limitation meant that processes could only have 4GB mapped in at a time, which is a technical limitation, but not a particularly arduous one, especially ~25 years ago. So Intel probably thought they had more time. But I agree with you that AMD caught intel off guard.
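For reference, here is the arithmetic behind those two numbers, assuming standard PAE with 36-bit physical addressing (the 36-bit width is the assumption here; later CPUs widened it further):

```latex
\underbrace{2^{36}\ \text{bytes}}_{\text{PAE physical address space}} = 64\ \text{GB}
\qquad\text{vs.}\qquad
\underbrace{2^{32}\ \text{bytes}}_{\text{per-process virtual space}} = 4\ \text{GB}
```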
That is an interesting read. If AMD hadn’t been around to bring AMD64 to market, Intel’s attempts to segment the markets could have succeeded. But the existence of AMD64 changed everything. It satisfied the needs of backwards compatibility and future proofing at the same time. Whether you were a business or individual there weren’t many downsides… and on top of this AMD’s CPUs were priced very competitively. It was a huge win for fans of x86.
According to an ex-Intel engineer, Intel had developed 64-bit-capable x86 chips before that.
https://logicstechnology.com/blogs/news/ex-intel-cpu-engineer-reveals-how-intels-internal-x86-64-developments-were-stifled-before-amd64-thrived
It’s not necessarily a bad thing that the future of x86_64 is in the hands of a consortium. We all remember the stagnation in the x86 landscape while AMD struggled with the Bulldozer architecture and Intel mostly tick-tocked the price of their “new chips”. Also, we don’t need stunts like Itanium, where Intel was trying to get rid of its last competitor. I would say that a few more licensees of the architecture could bring back some much-needed competition.
AMD and Intel, sure, but why Google? Their business is mainly advertising and low-quality online services. Why do they need to have a say in x86?
Same reason Dell, HP, and Lenovo are on the list. Chromebooks.
Also probably the same reason Meta is on there. They design their own server hardware for their datacenters.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
Probably none of them “need” to be on the list, but as tech giants they don’t want to be left off it.
FWIW, Microsoft et al have a lot of input on x86 arch revisions/extensions, since they provide the use cases AMD and Intel use for their roadmaps.
Large fleet deployers like Amazon, Google, etc also get their own tailored x86 SKUs.
I always wondered: how much chip area does the x86 “cruft” really occupy? If it’s minor, why not keep it and not completely lose the embedded market to Vortex86? And yes, the embedded market still uses DOS OSes that require 16-bit real mode:
https://www.youtube.com/watch?v=BdSJgoP2a88
Because nobody uses the 16-bit code anymore; neither the EFI BIOS nor any OS does. It just costs millions in design and testing and lowers clocks by some 100 MHz here and there. Long overdue to finally axe this. There also is no 16-bit x86 embedded market.
rene,
I agree there may not be enough justification to support them, but I also agree with kurkosdr on the fact that people & companies are still using these modes even to this day. I don’t think we should deny their existence; the debate probably should be around whether such niche users are worth supporting, though.
I think a good compromise would be to implement and boot into Long Mode exclusively, but officially support software emulation of the legacy modes & instructions (see the sketch below). This way the hardware doesn’t have to implement them, and software emulation can still be orders of magnitude faster than the original hardware ever was. On a modern microarchitecture it probably doesn’t take that much work to support x86: the front-end prefetcher/decoder needs to understand those modes, not the underlying execution cores. Maybe an optional software-defined prefetcher could pull this off for legacy modes.
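To make the software-emulation idea concrete, here is a minimal, purely illustrative sketch in C: a toy fetch-decode-execute loop for three real-mode 8086 instructions. The struct, the opcode subset, and the toy “firmware” are all hypothetical; a real legacy-mode emulator (think QEMU or Intel’s IA-32 Execution Layer) is vastly more involved.

```c
/* Toy real-mode 8086 interpreter: a sketch of "legacy modes in
 * software" (hypothetical; handles only MOV AX,imm16 / INC AX / HLT). */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t ax, cs, ip;     /* just enough state for the demo */
    uint8_t  mem[1 << 20];   /* the 1 MiB real-mode address space */
} Cpu8086;

/* Real-mode address translation: segment * 16 + offset. */
static uint32_t phys(uint16_t seg, uint16_t off) {
    return ((uint32_t)seg << 4) + off;
}

/* Execute one instruction; return 0 to stop (HLT or unknown opcode). */
static int step(Cpu8086 *c) {
    uint8_t op = c->mem[phys(c->cs, c->ip++)];
    switch (op) {
    case 0xB8:  /* MOV AX, imm16 (little-endian immediate) */
        c->ax = (uint16_t)(c->mem[phys(c->cs, c->ip)]
                | (c->mem[phys(c->cs, c->ip + 1)] << 8));
        c->ip += 2;
        return 1;
    case 0x40:  /* INC AX */
        c->ax++;
        return 1;
    case 0xF4:  /* HLT */
        return 0;
    default:
        fprintf(stderr, "unhandled opcode 0x%02X\n", op);
        return 0;
    }
}

int main(void) {
    /* x86 CPUs reset with CS:IP pointing near the top of the first
     * megabyte; 0xFFFF:0x0000 gives the classic 0xFFFF0 reset vector. */
    static Cpu8086 c = { .cs = 0xFFFF, .ip = 0x0000 };
    const uint8_t prog[] = { 0xB8, 0x33, 0x12,  /* MOV AX, 0x1233 */
                             0x40,              /* INC AX */
                             0xF4 };            /* HLT */
    for (unsigned i = 0; i < sizeof prog; i++)
        c.mem[phys(c.cs, c.ip) + i] = prog[i];
    while (step(&c))
        ;
    printf("AX = 0x%04X\n", c.ax);  /* prints AX = 0x1234 */
    return 0;
}
```

The point is only that real-mode semantics (segment×16+offset addressing, 16-bit registers) are simple enough to reproduce entirely in software on a fast 64-bit core, so the silicon itself wouldn’t need to carry them.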
I literally provided an example above of an embedded device that uses DOS (hint: it’s the YouTube link).
Also, how does 16-bit real-mode code “lower clocks by some 100 MHz here and there”? 64-bit mode doesn’t even support the execution of 16-bit real-mode code.
That “nobody uses 16-bit code anymore” is not 100 percent true, as Intel developers found out in 2014 with the Linux kernel.
There are still a lot of people around running old legacy Win16 applications, and a lot still running legacy 16-bit DOS applications in DOSBox.
Also, you are right and wrong with “There also is no 16-bit x86 embedded market.” There is no market for embedded 16-bit x86 silicon. In new systems that would once have used 16-bit x86 silicon, today you find an FPGA implementing the 16-bit x86 CPU, like MCL86, a cycle-accurate Intel 8088 FPGA core (and there is a 286 one out there as well). The advantage of these FPGAs: say you find the software was designed for one particular 16-bit x86 chip and your timing is a little off, cycle-wise, from that chip; you don’t redo the board, you change the FPGA firmware to make it match. This is “the software is not broken, so do not fix it”: rewriting the software can require recertifying it, which can cost an arm and a leg.
You also find, as kurkosdr did, a 32-bit x86 chip running in 16-bit mode, but those were never really part of the pure 16-bit x86 silicon market. FPGAs have killed the pure 16-bit x86 silicon market, and are even doing the same in the retro PC market.
A new embedded item with 16-bit x86 in an FPGA is normally some upgrade board for a legacy system, with the 16-bit x86 code running on an FPGA version of the chip it was designed for. Cycle accuracy is important in these cases so that everything keeps behaving the same; you want both instruction-set and cycle accuracy.
rene, there are a lot of dark corners of the computing world where you find stuff being made new that people would tell you absolutely cannot be new hardware. Systems with 16-bit x86 CPUs in an FPGA, running software over 30 years old, in a brand-new device: it absolutely does happen.
Almost every x86 CPU boots as a 16-bit 8086 core. So during the first seconds, at least, billions of x86 systems run 16-bit code.
FWIW, the legacy x86 ISA support does not lower clocks or add any significant overhead, and hasn’t done so for decades. It’s a non-issue.
I think this also misses that Intel doesn’t necessarily want to run x86.
Setting the direction means massive R&D costs as well as a high risk of failure.
For example, if Intel designs x86S, they spend billions on it. And, like IA-64, if they lose, they lose all the investment and have to switch to a competitor’s solution to continue their model of commodity hardware.
They are better off sharing the R&D cost, reducing the risk of ending up in a dead end alone, and then leveraging their fabrication and business agreements to their advantage.
The legacy x86 stuff is less than 1% of the die, mostly noise/error.
8086/286 support is basically emulated, and the 32-bit 386 stuff is not that much.
People on the internet with little background in architecture tend to overestimate how much area and complexity decoding takes up in a modern core, compared to the major structures of a modern out-of-order core.
Xanady Asem, a lot of old 8086/486 code used in embedded systems depends on in-order processing. This is why it is normally run on cycle-exact FPGA re-implementations of the 8086 through 486. Yes, there are 486 CPUs implemented for FPGAs that are cycle-exact.
The jitter caused by out-of-order processing makes it unsuitable for a lot of safety-critical, real-time embedded uses.
There is a lot more legacy x86 in the embedded world than one would think.
True, but that market is extremely tiny.
I think there are Pentium clones still being produced.
Still, the legacy x86 ISA support overhead on a modern core is minuscule.
“I think there are Pentium clones still being produced.”
There is a catch here: the unit price of a 16-bit x86 on a modern-day FPGA is cheaper than a Pentium clone, and it requires less power to run.
16-bit x86 has pretty much become the domain of FPGAs.
There will be a point where an FPGA is able to run 32-bit Pentium x86 better than an ASIC Pentium.
64-bit x86 chips at this stage appear out of reach for FPGAs, in both performance and FPGA size requirements.
In embedded with hard-realtime requirements, you don’t in fact find emulation. An FPGA implementation of a chip is not emulation; with emulation you have a hard time getting stable timings. Yes, the jitter problems.
FPGA is most definitely EMULATION, mate. It is not cheaper. And nobody is doing any modern 8086 systems.
I think history tells us that corporately / commercially the right decisions were made for Intel and AMD, decisions that basically secured a duopoly at a time when the market was facing fragmentation. Good for Intel and AMD, not so good for the consumer.
FWIW, I see signs of similar things happening right now, as the major players appear to compete and “we the general public” bang on about which is the best and which one wins, we are witnessing a quiet lockout. I suppose you could argue it’s all a sham designed to avoid legislative intervention.
We should not be surprised; big organisations are almost exclusively run by lawyers, not technicians or engineers!
cpcf,
RIP Sun Microsystems (at least this is who I was picturing)
I do not think it is fair to call Jonathan Schwartz a lawyer. He was more of a software guy, though I think his degree was in mathematics.
I guess my post wasn’t clear but I was picturing Sun Microsystems as an organization favoring engineers over lawyers. I meant it as a contrast to other big companies that are “run by lawyers”.
Alfman, bullseye!
The difference is that, last time, x86 won against everybody and the duopoly was able to take over the market. This time around, it seems very much that x86 has been relegated to a subset of the market (not the fastest-growing or most profitable parts) and that other platforms (ARM, RISC-V, and GPUs) are leaving x86 in the dust. In that scenario, the duopoly is not so threatening to us and is not nearly as beneficial to the duopoly. Intel almost entirely took over the computer market at one point. They are very, very far from doing that now. Never mind AMD; just look at the market cap of NVIDIA. Even Qualcomm is worth twice as much as Intel.
RISC-V is not leaving anyone in the dust LOL.
DC (the datacenter) is still the most profitable segment, and x86 still has a big presence there.
We’re just witnessing the usual consolidation of a mature market into duopolies. For anything other than ultra embedded and ASIC, we’re going to see an x86/ARM world for the foreseeable future.
Xanady Asem, I think a lot of observations other than the duopoly are really just people wishing for something better or different. The big end of town will eventually go quantum-something; that’s a niche market, but I bet companies like NVIDIA are all in on it. The rest of us are going to get a new version of the same old same old for the foreseeable future.
For-profit companies exist to make profit, not to masturbate about technology.
You need good legal, financial, sales, marketing, and management teams to sell engineered products.
Xanady Asem,
Companies like apple prove they can do both.
Apple is first and foremost profit-driven. E.g., their supply-chain integration and management side ranks higher, in corporate priority, than their engineering teams.
Xanady Asem
The point was that they are not mutually exclusive. Apple is undoubtedly one of the top tech porn companies; tech outlets drool over every press release. “Masturbating over technology” is very much in apple’s wheelhouse. That said, I’ll agree that engineers are not the ones calling the shots there, which goes to cpcf’s point. Anyway, Apple can afford to ignore the critics, they just need consumers to continue buying their wares. The biggest risk I see for them right now are the antitrust lawsuits, but governments might be sufficiently corruptible to make these go away.
Am I the only one who remembers that infamous x86S document, where Intel portrayed itself as the sole inventor of the x86_64 architecture? I mean: the concept was sound, but trying to rewrite history…
Reminds me of how, in 2017, they announced they’d be killing off UEFI CSM by 2020, then said again in 2023 that they’d be doing it by 2024 for server platforms and, now, ASUS is saying the Intel 500 chipset doesn’t support CSM when using onboard video… all I see when I read that is “If AMD has no problem keeping CSM, this is another reason to buy AMD”.
Intel made many mistakes over the years but their huge market lead allowed them to write off billions and keep going. Now Intel is in big trouble because TSMC has the superior technology.
I think one of Intel’s mistakes was when they introduced the Pentium CPUs and the PCI bus. Intel should have made Pentium CPUs start in native 32-bit mode, and published the assembly code for reverting back to 16-bit (un)real mode. Obviously, BIOS manufacturers would have used this code to keep backwards compatibility.
I think even today the latest CPUs still start in real mode, even with UEFI; is that correct?
This is an extremely flawed conclusion based on a very faulty premise.
There are exploratory projects within companies all the time that go nowhere; x86S is no different.
It simply means that there is no market need to drive the required investment.