XScale is a microarchitecture for central processing units, initially designed by Intel, implementing the ARM architecture (version 5) instruction set. XScale comprises several distinct families: IXP, IXC, IOP, PXA and CE (see more below), with some later models designed as SoCs. Intel sold the PXA family to Marvell Technology Group in June 2006. Marvell then extended the brand to include processors with other microarchitectures, such as ARM’s Cortex.
With ARM dominating the smartphone and tablet revolution, and with Windows and Apple moving to ARM, we can say, with the magical superpower of hindsight, that Intel selling its XScale business to Marvell will probably go down as one of the biggest blunders in technology history.
The entire computing world is slowly moving to ARM – first smartphones, then tablets, now laptops, soon surely servers and desktops – leaving Intel (and AMD, for that matter) in a terrible position.
AMD has an ARM-based CPU solution in its Opteron line and could easily work its way into the desktop, even though this would be a death knell for its new Ryzen work.
I agree that x86 is nearing EOL; all it really needs at this point is for some motherboard makers like Gigabyte and Lenovo to be willing to be first to market with such devices that are not server-based but ready for mass-market consumers… OR is the PC era as we know/knew it actually just coming to a natural end, with desk space likely to be taken back up by magazines rather than clunky desktop cases?… Time will tell.
AMD has a partnership with ARM to create ARM-based server CPUs, so I don’t think they’ve chosen the wrong path. After all, AMD has always used top-notch and innovative architectures, yet lagged behind in implementation, mostly because it had no access to better fabs.
With most x86-compatible processors since the Pentium, the way instructions are executed has been greatly abstracted: what actually goes on under the hood hardly resembles the machine code at the front end. So I’m guessing the Ryzen family wouldn’t need much apart from a different front-end decoder to support an ARM instruction set all the way through.
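To illustrate the idea (a toy sketch of my own, with invented names; no relation to how any real decoder is organized): the execution back end only ever sees internal micro-ops, so in principle swapping the ISA means swapping the decode tables.

    /* Toy sketch of a decoupled CPU front end: two ISA decoders emitting
       the same internal micro-ops. All names here are invented. */
    #include <stdio.h>

    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop;

    /* x86-style front end: one memory-operand instruction cracks into
       several micro-ops. */
    static int decode_x86_add_to_mem(uop *out) {
        out[0] = UOP_LOAD;  /* add [mem], reg: load the operand... */
        out[1] = UOP_ADD;   /* ...do the add...                    */
        out[2] = UOP_STORE; /* ...write the result back.           */
        return 3;
    }

    /* ARM-style front end: a load/store ISA maps closer to one
       micro-op per instruction. */
    static int decode_arm_add(uop *out) {
        out[0] = UOP_ADD;   /* add r0, r1, r2 */
        return 1;
    }

    /* The shared back end neither knows nor cares which ISA fed it. */
    static void execute(const uop *ops, int n) {
        for (int i = 0; i < n; i++)
            printf("executing micro-op %d\n", (int)ops[i]);
    }

    int main(void) {
        uop buf[4];
        execute(buf, decode_x86_add_to_mem(buf));
        execute(buf, decode_arm_add(buf));
        return 0;
    }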
x86?
Already dead.
What about x64?
Well, technically it’s X86-64
Which is because X64 is also X86-capable.
I’m sorry but bull, X86 still ROFLstomps ARM by a HUGE margin. Why do you think they are now up to octocore ARM chips, with half being high performance and the other half low? Because once you try getting any real performance out of ARM it blows its power budget to hell and thus kills any reason to use ARM in the first place!
Meanwhile Intel has quad-core X86 chips that run as low as 4 W, AMD isn’t far behind, and both companies’ X86 offerings just blow anything ARM has out of the water when it comes to performance. As people do more and more with mobile devices? They are gonna want the same kind of performance they get from their desktops and laptops, and unless someone comes up with a radical new ARM design that can magically boost performance without blowing the power budget? I’d say if anything ARM is the one running out of time as Intel and AMD both head to single-digit-nm production.
Nope, you forget one (or two) important factors in the formula: the north and south bridges. They have their own impact on overall consumption.
http://www.tomshardware.co.uk/asrock-e350m1-amd-brazos-zacate-apu,r…
http://www.tomshardware.co.uk/Atom-Athlon-Efficient,review-31253-5….
http://www.tomshardware.co.uk/Atom-Athlon-Efficient,review-31253-6….
http://www.tomshardware.co.uk/Athlon-Atom-Nano-power,review-31361-1…
http://www.tomshardware.co.uk/Intel-Atom-Efficient,review-31216-18….
…
Wow…talk about cherry picking! Brazos? Really? A NINE-year-old chip? Or 8-year-old Intel Atoms? It’s not 2011, dude; in case you missed it, AMD and Intel have made some pretty huge leaps in technology since 2011, as anybody who has used the new Celeron quads or seen what the new Ryzen mobile can do can attest.
I know you are an ARM fanboy, but trotting out NINE-YEAR-OLD tests really doesn’t help your argument. Heck, AMD hasn’t sold Brazos since the netbook craze died, and does Intel even sell first-gen Atoms to embedded customers anymore? Last I heard they quit making those in 2012.
It was an illustration, not something to be taken literally. It was from memory; I didn’t look for a more recent comparison, but you get the idea.
So I should trot out some NINE-YEAR-OLD figures for ARM then? Just for “illustrative purposes”, mind you, cuz news flash: ARM chips in 2011? Big pile of poo. Hell, I have an old ARM PDA from around that time in a drawer somewhere, and I can tell ya that ARM sucked some big hairy ones back then; sure, the power usage was great…because they didn’t do squat, and what they did was slooow.
Doesn’t change the fact that sales of cellphones and tablets have been dropping like mad lately…hmm, why could that be? Could it be, ohh I don’t know, that ARM has run into a wall just as X86 did WRT speed, so customers aren’t feeling any improvements in their devices to warrant the cost of replacement? Maybe you’d like to explain to the class why ARM has suddenly had to go from dual- to quad- to deca-core in 1/20th of the time it took X86? Couldn’t it be that ARM has run into a wall where they blow their power budget if they up the MHz, so they are trying to solve it like AMD did in 2012 by just throwing more cores at it?
ARM has been around since 1987…that is 31 years, dude. 31 years. In that time X86 went from 1 MHz to 4000 MHz, from 1 core to 32 cores with 64 cores already in sight, and from 1 core taking up to 145 W to quads that take only 4 W with 6 W peaks while doing easily 30 times the IPC of that original core…I’m sorry, but while ARM has made some improvements, if they had done half of what X86 has done in the same time period we would have 32-core ARM units that suck half a watt…but we don’t.
Instead we have deca-cores that will kill a phone in less than 30 minutes if all the cores run at full speed (wow, that is useful), and ya might want to look up “dark silicon”, because that is a big deal with ARM right now: simply shutting off more and more of the chip until it’s absolutely necessary, to try to eke a little more battery life out…meanwhile you can run that X86 all day on that 4 W, and it’s only getting faster while the power budget gets smaller…yeah, sorry, but unless ARM comes up with some new way of doing things? X86 is just gonna keep on coming.
Now you tell me which one customers will want come 2022 in their mobile devices…a 16-core ARM that spends 90% of its life asleep because once you start using it your battery life disappears? Or a sub-2 W Intel or AMD quad that feels just as snappy as a new laptop and can do everything their laptop can do, in their pocket? Remember, we’ve had a lot of chips that everyone thought would unseat X86: SPARC, DEC Alpha, even Itanium…didn’t happen, did it? People want their mobile devices to do more and more and more…and I’m sorry, but ARM doesn’t look like it’s gonna keep up with demand, neither with the CPUs nor the GPUs.
Hey, calm down, dude.
I still own 2011/2012 7″ Android netbooks using the WonderMedia 8650/8880 chips under Gingerbread/Jelly Bean. And you know what? They work, plain and simple. My main phone is a 2011 HTC Evo 3D and my backup phone is a 2008 Nokia N95. And you know what? They work, plain and simple.
The problem is mostly people always expecting their devices to do more. Do what, exactly? Bitcoin mining? 3D rendering? Real-time ray-tracing? No: Facebooking in HD, Candy-crushing in full color gamut, tweeting in a parallax material user interface.
The amount of power now required to run a simple f–king phone is just abhorrent. I used to work on an Atari ST running at 8 MHz with just 1 MB of RAM, and it worked. It had its quirks, but it worked. More power was sometimes needed, but only for usage that actually demanded it.
Now kernels have grown so complex and software so unoptimized (besides a few select graphics rendering pipelines written in C or even assembler) that ARM CPUs have to compensate for the Java/Dalvik/ART/Kotlin VMs with ever more power, especially since they have to mimic regular desktop/laptop operating systems.
Just ask yourself why those embedded devices (9-12″) now take 2K retina displays for granted while Intel laptops are still sold at 1366×768 (15″) or 1600×900 (17″). You need power and bandwidth to animate that many pixels, yet it is now a requirement for ARM chips but not for Intel or AMD CPUs?
The baseline expectation of ARM throughput is unbelievable: you are demanding a level of commitment from a chipmaker that is actually delivering (I’m pretty sure some Mali T6xx chips outperform Intel’s HD Graphics offerings), but through a skewed perception of usability. Why always compare the two?
Intel failed miserably in the embedded space (smartphones, tablets) even though their Atom chips were quite good, yet ARM fits that “niche” pretty well. On the other hand, Intel and AMD shine on desktops/laptops, and even gamer rigs for that matter, where ARM would hardly ever gain acceptance.
So, what’s up, doc? I told you that pure chip TDP isn’t enough when you’re comparing apples to oranges, and you claimed my illustration was biased because it was based on 9-year-old articles. Show me how companion chipsets (not necessarily named “north and south” bridges anymore) no longer play a role in overall power consumption.
You are still squarely focused on how ARM chips compare to their Intel counterparts, taken out of a complete system, doing different things. Benchmarking is a thing, and for that matter benchmarks have been shown to be dubious, especially considering the target use cases. Yet you still present them as valuable and spot-on. I beg to differ.
ARM chips have their reasons to exist, Intel chips too. I don’t want to burst your bubble, but you look like a monoculture proponent: everything should stick to your beliefs, and everything else is dead wrong and should either convert or disappear. Such an old-fashioned attitude; I too was once in the Atari vs. Amiga wars, too blind to understand that both were legitimate.
So instead of infuriating yourself over quite a volatile subject that is changing and reshaping almost daily, just accept that the theory isn’t as close to reality as it might seem. I just found the Tom’s Hardware article I was originally looking for:
http://www.tomshardware.co.uk/dual-core-atom-330,review-31505-10.ht…
I mean, that’s the one I was looking for in the first place. They compare apples to oranges in the way they benchmark “performance score per watt”, but for the whole system, despite having noted that the Atom 330 was handicapped by its chipset, which had a greater TDP and actually needed a fan. Performance-wise, the Core 2 Duo might be stronger, but at a pure TDP cost (61 W) far outstripping the dual-core Atom’s (8 W).
That is to say, the Core 2 Duo needed almost 8x more power but was never once 8x faster than the dual-core Atom 330 chip.
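To put rough numbers on that claim (pure TDP only, using the figures cited above): 61 W / 8 W ≈ 7.6, so the Core 2 Duo would have to finish the same work about 7.6 times faster than the Atom 330 just to break even on performance per watt; any smaller speedup and the Atom wins that metric.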
And that’s not even mentioning how crippled the Atom setup was, with a 2 GB DDR2-533 maximum, compared to a less restricted Core 2 Duo machine, because of artificial segmentation imposed by Intel to keep Atom CPUs from marching on their stronger offerings. I don’t buy the “system power consumption” excuse for the RAM limitation, given the poor choice of the 945G chipset, which nVidia later addressed with the Ion platform.
TL;DR: factor everything into the formula, not only the things that make your favored platform shine.
It’s probably simply because phones and tablets are now well in the “good enough” category for many people, hence they don’t feel they need a replacement; just like PCs for a ~decade or so…
Though that impact should be fairly minimal by now, especially since north bridges “disappeared” into the CPU a ~decade ago…
Let’s look at the facts here. Smartphones and tablets were always on ARM – so they haven’t moved anywhere.
For laptops the verdict is still out, but so far none of Microsoft’s ARM products have been in demand. That leaves what? Chromebooks? I’m not really sure what hardware they run, but only people surfing and/or SSH’ing to more powerful computers seem to use those. Maybe some schools here and there, because they can’t do anything. I’ve yet to meet anyone using an ARM laptop, so whatever trend you are setting up here… well, it didn’t reach my peers.
Desktop and servers? That’s all Intel as far as I can tell. Maybe you can find some exotic ARM Linux servers somewhere..
Kind of borderline what it’d get classified as. But there are also the Pi variants.
A quick Google search brought me to an article titled “Best Chromebooks 2018” where everything listed has an Intel CPU.
Yes, I think Thom is exaggerating the trend here. Sure, there’s more ARM hardware than in the past – it used to be confined to mobile devices, whereas it can now be found on server and desktop/laptop hardware. And sure, there are some advantages to having more power-efficient chips in the data center.
But to extrapolate from that and conclude that traditional x86-based chips are doomed and that Intel are fools for not jumping on the ARM bandwagon? Yeah, I think that’s stretching a bit…
There were x86 variants of smartphones and tablets as well (mostly Intel Atom and nVidia Ion “era”).
Early Nokia Communicators were also x86, some low power variant of 386 IIRC.
He’s reacting to the Apple-going-ARM-only rumors. Apple’s ARM chips are really in a class by themselves. Their performance could replace Intel in low- to medium-end laptops today.
…but Apple doesn’t make low- to medium-end laptops, at least not price-wise or marketing-wise.
But they do hardware-wise. You have to venture into the “Pro” line for anything with half-decent performance.
There are 3 modern ARM processors ready to fight in this market (OK, only one of which you can buy right now).
http://b2b.gigabyte.com/ARM-Server
https://www.avantek.co.uk/store/arm-servers.html
Some 2020+ supercomputers are based on ARM.
https://www.scaleway.com/instantcloud/
https://twitter.com/eastdakota/status/976560820611031040
etc..
Hi,
Over the years there have been many desktop and server CPUs (68000, SPARC, Alpha, VAX, PA-RISC, …) and 80×86 killed almost all of them, including several of Intel’s own CPUs (iAPX432, i860, StrongARM, Itanium). The only real survivor is PowerPC (which has so much market share that a lot of people forget it actually exists).
There are 2 reasons for this. First, traditionally, PCs are built from standardised components, allowing pieces (video cards, RAID controllers, network cards, RAM, …; up to and including the OS) from many manufacturers to be combined. This gives far more flexibility and (through competition between device manufacturers) better peripherals, and makes the systems easier to repair (e.g. if a device fails you can replace that device alone). Of course “componentisation” requires a lot of effort and cooperation from everyone involved – standardisation groups, device manufacturers, motherboard manufacturers, OS vendors, etc.
The second reason is backward compatibility. For 80×86 you can be “relatively confident” that in 2 decades’ time you can still run software that your business depends on, and you won’t be screwed because the CPU you were using became deprecated and the company that wrote the software no longer exists.
None of the normal ARM vendors (excluding AMD) have ever had to deal with either of these things. Their products have always been “everything is a single product” where none of the pieces are standardised or interchangeable and backward compatibility didn’t matter (and didn’t exist).
This is why people have been saying “OMG, 80×86 is doomed, LOL!!!111one” for the last 15 years and there still isn’t a single plausible hint that it’s actually going to happen.
Sure, ARM could find (and has found) a few rare/niche places where a lot of computers are being used for an extremely specific purpose where “componentisation”/standardisation and backward compatibility don’t matter, and these things make occasional headlines, but they’re not a threat to 80×86 at all. Ironically, if one of the ARM vendors had any meaningful success in the desktop or server markets, all the other ARM vendors would jump on the same bandwagon, ruin the profits for the first (and for each other), and destroy ARM’s hopes of anything more than temporary success.
The only likely threat to 80×86 is AMD (who do have experience with desktop and servers including standardisation and compatibility); but even for 80×86 they still aren’t a huge threat to Intel (despite Zen) and for ARM they’re mostly non-existent, and AMD have cancelled (“postponed forever”) all of their ARM plans.
– Brendan
I agree with you in great part, but let’s get a bit more realistic about the needs of users.
For 90% of PC users, whether the architecture is Lego-like blocks, as in traditional PCs, or an all-in-one, as in the current generation of video game consoles, matters not at all. They only need to open Microsoft Office or games.
Alas, in notebooks all you can upgrade is the disk and RAM; everything else is integrated. So this level of integration won’t be a problem for ARM vendors, because they don’t need to deal with it; they don’t need to enter the Lego-like PC market, they can keep making all-in-one machines.
I think the moment technology starts to ‘care/cater’ to those types of people is the moment innovation dies.
Brendan,
I agree with you on several fronts. You identify several reasons why x86 managed to keep its monopoly in the past. However, some of those underlying factors are changing, which could shift the tides away from x86 in the future.
ARM performance continues to improve. This obviously poses a risk to Intel’s high-performance market share. These past several years Intel has hit performance barriers in CPU production and instead focused on more cores, but more cores are not an advantage over ARM. Meanwhile ARM is closing the technology gap. In some cases merely being more energy/cost efficient could favor ARM for new datacenters.
On backwards compatibility, you are right that it has been a huge reason to keep x86, particularly with windows. However there are two trends that are changing:
1. The ratio of portable software is increasing, and this decreases the dependence on x86 processors over time (see the sketch after this list). The question we need to ask is when the threshold for critical mass will be reached.
2. Windows itself has become less relevant, particularly on the server side, where Linux dominates and many Linux distros are ready for ARM today. People like myself are ready to pull the trigger and get ARM servers when they become affordable. Vendors are not meeting the demand for affordable, commodity ARM servers today, but this will probably change as more vendors enter the ARM server market. I think this is the market where x86 is most at risk of losing market share in the near/medium term.
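A trivial illustration of point 1 (my own toy example, not anyone’s real code): source written against portable interfaces rebuilds for ARM with nothing more than a different compiler invocation, e.g. plain gcc versus the aarch64-linux-gnu-gcc cross compiler on Debian-style systems.

    /* portable.c: nothing here cares which CPU it runs on.
       Build for x86-64: gcc -O2 -o demo portable.c
       Build for ARM64:  aarch64-linux-gnu-gcc -O2 -o demo portable.c */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t sum = 0;
        /* Sum 1..1000; fixed-width types keep the result identical
           on every architecture. */
        for (uint64_t i = 1; i <= 1000; i++)
            sum += i;
        printf("sum = %llu\n", (unsigned long long)sum);
        return 0;
    }

The x86 dependence only bites when software ships as binaries or leans on architecture-specific code, which is exactly the ratio that keeps shrinking.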
There is an ARM-on-servers spec that many large companies have agreed to support; it would cover ACPI, EFI, and the other things a standard x86 server supports.
However… adoption has been non-existent. But the spec is out there! It could happen!!
When Apple uses ARM, it will be more in the all-in-one device category, with extremely limited hardware configuration.
SoftIron sells devkits for it now. I’m using an Overdrive 1000 as my home server with Debian. Everything from openSUSE to OpenBSD supports the hardware fully at this point. The only missing piece is an ARM core with sufficient single-threaded performance to justify mass-producing rackmount gear out of it.
Hi,
Microsoft’s ARM devices are probably worse – not just “all in one device” but also locked down via UEFI Secure Boot to ensure the user can’t jailbreak it and replace the OS.
I think that’s my biggest fear – a world of walled gardens where you can’t replace or change anything without contributing to your captor’s profits; with everyone chanting “Yay, ARM is more competition” while being herded towards confinement.
– Brendan
Same fears. Gone is the openness we had on the PC.
ACPI, IMHO, is terrible. Except, well, when you compare it to any other solution. :/
I believe Red Hat was another big proponent of ACPI as well.
The technologies can be used to create a really closed system, but they can also be open. And let’s face it, existing ARM systems suck for openness.
Brendan,
+1.
I’ve been warning against this for years. Cryptographic lockouts against owner control are dangerous on so many levels. If we fail to protect our rights as computer owners, we will set into motion a reality where “our computers” remain under the full control of manufacturers and the governments that oversee them.
To make matters worse, locked hardware hurts the viability of independent alternatives. If significant numbers of x86 computers had been locked down, users would not have been able to try out Linux on their computers like I did, and it almost certainly would have failed to achieve a critical mass.
The rebuttal always goes along these lines: if you don’t like company X’s restrictions, don’t buy company X’s products, but the flaw here is that it ignores how detrimental monopolies/oligopolies are to the viability of alternatives. If microsoft, google, and apple impede dual booting, then alternative platforms are screwed because they won’t have any commodity hardware to run on.
The other argument is that alternatives should just jailbreak the hardware, but jailbreaking has a lot of problems. It significantly raises the barriers to entry, explicitly makes owners depend on flaws to have control, takes a substantial amount of resources away from small projects, can result in legal challenges, and once the flaws are fixed, owners will lose control and the alternative OS will once again fail to boot.
This is not the way computers should work. We need some kind of computing bill of rights where owners have the explicit right to run what they want on the hardware they own. Owners should be able to compel manufacturers to give owners the keys for their own computers.
But people hate the GPLv3, and they think Richard Stallman is an extremist.
kwan_e,
Stallman is an extremist, is he not? Haha. Some people and companies aren’t entirely comfortable with how far he likes to take things. That said, I would hope the idea that owners get to control their own computers is not “extreme”.
As for GPLv3, a major obstacle has been that key software like Linux neglected to say “GPLv2 or newer”. Using a new license would require the consent of thousands of contributors, many of whom are probably not reachable and some of whom wouldn’t agree to the patent clauses.
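For reference (quoting from memory, so check the FSF’s own text): the standard header that grants the upgrade option reads roughly “you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version”, while Linux’s COPYING file explicitly pins the kernel to v2 only.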
Yup, funny how the GPLv3 tries to phagocytize every piece of software it touches, as if it needs a kind of viral licensing scheme to spread and stay alive.
I just read at http://www.osnews.com/thread?655103 that despite the Windows software being open-sourced, the Linux conversion still struggles. Fear of the origin?
That’s not a problem with GPLv3. GPLv2 has exactly the same problem. Yet Linus Torvalds thinks GPLv2 is the perfect licence for Linux and Git.
As for “viral”, it only asks that you pass on the rights that you received upon obtaining the software to the people you pass it on to. What’s wrong with that?
There’s nothing wrong with passing the rights you received on to the next person. The problem I have is that it also magically affects the supplementary code I might have written in the meantime, code that may well need the original code to run but isn’t necessarily meant to be released into the wild as well.
Freedom should imply something more like the “WTFPL” than the “GPL”. I’ve already stated that’s why my own software is “zlib” or “MIT/BSD” licensed: because I hate being told what to do with my own stuff, and hence I’m not into shoving my terms down other people’s throats.
Because it sounds evil, and it is. Call it “software freedom” all you want, the way politicians invoke “liberal democracy” to the point of puking it several times a sentence, as if to convince people it is the only way to go, despite it not having proved so successful IRL.
By all means, call Linus Torvalds, Linux and Git evil, because Linus Torvalds explicitly chose GPLv2 for all of his stuff.
kwan_e,
It doesn’t mean he made the right choice
Just kidding, I don’t have a problem with the GPL, but I do think it caused some pretty serious problems later on by not saying “GPLv2 or later”. It means that today a project that is open-sourced under GPLv3 is not allowed to share code with GPLv2-only projects like Linux. This sucks; this kind of friendly fire was probably never intentional, but it was an oversight from the early years that’s hard to fix today.
Most of the code in Linux is not from Linus himself but mostly from corporations and others who are required to submit code under GPLv2. Linux is not a democracy, but if it were, I’m genuinely curious what kind of license all of the contributors would choose if it could be put to a vote. Not that it could actually happen, but just food for thought.
Shouldn’t be too difficult to have Linux contributors cast a vote on an “if it were up to you, what license for Linux would you choose?” page…
That’s kind of arrogant and entitled as I see it.
Copyright is All Rights Reserved by default, and you’re complaining about some people putting an “as long as you do likewise” condition on the pre-emptive license grant they offer you for their creation, when they were under no obligation to offer you a license at all?
How is that any different from complaining about having to pay for entries in Steam’s catalogue when you really really want them for free?
(In both cases, you’re complaining about having to pay the asking price for something with an effective per-unit manufacturing cost of zero.)
In fact, your argument sort of reminds me of the mindset of various companies over the years that tried to get away with GPL infringement by trying to get courts to declare GPL invalid. (If that actually happened, the code they’d taken would revert to All Rights Reserved and they’d be even worse off.)
Nobody gets to whine about people wanting to sell the result of their labour for money, so why should anyone get to whine about people wanting to “sell” the result of their labour for reciprocity?
Ampere A1, Centriq 2400 and ThunderX2 are SBSA compliant.
Some standard would be nice. I mean, I can install a random x86 OS on a random x86 machine and it just boots.
For ARM, things are more complicated, and you need something different for different hardware. Nowadays it’s moving towards only having to bring an SoC-specific bootloader and a board-specific device tree, after which you can boot a mainline kernel. So in many cases it’s no longer necessary to have a device-specific kernel for your ARM device, but still… it’s not as universal to boot as x86.
But what would happen if ARM manufacturers came up with a standardised way to boot? (Perhaps U-Boot or UEFI in ROM?) Would that help the adoption of ARM-based computers?
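The device tree is what makes the “board-specific” part above work: instead of compiling board knowledge into the kernel, a driver asks the tree the firmware handed over. A hedged sketch of what that looks like on the kernel side (the “acme,uart” compatible string and its property are invented for illustration; the of_* calls are the real Linux API):

    /* Hypothetical kernel module: discover board details from the
       device tree instead of hard-coding them. */
    #include <linux/module.h>
    #include <linux/of.h>

    static int __init dt_demo_init(void)
    {
        struct device_node *np;
        u32 freq;

        /* Find a node the bootloader's device tree described. */
        np = of_find_compatible_node(NULL, NULL, "acme,uart");
        if (!np)
            return -ENODEV; /* this board simply doesn't have one */

        /* Read a board-specific property instead of a constant. */
        if (!of_property_read_u32(np, "clock-frequency", &freq))
            pr_info("dt_demo: uart clock is %u Hz\n", freq);

        of_node_put(np);
        return 0;
    }

    static void __exit dt_demo_exit(void) { }

    module_init(dt_demo_init);
    module_exit(dt_demo_exit);
    MODULE_LICENSE("GPL");

One mainline kernel binary can then boot on many boards, which is exactly the step towards x86-style universality being asked about.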
Andre,
I would think so. I certainly don’t want the OS booting situation on ARM computers to end up like Android phones’. I gather the latest Chromebooks are using coreboot, which has the huge benefit of being open source.
https://www.coreboot.org/Chromebooks
Microsoft is pushing UEFI on ARM, but their OEM licenses explicitly require manufacturers to lock owners out, which clearly defeats our purpose of having a standard in the first place.
Android, smart phones and tablets. The computing landscape changed a lot over the past few years. And then I wonder, can a new open ARM platform still take off in a world where most users don’t have a clue what an OS is?
In a way, the Raspberry Pi did this.
Such a standard already exists and has been adopted by server ARM manufacturers. Google: SBSA / SBBR.
For server hardware it has been standardised, but for consumer hardware it has not yet been. While most consumer hardware nowadays uses device trees, the server standard seems to use ACPI to give the kernel information about the hardware. Should consumer hardware adopt the server standard, or should another standard be created?
Hate to break the news, but the second part happens rather frequently even in business software. Save for the big names like Microsoft, Oracle, Intuit and the like, you can absolutely end up in a situation where you can’t run that twenty-year-old software even in X86 land. Consider software written for Windows 9X that hasn’t been maintained since (yes, this is a thing). Sure, the chips are still X86-based, but you can’t really run Windows 95 on them. So what does that leave? Virtualization, naturally, but… oops, the software’s copy-protection DRM (yick!) doesn’t work in a VM due to timing issues and/or disliking the virtual CPU. If the company no longer exists, you can’t run the software in a VM, and buying the old hardware and OS isn’t practical, then it hasn’t done you much good that your CPU is still X86.
Sorry to write this, Thom, but it is another of your “60 percent of the time, I am right every time” predictions.
Starting with Core 2, Intel has had very little competition and has been making money hand over fist with its derivatives.
Any Intel-endorsed alternative CPU might have made inroads into those healthy margins, so it seems a reasonable business decision at the time (10+ years of CPU dominance in a high-margin market).
It’s probably also why Intel crippled Atom so badly, with early versions limited to 2 GB of RAM, so it couldn’t eat into the high-value server and desktop markets.
All in all, I think Intel played it about right, and their numbers back that up.
Yeah, show me a dual-socket server with 40 ‘cores’ and 256 GB+ of RAM that has ARM processors.
Sorry, as much as some of us loathe x86/x64 hardware, especially after Spectre/Meltdown, it’s here to stay and Intel knows this.
Hell, you may as well try to predict that IBM is going to go back to supporting Desktop machines and they’re releasing PPC workstations! Or that Atari is going to come out with a computer / game console… oh wait…
But more seriously, some major things would have to happen. 1) Microsoft would have to die; they are far too invested in x86. 2) Linux would have to become the dominant operating system for non-server use (pretty sure it’s already king for server use). This is because the large majority of software for it is open source, which means it’s easy to port to any CPU architecture. People could, for the first time really, use the exact same software on their laptop, desktop, phone, tablet, speakerhat, etc.
Since I don’t see either of those two happening for a long time, Intel is making the right move. Honestly I think they’re selling XScale because the spectre/meltdown stuff HURT them, like a LOT.
leech,
I don’t think Microsoft is married to x86 as much as you suggest; they seem happy enough to invest in other architectures. It’s actually Windows users, not Microsoft itself, who keep demanding x86 compatibility, their business cash cows in particular. But when users are ready to switch, Microsoft will be eager to accept them into its ARM camp instead of losing them to a competitor.
Modern ARM processors have 32-48 cores and can address 512 GB-1 TB per socket.
Come on, 640K is enough for everybody…
Intel had some nice IP with XScale, but it isn’t like they could have kept going as they had been, because XScale would have remained part of Intel, tied to Intel’s mission of ARM for embedded and x86-64 for desktop and above.
Also Intel knows lots and lots about optimizing for server workloads, so if tomorrow they decided to get into the ARM server space they would only have to license ARM and then map their current microarchitecture to the ARM instruction set.
I am not saying it would be overnight, but it wouldn’t take that long.
And having worked for them in the past, it would not shock me at all to find out that one of the research groups did this a while ago and has kept it up to date.
When you become an ARM licensee, you get an ISA and a CPU architecture tossed in. You don’t have to stick to it.
I don’t see why. ARM licenses its architecture, so if Intel wants to pick up their ARM business once again, they can just buy a license and go from there.
Yeah, Marvell totally took over the market and now Intel is a two-bit player. Oh, wait.
BTW, Intel is still an ARM licensee.
This is the only post needed in this thread.
It’s not like Intel can’t use ARM chips ever again.
As alluded to in other comments, rumors of Intel’s imminent demise are greatly exaggerated. Intel is a business, and it continues to do well. First of all, it has the world’s largest, most efficient, and highest-quality chip fabrication centers; to make sure they are running at peak capacity and keeping shareholders content, Intel has licensed ARM designs and has been LG’s primary ARM supplier for over two years now. (Article:
https://www.extremetech.com/computing/233886-intel-will-fab-arm-chip… ). Secondly, if I recall correctly, Intel still has three different processor design groups that are in constant competition with one another, and those design groups have added, and continue to add, more RISC-type features to Intel’s traditional CISC-based designs. If really needed, I believe Intel’s depth and quality of engineering talent are more than capable of designing and bringing to market their own or derived RISC/ARM-based processors if and when required. Thirdly, and I could not find an article to cite, but I’m fairly certain that when Intel sold off its ARM-oriented assets to Marvell, it retained the licensing rights to fabricate and sell ARM/SoC ARM-based designs.