After missing the early days of the smartphone revolution, Intel spent in excess of $10 billion over the last three years in an effort to get a foothold in mobile devices.
Now, having gained little ground in phones and with the tablet market shrinking, Intel is essentially throwing in the towel. The company quietly confirmed last week that it has axed several chips from its roadmap, including all of the smartphone processors in its current plans.
This isn’t the first time Intel tried to go mobile. It actually had quite a successful line of mobile ARM processors: XScale. These were ARMv5 processors that powered a ton of devices, and I think most of us know them from Windows Pocket PC devices (and later Palm OS devices). Intel eventually sold XScale to Marvell in 2006, because the company wanted to focus on its desktop/laptop and server processors – right before the big mobile revolution happened.
I can’t help but wonder if that turned out to be a really dumb move.
I assume Itanium is in the cross-hairs at Intel, and I look forward to seeing how long it takes them to figure out a way to extract themselves from their contracts with the likes of HP.
Itanium isn’t nearly as bad as people think.
It was just crippled by a terrible, terrible marketing strategy.
I worked at HP at the time they started really pushing Integrity systems.
Usually the old HP PA-RISC systems were faster… at running PA-RISC binaries… which seems logical.
That was the main problem. Large parts of HP-UX itself weren’t even fully native IA64 for a long time.
Windows on IA64 also suffered from the fact that most people ran x86/64 software on it.
Native IA64 binaries on linux, for example, ran pretty well.
Indeed, Itanium did make strong inroads into VLSI, but for most users the Itanium would be a lousy x86 emulator at best. The outrageous enterprise pricing model certainly didn’t help.
AMD came in with 64-bit extensions for x86 and swept the floor on both counts. This doesn’t mean it was a superior architecture, but it undeniably had much broader appeal among consumers, who now had affordable upgrade paths that natively supported both newer 64-bit and older 32-bit x86 code.
I’d still like to own an Itanium workstation at some point.
There is no denying the bang for the buck that you get with x86, but that doesn’t make the architecture any nicer.
We’ll see what’s going to happen with AMD and the ARM side of Zen.
And what operating system are you going to run on it?
There is almost no way to get anything modern running on IA64. Mostly older server OSes which are nearing, or have already reached, EOL. The last major Linux distribution that still actively works on IA64 is Gentoo.
OpenVMS runs on Itanium, and there was just a recent release of it (including a story here on OSNews).
It’s the 21st century! Please let’s stop confusing ISA with microarchitecture, people.
Fast-forward 10 years and we still have 32-bit OSes and applications, even though memory sizes have gone through the roof.
That’s a much bigger problem on Windows than on other platforms. It was very late to the 64-bit game (Vista was the first Windows with decent 64-bit support) and even later in actually pushing for 64-bit.
No, it is worse.
The whole EPIC instruction set and philosophy, the overengineered, design-by-committee, put-everything-in concept, was a total failure.
They based their early performance projections on small, hand-coded assembly routines, as is done for DSPs.
Modern workstation/server CPUs need highly adaptable OoO execution and memory queues, plus decent code density. That is something the Itaniums could not achieve.
Itaniums were only barely usable because Intel poured tons of cache onto the die.
Treza,
That’s interesting, because many would say that x86 is a design failure. In a fair fight, with equal resources and talent, it is unlikely that x86 would have lasted so long. However, due to great timing and the establishment of an early Wintel monopoly, legacy software dependencies continue to assure a very strong position for x86 in spite of its deficiencies.
What? Windows didn’t come along until much later. And I would say Intel dominance, not monopoly. We have never had an x86 monopoly. A majority, yes, but not a monopoly.
darknexus,
Just to clarify, it’s common to use the term “monopoly” when referring to the dominant player, even when they’re not a pure monopoly with total control. Consider that the AT&T monopoly was broken up even though it wasn’t a pure monopoly. MS was fined for abusing its monopoly, and for that matter so was Intel, even though neither ever had a pure monopoly either, etc.
Semantics aside, wouldn’t you agree that if we took away the market popularity and economies-of-scale advantages of x86, it would be much harder for x86 to compete against modern counterparts? This battle over computer architectures is being fought over economies of scale rather than technical merit. For example, PowerPC was generally a superior architecture and would have beaten x86 all else being equal, but it didn’t have access to the mountain-loads of money being thrown at x86.
To the extent that we can throw tons of money at making x86 perform well, then great, but there’s also an opportunity cost: we could have had something better (and more efficient).
Acorn had the right stuff. They were, literally, crushed. Far from ‘perfect’ markets.
Acorn morphed into ARM Holdings. That would easily be the most successful ‘crushed’ in computing history.
If Acorn hadn’t been oligopolistically crushed back then, we’d be enduring less of a ‘bleeding’ break in the architecture continuum by now.
Acorn [sorry, ARM] has no big problems with continuum engineering.
Sometimes, when an argument has been continuously proven wrong for 30 years, it is time to accept the possibility that the argument is actually wrong.
LOL.
Loads of cash have been invested in it (x86) and x64 over the years, and that is why it is fast now.
JavaScript was “slow” until Google pumped a load of investment into V8, and now it is fast.
It’s pretty much the same with anything: something is bad until somebody puts a load of money into it.
I drive a diesel car. Its performance is probably better than petrol cars of the time (2006) because lots of money was put into making diesels better, even though they make no sense on paper unless you are building trucks.
Well, you must live in that alternative universe where chips and microarchitectures competing with x86 are/were developed for free, at no cost whatsoever.
Nope, never said that. Stop misrepresenting what I said.
“This has had a lot more investment” does not mean that the other(s) had “no investment whatsoever”. It would imply that the other(s) had less investment.
Obviously Intel had lots in the coffers after the explosion of the PC market and could pump money into research.
ARM is doing well now because it has a lot of investment… just like x86 and x86-64 had last decade.
Wooooshhh
Whoosh!!!
Look I have more exclamation points!!!!!
I must be correct!!!!!!!!!!
I have no idea where you got the correlation between exclamation points and correctness. But then again, your mind works in “mysterious” ways.
Anyhow. I guess you agree with me then; it was kind of idiotic to blame x86 for requiring money to be highly performant, since that is intrinsic to any microarchitecture. At least you did not indict x86 because its designers *gasp* breathe air.
Many things are mysterious when you are an idiot.
No worries, I don’t think any less of you for admitting your struggle.
The insult doesn’t really work …
Agree, Lucas_Maximus. And on Apache, and on and on. FOSS is not a success story because of a lack of money, but because of a naturally evolved way of working in concert [most of the time].
Mmmh… VW? [Seems they cut quite a few corners]
I wish I could buy more diesels in the US. Investment in diesel is minuscule compared to both the market for and the investment in petrol-based engines. I would also bet that at this point more money has been tossed into hybrid vehicles and fully electric vehicles (both inferior to diesel) as well.
tylerdurden,
Then cite your “proof”.
Simple. The real world. Where x86 is the opposite of a failure.
tylerdurden,
But that doesn’t contradict what I said: “…due to great timing and the establishment of an early Wintel monopoly, legacy software dependencies continue to assure a very strong position for x86 in spite of its deficiencies.”
If only I had quoted the specific part of your original comment that I was referring to with my reply. Oh, wait…
tylerdurden,
So you had to ignore the whole of what I said in order to make your sarcastic remark. Is it too much to ask for something more insightful? Seriously, you might have made a good point by citing something valid. Sometimes I even agree with your opinion, but you speak as though you always have an axe to grind. What’s the deal with that?
Given how you’re the thin-skinned one, with the pissy, angry tone to boot, perhaps you’re projecting your grinding of axes onto me.
FWIW the 386, which solved most of the alleged x86 programming-model sins, was released 30 years ago. Furthermore, from the 486 on, x86 has been on par with, or ahead of, the contemporary state of the art, microarchitecturally speaking.
So the tired “x86 hurr durr is bad” FUD from the 80s hasn’t held much water for over a quarter of a century. I know it’s something some software types love to throw around to be “edgy.” But that “truism” is so out of date that it’s almost hilarious. I was just trying to help you update your repertoire.
Well, this is not a place for discussing microarchitectures…
Anyway, I think the real game-changer microarchitecture was the out-of-order Pentium Pro. Before that, x86s were no better than contemporary RISCs.
The PPro was able to hide most x86 deficiencies: register renaming alleviated the lack of architectural registers, and cracking complex instructions into independently scheduled micro-ops made it more efficient and “superscalable”.
There are also some constraints, such as the strongly ordered memory model, that some architectures tried to discard (DEC Alpha, for example) and that ended up being very useful (Linus wrote many rants/explanations about that).
The MC680xx was seemingly better and cleaner than x86, but it is actually much harder to make fast: the really complex instructions, with several indirections, possible MMU traps, and the need to resume partially executed instructions, are awful.
Updating all flags on all instructions (including moves) is terrible for superscalar execution, etc.
x86 is ugly but its issues can be managed. Itanium is hopeless.
It is remarkable that Intel’s success was by chance. They never understood how to properly design an instruction set; they even needed AMD’s help to evolve x86 into something decent (more registers, 64 bits, no segments…).
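As a purely illustrative sketch of the micro-op cracking idea mentioned above (this is not Intel’s actual decoder; real decoders work on binary encodings and emit hardware-internal micro-op formats, and the “uop” strings here are invented for the example):

```python
# Illustrative only: a toy model of cracking a CISC read-modify-write
# instruction into simpler micro-ops, roughly the PPro idea described
# above. Real decoders work on binary encodings, not strings.

def crack(instruction: str) -> list[str]:
    """Split an 'op [mem], reg' style instruction into load/op/store micro-ops."""
    op, operands = instruction.split(maxsplit=1)
    dst, src = (s.strip() for s in operands.split(","))
    if dst.startswith("["):  # memory destination: read-modify-write
        return [
            f"uop.load  tmp0 <- {dst}",        # read the memory operand
            f"uop.{op}  tmp0 <- tmp0, {src}",  # do the ALU work on a temporary
            f"uop.store {dst} <- tmp0",        # write the result back
        ]
    return [f"uop.{op} {dst} <- {dst}, {src}"]  # register form: a single uop

print(crack("add [rbx+8], rax"))
# ['uop.load  tmp0 <- [rbx+8]', 'uop.add  tmp0 <- tmp0, rax', 'uop.store [rbx+8] <- tmp0']
```

The point is simply that the load, the ALU operation, and the store become separate units the out-of-order core can schedule independently.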
“…They never understood how to properly design an instruction set…”
Huge accusation. [Maybe more like an oligopolistic behavior gone way wrong]
Yes, I concede, it was quite exaggerated.
Intel tried many times to kill x86, because they did not believe in its future either: for example the i860 (named to suggest a relation to x86), or Itanium.
Modern ISAs such as ARMv8 and RISC-V are quite boring, but they are the result of more than 30 years of evolution, trying all sorts of clever ideas that ended badly (register windows, delay slots, branch hints, link/count registers…), and resulting in RISC CPUs whose instruction sets are not “Reduced” but plentiful.
tylerdurden,
So in other words, this is just your opinion, which is fine, I really don’t have a problem with this opinion. Nevertheless, it’s not “proof” that anyone else is wrong.
The x86 must dedicate silicon to overcoming its early architectural choices, and IMHO that’s a problem. Other architectures can learn from and avoid the pitfalls that x86 fell into, but x86 must continue to compensate for its legacy design. On the one hand, we can add arbitrarily many transistors to do this in parallel and transparently to the software, but on the other hand this process inevitably consumes more power, not to mention those transistors could have been used for computing something else.
We can all agree that x86 is “good enough”, especially with world-class fabs and economies of scale, but that doesn’t mean we couldn’t do better. That’s the thing: personally, I think it’s a shame if that something better never catches on because we remain stuck spending all our resources on x86.
It is really hard to determine how large this x86 tax is.
Simpler architectures like PowerPC can be made as fast as x86 (POWER8…), but it is far from easy.
ARMs have no huge lead, except for very simple cores (Cortex-R & Cortex-M particularly).
The new, even simpler RISC-V ISA could also challenge ARM for some applications.
There are some quite outstanding CPUs, like Apple’s, which seems to be able to process up to 6 instructions per cycle. That is no small feat, and probably more doable with fixed-size ARM instructions than with x86. But it is a phone/tablet CPU; it is not easy to extrapolate what kind of laptop/workstation/server CPU they could build.
x86’s complexity is also a barrier to competition.
It is cheaper for Intel to do an x86 than for anyone else: they have all the test patterns, the microcode for x87, and all the deprecated modes.
Finally, there are patents. Gazillions of patents on all the advanced stuff needed in a modern CPU (for example branch prediction), which can also be a means to harm competitors, independently of the chosen ISA.
Treza,
I agree with your points.
I have to wonder if perhaps we are reaching the point where human architects are a bottleneck. Today AIs are used to arrange the transistors that implement an ISA, but at some point they could also exceed our own abilities at designing an ISA itself.
The thing is, we humans can still understand human-designed ISAs; how important is it for an ISA to have this property? How important is it for humans to understand how the compiler for the ISA works? An AI capable of optimizing the ISA, the silicon implementation, and the compiler could theoretically leap past anything we can design, by applying optimizations that are well beyond our comprehension. We would set the optimization goals and come up with mathematical proofs to verify the result actually works, but we wouldn’t necessarily understand how or why.
Assuming such a processor were available and working today, would anyone trust it? Or would that be the beginning of the rise of the machines?
When I was young, I read Asimov and was convinced that the ultimate goal of computer design, software engineering, and all that stuff was to build a computer able to program itself and get rid of programmers.
From that point of view, computers have been so far a total and utter failure.
Some architectures from the past have also relied on the idea that the compiler would be clever enough to adapt to irregularities and keep the CPU occupied during cache misses. It did not work. Actual execution is unpredictable, and Itanium code is full of NOPs. The memory architecture is as important as the ISA.
The new ARMv8 instruction set was certainly designed by iteratively tweaking a compiler.
During the design of AMD’s 64-bit mode, Microsoft helped AMD and influenced the design by compiling Windows with different options.
(http://courses.cs.washington.edu/courses/csep590/06au/projects/hist…, page 16)
Treza,
Yeah, there’s no doubt software held back the Itanium. Some features may have actually been good, but they’re really hard to appreciate without software to use them effectively. I’d wager most Itanium owners experienced all of the costs and very few of the benefits of the platform.
“…Finally, there are patents. Gazillions of patents…”
Gosh, Treza! Knocking at the ‘Frozen ®’ Door. See you all in a century
Who is “we?” If you figure you have the solution, by all means share it with the world.
ARM has proven that they can do “better” than Intel in the mobile/embedded spaces. You can actually pick up fairly cheap ARM dev boards and program them by hand/assembler all you want, since that seems to be the main concern of people who fret over ISAs. Heck, you can spend a few grand and pick up a POWER8 system, if mentally masturbating to highly performant RISC is more of your thing. Cheap Itaniums/SPARCs/Alphas can be had from eBay too.
tylerdurden,
Therein lies the problem: x86 doesn’t compete on technical merit, it competes on the resources that have been committed to it. No one can compete with that, but it doesn’t mean it’s a good architecture. Can you cite any evidence that it would be a superior architecture without its market advantages?
Edit: Thank you at least for backing your opinion with more details!
Here’s some simple evidence:
https://spec.org/cpu2006/results/
Search the performance values for x86 parts vs its competitors.
Cheers.
tylerdurden,
Well, one of the top engineers at AMD responsible for both x86 and ARM engineering agrees that ARM has architectural benefits over x86.
https://www.youtube.com/watch?v=SOTFE7sJY-Q
On the HPC side, some researchers at CERN see performance-per-watt advantages for ARM. It’s more impressive considering the ARM used 40nm fabrication technology versus the Xeon’s 32nm.
http://www.cnx-software.com/2014/10/26/applied-micro-x-gene-64-bit-…
I may be perceived as having a pro-ARM bias, but when I had a few ARM devices for embedded development, I actually abandoned them in favor of x86 because it was so much easier to develop for x86 without the annoyance of cross-compiling and bootstrapping new architectures. Still, I do see some of the advantages, and I’m eager for ARM to become more common in places like the server space.
With enough resources, and thrust, a pig can fly.
Intel is quite good at flying pigs.
The x86 complexity issue is probably less about the extra transistors needed for decoding, microcoded instructions, x87… because that is not that large compared to the die area used by the caches, the SIMD FPU…
The “problem” is that instruction decode is on the critical path; it is a hot, power-consuming part of the CPU used all the time. Hence a disadvantage compared to simpler ISAs, particularly when one tries to decode more than 3 instructions per cycle.
And the instruction set encoding is so haphazard that it is quite difficult (you need to parse many bits) to sort directly executable instructions from microcoded ones.
Intel has used many tricks to alleviate that bottleneck, such as an instruction cache holding predecoded instructions (the “trace cache”), loop buffers that store the micro-instructions traversed in short loops, etc.
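A toy illustration of why that decode step is serial for a variable-length encoding but trivially parallel for a fixed-width one (the length function below is made up; real x86 length determination depends on prefixes, opcode maps, and ModRM/SIB bytes):

```python
# Toy model of the decode bottleneck described above. The instruction
# lengths are invented for the illustration only.

def split_fixed(window: bytes, width: int = 4) -> list[bytes]:
    # Fixed-width ISA: every instruction boundary is known up front, so
    # all lanes of a wide decoder can start in parallel.
    return [window[i:i + width] for i in range(0, len(window), width)]

def split_variable(window: bytes, length_of) -> list[bytes]:
    # Variable-length ISA: instruction N+1's start is only known after
    # instruction N's length has been determined, a serial dependency
    # sitting right on the critical path.
    out, i = [], 0
    while i < len(window):
        n = length_of(window[i:])
        out.append(window[i:i + n])
        i += n
    return out

fetch_window = bytes(range(16))
print(split_fixed(fetch_window))
print(split_variable(fetch_window, lambda rest: 1 + (rest[0] % 3)))  # fake lengths
```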
Treza,
You know, even though I believe a simpler RISC ISA helps reduce implementation complexity, I still find the technology awesome. Intel are extremely clever to make x86 perform as well as it does.
Tylerdurden is our little shark among us big, fatty sardines.
No, no. You’re supposed to use a million monkeys in Parallel to recreate Shakespeare, not just one.
Right now instantiated in the ‘last generation’ of ‘Atoms’ ®.
Once most coders showed a clear lack of interest in learning more than one architecture, the die was already cast.
dionicio,
Yeah, I’m guilty of it myself. Virtually all the software in the computer stores (around me) was for the PC, which directly influenced which platform I’d end up getting involved with. And when I started working as a developer, the IBM PC/clones totally dominated the market, which meant that wherever I went, I’d very likely be working with a PC.
Back then, PCs with DOS were so dominant that I didn’t even know about the alternatives I was missing out on until I started university.
You mean VAX?
The Porsche 911 was transformed from an underpowered and extremely dangerous monstrosity to an absolutely brilliant sports car by 50 years of very expensive R&D.
A car analogy! Just what we needed!
Actually, it was HP who paid for the development of Itanium all along. That was revealed during the HP-Oracle lawsuit. At some point, HP stopped paying and focused on x86 Integrity servers, and that was when Itanium disappeared from Intel roadmaps.
I believe there is some clause that forces Intel to continue to support & manufacture Itanium, as HP have very long support timescales for Integrity hardware. So Intel are stuck on the Itanium Long March for quite some time yet.
A retreat is not what I expected from Intel; hoping it is just a retrenchment.
Intel needs to acknowledge the full panorama of what is going on. They have the money to hire the scientists. Just my hunch, but I suspect a multi-faceted pivoting of the industry: cultural, social, political, etc.
I worked at Intel for a couple of years around the turn of the century. This is just the Intel cycle. They get worried that they are too focused and then they diversify their offerings for a bit; then they see that the P/E is taking a hit and they contract to focus on the most profitable lines.
As for the sale of XScale, Intel got a fair bit of stock out of the deal. I don’t know if they retained it or sold it off, but they made a tidy profit out of it. Regardless, the reason they sold that division is the same reason they killed off their microcontroller lines and are getting rid of mobile x86: the margins are too low.
Some products make more sense for smaller companies to go after. Being really big doesn’t mean you can (or should) do it all.
I do wonder what this means for the Intel Edison and Curie modules.
jockm,
Or they just weren’t firing on all cylinders when they needed to be, as the market was absolutely exploding. If they had succeeded in winning over the mobile market, I rather doubt they’d be pulling out now because margins are too low. If the margins are too low, it’s because their mobile division failed to generate economies of scale.
To me the mobile market is a great example of how crucial timing is in business. Even a multi-billion-dollar company can miss the boat in a new market, spend billions of dollars trying to catch up (i.e. MS and Intel in mobile), and barely make a dent in the market. Timing is everything. In a mature market, the opportunities to change the market are significantly diminished. At least Intel still has other successful product lines to fall back on; not everybody does.
Hi,
As far as I can tell, all the “Intel throws in the towel” stuff is just media hype/bullshit.
As far as I can tell, what Intel have actually done is to axe “Atom-based” smartphone SoCs from their roadmap; instead they are planning to use other CPUs (based on “Core M”) for future 5G smartphones.
– Brendan
Core M is in a higher price category. It is $300, not $30.
Intel is selling Skylake-based Core M TV dongles for $500.
Intel said they would try again in the next mobile generation (5G). But Core M is wayyy outside the power budget for a smartphone SoC.
That’s all they have. Atoms just can’t reach modern ARMs’ performance.
They can downclock Core M and cut some GPU blocks. It is not a technical problem for Intel, but a purely economic one. Core M is a cherry-picked premium product.
Could a RISC instruction set be ‘deconstructed’ from the modern x86-64? Optimized both in terms of hardware and software? That would be a huge effort in terms of static analysis, but the data already exists (a rough sketch of the idea follows below).
Is Continuum now deceased? Just energy-optimizing current cores is, economically, a lost race before it even begins. Silicon has to be simplified.
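The static analysis mentioned above could start very modestly, e.g. by tallying mnemonic frequencies across disassembled binaries. A rough sketch, assuming GNU objdump is available; note that static counts ignore how hot the code actually is at run time:

```python
# Rough sketch of the static analysis suggested above: count how often
# each mnemonic appears in a binary's disassembly. Assumes GNU objdump.
import subprocess
import sys
from collections import Counter

def mnemonic_histogram(path: str) -> Counter:
    disasm = subprocess.run(
        ["objdump", "-d", path],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for line in disasm.splitlines():
        parts = line.split("\t")  # "  addr:" <tab> "hex bytes" <tab> "mnemonic operands"
        if len(parts) == 3 and parts[2].strip():
            counts[parts[2].split()[0]] += 1
    return counts

if __name__ == "__main__":
    for mnemonic, count in mnemonic_histogram(sys.argv[1]).most_common(20):
        print(f"{mnemonic:12} {count}")
```

Dynamic profiling (performance counters, instrumentation) would be the obvious next step, since a mnemonic that is rare in the binary can still dominate a hot loop.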
Good observation re: continuum. Some knock-on effects there to Microsoft’s future strategy.
The most complex instructions, especially those that impede speed scaling, are rarely used, could easily be parallelized, or have lost preeminence in the modern coding environment. Those instructions should be deconstructed and executed via microcode.
It’s about bringing back coherence and continuity, both downscale (now urgent) and upscale (not so urgent).
[Or expurgated], as happened to graphics instructions…
And the most inefficient of the microcode eventually ‘flagged’ for deprecation [at least a (coder) generation into the future].
Deprecation doesn’t imply any extinction on the software side: simply that emulation modules get added to the software stack.
Here the W3C made a huge mistake by abandoning versioning. They sentenced browsers to become bigger and bigger houses of cards.
The right answer is to reestablish versioning and deprecation, leaving eventual legacy to add-on modules.
How could browsers become ‘critical emergency handling’ tools if they are so brittle?
[In my browser, only this and that site should dynamically load the ’90s module.]
Some time back there was an article saying that Intel “throws in the towel” on high-end desktop processors because development cannot be pushed as desired. Now they “throw in the towel” on mobile platforms. Well, what will Intel focus on then? Office desktops with an i3?
Server and IoT lines.
If we look back at how things were then, XScale was nowhere near the x86 lines in terms of number crunching. Intel had a roadmap where Atom would bring power consumption down to XScale levels but with x86 performance. Rather than be forced to compete against their own chip in a year or so, they sold it off while it was still at peak value. Now, we all know (in hindsight) that Atom never fulfilled that promise. In part because of its success in netbooks, Intel lost sight of Atom for mobile and tried to make it a mini laptop chip. Despite the initial promise, it never fulfilled that role either, and ended up too inefficient for mobile and underperforming for laptops.
Possibly stupid. But Intel want _margins_. ARM are not a big company – Intel can’t afford to go there (could ARM build a fab? No, not at all).
TBH Intel are a one-trick pony! It seems mean to say, as they now have SSDs and GPUs (of a sort). The GPUs in particular are helping shift x86 in laptops/tablets.
There’s just not “Intel” sized money in microcontrollers (A market Intel used to be big in).
How many times have they tried to escape?! Itanium, 960, 860, 432 (Yeah, I remember) loads of attempts – none has been x86.
Intel need x86 to stay highly profitable or they’ll be laying off a hell of a lot more people.
Yup. Culturally, Intel cannot do ultra-low margins.
It’s also not the first time they have exited a large market when they knew they couldn’t compete or the margins weren’t worth it (for them). E.g., Intel during the 70s and early 80s was mainly a memory vendor.
Culturally, low margins are the end of the line, Tylerdurden. Everything becomes fragile and, in the quick changes that follow, also alien and detached. The soul is lost; a Company becomes a Corporation.
I still can’t tell if you’re a bot or some random performance artist.
A little sad about your very little classifying box, Tylerdurden
Don’t be sad! At least you don’t have to reply to your own comments this time…
You’re super-escalating now, Tylerdurden. That makes You Pro ®
“The P6 architecture lasted three generations from the Pentium Pro to Pentium III, and was widely known for low power consumption, excellent integer performance, and relatively high instructions per cycle (IPC). The P6 line of processing cores was succeeded with the NetBurst (P68) architecture which appeared with the introduction of Pentium 4. This was a completely different design based on the use of very long pipelines that favoured high clock speed at the cost of lower IPC, and higher power consumption.”
https://en.wikipedia.org/wiki/P6_%28microarchitecture%29
A f_(*ed-up, lazy path seems to have started around NetBurst [Intel had a HUGE fab advantage then].
“NexGen’s CPUs were designed very differently from other processors based on the x86 instruction set at the time: the processor would translate code designed to run on the traditionally CISC-based x86 architecture to run on the chip’s internal RISC architecture.[2] The architecture was used in later AMD chips such as the K6, and to an extent most x86 processors today implement a “hybrid” architecture similar to those used in NexGen’s processors.”
…..
“… until it was purchased by AMD in 1996…”
Maybe, just maybe, the time has come again to sit down and negotiate [that ‘continuum’ thing], as the gentlemen you are.
“…The architecture was used in later AMD chips such as the K6…” I treasure the last of the K6. It has a III on its golden surface. But I can’t find it on Wikipedia. Beautiful seeing that BEAST running!
Found it! “Sharptooth” is the name of the BEAST. A full RISC – IA-32 – MMX part with a then-fabulous 64KB L1 and 256KB L2 on a humble 21M transistors. 18 watts on 250nm engineering. 2 watts just bringing those stencils back to the present?
Gosh! Where are our ENGINEERS?
http://www124.pair.com/qed/qed/cpuwar.html
AND running at 10x its original 400MHz speed.
“When equipped with a 1MB L3 cache on the motherboard the 400 and 450 MHz K6-IIIs is claimed by Ars Technica to often outperform[1] the hugely higher-priced Pentium III “Katmai” 450- and 500-MHz models, respectively.”
Found The Wiki:
https://en.wikipedia.org/wiki/AMD_K6-III
[No Gold, sorry]
Now, that’s PR. [But almost; and the difference in price was so HUGE that they were, effectively, different markets.]
*Fiscally* Intel cannot do low volume, low price. They have 100,000 employees and innumerable offices.
ARM (yes, they license – they don’t fab) have a dozen offices and a few thousand people.
Fabrication is taken up by companies specialising in fabrication, as this keeps wastage to a minimum for the fabs _and_ shifts the risk of a chip failing in the market onto the fab’s customers – not the fab itself.
Eggs and baskets.
Yup. ARM has demonstrated that it has the better business model for dealing with the high-volume/low-margin nature intrinsic to the mobile/embedded consumer market.
And Samsung, and some of the other fab firms, are catching up big time with Intel’s process leadership.
So things are going to get “interesting” for them in the near future.
ARM don’t make their money from making chips. They make money from licensing tech.
Yes indeed that’s true. That’s why I said “Could ARM build a fab… no, not at all”. They couldn’t – they don’t have the resources.
Intel _need_ a market to exist to sell 500->3000 dollar CPUs or they’re screwed! They simply wouldn’t be able to survive with that level of staffing and their cost base.
Maybe the 800-pound gorilla is becoming the last of the Tyrannosaurs. An apex predator, but the prey is now small furry mammals scuttling around in the undergrowth. There just ain’t the food.
“500->3000 dollar CPUs”
Give me a reason, as a consumer, to have a $500 CPU.
My huge WebM clip collection was my last one… gone with the wind.
Maybe here AMD has a bit of a point: specialized processing units [again]. Anyone remember the age of math [co]processors?
The current fad is neural processing units, as add-ons.
Exactly, that’s precisely Intel’s problem.
There’s hard-core gaming (for the moderately wealthy), server loads (although an Atom would run most websites), video/image editing – again, not mass-consumption stuff, and a cheap parallel GPU does it better anyway.
I reiterate… that’s Intel’s problem. As a consumer I *can* find you reasons. But is that mass-market consumption, and even if it isn’t, are there enough consumers to support the 800lb gorilla?
But as with every former fad: excessive ambition, corralling, oligopoly, copy-outlawing, and excessive ‘breathing over our shoulder’ will end in their annihilation.
Games?
Maybe you have a hobby that benefits?
You are RIGHT, Drumhellar. Games are absolutely CONSUMER.
As for ‘dumb’ moves:
https://blog.vellumatlanta.com/2016/05/04/apple-stole-my-music-no-se…
Surely there is legal background there, somewhere among all the “I Agree” buttons he pushed on his ‘way to interminable joy’.
dionicio,
That’s genuinely shocking. Some malware does this, but never in a million years would I have expected a reputable company like Apple to resort to holding local user data hostage to their subscription service.
This really deserves to be discussed in a new topic of its own rather than as a footnote here.
Lately I’ve been thinking I was wrong to see Apple as an IT corporation. It just happens to be one, Alfman.
As an artist, I wouldn’t BUILD on anything that isn’t fully FOSS, much less TREASURE it.
Intel can rebrand obsolete models and sell them at huge margins. The Celeron N3050 is an old Atom with a slightly upgraded GPU that sells for $120. Why would you make a $20 SoC?
Intel shouldn’t wait for ARM to reach their trenches. They should take a hard look at x86-64 and start a scientifically researched, carefully considered, and orderly deprecation process for those instructions that are slowing Moore’s Law.
[Maybe it’s time to add a little neural AI to branch prediction.] Branch prediction is more useful to CISC than to RISC; if ARM does the same, the benefit is more marginal.
[And make that x86-128, in the process].
As with any ‘music’, neural AI can be ‘piped’, too.
And why not? Be bold. Maybe a few complex instructions make full sense today. ARM couldn’t think in those terms.
Just make sure the new ones are composed of ‘melodious’ notes, both to Moore’s Law and ‘continuum’ efforts.
deprecation process for those instructions that are slowing Moore’s Law
Moore’s law is basically “transistor count doubles every 18 months”. There is no way for even the most inefficient instructions to slow it down.
“…Moore wrote only about the density of components, “a component being a transistor, resistor, diode or capacitor,”[89] at minimum cost…”
You’re RIGHT, Viton.
Aka a “Perceptron branch predictor”.
Many papers about that.
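For reference, a minimal sketch of the idea from those papers, in the style of Jiménez and Lin’s perceptron predictor; the table size, history length, and training threshold below are illustrative choices, not anything a particular CPU uses:

```python
# Minimal sketch of a perceptron branch predictor (Jimenez & Lin style).
# Table size, history length, and the training threshold are illustrative
# choices; real hardware uses small saturating integer weights.

HISTORY_LEN = 16
TABLE_SIZE = 1024
THRESHOLD = int(1.93 * HISTORY_LEN + 14)  # training threshold suggested in the paper

weights = [[0] * (HISTORY_LEN + 1) for _ in range(TABLE_SIZE)]  # [bias, w1..wN] per entry
history = [1] * HISTORY_LEN  # global history: +1 = taken, -1 = not taken

def predict(pc: int):
    """Return (prediction, perceptron output, table index) for a branch at pc."""
    idx = pc % TABLE_SIZE
    w = weights[idx]
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y >= 0, y, idx

def train(idx: int, y: int, taken: bool) -> None:
    """Update the weights on a misprediction or a low-confidence prediction."""
    t = 1 if taken else -1
    if (y >= 0) != taken or abs(y) <= THRESHOLD:
        w = weights[idx]
        w[0] += t
        for i, hi in enumerate(history, start=1):
            w[i] += t * hi
    history.pop(0)
    history.append(t)  # shift the actual outcome into the global history

# Toy usage: a loop branch taken 3 iterations out of 4 is learned quickly,
# because the outcome correlates with the outcome 4 branches earlier.
hits = 0
for n in range(1000):
    outcome = (n % 4 != 0)
    pred, y, idx = predict(pc=0x400123)
    hits += (pred == outcome)
    train(idx, y, outcome)
print(f"prediction accuracy: {hits / 1000:.2f}")
```

The appeal over classic two-bit counters is that the weights can learn correlations with much longer histories at linear, rather than exponential, storage cost.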