As part of today’s Intel Architecture Day, Intel is devoting a good bit of its time to talking about the company’s GPU architecture plans. Though not a weak spot for Intel, per se, the company is still best known for its CPU cores, and as a result the marketing attention they’ve put into the graphics side of their business has always been a bit lighter. But, like so many other things at Intel, times are changing – not only is Intel devoting ever more die real estate to GPUs, but over the next two years they are transitioning into a true third player in the PC GPU space, launching their first new discrete GPU in several generations.
As part of Intel’s previously-announced Xe GPU architecture, the company intends to become a top-to-bottom GPU provider. This means offering discrete and integrated GPUs for everything from datacenters and HPC clusters to high-end gaming machines and laptops. This is a massive expansion for a company that for the last decade has only offered integrated GPUs, and one that has required a lot of engineering to get here. But, at long last, after a couple of years of talking up Xe and laying out their vision, Xe is about to become a reality for Intel’s customers.
While we’ll cover the various Xe-related announcements in separate articles – with this one focusing on Xe-LP – let’s quickly recap the state of Intel’s Xe plans, what’s new as of today, and where Xe-LP fits into the bigger picture.
AnandTech dives into the first pillar of Intel’s GPU plans – integrated graphics and entry-level dedicated GPUs. The other two pillars – high-end enthusiast use/datacenter, and HPC – will be covered in other AnandTech articles.
I am holding myself back from getting hyped about Intel. I do actually have friends working there (and have invested in the company), but they have done really badly in recent years.
AMD took a very big gamble back in the day when they went “fabless”. (Their main issue was how to finance that whole endeavor.) But it finally paid off, after roughly a decade. Intel still insists on trying to improve their own technology and failing.
I am not sure how much longer they can ride on single-core performance or inertia against change. The Epyc systems are already making headway, and AMD APUs have a steady income stream from gaming consoles. Unlike those, Intel no longer has a captive market.
And this is not their first entry into the GPU market. Their older Xeon Phi was expensive and did not gain much market share.
AMD did not take a gamble so much as it had no choice. AMD’s fabs had been an albatross around their neck already by the time the first Opterons came to market.
Intel’s fab strategy made sense until recently, because they had the volume to justify them (plus they make a hell of a lot more than just discrete CPUs).
Technically the Xeon Phi was not a GPGPU, but a many-core accelerator.
javiercero1,
Intel’s troubles have much more to do with process technology deficiencies than with volume or a lack thereof. I think intel took its position for granted and allowed itself to become too bloated. They assumed next-gen chips would be ready this year (because internally management said they were). Obviously it came out that intel’s process doesn’t work reliably enough for mass production. Now they’re a few years behind. Part of me thinks there’s a possibility western-owned chip fabs are going to go the way of the dodo, and it may all be outsourced to foreign companies going forward.
I don’t think you understood the point regarding volume.
Intel is a much more pragmatic organization than that; people ascribe so much drama and personality to these faceless tech corporations.
The volume issue comes down to return on investment.
CPU design and process technology are two areas whose costs have grown exponentially. In the old days, volume was still keeping up with the cost.
Initially, the high-performance RISC CPUs were the first ones to be hit by the volume-vs-design-cost barrier. At some point most of those vendors (HP, DEC, SGI, etc.) found themselves in a situation where the design cost was too high and their volumes were not large enough to recoup the investment. So in the end, we had RISC CPUs which cost twice as much but delivered half the performance of commodity x86.
For the most part Intel’s volumes were large enough that they could keep up with both curves: architecture design cost and fab process design cost. But now Intel has hit a wall and they have to choose which curve they want to continue riding. Most of the delays in their new processes are due not just to technical hurdles, but to cost as well.
Another vendor which experienced something similar a few years ago was IBM. For a while, IBM coped by opening up their fabs.
So Intel is now at a crossroads, and both paths involve disruption to their business model. They can a) move production over to TSMC or Samsung, or b) open their fabs to 3rd parties.
Nonetheless, this is a situation Intel has never found itself in, as they had been on the leading edge of semiconductor manufacturing ever since they basically created the business 4+ decades ago.
javiercero1,
I understood your point; I merely disagreed with it. Intel’s fab process began faltering before everyone got on the TSMC bandwagon for high-end chips. I’d concede that going forward intel may indeed lose its economies-of-scale advantage, but that won’t have been the cause of its downfall. Intel was failing to push its fab technology forward for several years despite having economies of scale.
It may not be politically correct to say this, but I think western protectionism has resulted in many of our industries losing their competitive edge. Too many people still dismiss foreign products as cheap crap, even though that cheap crap has long made its way into the supply chains of US products too. Over time our domestic industries have been smashed by cheap foreign competition; we see this everywhere! We’re losing in areas that we used to dominate, with very little hope of making a comeback.
I really hope that intel can pull through. Shutting down their fabs would result in an inordinate amount of consolidation in the fab industry, which spells trouble for the future.
You’re still not comprehending it.
Scale of volume is one reason (perhaps the principal one) why Intel is lagging behind in fab tech: they lack the ability to invest as much in new process nodes as TSMC and Samsung.
I have no idea how you’re trying to frame this as “Western protectionism.” There are more countries in the West than the US. The large-volume fabs may now be located in Asia, but most of the tech for the production lines in those fabs is made in the US and the Benelux.
It’s simply a matter of ecosystem. There is a much better design ecosystem in Silicon Valley that is simply not worth replicating elsewhere. So a lot of the chips, and the tools to design those chips, are made in SV. Similarly, Asia has developed much better chip fabrication, assembly, and packaging ecosystems than anywhere else.
Everything is globalized now; that cat got out of the bag long ago. A lot of Intel’s fabs are not located in the US. Similarly, both TSMC and Samsung have fabs in the US.
Intel will probably not give up their fabs. But their culture will change, and they will either open their fabs to products from 3rd parties, or use 3rd-party fabs for some of their products. This is nothing new for them: before they were the “microprocessor” company, Intel was the “memory” company. And they had to give up that business sector when Japanese companies started taking over that market in the 80s.
And even if intel gives up fabbing, there are still plenty of players in that space: TSMC, Samsung, GloFo, UMC, SMIC, Tower, etc. You also have a nice range of where those companies come from: the USA, Taiwan, South Korea, China, and Israel.
There is no national tragedy happening. The US is still a principal manufacturing power; it’s just that most production has been automated (not offshored as much as people think). What we’re witnessing is the evolution of business models. In this case the Foundry Model is clearly winning, and thus the Vertical Model (Intel) is now the evolutionary loser.
Ironically, the one region where losing IC fabbing can be considered a true tragedy is Japan.
javiercero1,
I do comprehend it, I just disagree with your premise, even after reading it again. Rather than assuming that I couldn’t possibly disagree if I comprehended you, can’t we just agree to disagree?
I work in the industry. I understand that the description of what the actual issues are will clash with your expectations, as an outsider, of what the issues should be. Such is the way of the internet discussion forums.
javiercero1,
Well then it’s settled, agree to disagree 🙂
I really hope intel does well. I’d like to see consumers benefit from a serious GPU race like in the old days. AMD has decent midrange GPU products, but the high end is desperately in need of more competition. Nvidia has been overcharging for its products due to a general lack of competition.
> The downside is that it also means that Intel is the only hardware vendor launching a new GPU/architecture in 2020 without support for the next generation of features, which Microsoft & co. are codifying as DirectX 12 Ultimate. The consumer-facing trade name for feature level 12_2, DirectX 12 Ultimate incorporates support for variable rate shading tier 2, along with ray tracing, mesh shaders, and sampler feedback. And to be fair to Intel, expecting ray tracing in an integrated part in 2020 was always a bit too much of an ask. But some additional progress would have been nice to see. Plus it puts DG1 in a bit of an odd spot, since it’s a discrete GPU without 12_2 functionality.
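For context, “DirectX 12 Ultimate” isn’t a single switch a driver flips; it’s four distinct features that a device has to report at minimum tiers (ray tracing 1.1, variable rate shading tier 2, mesh shader tier 1, sampler feedback tier 0.9). Below is a minimal sketch, assuming a recent Windows 10 SDK (the OPTIONS7 structure only exists in the newer headers) and d3d12.lib linked, of how an application might query a device to decide whether the 12_2 feature set is present:

```cpp
// Minimal sketch: probe the default adapter for the DirectX 12 Ultimate
// (feature level 12_2) feature set. Requires a recent Windows 10 SDK.
#include <d3d12.h>
#include <cstdio>

int main() {
    // Create a device on the default adapter at the baseline feature level.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available\n");
        return 1;
    }

    // Ray tracing tier is reported in OPTIONS5, variable rate shading in
    // OPTIONS6, mesh shaders and sampler feedback in OPTIONS7.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opt5 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opt6 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opt7 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opt5, sizeof(opt5));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opt6, sizeof(opt6));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opt7, sizeof(opt7));

    // Feature level 12_2 requires all four features at these minimum tiers.
    bool ultimate =
        opt5.RaytracingTier          >= D3D12_RAYTRACING_TIER_1_1 &&
        opt6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2 &&
        opt7.MeshShaderTier          >= D3D12_MESH_SHADER_TIER_1 &&
        opt7.SamplerFeedbackTier     >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;

    std::printf("DirectX 12 Ultimate (12_2): %s\n",
                ultimate ? "supported" : "not supported");
    device->Release();
    return 0;
}
```

On Xe-LP or DG1, at least some of these checks would come back below the required tiers, which is exactly the 12_2 gap the article describes.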
Same old Intel with shitty GPGPUs.
Yeah. I have no idea what it is about graphics that Intel just can’t seem to execute worth a damn.