Intel has dominated the CPU game for decades, and at CES 2020 the company officially announced its first discrete GPU, codenamed “DG1”, marking a big step forward for Intel’s computing ambitions.
There were almost no details provided on the DG1, but Intel did showcase a live demo of Destiny 2 running on the GPU. Rumors from Tom’s Hardware indicate that the DG1 is based on the Xe architecture, the same graphics architecture that will power Intel’s integrated graphics on the upcoming 10nm Tiger Lake chips that it also previewed at its CES keynote.
The market for discrete GPUs is in desperate need of a shake-up, especially at the higher end. Nvidia has had this market to itself for a long time, and it’s showing in pricing.
Thom Holwerda,
+1!
I will be delighted if we can get more serious competition in this space. These past couple of years have really been hard on consumer wallets because of lousy competition at the top. An nvidia monopoly in the GPU space is as harmful as an intel monopoly in the CPU space.
As usual, I won’t trust the marketing BS until products are actually launched and in the hands of the public. I remain extremely skeptical that intel GPUs will be all that competitive with high end GPUs, but hey “bring it”! I look forward to the promise of more competition in 2020 for both CPUs and GPUs.
Well, given that AMD has sold APU (CPU+GPU) options for both desktops and laptops for quite a long time now, I don’t really understand why people still overlook this viable alternative.
Kochise,
It is viable, but AMD’s GPUs are more competitive in the low/mid-range than high-end where nvidia still has a performance advantage. All else being equal I’d strongly prefer AMD GPUs on the basis that nvidia’s prohibitions on using off-the-shelf GTX/RTX GPUs in a data center are completely unacceptable.
As far as laptops go, my next laptop could well be an AMD laptop when my old one breaks, although unless they become more affordable I’ll have to continue going with used laptops, which limits my options. Personally I’m picky about keyboards, resolution, having an sd card reader and ethernet port. So by the time I apply my must-have criteria I have yet to come across a used AMD laptop that fits the bill.
I’d also be open to an ARM laptop that isn’t restricted and can run linux, but so far it’s very hard to find what I need since most ARM manufacturers are not targeting professional use cases. 🙁
Unless your algorithms are tolerant of bit errors, it doesn’t make much sense to use consumer GPUs in a datacenter regardless… so even with AMD you’d end up needing to go for an MI card or similar. Non data center GPUs also don’t support many of the features you’d want for using them with virtualization either.
cb88,
I disagree with this generalization. Some applications need to define exact bit rounding semantics, but there’s also an awful lot of applications where it doesn’t matter at all. It won’t matter for render farms, games, most environmental simulations, etc. Neural network processing used in AI usually does not require the full precision of 64- or 32-bit floats. Until recently, the consumer cards actually had “too much” precision for things like DNNs. Recent nvidia cards support the more efficient 16-bit “half floats”, which are preferred because the performance of a deep network is more important than the precision of a neuron. This is what makes the new tensor cores exciting even though they only support lower precision.
https://www.aime.info/blog/deep-learning-gpu-benchmarks-2019/
Sometimes we don’t even need floats and even 4 bit integers can be sufficient.
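To put a rough number on that precision trade-off, here is a minimal sketch using NumPy’s float16 on the CPU as a stand-in for GPU half precision (real tensor cores typically multiply in fp16 but accumulate in fp32, which narrows the gap further); the data is arbitrary:

```python
# Minimal sketch: relative error of a dot product accumulated in
# float16 vs float32, with float64 as the reference. NumPy float16 is
# only a stand-in for GPU half precision here.
import numpy as np

rng = np.random.default_rng(0)
w = rng.random(4096)   # stand-in "weights", all positive to avoid cancellation
x = rng.random(4096)   # stand-in "activations"

exact = np.sum(w.astype(np.float64) * x.astype(np.float64))
fp32  = np.sum(w.astype(np.float32) * x.astype(np.float32), dtype=np.float32)
fp16  = np.sum(w.astype(np.float16) * x.astype(np.float16), dtype=np.float16)

print("float32 relative error:", abs(float(fp32) - exact) / exact)
print("float16 relative error:", abs(float(fp16) - exact) / exact)
# The float16 error comes out orders of magnitude larger than float32,
# yet still tiny compared to the tolerances of a trained network.
```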
It occurs to me you might have meant bit errors in the form of spontaneous bit corruption in ram rather than arithmetic precision. If you need ECC ram, then granted you’ll only get that on an enterprise card. ECC ram exists “just in case” even though in practice this type of corruption is quite rare and plenty of people running render farms, game servers, video streaming etc may never really need the additional protection that ECC offers. I’m not saying nobody needs it, but most of us can live fine without it too and never experience a problem. In the end, the buyer should be able to decide for themselves whether they need it or not.
I wouldn’t say you are wrong to point out features like virtualization, however it would be wrong to conclude all data center deployments require features like GPU virtualization. Many servers are not virtualized and use dedicated resources. Unless you are renting out servers like amazon, you probably don’t need GPU virtualization at all. Ultimately if you need the features of a titan, then go buy a titan, no issues there. However I assert that high end consumer GPUs are competitive with enterprise cards for both compute and render. Nvidia is prohibiting GTX/RTX in data centers because they consider these cards a threat to their enterprise cards, otherwise it would be sufficient just to tout the benefits of titan without using a coercive stick.
NVIDIA is not prohibiting anyone from running off-the-shelf cards in a data center. What they will not give you is support or honor the warranty, since they did not qualify that silicon for 24/7 100% utilization or for that use case. Same with AMD.
Besides you must work on a two-bit “data center” if you can’t afford the compute specific cards.
javiercero1,
I agree with you that wouldn’t be so bad, except that it’s not true. On top of voiding warranty for enterprise use, nvidia added new terms to explicitly prohibit data center usage in the license for their proprietary drivers.
https://www.nvidia.com/en-us/drivers/geforce-license/
Nvidia’s PR justified this by saying that only enterprise cards are supported for datacenter use, however the fact that they added an exception for blockchain processing, which is obviously an extremely constant & intensive 24×7 operation, makes it evident that nvidia’s real goal is to protect the market for tesla cards. You are not allowed to use these in a data center (except for blockchain applications). What’s more, not all data center applications are 24/7 operations and yet they’re still prohibited by the terms of the nvidia agreement.
Don’t judge, but not everyone who runs a server is wealthy. If you can afford it, then good for you, I’m envious. Regardless, people in my orbit cannot necessarily afford to pay 7 times the price for tesla cards with marginal performance benefits over high end consumer variants. Seriously, one tesla card costs more than any of my servers.
To put this another way, for the price of a tesla GPU, you could build a cluster that runs circles around the tesla GPU using commodity GPUs.
https://lambdalabs.com/blog/best-gpu-tensorflow-2080-ti-vs-v100-vs-titan-v-vs-1080-ti-benchmark/
Don’t get me wrong, I’m not dissing the teslas, however unless you have a specific need for one, they may not give you the best bang for your buck.
Sigh. Running a couple of servers is not a “datacenter.”
Again, NVIDIA is not “prohibiting” you from running gaming cards on your servers. What they don’t support is your use case in their driver. You can make it work, but you’re on your own.
There’s a reason why a Tesla card costs more than an RTX, and similarly for the Quadro, even if they are very similar in specs: SUPPORT and QUALIFICATION. The drivers for the gaming cards are not the same as either the Tesla or Quadro, and that’s where most of the cost comes from. It’s not the HW, but rather the SW/Support.
You’re free to take a shitty FIAT to the race track, what you can’t expect is to get Ferrari support when you blew up your engine, or expect your insurance to cover your car when you total it at the circuit.
NVIDIA can only offer support for the use cases they designed and sell their cards for. And that goes for just about every other vendor. But again, you are free to run RTX cards in your “datacenter” all you want. You will just get no support from NVIDIA’s drivers, nor can you expect the AIB’s warranty to hold.
javiercero1,
Sigh. There’s no need to be pretentious. You learned something new about nvidia’s licensing terms that you didn’t know before, you’re welcome!
Nobody says running a few servers is a datacenter. However just because I don’t own a data center doesn’t mean I don’t run servers in a data center. It’s really not all that unusual for companies to own/rent cages within a datacenter and in some cases these customers might even make up the majority of a data center’s computers.
You keep mentioning support, but the problem isn’t really with “support”. Consider that one of the more popular operating systems in hosting is CentOS, which doesn’t provide any official support and yet many companies are happy using it because it’s good enough for them and saves them money. No, the issue isn’t support, it’s the EULA prohibition on installing the GTX/RTX drivers in a data center. I don’t see a way to get around using nvidia’s proprietary drivers. If you think there’s an OEM version of nvidia GTX/RTX hardware that IS licensed for data centers, then please reply with your evidence.
Your car analogy is wrong for two reasons: 1) the EULA makes clear you are not free to take “a shitty FIAT to the race track” they explicitly prohibit you from using consumer cards in data center deployments (except for blockchain) and 2) it’s a huge exaggeration to label high end cards like RTX & Titan as the “shitty FIAT” of your car analogy. Some of these are impressive cards even for enterprise compute purposes.
It sounds like you endorse ignoring the license terms that you disagree with, well that’s one way to deal with it, but it’s far from ideal.
That you keep bringing this up repeatedly makes it unclear whether you realize that nvidia does support a stable compute API/ABI even on consumer cards and for the most part it’s the same between consumer and enterprise SKUs. It’s only unsupported for data center deployment and IMHO the reason nvidia added the data center restriction is because they’re afraid of consumer cards cutting into enterprise markets. I really do think it’s as simple as that. Do you actually disagree with anything I am saying or are we just arguing for argument’s sake? Any chance we can just agree? Haha 🙂
Jaysus, are you really that dense?
NVIDIA explicitly does not support their CONSUMER grade graphics cards being used in COMMERCIAL PRODUCTION data center applications, mainly because with anything involving commercial workloads, financial liability is at the top of the list of considerations as a cost of doing business.
The reason why the drivers prohibit use cases for which they have not been validated is that you’re opening a big ass liability for NVIDIA otherwise. And they’re simply covering their asses.
Imagine that you’re using your RTX card as part of a commercial compute hosting offering to a 3d party/client. And your client uses your server to run some financial modeling app, all of the sudden the board overheats and produces some errors which lead to a faulty transaction which costs your client a pretty penny.
Your client does their failure analysis and realizes your HW fucked them up out of a lot of money. So they sue your ass. And then you try to cover your own ass so you sue NVIDIA… oops, you can’t, because they explicitly told you they don’t support that use case on that product. End of story.
That’s why they are explicitly forbidding you from using that product in that application. So that they don’t have to deal with that scenario.
javiercero1,
You’re telling me things I already know, I am the one who informed you that it was prohibited. You didn’t believe me and this is your quote “NVIDIA is not prohibiting anyone from running off-the-shelf cards in a data center.” So I backed it up, and now you feel the need to puff up your chest, call me dense, and pretend that I didn’t know. Why is any of this necessary? Whatever man, let’s just move on but I ask that you please show more respect in the future.
Sure, I’ve already said this much myself, some users find enterprise cards worth the additional money, others do not. Some may even prefer consumer cards that have a lower TDP and are easier to cool than the tesla, ECC memory can be beneficial for some, but it’s an unnecessary perk for others. The truth is everyone’s needs and budgets are different and you cannot pretend there’s a single solution that is best for everyone.
Some companies are willing to pay top prices, and that’s just fine, good for them. However I’m not sure you’ve really had exposure to the other side where many companies are looking for any way to cut unnecessary costs. For better or worse this is a common refrain of the companies seeking my services in my area. I estimate I’ve lost half of my prospective projects with local companies to cheaper bids, usually from indian shops. They’ll offshore work for a fraction of the price of local labor even if they know full well that it usually nets subpar results. In proposals, I tell clients what I think they should do to get the job done right, and very often they want to find ways to do it cheaper. I want to be perfectly clear about this: this is not my preference, it’s theirs. Maybe the situation is different in tech hubs like silicon valley where they probably have a lot more money to play with, I don’t know. I feel like you may not appreciate just how wide the wealth gap is across the tech sector though, especially when you’re a small company dealing with other small companies. Oh well, this is all very tangential to the discussion at hand, but I do think there’s a bit of a disconnect between our worlds.
A simple “Yes” to my question of you being dense would have sufficed.
You’re not “teaching” me anything, I already knew of those restrictions. I was simply explaining to you the reason for their use of that language in their end user agreement. NVIDIA is covering their asses with regards to the liability associated with commercial workloads.
Furthermore, NVIDIA is a multi-billion-dollar corporation, they are not a charity. It’s not their fault that you can’t afford their enterprise grade silicon that supports that use case. If other vendors are undercutting you on price, it’s your problem, not NVIDIA’s.
It sounds like you’re running a small consulting firm, and are veeeeeeeeery behind the times if you’re literally still running client workloads on your own servers. If you need to provide GPU compute facilities to your clients, you can simply rent Amazon EC2 instances with plenty of GPU power for pretty cheap. There really is no need for any small hosting company to have its own physical servers when you can simply resell capacity from larger cloud providers for a fraction of the cost of running your own iron.
This has been true for almost a decade, BTW. Maybe I’m the one teaching you something after all. Hope this helps your business. You’re welcome.
javiercero1,
You clearly did not know, otherwise you wouldn’t have responded the way you did. Although if it means that much to you, I’ll play along: yes you knew about nvidia’s “no datacenter deployment” clause all along. Happy? Let’s move on.
I know nvidia doesn’t care, this was my original point. Companies tend not to give a crap unless they are threatened by competition, which is exactly why I started these comments welcoming more competition from AMD and intel.
Assuming you’re willing to provide the support, it can be much cheaper to own than rent in the long term where expenses eat away at profits every month. In my experience while you do have cheap options, it’s often at the expense of lower performance than what you can build yourself. Maybe you prefer to be a VAR/reseller, which is fine. Again, to each their own.
Sigh, I used to work for that company, I was trying to explain to you why that prohibition is there. Really not a hard concept to grasp, alas..
javiercero1,
How long ago did you work for nvidia and what did you do if you don’t mind my asking?
Yes, obviously they want to make us use tesla cards in data center deployments except for blockchain.
Alas, I’m curious what cute terms of endearment you’ve got lined up for me next, haha.
It’s not necessarily the case that competition at the high end will bring the prices down (significantly). Bear in mind that with competition between card manufacturers there is a good 50+% difference in price between boards based on the same Nvidia chip.
Plus, when a new chip comes out, Nvidia generally can’t keep up with demand and that drives prices up.
Ultimately, the high end isn’t supposed to be something that everyone is purchasing. It’s meant to be pushing boundaries, and that comes at a price. Everyone has a right to look at the prices and think they are eyewatering, and be frustrated that they are out of reach, but for the high end products to be more affordable, they are probably going to have less performance – whether there is competition or not.
grahamtriggs,
That’s supply and demand. A bigger supply of high end cards from competitors to match the demand helps prevent excessive prices. There isn’t enough competition at the high end, but with any luck the situation may improve in the next few years.
Just consider that each segment of the market has its own supply and demand trends. The lack of competition in any segment will tend to drive prices higher than when there’s more competition. Of course there are limits to this and more competition cannot help when the market is already competitive. However most of the tech industry these days has consolidated & grown so large that incumbents have stopped facing meaningful competition. Of course having a duopoly is better than a monopoly, but a duopoly is hardly what I’d consider competitive.
A 50% difference between cards of different manufacturers based on the RTX 2080 Ti over a year after the chip was first launched is not a supply and demand issue.
If the card manufacturers are generating that much difference based on how much they overclock, the cooling solutions, etc., how much cheaper do you think competition from AMD/Intel will make the cards, when all they will be competing on is the wholesale price of just the GPU chip as sold to the card manufacturers?
Even with competition, you are realistically not going to see a major shift in prices for equivalent performance.
Which gets back to the question – would you rather see the high end of graphics cards lowered in prices AND not pushed out to being as performant. Or do you want to see cards pushed as far in performance as the manufacturers think there is still a market for them, accepting that means that they are going to be in a higher price range?
grahamtriggs,
That’s where supply and demand comes into play. In a truly competitive market, market dynamics set the going price, not any of the individual players. When there’s little/no competition, sellers get to name their price. In a competitive market, excess prices don’t last long because they lose sales to their competitors. If they refuse to correct their prices to match market rates, then they’ll lose market share to competition. In noncompetitive markets, like monopolies or duopolies, there is a lot of reward for raising prices and not much incentive to keep them low, and sometimes these companies may even agree not to compete with each other in order to increase mutual profits. Although I think this is getting a bit tangential to the topic, the main point is that when there’s genuine competition, the market sets the prices and not the companies.
Well, I disagree with this premise. In a competitive market the law of supply and demand isn’t optional. It applies to market subsets too. When you are a consumer looking for a graphics card, you are probably looking for a certain class of performance and features, and here we are talking about high end. The more choices that you have in your target class, the less that any given supplier can charge over market rates while still getting your business. But without competition, you don’t have much choice but to go with the lone supplier regardless of price.
Obviously you can also lower your expectations and buy something from a lower class for a cheaper price, which I think is what you are saying, but it doesn’t negate the fact that prices at the high end are affected by competition or lack thereof.
You may like to know exactly how much the lack of competition at the high end is costing us…I don’t have those numbers for you. Just speaking for myself though, nvidia probably would have lost a sale if AMD had a product that was in the same class as high end RTX. Nvidia only made that sale at that price point due to the lack of competition, which is my whole point regarding supply and demand.
Alfman,
“That’s where supply and demand comes into play. In a truly competitive market, market dynamics set the going price, not any of the individual players. When there’s little/no competition, sellers get to name their price. In a competitive market, excess prices don’t last long because they lose sales to their competitors.”
You are still missing the point. Even where there is no competition between chip manufacturers, there is still competition between the board manufacturers who are the ones selling to consumers. The idea that there is no competition for the RTX 2080 TI chip cannot account for the fact that there is a 50% price differential between boards.
You have boards selling at around 20% less than the Nvidia reference price, and boards selling at 20% more.
Having AMD or Intel releasing a competitive chip might cause the price of the 2080 TI *chip* to change. Even if you were optimistic and said it caused a 20% price reduction in the chip, it still means that the water cooled versions are going to cost around the current reference price.
The fact is that there is competition amongst ASUS, MSI, EVGA, etc. which is why boards are available that are 20% less than the Nvidia reference price. However, yes, there is a limit to how low they can go, based on the cost of the components – including the Nvidia chip itself.
“Well, I disagree with this premise. In a competitive market the law of supply and demand isn’t optional”
The law of supply and demand doesn’t overrule the basic cost of manufacturing. OK, yes – if a manufacturer has a warehouse full of stock they haven’t been able to ship, they might sell it at lower than cost just to recoup some of that money. But if that happens too often, you quickly end up with the business closing. A business can only be sustainable if it can set prices that are higher than the cost of manufacturing (and R&D).
If you want chips with more cores in them, and advanced cooling solutions that allow clock rates to be higher, that comes at a cost.
If you are lucky, increased chip competition might see a 10% price reduction for the same level of performance. If you expect prices to go lower than that, it’s only going to happen if the products are designed for lower performance, and therefore lower manufacturing costs.
So again, I ask you – do you want/expect to see more than a 10% reduction in costs for high end products, and if so, would you be happy that means manufacturers are not producing cards that are as fast as they are capable of making, and for which there clearly is a market?
grahamtriggs,
You could have one vendor selling hardware for $1000 and the next could sell it for $1500, $2000, or even $3000. The bulk of the market will gravitate to those with the best value available and set the median price that most people will end up paying. Look at markets that are extremely competitive and the only differentiating factor is price (often the case with amazon & ebay listings), the exact same products tend to be priced within a few dollars of each other. Unless a vendor differentiates their products to warrant higher prices (better shipping, better warranty, better reviews, etc) they’re going to lose volume so long as lower priced competitors can handle the demand.
No, even then supply and demand is applicable. It’s always applicable in a free market.
If AMD or intel were to come in with a similar water cooled version that customers found suitable, nvidia would be looking at more competition and lost volume. It would either need to improve its product in some way, or lower prices to maintain marketshare and convince customers not to go to their competitors. But it’s not a static situation either, competition can also improve in response to nvidia. This is why competition is so effective and the lack of it is so harmful.
And once again I have to tell you supply and demand still applies even to the top of a market. I don’t know why you think it doesn’t, but it really does. Lousy competition yields lousy pricing. I really don’t understand your sentiment, let me ask you directly: do you not want more competition at the top? To me that wouldn’t make sense. But if you do want more competition at the top, why would you want it if, as you’ve been claiming, it does not improve the incentive to cut prices and deliver better products? If what you were saying were true, then we could just do away with competition and simply have faith that companies will provide us with optimal prices of their own accord. Suffice it to say, I don’t trust monopolies to do that.
“You could have one vendor selling hardware for $1000 and the next could sell it for $1500, $2000, or even $3000. The bulk of the market will gravitate to those with the best value available and set the median price that most people will end up paying.”
Seriously, Alfman, just go and look at the product listings of the stores. There are boards with NOMINALLY THE SAME RTX 2080 TI CHIP with a price difference of 50+% between the cheapest and most expensive.
Yes, there are differentiating factors from the cheapest boards to the most expensive – different clock speeds (including testing of chips to run at those speeds), different cooler constructions, etc.
Do you not see that a very large proportion of the costs for these boards is going into the cooling solutions and testing done by the board manufacturers, NOT the wholesale price of the chips from Nvidia? Competition from another chip might affect the wholesale prices of the chip, but it WILL NOT affect the cost of the cooling solutions and testing the manufacturers do. Hence, more chip competition at the high end will have a smaller impact on pricing than I think you believe it will.
“Obviously you can also lower your expectations and buy something from a lower class for a cheaper price, which I think is what you are saying, but it doesn’t negate the fact that prices at the high end are affected by competition or lack thereof.”
I didn’t say that prices can’t be affected by competition at the high end. But you are talking as if another chip manufacturer producing equivalent performance of an RTX 2080 TI will see the prices drop by 50%. They won’t. If AMD/Intel put out a chip at the same level of performance, it would have a fairly similar fabrication process, a similar cooling requirement, a similar cost, and prices wouldn’t differ by more than 10%.
If you want the high end to be 50% lower than the current prices, then they will be products that target a lower level of performance. In fact, they’ll probably have fewer active cores, so that they can increase the yield at the specified performance. See, cause that’s the thing – whilst you complain about the price of the high end, it’s a balancing act with having competitive pricing for the mid-range chips with the same architecture. Mid-range chips generally come off the same line – the lower advertised core count allowing faulty cores to be disabled, increasing the yield. The less they charge for the high-end chips, the more would need to be charged for mid-range chips in order to make the average die revenue – which has to exceed the average die manufacturing cost to be sustainable.
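A toy illustration of that balancing act (every number below is hypothetical and purely for illustration, not actual Nvidia economics):

```python
# Toy model of die binning: dice from the same wafer sell either as
# high-end (all cores good) or mid-range (faulty cores disabled), and
# the average revenue per die has to beat the average cost per die.
# All fractions and dollar figures are hypothetical.

full_fraction    = 0.35   # dice with every core working
salvage_fraction = 0.45   # dice salvageable as mid-range parts
scrap_fraction   = 0.20   # dice that are a total loss
cost_per_die     = 300.0  # amortized wafer + packaging cost per die

def avg_revenue(high_end_price, mid_range_price):
    return full_fraction * high_end_price + salvage_fraction * mid_range_price

mid_range_price = 500.0
for high_end_price in (700.0, 900.0, 1100.0):
    margin = avg_revenue(high_end_price, mid_range_price) - cost_per_die
    print(f"high-end chip at ${high_end_price:.0f}: "
          f"average margin per die = ${margin:.0f}")
# Cutting the high-end price squeezes the average margin per die, which
# then has to be recovered on the mid-range bins to stay sustainable.
```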
“I really don’t understand your sentiment, let me ask you directly: do you not want more competition at the top?”
Didn’t say that though, did I? Go back and re-read them if you like, I have never once argued that Intel or AMD shouldn’t put out a chip at the high end. In fact, I think it would be great if they could do something that could actually be better than Nvidia. I’ve welcomed the competitiveness of Ryzen – and switched to them after many, many years of only having Intel CPUs.
My argument has never been about whether there should or shouldn’t be competition. All I have said is that I really don’t think a competitive high-end product would bring about the kind of price reductions that people think will/should happen. Or, if the highest end cards available were 30%, 40%, 50% cheaper than they are today, then it would almost certainly mean that the manufacturers are leaving performance in the design lab and not producing products that are as fast as they could be – e.g. target fewer enabled cores so that they have sufficient yield.
I already said it wouldn’t matter if someone listed it for 100% more. What matters is the price they’re actually going for. Obviously anybody can set up a merchant account and list products for as high a price as they feel like. I could list them for $3000 today and your 50% number would change to ~200%. Obviously this does not automatically make it a meaningful reflection of the true market price. Unless there are unusual circumstances, excessively high prices will get very little sales volume and not have a statistical impact on true market prices. Unless you have sales volume figures at different price points, the 50% figure you keep repeating doesn’t mean all that much.
Hey, if you want to argue that various parts of the supply chain are already competitive, then that’s fine. However the cooling solutions on RTX 2080 TI cards really aren’t all that different from AMD Vega cards selling for a fraction of the price, so it’s likely more competition at the top would help make both chip and card manufacturers become more competitive.
That’s a straw man. First of all, this 50% is your number. Second of all, your number represents the gap between the low and high seller prices for RTX 2080 TI and does not represent the amount that prices could drop by if there were more competition.
I already told you “You may like to know exactly how much the lack of competition at the high end is costing us…I don’t have those numbers for you.” I don’t know where you pulled this 10% from so I’m going to assume it belongs to the 90% of statistics that are made up on the spot 🙂
Nonsense, I’m only talking about the price benefits that could result from additional competitors offering similar products. Sorry, but your 50% statements continue to be totally arbitrary and because of that I am just going to stop responding to them.
That’s why I posted it as a question to try and understand where you are coming from.
Yes, in addition to getting lower prices, having stronger competition this year could lead to more innovation too!
“I already said it wouldn’t matter if someone listed it for 100% more.”
I’m not talking about people randomly listing things at whatever price they like. I’m talking about the pricing of cards at retailers like NewEgg, etc. Go to retailer websites and search for RTX 2080 TI. Cards based on exactly the same GPU chip have a price difference of 50-70% between the cheapest and most expensive.
“Hey, if you want to argue that various parts of the supply chain are already competitive, then that’s fine. However the cooling solutions on RTX 2080 TI cards really aren’t all that different from AMD Vega cards selling for a fraction of the price, so it’s likely more competition at the top would help make both chip and card manufacturers become more competitive.”
And as you’ve already admitted in wanting more competition, they are not “high end” cards (in the landscape of the whole market).
A Radeon VII has about the same performance as an RTX 2080 (NOT TI), and retails for about the same price. (The 2080 TIs are about 15-20% faster.)
“That’s why I posted it as a question to try and understand where you are coming from.”
You shouldn’t need to wonder, because I’ve been *explicit* from the first response.
You +1 responded to a comment by Thom that a lack of competition is showing up in the prices [at the high end]. At all the levels of mid-range and below, where there is competition, the price/performance is pretty comparable. Where AMD and Nvidia have similar performance chips, and they are packaged with similar cooling solutions, they have a similar street price.
If AMD were producing a chip right now that was equivalent to the 2080 TI, then the cards would almost certainly be priced around the same as the 2080 TIs are now.
And even if, in any given generation, one of the chip makers achieved a paradigm shift to be the clear price/performance winners – e.g. AMD produced a chip that was competitive with the 2080 TI but priced at 2070/80 levels – then they would almost certainly be capable of producing an even faster chip. Which they then almost certainly would do, and the cards would be $1000+.
Maybe competition will drive even higher performance at those prices. But the pricing structure itself won’t change much. The market for more performance isn’t going away, people are prepared to pay $1,000+ for cards that have the performance over cheaper cards to justify it, and increased competition probably won’t change that there are cards produced at those prices with an appropriate performance improvement over cheaper cards.
grahamtriggs,
You keep repeating this, but it’s not the accuracy of your number that’s faulty. If I go to newegg right now, the range of new cards sold by newegg is $1050 – $1800, which matches your 71%, but this doesn’t tell us much about the average market price paid by consumers without sales volume. It could be $1050, or it could be $1800, or anywhere in between. It’s likely closer to the lower end, but therein lies the second problem with your ranges: ignoring the fact that these products have vastly different specs (at the top end you’re looking at 240mm AIO watercooling). Heck just take one manufacturer, EVGA, their “RTX 2080 TI” products range from $1100-$1800, a 64% gap, OMG! Q: What does this range tell us about how much new competition would affect prices? A: Absolutely nothing.
Hypothetically, if AMD cards were to come into the market with competing products for both vanilla 2080 ti cards as well as high end water cooled 2080 cards, you would see both the $1100 and the $1800 price levels come down due to supply and demand to reflect the new competition. What’s more, after new competition is added, you might still see 50%+ price gaps between high and low end cards! In short, this gap in and of itself tells us nothing about how far prices would come down.
Well granted “high end” is relative. You highlighted rtx 2080 ti in your example, but regardless everything about competition & supply and demand applies to titan and tesla cards as well.
Yep, although at the time I bought my card vega 64 was the best option available from AMD, which was lower still. Radeon VII sits about halfway between vega64 and RTX 2080, which is definitely an improvement, but still below RTX 2080 (non-ti) and well below 2080 TI.
https://www.pcgamesn.com/amd/amd-radeon-vii-review-radeon-7-benchmarks
Personally I was not merely looking at the raw performance numbers, but the power consumption too, and unfortunately AMD cards were not competitive at all. Vega 64 cards were not only slower, but they consumed more power. I’m still rooting for AMD, but they still need to catch up to offer serious competition for the high end.
The difference is that the lower prices at mid range already reflect AMD’s competition, the prices for high end segments do not. I’m not making this stuff up. They should teach more supply and demand in school, because it’s extremely useful to understand how markets and prices work in general.
https://www.britannica.com/topic/supply-and-demand
I guess you’re probably going to say that you understand it already, but if you really understood it you wouldn’t be making these claims, like asserting that market segment prices do not change when there is more supply. This is a pretty fundamental misunderstanding of free market supply and demand on your part.
I really don’t want this to end on a sour note, but if you’re not open to learning about supply and demand, then I don’t see a point in continuing the discussion, do you?
“You keep repeating this, but it’s not the accuracy of your number that’s faulty.”
How is the accuracy faulty when you’ve gone to NewEgg and seen exactly what I’ve been saying about the prices? You are choosing to disagree with what that says about the market, but I have accurately reflected the spread of pricing from primary retailers.
” therein lies the second problem with your ranges: ignoring the fact that these products have vastly different specs”
See, that is inaccurate. Because I have not ignored the fact that these products have different specs – I explicitly stated that they do have different specs (and different cooling solutions to achieve them). But they all have the same chip, and when we are talking about having other chip makers competing, it’s the chip that they are selling to the board vendors.
What we can glean is that low end / stock specs retail at around $1,000-$1,100. Improved air cooling and overclocks retail at $1,200-$1,400, and watercooling retails at $1,500-$1,700. All based off the same chip.
So if there is another chip maker offering a high end chip that competes with the RTX 2080 TI, you’ll have overclocks with a $250 premium, and watercooled with $500 premium. The chips are still going to be priced more than the low- / mid-range. There simply isn’t that much room for prices to move, if you still have the spread of low- / mid- / high- end performance.
“The difference is that the lower prices at mid range already reflect AMD’s competition, the prices for high end segments do not.”
And the RTX 2080/TI pricing reflects the fact that it offers more performance than the lower priced products, and that it has twice the die size / half the number of GPUs per wafer.
“They should teach more supply and demand in school, because it’s extremely useful to understand how markets and prices work in general.
https://www.britannica.com/topic/supply-and-demand
I guess you’re probably going to say that you understand it already”
I actually studied economics as a specific course and passed exams in it. You’ve linked to an article on supply and demand, but it seems you don’t even understand what is written on the page that you’ve posted. Note that the supply and demand curves are separate. Lower prices increase demand, and higher prices (usually) increase supply. It’s not inevitable that over-supply will result in price reductions – what may happen is that the manufacturer decides it’s not worth lowering prices and simply restricts supply.
But if supply can’t meet demand, prices will rise – which is what we see with GPU launches. Street prices go up early on because there isn’t enough supply, and then they drop back to roughly the originally announced prices as the supply becomes sufficient.
If you still have a problem with this being hypothetical, we can always talk about other markets – e.g. mobile phones. We’ve seen more competition for higher end handsets, the prices haven’t meaningfully come down – a disruptor will sometimes enter the market with a keenly priced “flagship killer”, but as they establish a name they inevitably edge towards higher specs and higher prices.
Or TVs. Every year new models of TVs come out, and the pricing of low-, mid, high-end at various sizes is reasonably consistent with each round of new products. (Making some allowance for the premium pricing associated with introducing a whole new technology like OLED).
For now, I don’t see the GPU market changing much in demand. And as long as that is the case, and as long as chip makers can generate the differential in performance, I just don’t see a meaningful difference in the spread of pricing. Competition is more likely going to affect the level of performance that we get at those price points, than the price points themselves.
grahamtriggs,
I really don’t know how to be clearer about this. Let’s try another example. You go to an ice cream shop, they have ice cream cones (single and double scoop), ice cream sandwiches, and ice cream sundaes. All these products might have the exact same ingredients from the same supplier and might range in price from $4 to $8, that’s a 100% gap. Now what does this gap tell us about what will happen to prices if another supplier opens up shop? Not a damn thing! I cannot be any clearer than that.
It’s not just the “law of demand” it’s the law of “supply and demand”! Even if demand doesn’t change at all, changes in supply also have an effect on prices. It beats me why you keep ignoring that in every one of your posts, but I refer you back to…
https://www.britannica.com/topic/supply-and-demand
Learn it! If you keep on refusing to learn, well I suppose that’s your prerogative.
“I really don’t know how to be clearer about this. Let’s try another example. You go to an ice cream shop,… Now what does this gap tell us about what will happen to prices if another supplier opens up shop? Not a damn thing! I cannot be any clearer than that.”
Let’s try and correct your example. There are already multiple ice cream shops (graphics card manufacturers). They all serve the ice cream from the same dairy (Nvidia), which they “package” in different ways – standard cone (stock cooler), waffle cone (custom air cooler), sundae (water cooled).
So another dairy (AMD) starts supplying ice cream. That just introduces competition for one part of the final product. It doesn’t change how much it costs to rent a storefront, to hire staff, to purchase waffle cones, sauces or fruit.
Another dairy might have a small effect on the pricing of the ice cream, as it is sold in tubs to the ice cream shops. But when you are buying a waffle cone or a sundae, you are buying more than just the ice cream that has gone into it. And the spread of pricing for producing different products from one common ingredient doesn’t change.
“It’s not just the “law of demand” it’s the law of “supply and demand”!”
Keep posting the same link – because all you are doing is showcasing that you either haven’t read the page that you are linking to, or you don’t understand it.
Supply and demand are separate curves – they model how pricing affects both demand and supply. The job of the market is to find the equilibrium – the price at which the level of supply and level of demand are the same. Look at markets with dynamic pricing – e.g. Uber. At times of peak demand, the price goes up to encourage more drivers to work. At low demand, it doesn’t matter how many drivers are working there is a base price that it never goes below.
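For what it’s worth, the equilibrium idea is easy to show with made-up linear curves (all numbers purely illustrative):

```python
# Illustrative only: demand falls as price rises, supply rises as price
# rises, and the market clears at the price where the two quantities meet.

def demand(price):               # hypothetical demand curve
    return max(0.0, 1000 - 0.5 * price)

def supply(price):               # hypothetical supply curve
    return max(0.0, 0.8 * price - 200)

# Scan for the crossing point (linear curves also solve in closed form;
# the scan just keeps it simple).
equilibrium_price = min(range(0, 2001), key=lambda p: abs(demand(p) - supply(p)))
print("equilibrium price ~", equilibrium_price)
print("quantity traded   ~", round(demand(equilibrium_price)))
```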
You keep pushing the point that lack of competition is raising prices, but what the market is actually telling us is that either:
a) Other manufacturers are not capable of producing competitive products
or
b) The current prices are too low for other chip makers to produce high end products.
But GPUs are scalable, highly parallel products. AMD are not producing Radeons at more than half the die size and 2/3rds of the transistors of a 2080 TI. On one hand, they clearly could scale up the parallel units for a more powerful chip – it’s possible that they are limited by the power requirements. But if they can get round that, then what we are seeing is that they can’t compete on price at a higher performance level.
grahamtriggs,
You’re starting to get it, this is good. You’ve got most of the pieces, now you just need to put them together. A lone competitor sets whatever prices they want, and profit mostly becomes a function of volume * price - fixed_costs - volume * unit_cost. Naturally the seller wants to maximize profits, and by looking at the demand curve at various price points they can optimize for profits. Having no competition means they can maximize profits by way of high prices.
Now when you introduce one or more competitors, it inevitably cuts into the first seller’s sales volume. In reality there could be differences between the sellers impacting buyer choices, like reputation, location, etc. However all else being equal the seller with the lower prices will attract more buyers, hence taking away volume from the rest with higher prices.
Let’s look at two (unrealistic) extremes to make a point…
1) Prices have no impact on buyers and they choose sellers randomly; each seller gets about half the volume.
2) Buyers are extremely price sensitive and they always choose the seller with the lowest price. Therefore the seller with the lowest price gets 100% of the volume and the one with higher prices gets 0% of the volume. Ask yourself, which seller is more profitable? In this exaggerated scenario, the only seller who can make any profit is the one with the lower price. The seller that refuses to budge on price ends up damaging their own profits with prices that push customers away to competitors.
The real world lies between these two extremes. Additionally price cannot go lower than cost to remain profitable. Still, in extremely competitive marketplaces (often with Chinese sellers), prices do approach costs and sellers have very little wiggle room on prices; they rely on large volumes for profit.
So the pricing strategy to maximize profits looks very different depending on how competitive the market is. Once competitors arrive with similar products, if you refuse (or are unable) to become more competitive with your products&prices, then you’ll end up with subpar profits due to lost volume.
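Here’s a toy sketch of that profit logic with a made-up linear demand curve and a hypothetical unit cost (none of these are real GPU figures), comparing the lone-seller optimum with the price-undercutting extreme from point 2 above:

```python
# Toy model: profit = volume * (price - unit_cost), fixed costs left out
# for brevity. A lone seller picks the profit-maximizing price; with a
# price-matching rival, undercutting pushes the sustainable price down
# toward unit cost. All numbers are hypothetical.

UNIT_COST = 400.0                        # hypothetical cost per card

def demand(price):                       # hypothetical demand curve
    return max(0.0, 2000 - 1.5 * price)

def profit(price):
    return demand(price) * (price - UNIT_COST)

prices = range(400, 1400)

# Lone seller: captures all demand at whatever price it sets.
mono = max(prices, key=profit)

# Extreme case 2 above: buyers always pick the cheaper seller, so any
# price meaningfully above cost gets undercut; the lowest price that is
# still profitable is the stable outcome.
duo = min(p for p in prices if profit(p) > 0)

print(f"lone seller:  price ~${mono}, profit ~${profit(mono):,.0f}")
print(f"undercutting: price ~${duo}, profit ~${profit(duo):,.0f}")
```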
That’s a fair question. If other sellers are unwilling or unable to compete with the first seller, then the first seller gets to take advantage of this lack of competition by selling at high prices.
“Additionally price cannot go lower than cost to remain profitable.”
At least you are finally starting to acknowledge what I’ve been saying all along.
Now, stop doggedly standing by a mantra that another competitor MUST mean lower prices, and look at what the market is actually telling us right now.
1) We have competition up to the level of Radeon VII / RTX 2080.
2) The architectures of these chips are quite similar. In fact we can put a 5700 XT side-by-side with a 2070 Super, and they have a similar number of shaders, TMUs and ROPs, similar clock speeds, similar bandwidth – and similar performance (the 2070S having a higher transistor count because of the RT/tensor cores and a much larger die size from the process).
3) Providing power requirements can be met, AMD could produce something with similar performance to a 2080 TI “simply” by packaging similar numbers of cores (not withstanding the additional capabilities). This would result in a chip with more transistors, larger die size and higher production costs than they currently have.
4) Making something more powerful than the Radeon VII not only costs AMD more to produce which then has to be reflected in the price, they would only produce such a chip if it can be positioned at an appropriately higher price than the Radeon VII.
5) When the RTX 2080 TI came out at $999, what happened? Despite people saying that it was too much, demand still exceeded supply, they were hard to get hold of, and the prices of 3rd party boards went up. That’s retailers putting the price up, not Nvidia. Nvidia (and the board manufacturers) had not chosen the highest price they could given they had no competition – it was the market that (temporarily) pushed the prices up.
Potentially, AMD competing at the high end might mean that there is sufficient supply that that spike in prices due to excessive demand doesn’t happen. But that’s separate to the core argument that Nvidia has been setting prices too high because they don’t have competition.
You are looking at $300-$400 between the Radeon VII / RTX 2080 and the start of the RTX 2080 TI. So if AMD did produce such a chip, the cards are going to be around $800-$900. And given the extra capabilities of Turing, those cards are still going to carry a premium – it might take $50 – $100 off the Nvidia pricing, but realistically, no more than that.
There is clearly a market for performance of, or in excess of, the RTX 2080 TI, even bearing in mind what less powerful cards can do, and that market is also prepared to pay $1000+ for that performance. Maybe we get to a point where manufacturers can’t scale up the performance to create a full range from low end to $1000+. Maybe we get to a point where the market doesn’t desire that much more performance. If/when one of those happens, then the range offered by manufacturers will shrink. Until then, we should probably just accept that the high end of the market is $1000+, and any competition will happen around that price point.
grahamtriggs,
Not for nothing, but I never said otherwise. I needed to spend a lot of time debunking the claim that 50-70% price gaps for 2080TI cards was proof that prices couldn’t go much lower. We never got around to discussing what the price floor might actually be, and for the record, I specifically said I couldn’t tell you the lower price limits because I don’t have enough data. In any case, I’m very happy that we’re in agreement now that price floor is really a function of costs and not the 50%-70% gaps on seller prices.
Well, you’re going to need to get over this to get a better grasp of free market supply and demand. Because actually, if we can assume that competitors set their prices to maximize profits under all circumstances, then those prices absolutely do change between a one supplier market versus the two supplier market. Absent price fixing schemes or other free market interference, adding competition does lower prices. And given sufficient competition, the price will approach cost.
“I needed to spend a lot of time debunking the claim that 50-70% price gaps for 2080TI cards was proof that prices couldn’t go much lower.”
No you didn’t, because that wasn’t what I said.
“In a competitive market, excess prices don’t last long because they lose sales to their competitors”
Which is what we have in terms of boards from third party manufacturers. So when we see custom air coolers with overclocked chips costing $200 more than stock, that’s reflective of the additional costs that Asus, MSI, etc. put into making those boards. Same with watercooled extreme overclock boards costing $500 more than stock.
These boards are a demonstration that there is a demand for increased performance even at prices between $1000-$1500. Plus, even if an AMD/Nvidia battle saw their high end offerings with a stock air cooler topping out at under $1000, you can bet that as long as it is practical there would be overclocked / esoteric cooling solutions priced at up to $500 more.
You kept talking about supply and demand, but never considered what the demand actually looks like. That’s important when you consider the chip design side, because GPUs are highly scalable.
The 2080 TI was the fastest chip that Nvidia brought to market. It is almost certainly NOT the fastest chip that they could have released. Put some more cores in there, increase the transistor count and die size. Of course, this means it costs more to produce, and so the prices would have to be higher.
Nvidia decided that the market could withstand a $1000 design. That’s what they designed the 2080 TI to be, and decided against splitting the market any further with a design containing any more cores. But they almost certainly could have done – two years prior, they put out the Titan V with 15% more cores and more transistors than even the 2080 TI. (albeit a slower core design).
Equally, they could have just stopped at the 2080. The high end would then be two competing chips of similar performance on boards around $700. But they knew the market was there for more, so they put 50% more cores in, and charged more for it. As long as there is a market prepared to pay $1000+, and as long as it is technically possible, one or more of the manufacturers are going to design a chip with more cores in it to satisfy the market.
“claim that 50-70% price gaps for 2080TI cards was proof that prices couldn’t go much lower.”
So no, I didn’t claim that the price gaps were proof that chip prices couldn’t go much lower. I said pretty early on that the prices of the low- to mid-range cards are proof why the high end chip prices couldn’t go much lower.
But, those price gaps are proof of the costs for the additional engineering above what the stock designs have. And wherever the high end chips are placed (at least up to the current pricing), then there will be these overclocked variations, and they will carry the same premium.
And at the same time, these boards are demonstrating that a $1000+ market does exist. At least one manufacturer is going to design chips to address that market. And if two or more do, then they will be doing so designing solutions that carry enough manufacturing costs that there won’t be room for competition to lower the prices. That is why prices couldn’t be lowered. It’s why high end prices likely WOULDN’T come down. And “wouldn’t” is an equally valid answer to the initial point as “couldn’t”.
“Because actually, if we can assume that competitors set their prices to maximize profits under all circumstances”
See, that’s the problem. Because you are making assumptions, without actually looking at the market conditions.
You’ve assumed that they are pricing optimally to maximize profits, when the market has already shown that Nvidia announced at a price where demand exceeded supply and the market retailers profited in the short term by taking advantage of that to increase prices.
You’ve assumed that supply will increase when both AMD and Nvidia are using the same fabrication partner.
You’ve assumed that an increase in supply means that it will exceed demand – which is what would be needed in order for prices to come down – when demand (at launch) already significantly exceeds supply.
You’ve assumed that if 2080 TI performance could be delivered for e.g. $799, there wouldn’t be a market for 50% more performance at a 50% higher price, and that a chip maker wouldn’t create that even more powerful product for that market.
You’ve assumed that just because there isn’t competition at the high end that Nvidia has simply pushed the price at the top end completely out of line with the manufacturing costs / lower end.
The RTX 2080 TI has approx. 50% more cores than a 2080. That means 50% more transistors, 50% larger die, and approx 50% more manufacturing costs. The boards come with approx. 50% more memory. The raw manufacturing costs of the 2080 TI is therefore going to be somewhere around 50% more than the non-TI. And with that kind of increase in costs, a 50% increase in the prices of the cards for 2080 TI vs 2080 is not grossly out of line. So if there is any room to lower prices because of competition, it isn’t by a lot.
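To put back-of-envelope numbers on that, here is a small sketch with a simple Poisson yield model; the wafer cost and defect density are hypothetical, and only the die areas are in the right ballpark for the 2080 and 2080 Ti class chips:

```python
# Back-of-envelope sketch: per-good-die cost vs die area. Larger dies
# cost more both because fewer fit on a wafer and because yield drops.
# Wafer cost and defect density are hypothetical placeholders.
import math

WAFER_COST  = 6000.0    # hypothetical processed-wafer cost, USD
WAFER_AREA  = 70000.0   # usable area of a 300 mm wafer, mm^2 (approx.)
DEFECT_DENS = 0.001     # hypothetical defects per mm^2

def cost_per_good_die(die_area_mm2):
    dies_per_wafer = WAFER_AREA / die_area_mm2              # ignores edge loss
    yield_fraction = math.exp(-DEFECT_DENS * die_area_mm2)  # Poisson yield model
    return WAFER_COST / (dies_per_wafer * yield_fraction)

smaller = cost_per_good_die(545.0)   # roughly a 2080-class die
larger  = cost_per_good_die(754.0)   # roughly a 2080 Ti-class die
print(f"smaller die ~${smaller:.0f}, larger die ~${larger:.0f}, "
      f"cost ratio {larger / smaller:.2f}x (area ratio only {754 / 545:.2f}x)")
```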
I’ve looked at the design of the chips, and the ability to scale them to produce a range of products. I’ve looked at the comparative costs of manufacturing different products. I’ve looked at how that reflects in products that do face stiff competition, and I’ve looked for indications of demand in the market. And based on all of that, I’ve said that I don’t think high-end prices are about to come down; even if we have competing high end products in the future, they will likely be competing at the existing price points, not lower.
I might be wrong about that. The prices for high end cards may come down. But that is likely to be because the manufacturers run into limits with scaling, or the demand drops off (e.g. because gamers aren’t looking to drive even higher resolutions).
But you’ve just said more competitors = more cards = lower prices, without considering any of that.
grahamtriggs,
Yes, but you speak of it as though it’s an exception to supply and demand when in fact it’s a byproduct of it. The price hikes we see today already reflect supply and demand. An increase in competition & supply, even with constant demand, would lower prices.
Finally we can agree!
You may not have faith in supply and demand, but it is one of the most accepted principles in economics and robustly explains pricing & trends in free markets. The GPU market is not an exception.
DG1 is basically the integrated graphics ripped off of Intel’s CPUs and put on a card…. they’ve talked about DG1’s performance already and it’s a joke. The reason they don’t want to talk about it and keep making it look flashy is that it is a marketing ploy to keep investors from dumping Intel stock despite Intel’s fabs basically being belly up for the past several years.
You see a company so desperate that they are restarting 22nm production and you think they can even come close to bringing a half-decent GPU to bear??? Micron also cut their ties with them over Optane and is proceeding to finally make their own products, and Intel no longer has a stake in that fab.
Basically Intel peaked and is now being bled dry… perhaps it will rise from its own ashes, but not anytime soon. Some people think their Foveros 3D stacked packaging will save them, but if you look at it, it is horribly designed… and it relies on Intel getting 10nm or 7nm working, for which there has been zero evidence. Even then it is questionable… and it will certainly only work in very low-power mobile applications, a market AMD is about to rip from Intel’s grasp.
Yes.
Where (I couldn’t find any performance information)?
GPUs scale very well. If you take the integrated graphics from an Intel CPU, make it wider, give it its own dedicated (GDDR5?) RAM so it’s not fighting for host RAM bandwidth, then increase the clock (not forgetting that the integrated version is constrained by a “fraction of 85W” power budget), it should be relatively easy for Intel to hit “mid-range Nvidia” performance (a rough back-of-envelope is sketched below).
Of course it will also depend on pricing, the quality of drivers, and the target market; and I suspect Intel will start conservatively (initially aiming at normal people, not the relatively insignificant “high-end gamer” niche).
Note: I’m also (possibly incorrectly) assuming it’s intended for desktop systems – it could be intended for HPC instead.
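That “make it wider and clock it higher” point can be put into very rough numbers. The sketch below is only a peak-FP32 back-of-envelope assuming Gen11-style EUs (8 FP32 FMA lanes each, which matches public descriptions of the architecture); the scaled-up EU count and clock are pure speculation, and it ignores memory bandwidth, drivers, and real-game behaviour entirely.

```python
# Peak FP32 throughput back-of-envelope for a hypothetical scaled-up Intel GPU.
# Assumes Gen11-style execution units: 8 FP32 lanes per EU, FMA = 2 ops/cycle.
# The discrete configuration is invented for illustration, not a leaked spec.

def peak_tflops(eus, clock_ghz, lanes=8, ops_per_fma=2):
    return eus * lanes * ops_per_fma * clock_ghz / 1000.0

# Integrated Gen11 (Ice Lake GT2): 64 EUs at roughly 1.1 GHz
print(f"iGPU:     ~{peak_tflops(64, 1.1):.1f} TFLOPS")

# Hypothetical discrete part: 4x the EUs at a higher clock (speculative)
print(f"discrete: ~{peak_tflops(256, 1.5):.1f} TFLOPS")
```

On paper, ~6 TFLOPS would land in the same neighbourhood as current mid-range Nvidia cards, which is consistent with the expectation above, but the paper number says nothing about whether the memory subsystem and drivers keep up.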
“Restarting”? I doubt Intel stopped 22 nm production. They always have used older processes for less performance sensitive silicon (chipsets, NICs, etc).
Intel is in a weird place at the moment, but all companies have peaks and troughs, and it would be foolish to assume a temporary trough is permanent. Note that people were saying similar things in the 2000s (when AMD released Athlons and Opterons with the memory controller on the chip and outperformed Intel, then followed that with 64-bit); and 10 years after people like you were saying “LOL, Intel is doomed”, AMD was almost dead.
Erm. Your “zero evidence” is that Intel started shipping 10 nm Cannon Lake chips in 2018.
The reference to it not being scaled up was in some article about DG1 being a concept GPU to “prove their ability to even make a discrete GPU”, and as such it is not scaled up at all, as that would have increased complexity… so yeah, it’s literally a PR stunt at this point, because they will encounter more problems when scaling up.
““Restarting”?”
Yes, restarting production of EOL CPUs on 22nm… of course the fabs don’t go poof, but at this point they are supposed to be producing things like chipsets, not new CPUs. Even AMD’s chipsets have moved from something like 50nm to 14/12nm (the I/O die is made at GlobalFoundries).
“Erm. Your “zero evidence” is that Intel started shipping 10 nm Cannon Lake chips in 2018.”
Yep, and they perform *worse* than 14nm in every aspect except density, but that doesn’t even matter as yields never improved… AFAIK they have not released a superior or high-yield version of 10nm to date.
cb88,
I was also curious where you got the performance information, as Brendan asked.
Well, I’m on the wait-and-see bandwagon. Intel’s integrated GPUs suffered from obvious bottlenecks in that they relied on shared RAM and competed with CPU cores for die space and thermal headroom. My inclination is to agree with Brendan that GPUs in principle should scale well by copy-and-pasting execution units and providing high-speed dedicated RAM. However, IMHO there’s too little information to even speculate where intel’s discrete GPU could land in terms of performance. I don’t put much stock in PR material, but I don’t see a reason intel couldn’t come up with a decent card if they don’t cut corners. Again, we’ll have to wait and see.
You’re moving the goalposts though 🙂
I’m also disappointed intel is slow to move to 10nm, but they do have a foot in the door and we’ll probably see more progress this year. I am cautiously optimistic that this competition between intel and AMD could spur innovation. Things have been dull (and overpriced) over the past couple of years.
Part of the reason Intel is releasing so much detail about Xe and DG1 at this time is that it’s going to be a central part of the Aurora supercomputer, built for the DoE’s Argonne National Laboratory, and the DoE wants developers clued in to the architecture early so they can hit the ground running, so to speak, when the computer is operational.
But, it is definitely going into consumer cards as well.
It very well could be for HPC. A couple of high-profile centers deployed Xeon Phi, and then it flatlined. Everyone still preferred CUDA.
There are lots of different things this could be for. There are lots of Xeons out there that get sold with Nvidia or AMD graphics cards, and that’s money Intel isn’t competing for.
VDI is certainly a thing that could benefit from cheap powerful enough GPUs that could be shared between VMs.
There is also a lot of graphics software optimized for Intel iGPUs, and if that software can use the DG1 without modification it should benefit, given that a discrete card should have a larger envelope to play in.
That’s not a bad thing. Their iGPUs are enough to power modern desktops and some small games. I have a couple of systems that just need more video outputs, and if this can deliver 3x DisplayPort at 4K while pulling <75W in a single-wide slot, I’ll probably pick a couple of these up.
Right now the best options for my Linux desktops are the AMD Radeon Pro WX2100, WX3100, WX4100, and WX5100.
“Some people think their Foveros 3D stacked packaging will save them, but if you look at it, it is horribly designed…”
Foveros is a pretty big deal, at least from a fab standpoint.
Whether or not Lakefield has a big impact is too early to tell. But seriously, claiming that a company like Intel, which has over 80% market share in some of the most profitable segments, has “peaked” is just ludicrous.
Nvidia sells drivers, not chips, much like Apple sells MacOS X, not Macs. It doesn’t matter how well AMD GPUs do on benchmarks, if you are serious about gaming you will buy the Nvidia card because the Nvidia card is less likely to muck things up when trying to run the latest PC game.
Considering Intel graphics drivers are notoriously bad, I wouldn’t hold my breath for a shakeup in the discrete GPU sector.
What a load of bull… the only reason Nvidia’s drivers are even relatively stable at the moment is that their architecture has stagnated for a while since AMD failed to compete. When they are forced to compete, their drivers are just as unstable as everybody else’s. Add to that that their driver is stuck in 2014, UI- and feature-wise.
Gaming-wise, Nvidia drivers are better than AMD’s, if anything because Nvidia works together with several game studios to make sure the games work right with the Nvidia drivers. No clue what professional users consider best.
I am talking about Windows just to be clear.
And anyway, Intel graphics are the worst of the bunch. Wouldn’t be surprised if this “discrete” GPU ends up in low-midrange laptops occupying the ecological niche between on-CPU graphics and discrete Nvidia and AMD graphics. Overall, a smart move from Intel, since they can pursue that market without having to increase the die area of the CPU+GPU combo in a single SKU. Or they could cut out the GPU entirely from the CPU since they can’t get the GPU to work on 10nm.
NVIDIA sees itself more as a software company than a HW one.
Their driver team is huge.
I don’t think people appreciate how much smaller AMD really is with respect to their two main competitors; Intel and NVIDIA.
Apple sells iOS development machines, not Mac OS X, or Macs of any sort.
There are people who want a Mac OS X computer so they can use Final Cut and Garageband (and don’t want to use any Windows equivalents). Then there are people who just want an integrated experience so they don’t end up as a human ping-pong ball when something doesn’t work between the Dell hardware and the Windows OS (where each party blames the other and both of them blame the driver vendor, who has no interest in helping because they already got paid and don’t care). With Macs, it’s their hardware, their OS, their drivers, so eventually they are shamed into fixing integration problems.
You really need to watch AdoredTV’s video on nvidia drivers:
https://www.youtube.com/watch?v=dE-YM_3YBm0
That was back in 2007, in the early days of Vista, when not many drivers worked right with it. Again, it doesn’t matter. As long as Nvidia works with game studios to make sure their drivers work with the games, they will have the upper hand.
Ah, Internet Time. First discrete GPU? Hardly. https://en.wikipedia.org/wiki/Intel740
Was waiting for this post. Factually true, but in reality it was a short-lived experiment that didn’t affect history.
Remember Nvidia is accused of stealing a bunch of trade secrets from AMD: https://www.theverge.com/2013/1/16/3881826/amd-accuses-bob-feldstein-of-stealing-documents-nvidia
Intel’s GPUs will most certainly end up as a way for them to stave off Ryzen iGPUs, but there’s no way they’ll suddenly have the ability to compete with a company that seems to have all the high end GPU trade secrets at the moment.