AMD surpassed rival Intel’s market cap on Friday.
AMD stock rose over 3% for the day, giving the chipmaker a market capitalization of $153 billion. Intel fell nearly 9%, a day after disastrous earnings that missed expectations for profit and showed declining revenue. Intel’s market cap was $148 billion at the end of trading on Friday.
The shift is mostly symbolic, but it signifies a much more competitive market for PC and server chips, where the two companies compete directly.
I don’t report on financials anymore (unless it’s something truly unique), but I wanted to call this one out simply because it highlights just how well AMD is doing. Only a few short years ago, this would’ve been unimaginable. Meaningless, sure, but a sign of the times nonetheless.
~90% of this can be attributed to gross, even malignant, Intel mismanagement rather than to AMD doing well.
Intel didn’t do well, but that claim is unfair to AMD. They did a hell of a job.
satai,
Yes, AMD did well, but since they are fabless it makes sense to give some credit to the fab too. TSMC was mass producing sub-10nm wafers while Intel’s own fabs were seriously struggling with it. It took Intel too long to shrink their process down, and this led to some major wins for their competitors. Ultimately, as long as things are trending towards more competition, I think it should be welcomed by consumers. I do wish Apple CPUs were available to OEMs rather than being limited to Apple products, since that would give us more choices, but even so, at least we are seeing more competition for now.
While what you said is true, let’s be honest here… TSMC could be handing AMD 3nm wafers and it wouldn’t mean squat if they were still using the Bulldozer arch, and this is from someone who bought an FX-8320. By coming up with a completely new design with good IPC, low power usage, and SMT, AMD made a seriously great chip with Ryzen. Even their first-gen Ryzen chips are still damn good to this day, and they just keep getting better.
While they were certainly helped by Intel’s “let’s just pump up clocks and voltage to FX-9590 levels so we can keep shipping 14nm chips” strategy, I don’t think AMD would have had any trouble selling Ryzen chips regardless. The only place I thought AMD dropped the ball is that they really should have been cranking out more mid-range APUs; during the GPU shortage you couldn’t find an AMD APU to save your life. Then again, with limited chip supply I can see why they focused their APUs on laptops.
But there is also one thing not enough people give AMD credit for, and that is the long life of their sockets; it’s what switched me from team blue, along with many others I’ve talked to. Having to toss your whole rig every couple of years with Intel was getting insanely expensive, but with AMD? I started with an R3 1200, progressed to an R5 3600, and just got a BIOS update adding support for the 5xxx series, so all I need to do to upgrade is swap chips. For a board I got in 2018? Not going to complain. It’s why I don’t care how cheap Intel makes their chips; when this AM4 system bites it, I’ll go AM5.
bassbeast,
I agree, but there was a substantial fab advantage too, so I think credit for the end product needs to be shared.
No doubt about that. The shortages were not their fault, but perhaps they had some leeway in which products to put out. I wonder what role binning plays in all of this: if the fab is producing good yields and demand is very high, then there’s not much incentive to restock the lower bins.
Yeah that’s fair.
I would never again buy an nvidia GPU (they just break) or an Intel CPU (they change sockets too often). I went from a 44-core Intel system to an AMD 5950X and got the same performance at much lower wattage. Same with my GPU: I went from 4x 1080 Ti to a single gosh darn 6900 XT, and not only is it faster, it is a lot quieter as well. The GPU upgrade was necessary anyway, as the nvidia cards kept dying, and replacing them all the time was costing me a fortune. Another point is the open drivers: AMD cards are awesome in Linux and Haiku, so much so that I do not even have Windows installed (even as a VM) anymore.
Disclaimer: I still have Windows 98 in a VM to play Drakan and Diablo: Hellfire once in a blue moon.
Don’t forget third-party driver support with AMD, which you don’t get with Nvidia. I’ve been using the Amernime drivers and they have been great; they bring support for even the old TeraScale cards to Windows 10 with hardware acceleration, which is a great way to revive a C2Q or Phenom II system by giving it a hardware-accelerated kick in the pants.
They just break? I didn’t realise that was a thing. I don’t think I’ve ever personally had an nVidia card die on me and we use farms of them at work too. Curious.
Under-phil,
I was thinking the same thing. I’ve never had a GPU fail on its own. (I’ve had a power supply kill one, but that’s not the GPU’s fault.) Obviously faults do happen; I specifically remember widespread issues with RTX 2080 Ti cards that shipped with faulty memory. But if you’re replacing cards over and over again, that doesn’t seem normal to me, and maybe there is another cause, like a bad power supply, very bad case temps, or maybe even a bad motherboard/PCIe slot?
That said, I wouldn’t mind supporting AMD over nvidia due to nvidia’s stance on proprietary Linux drivers. I feel kind of dirty for choosing nvidia over AMD, but… CUDA.
Things could be getting better on the driver horizon…
https://www.makeuseof.com/nvidia-open-sources-gpu-driver-for-linux/
These aren’t fully open source drivers, but they should help in providing a stable GPU interface that doesn’t get broken every time the kernel updates.
Alfman,
From what I read, the “open source” driver is really just a very interesting API approach.
They moved the actual driver onto the GPU itself, and this kernel module is just the “glue” in between.
Still better than not having it, though.
sukru,
Yes, it’s not so much an open source driver as an open source driver interface. You still need proprietary code to run nvidia’s graphics & compute stacks. Obviously it won’t please those who want real FOSS drivers, but at least it will (or should) solve the driver breakages.
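For what it’s worth, you can tell which flavor you’re running from the kernel module’s license string: the open kernel modules are dual MIT/GPL licensed, while the legacy blob reports “NVIDIA”. A minimal sketch, assuming modinfo is on your PATH and an nvidia module is installed:

# Check whether the loaded/installed nvidia kernel module is the
# open-source flavor. The open kernel modules declare "Dual MIT/GPL";
# the legacy proprietary driver declares "NVIDIA".
import subprocess

def nvidia_kernel_license() -> str:
    result = subprocess.run(
        ["modinfo", "--field=license", "nvidia"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(f"nvidia kernel module license: {nvidia_kernel_license()}")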
Hi, how are you running Windows 98? In VirtualBox I get a very sluggish interface; I’m afraid to even think about games.
Congrats to AMD! Not sure it’s warranted, but still.
With regards to “never had a GPU fail”: uh, at least some here are old enough to know… there was a time when I’m pretty sure you could practically guarantee an Nvidia card failure. Also, even over the past several years, there have been deliveries of card lines with bad memory. So, at least, that’s still a thing.
chriscox,
Can you be more specific regarding years & models? I believe my first nvidia was probably in the 2000s and I’ve had quite a few since. I doubt the situation for nvidia is all that egregious overall, but actual data would be interesting for sure. I wish manufacturers could be compelled to publish return/failure/RMA rates by law so that consumers could be more informed. IMHO consumers deserve to have the real data.
Ahhh, children, so young as to not remember the infamous Bumpgate (and the royal screwing Nvidia gave the customers they sold known-faulty products to). I ended up getting so many free, practically new PCs: people would get super PO’d when the graphics card in their OEM PC bought the farm and they got told it was 2 seconds out of warranty, so they would toss the whole machine on the curb, and my friend who worked at the city reclamation center would grab ’em for me. Ahh, good times those were… not for Nvidia customers, but for me: a simple swap to an AMD card and voila! Good as new.
As for models? I believe the Bumpgate GPUs were the 8xxx series (since you probably haven’t heard of it: Nvidia cheaped out on the bump solder connecting the GPU die to the substrate, so after a limited number of hot/cold cycles the bumps would crack, hence the name Bumpgate). They’ve also had spats with bad memory on some of the 4xx and 5xx cards, and was it the 8xx or the 9xx cards where they pulled that kinda scummy trick of advertising a “4GB” card where only 3.5GB was available at full speed and the remaining 512MB ran much slower, which made some games crash or glitch? I can’t recall off the top of my head.
But while it may be anecdotal, I saw enough dead Nvidia GPUs working at the repair shop that I quit buying them around the time ATI was releasing the HD 4xxx cards. We used to get a ton with memory errors, like white noise in the picture or the dreaded “green screen of death”, while the only dead ATI (and later AMD) cards we would get had had something horrible done to them, like the R9 280X a guy had plugged straight into the socket during a lightning storm. In fact, that reminds me: I have a 1060 3GB I really need to get around to listing on eBay for parts. I pulled it from a customer’s PC; the GPU cooler is in great shape, but it has the white noise, which means a dead memory chip.
bassbeast,
I’m finding lots of links where people talk about it, but few technical specifics about the actual engineering faults. It reminds me of the Xbox 360’s “red ring of death”, which involved ATI GPUs.
…actually I found this on archive.org:
https://web.archive.org/web/20090525055952/http://www.theinquirer.net/inquirer/news/1004378/why-nvidia-chips-defective
Page 2 of the link shows high temps would result in severe structural weakening as the chips approach 80C. Maybe I got lucky; that 40% failure rate could easily have included me, but I never overclocked, and maybe I didn’t push my cards hard enough either. I don’t skimp on cooling in my systems because I’m not comfortable when temps get that high, though I concede that most OEM/boutique builds have terrible cooling, often with no direct airflow at all! Still, that’s no excuse for components to fail under their designed throttling points, and I can’t blame anyone for being wary after being bitten. With that said, though, I don’t see evidence of systemic and disproportionate nvidia GPU failures in modern products.
The active gripe I have with nvidia right now is their refusal to expose all the temperature sensors on Linux. This is a problem because it means we cannot program case fans to react to high GPU memory junction temps (which can exceed 100C) on Linux like we can on Windows. Nvidia’s official position is that this is safe because the card will automatically throttle itself, but it is stupid to have no way to cool things intelligently.
https://forums.developer.nvidia.com/t/request-gpu-memory-junction-temperature-via-nvidia-smi-or-nvml-api/168346
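The best you can do today is react to the core temperature, which nvidia-smi does expose. A rough sketch of what I mean, assuming the hwmon path below is replaced with your actual fan controller (enumerate /sys/class/hwmon to find it; writing pwm values needs root):

# Poll the GPU core temp via nvidia-smi and drive a case-fan PWM value
# through the hwmon sysfs interface.
import subprocess
import time

# Hypothetical hwmon path; the correct one varies per motherboard.
PWM_PATH = "/sys/class/hwmon/hwmon2/pwm1"

def gpu_core_temp_c() -> int:
    # temperature.gpu is the core sensor; the memory junction temp is the
    # one nvidia doesn't expose on Linux, which is the whole gripe above.
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip().splitlines()[0])

def set_case_fan(duty: int) -> None:
    # Most hwmon fan controllers take a PWM duty cycle of 0-255.
    with open(PWM_PATH, "w") as f:
        f.write(str(max(0, min(255, duty))))

while True:
    temp = gpu_core_temp_c()
    # Crude linear ramp: ~40% duty at or below 60C, full blast at 85C+.
    set_case_fan(max(100, min(255, (temp - 60) * 255 // 25)))
    time.sleep(5)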
They are doing well, and in many ways I’m glad for it; I’ve been a fan since the Athlon era, tbh. But I do wonder how much of this value is built on crypto mining, a massively energy-intensive (and very much non-green) industry that has inflated GPU costs and distorted market demand. AMD have fully embraced that market, of course, releasing mining-specific firmware.