AMD isn’t only getting back in the game on processors – they have also finally unveiled Vega, the new line of Radeon graphics cards. AnandTech benchmarked the two cards and concludes:
Unfortunately for AMD, their GTX 1080-like performance doesn’t come cheap from a power perspective. The Vega 64 has a board power rating of 295W, and it lives up to that rating. Relative to the GeForce GTX 1080, we’ve seen power measurements at the wall anywhere between 110W and 150W higher, all for the same performance. Thankfully for AMD, buyers are focused on price and performance first and foremost (and in that order), so if all you’re looking for is a fast AMD card at a reasonable price, the Vega 64 delivers where it needs to: it is a solid AMD counterpart to the GeForce GTX 1080. However, if you care about the power consumption and the heat generated by your GPU, the Vega 64 is in a very rough spot.
On the other hand, the Radeon RX Vega 56 looks better for AMD, so it’s easy to see why in recent days they have shifted their promotional efforts to the cheaper member of the RX Vega family. Though a step down from the RX Vega 64, the Vega 56 delivers around 90% of Vega 64’s performance for 80% of the price. Furthermore, when compared head-to-head with the GeForce GTX 1070, its closest competition, the Vega 56 enjoys a small but nonetheless significant 8% performance advantage over its NVIDIA counterpart. Whereas the Vega 64 could only draw to a tie, the Vega 56 can win in its market segment.
Vega 56’s power consumption also looks better than Vega 64’s, thanks to binning and its lower clockspeeds. Its power consumption is still notably worse than the GTX 1070’s by anywhere between 45W and 75W at the wall, but on both a relative basis and an absolute basis, it’s at least closer. Consequently, just how well the Vega 56 fares depends on your views on power consumption. It’s faster than the GTX 1070, and even if retail prices are just similar to the GTX 1070 rather than cheaper, then for some buyers looking to maximize performance for their dollar, that will be enough. But it’s certainly not a very well rounded card if power consumption and noise are factored in.
So, equal performance to Nvidia’s competing cards at slightly lower prices (we hope), but at a big cost: far higher power consumption (and thus, I assume, heat?). For gaming, Nvidia is probably still the best choice on virtually every metric, but the interesting thing about Vega is that there’s every indication it will do better on other, non-gaming tasks.
It’s still early days for Vega.
“there’s every indication it will do better on other, non-gaming tasks.”
That’s because they have to compete in every segment with exactly the same silicon, unlike Nvidia. What they’re doing here is a similar strategy to the one they follow with Ryzen on the CPU side.
I tried moving to AMD with the RX 580 earlier this year, as the card looked good (on par with a GTX 1060 in most games) and my monitors support FreeSync.
Unfortunately I had lots of game crashes, and MarkdownPad Pro wouldn’t work correctly (it’s basically a small GitHub-compatible Markdown editor and preview app; there’s a problem with certain drivers, Windows, and WPF that just doesn’t exist with Nvidia, for reasons I don’t care about). Games took twice as long to load as on my GTX 960 machine (why, I have no idea). I ended up returning the card after it overheated while playing Rocket League, which runs fine on my girlfriend’s notebook with a crappy Intel GPU.
I ended up just putting down the cash for a GTX 1080 Ti. Not a single game crash, and the card doesn’t overheat.
A shame, really, especially since Ryzen looks like it will be a good upgrade to this 2014 i7 machine. I can run Crysis 3 at high/ultra at 4K without problems.
I’ve had problems with every ATi/AMD card I’ve tried since the 9800 Pro, which really kicked Nvidia in the teeth at the time, considering how awful the GeForce FX 5800 series was.
Sounds like you probably had leftovers of old drivers… that still happens on Windows.
Long-time fan of AMD here.
They had a great run until the Core i3/5/7/9 era.
Looks like Ryzen/Threadripper/Epyc is doing well,
and Vega is at least putting up good numbers again.
I’m sitting on an i7-860 right now with a Radeon HD 6870, no issues; my buddy in the same room is on a two-processor ProLiant DL380 G6 with a Radeon EAH5450, no issues…
I have had issues with some older AMD cards a while back, but I’ve also had some really inexpensive GeForce cards fire capacitors at me in the past… no fun!
Really digging the exciting turns from each vendor this time around. Seems like everyone is starting to bring their A game again …
Both Vega cards, with the very latest not-yet-in-mainline kernel + Mesa drivers, outperform AMDGPU-PRO in everything but Vulkan, and fixes for Vulkan are coming by Linux 4.14! If you’re looking for a Linux GPU, Vega definitely looks like the way to go from a system-integration point of view – Nvidia’s blobs always sucked at everything but performance. Five years after the big overhaul started, AMD’s open source stack has finally just about caught up to Nvidia’s.
Phoronix link with more details and benchmarks here: http://www.phoronix.com/scan.php?page=article&item=rx-vega-linux1
As far as I know, the open and closed drivers share the same code; the closed ‘Pro’ version just comes with additional components. So the performance discrepancy should be down to an older codebase or something.
And if you install the experimental DC kernel from a PPA … you can actually get working HDMI audio on Polaris and up (although there are still some bugs and annoyances).
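If you want to verify that the DC display path is actually active on your install, here is a minimal sketch (in Python) of how you might check. It assumes the kernel exposes amdgpu’s `dc` module parameter through sysfs, as the DC-enabled builds do; paths and parameter values may differ on other setups.

```python
# Minimal sketch: check whether the amdgpu kernel module is loaded and whether
# its Display Core (DC) path -- the code that enables HDMI/DP audio -- is on.
# Assumption: a DC-capable kernel exposing the usual sysfs module-parameter files.
from pathlib import Path

def read_param(name: str) -> str:
    p = Path("/sys/module/amdgpu/parameters") / name
    return p.read_text().strip() if p.exists() else "<not present>"

if Path("/sys/module/amdgpu").exists():
    print("amdgpu module: loaded")
    # "1" (or "Y") means DC is enabled, "0" disabled, "-1" means auto-detect.
    print("dc parameter:", read_param("dc"))
else:
    print("amdgpu module not loaded")
```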
One of the most exciting things to watch for from AMD in the next year, IMHO, is the new “Raven Ridge” APU that targets laptops with combined Ryzen and Vega cores, potentially allowing AMD to give Intel a run for its money in the high-end ultraportable market. However, looking at these Vega cards, it doesn’t seem like the architectural changes focus on power efficiency at all, which is exactly what’s needed for laptops and exactly where AMD has historically come up short. I suppose the APU versions of Vega will be smaller and more efficient, but without architectural improvements in this area it seems questionable whether AMD will have anything the MacBooks, XPSes, and Surfaces of the world can embrace in the near future.
The results should at least be normalized against each GPU’s GFLOPS rating. Use TechPowerUp’s GPU database and look at the numbers: https://www.techpowerup.com
Vega’s high TDPs are a consequence of its much higher compute (GFLOPS) throughput.
If you’re going to compare GPUs for gaming potential, you also have to factor in the GFLOPS (compute) rating and whether the game software itself is leveraging the compute performance of the GPU. You can still be in a CPU-bound scenario with a fast CPU coupled to a GTX 1080, due to bandwidth limitations of the CPU/GPU interface and because the game engine relies too much on the CPU for floating-point calculations instead of offloading enough of them to a GPU compute pipeline.
When factoring in GPU compute performance, the Vega 64 should really be compared with the GTX 1080 Ti, and the Vega 56’s compute performance is significantly better than that of the GTX 1080.
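As a rough illustration of that normalization, here is a small Python sketch that divides a measured frame rate by each card’s single-precision throughput and board power. The TFLOPS and board-power figures are approximate spec-sheet values (of the kind listed in TechPowerUp’s database); the FPS figures are hypothetical placeholders you would replace with your own benchmark results.

```python
# Rough sketch: normalize gaming results by compute throughput and board power.
# TFLOPS and board-power values are approximate spec-sheet figures; the FPS
# values are hypothetical placeholders -- substitute your own measurements.

cards = {
    #               FP32 TFLOPS  board power (W)  measured FPS (placeholder)
    "RX Vega 64":   (12.7,       295,             100.0),
    "GTX 1080":     ( 8.9,       180,             100.0),
    "RX Vega 56":   (10.5,       210,              90.0),
    "GTX 1070":     ( 6.5,       150,              83.0),
}

for name, (tflops, watts, fps) in cards.items():
    print(f"{name:>11}: {fps / tflops:5.2f} FPS per TFLOP, "
          f"{fps / watts:5.3f} FPS per watt")
```

A card with a high FPS-per-TFLOP figure is extracting more gaming performance from its raw compute than one with a low figure, which is the point being made here: a plain FPS chart hides how much of Vega’s compute throughput games leave unused.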
With the direct-to-the-GPU, bare-metal paradigm pushed by graphics APIs like Vulkan, Metal, Direct3D 12, and Mantle, the GPU’s floating-point compute performance will play a more important role in overall performance (relative to the CPU).
The ideal is a game engine that adequately saturates both the rendering and compute “data lanes”; that would provide a good basis for reviewing GPU hardware in a gaming context.
Another complication is that games are optimized to different extents for different hardware (AMD versus Nvidia).
At least with mining cryptocurrencies you have the potential to tap into the full compute power of the card, which makes the GPU investment worthwhile.
In the more complicated gaming scenario, how much are you willing to pay for the newest GPU when the game engines you run may never access the card’s full render/compute potential?
I suppose computers are better optimized as mining rigs than as gaming rigs; the former deals with much simpler code than the latter (a mining program versus a game engine).
At the end of the day, both AMD and Nvidia video cards will give a good gaming experience; the technology is there and working. However, I find it foolish to get nit-picky, or even too serious, about game benchmarks on personal computers, which are designed for a wide variety of tasks and are certainly not optimized just for gaming.
Gaming consoles are another matter, since they are meant to be optimized for gaming, so their benchmarks can be taken more seriously. However, consoles have their own issues with upgradability and shelf life.
The ideal scenario would have the hardware and software actually working together, without having to hypothesize. The consumer doesn’t care whether the card is more powerful on paper. Keep faith out of computer pragmatism.