AMD’s benchmarks showed that the top Ryzen 7 1800X, compared to the 8-core Intel Core i7-6900K, both at out-of-the-box frequencies, scores identically on the single-threaded test and 9% higher on the multi-threaded test. AMD put this down to the way their multi-threading works compared to the Intel design. They also pointed out that the 1800X is half the price of the i7-6900K.
If these promises and benchmarks hold up, Intel will be facing some incredibly tough competition on the desktop/laptop side for the first time in a long, long time.
Intel needs a new CEO; the one in place managed the company by the numbers and destroyed innovation and development.
Research isn’t profitable? Fuck the future, let’s fire 12,000 engineers!
The Win-Tel pairing is losing steam, and Intel needs to diversify.
Seems to be a Donald Trump symptom: only the numbers matter. Intel thought they could get away with no R&D department. Guess they were wrong. Now it will take two years to catch up to where AMD is today.
Intel will see a 45 degree downward slope. Companies will be trying to get out of their long-term contracts.
And Intel will rebrand the same chip so as to match AMD’s offering.
I’ll withhold judgment until some third party does a comprehensive benchmark instead of taking AMD’s word for it. Even Bulldozer beat contemporary Intel CPUs in certain highly parallel benchmarks such as x264.
According to the link, they used Cinebench. Two things come to mind:
1) It’s highly parallel (an AMD advantage).
2) Did they also factor the results of the OpenGL benchmark that Cinebench comes with into the score? That would be another AMD advantage, since Intel’s integrated GPUs are worse than AMD’s. It doesn’t really matter for CPUs like these, which will most likely be paired with a discrete GPU, but it is a good way to inflate the score if GPU performance was counted.
So… let’s wait for some third party to do some independent, comprehensive benchmarking (including highly parallel tests like x264 and Cinebench, but also less parallel ones like browser tests and games) and we’ll see.
Just to be clear, anything from a 15% deficit against Intel’s best CPU on up (in less parallel tasks) is a major win for AMD. The days when geeks bought the very best CPU are over; now they buy a powerful-enough CPU (and a powerful-enough GPU) and save the money for a high-end smartphone, which also gives better bragging rights.
Edited 2017-02-23 00:41 UTC
If the AMD chip comes within 10–15% of the performance of the Intel equivalent, I will vote with my wallet.
I am a senior, retired, and I don’t need to overpay for CPUs so as to provide million-dollar salaries to senior executives.
I hope they hit a home run, but to be fair there is a lot of spin on this.
They compare it to an i7-6900K, but its real competition is actually the i7-7700K (a $349 processor). Almost no one buys 6900Ks outside of the workstation market.
Sure, with double the core count Ryzen should demolish the 7700K on heavily multi-threaded workloads – but I suspect the 7700K will still easily win on single- and lightly-threaded workloads like games and the stuff most people run at home (at least at stock clock speeds), at a similar TDP, and for significantly less money.
Anyway, if you go into it wanting an 8-core monster this may end up being the way to go, but I think most of the market is quite content with 4 cores for most stuff. It isn’t that 8 cores is bad, of course; it’s more that they are missing out on a big part of the market by not offering anything less (yet).
I think it will all come down to overclockability for the gamer/enthusiast market. If Ryzen can hit 4.5GHz or higher, they will have a winner. If not, at $500, it isn’t all that compelling to me. Unless your workload is video encoding or other heavily multi-threaded stuff, the 7700K will probably be a better choice for most people.
We’ll see, I guess. I’m pulling for them, though. Competition is a good thing.
Edited 2017-02-23 01:04 UTC
Which wins in ‘home’ usage really depends. While Windows software is still dragging its arse toward the level of parallelization that pretty much every other modern OS has, there are still quite a few things that do a great job. Recent versions of Office actually make decent use of multiple cores, and web browsers other than IE have been doing it for years (and I’ve checked: Edge uses multiple cores much better than IE does). The problem ends up being all the developers who perpetuate the view that, on client systems, multiple cores are for multitasking, not for getting things done more efficiently.
As far as games go, it depends on the specifics there too. There are quite a few games that do benefit from parallelization, and the good studios are moving further in that direction. In fact, the only new games I’ve seen for quite a while that don’t make good use of multiple cores are PC-only titles, since PS4 and XB1 games pretty much have to use multiple cores to do anything useful.
Now, all of that said, what I’m interested to see is memory bandwidth comparisons (which are for some reason always missing from these types of press releases even for server processors).
I think that might be their silver lining: recent and upcoming console ports to PC. Games coming from the Xbox One/PS4 originally targeted 8-core Jaguars, so Ryzen having 8 cores as well will likely mean lazy ports run much better on it than on a 4-core Intel chip. Then again, it may not matter: Jaguars are pretty slow in the first place. I doubt a 7700K, even with only 4 cores, will end up being the bottleneck for any console port; having almost double the clock speed more than makes up for it.
Check the specs: a high-end 4-core Intel model is 8 threads of execution (4 cores with 2-way HT), an 8-core Jaguar CPU is 8 threads of execution (8 cores, no SMT), and an 8-core Ryzen is 16 threads of execution (8 cores with 2-way SMT). Those 16 threads of execution are part of why the Ryzen CPUs are such a big deal; the only way to get that on an Intel CPU these days is to buy a $1000+ Xeon.
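The thread math above is just cores times SMT ways per core; a throwaway sketch (the core/SMT figures are the publicly listed configurations of the chips being discussed, not anything measured here):

```python
# Hardware threads = physical cores x SMT/HT ways per core.
def hw_threads(cores, smt_ways):
    return cores * smt_ways

i7_7700k = hw_threads(4, 2)  # 4 cores, 2-way Hyper-Threading -> 8 threads
jaguar   = hw_threads(8, 1)  # 8 cores, no SMT                -> 8 threads
ryzen    = hw_threads(8, 2)  # 8 cores, 2-way SMT             -> 16 threads
```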
Now, that said, 8 real cores are far superior for actual parallelization to 4 cores with 2-way multi-threading for anything but thread-pool based workloads (which are rare on client systems). But I think the big impact will be that Ryzen’s SMT is probably more like Bulldozer’s CMT (shared-nothing unless you are doing 256-bit FPU operations) than Intel’s Hyper-Threading (almost everything except registers shared and multiplexed by hardware).
From what I have read it is the opposite – it is more like HT (shared everything except the registers).
Having 30 tabs (or 1500 tabs for that matter) of idle web pages is going to cost you 0% CPU time. But point taken, the average computer today does virtually nothing multithreaded.
What you are missing is why this is: four cores simply don’t offer enough of a speed advantage to truly push applications to use more cores.
If we finally get some competition between AMD and Intel, maybe that will force them to start releasing 8 or 16 core CPUs. Suddenly a new range of things gets more realistic to do real time. For those doing video editing I’m sure they’ll appreciate if everything is 4 times as fast.
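One way to see why more cores don’t automatically mean “4 times as fast” is Amdahl’s law. A quick back-of-the-envelope sketch (the 90% parallel fraction below is an illustrative assumption, not from any benchmark):

```python
# Amdahl's law: overall speedup on n cores when only a fraction p
# of the work can run in parallel; the serial remainder (1 - p)
# caps the benefit no matter how many cores you add.
def amdahl_speedup(p, cores):
    return 1.0 / ((1.0 - p) + p / cores)

# A workload that is 90% parallelizable:
#   4 cores  -> roughly 3.1x
#   16 cores -> roughly 6.4x (nowhere near 16x)
```

So even a fourfold core-count jump only pays off fully for workloads that are almost entirely parallel, like video encoding.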
That’s severely disappointing if that’s the case.
ahferroin7,
I think galvanash’s post speaks to your earlier point where you said “However, it does help with multitasking, which is a very common desktop usage pattern.”
Anyway, for #1 and #2, I have to wonder if GPGPUs and vector processors are a better approach for most kinds of engineering workloads. Even running lots of CPUs in parallel still doesn’t get you the kind of performance you can get from dedicated vector processors, because general-purpose processors scale badly over shared memory.

A few years ago I conducted an SMP experiment iterating a simple “multiplication kernel” across a large data set. This operation is “embarrassingly parallel”, so in theory performance should grow linearly with the number of CPUs. In practice, I found that adding more processors yielded much less than linear gains, because they were all bottlenecked by RAM. Granted, some kinds of operations should scale well on massive SMP, especially longer calculations that can operate entirely within cache and incur less memory contention. But a GPU is far better designed for performing billions of simple operations efficiently, and since many engineering problems boil down to performing simple calculations on billions of data points, I think most highly parallel applications are headed to vector processors rather than SMP.
For #3, SMP is very useful for VMs, although it’s not really something most desktop users need.
IMHO games and AI are the best candidates for large SMP for normal desktop users, although in practice most games I’ve seen don’t take advantage of it since graphics are left to the GPU and the game logic seems to be bound to one or two cores. It’s interesting to ponder the possibilities though.
Edited 2017-02-24 18:57 UTC
Finally! and god bless!
Is it gonna be the noughties all over again? Will Ryzen be the T-Bird of the Gen-X/Millennial era?
Maybe. Competition is certainly good for the market and us plebs.
I’ve always been fascinated by AMD’s clever architecture choices, though they’ve been plagued by implementation difficulties and have lagged a bit behind with GlobalFoundries.
Buying ATI and going the APU route was really the right choice; mixing CPU and GPU architectures closer together was the natural evolution, and AMD embraced it first.
If they can make several different versions of the Ryzen architecture, they can now more easily target various markets. Here’s hoping for new AMD-based laptops soon.
And don’t forget that while Intel sold off XScale, AMD is part of the new ARM architecture consortium, so I bet they’ll reap some reward from that in the long run.
AMD after (RY)ZEN launch: “I live… AGAIN!”
Intel might have slightly better processors, sure, but I remember when Intel had no competition on the consumer market. Because of that I always buy AMD when I’m looking for an ~x86 processor.
I’ve been looking at upgrading my home server for a while, but now I’m glad I waited. I’d been looking at a decent quad-core Xeon E3, but I’ll now be able to get an 8-core, 16-thread CPU with essentially the same MIPS rating per thread that also supports ECC RAM (the only reason I even considered shelling out the money for a Xeon) for essentially the same cost and a marginally lower TDP. I’ll probably end up having to get a discrete GPU, but I was going to anyway, since the Xeon I had been looking at didn’t have one built in and I’m not going to shell out extra money for a motherboard with an integrated GPU that Linux barely supports (sometimes I wonder whatever happened to Matrox server GPUs).
Hi,
Note that a “performance vs. performance” comparison is mostly irrelevant.
The comparison that matters is more like “CPU performance + GPU performance + heat + price vs. CPU performance + GPU performance + heat + price” (with various weighting factors for each component that reflect the intended usage).
If Zen is slightly slower than an equivalent Intel CPU, but the price is lower or the integrated GPU is better, then maybe Zen “wins” despite being slightly slower.
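As a toy sketch of that weighted comparison (every number and weight below is made up purely to illustrate the shape of the argument, none of it is benchmark data):

```python
# Weighted "which CPU wins" score: higher is better for every metric,
# so cost-like inputs (price) are inverted before being passed in.
def weighted_score(metrics, weights):
    return sum(weights[k] * metrics[k] for k in weights)

# Hypothetical figures: Zen 5% slower on CPU, neither has an iGPU here,
# Zen at half the price ("value" = 1 / price in dollars).
zen   = {"cpu_perf": 0.95, "gpu_perf": 0.0, "value": 1 / 500}
intel = {"cpu_perf": 1.00, "gpu_perf": 0.0, "value": 1 / 1050}

# Weights reflecting a price-conscious buyer's intended usage.
weights = {"cpu_perf": 1.0, "gpu_perf": 0.2, "value": 200.0}
```

With these (made-up) weights, the price term outweighs the 5% CPU deficit and Zen “wins” overall despite being slightly slower.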
– Brendan
Zen doesn’t have an integrated GPU, at least not the launch models. APUs with integrated graphics will (probably) come later.
https://en.wikipedia.org/wiki/Harvard_Mark_I#Design_and_construction
https://en.wikipedia.org/wiki/Apollo_Guidance_Computer#Design
“Slightly slower” is still outrageously more powerful than what was used to change mankind’s history. So I bet I can accept this “relatively” lower power to type up my CV, browse some pr0n, and play Return to Na Pali.
Looking at the die photo, it’s clear the “eight-core” processor is really two quad-core processors on the same die, with better sharing of non-processor resources. This is how these things go: most early quad-cores were just two dual-cores on the same die, often with no sharing beyond a common external bus.
Edited 2017-02-25 16:34 UTC