In the majority of controlled tests, AMD has achieved something it hasn't managed in almost 15 years, since the tail end of the Athlon 64's reign in 2005: a CPU microarchitecture with higher performance per clock than Intel's leading architecture. Zen 2 finally reaches this symbolic mark, if only by a hair, with the new core improving IPC by 10-13% compared to Zen+.
Having said that, Intel still very much holds the single-threaded performance crown, by a few percent. Intel's higher achieved frequencies and its continued lead in memory-sensitive workloads remain areas AMD has to work on, and future Zen iterations will have to improve further to have a shot at the ST performance crown.
Beyond this, it’s remarkable that AMD has been able to achieve all of this while consuming significantly less power than Intel’s best desktop chip, all thanks to the new process node.
AMD's brand new Zen 2 processors are nothing short of a slam dunk, and the desktop processor market hasn't been this exciting and competitive in 15 years. I'm contemplating building a small light-load workstation for my new office, and there's no way it won't be team red, since AMD offers amazing value across the board – low end, mid range, and high end.
The big disappointment for team Intel has been the lack of progress on process technology; it's been too long since their last die shrink. That Intel still holds the title for single-threaded performance is kind of remarkable, though. For a lot of games, single-threaded CPU performance is what matters, although on the other hand we're talking about academic benchmark differences that mere mortals would have a tough time perceiving in real games.
AMD has done a phenomenal job at competing on massively parallel cores. These can be useful for plenty of data center applications. However, IMHO, they still haven't found a killer desktop application for consumer use cases. While it's true that some graphics-intensive applications can benefit from CPU parallelism, I think we're evolving toward parallelism in the form of GPU/GPGPU offload engines instead of SMP CPUs, which does away with the hairy SMP scalability issues. In the GPU model, the CPU is mostly a front loader for the GPU and is rarely itself a computational bottleneck. I don't know if this will change much in the future, but most games benefit much more from single-threaded performance than from high core counts (in part because they use the GPU for highly parallel tasks).
So, technically these huge core counts would be great for complex graphics processing in the absence of GPUs. But in the era of massively parallel GPUs, what’s the killer application for massively parallel CPUs?
For one of my projects, I was working on a 32-core Xeon server to process a lot of analog radio streams in real time, but that wasn't a consumer application. I'm curious if anyone has interesting applications in mind!
VirtualBox is the killer application for massively parallel CPUs. Period.
http://www.emulators.com/
Kochise,
For the data center, absolutely. I use virtualization (though not VirtualBox) quite a bit in my work. However, for desktop virtualization, no matter how many operating systems are running, most end users do not use them all concurrently. Maybe a few VMs will run background tasks, like a download or a music player, but those activities are still unlikely to saturate a modest CPU. So for single-user systems where the user is tabbing between the VMs, four or eight core CPUs are likely plenty; RAM is more important than CPU. In single-user applications, faster single-threaded performance often beats having more cores.
So I agree that for multiuser server systems, more cores means more users able to use the system concurrently, but I was interested in ordinary non-server desktop use cases. What is the killer application for ordinary average Joes who aren't running multiuser servers? Not that I'm complaining, mind you; it's great to have AMD stirring up innovation in the market.
Your link http://www.emulators.com/ …
Well, it's a fun little link about Atari emulation; in particular there's a video showing them emulating many hundreds of Atari PCs at once. Whatever floats your boat, I guess, haha.
If you are on a server rig, you will never use a toy such as VirtualBox. You will run either a serious hypervisor such as something from VMware or, if you are into open source hypervisors, Xen or Proxmox (that one is lovely).
If you are on a workstation and do not really care about performance, sure, VirtualBox is good enough, though I would rather use Hyper-V, which is cheap (a Win 10 Pro licence) but has fast enough I/O performance.
If you do not care about performance, just buy any cheap CPU and lots of RAM. Man, I've used Hyper-V on a low-end dual-core Pentium and it ran well enough.
Well, you don't have to look far: think about any combination of heavy workloads that consumers sometimes need. The most popular one pitched by AMD during the launch of Ryzen was gaming + streaming.
There are others, like gaming + encoding; encoding + compiling; gaming + leaving open whatever you were doing, instead of trying to free up as many resources as possible from your meagre 4C/8T CPU. The options are endless.
This, of course, is all about the advantage of having more than four dual-threaded cores. It's not about the 16C/32T CPU, because that's pushing it; it's more about unlocking HEDT levels of CPU potential that were previously inaccessible to consumers (at consumer-level prices).
Gargyle,
In the absence of gaming + streaming software built for GPGPU, then sure, but I really think those use cases favor GPGPU for the heavy lifting over a high number of CPU cores. I think it's fair to say the GPU implementation wins every time, and even 32/64-core computers won't change that. To be fair though, I don't do gaming + streaming personally, so I'm interested if anyone else does: can you take a screenshot of CPU & GPU loads while gaming + streaming? I'd be kind of surprised if more than ~8 cores are really beneficial unless the software being used is not accelerated (i.e. CUDA/OpenCL/OpenGL compute).
"The options are endless", sure, but in practice just because things can be done in parallel doesn't mean a typical desktop user will do them. IMHO the transition from 8 to 64 cores will bring less benefit than the 1-to-8-core transition, given these two observations (see the quick arithmetic after this list): 1) the need for high core counts is weakened by the rise of GPGPU, which offers better scalability.
2) 4/8-core CPUs already saturate users' ability/need to multitask (aside from niche use cases).
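To put rough numbers on that, here's a quick Amdahl's law sketch in C; the 90% parallel fraction is an assumption picked purely for illustration, not a measurement:

/* Quick Amdahl's law arithmetic: assumes a workload that is 90% parallelisable,
   which is an illustrative figure, not a measurement. */
#include <stdio.h>

/* Speedup over a single core for a given parallel fraction and core count. */
static double speedup(double parallel_fraction, int cores)
{
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / (double)cores);
}

int main(void)
{
    const double p = 0.90;  /* assumed parallel fraction */
    printf("1 -> 8 cores:  %.2fx\n", speedup(p, 8) / speedup(p, 1));   /* ~4.7x */
    printf("8 -> 64 cores: %.2fx\n", speedup(p, 64) / speedup(p, 8));  /* ~1.9x */
    return 0;
}

Under that assumption, 1 to 8 cores buys roughly a 4.7x speedup, while 8 to 64 cores adds less than 2x on top, which is the shape of the diminishing returns I'm describing.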
Yea, I think 4-8 cores is the sweet spot. Not only because of how we’re using computers, but also because of the SMP bottlenecks that start to creep in as we go higher.
“But in the era of massively parallel GPUs, what’s the killer application for massively parallel CPUs?”
Games. For at least the last five years, games have made use of multiple cores to improve game speed. You can't play a modern game without at least six cores anymore, and the more the better. Single-threaded games haven't been a thing since like 2011 or so. Everyone knows that single-threaded performance peaked about a decade ago and has been stagnant, so everyone then moved to threaded game engines to work around the stagnation.
JLF65,
Yeah, that's the obvious use case we all think about, but it's at least somewhat of an exaggeration, since most 3D games of the past decade are bottlenecked by the GPU rather than the CPU, particularly at high resolutions and ultra quality. I realize it may not be intuitive, but sometimes disabling cores can increase the performance of software, including games. There's a temptation to hand-wave the pesky details and conclude that more cores is always better, but it's really more complicated than that. There are real engineering challenges, both in silicon and in software, to getting faster performance from more cores. If you want a high-end computer to set everything at the highest possible quality, great, go for it. But saying you can't play modern games with less is overstating the position.
Intel held an unfair advantage by skimping on security. Sorry, but what I have learned over the course of the whole "Spectre" saga and similar issues is that Intel deliberately ignored these kinds of attacks to improve performance.
Now, as users, we have to disable features, update BIOSes (on older servers and industrial machines), and hope the latest Windows and Linux kernel patches are sufficient to protect us against JavaScript-based attacks (it was really eye-opening to see a JS program steal encryption keys from memory).
And once all those "fixes" are applied, Intel no longer holds the performance lead.
It was good while it lasted, but it seems the time for actual competition has finally come.
(I don’t have a grudge against Intel. My desktop is Intel, and I hope they clean up their act soon).
sukru,
You're right, but it's not the whole picture. Meltdown in particular was Intel-specific, and it's regrettable they didn't enforce memory protection across privilege boundaries. But the Spectre flaws stem from speculative execution in general, which all CPU manufacturers use to speed up their CPUs; AMD and even ARM processors are also vulnerable. The degree to which these flaws can be addressed by any vendor depends on the degree to which they're willing to roll back the performance gains that speculative execution made possible in the first place. Deep speculation is used explicitly to improve performance, yet that very act enables a third party to use statistical analysis to profile deep code paths it otherwise wouldn't have access to. It's easy to fix, but then we'd roll back most of the IPC performance gains of the past two decades.
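To make the "profiling deep code paths" point concrete, here is a minimal sketch of the well-known Spectre v1 (bounds-check bypass) pattern; the array names are hypothetical and the cache-timing half of the attack is left out:

/* Classic Spectre v1 shape: the branch is trained to be predicted taken, so for
   an out-of-bounds x the two dependent loads still execute speculatively and
   leave a cache footprint indexed by the secret byte array1[x]. */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];      /* probe array, one cache line per byte value */
volatile uint8_t sink;

void victim(size_t x)
{
    if (x < array1_size) {                  /* mispredicted for malicious x */
        sink = array2[array1[x] * 512];     /* speculative loads leak via the cache */
    }
}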
I think the way forward and out of this mess is to focus on GPU/FPGA architectures that achieve parallelism explicitly, rather than on implicit instruction-level parallelism achieved via speculative execution.
Competition is long overdue. But I was rather hoping to see more ARM PCs (and by that I mean real desktop & server grade hardware, not just the plethora of SBCs).
One type of Spectre is usable against all processors with speculative execution: bypassing software protections. The degree of vulnerability to all the other types varies, with many being in the theoretical category for AMD. Not saying they are perfect, but several types of attack are very hard to pull off due to (probably coincidental) design choices, like how the branch predictor maps and tags branch addresses.
My opinion:
GPUs will never compete with CPUs for normal programs; they are designed for, and very effective at, a subset of data-parallel computation, but most types of computation can't effectively be mapped to that model. FPGAs, as in reprogrammable hardware, can help with some computation-intensive workloads but aren't a perfect fit for general-purpose programs. Speculative execution is here to stay until we get true dataflow processors (we won't), and even then speculative execution could dramatically increase performance: it is the only mechanism theoretically capable of exceeding the dataflow performance limit.
Megol,
Yeah, ironically Intel is being knocked because they succeeded in using speculation so effectively in deep pipelines, which is now a con, haha.
I'd say that all speculation is theoretically vulnerable. The deeper the pipeline, the more hidden information can be exposed. The challenge is actually finding instances of vulnerable code patterns in real software. Spectre vulnerabilities can show up just about anywhere, but the severity depends on the context in which the vulnerability appears. In a video game engine there may be zero security impact, whereas in a kernel, a web browser, or some other security-sensitive process there may be a higher risk. For this reason I think maybe there needs to be a CPU flag indicating when the CPU is allowed to execute code speculatively.
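Incidentally, a crude software-level version of that control already exists in the form of explicit speculation barriers. A minimal sketch, reusing the hypothetical arrays from the earlier snippet and assuming an x86 target:

/* Hardened variant of the earlier victim(): an explicit speculation barrier
   keeps the dependent loads from executing before the bounds check retires.
   This is a sketch, not a complete Spectre mitigation strategy. */
#include <emmintrin.h>   /* _mm_lfence(), x86 SSE2 */
#include <stddef.h>
#include <stdint.h>

extern uint8_t array1[], array2[];
extern size_t array1_size;
extern volatile uint8_t sink;

void victim_hardened(size_t x)
{
    if (x < array1_size) {
        _mm_lfence();                    /* serialize: younger loads wait for the check */
        sink = array2[array1[x] * 512];
    }
}

The obvious downside is exactly the trade-off discussed above: every barrier throws away some of the speedup that speculation was there to provide.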
Yes and no: I think the evolutionary path for traditional CPUs is reaching a dead end. A lot of us drool over high core counts, but the reality is that naive MT code doesn't scale well in practice at high core counts. Hardware is migrating to NUMA solutions to address hardware bottlenecks, but that introduces new bottlenecks for software built on multi-threaded primitives designed for traditional SMP CPUs. This is the whole reason AMD's "gaming mode", which disables half the CPU's cores, is faster in many games and benchmarks.
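For what it's worth, you can reproduce roughly what "gaming mode" does by hand on Linux via CPU affinity. A minimal sketch; treating CPUs 0-7 as a single CCD/NUMA node is an assumption here, and the real topology should be read from the system:

/* Restrict the current process to the first 8 logical CPUs so its threads stop
   bouncing across the interconnect. CPUs 0-7 are assumed to share one node. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < 8; cpu++)       /* assumption: CPUs 0-7 = one CCD/node */
        CPU_SET(cpu, &mask);

    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {   /* 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    puts("pinned to CPUs 0-7");
    return 0;
}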
I think it may make more sense going forward to engineer software to get massive GPU speedups rather than to get relatively modest speedups with large CPU core counts. But honestly I think most existing general purpose software is not going to embrace either NUMA or GPGPU. Legacy PC software is the new mainframe software, haha.
Granted, it’s easy to see why CPU manufacturers pushed for deep code speculation engines given the state of our software industry. Intel especially pushed hard to maximize speculation speedups, but deep speculation engines are very expensive in terms of CPU transistors and power. The consequence is that CPU algorithms tend to be much less efficient than GPU algorithms. We should not overlook the opportunity cost: dropping some of the transistors used for speculation would directly give us more power budget and die space for other purposes that might have more technological merit overall. Regardless of what we say here, you’ve got to admit we’re already seeing software exploit explicit GPU parallelism and blow way past what CPUs are capable of, all the while leaving CPUs underutilized. My opinion is that GPUs are the best commodity technology for new software with large scalability requirements going forward.
All great news, but where are the mobile processors!? The only AMD notebooks I see on the market are cheap junk with their terrible Axxx processors.
https://www.amd.com/en/shop/us/Laptops
https://store.hp.com/app/slp/amd-ryzen
https://www.amazon.com/s?k=laptop+ryzen
…
There are a few of them out there, but AMD was almost run out of the laptop market entirely. I’ve seen something like 166 Intel laptops vs 8 AMD laptops on Microcenter’s website. AMD has a long history of bad laptop chipsets they’ll need to overcome, and even these new chips aren’t getting good reviews compared to the power efficiency of Intel’s laptop chips.