Lots of news from Intel today – the company announced a new line of processors and an accompanying motherboard chipset. I have to admit I find Intel’s product and platform names completely and utterly confusing, but from what I gather, the company announced new high-end i7 and i5 processors, as well as even higher-end, high-core-count i7s and a new line, the i9. The X299 chipset brings it all together.
I was keeping an eye on these new processors as I just ordered all the parts for my brand new computer, but I had already decided not to wait for them, since I prefer not to jump onto new processors and chipsets right away (which is why I didn’t opt for Ryzen either). Looking at the replacement for the processor I eventually settled on – the 7700K – I’m pretty sure I made the right call, since the speed bump seems minor (100 MHz), while the TDP goes up considerably.
The high core count processors are – much like the Ryzen 7 1800X – incredibly alluring in a “I want all the cores” kind of way, but for the most part, few workloads actually benefit from more cores in processors. Aside from workstation-oriented workloads I personally do not engage in, it really seems like processors are running ahead of the software they run.
Still, with Ryzen and now Intel’s new parts, there’s a ton of choice out there if you’re building a new computer.
Performance at that level is only useful for video and games. For video, more cores are already useful today.
For games it is a different story. Most engines are optimized for four cores or eight threads, not for eight cores or sixteen threads. But history has shown that you eventually get more performance once the engines catch up.
For the same money, I would always buy the CPU with more cores. Just look at those old 220W FX-8xxx chips: they are still decent in some games today, whereas the Intel dual and quad cores from that era are way too slow for anything demanding.
I run a lot of virtual machines, and having something without the expense of a Xeon kit would be nice.
Said performance is also useful for gentoo’ers. Particularly when compiling Chromium (a rather unholy load).
Is there any reason today to go with a source distro?
Why not? Compile times are generally wonderfully short these days – except for Chromium. My need for solid control of my system hasn’t changed, nor have binary distributions become more flexible than they were in the past. I cannot think of anything that would reduce the need for source access and freedom of customization.
Is there some magical compiler switch I don’t know about that will increase performance when compiling the code myself? Otherwise I’d just be wasting time recompiling the same crap again without achieving different results than the distribution maker.
He’s not talking about performance from the compilation, but rather the ability to control what’s installed at a finer-grained level than most binary distros allow.
I happen to think that’s mostly BS. It’s possible, but most people don’t really do it. It basically amounts to becoming the distro maintainer. With dependencies being what they are in the open-source world, sometimes you’re forced to change a lot just to get a security fix – unless you feel comfortable backporting patches. Or, like several idiots I know, you don’t care about security at all and just stick with a years-old version of vulnerable code, while disabling the firewall and any mitigating security software out there.
Plus, many of the reasons I once had for keeping specific versions of software are now covered by containerized applications, where the old, nasty version of whatever I need for some dumb application is safely isolated from the main system.
It is not so much about performance from compiling. It is about controlling dependencies, overall load and tailoring the system specifically to my needs.
In regard to performance from compiling, one should not expect significant differences in general – and that is not the point of compiling from source anyway. The point is that nothing gets installed unless I want it, meaning I can have a very tight system which does exactly what I tell it to, when I tell it to; neither sooner nor later, neither more nor less.
I am certainly much more capable of being my own distro maintainer than anyone from Ubuntu or Fedora. None of them can create a distribution which meets my needs while also meeting the needs of other users; neither could I create a system tailored to their needs while being optimal for me.
I have used Gentoo since 2005, so I am quite comfortable with this. LinuxFromScratch is a nice alternative (it gives a wonderfully tight system because of the manual dependency resolution…), but it is too much trouble to maintain in the long term.
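For what it’s worth, the core-count and tailoring knobs on Gentoo mostly live in one file; a minimal `/etc/portage/make.conf` sketch (the values here are examples to illustrate, not recommendations for any particular machine):

```shell
# /etc/portage/make.conf – example values, tune for your own hardware
MAKEOPTS="-j16"                 # one compile job per hardware thread
EMERGE_DEFAULT_OPTS="--jobs=4"  # let Portage build several packages in parallel
USE="-gnome -kde minimal"       # example USE flags: pull in only what you ask for
```

More cores translate almost directly into a higher sane `-j` value here, which is where the high-core-count parts pay off for source-based distros.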
It’s useful for those that tinker or work on operating system projects too. Not only can you experiment with writing more efficient parallel algorithms in your own code, when you need to build massive amounts of software it is significantly faster.
In my case, I build packages for a small BSD project, and more cores are a huge help. However, I’ve had bad luck getting Ryzen to boot so far. For open-source work, I recommend a used workstation with a Xeon, as they can be found for $400. I bought an HP Z420 last year, upgraded the RAM and added an SSD. It’s pretty nice, it has an 8-core Xeon, and it’s been great for compiling.
Don’t forget developers. Most build systems will compile a source file on each core simultaneously, so build time tends to go down pretty linearly as you add cores. Extra cores do wonders if you work on large projects.
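A minimal sketch of that idea – not any particular build system’s code; `compile_unit` is a hypothetical stand-in for invoking the compiler on one translation unit:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def compile_unit(source):
    # Stand-in for spawning e.g. `cc -c source`; in a real build the
    # compiler runs as a child process, so worker threads are enough
    # to keep every core busy.
    return source.rsplit(".", 1)[0] + ".o"

def parallel_build(sources, jobs=None):
    # Like `make -jN`: hand each independent translation unit to a worker.
    jobs = jobs or os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return list(pool.map(compile_unit, sources))
```

With independent translation units, wall-clock build time divides roughly by the core count, which is why `make -j$(nproc)` is the first flag most developers reach for.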
That sounds just like the things people said when AMD brought their dual cores to the game during the Athlon64 era, almost 14 years ago.
The fact is (and was then as well) that, yes, a lot of software isn’t adapted to fully utilize the potential of those numerous threads that can be executed in parallel on these new processors, but that DOES NOT mean you cannot see an advantage yet.
Whenever you game, your computer is still doing other things. The OS is managing all kinds of services. Perhaps your browser is still open, regularly requesting CPU time for a delayed JavaScript routine refreshing some data via XHR. Maybe you like to leave every other application untouched when you start a game, not bothering to clean up first to avoid hiccups or sudden frame drops caused by the computer doing something else while the game is fully utilizing the processor and suddenly has to share its resources.
With a lot of threads, this problem is quite possibly solved.
You want to start a quick game, but…
* you’re in the middle of encoding some video and it won’t be finished within the next hour
* you want to stream in 1080p60 simultaneously
* your AV has just started a full system scan
* you have a thousand tabs open in your favourite browser and aren’t willing to close them right now
… these situations will make your computer crash and burn on a mere 4C/8T processor, because most games nowadays REQUIRE at least four proper cores and would happily take more (since that’s what the eight-thread XBone and PS4 have pushed game development towards), whereas a 6C/12T or bigger chip will give your computer somewhat more breathing room.
But of course, those are just my two eurocents.
I agree with the general idea that most software is not optimized for many cores, but that is simply because it doesn’t have to be and is already plenty fast on 1 core.
The kind of software that isn’t fast enough on one core (audio/video encoding, “zippers”, games) is also the kind of software that DOES get optimized for multiple cores. And although such software is often tested on four cores, it is generally true that “once it starts scaling past 2 cores, it starts scaling a lot”. The software that benefits most from multiple cores is probably best not run on your own machine but on dedicated servers, and might be better optimized for GPUs than for CPUs.
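Those scaling claims can be sanity-checked with Amdahl’s law; a quick sketch (the 0.95 parallel fraction below is an assumed example, not a measured number):

```python
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: the serial part (1 - p) caps the speedup no matter
    # how many cores you throw at the parallel part.
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

# A workload that is 95% parallel scales well at first, then flattens:
# 2 cores ~1.9x, 8 cores ~5.9x, 64 cores only ~15.4x.
```

This is the flip side of “once it scales past 2 cores, it scales a lot”: it only keeps scaling while the serial fraction stays tiny.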
So my recommendation to people in general is this:
* Buy one of the simplest setups you can find if you aren’t going to put load on it. Simply accept the small delays you will sometimes have and spend the savings on a luxury vacation.
* If you aren’t going to put load on it but someone else is paying for it, buy a nice looking medium setup. The user will be extremely happy and thankful and you can continue to use the machine a bit longer
* If you need a high-end machine for a power user, buy just below the top of the line and consult with that user about the things that are important to them.
* The absolute top is never worth it except for bragging rights. You get 5% extra performance but pay 50% extra. A few months later you will not notice the 5% extra performance, and something 5% faster will be available, making you miss those dollars/euros a lot!
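The arithmetic in that last bullet, sketched out (the prices and benchmark scores are made-up illustration numbers, not real parts):

```python
def perf_per_dollar(score, price):
    # Simple value metric: benchmark points per unit of currency.
    return score / price

base = perf_per_dollar(100, 400)  # a just-below-top part
halo = perf_per_dollar(105, 600)  # 5% faster, 50% pricier
# base gives 0.25 points/$ while halo gives 0.175 points/$ -
# you pay roughly 43% more per unit of performance for the top part.
```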
That is also true, but in Ryzen’s case you get an 8C/16T CPU for virtually the *same* money as the Core i7 7700K. Agreed, single-thread performance is significantly better on the 7700K, but that difference is dwarfed by how much better the multi-thread performance of the Ryzen 7 1700 is.
I agree that single-thread performance matters more for games than multi-thread performance, but frankly the difference is almost negligible, and the scale tips in favour of the twice-as-many-threads-as-the-i7 Ryzen when you combine workloads: gaming + streaming, gaming + whatever, really.
People have been spotting this phenomenon in only a couple of games so far (like GTA5), but the difference suggests that 4C/8T is NOT enough in some heavy games right now, especially when your computer is doing anything else besides rendering the game.
So, in my view: Core i7 7700K vs Ryzen 7 1700? For me a Ryzen, please, thanks. I cannot just ignore the huge potential of running heavier or more parallel workloads on a Ryzen AND the potential of running future games a lot better thanks to having more simultaneous hardware threads at its disposal. I’m not buying a new computer every year (or even every five years – my current one is almost eight years old!), so I need to take the future into account.
EDIT: my response to your comment is, yes, it’s silly to spend more money than you really need, especially to just get The Biggest and Meanest for no good reason other than bragging rights. It’s almost like flushing your money down the loo, really.
But going 6C/12T or even 8C/16T does not equal vastly more expensive, not in the case of AMD anyway. And that I like, yes I do, yes sir.
Gargyle,
Obviously there are many local algorithms that should scale well with massively parallel SMP, but IMHO we must always rely on benchmarks rather than assume our intuition about more cores being better is correct.
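A hedged sketch of what “rely on benchmarks” means in practice – time the variants on your own workload instead of assuming; the `serial_sum` workload here is just a made-up example:

```python
import time

def bench(fn, *args, repeats=3):
    # Run fn several times and keep the best wall-clock time; return
    # the result too, so competing variants can be checked for agreement.
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        result = fn(*args)
        best = min(best, time.perf_counter() - start)
    return result, best

def serial_sum(n):
    # Example CPU-bound workload to benchmark.
    return sum(i * i for i in range(n))
```

Only if a parallel variant both matches the serial result and beats its time on *your* data is it actually a win; per-core speedups routinely evaporate under synchronization and memory-bandwidth costs.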
Also known as “official overclocking”.
If you have an obsession with threading and are on a low budget, pick up an old dual-socket LGA1366 server. With 3 GHz quad-core Xeons (mine has a pair of X5667s) fetching around £50–£75 each, you can make a beastly gaming machine for not a lot of money. Sure, you’re on a 5–7 year old CPU architecture, but who cares when you have 16 threads on a budget!
Interesting idea for non-gaming purposes, but wouldn’t games that need a lot of cores also require a high end GPU? And would such a high end GPU even work in a server motherboard?
Can you imagine how many instances of Star Wars we could run in BeOS on one of those?
But can it run Corum 3?
At the moment you can’t go wrong with a Core i7-7700K processor. We’ve even put them in Photoshop and 4K video workstations, for smaller companies that had a somewhat low budget. A GeForce GTX 1070 is a good match for the i7-7700K. I won’t use Ryzen in builds until Windows 10 and drivers are optimized to take full advantage of it, likely by the end of this year.
That’s being very optimistic.
They’re still updating BIOS for this platform for stability reasons.
The i9 is not a product for normal people. It’s for specific use cases where buyers will pay the premium because the increase is worth the money. What I don’t understand is where this leaves Xeons. The i9, at its price point, is a workstation CPU – is multi-socket support the only upsell now?
Yes, but it is a big one. If they can move the dual-, quad- and octo-socket CPUs away from the Xeon brand, then the big SMP processors for clusters and the like can still be named Xeon, with no theoretical limit on the number of processors (as long as the interconnect allows it).
Until fairly recently, for example, the Pentium II OverDrive-powered supercomputer (333 MHz P2 OverDrive chips) was among the 1000 fastest computers on Earth. I do not remember how many thousands of CPUs it had, but it was just about a fuckton.
ECC
For many years I have been able to upgrade my laptop for £450–£600 by trading up to a newer model, always slightly second-hand or a demo machine. Recently, though, I do not see decent-spec laptops in that price bracket; they seem to have moved up in price since iPads, Android tablets and their ilk arrived on the market. It would be very nice if some of these price/performance benefits would start to hit the laptop market, as I need a new machine!
I find that I can do all my day-to-day work on a 2.5 GHz Core 2 Duo: it handles my low-end gaming needs and does everything I require of it, including Photoshop and JavaScript development. If I need to do video editing or virtualisation, then I choose a desktop with four cores.
I can buy a decent PC-on-a-stick, and even a cheap phone has a quad core, but laptops with a decent CPU/GPU are still costly. We need these improvements to translate into something affordable for the poor laptop.
> Performance at that level is only useful for video and games.
*sigh* – as if no one wants to compile their own OS, as if packages delivered as source can’t be tailored far better to your *specific* machine than some binary blob…
Without a decent number of cores (and why don’t we have 128 per CPU yet? I mean, really, it’s not the 20th century any more…) we can’t usefully learn how to use them. Give people the “useless” extra cores and we’ll soon get used to hacking for them…
Hi,
I’d rephrase this as “it really seems like software is running behind the processors it runs on.”
Near the end of the last century, “double the clock frequency every few years” got replaced with “double the number of CPUs every few years”, and a lot of software developers still haven’t noticed.
– Brendan
The Creators Update of Windows being a serious effort to bring the thirst for computing power back from zombie land?
The current IP legal framework is too big, clumsy and lawyerly to reactivate CREATIVITY within the digital universe – “in the corporate way”.
Unless that is the actual purpose. I hope not.