Overall, the launch of Comet Lake comes at a tricky time for Intel. The company is still trying to right itself from the fumbled development of its 10nm process node. While Intel finally has 10nm production increasingly back on track, the company is not yet in a position to completely shift its production of leading-generation processors to 10nm. As a result, Intel’s low-power processors for this generation are going to be a mix of both 14nm parts based on their venerable Skylake CPU architecture, as well as 10nm Ice Lake parts incorporating Intel’s new Sunny Cove CPU architecture, with the 14nm Comet Lake parts filling in the gaps that Ice Lake alone can’t meet.
Another year, another Skylake spec bump. Intel sure is doing great.
The ExtremeTech guys note that the new chips don’t present a better offering, and that all Intel really does is make some shaky claims of improvement with the help of some shady fine print.
Creating a chip might be a harder task than we presume, and considering the latest scandals and the better offerings from the competition, I bet Intel is walking on eggshells and keeping a big announcement up its sleeve for a later date.
Kochise,
Yeah, I think they took a huge blow with Meltdown & Spectre, which not only took away R&D resources, but also resulted in several years of performance regressions. I’d wager there’s still a specter of Spectre in all of today’s superscalar CPUs. But I do hope Intel’s reached the point where they can get back to the business of improving performance. My understanding is that we need some kind of revolution in chip design, since we’re pushing the limits of what’s possible with current transistor technology. It’s diminishing returns from here on out.
The easiest thing to do, rather than improve raw sequential performance, is to scale up by adding more cores, and that is what we’re seeing. This is good for servers, but my expectation is that consumer markets are going to reach “core fatigue”. Software markets are very reluctant to move away from x86 and sequential processors generally, but with such meager marginal gains to look forward to on this path, I’m eager to replace our existing toolchains and migrate to far more parallel paradigms (FPGA/VLSI/GPGPU). The thing is, it’s extremely difficult to do when so much of our infrastructure/toolchains is built for these “legacy” microprocessors.
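To make the “just add cores” path concrete, here’s a toy Java sketch (purely illustrative; the class name and workload are made up) of the kind of embarrassingly parallel job that keeps scaling with core count even when single-core speed has stalled:

```java
import java.util.stream.LongStream;

public class CoreScaling {
    public static void main(String[] args) {
        long n = 200_000_000L;

        // Sequential: bound by single-core throughput, which is barely improving.
        long t0 = System.nanoTime();
        double seq = LongStream.rangeClosed(1, n)
                .mapToDouble(i -> 1.0 / (i * i))
                .sum();
        long t1 = System.nanoTime();

        // Parallel: the same work fanned out across all available cores
        // via the common fork/join pool -- the "easy" scaling path.
        double par = LongStream.rangeClosed(1, n)
                .parallel()
                .mapToDouble(i -> 1.0 / (i * i))
                .sum();
        long t2 = System.nanoTime();

        System.out.printf("sequential: %.6f in %d ms%n", seq, (t1 - t0) / 1_000_000);
        System.out.printf("parallel:   %.6f in %d ms%n", par, (t2 - t1) / 1_000_000);
    }
}
```

The catch is that most real software isn’t this conveniently data-parallel, which is exactly why I expect “core fatigue” to set in.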
Well, I’m not looking for more “performance”, I’m looking for “optimization” in the first place. If developers cared a little bit more, perhaps there wouldn’t be such a waste of GHz and watts all around the world. Java, I look at you…
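For instance, a classic example of the kind of waste I mean (a toy Java sketch, not from any real codebase):

```java
public class StringWaste {
    public static void main(String[] args) {
        int n = 50_000;

        // Wasteful: each += copies the entire string built so far, so this is
        // O(n^2) work (and O(n^2) garbage) for what should be a linear job.
        String s = "";
        for (int i = 0; i < n; i++) {
            s += i + ",";
        }

        // Caring a little bit more: one growable buffer, O(n) total work.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i).append(',');
        }
        String t = sb.toString();

        System.out.println(s.length() == t.length()); // same result, wildly different cost
    }
}
```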
Kochise,
I’m a bit confused about what that means, haha. I’d think that optimizing CPUs means increasing their speed and/or decreasing their energy consumption.
Yeah, there’s tons of that everywhere we look.
On a related note, I finally got a hold of an odroid-n2 ARM SBC.
https://wiki.odroid.com/odroid-n2/odroid-n2
I haven’t gotten to use it much other than for benchmarking, but I gotta say, I’m impressed so far. Many of the ARM SBCs I’ve tried in the past left me wanting more performance and complaining about bottlenecks caused by thermal issues. I did find the desktop to be kind of laggy, which wasn’t the case on older SBCs, so I don’t think GPU acceleration is working yet; still, I have a good feeling about it as an embedded system for my purposes.
I wouldn’t bet on that. After all, Intel’s current design goes all the way back to the Core 2 series, and Spectre/Meltdown means they’re basically going to have to start from scratch. From what I understand, even these latest chips are vulnerable, and we shouldn’t expect a chip from Intel that doesn’t require Spectre/Meltdown patches until 2020 at the earliest.
Meanwhile AMD has caught up to them in single-core performance and beats them pretty badly in most multithreaded benchmarks I’ve seen, AND you get more cores and PCIe lanes at a cheaper price. And with so many big server/workstation names (which was always Intel’s biggest moneymaker) recently announcing they were going with EPYC? I think if Intel had anything even close to marketable they would announce it ASAP just to try to bleed some steam off the AMD hype train. The fact that this is the best they can do tells me they don’t have anything even close to production ready, or they would have told us about it already.