Apple’s latest iOS devices aren’t perfect, but even the platform’s biggest detractors recognize that the company is leading the market when it comes to mobile CPU and GPU performance – not by a little, but by a lot. It’s all done on custom silicon designed within Apple – a different approach than that taken by any mainstream Android or Windows device.
But not every consumer – even the “professional” target consumer of the iPad Pro – really groks the fact this gap is so big. How is this possible? What does this architecture actually look like? Why is Apple doing this, and how did it get here?
After the hardware announcements last week, Ars sat down with Anand Shimpi from Hardware Technologies at Apple and Apple’s Senior VP of Marketing Phil Schiller to ask. We wanted to hear exactly what Apple is trying to accomplish by making its own chips and how the A12X is architected. It turns out that the iPad Pro’s striking, console-level graphics performance and many of the other headlining features in new Apple devices (like FaceID and various augmented-reality applications) may not be possible any other way.
During Apple’s event last week, the company didn’t mention Intel even once, yet went out of its way to make clear just how much faster the A12X is compared to all other laptops – even Apple’s own – which obviously all run on Intel (or AMD) processors. With this exclusive Ars Technica article, Apple seems to be continuing its A12X marketing blitz, which further solidifies the impression that Intel’s days inside Apple’s Macs are almost over.
Thom Holwerda,
The benchmarks show the iPad beating other devices in the mobile form factor, but from what I can see, none of the benchmarks show it beating the latest Intel laptops. Did I miss something? That’s not to say the iPad’s performance isn’t impressive, but how did you arrive at your conclusions?
MacBook Pros are handicapped by relatively poor heat dissipation, which results in CPU throttling. I posted a link about this not too long ago.
http://www.osnews.com/thread?664630
So the MBP may hinder Intel’s CPU in that it cannot reach its peak performance due to throttling.
Edited 2018-11-08 03:38 UTC
The A12X is managing those results at (probably) no more than 2GHz and with no fan. Even the thermally-limited MBP laptops have fans, and the ones that don’t (MacBook etc.) are clocked very low.
I read a blog post yesterday in which a CS researcher ran his code (single threaded) on his phone (A12) and found it faster than his i7-7700k workstation. Not per cycle, but in absolute terms. 2GHz vs about 4. Most likely due to the A12’s enormous on-chip cache, which is mostly there to support GPU and imaging functions, but they’re definitely in the ballpark of being “fast enough”.
Here’s the post. Anecdote isn’t data, and YMMV, but still interesting:
https://homes.cs.washington.edu/~bornholt/post/z3-iphone.html
areilly,
Yeah, there’s no doubt ARM chips are making lots of progress. I’ve been eager to get ARM for datacenter applications for years now. On performance per watt, ARM has been excelling for a while, but overall performance still lagged. I think if companies like Apple throw enough money at the problem, ARM could eventually dethrone Intel as the performance king. But it would be too early to make that claim until the latest-generation ARM processor is shown to beat the latest-generation Intel processor. Your link puts Apple roughly 1.5 years behind Intel in terms of performance, which isn’t bad at all, but I’m a bit confused by Thom suggesting that it has already happened.
Intel’s single-thread performance has been stalled since 2015. They have shipped the same microarchitecture all these years. Even the unreleased second-gen 10nm chip, Ice Lake, is only barely faster. All they can do is pump up the frequency at excessive power, as with the i9-9900.
Edited 2018-11-08 17:23 UTC
viton,
I agree, single-thread performance on x86 has been stalled for some time. It’s increasingly difficult to move to smaller fab processes, and the out-of-order speculative execution units that make single threads faster have come under attack by Spectre-style vulnerabilities (which I’m sure Intel is still dealing with behind closed doors). Upping the frequency is the most effective way to increase single-threaded performance, but it requires much more power and drastic measures to keep rigs cool.
Meanwhile, ARM processors continue making great strides in performance; however, since they’re still technically behind Intel, I predict they’ll bump into many of the same problems when/if they catch up. It’s why I’ve been a proponent of alternative architectures, where we’re much less dependent upon single-threaded performance in the first place.
The good news is we’re making strong progress with GPGPU and other neural net co-processors that scale far better than single threaded cores ever could. The biggest impediment is really getting software developers on board because we’re so resistant to change, haha. There’s just so much software that relies on optimizing the speed of individual CPU cores.
however since they’re still technically behind intel
This is not correct.
A12 is 7-wide and has 8 times more L1/L2 cache per core.
L1: 32KB -> 128KB
L2: 256KB -> 4MB
It is much faster than Skylake per cycle.
Edited 2018-11-08 23:28 UTC
viton,
Well then, provide a source for your information comparing the fastest A12 processors to the fastest Intel processors.
Edited 2018-11-08 23:58 UTC
It is clearly seen in the AnandTech iPhone XS review, and by analyzing Geekbench subtests.
Some microarch details are here:
https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-review-unv…
BTW, in the instruction latency table you can note that the ARM Cortex-A76 FPU is better than basically… everything: 2-cycle FP add latency at 3GHz.
Skylake’s ADDSS latency is 4 cycles.
I have my own microarch benchmark for iOS (in progress).
Edited 2018-11-09 00:54 UTC
viton,
Sorry if I missed it, but I don’t see any benchmarks there comparing the A12 to any Intel processors. There’s no denying it runs circles around other ARM processors, but that’s clearly not what we’re talking about here.
If you want to say the A12 is faster than Intel’s desktop processors, then I’m fine with that, assuming you can provide benchmarks that show it. However, until then I’m going to believe the current evidence, which shows the A12 is slower than Intel processors. I’m not looking for theoretical arguments, just unbiased hard data.
I actually am a big fan of progress on the ARM side of things; I think it’ll be great having more competition for Intel at the high end… by far the biggest con with Intel CPUs is that they are so power hungry. Nevertheless, if you want the fastest CPUs available, Intel wins on all the benchmarks I’ve seen. So I’m placing the onus on those of you who are claiming ARM is now faster to provide the benchmarks to prove it. I think this is a reasonable request.
Edited 2018-11-09 01:18 UTC
There are SPEC2006 scores in the AnandTech article.
You can compare them with your favourite Intel processor.
For example, SPECINT2006 on an i7-8650U (up to 4.2GHz),
compiled with a real-world compiler:
https://forums.anandtech.com/threads/itel-deprived-us-of-an-exciting…
https://homes.cs.washington.edu/~bornholt/post/z3-iphone.html
viton,
areilly already posted that link, and I already responded here.
http://www.osnews.com/thread?664800
Don’t get me wrong, it’s impressive, but it’s comparing Apple’s latest generation with an Intel processor from two generations ago.
If we look up some benchmarks for the i7-7700K and compare them to the next-generation i7-8700K, then it should come as no surprise that Intel itself was able to beat the i7-7700K used in your link.
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-7700K+%40…
i7-7700K @ 4.20GHz
single threaded=2583
multi threaded=12041
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-8700K+%40…
i7-8700K @ 3.70GHz
single threaded=2703
multi threaded=15965
It’s impressive that Intel got higher scores even while decreasing the base core frequency! And we’re not even looking at the 9th-gen processors that just came out.
The A12 is nothing to sneeze at, I am impressed… but if you want to make a factual assertion that it’s actually faster than Intel’s fastest desktop processors, then I’m going to continue to insist that you provide evidence showing a side-by-side comparison with the latest-generation CPUs. If you cannot do this, then it doesn’t pass my fact-checking standards enough to accept the claims as true.
but it’s comparing apple’s latest generation with an intel processor from two generations ago.
The 7700K and 8700K have the same core uarch as the 6700K.
There are zero improvements other than frequency.
It’s impressive that intel got higher scores even while decreasing the core frequencies
You should look at turbo frequencies, actually.
6700K Turbo Speed: 4.2 GHz
7700K Turbo Speed: 4.5 GHz
8700K Turbo Speed: 4.7 GHz
6700->7700
4.5 / 4.2 = 1.071
2583 / 2353 = 1.098
7700->8700
4.7 / 4.5 = 1.044
2703 / 2583 = 1.046
A single core always runs at turbo, if no AVX2 is used.
Edited 2018-11-09 06:39 UTC
viton,
Not if you’ve got an MBP, haha.
Incidentally, I’m actually looking for a new system right now, and I really do wish I could buy a high-end ARM PC. My enthusiasm is somewhat tempered by my skepticism, as you’ve surely noticed, but truthfully I’ve been wishing for an ARM desktop PC for several years now. I’d need it to run Linux and support PCI video cards, SATA RAID arrays, plenty of USB 3 ports, etc. I would be all over that! I am hopeful that it’ll happen some day; alas, it won’t be this year. How great would it be if they were right over the horizon though!
Not the most powerful, but a good start:
https://www.cnx-software.com/2016/09/23/marvell-espressobin-board-wi…
https://www.marvell.com/embedded-processors/armada-80xx/
http://www.gateworks.com/
https://www.cavium.com/product-octeon-tx-arm-processors.html
Alt archs such as?…
The MBP throttling was solved; it was that Apple somehow failed to include the power management firmware when they shipped it, causing the CPU to overdrive and constantly throttle itself.
The MBP CPU has a TDP of 45W, and Apple designed the MBP to dissipate 45W of heat from the CPU. Compare that to less than 10W for the A12X in the iPad Pro.
So it is pretty impressive. If you look up the benchmarks for the MacBook, which is also a fanless design, the A12X is indeed faster.
ksec,
Can you provide a link? Despite the MBP’s throttled performance as discussed, the benchmarks in the article still show the iPad Pro being beaten by the MBP in every single category except “memory”. Am I missing something? Did Apple say that the A12X beats the MBP somewhere?
The link is in the original article, where the A12X beats the MacBook (with two-year-old CPUs, as I point out in another reply). The MacBook isn’t the same as the MacBook Pro!
Thanks for the reminder that the MBP cannot dissipate enough heat on the higher CPU versions, making those upgrades a bad idea.
avgalen,
You’re right, I didn’t notice ksec dropped the “Pro” after talking about the MacBook Pro.
which version of that MacBook? https://ark.intel.com/compare/95441,95452,97538
* 1.2 GHz dual-core Intel Core m3-7Y32 Kaby Lake processor (Turbo Boost up to 3.0GHz)
* 1.3 GHz dual-core Intel Core i5-7Y54 Kaby Lake processor (Turbo Boost up to 3.2 GHz)
* 1.4 GHz dual-core Intel Core i7-7Y75 Kaby Lake processor (Turbo Boost up to 3.6GHz)
All of those are 4.5W CPUs, so half of the A12X, right? They are also two-year-old parts that really don’t deserve the names i5 and i7 and shouldn’t be in a laptop that starts at 1500 Euro and goes up quickly in price from there.
(* haven’t read the article yet)
It is just cherry picking – single core performance using synthetic benchmarks.
In the real world no mobile chip is capable of sustained maximum output.
Customising the silicon makes the upgrade path hardware-dependent. This isn’t an FPGA that you can just flash in the field; new features and performance are implemented in hardware, not code.
Alarm bells should be ringing!
Look forward to advertising of great new features that require the purchase of the next model!
The various benchmarks indicate that the total custom hardware solution put together by Apple, centered on the A12X, beats everything else in the smartphone and tablet categories. It is not so clear-cut for the notebook category.
Have we come again full circle and, like the Amiga of old, are best served by hardware which has been designed as a system for the intended purpose? It is worth noting that the design could probably be described as “Asymmetric Multi-Processing” in which the various processing units are dissimilar in design and function.
Interestingly, the latest flagship smartphone from Google, the Pixel 3/3XL, appears to be severely lacking in terms of raw processing power. Given this, it would not be surprising if Google has a not-yet-publicized custom hardware design project to go hand-in-hand with Fuchsia.
I am not quite sure how to frame the benchmarks comparing the A12X devices to notebooks. Heat control throttling is definitively an important factor – yet likely not the only one.
The iPad Pro really has me confused. It is far too powerful and expensive for a pure consumption device. The regular iPads/Chromebooks/Android tablets/etc. have that market covered.
It is also far too limited for a general production device, in connectivity, screen size, software, and price. In the previous iOS release they tried to make the software a bit more useful (split screen), but they have a long way to go and really didn’t improve with this release. Connectivity and screen size are now slightly improved with the USB-C/Thunderbolt port, but at the moment it is up to the app (not the OS) to support hardware. I just connected an external disk, and accessing the filesystem is still not possible unless the app supports it.
The iPad Pro is still “the future”, but it really seems that the hardware is now far ahead of the software.
Of course there will be niches where an app-developer makes an excellent app that makes great use of the iPad Pro and there will be a user group for whom this makes perfect sense. For that group the price will not be a problem either, but for now:
* Too expensive for consumption
* Too limited for production
* …so who is this product for?
Artists.
This is exactly it: artists. It’s why I bought an iPad Pro. They are finally fast enough to handle much of the artist-oriented software that I use, and the Apple Pencil is as good as Wacom. Still no 3D software solution… come on, ZBrush or someone else, port to iOS!
The Neural Net chip alone makes me certain they’re going to scale this up to at the very least Mac laptops. That’s the sort of chip you really want available -inside- your developer machine for best results.
I guess at least now it should start to become clearer to the masses that Intel has a good chance of losing market share in the near future. There are systems that depend on legacy apps/code, but I guess the masses will start moving away from x86. Years ago I was betting on ARM replacing x86, but now, with the success of RISC-V, I must say that RISC-V looks a lot more promising to me than ARM.
All the CPU benchmarks are based on Geekbench. How trustworthy is it?
If I remember well, a few years ago Linus Torvalds was very critical of GB3. He explained that the code was not general enough and gave too much weight to algorithms that may be hardware-assisted.
Also, how long do the benchmarks run? I mean, mobile SoCs are tuned to excel for relatively short bursts before throttling. If Geekbench does not run long enough, it may not be a representative load. For instance, when I import and build previews of photos with Lightroom, the process lasts at the very least 15 minutes.
I know that Apple did an incredible job and that the A12 is in another league than the other ARM based cpus on the market.
But I don’t believe in magic, and I’m always skeptical when I read that it can compare favorably to the latest actively cooled Intel CPUs. This assertion is always supported by Geekbench. I understand that the architecture may be more efficient and that Intel is still stuck at 14nm, but I doubt that is enough to close the wattage gap.
… and yet they remove the 3.5mm headphone jack. I’m guessing audio professionals weren’t a consideration. Wireless/Bluetooth might be what the fashionistas want, but they still haven’t (not even Apple with its W chips) gotten over the latency issues.
Edited 2018-11-08 14:55 UTC
You think audio professionals care two craps about the 3.5mm jack? Just get a USB C (or lightning before that) audio interface with XLR/Optical/quarter-inchers/whatever else you need, and you’re covered. No audio professional, ever, uses a built-in 3.5mm jack on a computer or tablet. Not a chance. You use external audio interfaces both to eliminate interference and to have far better DAC and ADC chips than the built-in ones could ever give you. Audio professionals aren’t pitching a fit over this one, sorry.
..and I don’t give two craps (see, two can play the rude-ass c*nt game) about your misinformed opinion either. I personally know 3 electro/house producers in my suburb alone who put together beats and loops on the train or in bed on a daily basis. One of them even exports the exact same synth tracks over to Ableton, because he claims the sound from the KORG ARP iPad app he uses is superior to what his desktop DAW’s equivalent duophonic synth plug-ins achieve. Yes, they all eventually go back to their iMacs with Ableton and Focusrite audio interfaces and studio monitors and whatnot, but a lot of their initial composition takes place on their iPads with wired headphones. One of them was throwing a fit over it this morning, as a matter of fact, and now plans to stay on his old (non-Pro) iPad till it dies.
Edited 2018-11-08 23:58 UTC
you’d be surprised how many DJs use the audio out jack still…
Pro ? What’s “pro” about macbooks or iPads? No ports ? Glue ? Mediocre components and build quality ?
Samsung and Huawei also make their own chips, perhaps not typically used in flagships, but definitely in mainstream devices… (well, less customized from the reference ARM designs)
Have you considered Apple may have cynically designed a chip specifically to exploit a (worthless) benchmark?
In 2015 Apple also claimed spectacular Geekbench results for the iPad Pro. They never translated to the real world or any other benchmark.
Tested: Why the iPad Pro really isn’t as fast as a laptop
One benchmark makes it look good. A lot of other benchmarks show a different story. Get all the details here.
https://www.pcworld.com/article/3006268/tablets/tested-why-the-ipad-…
During the Motorola days Apple was caught making totally BS performance claims.
unclefester,
Cannot upvote you, so +1.
I agree completely. Here’s another very recent case of benchmark manipulation, this time giving Intel a completely unfair advantage over AMD by disabling the AMD chip’s cores.
https://www.theinquirer.net/inquirer/news/3064495/intel-i9-9900k-vs-…
There are so many ways to cheat, like using better RAM for yourself while using sub-par RAM in competing hardware. This is why I’m reluctant to take first-party benchmarks/scores at face value. Having more benchmarks collected independently in real-world conditions is better than relying on manufacturer-sponsored benchmarks.
It’s also useful to test with different algorithms, because different processors have different strengths and weaknesses. For example, Intel is great at number crunching, but AMD may have a better memory architecture. We can’t get a good picture of these differences when we aggregate scores into a single variable.
Edited 2018-11-09 04:39 UTC
First of all, the A10X is a 5W TDP part (not a 10W one). Second of all, the GPU is an important part of that TDP. If you take that out, you get a 3W octo-core CPU, so around 0.5-0.7W per high-performance core. If you were to push those cores to 5GHz, they would draw 3-4W per core. So it’s quite possible to get 64 cores at 5GHz in sub-200W territory.
Right now UltraSPARC gets the crown with 32 cores x 8 threads x 5GHz (1280 pseudo-GHz) in 180W. Intel can only claim 28 x 2 x 2.3GHz (128 pseudo-GHz). ARM has a real chance to come up with a 64 x 4 x 5GHz part, and it would blow Intel out of the water completely in single-thread and multi-thread.
VMware has already ported ESXi to ARM, and they have builds that run on the Raspberry Pi. RHEL is available on ARM. UEFI is available on ARM. Windows is available on ARM. XNU is available on ARM. The whole industry is ready to migrate to ARM quite easily. It’s just a recompile away.
I can’t wait for ARM to become mainstream in the desktop, laptop and server area.
https://www.youtube.com/watch?v=BHZCCUEzK0s