Apple has moved its modem chip engineering effort into its in-house hardware technology group from its supply chain unit, two people familiar with the move told Reuters, a sign the tech company is looking to develop a key component of its iPhones after years of buying it from outside suppliers.
Understandable move by Apple, both from a business perspective and from a security perspective. The open source world really needs to build open source baseband processors at some point.
I want to see how this plays out. Apple was criticised for not licensing a Qualcomm SoC (back when those were the best thing available), and now Apple’s own chips run circles around anything Qualcomm has to offer. Could we see that happening with modems too? On the other hand, Qualcomm has lots of patents which fence off several innovations regarding modems, and modems are what they’ve been doing as a company since forever.
What this move definitely proves is that companies with lots of cash (Apple and Samsung) are better able to control their fate. They can also achieve cost reductions by moving design in-house.
Apple SoCs run circles around Qualcomm on benchmarking tests, but in real-world performance it’s much closer. Regardless, the only operating system Apple SoCs can run is iOS, which makes them useless to me (and most people). Qualcomm SoCs can run any OS someone wants to design to run on them. For this reason alone I could never support/buy an Apple product.
They are currently, but I remember when the G4 CPU was so powerful it was considered a weapon (look it up, fun bit of history). However, when the competition took the lead, Apple was left out in the cold. The Intel move is now heralded as a brilliant one, but it was also capitulation by a company forced to abandon its own chips and chip-design R&D as a dead end. The open market would allow Apple to jump between suppliers as and when it needs to.
The G4 was not an Apple design (the Apple of the time didn’t have the resources to develop a CPU); it was an IBM design (and IBM lost interest in PowerPC as it became less relevant and stopped developing it). PowerPC was the Itanium of desktops: maintained only because some OS vendor was paying to have it maintained.
They stopped developing it because it could barely reach a gigahertz and the consumer market was limited to Macs. Later it was used in some high-end consoles like the GameCube.
CPU scaling is not a natural phenomenon. The inability to reach 1 GHz was due to a lack of research and development funds. PowerPC was already dependent on a single company for the desktop (IBM, basically), rarely a good thing; then PowerPC lost relevance and IBM lost interest.
@kurkosdr: it was Motorola that was having a hard time reaching 1 GHz in its fabs.
Still, the IBM-Motorola duo was their single source. Even if Apple had some input into the PowerPC design, it wasn’t worth the trade-off compared to a true dual-source ISA like x86, which had (and has) two companies and two fabs. At least that’s what history showed.
Imagine how much worse x86 would have been without that competition: no low-cost CPUs (pioneered by AMD and Cyrix), no 64-bit (because Intel wanted to push servers to Itanium), and CPUs currently stuck on Intel’s 14 nm process (AMD has rolled out 7 nm).
@kurkosdr: it was also the Centrino platform that drew Apple to choose Intel over PowerPC, because from a power-consumption standpoint IBM-Motorola wasn’t offering a decent wireless-capable alternative, short of integrating a third-party solution. Add to that the fact that x86 had more “ground” and its ecosystem was evolving faster. And the big/little-endian conversion caused additional trouble, even on PCI buses. It was a weird but reasonable move. It’s just sad that x86 won despite not being the best architecture out there, far from it. Thankfully AMD improved things a bit with x64.
Again, Centrino didn’t happen as a natural phenomenon; it happened as a direct consequence of competing companies duking it out. BTW, x86 is not as bad as people think. For example, SPARC is much worse, because it mandates needless register duplication to support the largely useless “register windows” functionality (if you ever wondered why even big SPARC CPUs stick to the minimum mandated number of register windows, now you know), which is a big problem as multiple execution units get integrated into the CPU. This is also the reason SPARC is never chosen for clean-sheet designs like consoles; it’s MIPS (or PowerPC, back when IBM was still interested in the sector) instead. When it comes to x86, if it’s used with a good compiler the only real problem is the small number of registers, but this has been solved with register renaming. The fact that so many good implementations of x86 exist means it’s not as bad as academia makes it out to be.
BTW, even MIPS now loses clean-sheet designs to x86. Go figure…
Even the Apple CEO from the time of the 68k->PPC transition said that not going with Intel back then was his biggest mistake…
It was designed jointly by the trio behind the AIM alliance (Apple, IBM, Motorola).
Or the ridiculous PowerPC G4 “supercomputer on a chip” PR campaign, based mostly on a few hand-picked and -optimised AltiVec Photoshop filters…
Legally it was a supercomputer. A supercomputer was defined as a system performing a gigaflop of processing. The PR campaign simply took advantage of the fact that the legal definition was well behind reality. (Although they legally couldn’t sell the machines to certain countries until the law was amended.)
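For a rough, back-of-the-envelope sense of why the G4 cleared that bar (taking the 1 GFLOPS definition above at face value, and assuming the AltiVec unit could issue one 4-wide single-precision multiply-add per cycle, i.e. about 8 floating-point operations per cycle): at a 450 MHz clock that works out to roughly 8 × 450 million ≈ 3.6 GFLOPS of theoretical peak, several times the threshold, even though sustained real-world throughput was far lower.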
The regulations had already been changed before the G4 came out, but the change didn’t go into effect until a month and a half after its release.
It was still kinda different (and still ridiculous): supercomputer benchmarks are based on sustained throughput, while the G4’s rating was based on those few Photoshop filters, performance rarely encountered IRL.
And it’s not the first or last time Apple marketing was based on obsolete comparisons… (remember the “I’m a Mac. I’m a PC” ads?)
I imagine this won’t matter to anyone outside the walled garden.
Can someone please delete such sh-tposting?
OP’s post made total sense to me and was relevant.
Google seems to be like-minded in wanting control over its chips:
https://hexus.net/tech/news/cpu/127376-google-poaching-nvidia-intel-qualcomm-engineers/
Guys, no news updates in over a week. What’s going on?
It appears Thom is on vacation -> https://twitter.com/thomholwerda