Oh, benchmarks.
Benchmarks of computer hardware have their uses. If you have a relatively narrow and well-defined set of calculations that you need to perform – scientific computing, graphics processing (e.g. video games) – benchmarks are great tools for figuring out which processor or graphics chip or whatever will deliver the best performance, and comparisons between benchmark results for different hardware components can yield useful information.
A different way to put it: benchmarks make sense in a situation where “more power” equals “better results” – better results that are noticeable and make a difference. A GTX 1080 will deliver better framerates than a GTX 1070 in a modern game like The Witcher 3, because we’ve not yet hit any (theoretical) framerate limit for that game. A possible future GTX 1090 will most likely yield even better framerates still.
Where benchmarks start to fall apart, however, is in use cases where “more power” does not equal “better results”. Modern smartphones are a perfect example of this. Our current crop of smartphones is so powerful that adding faster processors does not produce any better results for the kinds of ways in which we use these devices. Twitter isn’t going to open or load any faster when you add a few hundred megahertz.
In other words, modern smartphones have bottlenecks, but the processor and RAM certainly aren’t among them. Before you can even reach the full potential of your quad-core 2.4GHz 6GB RAM phone, your battery will run out (or explode), or your network connection will be slow or non-existent.
As a result, I never cared much for benchmarking smartphones. In 2013, in the wake of Samsung cheating in benchmarks, I wrote that “if you buy a phone based on silly artificial benchmark scores, you deserve to be cheated”, and today, now that Apple is leading (in one subset of processor) benchmarks with its latest crop of mobile processors, the same still applies.
So when John Gruber posted about Apple A10 Fusion benchmarks…
Looking at Geekbench’s results browser for Android devices, there are a handful of phones in shouting distance of the iPhone 7 for multi-core performance, but Apple’s A10 Fusion scores double on single-core.
Funny how just like in the PPC days, benchmarks only start mattering when they favour [insert platform of choice].
Setting aside the validity of Geekbench (Linus Torvalds has an opinion!), this seems to be the usual pointless outcome of these penis-measuring contests: when the benchmarks favour you, benchmarks are important and crucial and the ultimate quantification of greatness. When the benchmarks don’t favour you, they are meaningless and pointless and the world’s worst yardsticks of greatness. Anywhere in between, and you selectively pick and choose the benchmarks that make you look best.
I didn’t refer to Apple’s PowerPC days for nothing. Back then, Apple knew it was using processors with terrible performance and energy requirements, but still had to somehow convince the masses that PowerPC was better faster stronger than x86; claims which Apple itself exposed – overnight – as flat-out lies when the company switched to Intel.
When I use my Nexus 6P and iPhone 6S side-by-side, my Nexus 6P feels a lot faster, even though benchmarks supposedly say it has a crappier processor and a slower operating system. Applications and operations seem equally fast to me, but Android makes everything feel faster because it has far superior ways of dealing with and switching between multiple applications, thanks to the pervasiveness of activities and intents or the ability to set your own default applications.
Trying to quantify something as elusive and personal as user experience by crowing about the single-thread performance of the processor it runs on is like trying to buy a family car based on its top speed. My 2009 Volvo S80’s 2.5L straight-5 may propel the car to a maximum speed of 230km/h, but I’m much more interested in how comfortable the seats are, what comfort options it has, whether it looks good (it does), and so on. Those are the things that actually matter, because the likelihood of ever even approaching that 230km/h is very slim, at best.
I bought an iPhone 6S and Apple Watch late last year and used them for six months because I feel that as someone who writes about every platform under the sun, I should be using them as much as (financially and practically) possible. I used the iPhone 6S as my only smartphone for six months, but after six months of fighting iOS and Apple every step of the way, every single day, I got fed up and bought the Nexus 6P on impulse.
Not once during those six months did I think to myself “if only this processor was 500MHz faster” or “if only this thing had 4GB of RAM”. No; I was thinking “why can’t I set my own default applications, because Apple’s are garbage” or “why is deep linking/inter-application communication non-existent, unreliable, broken, and restricted to first-party applications?” or “why is every application a visual and behavioural island with zero attention to consistency?”.
iOS could be running on a quantum computer from Urbana, Illinois, and it wouldn’t solve any of those problems.
The funny thing is – Gruber actually agrees with me:
I like reading/following Holwerda, because he’s someone who I feel keeps me on my toes. But he’s off-base here. I’m certainly not saying that CPU or GPU performance is a primary reason why anyone should buy an iPhone instead of an Android phone. In fact, I’ll emphasize that if the tables were turned and it were Android phones that were registering Geekbench scores double those of the iPhone, I would still be using an iPhone. In the same way that I’ve been using Macs, non-stop, since I first purchased a computer in 1991. Most of the years from 1991 until the switch to Intel CPUs in 2007, the Mac was behind PCs in performance. I never argued then that performance didn’t matter – only that for me, personally, the other benefits of using a Mac (the UI design of the system, the quality of the third-party apps, the build quality of the hardware, etc.) outweighed the performance penalty Macs suffered. The same would be true today if Apple’s A-series chips were slower than Qualcomm’s CPUs for Android.
So, he’d be buying an iPhone even if the benchmark tables were turned, thereby agreeing with me that when it comes to phones, benchmarks are entirely meaningless. Nobody buys a smartphone based on processor benchmark scores; at this point in time, people mostly buy smartphones based on the smartphone they currently have (i.e., what platform they are currently using) and price.
That being said, there is one reason why benchmarks of Apple’s latest mobile processors are quite interesting: Apple’s inevitable upcoming laptop and desktop switchover to its own processors. OS X (or macOS or whatever) has been in maintenance mode ever since the release and success of the iPhone, and by now it’s clear that Apple is going to retire OS X in favour of a souped-up iOS over the coming five years.
I know a lot of people still aren’t seeing the forest for the trees on this one, but you can expect the first “iOS” MacBook within 1-2 years. I put iOS in quotation marks because that brand of iOS won’t be the iOS you have on your phone today, but a more capable, expanded version of it.
It sounds wild, but the A10 looks to have the power and efficiency to handle the workload of a full PC. This coalescence of mobile and desktop PCs is driven by forces on both sides: mobile chips are getting more potent at the same time as our power needs are shrinking and our tasks become more mobile. If you think your workplace isn’t changing much because there are a bunch of weathered Dell workstations sitting next to frumpy HP printers, consider just how much more work every one of your officemates is doing outside the office, on their phone. And all those grand and power-hungry x86 applications that might have kept people running macOS – Adobe’s Photoshop and Lightroom being two key examples – well, they’re being ported to iOS in almost their full functionality, having been incentivized by the existence of Apple’s iPad Pro line, last year’s harbinger for this year’s performance jump.
Unlike Windows, whose x86 reliance is tied to its dominance of the lucrative PC gaming market, Apple really has very few anchors locking it down to macOS. The Cupertino company has been investing the vast majority of its development time into the mobile iOS for years now, and that shows in the different rates of progress between its two pieces of software. macOS is, in many ways, legacy software just waiting for the right moment to be deprecated. It’s getting a fresh lick of paint now and then, but most of its novelties now relate to how it links back to Apple’s core iOS and iPhone business.
This is where benchmarking and the performance of Apple’s A10 Fusion processor do come into play, because even in the constrained environment of a smartphone, it seems to be reaching performance levels of laptop and desktop processors.
That “iOS” MacBook is closer than you think.
I for one will rejoice the day that Apple ditches macOS in favour of iOS.
Not so long ago I was angry about the MBPro lineup having been abandoned, and about the general trend of turning workstations and PCs into mobile-like devices, but I’ve come to the conclusion that Apple accelerating the trend towards a ‘mobile experience’ for all users will in fact soon be the driving force behind a resurgence of a market for professionals-only hardware and software.
Sadly, Microsoft will realize this 5 years too late, and will just keep stuffing us with Windows 10-like garbage for quite a while…
No, it really isn’t. Let me try to explain an alternative viewpoint:
Phones have been fast enough to drive a desktop for many years now. The phone in your pocket is far faster than a G4 PowerBook, and the phone’s GPU is faster than a 2006 high-end GPU. The memory it has is also more than plenty. And it has multiple cores. This means a modern phone could *easily* drive a Windows XP/Vista-era desktop PC.
So why doesn’t it? The answer to this question is complicated – and it is ultimately also why an “iOS” MacBook will fail if it is executed the way you’re imagining it will. There are basically several factors at play here:
1) The user interface between a phone and a desktop is different. At a distance, WIMP and touch look kinda alike: they both have views in a tree, they react to events, and they draw to the screen. But the devil is in the details. When you have a much larger screen, a physical keyboard, and a mouse, your way of interacting with the application changes fundamentally, in ways that require the entire user interface to be designed differently.
Microsoft had to learn this lesson TWICE. First when they tried to shrink WIMP down to phones with Windows Mobile 5 and older, and a second time when they tried to upscale Metro to the desktop with Windows 8.
If Apple is foolish enough to try this they will fail in the same way. The only way to do this properly is to include BOTH Cocoa and Cocoa Touch into such a solution, and I don’t mean where a phone app can kinda pretend to be a WIMP app like Microsoft thinks it can.
2) The simplicity of having one device do one thing and do it well. For the phone to replace your desktop PC, it has to integrate seamlessly with your monitor, keyboard and mouse. This seems simple at a distance, but software and hardware tend to not play well together unless they come from the same vendor.
For a classic example of how difficult this is to do, see car and phone integration. Naturally, Apple has the arrogance to think they can do a fully integrated solution here, but I think they fail to understand that not everyone runs 110% Apple-only hardware. The same lesson they might be about to learn about headphones.
3) The battery. It may never become an acceptable price to pay that your phone battery is running low just because you’re at home using your desktop PC. I don’t think this is about to be solved anytime soon.
The actions of Apple lately seem to indicate they just might be foolish enough to attempt what you’re suggesting, though. If they do, it will be their Waterloo.
I agree with you, and it’s beating a dead horse. Yes, cellphones and desktops have overlapping uses but that doesn’t mean it’s smart to ignore their differences. We’ve seen this movie before. Whether it’s trying to shoehorn one into the other or create some hybrid of the two, it always ends in failure.
The reason we have different tools for different jobs is because it works better that way. Software isn’t an exception. You wouldn’t think that’s such a hard concept to grasp.
#1
See Plasma Active.
Now, it’s buggy as heck and doesn’t have a great reference implementation (IMHO). But look beyond that to the design (how it works, not the skins or icon sets). One application, different representations of input. It was/is beautiful.
#2 That is simple. Hooking up peripherals to hardware is a solved problem.
#3 Don’t run on battery when using as a desktop! Plug it in/wireless charge. Solved.
In my next post, we’ll discover world peace can be achieved in 4 simple steps.
As we all know, there is no I in World Peace.
Sorry, couldn’t resist.
There are certain assumptions in mobile applications that do not scale to larger screens, and similarly desktop applications make assumptions that do not work well on mobile hardware.
The keyword is “user experience”. It’s not about how you render a button or a list, but more about how it behaves in general.
A mobile application (like a browser or email reader) will use a “back” button at the top (in iOS), and if you’re lucky a navigation menu on the left in landscape/tablet mode. This is done because the app assumes limited screen real estate and control by a human finger.
A corresponding desktop application can have a menu bar at the top of the screen, have multiple tabs in the app for navigation, and can expect mouse and keyboard to be ready.
Games will depend on tilt and motion on mobile devices, but a desktop game can have a controller with several axes and 10+ buttons connected. (Yes, there are Bluetooth controllers, but their market penetration is very small.)
Overall it *is* possible to design two different UIs for the same application, with separate mobile and desktop experiences. (Actually, you can now include tablets and TVs as separate modes as well.) But then you’re left with two completely different applications (like Photoshop on the desktop and Photoshop mobile).
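(For what it’s worth, UIKit’s trait collections are the existing mechanism for this kind of adaptation within a single app. A minimal sketch, assuming hypothetical showSidebar/showBackButtonStack layout helpers; the API calls themselves are real:)

```swift
import UIKit

final class MailViewController: UIViewController {
    // Called whenever the environment changes, e.g. on rotation or
    // when the app moves to a larger screen.
    override func traitCollectionDidChange(_ previous: UITraitCollection?) {
        super.traitCollectionDidChange(previous)
        if traitCollection.horizontalSizeClass == .regular {
            showSidebar()          // tablet/landscape: persistent navigation pane
        } else {
            showBackButtonStack()  // phone/portrait: "back"-button navigation
        }
    }

    // Hypothetical helpers – the real design work happens in here,
    // which is exactly why the two experiences diverge so much.
    private func showSidebar() { /* install a split-view style layout */ }
    private func showBackButtonStack() { /* install a navigation-stack layout */ }
}
```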
For me, a phone/phone UI could probably never replace a desktop, but for a large category of people it will do very well: they simply use their devices to consume content created by others.
They still haven’t learned that lesson though. Every new UI they build on Windows 10 is a UI meant for touch that you’re supposed to use in the exact same way with a keyboard and mouse.
I cringe every time I have to go into the new Settings.
I’m using a Windows 10 laptop with “phone” hardware (Intel Atom Z3735F, 2GB RAM, 32GB SSD) and it is barely usable. Even scrolling a webpage smoothly is impossible. I also use a Windows 10 machine with a Core 2 from 2008 with similar benchmark performance that is vastly faster (except at booting) and more responsive in the real world, despite having a slower hard disk.
I agree that Apple could switch to ARM for the Mac. As mentioned, they’ve handled that kind of transition well before (68000 -> PowerPC -> Intel.)
But making an “iOS Mac” is a red herring.
These operating systems are already as close as they will get. Both built on Darwin kernels, sharing as many frameworks as makes sense.
Theoretically a merged platform could include all the frameworks for both platforms (a “fat” platform), allowing apps to run anywhere. But I believe Apple would fear the possibility of ugly/poor Franken-apps that aren’t well targeted to one platform or the other.
Apple could announce an ARM MacBook at the next WWDC, and by release in September, a silly amount of the available app catalog would be available on the platform.
Whether such a device would have any notable performance differences from an Intel machine (notably battery life), I can’t say. Beyond having “their chip” in it, not sure if there’s real value there.
But they could do it.
But I agree with the others: the touch experience, the intimacy of working with a tablet or phone vs the disconnected reality of a desktop, are quite different user experiences. The idea of having to reach out and touch my monster iMac makes my arms ache just thinking about it.
This is just sad. “The idea of even standing up and doing stuff makes my legs ache.”
We are literally turning into the human blobs of WALL-E.
There’s a difference between being lazy and having aching arms from holding them out horizontally all day. Working with a touch screen monitor for extended periods takes its toll on your arms – I can’t imagine it would be any different for you.
The “fat” platform is very doable in the next couple of years.
I bet they have it running in their labs already.
All the translation requires is middleware between the interface and the input (touch or keyboard/mouse), and addressing iOS’s single-user build… but Apple already ships a multi-user iOS, to big education customers only.
My tablet might be able to replace my workstation — just air-dock it to a keyboard and second monitor, why not?
When OS X shipped, it ran on 300MHz PowerPC G3s with 64MB of RAM. A modern iPad is far faster than that.
There’s more than input-adapting middleware required. The interfaces also have to be done for each. Granted, I don’t think that would be much of a problem, as app developers on iOS already do this for iPhone/iPod vs iPad, and the interface files are almost identical for Mac apps anyway. In fact, this is one of the reasons why the iPad is so far ahead of Android on tablet experience. Still, I could understand Apple not wanting to tempt developers into the laziness that characterizes Android’s non-phone apps, to say nothing of the monstrosities that are in Windows land at the moment.
I think you’re right on the money actually, except they won’t fear Franken-apps. They have experience both with fat binaries and with creating APIs for moving from an older platform to a new one (see Carbon and the transition to Cocoa). I think they can certainly pull it off with less effort than either fully porting OS X (sorry, macOS) to ARM, or building iOS into a truly desktop OS.
I also wonder if they will eventually do away with the new MacBook and MBA in favor of a refined iPad Pro. Personally I’d rather see them keep an ultraportable laptop; my preference is the Air over the MacBook but I’m not your typical Apple head.
My biggest question is, why hasn’t AMD sued for trademark infringement over that “A10 Fusion” name? Their original A-series processors, including A10, were called Fusion CPUs before they switched to calling them APUs.
Will we see the return of the iBook?
Microsoft has made some HUGE mistakes and (hopefully) learned a lot of lessons, but I like where they’re going with Win10 Mobile and Continuum.
What I’d like to see from anybody out there (be it Apple, Linux, or MS) is a phone that acts like a regular smartphone with all the Android/iOS/WinMobile apps and then works just like a desktop with full desktop apps when docked to a monitor w. keyboard and mouse.
With regards to apps, all three platforms have the capability built into them to do this right now. By that, I mean they all allow developers to package multiple executables into the same app package. At the moment, this capability seems to be used only to target different CPU architectures and/or to write “applets”/metro-tiles/etc.
What is needed is for the OS Developers to refine the transition and APIs between smartphone mode and desktop mode and for App Developers to develop apps that run on both of those environments and their input methods.
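(To make that idea concrete on one platform: a rough sketch of what hooking into such a smartphone-to-desktop transition could look like with today’s UIKit. The external-screen notifications are real API; DesktopRootViewController is a hypothetical stand-in for a desktop-style interface.)

```swift
import UIKit

// Hypothetical desktop-style interface shown when docked to a monitor.
final class DesktopRootViewController: UIViewController {}

final class DockObserver {
    private var desktopWindow: UIWindow?

    init() {
        let center = NotificationCenter.default
        // Fires when the phone is connected to an external display.
        center.addObserver(forName: UIScreen.didConnectNotification,
                           object: nil, queue: .main) { [weak self] note in
            guard let screen = note.object as? UIScreen else { return }
            self?.enterDesktopMode(on: screen)
        }
        // Tear the desktop UI down again when undocked.
        center.addObserver(forName: UIScreen.didDisconnectNotification,
                           object: nil, queue: .main) { [weak self] _ in
            self?.desktopWindow = nil
        }
    }

    private func enterDesktopMode(on screen: UIScreen) {
        let window = UIWindow(frame: screen.bounds)
        window.screen = screen
        window.rootViewController = DesktopRootViewController()
        window.isHidden = false   // show the desktop UI on the monitor
        desktopWindow = window
    }
}
```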
I see that Microsoft is pushing in that direction, although it seems to me that developers are a bit slow to follow with bigger/more complex Continuum apps.
I haven’t been keeping up with what Google/Android/Linux and Apple are doing, but I have a gut feeling that they’re headed in this direction, too.
Either way, they’ll need developers to write them new apps! And I hope they do write new ones, instead of just porting old ones.
I’ll just note that Windows (since NT) isn’t “reliant” on x86 processors and was initially developed on RISC machines. However, perhaps what was meant was that it relies on compatibility with x86 binaries in order to keep its place in the market; IMHO even that would be wrong. Microsoft could, e.g., jump to the ARM platform and include emulation software providing backwards compatibility with reasonable performance, .NET programs will run on any (reasonable) platform, etc.
Uhm, I’ll just note that they did have a line of ARM Windows tablets/2-in-1s not that long ago. They didn’t run Intel software, just the Metro/Modern Windows Store ones. They were otherwise nice – you know, except for the lack of software or hardware compatibility.
I think you overestimate what emulation is capable of. Applications could run in an emulated x86 mode, but I’d not want to use them often. Even Rosetta (the PPC emulation layer on Intel-based OS X) was noticeably slow when you had to run something in it. Emulated x86 on a RISC-based machine like ARM would, if anything, be even slower, because you have to render a complex instruction set down to a much reduced form. They could do it, certainly, but I don’t think most of their users would take to using it too well.
darknexus,
Yea, distributing software as architecture dependent binaries is clearly a limiting factor here. Emulation is painful. Apple was able to migrate their desktops from the less popular ppc architecture to x86, increasing parity with the majority. But going the other way could prove to be more challenging.
If they replace desktop software with iOS software, then it’s certainly doable; the only question is whether the market would accept it. And to this end I have to ask you guys: are Mac users accepting, or at least tolerant, of the notion that iOS software should become the new platform standard for (Apple) desktops?
I personally would never be sold on it, because I know it means they’d try to drag desktop users into a walled garden, but I frequently have to concede the fact that my opinion doesn’t matter to corporate interests.
As someone who went from a G4 Mac Mini to a 1st-generation MacBook Pro (1.83GHz), there was no difference in app speed – if anything it was even a little faster on the Intel. A combination of a.) Rosetta being incredibly good at what it does and b.) the gulf in performance that had opened up between PPC and x86 at the time meant that the PPC to x86 transition was as smooth an architecture transition as there has ever been. For that, I give Apple massive credit.
I’m not using a Mac for this generation because of the hardware becoming too rigid. No upgradeable drives or RAM means there’s just no point spending that much money on a machine. I bought an i7 desktop PC with an SSD RAID instead.
I’m a firm believer that Apple have ARM-Macs in the lab and I’m certain they outperform x86 for native code, especially on performance-per-watt; but the compatibility with existing software is a thorny issue that isn’t as manageable as the PPC to x86 transition. x86 behaviour is crazy complex and unknowable.
I think Apple are waiting for a software transition, not a hardware one to drive the switch. I think this is what all this iOSification of mac OS has been about, though it surprises me greatly that the Mac App Store has rotted quite so much — I would think that getting as many developers over to that would be a critical prerequisite for jumping ship to ARM.
I didn’t find Rosetta to be speedy. I had an Intel Core Duo Mac Mini, and an iBook G4. Running PPC-based apps, particularly intensive ones, was a lot faster on the G4 under load than what Rosetta could do. I’d say try it with heavy apps like an audio editor, however it’s not as if we really can try Rosetta these days.
The Mac Mini was always a low-end model, considerably cheaper than a MacBook Pro…
I have a Power Mac G5 Quad, and it ran PPC apps considerably faster than any of the early Intel Macs; in some cases it was faster than native x86 code too.
It was similar with the 68k transition: native code was faster on PPC, but emulation was generally slower than all but the lowest-end models of 68k chip. It was possible to emulate a 68k Mac on an Amiga at the time and run 68k software faster than any real Mac could, because the Amiga had 68060 upgrades available while the Mac did not.
You could say similar things about benchmarks of video cards in PC gaming, too. The benchmarks are important, but really just to validate a minimum standard of performance. On the other hand, there are the intangibles – nVidia cards (at least 10 years ago – it’s been a long time since I ran ATI cards) simply run smoother (subjectively). So too do Intel CPUs (even back when Athlon was king) against AMD. And yes, some of that was compiler shenanigans on Intel’s part – but that really doesn’t matter for the end-user experience. There are also the ecosystem components – nVidia has G-Sync (3D Vision a few years back), Shield, etc.
Also, while I don’t disagree that Apple wants to push the iPad and iOS as the future of computing (I wish they’d find a way to sell App Store membership – imagine if Steam apps were also sandboxed, and couldn’t wreck your system with DRM crapware and other malicious rootkits), it’s really not ready for professional use right now, despite the name, and I can’t see it being ready without some pretty substantial changes, like mouse/touchpad and multiple-monitor support. I also don’t think it makes much sense to throw out macOS, though you can definitely see the tech converging over time. macOS and iOS share a lot of code, after all.
I just can’t see iOS taking over for professional use until it has some kind of looking-straight-ahead workflow, something that could be used at a standing desk without destroying your neck.
Everyone keeps harping on about how mobile and desktop can’t be merged.
See Chromebook with Android Support. Oooh…shiny!
Actually, there have been several topics and plenty of comments here about how bad Android apps are on a tablet or notebook. For example, 8 days ago:
http://www.osnews.com/story/29393/One_year_later_can_Android_7_0_No…
It’s the combination of the two. Android on a larger device is weird. I have a Remix Mini – it helps some, but it’s still clunky. It’s the merger of the two, the Chrome desktop and Android apps, that I find intriguing. If they do a good job of integrating the apps with the Chrome desktop, I think they may have a winner. Chromebooks aren’t clunky. I’ve had several, and given a couple to my fiancée and her daughter. They love them. Adding the ability to install apps is the winning ticket, I believe.
On the web we have this idea that the content (or app) should “respond” to the size of the display (and in practice that just means screen width and maybe pixel density – short screens get no love).
I think what’s needed for Android apps to not suck on desktop is 1 part technical, and 1 part practical. The technical thing is we need a way to respond to input type. MS has a switch you can throw between keyboard and mouse mode, and finger mode, and many devices trigger this switch when you swivel your screen around, etc. But why not have it simply respond to mouse move, or touch input – why a global switch?
If the touch areas of buttons and all that in apps changed to suit differing input types, we’d have a pretty good hybrid OS.
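(Sketched in UIKit terms, to pick one concrete platform: the per-event information is already there, since every UITouch reports its type – no global switch required. The inset values here are arbitrary.)

```swift
import UIKit

final class AdaptiveButton: UIButton {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        // Decide per event instead of via a global "tablet mode" switch.
        guard let touch = touches.first else { return }
        switch touch.type {
        case .direct:
            // Finger: enlarge the padding so the target stays forgiving.
            contentEdgeInsets = UIEdgeInsets(top: 12, left: 12, bottom: 12, right: 12)
        default:
            // Stylus or indirect pointer: precise input, tighter layout.
            contentEdgeInsets = UIEdgeInsets(top: 4, left: 4, bottom: 4, right: 4)
        }
    }
}
```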
The practical part is we need users and user demand for tablet, laptop and desktop apps. Without that app designers aren’t going to optimize app experiences for tablets and up – and that’s more the reason those apps suck, than that Android sucks in general in those form factors.
On the other hand! In Windows, desktop apps tend to suck anyway (for all the same reasons Thom dislikes iOS’s inconsistent apps – a problem for which I’m afraid there is no solution – end-user developers have too many divergent ideas about UX), and so much time is spent in the browser on the desktop anyway.
Here’s a technical detail many are unaware of – new Android APIs work on old Android versions! It’s not like those other platforms where, if your users don’t upgrade, they don’t get the latest stuff – Android is modular, and the new APIs can often be compiled into your apps (it’s been that way for ages). Why not apply the same idea to web apps – basically treat the web as a legacy Android platform? Port the Android GUI API (with improved k/b paradigms) to the web. Solve both problems at the same time – get a focus on desktop/tablet UX, and also get access to a market that can start using it now.
You haven’t read the part of that article where, out of the 200 top apps, basically just a handful work properly in “tablet mode” or even support landscape mode.
Android apps on a tablet suck.
Android apps on ChromeOS suck even more.
Can that be fixed by the OS? No, not really.
Does Android/ChromeOS have a focus on improving that? No – it actually seems to be getting worse.
I have a Nexus 7 and I would definitely NOT say that Android apps on a tablet suck. I’ve never had a problem. The apps use the available space fine. Some add new features when running on tablet. I guess YMMV.
No wonder, the Nexus is only a tad bigger than a phone. And it’s a portrait “tablet”.
I always preferred mine in landscape, but maybe that’s why Android’s tablet experience always sucked for me. I’m personally amazed anyone’s Nexus 7s still work, after what I’ve seen of their build quality. YMMV.
My mileage is actually pretty good because I mostly use games and Microsoft apps which are the exceptions to the rule. Again, I will point to this article that has a very nice and seemingly unbiased analysis: http://arstechnica.com/gadgets/2016/09/one-year-later-can-android-7…
I just don’t think this is really where we are going…
We already have an iOS MacBook. It’s called an iPad Pro, and it isn’t exactly setting the world on fire. It is selling OK – it isn’t a failure or anything – but IMO it has a rather limited audience when it comes to “professional” computing.
It’s not because of the CPU, and it’s not because of the form factor; it’s because iOS simply cannot host most “professional” applications as well as OSX can. And I have seen nothing that would make me think that is going to change in the future.
When will iOS get a real bash shell? When will it get real mouse support? How do I access shares on my corporate network? How do I spin up a web server or a Node server? How do I run a Python or Ruby app locally? Where is Xcode? Where is Boot Camp? How do I run a VM? Where’s Docker support?
None of those things are contingent on there being an x86 chip under the hood; they could easily be done on an A10 or future ARM CPUs. The problem isn’t ARM, it’s iOS…
I personally do think we will see an ARM MacBook sooner rather than later, it just won’t be running iOS. It might have the ability to run iOS software, but without all the stuff I mentioned above it won’t be a MacBook, it will just be a glorified iPad.
The problem is an iPad simply isn’t (and never will be) good enough for most “professionals”. Stuffing an iPad into a MacBook case won’t make it a MacBook – it will just make it a tablet with a better keyboard. It will still have all the same problems, all of them rooted in the fact that it runs iOS instead of OSX.
There is nothing wrong IMO with a MacBook with an ARM CPU, as long as it is running OSX. Maybe we lose the ability to run Boot Camp, but that is a small loss in functionality compared to what we lose by switching to iOS.
Microsoft figured that out and adapted to it. They simply include both their mobile and desktop environments together on their systems. Apple will have to do the same or they will never be able to convert the majority of professional users.
I do have to point out one thing: professional is not necessarily “software developer.” Don’t make the same mistake I did. A few of your points, such as network shares, are relevant to all professionals (and for those there actually are ways to do it, by the way), but all your questions about Boot Camp/VMs/Docker/servers are, to 95% of the professional crowd, absolutely irrelevant.
darknexus,
I agree with galvanash, the CPU architecture is mostly irrelevant to me and the mobile CPUs are technically capable of handling most workloads, but unless/until the mobile platforms make a serious effort to support productivity use cases, they’re not going to be able to replace desktops for my work.
Obviously needs will vary from one professional to the next. I know some developers like to organize their work into different VMs for each unique client and test configuration, I’m sure you can understand why that is useful. Frankly, many of the daemons/tools I need are lacking or poorly implemented on mobile platforms. I expect more from my development desktop. Part of that is just the form factor, and adding a keyboard can help, but even so the software just is not there for me, at least not yet.
You ignored the main point of my post, that professional does not necessarily mean software developer. Good job.
darknexus,
Firstly, I was talking about myself.
And secondly, I explicitly stated that professional needs vary, so I don’t really see why we need to disagree on anything here. Even from one software developer to another, we can have vastly different needs. Obviously a doctor, a musician, an accountant, a writer, a photographer, a professor, a lawyer, a mechanic, etc have different needs too. Maybe for some of them the mobile platforms are already good enough, and for others they may find things sorely lacking, such as file systems, input, media cards, peripherals, etc.
My point was simply that I agreed with galvanash, an ARM device could clearly meet the majority of requirements, but there are many use cases for which today’s mobile platforms are still too immature due to a focus on consumer needs. As a power user, I’ve felt somewhat neglected by mobile platforms. But now that the consumer market is saturated, maybe the focus will change…who knows?
I agree benchmarks and stick-measuring seem worthless in most instances when upgrading your phone. I upgraded my phone recently (see if you can guess which one I got), and CPU, RAM, and GPU were not on the list of things that helped make my decision. I think it really comes down to this when choosing a new device (besides the old one being broken):
OS: Which OS do you prefer? In my case it is Android, because of the flexibility it allowed me that the iPhone didn’t offer stock (I didn’t want to get into the whole jailbreaking thing). I know people who prefer the iPhone, and I think that is great if it fits their needs and preferences.
OS VERSION: My old phone, a Nexus 5, did get the Marshmallow update, and this update was important to me because of the app permission model (lacking for a long time in Android) and the power-saving features. Of course, some people are forced to upgrade when the next version of the OS is not supported on their hardware. In my old phone’s case, the Nexus 5 would not have gotten the Android 7.0 (Nougat) update (unless you use a rooted version, which I am sure will be available).
CONNECTION: My Nexus 5 did not support VoLTE, Wi-Fi calling, or Band 12. This made for a horrible experience with T-Mobile, because their voice network is horrible (they have moved to VoLTE, which does work well), and the lack of Band 12 made being inside buildings a problem (the T-Mobile LTE hotspot helped with this in my primary building, but still didn’t fix voice issues for the Nexus 5). I didn’t want to leave T-Mobile because of the cost and features; the money I save using T-Mobile will pay for a new phone over time if I were to upgrade. I get unlimited data and international data/text for a cheap price, and that is hard to get anywhere else (maybe Sprint, but I moved from Sprint since they sucked so badly in the areas I traveled).
CURRENT ACCESSORIES: All the cables I have for my equipment (Kindles, Moga game controller, etc.) were Micro-USB. In addition, I use wireless charging, which I thought would be a gimmick, but I ended up really liking the convenience of it. Most new phones are moving to USB-C, and I didn’t want to have to replace all my cables or use adapters everywhere. I wanted to use the cables and wireless charging I already had for my Nexus 5.
STORAGE: I found myself always hitting the 32GB storage limit on my device. So for the next phone I decided I wanted an SD card option, so I could have large storage for pictures, video, downloaded maps, languages, ROMs, and large games.
Those were basically the main reasons I weighed when considering which phone to get. It was not benchmarks of CPU, RAM, or screen, or even battery (I can always carry a portable 6000mAh battery when I need extra juice, like on trips). I didn’t feel I needed a super-retina display of any sort, but I do admit a better camera is always nice to have, and the Nexus 5 I had sucked in low-light conditions.
So which phone did I upgrade to that, surprisingly, met all my needs? The Samsung S7. I decided on the S7 and not the S7 Edge because I like smaller phones, and the 600mAh bigger battery in the Edge was not an issue when I could always carry that portable battery when required. So I got everything I wanted above, and overall I love my Samsung S7, even with Samsung’s additions, which I didn’t find too drastic compared to the stock Android on the Nexus 5 – and some of the features Samsung added I liked. The call quality and reception are so much better now with T-Mobile, and the only change was the phone (I travel the same areas). Having the SD slot, waterproofing, a better camera, Qi/AirFuel wireless support, a bigger battery, fast charging, and additional radio bands gave me extra things the Nexus 5 didn’t have, and a lot of phones out there didn’t have. They even threw in a Gear VR and 1 year of free Netflix when I bought it, which added extra incentive when considering the price (even vs a new Nexus 5X).
So basically those are the reasons I upgraded, and I never really considered the other things a factor in my decision. Right now I don’t see myself replacing my Samsung S7 anytime soon, and having the 200GB SD card in gives me a lot of space for the big things I store. It is nice also that I can do a full phone backup to the SD card, and if the phone dies I can pop it out, put it in a new phone, and restore locally. So yes, people’s choice of phone should be based on functionality, not benchmarks and stick-measuring.
How does that work out for you? Did you format the card as something other than FAT32? I wonder, because that card would unlock the potential for a very big mobile video library, but since FAT32 cannot handle files larger than 4GiB, the point is moot.
I never tried anything to test whether my phone supports any file system other than FAT32 on the SD card. If it does, that would be great!
I also wonder if TWRP and other recoveries are able to cope with an unexpected file system on such an SD card.
It should support exFAT. My two-year-old Samsung tablet does.
Probably exFAT, though it’s usually reasonable to expect Android to support ext4 as well (though whether Samsung fscked that up is anyone’s guess). In either case, FAT32 is by no means required anymore, thank goodness.
It works great for storing large data, which is exactly what you want to use it for. Recording in Hi-Def can create large files very quickly. As other people have said, Samsung uses exFAT (some phones might only support lower-capacity SD cards with FAT/FAT32), so you don’t hit that 4GB file size limit of FAT32 – the max file size for exFAT is 16EB (good luck finding an SD card that size, LOL). I personally store my emulator ROMs, downloaded offline maps, large games, images, videos, offline Spotify downloads, language downloads, etc. All of those would take a considerable amount of storage away from the 32GB the phone comes with (for a US-based Samsung S7). You also have the ability to move some apps (if they allow it) to the SD card. Most games do allow this, and they are mostly the apps using the largest amount of app storage. Then of course some apps let you pick where to store large downloaded data, such as Here Maps letting you pick where to download offline maps to.
If you want the ability to pop out the SD card, mount it on another device, and put it back in the phone, be sure not to use Android adoptable storage or encrypt the SD card. Adoptable storage is not an issue with the S7, since Samsung disabled the ability to turn it on (and I can see the reasoning behind doing this). I know some people’s concern is the ability of others to access the SD card if it is not encrypted. That is why you store only stuff you’re less worried about on the SD card. If you are worried about having your phone’s pictures and video on an unencrypted SD card, then store them in the local phone storage, which is encrypted. If you don’t want to mount the SD card on other devices, then encrypt it. Having a choice is nice.
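(For a sense of scale, here is the limit arithmetic as a toy sketch – the function is made up, but the numbers are real: FAT32 caps a single file at 2^32 − 1 bytes, so a high-bitrate recording hits the wall surprisingly fast.)

```swift
import Foundation

// FAT32 caps a single file at 2^32 - 1 bytes (just under 4GiB);
// exFAT raises that limit so far it is irrelevant in practice.
let fat32MaxFileSize: UInt64 = UInt64(UInt32.max)

/// Rough check: does a recording at `bitrateMbps` for `seconds`
/// seconds fit in one FAT32 file? Illustrative arithmetic only.
func fitsOnFAT32(bitrateMbps: Double, seconds: Double) -> Bool {
    let bytes = bitrateMbps * 1_000_000 / 8 * seconds
    return UInt64(bytes) <= fat32MaxFileSize
}

// A ~48Mbps 4K recording hits the 4GiB wall in under 12 minutes.
print(fitsOnFAT32(bitrateMbps: 48, seconds: 11 * 60))  // true
print(fitsOnFAT32(bitrateMbps: 48, seconds: 13 * 60))  // false
```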
Well… both MS and Apple have been running around like headless chickens for the last couple of years. Anything stupid is possible these days, I guess.
I think most people already long ago agreed that the Pentium 4 was a dead end, and that the Core series was much better. Disregarding PowerPC entirely, it’s fair to say that Intel improved a lot over the years. So let’s not misunderstand the obvious.
Let me remind everyone that after the creation of the GUI, people used to say the command line was going to disappear – but guess what, it is still there. Developers used to say that in the near future there would be no need to write code and that development would be strictly visual, and we are still coding.
The future is going to be more orthodox than most people like to believe. Merging both platforms in the near future would be stupid unless they can come up with something as revolutionary as the “Knowledge Navigator”, but despite everything we are not there yet. Sure, macOS and iOS share a lot of things, which is great, but the desktop UI is still – and will remain – really useful for a long time. The problem is the new generation of arrogant perpetual teenagers who are only capable of taking pictures of their food or themselves and sharing them on Facebook; for them a basic touch UI might be more than enough.
Yes, the command line is still here, but it is used only by a small minority. Most “normal” users don’t even know about it and would be scared if they saw it.
The CLI is making a resurgence…
Mac OS 9.x had no CLI; OS X does.
Microsoft are focusing on improving their CLI, and some of their newer products depend on it for management.
The CLI is used by sysadmins and sometimes by programmers, rarely by “normal” users.
Mac OS survived for quite a while without a command prompt, and although that might not be the best example, it does show that general consumers at least can get by fine without one. There’ll always be a need for it, just like there’s still a need for programmers banging the metal in assembly, it’s just that those situations become quite niche as time goes by.
“That “iOS” MacBook is closer than you think.”
The only benefit for ARM I see is to get rid of the Intel CPUs.
I doubt that an ARM-based MacBook will give longer battery life than an Intel-based one (given the same performance).
But I have to agree (and I have been saying this since the iPad Air came out): Apple will sooner or later come up with an ARM MacBook Air.
“…and by now it’s clear that Apple is going to retire OS X in favour of a souped-up iOS over the coming five years”.
A time will come when that decision will be compulsory. And OSX will still be OSX. And iOS will still be iOS: OS ‘behavioral islands’, walled gardens, Alcatraz Islands.
And it will not matter at all anymore. The world is bigger than that.
When the first Mac came out, the 68000 was far and away the most powerful chip you could get in a home-class machine. The first Macs were expensive, but only half what an IBM PC cost and they were incomparably more powerful; only lack of memory and expansion held them back.
Intel went to work and eventually made the Pentium. Something had to be done, and the result was the PowerMac: when the first ones appeared they blew Pentiums away. The thought at the time was that the Pentium was old, slow, and soon to be obsolete; the PPC was faster, cheap, and at the beginning of its evolution.
Once again Intel went to work with unlimited resources, and built a cooler and faster x86. (It helped that IBM never really cared about Apple’s needs or heat dissipation.) Apple was in trouble again and ended up switching to Intel processors.
They nearly blew the PPC switch by providing no practical way to build native software for over a year after launch. It took time for legacy programs to run faster on PPC than on a 68040, and few companies had the resources to make PPC-native binaries then (think Apple, Adobe). Apple might well have gone bankrupt without Metrowerks.
For the Intel transition, they learned a little. There were many more native applications at launch. Apple provided tools (albeit very, very bad ones) and shafted Metrowerks worse than Symantec had shafted Apple on PPC, leaving Apple in control with no competition.
If they want to switch to their own chips, they have sort-of decent tools now, and they’ve trained people to build for multiple CPUs with a decent cross-compiler chain, invented to support iPhones and iPads. It looks doable, but there is no obvious desperation this time. Is it worth doing just to save a little money (if they do save money)?
MOST people aren’t nerds. Most people could live with an iOS MacBook.
By that I mean that multiple products are having iOS versions created for them. Take AutoCAD, for instance. There is an AutoCAD version for iOS. Of COURSE it doesn’t have all the functionality of the desktop version – they just created it. Rome wasn’t built in a day. Even if the hardware were to have no future updates, they could continue to improve the software leaps and bounds above what it is today.
Now, the lack of a mouse/trackpad may be a huge issue, or maybe they have figured out a way that works well on an iPad. That is up to the app developer to either figure out or reach out to Apple for help with. I think the maker of AutoCAD is a big enough company to do this.
You see, everything that’s being discussed here is utterly useless running in circles.
For it to be able to cater to the computing needs of the vast majority of professionals, Apple would need to macOS-ify iOS to the point where it almost becomes macOS, and software developers would need to turn their iOS apps into almost-full-fledged desktop applications. What’s the point of it all?
AutoCAD for mobile platforms is meant to let you make minor edits to your drawings on the fly, and I assure you that all users will have migrated (back) to Windows by the time the iOS version reaches substantial feature parity with the desktop application.
Heck, it took them six releases to achieve near feature parity with the Windows version on the Mac, so only now can I almost use AutoCAD for Mac as a daily driver for my needs. How many years until AutoCAD for iOS supports everything? What a terrible waste of resources to reinvent the wheel!
Professionals have advanced needs. Advanced needs are already met by available applications on desktop platforms such as macOS. Such platforms are evolving more and more slowly because they have now reached a satisfying set of features, satisfying speed on available hardware and so on.
As someone said, the only sensible path can be evolving Cocoa and Cocoa Touch towards a merger enabling a single application to have different interfaces for different sets of input devices and screen sizes/orientations.
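(A toy sketch of what that direction already looks like at the source level: Swift’s conditional compilation lets shared code sit beneath both Cocoa and Cocoa Touch, with only the presentation details forking. The structure here is an assumption about how such a merged app could be organised, not Apple’s actual plan.)

```swift
import Foundation

#if os(iOS)
import UIKit
typealias PlatformColor = UIColor        // Cocoa Touch side
let minimumHitTargetSide: CGFloat = 44   // finger-sized controls
#elseif os(macOS)
import AppKit
typealias PlatformColor = NSColor        // Cocoa side
let minimumHitTargetSide: CGFloat = 20   // pointer-precise controls
#endif

// Shared, platform-neutral model code compiles unchanged on both sides.
struct RenderJob {
    var name: String
    var frames: Int
    var badgeColor: PlatformColor { return .orange }  // one property, two frameworks
}
```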
Even if one day Apple calls the operating system across all its devices “iOS”, I am pretty convinced that, as far as features and UX go (and the origins of underlying code as well), it will be the sum of what today macOS and iOS are, not just an evolution of iOS.
Otherwise let me know when Cinema4D, Adobe After Effects or Rhinoceros reach feature parity on iOS and I can use VRay distributed rendering across the clients on my network, saving resulting pictures on network shares.
I mean, by the time iOS supports all this, it will surely have incorporated so much of the code that today differentiates macOS from iOS that what it is called (iOS, macOS, AppleOS) will not matter at all and will not make it more iOS than macOS.
What you are failing to see is that iOS for iPads is still very young and is growing in capabilities very quickly. Two years ago you wouldn’t have dreamt that AutoCAD of ANY version – maybe even just a READER – would run on an iPad. And of course you wouldn’t have guessed that there would be a 12.9″ iPad Pro, since most people inside Apple didn’t know about it.
Things change. Eventually, who knows, maybe two years from now, things could be TOTALLY different and everything you are saying will be solved. Maybe. Who knows? Only Apple knows. Just never say never. It _always_ (wink) comes back to haunt you.
Good points
But only insofar as the whole industry of professional software makers can keep Apple’s pace in this race towards replacing macOS with iOS.
Things can change faster than foreseeable, but what I am trying to say is I believe that so much existing code will be reused in this race that both iOS and the full-featured professional grade software it will run will have hybrid ancestry rather than being rewritten from the ground up. From a code legacy perspective, I don’t see the macOS branch dying and the iOS branch surviving – I think they are more likely to flow into one another.
Just asking, but do most of you know that the difference between macOS and iOS is drivers and APIs and hardware? And that the underpinnings of both are EXACTLY the same root OS? You do know that, right?
I get SO tired of people talking like they are totally separate OSs when they are the same OS, made for different hardware – with different drivers because of that – and with different APIs and libraries. But again, the OS itself is the same.
Even if you believe that an iOS version of Adobe’s suite could replace the desktop versions (which to be honest I don’t see ever happening – graphics and video professionals need things like calibrated monitors, efficient and exact mouse/keyboard control, multi-monitor support in general, etc.), one of the Mac’s main reasons for existence in this day and age is to serve developers.
Macs have pretty much become standard hardware for (non-Windows) developers – just go to any developer conference and take a look around, it’s MacBooks as far as the eye can see. iOS can’t be used for development – no access to the filesystem, no command line, and no third-party interpreters allowed means it’s a dead end. Get rid of macOS and you lose that enormous, loyal audience – a huge source of profits goes “poof”. Not to mention the “halo” effect that this has among the tech-minded and their friends and family – it’s a good source of marketing.
Finally, there is the paradoxical question: how on earth will iOS apps be developed, when iOS can’t host a development environment? It’s just silly to think about.
To sum up: if indeed macOS is going to become “legacy”, it can only happen if iOS essentially becomes yet another desktop OS after all, like Windows 10 but in reverse. The current iOS is simply too locked down to support developer use cases.
You haven’t been paying attention.
http://www.osnews.com/story/29284/Continuous_C_and_F_IDE_for_the_iP…
http://www.osnews.com/story/29296/Swift_Playgrounds_helps_you_learn…
And there’s more happening in this space. The list of things – even ‘hardcore’ things – iOS can do is growing by the month.
At the moment Playgrounds is a nice learning tool, little more. Otherwise there have indeed been interesting developments on the iOS IDE front – but I’m still not drinking the Kool-Aid that iOS in anything resembling its current, limited, locked-down form will ever be adopted by serious developers, reality distortion field or no. A file system is needed. Support for multiple monitors is needed. Multitasking is needed. A command line with compiler support is needed. Add all of these things and what you’ve got is essentially a desktop OS.
The day that Apple no longer offers a desktop-capable OS will be the day that the Linux desktop finally triumphs. (In other words, it will probably never happen.)
Playgrounds for iPads is just the beginning of programming on iPads. They’re just trying to figure out how they are going to let developers develop on iPads themselves.
In the beginning, DOOM was programmed on NeXT machines but ran on DOS. Eventually compilers on DOS were good enough that people could write Quake and later DOOM versions on DOS/Windows, and they didn’t need NeXT anymore.
Apple might port macOS to their own processor, but I’m willing to bet they have more sense than to shoehorn iOS onto laptops. Microsoft already failed at the same thing, and getting desktop features isn’t going to do Android any favors.
The whole everything-on-one-phone, single-framework convergence is pointless anyway. Developers already have the means to share code between platforms where it’s useful to do so, and UIs need to be developed separately for different devices or they will suck on all of them.
I agree – the benchmarking is mostly… just a guide to how soon I need to retire old devices.
But as others have pointed out, “macOS” and “iOS” (and “tvOS” and “watchOS”) are not really separate products or systems. The OS+device is one part of an ecosystem.
It is kinda going towards the “ubiquitous computing” demos from XEROX in the late 80s. The stuff is just everywhere and kinda invisible; just a part of your environment. You’re living _in_ the ecosystem.
So you don’t need your phone (or for that matter your watch) to be able to plug into a dock at your desk. The cloud is the dock and takes care of whatever needs connecting, like the AirPods which pair over the cloud.
macOS is in a way just what iOS looks like on a desktop, if it could dock with one. Which affects the apps and how they’re built. For now, SPSS runs on Macs via Java. And 3D modelling tools like form.Z need lots of power. But there’s really nothing to be gained by them running on your phone, except where it actually works for some specific purpose – form.Z recently brought out a viewer app for iPads. OmniGraffle can do lots on the desktop and a fair amount on an iPad. I even have to choose whether to use a trackpad or a mouse or a pen for some tasks, because one is better than the others.
From a technical perspective all these products are very different. But from a use perspective, the work you do is organised into different categories; not “CPU” and “RAM” or even “frameworks”, but “my 3D model” and “my travel plans”.