After sixteen major releases, you might think there’s not much left to add to Parallels Desktop – and for the vast majority of Mac users, who are still using Intel CPUs, there isn’t. For them, this update to the popular virtualisation software fixes a few bugs and adds support for the latest version of the Linux kernel, but that’s largely it. Overall it’s not even consequential enough to warrant a full bump of the version number.
Yet arguably, this is the most significant release of Parallels Desktop since it first appeared in 2006. Just as version one unlocked the potential of Apple’s then-recent switch to the Intel architecture, this one breaks new ground by allowing you to install and run Windows 10 on Apple Silicon.
They conclude it’s a great first release, but that it still has a ways to go.
I’m disappointed that there is no mention of 3D acceleration. Presumably an accelerated driver is available (otherwise Parallels couldn’t map the UI elements of Windows 10 onto the macOS desktop; that would be very hard to do if Windows 10 were drawing pixel-by-pixel on the CPU), but there is no GPU-Z screenshot and there are no benchmarks, so we can’t see how it all performs.
GPU-Z doesn’t support ARM yet (it claims the driver is blocked from loading), but I can say it actually runs /really/ well. Surprisingly, it’s much faster than CrossOver too.
I would say it’s somewhere between a 750 Ti and a 1050 Ti performance-wise; really good for translated graphics APIs with emulated x86 applications, virtualized under another OS, and all on a fanless laptop.
Still, we could have had a screenshot of the Device Manager properties window, some dxdiag screenshots, and some benchmarks. Does the GPU driver available to Windows 10 support DirectX and OpenGL, and at what levels? DirectCompute and OpenCL?
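For what it’s worth, a hedged how-to (assuming a stock Windows 10 ARM guest): dxdiag can dump the same information to a text file, which would show exactly which DirectX feature levels the Parallels display driver exposes, no screenshots needed.

```
:: Writes the full DirectX diagnostic report (GPU name, driver version,
:: DirectX feature levels) to a text file; use /x for XML output instead.
dxdiag /t dxdiag_report.txt
```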
I was also disappointed that this review didn’t benchmark ARM-native Windows apps. That’s where the power of Apple Silicon should really shine compared to other ARM Windows machines.
Moochman,
+1 for benchmarks
My takeaway is that Microsoft and Parallels should work out a deal so that end users aren’t forced to use the Insider edition of Windows. On the plus side, the Insider edition is free, and many Parallels users are likely to be tinkerers anyway, so maybe using beta software won’t bother them much.
For me the ability to run Windows is a precondition to buying one of these new Macs, so this is great news, even if I’m on the fence about sticking with the Mac platform or switching to Linux.
Moochman,
Personally I like to distinguish between the merits of the machine and the merits of the platform. Their ARM desktop computers are underpowered for me (I can’t emphasize this enough: I know different people have different needs, and it may be good enough for them). For a portable laptop, however, I wouldn’t mind having one at all. I’ve wanted an ARM laptop for a while, and Apple’s M1 laptops have exceptional ARM performance, which is a pro, but the issue is that I don’t want to fight with the machine just to be able to install Linux and use open source tools.
It makes sense to check your apps for compatibility. Some apps work completely under Rosetta emulation, but issues have been reported for others, like Skype, AutoCAD, Inkscape, and Kodi, for example…
https://isapplesiliconready.com/
To be honest, I wouldn’t be interested in using it as an emulator, since that defeats the efficiency advantages of using ARM. Until Linux support and native tooling improve, it’s probably not a viable option for my purposes. GCC doesn’t support it natively, for example.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96168
In the meantime I’m still interested in an ARM laptop, ideally with non-restricted & non-proprietary UEFI (probably pushing my luck).
Do you really need GCC? Is Clang not enough? (Honest question here.) To your point about “emulation”: it’s not emulation, it’s full paravirtualization with some translation of (DirectX) API calls, so performance under Windows or Linux should more or less be on par with native.
Unless you are referring to emulation of x86; in that case it’s just a matter of time until everything is available natively for ARM.
Moochman,
The honest answer is that Clang is probably enough; however, there exists a lot of software (maybe even a majority) whose developers use the GCC toolchain. So migrating to Clang could conceivably cause distracting problems: you’ve got build tools that assume GCC is available, and sometimes areas where the compilers themselves disagree.
https://stackoverflow.com/questions/15245326/porting-code-from-gcc-to-clang
And I’m not certain they support all the same intrinsics.
Side note: it could be interesting to remove GCC from my distro’s build environment to see how many things break.
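To make the compiler-disagreement point concrete, here’s a minimal sketch of one well-known divergence (a toy example, not from any real codebase): GCC accepts nested functions as a GNU C extension, while Clang rejects them outright, so this file builds with gcc but not with clang.

```c
#include <stdio.h>

/* GNU C extension: a function defined inside another function.
 * GCC compiles this; Clang reports "function definition is not allowed here". */
int sum_array(const int *xs, int n)
{
    int total = 0;

    void add(int x) { total += x; }  /* nested function */

    for (int i = 0; i < n; i++)
        add(xs[i]);
    return total;
}

int main(void)
{
    int xs[] = { 1, 2, 3 };
    printf("%d\n", sum_array(xs, 3));  /* prints 6 when built with GCC */
    return 0;
}
```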
I appreciate what you are saying. There’s no technical reason they can’t write DirectX/OpenGL/OpenCL/etc. drivers that communicate directly with the underlying hardware. In principle there’s no need to emulate an intermediate API, so the overhead for 3D acceleration should be minimal to non-existent.
That said, I’m not sure Parallels actually has direct access to the underlying hardware; I’m guessing there may be at least one layer of indirection on top of Apple’s driver.
Yes, I was referring to the software itself. If your software is x86 and you’re forced to run it through Rosetta 2 (or whatever the equivalent is on your platform), then you’ll lose battery life and performance accordingly. If your software isn’t running natively, that can offset the efficiency gains of ARM.
My Windows computer still has 32-bit x86 programs on it because the developers haven’t even made the switch to 64-bit yet. I’m kind of skeptical that many Windows developers are going to rush to release native software for ARM, which is a relatively unfavored architecture in the Windows world. With Linux things are different: by and large, a typical user can expect all of their software to run natively with no translation needed (when distros officially support the architecture, anyway). Anyway, the point was that it makes sense to research these things before making the switch.
Creating a 64-bit version can result in a small speed increase, but depending on how the original code was written it could be a lot more work than just a recompile. You then have the overhead of maintaining two builds unless you drop 32-bit support.
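As a toy illustration of why a 64-bit port can be more than a recompile (the pattern is classic; the code itself is hypothetical): a lot of 32-bit-era code assumed a pointer fits in an int, which silently truncates addresses on 64-bit targets.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int value = 42;

    /* The 32-bit-era habit was to stash a pointer in a plain int:
     *
     *     int handle = (int)&value;   // truncates the address on 64-bit
     *
     * The portable fix is uintptr_t, which is wide enough by definition. */
    uintptr_t handle = (uintptr_t)&value;

    printf("%d\n", *(int *)handle);  /* prints 42 */
    return 0;
}
```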
But this is all rather short-sighted: Microsoft cannot remove 32-bit support until there are no more users of it, so it ends up creating a lot of extra work for Microsoft, plus extra resource usage on every system, which has to keep two sets of libraries.
That’s why Apple dropped 32-bit support entirely, forcing developers to support 64-bit. If you don’t force them, a large number of developers will never bother, and that ends up holding everyone else back.
Linux is in much better shape: since most software is open source and 64-bit processors predate amd64 by many years (MIPS, Alpha, SPARC, PA-RISC, PPC, etc.), it’s very easy to make a pure 64-bit Linux system and remove all 32-bit support.
bert64,
Yea, sometimes there’s just no benefit to going 64-bit, and the application invariably gets larger. On Linux the normal practice is for all code to be converted regardless, but on Windows many developers never did.
Well, owing to DLL hell, I’ve seen the increasing use of application-specific DLLs anyway, where libraries are just bundled with the application and never actually shared. Sometimes entire frameworks are bundled with the application. Even the standard Microsoft VC runtime DLLs aren’t reliable, and I really couldn’t count the number of times I’ve had to manually install a specific version for an application to work. On Linux things are somewhat better thanks to the repositories, but modern package managers like Snap just bundle everything into the package, making packages enormous. This obviously isn’t ideal, but I’m getting off topic.
Yes, having source code makes all the difference in the world. It’s the reason one can run a Linux desktop natively on an ARM SBC without really even knowing it’s ARM. When RISC-V computers become practical, I predict native support will be pretty good there too, thanks to having the source code.
Interesting. It feels like déjà vu from when Apple switched to Intel. I switched to Macs in short order, but Apple never followed through on their promises, and the OS went down a very non-Linux/BSD path. I eventually ended up with a VM-based solution, but was very unhappy with the state of things. Right now, I think you’re limited to 16 GB of memory on M1 Macs? I don’t think I can live with that if I’m running a lot in a VM or two. But I’m thinking about it. Double that to 32 GB and keep the price where the 16 GB model is, and it becomes more tempting.
Even though it’s a solid no for me right now, I think in five years it will be a no-brainer.
I think this is an artificial limitation Apple has put on their machines. I don’t think there’s anything inherent in the M1 architecture that limits it to 16 GB; they would just need to put more RAM in the M1 package (or, I dunno, use industry-standard DDR4 DIMMs… but who am I kidding, this is Apple).
The123king,
That’s a possibility, but it falls short of desktop expectations here, so I don’t think we should just assume it’s an artificial limitation. For all we know, they may actually be hitting the thermal limits of this design, such that it’s difficult to upgrade the CPU/GPU/RAM without affecting the thermal envelope for everything else. It may not scale as well as discrete components do.
For me personally, 16 GB would be OK for a laptop, but I expect more in a desktop, especially since it’s shared video RAM. There’s also a very good reason to add more RAM: users appear to be experiencing heavy SSD swapping on big projects, and of course SSDs have a limited lifetime.
https://linustechtips.com/topic/1306757-m1-mac-owners-are-experiencing-extremely-high-ssd-writes-over-short-periods-of-time-likely-thanks-to-aggressive-swap/
It would be one thing if the SSD were user-replaceable, but Apple solders them in, so what are owners supposed to do after 2-3 years when the drive reaches 100% of its engineered lifespan? How expensive is that going to be to repair out of warranty? This is the same company that quoted $1,200 for a repair (translation: just buy a new one) that Louis Rossmann performed in a minute…
https://www.cbc.ca/news/thenational/complete-control-apple-accused-of-overpricing-restricting-device-repairs-1.4859099
A fair comment would acknowledge that, for the M1 high-usage stories that came out, it isn’t guaranteed that the tools were reading the SSD data correctly. That story seems to have fizzled out.
I ran these same CLI tools back when this story first came up. Many of the other values my M1 MacBook Air showed were clearly incorrect (32 hours of power-on time for my month-old laptop, which I was using for full work days 3-4 times per week, plus a variety of other off-hours tasks).
My SSD usage values seemed a bit high, but even at my rate it would be over 300 months before my SSD becomes an issue (and that’s not getting into the rated writes versus the actual number a modern, reliable SSD can handle, which is usually much higher). I bet a whole bunch of other components would be dead for most users long before the SSD.
I may check back in two or three months to see whether there are any new developments, updates to the tools, or new M1-specific tools, and check my MacBook again.
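For anyone wanting to reproduce this, the reports I saw were based on smartmontools; a rough sketch of the procedure (Homebrew install path assumed), with the caveat that support for Apple’s NVMe controllers was still immature at the time, so the output deserves skepticism:

```
# Install smartmontools, then dump the SSD's self-reported health data;
# "Data Units Written" is the field the wear stories were built on.
brew install smartmontools
smartctl -a /dev/disk0
```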
rem200020,
As far as independent testing and reporting goes, it’s a little problematic that Apple doesn’t publish the technical specs, as far as I can tell. The point being, we’re forced to take the disk’s self-reported values at face value.
That said, I don’t see a reason to assume the reported values are wrong; but supposing the SMART data is incorrect, then one thing is clear: Apple should fix the problem, provide accurate health-monitoring tools, and make a statement addressing the concerns. …this should be easy for everyone to agree on!
Actually, modern SSDs often have lower write-cycle ratings, because manufacturers keep trading longevity for capacity. That said, I don’t want to speculate, since we have very little technical information about Apple’s SSDs. Enterprise-grade drives tend to stick with fewer bits per cell and significantly over-provision spares, but one shouldn’t assume that one-size-fits-all consumer drives won’t be impacted by heavy swapping.
Ironically, because skeptics of the reports have put the SMART data itself into question, the onus is now on Apple to address BOTH the question of SSD lifetime AND the question of disk health monitoring accuracy. I don’t think it’s unreasonable for owners to know the remaining SSD longevity in terms of its engineered lifespan.
These are things a sensible person may want to know. Say, for example, you were buying a used Mac: it’s very important to know how much life is left in the SSD, especially on an Apple computer where the SSD is soldered in place.
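To put rough numbers on the lifespan question (all figures below are hypothetical, since Apple doesn’t publish endurance ratings), the arithmetic is simple:

```c
#include <stdio.h>

/* Back-of-envelope SSD endurance estimate. The rating and write rate
 * are illustrative assumptions, not Apple's actual specs. */
int main(void)
{
    double rated_tbw  = 150.0;  /* assumed endurance rating: 150 TB written */
    double tb_per_day = 0.1;    /* assumed workload: 100 GB/day incl. swap  */

    double days = rated_tbw / tb_per_day;
    printf("~%.0f days (~%.1f years) to the rated write limit\n",
           days, days / 365.0);
    return 0;
}
```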
I think the categorization of the memory limit is academic. It’s part of the M1 package and can’t be changed by a consumer. I have no doubt that future Apple chips will eventually have larger memory on the package; I have no idea when that might be. Apple has proven to be great at making very efficient use of memory on iOS, and they may try to focus on doing that with macOS instead of upping the available system memory. It’s also Apple, so I don’t trust that the future direction of macOS will align with my use case, which is very different from their core customer’s. If it benefits their bottom line and core customer to hinder the use of virtual machines, they’ll do it in a heartbeat.
At the moment one of their core user bases is developers. That probably won’t change, either, since Macs are developer machines for iOS. So chances are virtual machines will continue to be important features, at the very least for Linux and Docker.
At least before the M1, I believe there was some momentum of devs leaving Macs and returning to Windows. macOS will have to continue to support its real core buyers: iOS developers. Apple likely doesn’t care if it’s not the best web-dev, Rust-dev, or any other kind of dev machine. Macs are no longer critical to graphics or video work, and I’m not sure Apple cares about those users that much; they might do a real Mac Pro with ARM, priced in the $2-5k range, but that’s also not for me.
Bill Shooter of Bul,
The OS has a few tricks it can resort to, but the thing is, this is not a new problem; it has been widely studied and the solutions are well known. macOS is already doing them: memory compression and swapping.
https://arstechnica.com/gadgets/2013/10/os-x-10-9/17/#compressed-memory
http://www.usenix.org/legacy/publications/library/proceedings/usenix01/cfp/wilson/wilson_html/acc.html
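As a toy illustration of why compression is such an effective first resort before swapping (zlib stands in for whatever codec macOS actually uses, so treat this purely as a sketch):

```c
#include <stdio.h>
#include <string.h>
#include <zlib.h>   /* link with -lz */

/* An idle 4 KiB page often compresses to a small fraction of its size,
 * so the OS can keep it resident in compressed form instead of paging
 * it out to the SSD. The page contents here are fabricated. */
int main(void)
{
    unsigned char page[4096];
    unsigned char packed[8192];
    uLongf packed_len = sizeof(packed);

    memset(page, 0, sizeof(page));              /* mostly-zero "idle" page */
    memcpy(page, "some stale heap data", 20);   /* a little real content   */

    if (compress(packed, &packed_len, page, sizeof(page)) != Z_OK)
        return 1;

    printf("4096-byte page -> %lu bytes compressed\n",
           (unsigned long)packed_len);
    return 0;
}
```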
These are effective workarounds for low memory, but they inherently come at the cost of performance on large workloads. Thanks to very fast SSD storage this can be somewhat mitigated; however, there are downsides to using flash SSDs for swap, and since M1 owners have been reporting depleted SSD lifespan under large workloads, it seems at least plausible that swapping is already taking a toll on M1 Macs with too little RAM. This issue will likely not come to a head before the standard one-year warranty expires, but after a few years it could turn into a widespread problem, with many owners unable to fix it themselves because the SSDs are permanently soldered in place. That could mean an expensive repair bill for components that are typically user-serviceable.
It’s easy to see how this benefits Apple, but I do not see how withholding better options benefits customers. I agree that not all customers are going to need VMs (I’m not even sure how well those run on M1 Macs to begin with); I think a more common use case is Mac users who work with large studio files, which often require loads of RAM, especially at modern resolutions.
And in terms of the soldered SSDs in a desktop, not being user-serviceable is very hostile, IMHO. Even in the enterprise-class Mac Pro, Apple has locked the M.2 drives so that the owner cannot just buy media from a competitor, ensuring that owners are fully dependent on Apple. Of course we can all agree this makes sense for Apple’s bottom line, but I think they deserve to be called out for putting greed over consumer interests.
I don’t think they can do the same degree of memory compression and swapping on macOS, as they don’t have as much control over the applications. I.e., things like running in the background doing whatever the hell the application wants are still allowed on Macs.
Bill Shooter of Bul,
OK, but I kind of doubt there are many macOS users who want it to transform into iOS.
Regardless, it isn’t always the case that high RAM usage implies poorly written or managed software; there are genuinely heavy workloads that benefit from lots of RAM. I personally would choose more RAM and more cores if those were available (and didn’t suffer from throttling). I don’t believe Apple has published a roadmap for when these will be available, though, and many in the media wrongly predicted these options would arrive with the current Macs. We just have to wait and see when the upgrades come. Who knows if it will even be this year.