The era of macOS 10 is over, and we’re entering the next era of macOS’s life cycle. This is going to be a massive update, and aside from the transition to ARM, it can be summed up as “macOS: iOS Edition”: the entire graphical user interface has been redesigned to resemble iOS, including massive amounts of whitespace, touch-friendly design, and very white roundrect icons.
The new operating system brings the biggest redesign since the introduction of macOS 10, according to Apple. Big Sur borrows a number of elements from Apple’s iOS, including a customizable Control Center, where you can change brightness and toggle Do Not Disturb, and a new notification center, which groups related notifications together. Both interfaces are translucent, like their iOS counterparts.
A number of apps have received streamlined new redesigns, including Mail, Photos, Notes, and iWork. Apple has introduced a new search feature to Messages (which organizes results into links, photos, and matching terms), as well as inline replies for group chats, a new photo-selection interface, and Memoji stickers. There’s a new version of Maps for Mac that borrows features from the iOS app, including custom Guides, 360-degree location views, cycling and electric vehicle directions (which you can send directly to an iPhone), and indoor maps. Apple introduced a number of new Catalyst apps as well.
I’m not entirely sure about the look, especially since it feels very much like a touch UI that won’t work and feel as well when using a mouse or a trackpad – it looks like a 1:1 copy of the iPad Pro’s iPadOS user interface, for better or worse. Still, judging a GUI by mere screenshots and short videos is folly, so let’s reserve final judgment until we get to use it.
That being said, if you want to try the new GUI now, you can just load up any GNOME-based distribution and apply any of the countless iOS-inspired themes found on Gnome-Look.org.
An additional massively important feature is that the upcoming ARM-based Macs will be able to run iOS and iPadOS applications unmodified, as-is, much like how Chrome OS can run Android applications. This further underlines how, despite years of Apple and its advocates pooh-poohing Windows for combining cursor and touch-based interfaces, Apple is now pretty much past any reservations about combining the two, and has instead just opted to make everything touch-first, whether you use a mouse or not.
Lastly, macOS 11 will come with Rosetta 2, which will allow x86 applications to run unmodified on ARM-based Macs. That’s definitely good news for early adopters, but performance will obviously be a concern with emulation technology such as this.
I really wish people would stop comparing GNOME to macOS; it’s nothing like it. Anyone who does so probably hasn’t used a Mac in years – if they’ve ever actually used one at all, as opposed to just looking at them. I’m primarily a Linux user (and I can be happy running everything from KDE to Cinnamon to MATE to Xfce), but when I do use macOS – like right now – I really enjoy the look and feel of it. And it’s still pretty responsive on this 10-year-old Mac. GNOME, on the other hand – look, feel, everything about it – I absolutely loathe. And don’t get me started about the speed or the stupid, borked extensions system.
…it was a tongue-in-cheek remark. Calm down :).
Maybe it is, but I just don’t know where people get it, Thom. It’s like comparing salt to t-shirts, a really off-the-wall comparison.
It’s not as off-the-wall as you’re making it out to be. Both Apple and Gnome have the well-earned reputation of removing user choice from one version to the next, user preferences be damned.
Apple and Gnome are hardly unique in that respect. Microsoft has been doing it since the start of Windows 10. And smartphone systems seem to be getting ever more closed and locked down too.
The iOS-ification isn’t a big surprise; most of us were expecting to see convergence between the Mac and the iPad. However, neither article covered my biggest question: will new native applications be subjected to walled-garden restrictions? Emulation through Rosetta is one thing, but I suspect everyone will agree that emulation is strictly a transitional stop-gap measure rather than a long-term solution. Apple will probably drop support for Rosetta in a few years (Rosetta for the PPC→x86 transition shipped in 10.4 and was last supported in 10.6).
https://en.wikipedia.org/wiki/Rosetta_%28software%29
Is it possible for owners to install their own native software, or is native going to be limited to iOS-esque walled-garden channels? I don’t wish to make assumptions, but we need answers! If anyone has a definitive answer, please link it. Obviously a lot of people are hoping they won’t be restricted, but that’s not a given.
I don’t recall them mentioning anything about this at all, but honestly, they wouldn’t mention anything like that anyway right now, as the bad press would be huge.
The question of a walled garden is definitely still an open one, but it’s really less of a question than just waiting for confirmation. Apple has been consistently reducing support for unblessed software for many years now. They have made various unblessed pieces of software harder to use (forcing users to jump through hoops) or removed support for them entirely (3rd-party kernel components). It has been slow enough to allow developers to re-architect in response to the new restrictions, but also jarring enough that some software has already vanished from existence on modern versions of macOS.
I’m not in the pro/enthusiast audio space, but I have dabbled with audio on Macs a few times over the years. I’ve noticed a clear trend toward external DACs versus devices that live on the system bus and require a driver, and I’ve wondered if that was a direct response to Apple removing support for 3rd-party kernel-based drivers. I don’t doubt the security improvements that come from removing the ability to load 3rd-party kernel components, but Apple’s motives were likely more profit-motivated than motivated by the security of the user’s system.
All that to say, a walled garden is the clear future of the Apple ecosystem, on all fronts. I suspect that the virtualization that was demoed, assuming it’s available to the public at all, will be relegated to a paid Apple Developer subscription. PC hardware + Windows, while in comparison much more open than Apple-branded hardware, has also been trending more and more closed for the last decade or so. We’ve literally been witnessing the death of (relatively) open hardware architectures for the PC power user. This relatively open architecture is what brought me into the technology space from a young age. Being able to tinker with things is what allowed me to learn how systems work at a deep level. It concerns me that the removal of this openness will hamper similarly-inclined kids.
The bright spot in the tech landscape, as I see it, is the single-board computer market, which is more open and vibrant than ever before. The RPi community is just the beginning. There are a plethora of options out there covering both ARM and x86-64, some of them quite performant. I hope that the parents of the aforementioned kids will pay attention to their kids’ inclinations and pick up some SBCs to help their kids learn and grow their skills.
Hello,
macOS on Apple Silicon will be an open platform, with even Secure Boot being possible to turn off, and with custom kernel extensions being available.
See: https://developer.apple.com/documentation/apple_silicon/installing_a_custom_kernel_extension
never_released,
You basically echoed this same statement in another thread, so I’m not going to repeat the same discussion points here, but I would like to ask another question: where did you see/read that it would be possible to turn off Secure Boot on these ARM devices? I hope you’re right and that’s the case, but your comment is the first I’ve heard about it. Is there a leak or official statement that backs this up?
Hello,
It was detailed at the Platforms State of the Union session at WWDC, after the keynote.
never_released
Could you provide a link and time or something? Respectfully, it makes more sense for the onus to be on the person making a claim than others to have to find it.
@Alfman, you could just go watch the WWDC State of the Union session.
modmans2ndcoming,
Sorry, but no. When someone makes a claim without a specific source (and an approximate time, if it’s in a video), I am justified in asking for a specific source. I’ve already watched clips from the keynote, which is more than enough for me. I’m not going to undertake someone else’s responsibility in sourcing their claims; that’s unreasonable. I go through the trouble of sourcing my claims for everyone else’s benefit, whether it’s benchmarks, a specific time in a YouTube video, whatever, and I don’t think it’s asking too much to have everyone provide their own specific sources & quotes to back their own claims.
With the transition to a different architecture, it would be the obvious time for them to transition to a walled garden too – a walled garden for all new ARM-optimised apps, whilst retaining the ability to install your own native applications for x86. It’s a very clear demarcation and an obvious way to deal with multiple packages, and they get to move to a walled garden whilst still claiming that you can install third-party software.
No doubt Apple’s moves are towards tighter control of its products and generating more profit. They build better products compared to most out there, but be aware of their “walled garden” concept: Apple will lock in their customers, make it hard for them to move out, and they will not embrace open computing.
I think the trend to external DACs is a general trend for audio enthusiasts – I’m sitting in front of a Topping DX7s with a balanced hybrid tube amplifier stuck on top linked to a Linux box. There have been concerns with internal audio cards suffering from interference – possibly more at the ‘cheap and nasty’ end of the scale.
There may be some binary translator, just like what Qualcomm does for their Snapdragon chips running Windows.
But also, during the Rosetta days there was no App Store. So now Apple has a much better vector to ensure most developers are on the same page regarding fat binaries (which has been one of the main value propositions all the way back from the NeXTSTEP days).
Why do Apple want MacOS to look like ElementaryOS?
They want it to look like iOS. Whatever copycatty Linux distro happens to currently look like iOS or a Mac-iOS hybrid is irrelevant.
From the outset, I’m not surprised by the direction. I do not welcome it either. I understand that Apple is trying to corral users into a familiar interface, much like Microsoft did in reverse with the tablet PC and Windows XP. I cannot see a touch-centric interface working for productivity. Do they expect people to interface with machines like in the Iron Man movies? Even if that were a reality, it doesn’t appear productive. I’m getting older and more set in my ways… The current desktop interface wasn’t broken; there’s no need to fix it IMO. But hey, if it does take off, I see the stock prices going up for Windex, screen wipes, etc. LOL
I am much less enthused about the new macOS. It’s kinda ugly, there’s not nearly enough information density, and I have concerns about it becoming a walled garden.
Off topic, but by Christ The Verge is a piece of shit bit of web design isn’t it?
With NoScript turned on it doesn’t load images. Well naturally, most of the brain-dead end of the web doesn’t any more, so that doesn’t make it special.
Once you disable NoScript it takes something like ten seconds before it’s done loading.
And if you try to read it whilst it’s loading tat in the background… whoa there tiger!
It flicks about, from top to bottom to the middle, clearly alarmed by its own loading process; the ‘Beware our cookies’ banner, besides obviously taking up 2/5th of the screen, as is the vogue, only loads in after a few seconds; some of the images load straight away, others seem to wait for a script of a script of a script to call them out from Bel-Shamharoth’s lair.
I particularly liked the bit where five seconds in I could no longer move up and down with the arrow keys, only the mouse scroll wheel, until the loading finished, whereupon the arrow keys were allowed to work again. I was impressed; even The Sun website doesn’t do that.
What wonders of modern jQuery!
However badly Apple or Gnome or Microsoft cock up their next OS update, they will never be a tenth as aggressively counter-productive as mainstream web development.
(Firefox 77, Mint 19, Thinkpad x201, 6GiB RAM, modest but stable broadband, in case you were wondering.)
Don’t you remember the good ole days when The Verge’s JavaScript was so terrible it was included in benchmarks?
I absolutely hate JavaScript as a concept, not just when it’s misused. It takes a beautiful Document Object Model and literally craps all over it. The symptoms are apparent: back and forward buttons don’t work consistently, some clickable elements can’t be opened in a new tab, processing can keep going for an indefinite amount of time (instead of being a function of the size and content of the DOM tree, and always finite), and you can’t be sure when a page is done loading so you can go offline and read it in the deep parts of the London Underground – because some stupid web designer decided to have images load on demand during scrolling, with a shimmering effect, using some JavaScript code. Gizmodo is also guilty of this.
Makes you appreciate the concept behind Flash, where all the scripty-script nonsense was confined to certain nodes in the DOM. Too bad the implementation sucked and the whole thing was closed-source and maintained by a company that doesn’t believe its free-of-charge “Reader” software should be any good.
For me it takes 7.5 seconds to fully load, but it was readable instantly*; I could only tell it was still loading by looking at the network window while I was scrolling up and down the page. No funny jumping involved either.
So while your story was fun to read, I don’t think you represent the majority.
*) Readable in less than a second, using incognito mode to disable all extensions, with the cache cleared and disabled. Arrow keys never stopped working.
Troels,
For me under Firefox it took 2.5s to fully load with adblocking/privacy-blocking extensions. Fully responsive and no jerking around.
After disabling the extensions it takes a full 10s to load, and I experience the jumping around described by M.Onty. (Ghostery and uBlock blocked a couple dozen 3rd-party trackers/advertisers; I don’t use NoScript, as I found it to be too high-maintenance when I tried it years ago.)
So ultimately, everything is fine after all with Mac on ARM. Not only will x86 emulation and universal app support be built in just like with the previous transition, but there is actually some pre-optimization that happens to boost performance. There is no sign of a sudden draconian requirement for apps to be installed from the App Store as Thom predicted, unless you count the additional support of iOS apps as such. Even emulation of x86 OSes will be getting some extra support to ensure developers don’t run into issues with Docker and such. Still no word on Windows support but my guess is Parallels is working on it. Everything seems like it’s the rosiest possible outcome to be hoped for – and I for one am genuinely curious/potentially excited to see how the new ARM Macbooks turn out.
You are wrong about Parallels – you can’t effectively emulate x86 on ARM without hardware assistance, which Intel does not allow. So you will have to use Windows on ARM in a VM instead of x86 Windows, which means a lot of compatibility problems with Windows software, as Windows on ARM can’t run 64-bit Intel applications. Alternatively, you will have to put up with software emulation of Intel CPUs for virtual machines, which will be slow and power-hungry.
How does the support Apple talked about for running x86 Linux instances in Docker work then? Can you point to sources that explain which hardware assistance you are referring to and in what way it’s not allowed by Intel?
I don’t expect native performance by a long shot, of course, but for playing the occasional older game it would hopefully be enough.
First of all, do some research instead of spreading misinformation. There is a list of limitations for Rosetta 2 on the Apple website. Existing x86 virtual machines are not supported, which was obvious, as they can’t be supported with only binary translation before execution. x86 Docker is supported via software CPU emulation, which is slow.
I do not care what Apple support told you. Intel doesn’t allow adding hardware support for emulating x86 instructions. When Microsoft and Qualcomm announced that intention for ARM, Intel threatened to sue them. CPUs are the core of Intel’s business, and they will not allow x86 instruction emulation to be assisted by hardware. Software emulation is allowed; this is what Apple did.
Do some research – have you done any yourself? If so please post some links to sources, otherwise it’s just your word against – wait I never claimed to know anything, all I wanted was for you to point me to some concrete information.
As for the information you did reference (Rosetta 2 limitations) – I know this and I never claimed that the x86 version of Parallels would run natively. What I was saying is that I expect Parallels to bring out an ARM-native version, which is capable of using emulation to run x86 virtual machines – as Apple has already demo’ed with Docker. Yes of course it’s “emulation” (or rather, translation) and not running natively – but that doesn’t mean it’ll be unusably slow.
Can you provide a link to the Microsoft/Qualcomm/Intel story? I’m not questioning it, I just want to see what you’re talking about.
This drives me to Linux when my current Mac needs to be retired.
Rosetta 2 is not an emulator; it’s a binary transpiler. Assuming a 20-40% CPU performance increase over the current iPad Pro, performance will be superior to the current MacBook Pro for the current generation of apps.
kristoph,
The same is true of QEMU and Microsoft’s x86-on-ARM compatibility as well. Most of the industry simply calls it “emulation” anyway, even though it’s technically recompiling the binary code to run natively on the new processor. QEMU (minus KVM) was originally based on Bochs, which was a real emulator in the strict sense of emulating the CPU, but QEMU has evolved to compile down to native code. VMware may have done this even earlier, IIRC.
All of these technologies are slower than code that is natively compiled to the target. This is why hardware assisted virtualization has supplanted it in practice except when cross architectural compatibility is needed.
While I wish they had run more tests to pin down the exact cost of overhead (missed opportunity there) here are some benchmarks from windows 10 side of things…
https://www.windowslatest.com/2018/03/27/windows-10-on-arm-benchmarked-both-natively-and-with-x86-emulation/
And some microbenchmarks…
https://megayuchi.com/2019/12/08/surface-pro-x-benchmark-from-the-programmers-point-of-view/
As you can see, emulation/recompiling does have a performance impact and sometimes it can be severe.
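To make the distinction concrete, here’s a toy dynamic binary translator in Python – the mini guest “ISA”, the function names, and the block structure are all invented for illustration, and this is nothing like QEMU’s actual TCG internals. It translates each guest basic block into host-native closures the first time the block is hit and caches the result, which is where both the speedup over pure interpretation and the residual overhead come from:

```python
# Toy dynamic binary translator: a made-up two-instruction "guest ISA"
# is translated, one basic block at a time, into host-native Python
# closures. Blocks are cached so the translation cost is paid only on
# first execution. (Illustrative sketch only -- real translators work
# on real machine code and emit host machine code, not closures.)

def translate_block(block):
    """Compile one guest basic block into a list of host callables."""
    ops = []
    for insn, arg in block:
        if insn == "ADD":      # guest semantics: acc += arg
            ops.append(lambda st, a=arg: st.__setitem__("acc", st["acc"] + a))
        elif insn == "MUL":    # guest semantics: acc *= arg
            ops.append(lambda st, a=arg: st.__setitem__("acc", st["acc"] * a))
        else:
            raise ValueError(f"unknown guest instruction {insn!r}")
    return ops

def run(program, state):
    """Execute a guest program (a list of basic blocks) with a block cache."""
    cache = {}
    for i, block in enumerate(program):
        if i not in cache:          # translation overhead: first hit only
            cache[i] = translate_block(block)
        for op in cache[i]:         # cached blocks run as plain host code
            op(state)
    return state["acc"]

guest = [[("ADD", 2), ("MUL", 3)],  # block 0: (0 + 2) * 3 = 6
         [("ADD", 4)]]              # block 1: 6 + 4 = 10
print(run(guest, {"acc": 0}))       # -> 10
```

Even in this toy, the translated closures carry per-instruction dispatch overhead that a natively compiled program wouldn’t have – the same reason translated x86 code won’t match a native ARM build.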
No. This is NOT emulation. By definition, emulation is when you map one architecture’s calls to another architecture’s calls at runtime.
Rosetta literally transpiles the x86 binary into an ARM binary when the x86 binary is installed. It can even build a fat binary from an x86 binary. It does the same kind of analysis on the binary that a compiler does on the source, so there is no loss of performance from a lack of compilation-level optimization.
The folks who think source is this magical thing that a compiler must have for optimization really lack an understanding of the state of the art here.
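The install-time vs. runtime distinction can be sketched in toy form – everything here is invented for illustration (a counter stands in for translation cost, and a trivial “ADD” guest program stands in for a real binary), not Rosetta’s actual pipeline. Translating the whole guest program up front means execution itself never pays for translation:

```python
# Toy contrast of install-time (AOT) vs. runtime (JIT) binary translation.
# A counter stands in for translation cost: under AOT the whole guest
# program is translated before the first run, so executing it -- any
# number of times -- triggers no further translation.

translations = {"count": 0}

def translate(arg):
    """Stand-in translator: turn a guest 'ADD arg' into host code."""
    translations["count"] += 1          # tally translation work
    return lambda x: x + arg

def aot_translate(guest):
    """Install-time step: translate the entire guest binary up front."""
    return [translate(arg) for arg in guest]

def execute(host_ops, x):
    """Pure execution: runs only already-translated host code."""
    for op in host_ops:
        x = op(x)
    return x

guest = [1, 2, 3]                # guest program: ADD 1; ADD 2; ADD 3
host = aot_translate(guest)      # cost paid once, at "install" time
assert translations["count"] == 3
print(execute(host, 0))          # -> 6; no translation during execution
print(execute(host, 10))         # -> 16; re-runs are translation-free
assert translations["count"] == 3
```

A JIT pays the same total translation cost but interleaves it with the first execution, which is one source of the warm-up overhead people measure under runtime translators.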
kristoph,
I know what you are saying; however, it’s very common for the industry to call it emulation anyway.
Yes, this is how QEMU works as well. You can look into QEMU’s user mode emulation, which works at the process level rather than the OS level.
I expect you’ll find this paper very interesting.
https://pdfs.semanticscholar.org/57f7/f94d5ec8f465b8f8753aaf63fbb488d96f9d.pdf
I’m highlighting QEMU not necessarily because it’s the best implementation of this technique, but rather because it’s an open implementation where anyone can look at the details. This paper talks about combining QEMU with LLVM’s code generator. In theory LLVM’s optimizer might outperform QEMU’s built-in one, but it isn’t always so straightforward.
I’m very curious how well Rosetta will perform compared to other code-translation implementations, but its approach is not novel.
kristoph,
Ah, I think I focused on the wrong part of what you were trying to emphasize… you’re suggesting that the translation happens before execution rather than during. That may well be the case.
Well, that’s the thing: translated code still isn’t going to perform as well as native, because not all the overhead comes from invoking the JIT compiler. Anyway, it’s an interesting point. Maybe I can come up with a way to measure QEMU’s JIT invocations and overhead.
I’m excited by the prospect of non-Intel hardware, but the Windows 10 mobile UI edition of MacOS… eesh.
Still waiting for the new “Mac” that’s just two iPads connected by a custom hinge though.
With all the problems that Intel has been having in their chips for the last ten years, it will bring a sigh of relief to my household as we switch over our Apple computers (the whole device) to versions with Apple CPUs (SoCs).
What particular problems did your household have with Intel CPUs?
I am astonished by your pretentious commentary. Were you talking about exploiting speculative engine flaws to access protected data? Many ARM CPUs had the same problems too. Anyway, it is unlikely those problems had any effect on a household.
Shouldn’t it be XI and not 11? (I mean, especially in deference to Xi’s country where Apple manufactures most of their stuff.)