Since January 2016 (and maybe before), there’s been talk that Microsoft was working on bringing x86 emulation to ARM processors. Sources of mine are now saying that this capability is coming to Windows 10, though not until “Redstone 3” in the Fall of 2017.
Here’s why this matters: Microsoft officials continue to claim that Continuum — the capability that allows Windows 10 Mobile devices to connect to external displays and keyboards — is going to be key for the company, its partners and its customers. There’s been one very big limitation to Continuum so far, however: it only allows users to run Universal Windows Platform (UWP) apps, not full-fledged x86 apps.
What if an ARM64-based device could run x86 apps via emulation, the same way that the WOW64 (Windows 32-bit on Windows 64-bit) layer allows 32-bit apps to run on 64-bit Windows? That would make Windows 10 Mobile (which, as of now, supports ARM only) and Continuum a lot more interesting, especially to business users who need certain Win32 line-of-business apps.
Quite compelling, to say the least. I’ve always considered the smartphone that turns into a full-fledged desktop when docked the holy grail of the mobile computing world, and I’m excited that Microsoft is still working on it.
They’ll get it right eventually. Infinite monkeys and all that.
Wouldn’t emulated performance be markedly slower than native performance? This would have to be full-blown CPU emulation, wouldn’t it? You won’t be docking your phone to edit videos at your desk or anything, if that’s the case.
The performance would be worse, yes. But mobile devices today would probably still be able to get emulated x86 performance similar to (total guess) 2010-ish desktops.
Having used Qemu a lot on desktop though, I can say that most programs are IO bound anyway, and if you have full control of the stack many of the API features could also be shimmed to make them run at native speeds underneath.
It seems with WSL, and now this, Microsoft is finally getting something from the microkernel-esque features of NT.
“2010-ish desktop” performance for something released in “Fall 2017”, and so effectively run by people in 2018, is not that compelling: useful for small things, not serious work.
I have no idea if the performance metric is sound, but 2010 was when the Core i* processors were introduced, and they can run plenty of serious workloads; I’d be plenty impressed with that level of performance on a phone.
Office 2003 ran pretty decently, and it was more than enough for creating Word documents, Excel sheets, and PowerPoint presentations, on computers that were merely fast enough to browse the web.
What the heck are you expecting from a phone, after all? Seriously…
Who cares? We’re at Office 2016 now. And besides, Office is one of the few apps that should never need CPU emulation. They’d be fools not to have kept the ARM portion of Office’s codebase, even though the Surface RT flopped.
The ARM version couldn’t run x86-only things like add-ons, plugins and macros. It was also only Word/Excel/PowerPoint, and later Outlook.
Office had better run x86-emulated or businesses would simply ignore it. Maybe the “Home” version could be native ARM to be easy, safe and power-efficient?
The main problem remains: it’s probably nice to attach a screen and a keyboard to your smartphone and do stuff on it like it was a real computer, yet it just isn’t one. At all, even though the processor running it is powerful enough.
Just look at how computers (and smartphones) have gotten so powerful in the last decade, yet they crawl like slugs under non-optimized software loads. And that’s not even speaking of hardware-accelerated video decoding and 3D rendering.
If all you can handle for your business is a smartphone, not even an ultrabook, then you’d better ask yourself about the logic behind your job. I mean, if providing x86 emulation on ARM is harder than running Office 365 in a browser, what’s the point?
Just like I said, if you need a more serious platform to operate Visual Studio, Photoshop or whatever demanding Win32 software, what are you doing with a smartphone in the first place? Seriously…
https://blog.malwarebytes.com/cybercrime/2015/10/leaving-laptops-in-…
Read this. Then look at the travel advice from different countries: you will find many where the advice includes “do not leave any electronic devices in your hotel room.”
Next, if you are a business person travelling to one of those countries, your wifi/internet access may not be dependable or secure, so Office 365 in a browser is not even practical.
So, depending on where you are going, luggage and security limitations might restrict you to only a mobile phone. A mini projector plus projection keyboard plus mobile phone is ultra-compact.
So if you are running Visual Studio, Photoshop or whatever demanding Win32 software on a phone, it’s most likely because you don’t have any other practical choice.
You also have to remember that from day one the Raspberry Pi running Linux had x86 emulation available via QEMU; not the best performing, but it was there.
Also, if you look around, you will find people running full copies of Windows on ARM devices using emulation; not the best-performing beast. User-mode emulation is a lot faster than full-system emulation. You can compare this under Linux by running a full x86 Linux in QEMU on an ARM platform, and then using QEMU user mode to run the same application on the same platform.
Please note that binfmt_misc under Linux means user mode can be 100 percent transparent: an x86 application behaves exactly as if it were native on the ARM platform, other than being slower; it is still faster than running a full system emulator.
https://wiki.debian.org/Multiarch/TheCaseForMultiarch
Basically, the Linux world has been doing mixed instruction sets for a fair while, so a lot of the arguments against it really don’t stack up.
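For anyone curious how transparent this is, below is a sketch of the registration that makes it work. Assumptions: root access, qemu-user installed at /usr/bin/qemu-x86_64, and magic/mask bytes mirroring the rule Debian’s qemu-binfmt-conf.sh installs for x86-64 ELF binaries (verify against your own distro’s copy):

```python
# Sketch: register an x86-64 interpreter with Linux binfmt_misc so that
# running an x86-64 ELF binary on an ARM host transparently invokes
# qemu-x86_64. Rule format is :name:type:offset:magic:mask:interpreter:flags
# The magic/mask values follow Debian's qemu-binfmt-conf.sh; double-check.

MAGIC = (r"\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00"
         r"\x00\x00\x00\x00\x02\x00\x3e\x00")
MASK = (r"\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff"
        r"\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff")

rule = ":qemu-x86_64:M::{}:{}:/usr/bin/qemu-x86_64:".format(MAGIC, MASK)

with open("/proc/sys/fs/binfmt_misc/register", "w") as f:  # needs root
    f.write(rule)
```

After that, executing an x86-64 binary directly just works; the kernel hands it to the interpreter for you.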
Actually, computers reached a point about 5 years ago where they had enough CPU, GPU, RAM and IO (SSD, USB3, faster internet) and where power management got so good that almost everything is fast now. There isn’t a program on my computer that takes more than a few seconds to load and most are instant. Even booting/shutting down the OS is done in seconds. After that the improvements to computers have mostly been in energy efficiency. The only truly slow thing I do is install software and even that has improved quite a lot with store-downloads and better patching.
I could actually see a future where you would carry all your programs and (bigger) data with you on a device that would work everywhere.
But I could also see a future beyond that where your programs are all running on servers or websites, your data is in cloud-storage, your settings are synched through your profile(s) and your input and output are not limited by physical devices (keyboard/mouse/touch/screen) but replaced with voice/thought/projection.
Bring on the future!
And you want this? What kind of Kool-Aid are you drinking? You want to give up control of everything, and every little bit of your data? You want to remove every little bit of control over your own machine?
You can keep that future, thanks. I’ll use Cloud services when they are convenient. No way in hell am I putting my sensitive data on one. Forget it!
I am drinking the Kool-Aid of convenience, combined with the meal of realism. Lots of my sensitive data (taxes, health, email, company passwords) is already stored on servers that I don’t control. Some are government, some are my own company’s, and some are cloud services. This has been true for the last decade at least!
It used to take me 4 to 8 hours to get a machine set up in such a way that I had all my software, tools, settings and data. Now it takes me 30 minutes to get it perfect, or just a few minutes (install Chrome, set up profile) to get most of it done. That is a major advantage, and it comes at the cost of trusting some companies that I have to trust anyway.
avgalen,
While I understand your point, I would put forward that the only reason it’s an advantage is that the companies providing internet “cloud” services stopped working on making normal software easier to use, because cloud services were more profitable for them.
In other words, it’s not easier because you’ve given up control of your data to a proprietary cloud service, it’s easier because vendors didn’t invest in ways to make it easier for you to have your own personal cloud that you retain control of.
One company I think was going in the right direction technologically was Sun Microsystems. I could go to any unix terminal at the school, log in and have all my customizations, files, and software right there. These were enterprise features that never made it into the home, but it would have been darned useful to have this be a default feature for PCs. Java web start had similar benefits and made software installation trivial.
It would have been awesome for home PCs to evolve in that direction; technically there’s no reason they could not. But ultimately companies promote technology based on their profits, and cloud services that they control and can sell subscriptions for are better for them. Users willingly give up control of their data because the lack of privacy is now “normal”, and that is why this business model of taking user data will continue.
Actually, that did make it into the home, but it required Cloud Services. Your school could never scale this to everyone and every device!
If my phone breaks now, I can get a new one within an hour, start it, connect to my account and restore a backup.
In the past, when someone’s computer broke, I spent days trying to find all their data, reinstalling, updating, reconfiguring, etc.
Doing this at home with your own servers requires a lot of knowledge which most people simply don’t have. The internet and Cloud Services fixed this.
avgalen,
This wraps back to what I was saying, vendors could have made personal cloud services trivial to setup and easy to use. There’s no technological reason we can’t have that or that consumers wouldn’t embrace it. It’s the vendors who decided that they don’t want users to have more than superficial control.
I’m not trying to change your mind, I know you don’t care about this for yourself, however you shouldn’t pretend this doesn’t harm the interests of other people who do actually value their own privacy and control.
I still see professionals running ancient Windows XP and Windows 7 machines all the time. These are bankers and the like – pretty serious to me. For most classes of users, 2010-level hardware is about as much as they’ll ever really need (with some additional RAM and faster SSD storage, anyway). The ascent of smartphones and tablets as replacements for traditional computers even proves that point a bit, too, IMHO.
https://eltechs.com/product/exagear-desktop/
There is emulated and then there is emulated.
It’s like Bochs vs. QEMU vs. something like Java/.NET JIT disk caching.
Bochs is full emulation, so it is always slow.
QEMU does dynamic translation on the fly. Performance is not always that great, as its translation engine goes through an intermediate representation before producing native code. With the number of CPU types QEMU supports this is the only practical way, but it comes at a price.
Now, what Microsoft could be building is a direct x86-to-ARM64 translator without those extra overheads; this approach, as ExaGear shows, appears to be a lot faster.
QEMU user mode for Linux, BSD and OS X is not that optimised, which is why all the libraries used by the application have to be the same architecture as the application.
It will be interesting to see how Microsoft has done it. There is no reason why native library code could not be used from an x86 application running on ARM64, as long as the call-out and call-back mechanisms work.
So the questions will be:
1) What percentage of the running code will need translation from x86?
2) How is the JIT designed: does it cache translations, and how far does it optimise?
http://www.javaworld.com/article/2076593/performance-tests-show-jav…
Do remember that a CPU instruction set is just another bytecode. So in theory, with a really well-optimised translation method, the performance difference between native and translated code, other than the start-up overhead, could be close to zero.
darknexus: QEMU is not doing full-blown CPU emulation, and a lot of game-system emulators are not either, simply because the overhead of that is too huge to consider. Dynamically translating from one CPU’s bytecode to another is practical, performance-wise.
Many people don’t know that Java bytecode was in fact implemented as a native instruction set in some CPUs.
The x86 instruction set is a bytecode; not as friendly a bytecode to translate to some CPU types as Java or .NET bytecode, but it is basically the same problem. x86 binaries dynamically translated to ARM64 really should not perform worse than .NET or Java; if they do, then there is something in the dynamic translation that needs work.
This raises the question: are .NET/Java applications fast enough? If so, then long-term, x86 dynamic translation should not be a problem.
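To make the caching question in 2) concrete, here is a toy sketch (nothing to do with Microsoft’s actual implementation) of a dynamic translator for an invented guest bytecode: each basic block is translated once into a host-side closure and cached, so the translation cost is a one-time start-up overhead while steady-state execution runs from the cache.

```python
# Toy dynamic binary translator for an invented guest bytecode.
# Illustrates the idea only: translate each basic block once, cache it,
# and thereafter run the cached translation (the "JIT with caching" case).

# Guest program: sum the integers 10..1 into r1.
PROGRAM = [
    ("li", 0, 10),   # r0 = 10 (counter)
    ("li", 1, 0),    # r1 = 0  (accumulator)
    ("add", 1, 0),   # r1 += r0
    ("dec", 0),      # r0 -= 1
    ("jnz", 0, 2),   # if r0 != 0 goto 2
    ("halt",),
]

def translate_block(start_pc):
    """Decode one basic block and return a host-side closure for it.
    This decode/translate step is the start-up overhead; it runs once."""
    ops, pc = [], start_pc
    while True:
        ins = PROGRAM[pc]
        ops.append(ins)
        pc += 1
        if ins[0] in ("jnz", "halt"):
            break
    fallthrough = pc

    def block(regs):  # returns the next guest pc, or None to stop
        for ins in ops:
            if ins[0] == "li":
                regs[ins[1]] = ins[2]
            elif ins[0] == "add":
                regs[ins[1]] += regs[ins[2]]
            elif ins[0] == "dec":
                regs[ins[1]] -= 1
            elif ins[0] == "jnz":
                return ins[2] if regs[ins[1]] != 0 else fallthrough
            elif ins[0] == "halt":
                return None
        return fallthrough
    return block

cache = {}            # translation cache: guest pc -> translated block
regs = [0, 0, 0, 0]
pc = 0
while pc is not None:
    if pc not in cache:           # translate on first execution only
        cache[pc] = translate_block(pc)
    pc = cache[pc](regs)

print("r1 =", regs[1])            # prints "r1 = 55"
```

A real translator emits native ARM64 instructions rather than Python closures, but the cache-per-block structure, and the choice of how hard to optimise each block, are exactly the trade-offs questions 1) and 2) are getting at.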
I don’t think it’s necessary. Maybe this could be done similarly to Apple’s fat-binary approach: the app would be x86 or x64 code, but the system would generate thunks to call the native library where one is available, as the native binary would contain code for several architectures. AArch32/64 and x86/x64 are not that different from a binary point of view at the application level. For example, you could have an x64 GTK-based app while GTK itself is a native AArch64 binary; it would be required to preserve the ABI-level layout, though. If the application calls fopen, malloc or other functions, it doesn’t matter whether the real call lands in the emulated binary or in a native implementation via some thunk. The only problem I can see is converting calling conventions; that would probably require some rule script describing which symbol should be called which way.
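To illustrate the thunk idea, here is a minimal sketch; the guest register layout and thunk table are invented, with Python’s ctypes standing in for the emulator’s call-out machinery:

```python
# Sketch of calling a native host library from "emulated" code.
# The guest ABI here (args in guest_regs, strings in guest_memory) is
# hypothetical; ctypes stands in for the emulator's call-out mechanism.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))  # host's native libc

def thunk_strlen(guest_regs, guest_memory):
    # hypothetical guest calling convention: arg 0 in guest_regs[0] is a
    # pointer into guest memory; the result is returned in guest_regs[0]
    addr = guest_regs[0]
    end = guest_memory.index(0, addr)          # find the NUL terminator
    guest_regs[0] = libc.strlen(bytes(guest_memory[addr:end]))

# Table the translator consults when the guest calls an imported symbol,
# instead of translating the library's own code instruction by instruction.
NATIVE_THUNKS = {"strlen": thunk_strlen}

guest_memory = bytearray(b"hello from the guest\x00")
guest_regs = [0, 0, 0, 0]                      # guest "calls" strlen(0)
NATIVE_THUNKS["strlen"](guest_regs, guest_memory)
print(guest_regs[0])                           # 20
```

The marshalling in the thunk is exactly the calling-convention conversion mentioned above; the per-symbol table plays the role of the rule script.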
Yes, color me very sceptical.
I believe ARM processors nowadays are at Intel’s Atom-level performance, as seen in benchmarks for cellphones running Atom. Drop that speed only by 50% for emulation (and that would be a great feat), and you will be running at the speed of a very slow PC.
What are those PC apps that you cannot do without, will not be compiled for ARM, but would run comfortably on a half-speed Atom machine?
You’re wrong. Even emulated at 50% it would be faster than Atom.
Apple A7 vs z8500 (dual core vs quad)
https://browser.primatelabs.com/v4/cpu/compare/1120937?baseline=1119…
Cortex-a73 vs z8500 (1.84 vs 2.24 GHz)
https://browser.primatelabs.com/v4/cpu/compare/1120487?baseline=1119…
Also, 50% is a very low number. ExaGear does much better at x86 emulation.
Unfortunately, zero ARM devices running Windows 10 have Apple CPUs in them. The most common are Qualcomm’s, and those seem to have about half the performance or less: http://indiatoday.intoday.in/technology/story/in-googles-own-benchm…
So, for anything CPU-bound, this might be really painful.
viton,
Thank you for the benchmarks. Your link has an important omission: the “Atom x5-Z8500” is not a 2.24GHz processor. That’s just the burst speed; the base frequency is only 1.44GHz, and that will be the dominant frequency in a CPU-intensive benchmark.
http://ark.intel.com/products/85474/Intel-Atom-x5-Z8500-Processor-2…
While it doesn’t necessarily change your conclusion, I think it’s worth pointing out, because it is very misleading for the link to quote the CPU as a 2.24GHz core, which it is not.
I’ve used Atom-based computers, and they don’t use much power, but to be honest I’d never want to have to use one for any real work, because it’s a lousy performer even on its own turf.
http://www.cpubenchmark.net/singleThread.html
500 is a pathetic single-threaded rating for the “Intel Atom x5-Z8500”; my mid-range Intel E3110 machine from 2008 has a single-threaded score of 1263. The Atom has four cores, which is better than my two, but most business applications aren’t highly multithreaded, so having more than two cores often doesn’t help anyway.
While it remains to be seen how well the x86 emulation will perform in practice, I’d say the target isn’t high performance apps anyways. Just being able to run x86 software at all is useful for many classes of business apps that just sit around being idle most of the time. I wouldn’t want to run a multiuser server under ARM emulation, but for a client GUI, sure why not.
OK, it looks like I’m wrong.
Those benchmarks show the ARMs running roughly twice as fast as the Atom. So with a 50% emulation tax, they might be able to run x86 Windows software at the speed of a quad-core Atom, which will not make you feel the wind in your hair, but can certainly be a useful machine.
Surely the software makers should be fixing the apps, i.e. recompiling them for ARM?
Seems totally bonkers to fix this problem .. with new hardware development.
The trend of fixing any software problem with bigger, faster, newer hardware is bonkers. Slow word processor? Get a new GHz CPU with gigabytes of RAM! Can’t run your app on ARM .. get a new-gen ARM with hw emulation for x86 .. I mean, x86 of all things .. far from what anyone would call a sane, sensible instruction set with a coherent overarching design philosophy…
As an example:
* neural networks with Python .. working on a laptop, a hyperscale cloud VM .. and a £5 / $4 Raspberry Pi Zero .. no need for $$$$ GPUs.
* Trying moderately hard to get an order-of-magnitude speedup by fixing the software … bringing a 5-day calculation down to 4 hours. http://makeyourowntextminingtoolkit.blogspot.co.uk/2016/11/does-num…
It’s not always that simple. A _lot_ of software is built on top of proprietary middleware, which may or may not be properly maintained or portable in the slightest.
The cost for developers to update would be absolutely immense, and given the small market of ARM-Windows, it is probably not worth it for most developers.
It also doesn’t help that, for a very long time, Windows has meant x86, so the use of assembly or intrinsics was not a limiter on portability between different versions of Windows.
That’s one reason I’m very glad that, when I got fed up with Windows XP back around 2003, I quit gaming cold-turkey (half-way through Dungeon Siege, actually), only returning to closed-source code with the advent of the first Humble Indie Bundle and GOG’s sale on Psychonauts around the same time.
The habitual reliance on open-source Linux applications meant that, when the OpenPandora came out (providing an Xfce desktop on ARMv7-based hardware), I was able to translate pretty much everything except the non-emulated, non-rewritten games over without a hiccup.
Amen.
I’ve never trusted myself with C or C++ (I prefer PyQt or PyGTK because they won’t segfault if an exception gets raised within an event handler) but, ever since Rust came out, I’ve been poking around at writing my command-line utilities in Rust while I wait with bated breath for Qt bindings and a suitably close approximation of the Django stuff I use (e.g. an ORM with schema migration, for prototyping with real, dogfooded data).
(Admittedly, only half because it would reduce resource requirements. Rust is also great for reconciling Python-like comforts with a strong type system.)
I ran ARM’s own x86 interpreter on my ARM2-powered Archimedes in about ’86. It played at being an IBM PC well enough to do PC things like word processing, at about the speed of a 4.77MHz PC of the day. Later on, DEC used dynamic translation to run x86 PC apps on their Windows NT Alpha boxes. My Transmeta Crusoe-powered Fujitsu laptop didn’t set the world on fire for performance, but it worked adequately and the battery life was excellent. My Nexus 9 tablet also used dynamic recompilation, but it had a hardware decoder to take the pain out of the “first execution” pass that had previously been the main problem with dynamic recompilation systems.
This is all just by way of saying that it isn’t the technical challenges that make this a dumb idea.
What has always made the phone-docks-to-be-desktop-PC idea (“continuum” or whatever Ubuntu call it) a dumb one is that the dock+screen+keyboard component could just as well have an actual computer chip in it (well, it already has several) at essentially zero incremental cost. The thing that Microsoft already sells as the dock for their first demonstrations is the same size and costs about the same as an Intel NUC. Just run the “desktop” in there natively. By all means, mount your phone’s file system when it’s in range, so that you can keep working on that presentation or whatever, but why give up the ability to use your phone to receive phone calls? Or force your PC to shut down its secure network connections when you want to take your phone with you to lunch?
Just being able to do something doesn’t mean that you should.
Convergence is what Ubuntu calls it.
Then MaruOS takes the dual-OS approach.
http://maruos.com/
areilly: what keeps this idea from going away is a practical problem.
https://blog.malwarebytes.com/cybercrime/2015/10/leaving-laptops-in-…
There are so many countries around the world where you cannot leave your laptop in the hotel room, as your electronic devices are only secure while you are with them. This makes data stored on the phone and processed on the phone more practical in that security environment.
http://www.wi-fi.org/discover-wi-fi/wi-fi-certified-wigig
Also, you have to remember the WiGig standard runs at 60GHz, so yes, you can have your phone docked to a screen and keyboard while it is at your ear for a call, all without cables.
There is also the question of a business running Virtual Desktop Infrastructure (VDI): the stuff mentioned here would be running on a server, and the phone could hold a much more complex login method than just a password. Yes, large and complex encryption keys.
You would call these generation-2 thin clients that don’t suffer from the guessed or observed password problem. You also have to ask the security question about machines left logged in with open access because a person forgot to log out; here, they would also have had to forget to take their phone with them.
So running the OS in the dock leaves you with all the existing problems.
So the phone-as-desktop has its use cases. I will say absolutely that a phone is unlikely ever to replace every desktop use; it’s a simple matter of power in every sense. The more processing you want to do, the more electrical power you need at the same tech level, and of course a phone has a limited battery size. More electrical power used means more heat generated, and again, a phone has a limited surface from which to shed that heat.
The big thing is that there will be particular use cases where the phone/desktop hybrid is the correct choice.
Small correction though, Continuum still allows you to use your phone … as a phone.
There’s a huge difference between switching operand sizes/addressing modes of the CPU and emulating an entire CPU architecture. Sure, either way you have to marshal the system calls into the kernel, but switching CPU modes is very different from full blown software emulation.
I’m not surprised Microsoft is trying this: many people are stuck with non-portable proprietary x86 apps, so this could be useful. I just hope MS doesn’t botch it. I feel they made an absolute mess of the virtual file system mappings for 32- and 64-bit code in the Program Files and system directories. Whoever was responsible for that introduced a lot of complexity for no real gain. They’d have been better off with no virtual mappings at all.
http://www.osnews.com/comments/28776?view=flat&threshold=0sort=comm…
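For a concrete taste of that mess: on 64-bit Windows, a 32-bit process that opens System32 is silently redirected to SysWOW64, and the special Sysnative alias exists purely to escape the redirection. A small probe, only meaningful when run under a 32-bit Python on 64-bit Windows:

```python
# Probe WOW64 file-system redirection. In a 32-bit process on 64-bit
# Windows, the System32 path below actually resolves to SysWOW64, while
# the Sysnative alias reaches the real 64-bit System32 (it is invisible
# to 64-bit processes, which see the real System32 directly).
import os

for path in (r"C:\Windows\System32\cmd.exe",
             r"C:\Windows\Sysnative\cmd.exe"):
    print(path, "->", "found" if os.path.exists(path) else "missing")
```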
You don’t need infinite monkeys. Just eight octopuses.
How many devices are out there (or planned) that can run this?
How many users will really want this panacea, this pot of gold at the end of the rainbow?
Or will it be just another nail in the Windows-on-ARM coffin?
Kudos to MS for keeping on trying, but I really feel the market for Windows Mobile has moved on to other things. 1%–5% of the market can’t be considered a success.
I totally agree.
And when you look at the string of promises MS has made on ARM platforms over the last 15 years…
How many have landed?
Frankly, what serious professional would bet a business on it?
Anything times zero is zero. Much more likely, someone else will do it and MS will buy them out.
I expect at some point there will be an x86 processor powerful enough to run general-purpose apps and power-efficient enough to run a phone. Emulation will not be needed. x86-powered phones could take over the market… in a couple of years.
bolomkxxviii,
That’s an interesting suggestion. I think that’s exactly what Intel was thinking when they sold off their ARM business to focus on x86 for mobile, yet it didn’t pan out, and Intel lost billions trying.
https://www.extremetech.com/extreme/227720-how-intel-lost-10-billion…
The problem isn’t that Intel can’t build competitive chips; I think they can. However, there are market factors beyond their control. For better or worse, the phone market matured without Intel, and now x86 is the outsider, struggling even to be a niche player in mobile.
In fact, with Microsoft building x86 emulators, we could actually see the opposite start to happen: ARM could start creeping into the desktop market, which is so far dominated by x86. This assumes ARM processors can break the x86 barrier with emulation that is “good enough”.
It’s hard to predict market demand, but I know that personally I would be interested in having an ARM PC to run Linux on. (Of course, it wouldn’t do me any good if these ARM PCs are locked down with Secure Boot, as required by MS…)
First, x86 is a bad ISA, and the fact that it has extremely good silicon implementations won’t help make real-world emulation performant.
Second, while ARM-class mobile CPUs have made tremendous progress in terms of performance, they are still aggressively optimized for low power consumption, so they won’t be specially designed to take full advantage of a constantly connected power supply, and they remain constrained by the thermal characteristics of tightly packed smartphones.
Third, while 2010-class software would work OK and would suffice for a lot of people, users will simply be forced to upgrade for other reasons: security, corporate standards, data-format compatibility. Developers, on the other hand, will be tempted to take advantage of the performance of contemporary desktop CPUs (always orders of magnitude higher than emulation on ARM) and their larger memory (not to mention that mobile devices cannot swap). For example, the newest Outlook, trying to mimic its web interface, is sloppy even on an i5.
I’m not even mentioning web apps, which are mainstream on the desktop (while barely used, and thus barely optimized, on mobiles).
All in all, the performance of this solution will be questionable in the real world.
I have a Windows phone and have hooked it up to a docking station as a test. It’s more than adequate for light editing of Office documents. As with anything, I’m not expecting, nor wanting, to do heavy work on the go, let alone on a phone. If I needed to do that, I would break out a laptop.
However, where I see this shining is HoloLens. Pure speculation here, but I see this as a next step in the evolution of the HoloLens platform. Think of all the sci-fi books where people have access to full computing on the fly through embedded computers. HoloLens is already a pretty impressive computer strapped to your head running Windows 10. Now extend that to being able to run existing x86 programs as well: you could basically have your workstation with you at all times, set up how you want, with multiple displays in a virtual environment, running what you want.
This is very interesting, but I’ve already successfully converted my Asus Zenfone 3 Premium into a desktop computer. Using a USB-C hub, I connect a monitor, mouse, keyboard, external HD, gigabit Ethernet, a Wacom Intuos drawing board with stylus (yep, works great) and an HP all-in-one laser printer. What about apps? Well, I have that covered as well: by installing a Debian server locally under a chroot, I was able to install and use all of my favorite Linux applications, launching them through an X terminal. Surprisingly, it works quite well, though with 6GB of RAM and a Qualcomm 821 I guess it damn well should. All in all I’m extremely happy with my setup, though it was difficult to find a new phone that supported video over USB-C; options are very limited at this time. Yes, I can use Chromecast or Miracast to stream my desktop wirelessly, and it’s actually usable, but nothing beats a wired connection.
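For anyone wanting to replicate this, the moving parts are roughly the following. This is only a sketch with hypothetical paths: it assumes root, a Debian rootfs already created with debootstrap, and an Android X server app listening on display :0.

```python
# Rough sketch of the chroot-Debian-plus-X-terminal setup described above.
# ROOTFS and the display number are assumptions; adjust for your device.
import subprocess

ROOTFS = "/data/local/debian"   # hypothetical debootstrap'd Debian rootfs

# expose the host's device nodes and process table inside the chroot
subprocess.run(["mount", "--bind", "/dev", ROOTFS + "/dev"], check=True)
subprocess.run(["mount", "-t", "proc", "proc", ROOTFS + "/proc"], check=True)

# launch a Linux app inside the chroot, drawing on the Android X server
subprocess.run(["chroot", ROOTFS, "env", "DISPLAY=:0", "xterm"], check=True)
```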
The Asus Zenfone 3, by the way, is a fantastic phone; I’m lucky I was able to find such a handset for my hacking endeavours. I still can’t believe people still use iPhones when such things exist. I mean, what part of running a friggin’ local Debian server don’t these people understand? Yes, I realise that not many will ever do what I’m doing with my phone, but the sheer number of features it has over the iPhone is just too overwhelming to ignore.