Right off the bat, there is not much use for a Pixel Watch with Windows on it. The project, as the maker says, is for “shits and giggles” and more like an April Fools’ joke. However, it shows how capable modern smartwatches are, with the Pixel Watch 3 being powered by a processor with four ARM Cortex-A53 cores, 2GB of LPDDR4X memory, and 32GB of storage.
Getting Windows to run on Gustave’s arm, as you can imagine, took some time and effort: inspecting a rooted boot image, modifying the stock UEFI to run a custom UEFI, editing the ACPI tables, and patching plenty of other files. The result of all that is a Pixel Watch 3 running Windows PE.
↫ Taras Buria at Neowin
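To give a sense of what “editing the ACPI table” involves in practice: every ACPI table begins with a standard 36-byte header, and byte 9 of that header is a checksum chosen so that all of the table’s bytes sum to zero mod 256, so any patch has to be followed by a checksum fix or the OS may reject the table. Here is a minimal sketch in Python of that one step – purely illustrative, not code from this project:

import struct
import sys

def fix_acpi_checksum(blob: bytes) -> bytes:
    # Standard ACPI SDT header: 4-byte signature, 4-byte little-endian
    # length, 1-byte revision, then the checksum byte at offset 9.
    sig, length, rev = struct.unpack_from("<4sIB", blob, 0)
    table = bytearray(blob[:length])
    table[9] = 0                    # zero out the old checksum
    table[9] = (-sum(table)) % 256  # make all bytes sum to 0 mod 256
    print(f"patched {sig.decode(errors='replace')} rev {rev}, {length} bytes")
    return bytes(table)

if __name__ == "__main__":
    data = open(sys.argv[1], "rb").read()
    open(sys.argv[1], "wb").write(fix_acpi_checksum(data))

The real port obviously goes far beyond this, but it illustrates why “patching plenty of other files” is fiddly work: every binary table you touch has to be made internally consistent again.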
More of this sort of nonsense, please. This is such a great idea, especially because it’s so utterly useless and pointless. However pointless it may be, though, it does show that Windows on ARM is remarkably flexible, as it’s been ported to a variety of ARM devices it was never supposed to run on. With Microsoft’s renewed entry into the ARM world alongside Qualcomm, I would’ve hoped for more standardisation in the ARM space to bring it closer to the widely compatible world of x86.
That, sadly, has not yet happened, and I doubt it ever will – it seems ARM is already too big and fragmented a mess to be consolidated for easy operating system portability. Instead, individual crazy awesome people have to manually port Windows to other ARM variants, and that, cool as these projects are, is kind of sad.
Thom Holwerda,
Honestly, this sounds like an April 1st joke.
To the extent that it’s real, I think it says more about the developer’s commitment to patching Windows/UEFI/ACPI than about Windows’ flexibility per se. This was achieved despite the numerous obstacles in the way.
I wish all hardware were compatible at the bootloader level. x86 has basically achieved this. Of course the OS will still need device drivers, but being able to load the OS is usually not a problem on x86. The lack of universal booting standards on ARM from the start has proven to be a significant impediment to the goal of booting generic operating systems across ARM devices. Operating systems should at least be able to boot reliably without having to modify the OS or the device. IMHO, the ARM ecosystem’s failure to provide a ubiquitous boot setup regardless of hardware was a lost opportunity and a regrettable outcome.
Ahh… the instruction set architecture of freedom, banishing the evil x86 duopoly by replacing it with… checks notes… an assortment of mutually incompatible bootloaders and a Qualcomm monopoly (with MediaTek serving as a bottom-feeder for the scraps Qualcomm doesn’t want).
Prediction: if RISC-V ever becomes a real thing, it will devolve into a similar mess of mutually incompatible bootloaders, and Qualcomm will dominate it too, just like they did with ARM.
kurkosdr,
Having genuinely viable alternatives is important, and I wouldn’t say ‘no’ to more CPU competition. Alas, x86 matured in a different era, when Microsoft didn’t manufacture hardware and hardware manufacturers didn’t write operating systems, so they each needed to build around interoperable standards.
This is dissimilar to the environment in which ARM came about. ARM device manufacturers bundle their own operating systems, and their hardware isn’t made to interoperate with others. I argue that this forced bundling and inability to bring your own operating system has been extremely harmful for consumers. Yet I recognize that manufacturers have little incentive to change this. It’s worse than simply not caring: they deem it preferable when customers are more dependent and have less control. As sick as I am of this norm, I think it’s here to stay.
I hope not, but manufacturers have decided that planned obsolescence is the future and don’t want owners to be empowered. Last month I upgraded a client’s system to Windows 11, and it turns out Prolific planted time bombs in their Windows drivers to intentionally break on Windows 11. Special video signal overlay equipment that used Prolific chips to interface over USB worked fine… until the Windows 11 upgrade.
https://www.systweak.com/blogs/fix-pl2303-phased-out-error/
Prolific played the long game, pushing out drivers in Windows 10 that were designed to stop working on the next version of Windows (i.e. Windows 11) in order to force users to buy new hardware. One can install the old drivers as described in the link, but whenever Prolific updates their drivers, Windows replaces the old ones again. Also, if you unplug the USB cable and plug it back in, Windows re-installs the new (i.e. broken) driver. This is garbage – I would go so far as to say Prolific should face punitive damages for this, but then there’s no law against drivers with time bombs. I fear that manufacturers are determined to ruin all technology going forward. So you may be right: even to the extent that RISC-V could fix many of the real-world troubles we’re having, it probably won’t, because manufacturers don’t actually want these problems to be solved.
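For anyone stuck with this hardware, one way to catch the silent driver swap is to scan the driver store and flag Prolific packages whose version no longer matches the one you pinned. A rough Python sketch – note the assumptions: the pnputil field labels below match recent English-language Windows 10/11 builds and may differ on yours, and the “known good” version string is a placeholder you’d replace with whatever last worked for you:

import subprocess

KNOWN_GOOD = "3.8.18.0"  # placeholder: the last driver version that worked for you

def prolific_driver_versions() -> list[str]:
    # "pnputil /enum-drivers" lists every package in the Windows driver store;
    # this only runs on Windows, where pnputil.exe ships with the OS.
    out = subprocess.run(
        ["pnputil", "/enum-drivers"], capture_output=True, text=True, check=True
    ).stdout
    versions, in_prolific = [], False
    for line in out.splitlines():
        if line.startswith("Provider Name:"):
            in_prolific = "prolific" in line.lower()
        elif line.startswith("Driver Version:") and in_prolific:
            versions.append(line.split(":", 1)[1].strip())
    return versions

for v in prolific_driver_versions():
    flag = "ok" if v.endswith(KNOWN_GOOD) else "REPLACED – reinstall the old driver"
    print(f"{v} -> {flag}")

Run after Windows Update (or a cable replug) to see whether the broken driver has crept back in.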
There is no need for RISC-V to “become a real thing” before devolving into an unholy mess. Just go to https://ubuntu.com/download/risc-v and observe that there are eight images for eight different boards (seven boards plus an emulator, to be precise). And if you visit https://winworldpc.com/product/ms-dos/1x, you’ll find the exact same mess on the 8086 platform, too!
How the heck did we end up with a compatible platform on x86? Easy: Lotus 1-2-3 and Doom (and other similar games). They were crazy popular back in the day – AND they required direct access to the hardware. All the clones that played games with the hardware, the so-called “MS-DOS compatibles” (as opposed to “IBM PC compatibles”), died off.
That was a one-off quirk of history; today hardware is powerful enough that the story isn’t repeatable. There is no incentive for anyone to create a “killer app” that accesses hardware directly.
One exception to the rule that the hardware you buy is tied to its OS forever may be servers, where people DO buy hardware and OS separately, but even there it’s unlikely: we would probably end up with a fixed VIRTUAL environment, while the actual OS running on the hardware would be proprietary. The “IBM z” solution.
Even in the 68000 era, there was one release per platform. So it’s not just “tied to the CPU”, never was.
And what you don’t understand is that Microsoft started pushing for standardization immediately afterwards, from version 2.0 on. Microsoft played the OEMs’ game just long enough to enter the market, but once they dominated it, they quickly started shifting the game in their own favor. This is because the Microsoft of those times understood the consequences of PCs running ancient versions of the OS with no way for the user to upgrade, and the fragmentation this causes.
Those of us who were with Android from the beginning expected something similar from Google, but nope: they are happy with the mandatory fragmentation of Android and simply release a library (AndroidX) to fill in the gaps, shifting the burden onto the developer.
Why spread lies when they are so easily refuted? Not only can you find MS-DOS 2.x and 3.x versions on that same web site that couldn’t be used on a regular PC, but Microsoft continued to provide unique OEM versions of MS-DOS up to and including MS-DOS 4.00 (the one from 1985… the MS-DOS 4.00 from 1988 is an entirely different beast).
Sure, Microsoft (just like Google) would have loved to consolidate everything on one single platform, but it was the Gang of Nine that made that dream a reality.
Android OEMs were always much more powerful than PC makers were when the latter decided to dethrone IBM. They could always just go their separate ways (and Huawei even did), so it’s ridiculous to expect that something like the Gang of Nine could happen to Google.
We’ll see what happens when Google pushes Android onto the desktop: will it keep enough control over the platform, or will it be forced to adapt to the wishes of the OEMs? I have no idea, but we’ll find out soon, I guess.
Microsoft did start pushing for standardization from 2.x on; it didn’t happen overnight, obviously, but they started emphasizing “100% IBM PC Compatible” and gradually phasing out the unique versions. The Gang of Nine was a response to the need for standardization that Microsoft created.
The Gang of Nine was a response to the IBM PS/2 that IBM created. It was the Compaq guys who went to Microsoft, not the other way around.
And the Compaq guys were the ones who had refused to adopt the modification of the AT bus proposed by Dell just one year earlier!
But when the IBM PS/2 arrived and Compaq felt that IBM would destroy them all… the Gang of Nine was organized, and yes, Microsoft embraced the idea, of course.
That’s when and why the “100% IBM PC compatible” marketing term was invented, and that’s how the industry coalesced around ISA. And yes, after that happened it became almost impossible to go off in a separate direction and create one’s own, incompatible standard.
But the OEMs learned from that episode, too. They understood that they could easily be caught in that trap again.
Thus, with phones, they fought tooth and nail to never accept such an idea.
History is a very good predictor of the future – but only an UNBIASED one.
If you bend the historical record to suit your agenda… then it stops working: history still follows similar paths, but it refuses to follow your ideas if those ideas were brought to life by bias in your knowledge.
zde,
I would say MS-DOS compatibles and IBM PC compatibles were two sides of the same coin.
Your mention of the applications is well and good, but we shouldn’t ignore that DOS itself needed to be able to boot. There was no market for an x86 computer that didn’t have the necessary hardware/BIOS/VGA BIOS/etc. to boot DOS. While it was certainly more about pragmatism than altruism, there is no doubt that this widespread interoperability helped other indie operating systems boot on practically every x86 computer. This isn’t to say there were no quirks, but those were unintentional, as manufacturers were trying to be compatible.
Windows itself is the “killer app”. Don’t forget this, because to this day it means you can still use the same install image across the entire x86 ecosystem and it will work. Not only Windows, but Linux (and other) boot disks work too… as a user this is absolutely fantastic! For its part, MS tried to push UEFI standards on ARM (albeit with the intention of using Secure Boot to block competitors, which is a whole other topic). The main parties responsible for breaking interoperability have been manufacturers building their own incompatible Linux/Android forks.
Many of us assumed technology going forward would follow the x86 model, but it turns out that was too optimistic, and modern devices have not been molded by the same pressure to standardize.
I’ve always bought x86 servers for this reason. I am interested in ARM but…
1) I need to be able to bring my own OS, ideally with one standard image. That’s just not happening. I have no interest in becoming dependent on a manufacturer-bundled OS – I’ve been burnt by this with routers/NAS devices/etc. Things are so much better for owners when we’re not painted into a corner.
2) Prices for general-purpose ARM servers are outrageous. It’s funny, because you can rent ARM instances from Amazon and other companies for less than x86 ones, since they’re more economical for them at scale. For mere consumers, however, ARM servers have been largely inaccessible, and commodity pricing for x86 is far cheaper.
“MS-DOS compatibles” (like the Tandy 2000 or the Siemens PC-D, and many, many others) were entirely separate beasts from “IBM PC compatibles”. There were, briefly, many more of them than IBM PC compatibles in 1982–1983. And many of them could run neither “normal MS-DOS” nor “PC DOS”. So much for “DOS itself needed to be able to boot”.
But many apps refused to work on them (the benchmark litmus test was Microsoft Flight Simulator, ironically enough – though no, Microsoft wasn’t perceiving Microsoft Flight Simulator as a major revenue generator).
THAT is where the push for hardware standardization originated. Not with Microsoft, who was ready to port BASIC to the platform of your choice, and where Charles Simonyi was planning his “revenue bomb” at that time – a strategy which very explicitly relied on a bazillion different, incompatible PCs. And not with IBM, who tried to destroy the nascent IBM PC clone business with the IBM PS/2 line.
But with applications and users. Heck, the fact that the EMS memory standard’s full name is LIM EMS – for Lotus, Intel, Microsoft – shows where the push for hardware standards originated!
Only after that happened did Microsoft seize the opportunity and start pushing for standardization, too. But it only released the first version of MS-DOS that you could actually buy in a store, MS-DOS 4.00, in 1988 (I mean the clone of IBM PC DOS 4.00, not the one Microsoft sold to OEMs in 1985)!
And yes, after “100% IBM PC Compatible” became a marketing term it became very hard for OEMs to break out of that box… but Microsoft was happily providing all versions of Windows in special forms for any strange device, be it an 80386 IBM PC accelerator or the Japanese-made Pentium-based FM Towns systems.
Heck, I remember the tale of a Dell server at our school which had a special version of Windows NT, because the regular version refused to boot on it! We opted out of Windows, and the technicians spent hours trying to install the regular one (they couldn’t break the seal on the special edition because we hadn’t purchased it), until we said that we would install Linux on it and would let them know if anything didn’t work.
But these tales of compatibility/incompatibility ONLY happened because APPS refused to work on that hardware!
And APPS needed compatibility because they had to use direct hardware access for speed.
But with ARM… APPS work with the OS supplied by the vendor and DON’T access the hardware directly… so where would the push “to follow the x86 model” come from?
zde,
I looked it up, and the Tandy 1000+ series was compatible at least as far back as DOS 2.
http://www.dosdays.co.uk/computers/Tandy%201000/1kdosman.pdf
https://en.wikipedia.org/wiki/Tandy_1000
Before that, Tandy had the TRS-80 products, but those used completely different CPUs, not x86.
https://en.wikipedia.org/wiki/List_of_TRS-80_and_Tandy-branded_computers
You can debate the semantics of calling Tandys “IBM PC compatible”, but honestly the name of the standard is less important to me than the fact that they were compatible with the same operating system disks and could run the same software. Personally, I will continue to call these IBM PC compatibles because it’s the terminology I grew up with, but I don’t mind if you’d rather call them MS-DOS compatibles – to me it’s the same standard from a flipped perspective.
A friend of mine had an old Tandy, and we ran many DOS games on it. If I recall correctly, it didn’t have the full 640KB available, and this may have been a problem for larger programs. Unless we’re lucky enough to find an article about it, we probably won’t be able to diagnose the specific issue you’re referring to with Flight Simulator. It could have been memory; who knows.
I’m really not familiar with them, but according to Wikipedia, they eventually became IBM PC compatible too.
https://en.wikipedia.org/wiki/FM_Towns
My point isn’t to glorify the IBM PC standards, but the simple fact that x86 standards were so ubiquitous played a huge role in the viability of alternative operating systems like Linux. If the standards had been bad and every x86 vendor had done their own thing (aka ARM), everyone would have needed custom distributions for their hardware. Linux would have gone nowhere if that had been the case.
We’re fortunate that x86 standards became so ubiquitous. But IMHO, the lack of similarly widespread standards on ARM (and potentially RISC-V) presents huge problems for the future of technology. Outside of x86 we’re becoming too dependent on manufacturers for operating systems, including those based on Linux!
I was too young to really experience it, but I know others expressed frustration with NT3/NT4 at the time. It was technically the more robust operating system, but the drivers were relatively immature or simply unavailable. I’d wager a guess that the same hardware probably did run DOS/Windows 95/98 just fine. IMHO, Win2k is really when WinNT came into full stride, with good support and compatibility.
AND you still NEED the operating system to run those APPS! Hardware compatibility with the OS is no less important.
With ARM devices like Android phones, every manufacturer bundles their own custom builds. They don’t support standardized OS images provided by a third party. This is where the motivation for standards came from on x86.
zde,
For kicks, I decided to look on YouTube for Flight Simulator running on a Tandy. Someone uploaded a video of it!
“Microsoft Flight Simulator for Tandy 1000”
https://www.youtube.com/watch?v=9FgXfclqT_I
This video shows a Tandy with 384K booting into MS-DOS on the C: drive, then rebooting into the Tandy version of MS Flight Simulator from its own boot disk.
If you watch the video, you’ll notice he switches between CGA and Tandy graphics with more colors. Conceivably the standard PC version might work on the Tandy, albeit in one of the CGA modes that the PC version of Flight Simulator supported. Incidentally, I found tons of posts where people upgraded their Tandy 1000s with VGA adapters – not necessarily because they wanted to upgrade the Tandy, but because CGA monitors have become so rare, haha.
I suspect that 384K of memory was the limiting factor. That could have been too tight for many PC titles.
Why are you looking at the Tandy 1000 and not the Tandy 2000, which was made before it and which I VERY EXPLICITLY NAMED? The Wikipedia article for the Tandy 2000 even lists 14 different models with 8088/8086/80186 CPUs:
https://en.wikipedia.org/wiki/Tandy_2000#History
Most were not “true” IBM PC machines, and some even supported 768KB of RAM (including the Tandy 2000). This was supposed to be a competitive advantage, but of course it meant that where the IBM PC had video RAM, these computers had regular RAM.
Sure, that era didn’t last, and as the article you cited says, “The Tandy 1000 was the first in a series of IBM PC compatible home computers produced by the Tandy Corporation”… but it arrived a full year after the Tandy 2000 and most other MS-DOS compatibles!
A year is a lot in the computer industry.
And I have no idea in what world a computer with incompatible floppies (5.25″ 720K – a physically incompatible format; you simply couldn’t read or write these disks on a PC, although the later-introduced IBM PC AT could read them with a special TSR) that couldn’t run programs that worked on the IBM PC can be called “IBM PC compatible”.
zde,
Fair point, I didn’t know that the Tandy 2000 came before the 1000.
Your link describes how much of a problem it was that half of DOS software didn’t work on the 2000. There would have been quite a large disadvantage for manufacturers implementing their own hardware standards instead of being compatible.
All of that happened before my time, but for better or worse the IBM PC did become the de facto standard, and everyone on x86, including Tandy, became compatible with it. The existence of that standard was (and still is) extremely important: x86 computers can boot the same OS and run the same software largely independently of the manufacturer. This is something we desperately lack with modern ARM hardware.
I think we’ve passed the point of novelty on “will it run Doom?”.
Porting an entire OS is so much more satisfying and useful, IMO.
…but I’d still load up DOS inside of Windows and try to run Doom. Jus’ sayin’.
I wonder why M$ didn’t make an official install image of Win11 for the Pi 5? It would be trivial for them.
Because M$ has accepted the fact that there’s no money in IoT. It was pushing different versions of Windows (first Windows CE, then Windows IoT) while it hoped to collect royalties (and it was actually collecting them, I think).
Today the price IoT people are ready to pay for an OS is zero, thus there’s no incentive for Windows to even be there.