Microsoft is making the Windows 11 setup process a little more entertaining, at least on some laptops. I unboxed the Surface Laptop Studio 2 yesterday (read Monica Chin’s review here) and noticed that Microsoft now prompts you to play the modern version of its SkiFree game while you wait for updates to be applied.
A fun little touch.
Puts me in mind of Invade-a-Load, the Space Invaders clone you could play while waiting for your game to load from tape. I seem to recall a few games having these. Good idea putting something similar in the installer.
I was thinking of Invade-a-Load too! Microsoft not slow!! Now doing what the C64 was doing 36 years ago (with only 64KB)!
Why stop there, let it play Doom while installing!
Hardware has improved to the point where I don’t really see why we should have to wait for installs anymore. Typical M.2 drives can handle 600MB/s random writes and exceed 2GB/s sequential writes, and according to this source Windows 11 Home and Pro are about 18GB installed (seems bloated to me, but whatever).
https://helpdeskgeek.com/windows-11/how-much-space-does-windows-11-take-up/
So depending on how/where the files are installed, writing everything out should take anywhere from 9s (sequential) to 30s (random) even for an unoptimized installer.
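To show my working, here’s the back-of-the-envelope calculation as a quick sketch (the figures are just the ballpark numbers above, not measurements):

# Back-of-the-envelope install time from raw drive throughput.
# All figures are the ballpark numbers quoted above, not measurements.
install_size_gb = 18        # approximate installed size of Windows 11
seq_write_mb_s = 2000       # sequential write speed of a typical M.2 drive
rand_write_mb_s = 600       # random write speed of the same drive

best_case_s = install_size_gb * 1000 / seq_write_mb_s    # ~9 s
worst_case_s = install_size_gb * 1000 / rand_write_mb_s  # ~30 s

print(f"best case (sequential): {best_case_s:.0f} s")
print(f"worst case (random):    {worst_case_s:.0f} s")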
I haven’t installed Windows 11 personally, but here are some links covering Windows 11 install times…
https://robots.net/tech/how-long-does-it-take-to-install-windows-11/
https://techguided.com/how-long-does-windows-11-take-to-install/
https://www.digitalcitizen.life/how-long-does-it-take-to-install-windows-11/
I have to ask: why is it so slow? I guess Microsoft is doing more in the process, but I still don’t see why it should take more than 2 minutes max given the state of hardware. That’s more than enough time to do plug and play and extract files. It seems the installation process is just very poorly optimized and inflates the install time to several hundred or thousand percent above what the underlying hardware is actually capable of.
Someone will correct me if I’m wrong,* but my understanding is that for an in-place upgrade, the installer will first back up the old OS installation to a Windows.old folder, perform the upgrade, then merge everything back together. For a clean installation it sort of does what we all used to do manually with Windows 95 and 98, i.e. copy the installer files from the installation media to a temporary folder on the hard drive, make the hard drive bootable, and reboot into the installer running from the hard drive. We did that in the old days because CD-ROM media was so much slower than a hard drive.
* https://xkcd.com/386/
Morgan,
That’s an interesting theory. I haven’t independently confirmed it, but the last link said it took almost the same amount of time for an in-place upgrade and a fresh install. If that’s true then it suggests the overhead is common to both types of install.
From what I recall about the Windows 98 installer, only the finalization process took place after rebooting; the bulk of the OS installation had already happened. Additional software/scripts could be placed on the installation media to run after installation, and these would automatically run on first boot. I could probably confirm the Win98 behavior, though I was more interested in Windows 11 today, which I don’t have.
I wasn’t interested in slower install methods, but one of the links did cover it…
I’ve never used the Reset this PC feature, but…
Anyway, with an M.2 drive it should easily be possible to get from uninstalled to installed in 2 minutes. The reason I added another 300% overhead over raw copy times was to account for additional processes like those you mention. Yet install times are an order of magnitude longer than this. Do you feel there’s a justifiable reason it should take significantly longer? I ask because I don’t know and am genuinely curious.
It could be interesting to test inside a VM backed by a RAM disk. I get a (host) RAM disk speed of about 6GB/s. If that didn’t significantly reduce the install time, then we could definitely say the disk is not the bottleneck; but then what is? CPU? Memory? PCIe? How/why?
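If anyone wants to reproduce the RAM disk figure, something like this rough sketch is what I had in mind (the target path is just an example; point it at your own tmpfs or ImDisk mount):

# Rough sequential-write throughput test against a RAM disk path.
# TARGET is an example path; adjust it to your own RAM disk mount.
import os, time

TARGET = "/mnt/ramdisk/testfile"
BLOCK = b"\0" * (4 * 1024 * 1024)   # 4 MiB per write
TOTAL_MB = 2048                     # write 2 GiB in total

start = time.perf_counter()
with open(TARGET, "wb", buffering=0) as f:
    for _ in range(TOTAL_MB // 4):
        f.write(BLOCK)
    os.fsync(f.fileno())
elapsed = time.perf_counter() - start

print(f"{TOTAL_MB / elapsed:.0f} MB/s sequential write")
os.remove(TARGET)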
I feel that I care more about optimization than others do. I often come across the mentality of “it works, ship it”. This annoys me because most projects don’t do optimization during development (criticized as premature optimization). “Do it afterwards” is the prevailing opinion, which would be fine except that afterwards many project managers won’t allocate any further time for optimization, so users just have to deal with badly performing software by over-provisioning hardware. On the hosting side of things I encounter this all of the time!
Instead of fixing whatever the underlying overhead is, they’re adding games to help users pass the time…it just feels like misplaced priorities to me.
I think a lot of it has to do with the thousands of really small files that are put into place during an installation. SSDs are better at random writes than HDDs, and NVMe drives are better still, but random writes remain the Achilles’ heel of storage and data transfer. You can see this for yourself if you initiate a file transfer with lots of small files and a few large ones in Windows and watch the meter. The large files transfer as fast as the drive’s cache allows, then drop down but still run near the drive’s advertised speed once the cache fills up. The thousands of small files, however, drop the drive to a crawl. You can also simulate this with CrystalDiskMark; it tests both sequential and random writes with varying sample sizes.
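You can also reproduce it without CrystalDiskMark. A rough sketch like this (paths and sizes are arbitrary examples) writes the same amount of data as one big file and as thousands of 4KiB files and times both; the per-file fsync is there so the OS write cache can’t hide the per-file overhead:

# Same total data written two ways: one big file vs. thousands of 4 KiB
# files, with an fsync per file so the write cache can't hide the cost.
# Paths and sizes are arbitrary example values.
import os, shutil, time

DATA_MB = 64
SMALL_KB = 4

def one_big_file(path):
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(b"\0" * (DATA_MB * 1024 * 1024))
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - start

def many_small_files(folder):
    os.makedirs(folder, exist_ok=True)
    chunk = b"\0" * (SMALL_KB * 1024)
    count = DATA_MB * 1024 // SMALL_KB
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(folder, f"{i}.bin"), "wb") as f:
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
    return time.perf_counter() - start

big = one_big_file("big.bin")
small = many_small_files("small_files")
print(f"1 big file: {DATA_MB / big:.0f} MB/s")
print(f"{DATA_MB * 1024 // SMALL_KB} small files: {DATA_MB / small:.0f} MB/s")
os.remove("big.bin")
shutil.rmtree("small_files")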
Morgan,
Yes, I agree with you, but I did factor in random write speeds already. I used a 600MB/s ballpark figure in my original post, but much faster M.2 drives are available. Most M.2 random read/write speeds are specified in terms of IOPS…
samsung.com/us/computing/memory-storage/solid-state-drives/ssd-970-evo-plus-nvme-m-2-1-tb-mz-v7s1t0b-am/
To get the most random and least performant write speeds possible, the IOPS figure needs to be multiplied by the cluster size, which for NTFS is most likely 4KiB.
support.microsoft.com/en-us/topic/default-cluster-size-for-ntfs-fat-and-exfat-9772e6f1-e31a-00d7-e18f-73169155af95
So for a queue depth of 1: 60,000 IOPS * 4KiB = 234MiB/s
And for a queue depth of 32: 550,000 IOPS * 4KiB = 2148MiB/s
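Sanity-checking that conversion as a quick sketch (the IOPS figures are the ones from the Samsung spec page above):

# Convert a random-write IOPS rating into MiB/s, assuming each I/O
# touches one 4 KiB NTFS cluster.
CLUSTER_KIB = 4

def iops_to_mib_s(iops):
    return iops * CLUSTER_KIB / 1024

print(f"QD1:  {iops_to_mib_s(60_000):.0f} MiB/s")    # ~234 MiB/s
print(f"QD32: {iops_to_mib_s(550_000):.0f} MiB/s")   # ~2148 MiB/s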
In general most FS writes (unlike reads) can be cached and queued, so even though there is some overhead for the file system itself, very high write speeds are achievable.
Yes, if I’m not mistaken CrystalDiskMark includes FS overhead by using actual files…
xda-developers.com/crystaldiskmark/
glennsqlperformance.com/2020/12/13/some-quick-comparative-crystaldiskmark-results-in-2020/
I’m on board with your ideas, but given the real performance of modern-day M.2 media it still doesn’t seem to add up without some other source of overhead.
Actually, now I’m not sure whether CrystalDiskMark measures many small files or just small reads/writes inside of a large file.
I rarely use Windows-based tools like CrystalDiskMark. I’m more familiar with fio.
https://arstechnica.com/gadgets/2020/02/how-fast-are-your-disks-find-out-the-open-source-way-with-fio/