Martin Brinkmann at ghacks.net:
We noticed back in October 2018 that Microsoft’s Windows 10 operating system was not creating Registry backups anymore.
The scheduled task to create the backups was still running and the run result indicated that the operation completed successfully, but Registry backups were not created anymore.
It turns out that this is a feature, not a bug, as Microsoft has posted a support document explaining the new behaviour and the reasoning behind it.
Starting in Windows 10, version 1803, Windows no longer automatically backs up the system registry to the RegBack folder. If you browse to the \Windows\System32\config\RegBack folder in Windows Explorer, you will still see each registry hive, but each file is 0 KB in size.
This change is by design, and is intended to help reduce the overall disk footprint size of Windows. To recover a system with a corrupt registry hive, Microsoft recommends that you use a system restore point.
This might come as a surprise to some, hence it seems prudent to highlight this change. In the support article, Microsoft lists methods to re-enable registry backups.
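If memory serves, the re-enable method boils down to a single DWORD value plus a run of the existing RegIdleBackup scheduled task; something along these lines from an elevated command prompt:

    reg add "HKLM\System\CurrentControlSet\Control\Session Manager\Configuration Manager" /v EnablePeriodicBackup /t REG_DWORD /d 1 /f
    rem Either reboot or kick the existing scheduled task manually:
    schtasks /run /tn "\Microsoft\Windows\Registry\RegIdleBackup"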
AFAIK, the registry is usually pretty small. I mean, typically in the low megabytes, though it can apparently grow to a couple of gigabytes (right? but that would be unusual).
That is to say: I smell a lie.
Count me in:
https://www.osnews.com/story/129919/systems-with-small-disks-wont-be-able-to-install-windows-10-may-2019-update/
This is ridiculous, given you can get a full-blown GNU/Linux installation in under 10 GB, including office suites and whatnot!
10GB? If you don’t need anything more for the office suite than a word processor and spreadsheet and use a lightweight desktop like XFCE4 or LXQt, you can do it in under 6GB.
A pure Windows installation grew massively with the XP-to-Vista jump, but has hardly changed since. XP originally required a couple hundred megabytes and had grown to about 1 GB by the SP3 timeframe. Vista suddenly increased that to about 8 GB, and that figure barely moved in the ten years up to Windows 10 1903. It is all the other stuff that takes up massive amounts of storage, which is why the minimum requirement isn’t 8 GB but 32 GB now:
* Swapfile
* Hibernation
* “Just to be sure we now have a permanent update-partition”
* WinSXS
* Restore Points
* Windows Update Roll Back Files
* Cache-folders
* Temp-folders
* Per-user-apps
All of the above serve some purpose and are generally a good idea for the average Joe, assisting greatly in performance, reliability and recoverability. Having proper snapshotting ability in the filesystem would surely save a whole lot of disk space and installation/recovery time, though (a quick way to gauge how much of this is the WinSXS component store is sketched just below).
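If you want to see how much of that is actually the WinSXS component store on your own machine, DISM can report it; a rough sketch, run from an elevated prompt (exact output wording varies per build):

    rem Report the actual size of WinSXS and how much of it is reclaimable
    dism /Online /Cleanup-Image /AnalyzeComponentStore
    rem Optionally reclaim superseded components afterwards
    dism /Online /Cleanup-Image /StartComponentCleanup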
They are also trying to compete with Chromebooks, remember? Go look on Amazon and you’ll find a TON of super cheap Win 10 systems that have a tiny 32 GB of storage, and when you are dealing with systems that small? Every byte counts.
What MSFT should do is simply have the OS detect how much space it has available and flip the switch back on if the system is on, say, a 100 GB or larger drive, but we all know what would happen then… some outlet like CNET would write a clickbait “If you buy one of these you are getting ripped off” piece (which you are, but not because of backups; it’s because those sub-5 W Intel Atom chips are just god-awful), because any time there is a, shall we say, “low rent” version of Windows it gets slagged. See Vista Basic and Win 7 Starter.
With Win 10 they went back to the classic Home/Pro model, which everyone seems to be fine with, but that means any unit that ships with Win 10 Home needs to be just that; after all, they don’t want another Vista Basic lawsuit debacle on their hands. It also means they need Win 10 Home to run comfortably on a 32 GB eMMC, so every switch they can flip that will shave a few GB off? They are gonna flip it.
Honestly, I don’t see this being an issue anyway. With Win 10 it’s so much easier to just use the built-in refresh/restore functionality that I don’t see Joe and Jane Average using registry backups, and those who know enough to actually want them can easily grab something like CCleaner that does more than just registry backups.
Remember the specs Windows 200 ran on? A 400 MB hard disk and 256 MB of RAM. What are all the extra bytes used for now? You can’t say it’s because the UI displays more colors, you can’t say it’s because of some AR/VR UI; it just doesn’t do much more than it did in 2000. So why eat up 32 GB of eMMC for such crap?
Well… Windows now basically installs the files for every conceivable feature, and keeps multiple versions of supporting libraries, under WinSXS. When you “install” a feature, most of the time it actually just gets kinda “connected” to the rest of the OS from the WinSXS folder. That made things a lot larger in the XP-to-Vista jump.
Windows 10 also includes essentially an entire new stack alongside Win32, for the UWP “world” (which is heavily .NET based).
And anything .NET based has the original CIL bytecode version installed, plus a precompiled local version too. More space chewed…
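If you are curious, you can see that second, precompiled copy with the .NET Framework’s ngen tool; roughly like this, assuming the stock 64-bit v4 framework path:

    rem List the native images that have been precompiled from CIL assemblies
    %windir%\Microsoft.NET\Framework64\v4.0.30319\ngen.exe display
    rem Precompile anything still queued instead of leaving it to the background service
    %windir%\Microsoft.NET\Framework64\v4.0.30319\ngen.exe executeQueuedItems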
Then I’ll rent Microsoft a larger chunk of my personal hard drive, since they decided to take more room out of it. How about $10/mo? Seems a fair offer for a dedicated 1 GB that allows them a bit more expansion.
As part of Windows since 7, by default it will create a little hidden file, “hiberfil.sys”, on the off chance you want to hibernate instead of suspend. If you have 2 GB of RAM this file is 2 GB, so whatever, but if you have 8/16/32 GB of RAM this file is 8/16/32 GB large! By default, for a feature you may not even want or ever use or know about!
Always allocated, unless you disable hibernation by running a command in an administrator command prompt.
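If you do want that space back, the command in question is powercfg, run from an elevated prompt; something like:

    rem Turn hibernation off entirely and delete hiberfil.sys
    powercfg /hibernate off
    rem Or keep hibernation but cap the file at roughly half of RAM
    powercfg /hibernate /size 50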
Windows is full of bloat and waste. I can think of a few things they could cut to save space: activation (they are giving Win10 away at this point), the Microsoft Store, stupid Halo Cortana, Media Player, Internet Explorer or whatever garbage browser they are currently trying to peddle!
tl;dr: if hobbyists can throw together a sub-500 MB Windows PE installation packed to the brim with useful utilities, I don’t see why Microsoft can’t throw together a minimalist Windows desktop in a similar fashion!
> As part of Windows since 7, by default it will create a little hidden file, “hiberfil.sys”, on the off chance you want to hibernate instead of suspend. If you have 2 GB of RAM this file is 2 GB, so whatever, but if you have 8/16/32 GB of RAM this file is 8/16/32 GB large! By default, for a feature you may not even want or ever use or know about!
Two important points to make about this:
* It gets pre-allocated to ensure that you actually are able to hibernate. If it didn’t, and you had less disk space left than you had RAM, you couldn’t hibernate (and imagine explaining that to end-users who aren’t tech savvy).
* “Not even want, use, or know about” is a bit generous. I know a lot more people who never shut their system off and just hibernate instead 99% of the time.
Most people I know let the computer sleep. Hibernation is a pretty rare use case. It really only makes sense if you want to leave your project open on a laptop you are not using for a few weeks (if the laptop has desktop RAM), or a few months (if the laptop has LPDDR memory).
Kochise,
Ok, but it’s not really fair comparing windows 200 to windows 10. 🙂
I’m with you on Windows 2000 though. IMHO Windows 2000 was the plateau for MS desktop innovation, and a lot of the developments after that were change for change’s sake. The Control Panel got worse, and new versions of Windows brought fancy new themes, but they were slower and didn’t aid productivity, so I set things to “classic mode” when available. Of course today it would lack modern hardware drivers, but if it weren’t for that I imagine I could still be productive on Windows 2000. Can you think of anything that has fundamentally improved? I’d say there were definite anti-features for kernel development starting with Vista, which in hindsight Microsoft probably regrets, since it accelerated the uptake of Linux by developers.
I’m less familiar with Mac evolution, but I’m curious when they hit their plateau. With my limited experience, I would guess it was with Mac OS X and the “new” Mach kernel. Have things improved much since then on Apple’s side, or is it once again mostly theming?
I did not have time to read this in any detail, but it looked interesting…
http://www.osxbook.com/book/bonus/chapter1/pdf/macosxinternals-singh-1.pdf
I still use Windows 2000 and XP from time to time to test compatibility for my development work (Windows 7 is my daily driver), and I can attest that neither 2000 nor XP has lost its productivity factor. Besides some fancy shortcuts and windows snapping to screen borders, which you can also get with third-party extensions, there was no real justification for later versions of Windows. The main problem was the 64-bit transition, but nothing prevented Microsoft from doing what they are doing with Windows 10: incremental updates that replace large parts of the OS. And as for security, they proved themselves it was a moot point when they managed to ship recent security updates for Windows XP, so it really was not the technical issue they pretended it to be. They just have to milk the cow so their shareholders get their fees.
If only Linux were really technically superior by a large margin and/or had a better, more consistent desktop experience, I would gladly switch. But maintaining a Linux installation still requires a lot more knowledge and far more time spent resolving dependency conflicts, so even for me it is currently easier to stay on Windows out of pure laziness. Well, Windows 7, so as not to experience the update mess Windows 10 has become. And as I said, I still have Windows 2000 and XP as a fallback and am still perfectly productive on them.
XP and Server 2003 (NT 5.1 and 5.2) really were very different from Vista (NT 6.0) and later versions. NT 6.0 made a big design switch from “put the hardware central” to “put the user central”. Suddenly the mantra wasn’t “save memory” but “unused memory is wasted memory”. The driver model was completely turned upside down, with graphics moving out of the kernel for higher reliability. We went from “efficient” DLL hell to inefficient-but-reliable Windows Side-by-Side. We even went from “everyone and everything as admin all the time” to “User Account Control”. Even installation/deployment greatly changed (WIM/dism). It took two years to work out the kinks of Vista and create good drivers, but when Windows 7 came out there was no denying the effect of the changes: normal users could finally have an OS where hardware worked out of the box and where you didn’t need a re-install or a “tech friend” to fix your PC when you installed a wrong driver. At the same time basically all software continued to work, and hardware had caught up to the extra requirements of NT 6 vs NT 5.
NT 5 was indeed perfectly productive, but only for techies that could fix the breakages.
NT 6 was when Windows became great for normal people. 7-8-8.1-10 are all gradual improvements with lots of tuning of the kernel and experimenting with the UI, but nothing ground-breaking anymore. I would still divide the history into:
* DOS (perfectly suitable for the average nerd/geek)
* DOS+Windows (…but Microsoft wants more so let’s add a GUI on top)
* 95-98SE (consumers get a nicer GUI and plug-and-play) and NT4 (business gets a somewhat stable and secure platform)
* ME-2000 (they tried to combine the consumer and business platform, but left plenty of kinks)
* XP (they finally succeeded….32-bit perfection….now what?)
* Vista (they tried, but took it out of the oven too quickly)
* 7-10 (okay, modern and stable platform that can run all your software on your increasingly faster and cheaper hardware. Desktop Nirvana reached! Now what to do next? Hey, don’t look at that shiny mobile future, don’t look… you just had to look, didn’t you)
Ok, Windows 7 was a welcome improvement, but after XP SP2 I hardly experienced any stability issues. Once you got the right drivers working flawlessly, you didn’t touch anything, made a ghost of your system partition, and voila. Thankfully those images were like 1 GB in size and you could fit several compressed ones on one DVD.
You know, there are people who still use their old HP 29C because it just works and they don’t need all the fancy stuff of more modern offerings. What I do with a computer today is not much more than what I did with 2000 or XP. Sure, 7 provided some nice changes, but starting with 8 they began removing them to be touch-oriented. And that lost me.
I also used Ghost (and Partition Magic) at that time and you are basically right. Once you had everything set up and working flawlessly and didn’t touch anything, post-SP2 XP worked really nicely… and then it broke because some program/update installed a conflicting library and you had to use your Ghost image to restore everything back to “virtual perfection”. It was also the timeframe where you bought a faster CD/DVD writer and your favorite burner program didn’t support that hardware. And it was also the timeframe where every video driver update would break Windows Media Center (ATI All-In-Wonder 9700, if I remember correctly).
In NT 6 the driver model is much more robust, so the burner programs work immediately with every CD/DVD writer… which we no longer use anyway because of USB sticks.
One program might interfere with another program, but it will not break my entire setup and force a “Ghost restore”, and DLL hell is a thing of the past.
I also no longer need tools like Ghost and Partition Magic, because the built-in tooling in Windows is now so much better than before (fdisk vs diskpart, dism; a rough example below).
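To give a rough idea of the Ghost-style workflow with the built-in tools (the file paths and image names here are made up, and you would normally run this from WinPE or the recovery environment rather than the live system):

    rem Capture the Windows partition into a compressed WIM, roughly what Ghost used to do
    dism /Capture-Image /ImageFile:D:\backup\win.wim /CaptureDir:C:\ /Name:"Known good" /Compress:max
    rem Later, lay that image back down onto a freshly prepared partition
    dism /Apply-Image /ImageFile:D:\backup\win.wim /Index:1 /ApplyDir:C:\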
Multi-monitor support, window management and literally dozens and dozens of other “niceties” all add up to a much more robust and easier to use OS. The “touch oriented” features from Windows 8 never bothered me after the first “Service Pack” that gave the Start screen the same background as the desktop and added “boot to desktop”. Start works the same for me in 7-8-8.1-10: I hit the Start key, type a few letters and hit Enter. I also greatly prefer the new Settings screen over the old Control Panel. It is much better organized and searchable, and you don’t have to constantly go “yes, ok, yes, all, reboot” for things to become active.
What I do with a computer also didn’t change much between XP and 10, but now everything works much faster and more reliably, and is nicer to look at and work with. 10 is simply a much better “ExPerience” than XP (duh, it is also 15 years of progress in both hardware and software).
avgalen,
It’s a matter of code maturity as much as architecture. XP became more robust with each service pack, and by the end of its run it was pretty good. In a way it’s ironic that OS vendors replace things just when they are working their best with new stuff that breaks all kinds of things, like Vista did. Eventually that code matured into Windows 7. But when most people were happy with that, Microsoft went and stuffed it up again with Windows 8, haha. As a consumer, it’s not unreasonable to ask why we can’t stick with what works, and technically we could, but as a vendor the answer is that you can’t sell new stuff if it doesn’t change. Change is necessary for business whether it’s beneficial or not.
Windows XP was available in 64-bit, but the driver situation was as bad as, if not worse than, Vista’s at the beginning.
https://en.wikipedia.org/wiki/Windows_XP_Professional_x64_Edition
It’s hard to tell. Apple, Microsoft, and even Canonical seem to be experiencing lukewarm interest in their new platform developments. Sure, there are new hardware specs, which is always nice, but it’s hard to see where the next desktop platform innovation is going to come from. They’re pushing “cloud services”, but that’s as much about extracting more money and/or advertising dollars from consumers who don’t feel we need it as it is about giving us something we actually want to pay for.
I guess it’s an open question whether we’ve reached a permanent plateau or if more desktop innovation is on the horizon. It’s possible that future innovations are anchored more in business models than technological progress. This will probably disappoint most of us who experienced the rapid technological progress in past decades.
You are entirely correct! Most businesses and even many consumers skipped 8 and 8.1, but 10 “got it right” and is being widely adopted. Maybe people actually have a clue?
XP-64 was a weird Frankenstein that was made at the end of the XP era, when Vista was taking too long and Microsoft needed to release something for the consumer market just as 64-bit was starting to become available. It was basically a “Server 2003 with Desktop Experience” instead of “XP compiled for 64-bit” and had the 5.2 kernel of 2003 instead of the 5.1 kernel of XP. Of course, mostly only drivers aimed at server-like hardware were available.
It wasn’t until Vista Service Pack 1 (Server 2008) that the 64-bit era really started for Microsoft. (Ignoring Itanium entirely here.)
The desktop platform is now entirely mature, so not much innovation in it is possible or wanted. Desktops were replaced by laptops for most tasks a decade ago, because laptops are now powerful enough to do almost anything, with the obvious benefit of mobility.
Desktops/Laptops are still used for business and “productivity” (Windows, OSX, Linux) but most consumption is now done in browsers and in apps (Android, iOS)
We are already slowly starting to see the next phase of innovation with purely single-task devices (IoT) like home automation, smart speakers, etc. that are often voice (and app) controlled.
It seems like there is a plateau around the 2 GHz CPU, 4 GB RAM mark where a platform tops out. After that, only very niche features are used by a small group of hardcore users. The rest of the world is satisfied with the situation as is, enjoys the price drops and fine-tuning, and awaits the next generation of devices that fills a different need.
If you look closely enough you can still see a whole bunch of innovation on both the software side and the hardware side that actually fills needs. Some examples:
* ChromeCast. An ultra-cheap and small device that allows you to show whatever content you want on a big screen
* Cloud-Storage and sync-clients. Your phone takes a picture that is automatically tagged with location, uploaded for longtime storage, analyzed for content and available on all your devices everywhere
* Power-Management. Basically every device now has an ultra-low power-usage when not in use but becomes “instantly” available whenever you want
* Waze. We have come a long way from CD-based maps and printed navigation-instructions to the current Map-applications with crowd-based traffic information
* Virtualization, software-defined-everything. Basically the only great reason to have lots of hardware resources is to split them up into several smaller hardware resources
avgalen,
I’m not really happy with Google’s Chromecast smart TV platform; it bothers me that it’s engineered to connect to Google even for what *should* be local tasks. It’s bad for privacy. There’s no reason local interactions must be initiated through Google’s data centers, but it’s engineered this way to give Google the capability to monitor us. IMHO this is what’s wrong with where the industry is going. Home automation and streaming tech is great, but the way it’s being unnecessarily engineered to spy on us is cringe-worthy.
I’d rather have Miracast or something that runs locally without phoning home to tell Google about my habits!
This is one of those things where we’ve had these sorts of capabilities for a long time, but they seem to get “rediscovered” on mobile devices. It’s all well and good to have these kinds of features; I’m just against proprietary solutions. It needs to be an open protocol where users are allowed to select their own providers or even self-host if they want to. If users can do that, then I really don’t object. But if they’re tied to a platform where the provider holds our data and can dictate who has API access to our files, then that’s extremely problematic.
PCI and USB have had this in the specs for decades, but of course it never worked well because manufacturers did a poor job with hardware and drivers. You’re right that it’s gotten better, especially with mobile ARM devices.
Sure, virtualization is great, but it’s not dependent on new operating systems.
It’s not that I’m trying to dismiss your use cases; they’re fine. I just don’t think we needed new operating systems to get them working.
Out of interest, how do you use them? On hardware? In VMs?
The big challenge I’ve had with those systems is that VirtualBox’s emulation of the multiprocessor support they can use is incredibly inefficient and ends up pegging a core on the host. On their discussion boards this is described as “IOMMU overhead”, but it effectively prevents MP use with older guest systems. Using them in single-processor mode is fine. Multiprocessor support works in 2003 64-bit and newer guests (including Vista/2008 32-bit, sigh).
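For what it’s worth, forcing such a guest down to a single virtual CPU is a one-liner with VirtualBox’s VBoxManage tool (the VM name is a placeholder, and the VM has to be powered off first):

    VBoxManage modifyvm "NT4 Terminal Server" --cpus 1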
The good part about virtualization, and VirtualBox specifically, is that it has guest additions for these old systems, which removes the need for modern hardware drivers. Combine this with remote desktop and you can have very old systems do things they would never have done in their day. I use NT 4 Terminal Server on 2x1080p monitors, for example, over RDP, so no hardware drivers are needed.
I’m slightly surprised you can get an NT4-RDP-compatible client running on a modern-ish machine, to be honest.
On Linux, rdesktop has this support, and appears to be in “maintenance mode.” It still gets updates without big feature additions, so it always works on newer systems and is in almost every Linux distro.
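For anyone wanting to try it, a typical invocation looks something like this (the hostname and geometry are placeholders; -4 forces the old RDP version 4 protocol that NT4 Terminal Server speaks):

    rdesktop -4 -g 1920x1080 -u Administrator nt4-ts.example.local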
On Windows, the newest Microsoft client that can still talk to NT4 is the one shipped with XP, pre-SP2. This is a pretty boring Win32 app though, and works on any OS that can run Win32. I’m currently using it on 8.1 64-bit, although I’d be surprised if it didn’t work on newer systems. My biggest complaint with it is that it doesn’t want to report widescreen resolutions to the server, so the left and right screen regions are unusable even in fullscreen. Since rdesktop can do this just fine, it doesn’t appear to be a server-side restriction.
On the Mac the situation is a little worse. The newest Microsoft client that talks to NT4 is version 1, which is PPC only. It performs fine under Rosetta, but obviously not on newer versions of Mac OS that don’t include it. I’ve been amused for a while that on the Mac, X11 + rdesktop is significantly faster than any official client (not just faster than Rosetta, but always), so although it’s clunky to go via X11, it works pretty well if you want to launch it and stay in it for a while.