One of the biggest community and customer benefits of UUP is the reduction you’ll see in download size on PCs. We have converged technologies in our build and publishing systems to enable differential downloads for all devices built on the Mobile and PC OS. A differential download package contains only the changes that have been made since the last time you updated your device, rather than a full build. As we roll out UUP, this will eventually be impactful for PCs, where users can expect their download size to decrease by approximately 35% when going from one major update of Windows to another. We’re working on this now with the goal of supporting this for feature updates after the Windows 10 Creators Update; Insiders will see this sooner.
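(For a concrete sense of what a differential package is, the long-standing bsdiff/bspatch tools do the same thing at the level of a single file; the file names below are purely illustrative.)

    # Publisher side: compute a binary delta between the old and new builds
    bsdiff old_build.bin new_build.bin build.delta    # ship only build.delta

    # Client side: reconstruct the new build from the old build plus the delta
    bspatch old_build.bin new_build.bin build.delta

The delta is typically a small fraction of the full file, which is where savings like the quoted ~35% come from.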
Not earth-shattering or anything, but still a nice improvement.
I’ve read the same thing from Microsoft multiple times, so I wonder whether it will actually materialise now… I’m a bit sceptical about it. It seems like too many Windows teams are working on their own features without knowing what the others are doing, or barely caring that the other teams exist.
To understand this you have to realize how Microsoft operates internally: every team is its own competitive force against all the other teams. For most of Microsoft’s history there has been little collaboration and more reward in foobaring other teams while getting your own stuff done, even if several teams end up implementing the same thing.
Are we talking about the same functionality we’ve had via yum/dnf deltarpms for something like 7 years now? Attaboy, MS! How’s that software/package management coming along?
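For reference, deltarpm support on the Fedora side is a single dnf.conf knob (and, if memory serves, it has been on by default for a while):

    # /etc/dnf/dnf.conf -- deltarpm downloads only the changed bits of a package
    [main]
    deltarpm=true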
Package managers only work if you’re the one distributing all the software. Microsoft isn’t that.
And when they try to do it via the Windows Store, people – many of whom levy the above complaint against them – cry foul.
Drumhellar,
This is my gripe with package managers. I’d really like to see a solution that is less centralized. A repository that worked more like DNS, for example. One of the shortcomings of the repo model today is that we can’t experience the full simplicity and benefits of package managers with 3rd party software.
3rd party developers should be able to register with the repos, which would then delegate those packages back to the developers’ own servers.
Users could manage 3rd party software with the exact same programs they use for 1st party software (i.e. aptitude, yum, whatever). It seems better than manually downloading/installing/updating from websites.
Rather than registering with a repo, I’ve thought an OS should have an update service that installed software can register with. The service would run periodically and contact each software maker’s own servers to check for updates, instead of what’s done now, where every installer adds its own background updating service that runs at startup (one for Java, one for Adobe, and so on).
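Something like this is what I have in mind — one drop-in file per installed app, read by a single OS-wide update service (the path, format, and field names here are entirely made up, just to illustrate the idea):

    # /etc/app-updates.d/acme-editor.conf  (hypothetical)
    name    = AcmeEditor
    feed    = https://updates.example.com/acme-editor/stable
    pubkey  = /usr/share/keys/acme-editor.pub
    check   = daily

The service polls each registered feed on its own schedule, so there is exactly one updater process instead of one per vendor.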
As far as install/uninstall goes, Windows has some limited package-management facilities via MSI installers (mostly the ability to install them remotely, plus some limited dependency-resolution features).
Of course, the situation with Windows is that even when Microsoft provided an awesome solution to these problems, good luck getting developers to use it. All the various forms of write redirection in Vista are the result of developers making the same wrong assumptions Microsoft had been telling them not to make since at least Windows NT 4 (i.e. don’t assume privileged execution, don’t assume you can write to the program directory). Even that wasn’t enough to avoid breaking a shit-ton of software.
Wrong. You can easily use package management with multiple repos from different vendors. For example, on my machine dnf pulls packages from Fedora, RPM Fusion, Google, and Adobe, and I could easily add a few more if needed. No conflicts yet.
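For anyone who hasn’t tried it, adding a vendor repo is a one-liner (the URL below is a placeholder, and config-manager comes from the dnf-plugins-core package):

    # Show every repository dnf currently pulls packages from
    dnf repolist enabled

    # Register one more vendor repository alongside the distro's own
    sudo dnf config-manager --add-repo https://example.com/vendor.repo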
Multiple repos only make things slightly less centralized, and do very little to improve third-party software distribution.
I’ve got software from 50+ different developers populating my Start Menu right now. 50+ different repos isn’t much of a solution, especially when some of them, like RPM Fusion, like to replace Fedora packages with their own, occasionally breaking things.
Not true at all…
I used to run several applications by installing (and updating) them myself because they had no package for my system. I had zero problems with that, even though the rest of my system was updated through the package manager.
As long as manually installed programs don’t interfere with the system itself, there is no problem. The same is true for Windows.
That’s not true insofar as package managers allow for multiple repositories. Windows Update could have allowed for multiple “update channels” a long time ago, had Microsoft been interested in it. Geez, Firefox can update extensions from multiple sources with only one click. So can any *nix package manager. Even Gentoo has simplified its overlay structure, allowing easy use of several repos (at your own discretion, I might add…).
Mozilla is the one distributing extensions, though.
Not always. You can find extensions in places other than addons.mozilla.org, and you can get them updated from other sources. Imagine opening “Programs and Features” (or “Add/Remove Programs”, etc.), pressing a button called “Search for updates”, and having each application search its own update channel (essentially a single-package repository) for updates.
Repositories do not have to contain multiple packages. Google has (or had?) a repository containing solely Google Chrome. An update channel for a single app is simply a repository for a single application. Any directory can serve as a repository in the conceptual sense.
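From memory, the whole Chrome “repository” is a tiny text file dropped into /etc/yum.repos.d/ — roughly the following, so treat the exact fields as approximate:

    # /etc/yum.repos.d/google-chrome.repo
    [google-chrome]
    name=google-chrome
    baseurl=https://dl.google.com/linux/chrome/rpm/stable/x86_64
    enabled=1
    gpgcheck=1
    gpgkey=https://dl.google.com/linux/linux_signing_key.pub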
People cried foul because Microsoft made a centralized system that could not be expanded – everything had to go through and be approved by Microsoft. That’s not how the Linux Package Managers work.
Now, if Microsoft had:
– Made an extensible package-management system that provides the functionality to add repositories, install software, etc. (e.g. APT, Yum/DNF, pkg), using a standardized package format (e.g. tarballs for Slackware, ebuilds, DEB, RPM, etc.)
– Provided a centralized repository (or repositories) for Microsoft’s own products (Windows, Office, etc.)
– Provided the ability for others to set up their own repositories and integrate them into Windows
That last point is the big contender. IOW, companies can set up their own Linux repositories for the distros they support, and then their customers (or staff) can simply add those repositories, update, and install the software. No approval from any of the Linux distros is needed to do this.
Companies cannot use the Windows Store to distribute their own software internally; and if they want to distribute to customers, they have to get it approved by Microsoft to go into the Windows Store – and it can be removed if Microsoft changes its mind in some ToS or policy.
The same goes for Apple’s App Store for macOS.
Note: even Android has this capability; although it locks it down by default, you can add additional stores like the F-Droid app store simply by allowing installs from unknown sources – no root required.
APT works just fine with letting you add third-party repositories, or PPAs…
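On Ubuntu it’s a couple of commands (the PPA and package names below are made up for illustration):

    sudo add-apt-repository ppa:some-team/some-app   # registers the repo and its signing key
    sudo apt-get update                              # refresh the package lists
    sudo apt-get install some-app                    # install from the newly added source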
bert64,
Apt is strong when everything you need is available from your primary repository, but in my opinion it doesn’t work well when two or more sources need to coexist.
Just the other day, on an Ubuntu LTS server, I needed a package that wasn’t available in the LTS repos. So I tried pulling the package from the next release’s repo, but then it wanted to replace half of the operating system in a dependency cascade – bah!
I had the same issue on another production system where I needed a version of nginx that was unavailable in the stable repos; getting that version working from a different repo was harder than it should have been and required a lot of manual intervention.
Far be it from me to defend the Windows model, but its course of evolution has been more accepting of installing software with uncoordinated dependencies. Apt needs to learn that it is sometimes necessary to have multiple versions of a shared library installed in order to support foreign versions of a binary. Right now it insists that the whole system be pegged at the same library versions, which is a bad assumption when multiple repos are needed, and it just results in pain for anyone attempting to use apt across uncoordinated repositories.
Part of the solution might be to add the concept of “namespaces” for different repositories, and perhaps some changes to the Linux library loader and/or the search path for binaries from different repositories.
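Apt’s pinning mechanism can blunt the dependency cascade somewhat, though it is exactly the kind of manual intervention I’m complaining about. A sketch, assuming packages from an nginx.org repository as in my example above:

    # /etc/apt/preferences.d/nginx -- prefer nginx.org for nginx itself,
    # but keep everything else pinned to the distro archive
    Package: nginx*
    Pin: origin nginx.org
    Pin-Priority: 900

    Package: *
    Pin: origin nginx.org
    Pin-Priority: 100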
Or… for lack of a better solution at the moment, a binary with its libs statically linked, so that it has (almost) no dependencies? Or, to say it in the heretic’s tongue: the MS Windows way.
That isn’t a real solution, but one should at least have the choice between the current way and what I propose, so that users can make the call themselves and deal with whatever backfiring it causes.
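A minimal sketch of what I mean, using gcc (any toolchain with a -static equivalent works):

    # Build a self-contained binary that carries its libraries with it
    gcc -static -o myapp main.c
    ldd myapp    # reports "not a dynamic executable"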
Hi,
No; we’re talking about something that updates an OS (i.e. the base system that all third-party apps, etc. depend on) and has nothing to do with anything outside the base system. We aren’t talking about an over-complicated and error-prone design failure (a package manager) that’s needed to try to hide the symptoms of the hideous dependency nightmare you end up with if you’re too stupid to have any concept of a “base system”.
I guess what I’m saying is that, in a way, it’s the opposite of a “pukeage mangler”.
– Brendan
Manually downloading stuff from the web, continuous handholding and rebooting doesn’t sound very modern either. 😉
It is entirely unclear to me which scenario they are going to change: the Update or the Upgrade scenario. Microsoft completely mixes these two terms, yet treats them very differently:
* Update: your current version is 14393.100 and it becomes 14393.200. This is really just a bunch of (monthly) patches applied as one Cumulative Update. It is already “delta-optimized”, so if the .200 patch is 800 MB you will probably only download around 300 MB if you already have .100.
* Upgrade: your current version is 10586.x and it becomes 14393.x. This is basically an entirely new installation, like the one you would perform going from Windows 7 to Windows 10: back up the state of the machine, perform a clean install, then restore/merge the previous state of the machine.
This Upgrade process has happened twice so far, from 1507 (RTM) to 1511 (November Update) to 1607 (Anniversary Update), and the next scheduled one is “1703” (Creators Update).
The Upgrade process is also how Insider builds are delivered, basically replacing your entire OS every time, which takes a long time and is known NOT to migrate all your settings.
So is UUP only going to influence this Upgrade process by turning it back into an Update process? Or just by not replacing your entire OS but only the changed parts?
I am very interested in this because Updates work perfectly fine for our company, but Upgrades are causing problems in two ways:
1) As mentioned above, they are known to lose settings
2) Native Boot (bare-metal VHD) installations are blocked by the Upgrade process.
Windows already has enough problems reliably applying complete update packages. I hate to think what a differential update is going to be like to fix when it messes up, as it inevitably will. Plus, does anyone know how this will affect WSUS servers yet?
Will it be as self-descriptive as the Democratic People’s Republic of [North] Korea?
At this Wise-Trashing Epoch, that is a huge competitive advantage.
This effort could collapse like a house of cards. [Which could be better, security-wise.] Document backups should be enforced from now on.
Well, I think Microsoft’s effort to enhance its platform reaches much deeper than most people realize.
In addition to UUP, Microsoft has also introduced its new servicing model for Windows.
To make it short… Until now, MS provided gazillions of updates for every single bit of code, meaning that the update manager sometimes has (had) hundreds of patches per month to manage.
Each company’s IT department often excluded some of these updates, for whatever reason they had.
– The good: it’s easy to exclude a single patch that could cause a compatibility problem.
– The bad: an almost infinite number of Windows variants exist out there (fragmentation is much worse than on Android), and it’s a real update-management nightmare.
Since October, MS has introduced its new servicing model for operating systems ONLY (no other app, not even Silverlight).
This new model consists mainly of two types of updates:
– Monthly security-only update (a single block that contains ALL security updates for the month)
– Monthly rollup (a single block that contains ALL security AND non-security updates for the month)
Each of these updates can only be installed as a block or removed as a block. It is no longer possible to add or remove a specific patch from the set.
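In practice each block is a single package with a single KB number, so removal really is all-or-nothing; from an elevated prompt it looks like this (the KB number is invented for the example):

    REM Uninstall the entire monthly rollup in one shot -- the individual
    REM fixes inside it cannot be removed separately
    wusa.exe /uninstall /kb:1234567 /quiet /norestart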
Where it becomes tricky is that the monthly rollup is cumulative: it contains all the updates from all preceding monthly rollups.
Where it becomes vicious is that, starting next year, the monthly rollup will integrate ALL updates since the inception of the product concerned (Windows 7 SP1 being the oldest affected)!
– The bad: if an update contained in the monthly security-only block causes an incompatibility, you can only remove the whole block, but you can continue to install the blocks from the following months.
– The worst: if you use monthly rollups and you hit a compatibility problem, you can remove the whole block, BUT YOU CAN’T INSTALL FUTURE ROLLUPS UNTIL THE COMPATIBILITY PROBLEM IS RESOLVED!
– The VERY good: even though the transition period will be difficult (especially in 2017), it will normalize all base installs and make it extremely simple to deploy new systems (base OS + one update only: the latest monthly rollup). This is quite a revolution in the IT sphere and will resolve a lot of issues that infrastructure teams face on a daily basis. It will also force software providers to update their software to stay compliant with current Windows baselines (GLORIA!!!)
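“Base OS + one update” deployments can be prepared offline with DISM, for example (the paths are illustrative):

    REM Mount an offline Windows image, inject the single latest rollup, commit
    Dism /Mount-Image /ImageFile:C:\images\install.wim /Index:1 /MountDir:C:\mount
    Dism /Image:C:\mount /Add-Package /PackagePath:C:\updates\latest-rollup.msu
    Dism /Unmount-Image /MountDir:C:\mount /Commit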
Take a look here for a better description (it’s an MS URL):
https://1drv.ms/b/s!AsSKRAC3eQiE0tQGMwwVfe6X5DZVwA
Goods and bads. Compromises. But it means MORE CONTROL for Microsoft. If they’re up to the task, they will not simply shrug.
MS, for the first time, will be handling a FINITE pack of beasts.
Interesting comment. Thanks Novad.
Could we handle a problematic update through virtualization?
Or is it MS’s aim to ‘expurgate’ BOTH non-compliant providers and consumers from the ecosystem?
That goes for Darknexus’s problems too. Could he run a virtualized ‘non-updated Windows’ [from a point in time where everything was just OK] inside his ‘up-to-today Windows’?
My bad… I should have read all your answers before posting the previous one.
Yes… You could perfectly well do that with Hyper-V, for example (natively included in any Windows 8 Pro or later).
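Enabling it takes one line in an elevated PowerShell (a reboot is required afterwards):

    # Turn on the built-in Hyper-V role on Pro editions of Windows 8 and later
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All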
You just have to know that, starting with Windows 10 (except for the LTSB version), you lose support if you don’t update for 3 to 6 months, depending on the version.
Really looking forward to this new model. For an already captured market, what matters a lot more is that THINGS THAT ALREADY WORKED for me [part of the user’s view of stability] JUST KEEP WORKING THAT WAY. I suspect virtualization is going to be very instrumental to this objective.
[Edge is not smart enough to offer me English tools, sorry about that.]
“It’s a good axe, this, my family’s axe, you know. Long ago we had to change the handle…” [paraphrasing].
People are willing to pay a decent amount of tax in exchange for not having to tinker with some issues ever again.
That’s quite correct. Some control will be given to MS… On the other hand, it’s only the base OS that gets this treatment, not applications.
There is no “store” that limits third-party applications.
It’s only my humble opinion, but I think it’s the right compromise: the base OS becomes a walled garden while the whole application stack remains free. (Docker also brings interesting things in this respect.)
It’s this part I’m not sure I understand… Handling updates through virtualization seems really difficult, except maybe with third-party products like “Mirage” (and I really don’t recommend that kind of workaround).
What you could do to work around a really incompatible application is package it with ThinApp or maybe Docker, for example.
Another alternative could be virtualized apps or RDS.