In Windows 10 version 2004, we are introducing the concept of Hosted Apps to the Windows App Model. Hosted apps are registered as independent apps on Windows, but require a host process in order to run. An example would be a script file which requires its host (e.g. PowerShell or Python) to be installed. By itself, it is just a file and does not have any way to appear as an app to Windows. With the Hosted App Model, an app can declare itself as a host, and packages can then declare a dependency upon that host; these are known as hosted apps. When a hosted app is launched, the host executable is launched with the identity of the hosted app package instead of its own. This allows the host to access the contents of the hosted app package, and when calling APIs it does so with the hosted app’s identity.
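In manifest terms the split is small. Here is a rough sketch of both sides, based on the Python sample from Microsoft’s announcement; the exact element names and attributes are approximate, and the publisher value is a placeholder:

    <!-- Host package (sketch): the runtime declares itself as a host
         and exposes an Id for hosted apps to reference. -->
    <uap10:Extension Category="windows.hostRuntime"
                     Executable="PyScriptEngine\PyScriptEngine.exe"
                     uap10:RuntimeBehavior="packagedClassicApp"
                     uap10:TrustLevel="mediumIL">
      <uap10:HostRuntime Id="PythonHost" />
    </uap10:Extension>

    <!-- Hosted app package (sketch): a script-only package that depends on
         the host and points its Application entry at the host's Id. -->
    <Dependencies>
      <uap10:HostRuntimeDependency Name="PyScriptEngine"
                                   Publisher="CN=Contoso" MinVersion="1.0.0.0" />
    </Dependencies>
    <Application Id="NumberGuesserApp"
                 uap10:HostId="PythonHost"
                 uap10:Parameters="-Script &quot;NumberGuesser.py&quot;" />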
This seems like something that could be useful for progressive web apps, and maybe even Electron apps by making them use Edge Chromium’s rendering engine instead of having every Electron application use its own copy of Chromium, which could benefit performance and battery life.
“instead of having every Electron application use its own *outdated* copy of Chromium”
I suspect this is the primary use case here, since Java and Python have had “bundle the interpreter straight into the EXE” options on Windows for ages.
Although it is a nice option at first sight, I can’t help but think that the core of the problem is not even mentioned in the linked blog post: version compatibility and dependency resolution.
Let’s see how different ‘runtime’ technologies tackle this problem:
– java: after years of pain and suffering, the common delivery model shifted to having each app install its own JRE…
– python: create a ‘virtualenv’ for each installed app. Works only so-so in many scenarios (e.g. when you want to run, from the same command line, two python apps that require two different ‘virtualenvs’)
– php: using Composer, install required php libraries locally as part of the app; does not really tackle having multiple versions of the runtime itself (php generally does a better job at backwards compatibility than other runtimes, though)
– npm, ruby, etc…
It seems that none of those relies on an OS-provided ‘install-and-dep-resolve’ infrastructure.
Given the fact that Linux distributions have been providing said infrastructure since forever, and none of these language runtimes adopted it (in fact they _moved away_ from it), the question is: how/why would Windows be able to do better?
gggeek,
I’d agree those are two of the bigger problems.
PHP has had a terrible track record for updates. Be it 4->5, 5->6, 6->7, or 7->7.1, they’ve all caused breaking changes in our production environments. Sometimes it’s just some minor changes, but when you’ve got a project with hundreds of thousands of lines of code, and combine this with the fact that you can’t always detect problems using “php -l”, it becomes an issue. For all their faults, languages like C and even perl don’t cause breakages the way PHP has; code from decades ago has a realistic shot at still working today if you have all the dependencies. I simply can’t recommend PHP for long term stability.
Linux has had dependency resolution for a long time, which makes it trivial to install software from repos. However, it only works well for things that are in the repos. Once you start building software (whether your own code or code from an open source project) you can encounter the same “DLL Hell” that can plague Windows software. As a developer, you usually want to use the latest code available from the official project, but very often that’s newer than what your repos carry, and you’re forced to resort to manually fixing the dependencies. To this day I absolutely hate working with libav/ffmpeg codebases because they rely on so many dependencies and they’re constantly breaking between versions. I prefer the approach of projects like SDL, which do a much better job of providing long term stability, so code will build/run successfully across a wide range of library versions.
The problem for platforms like Windows and Linux is that they are expected to solve these problems when the fault technically lies elsewhere. Without some sort of metadata/human intervention, the OS simply has no way to know which dependencies will work. Software that follows good practices should work fine with the latest dependencies, but assuming this to be the case will cause breakages with some software. The only way an OS can 100% guarantee it will work is by bundling all programs with the exact dependencies they were developed & built against. Yet if we apply this as policy then it compromises the ability to update libraries. For instance, you may need to update OpenSSL for security purposes, yet that would violate this policy.
So the OS is stuck: it NEEDS to be told which dependencies it can update. For better or worse, though, this punts the problem back to software developers. The OS can provide a framework/infrastructure for developers to use in specifying dependencies, however its effectiveness becomes inherently limited by the quality of input provided by 3rd parties, some of whom continue to be negligent when it comes to compatibility.
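To make that concrete: in Microsoft’s packaged-app world, that developer-supplied metadata is little more than a dependency element in the app manifest, something along these lines (the package name and versions here are illustrative):

    <!-- Sketch: an app stating which framework package it needs. The OS will
         resolve anything >= MinVersion, trusting the developer's claim that
         newer versions remain compatible. -->
    <Dependencies>
      <PackageDependency Name="Microsoft.VCLibs.140.00"
                         MinVersion="14.0.24605.0"
                         Publisher="CN=Microsoft Corporation, O=Microsoft Corporation, L=Redmond, S=Washington, C=US" />
    </Dependencies>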
With repos, you have a custodian that makes these decisions on behalf of all the software contained in their centralized repo. This works fairly well, but requires high centralization & coordination. It fails when you start building code independently (or using multiple repos, etc).
Yep. It’s not a bad idea, but the key will be in how all this plays out in the details.
To use a concrete example.
In their example for a ‘script’ they declare their dependency as: uap10:HostId="PythonHost"
Seems straightforward enough. Now of course the real guts will be in what conventions emerge.
For example, will the Python community use separate HostIds, like "PythonHost2", "PythonHost3"…
And that’s just for major versions.
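If per-major-version hosts do become the convention, I’d imagine it looking roughly like this (hypothetical Ids, extrapolated from the fragment above):

    <!-- Hypothetical: each major Python runtime ships as its own host package,
         each exposing a differently-named HostRuntime Id. -->
    <uap10:HostRuntime Id="PythonHost2" />  <!-- inside the Python 2.x host -->
    <uap10:HostRuntime Id="PythonHost3" />  <!-- inside the Python 3.x host -->

    <!-- A hosted app then pins itself to exactly one of them: -->
    <Application Id="MyScriptApp" uap10:HostId="PythonHost3" />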
We’re not even getting into the question of other dependencies. Do they start throwing in package dependencies which then get passed to the host, so the host can set everything up nicely? Sounds like crazy complexity.
As most people have said, the default these days seems to be to ship dependencies with the application. It’s just how it goes.
I know a lot of people have their issues with them, but I’ve found working with Flatpak and Snap to be a pleasure compared to anything else, and everything makes a lot of sense. I especially like that they have ‘packaged dependencies’ for the big common things like GTK, KDE… so you don’t end up with copies of all of those. Yes, there are pain points, like Snap’s forced auto-update. But the general idea is probably the best approach to package management I’ve experienced.
I’d honestly rather have seen Microsoft work on something like Flatpak or Snap, with good support for various Python versions, Java…
One solution – and I don’t know if this is what they’re doing – would be to use versioning in combination with a single instance model. For example, if two apps require Electron vX.Y, they could share the same host. But if a third app required Electron vX.Z, that would pull down another host. Ideally, each app would move forward at their own pace, and the number of runtimes would be minimized – you could imagine having some sort of “scavenger” service that periodically cleans up old runtimes.
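In manifest terms that could be as simple as version-suffixed host Ids (entirely hypothetical, since no Electron host package exists today):

    <!-- Hypothetical: two apps share one Electron vX.Y host package... -->
    <Application Id="AppOne" uap10:HostId="ElectronHost-X.Y" />
    <Application Id="AppTwo" uap10:HostId="ElectronHost-X.Y" />

    <!-- ...while a third declaring vX.Z pulls down a second host package. -->
    <Application Id="AppThree" uap10:HostId="ElectronHost-X.Z" />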
I also believe that UWP apps run within containers, so apps can make conflicting changes without stepping on each other’s toes.
One of the things I don’t understand is how this fits in the context of what they’ve already announced. When they described the architecture of Windows 10X, for example, they talked about some of the optimization strategies they were employing. If two apps depend on the same version of the same library, each will get a hard link to the same file on disk. If one decides to upgrade the library, its link will be broken and a new file will be saved. This should mean each version of each app file should only be saved on the system once. Again, everyone moves forward at their own pace.
So if that’s the case, what’s the advantage here? It could be just another optimization strategy. Microsoft will certainly bundle newer versions of .NET, for example, but with this approach they won’t be obligated to keep every version of the runtime on their base image, and app developers won’t feel they need to bundle the runtime in their app package. With Edge, it looks like they’re using this to help make PWAs feel more like native apps.
I’d need to review the documentation to know for sure, but that’s my guess. I’m less familiar with how Linux package managers handle this, so I wouldn’t say Microsoft’s approach is better, but I don’t think it’s a bad thing for them to add this into their platform.
I don’t understand what problem other commenters have understanding this packaging concept. It will be trivial to package multiple Python or Java or PHP versions as separate “hosts” and for a hosted app to declare a dependency on the specific host package (i.e. platform version). I also expect that it will be just as trivial for an app to bundle in its library dependencies, which this host/hostee model does not seem to affect at all.
With regards to Electron: you just don’t understand what Electron is. It isn’t a web app that runs independently of the browser part; it is a direct enhancement of the browser code itself. There is no standalone “Chromium” that could be packaged separately for an Electron shell to execute on top of.
Similarly, different Electron-based apps cannot just swap their Electron engine for a newer one and keep running. Different Electron apps seem to ship different versions of Electron as well: the current version of Visual Studio Code runs on top of Electron 7, whereas Atom seems to lag behind, stuck on Electron 4.
Also, I don’t understand what benefit you were expecting from re-using a Chromium base for multiple Electron apps. The only gains I see possible are minor: 1) using less disk space, and 2) using less RAM by not loading multiple copies of essentially the same DLLs.
sj87,
It’s not clear who you are talking about here. Yes, you can make a bundle with whatever dependencies you want, but it’s always been possible to run your software on whatever version of PHP/Python/Java you needed. I’ve actually needed to do this at some point with all of these languages. For example, Oracle broke lots of Java applications that Sun Microsystems had supported, my remote KVM being one of them. There’s a lot of Python 2.7 software that is incompatible with Python 3, which is why both versions frequently need to be installed. I already mentioned PHP.
None of this is new, and Microsoft’s packaging concept isn’t entirely new either. It’s essentially metadata about the requirements of the package. In an ideal world it wouldn’t be a problem, provided the metadata were properly maintained; in practice, however, it isn’t feasible to determine dependencies automatically, and it requires input from human developers/administrators.
Take a look at Microsoft’s example of “Declaring a Hosted App”. It depends on PyScriptEngine with a minimum version of 1.0.0.0. Well that’s great, but when the next version rolls around we don’t know for a fact whether it is compatible. It may be compatible, but strictly speaking that’s an assumption that Microsoft’s hosted app model will have to make. Sure, the “metadata” says it is compatible, but that doesn’t imply that it is (and as pointed out earlier, sometimes it really isn’t). You can say it’s not the operating system’s fault when it doesn’t work, that the 3rd party developers are always at fault for poor metadata and/or breaking compatibility, and I’ll agree with you, but the problem is still there, and there may not be anyone to fix it in the long term if these packages become unmaintained and the companies responsible fall off a cliff. Conversely, the package may force the use of older, compromised dependencies even when new versions are compatible. Again, it is not the operating system’s fault, but we need to recognize that these dependency systems rely on human input to provide and maintain quality metadata.
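For reference, the dependency in that sample boils down to a single MinVersion claim (a sketch; the exact attribute set is approximate and the publisher value is a placeholder):

    <!-- The hosted app only states a floor. If PyScriptEngine 2.0.0.0 ships
         with breaking changes, nothing in this metadata stops Windows from
         resolving against it anyway. -->
    <uap10:HostRuntimeDependency Name="PyScriptEngine"
                                 Publisher="CN=Contoso"
                                 MinVersion="1.0.0.0" />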
On an end user system, it probably is not a big deal – potentially hundreds of megs of RAM wasted running different versions of the same software, “oh well”. But in a multiuser environment with numerous runtimes running side by side, this could make the difference between running smoothly and constant disk thrashing trying to accommodate too many different versions at the same time. Even a small Python/PHP/Java/.NET whatever website could impose a large burden in the form of unshared dependencies. I’m not sure if these packages are intended to be run on a server like this, but I just wanted to point out an example of where it could be a problem.