It’s not a secret that I’ve been working on sandboxed desktop applications recently. In fact, I recently gave a talk at devconf.cz about it. However, up until now I’ve mainly been focusing on the bundling and deployment aspects of the problem. I’ve been running applications in their own environment, but having pretty open access to the system.
Now that the basics are working it’s time to start looking at how to create a real sandbox. This is going to require a lot of changes to the Linux stack. For instance, we have to use Wayland instead of X11, because X11 is impossible to secure. We also need to use kdbus to allow desktop integration that is properly filtered at the kernel level.
Excellent reasoning: “Yeah, we cannot provide true API/ABI compatibility even among modern distros ostensibly compiled from the same code, so let’s bundle a full set of libraries with every tiny application, so that we finally have the real compatibility every ISV and user has been dreaming of.”
This is why Linux is so f’ed up: https://bitly.com/gBOiz6
Being in a sandbox doesn’t mean you can’t have access to shared libraries. You can grant read-only access to a directory and include it in the sandbox configuration.
Besides, most of the time when setting up a sandbox you’d create a directory tree unique to the application and then create links from these phantom directories back to the real ones. These would grant read-only access to the linked files.
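A minimal sketch of that phantom-directory idea (the app name and paths are made up for illustration; note that plain symlinks only limit what the app can see — a real sandbox would use read-only bind mounts to actually enforce read-only access):

```shell
# Build a per-app tree and link selected system directories into it.
APP=/tmp/sandbox-demo/myapp         # hypothetical app root
mkdir -p "$APP/usr"
ln -sfn /usr/lib   "$APP/usr/lib"   # expose shared libraries
ln -sfn /usr/share "$APP/usr/share" # expose common read-only data
ls -l "$APP/usr"
```

The sandbox runtime would then chroot (or bind-mount) into `$APP`, so the application sees only what was explicitly linked in.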
This, 100 times this. And before the Linux people come back with some ill-reasoned argument about how this is what Windows and Mac software developers do: they only bundle external libraries. System libraries are guaranteed to be there and can be counted on and if, for some reason, they’ve been corrupted or are absent then you know there’s a problem right away.
Well, in this approach there is a common “runtime” too. For example “Gnome 3.14” runtime – which guarantees a certain set of libraries. The app can just declare which runtime it needs and the necessary libraries are then linked into the sandbox without the need for the app to bundle them.
I know Gnome is one of the better projects when it comes to backwards compatibility, but what if the developer develops and tests his project against Gnome 3.14 and the user has installed Gnome 3.16 (or whatever)? Do you suggest the application install should force replacing 3.16 with 3.14?
Also, you seem to suggest that Gnome 3.14 on one distro is the same as Gnome 3.14 on every other distro, and that is not the case(!).
The main problem with Linux is not backwards compatibility, but ironically “forwards” compatibility. If you require an older version of a library, it’s relatively trivial to fetch that version from the repos on some distros – something that tends to be a no-go on either Windows or OS X. Things get trickier if you require a newer version of the lib than the one your distro shipped with.
A runtime (in this context) is not what you have installed on the system. As I said a runtime like “Gnome 3.14” for example is a set of common libraries and other resources shared by Gnome 3.14 applications (gtk+3, glib,…). The runtime is signed and downloaded from a repository – it doesn’t use what is provided in the system. So yes, currently you would have duplicated Gnome 3.14 libraries (if you ran Gnome 3.14 at the same time) but in the future this would not be a problem anymore when all Gnome apps become sandboxed.
A quick demonstration of this was at the conference just recently here: https://www.youtube.com/watch?v=t-2a_XYJPEY
Linux doesn’t HAVE a distinction between “system libraries” and other libraries at the moment. It’s either in LD_LIBRARY_PATH or it’s not.
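True – glibc’s loader just walks a fixed search order (DT_RPATH/DT_RUNPATH, then LD_LIBRARY_PATH, then /etc/ld.so.cache, then /lib and /usr/lib) with no notion of “system” status. You can see what it resolves for any binary with ldd:

```shell
# List every shared library the dynamic linker would map for /bin/sh,
# along with the path each soname resolved to.
ldd /bin/sh
```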
One of the worst ideas from Apple (and MS-DOS) is now infecting Linux. Great.
Hardly. On Apple systems only external libraries are bundled, and only when needed. You can count on system libraries. On Linux you can’t count on even that. I can’t count the number of times I got “error while loading shared libraries: libasound.so.1” because the particular distro shipped libasound.so.1.1 instead – and that’s a core library. You don’t encounter this on Apple or Microsoft operating systems.
Mmm, you obviously don’t know how to use a Linux system and are trying to use it as if it were Windows.
No, he’s right.
I’m sufficiently skilled to do things properly and I still have .debs I had to manually install from previous distro releases because some closed-source game trusted the system to provide some reasonably core library like one of the libicu*.so.48 family and now it only provides a newer one like libicu*.so.52.
Which ones, specifically?
Conversely, you can get burned the other way as well.
If you install Steam on Linux, it bundles a bunch of system libraries to help emulate a particular version of Ubuntu, since that’s the development target. However, if you want to run Steam on Fedora, you actually need to delete most of those bundled libraries, because they cause conflicts with system ones, breaking things like direct rendering…
That’s Fedora packaging Steam wrong, then. Debian and Arch don’t have conflicts between Steam Runtime libraries and package-managed libraries.
It’s not Fedora doing the packaging… it’s a third-party repo. Here’s a link on the subject…
https://ask.fedoraproject.org/en/question/55934/steam-on-fedora-21-o…
But from what I gather, the conflict is that Steam is trying to override major system libraries (libgcc, libstdc++) with older versions – and messing around with the library path in this manner breaks the loading of various OpenGL driver libraries. The “official” fix is to blow away the unwanted older libraries, because they’re doing more harm than good.
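That fix boils down to deleting the bundled copies so the loader falls back to the system versions. A sketch, demonstrated here on a throwaway directory rather than a real Steam install (on a real system the runtime typically lives under ~/.local/share/Steam/ubuntu12_32/steam-runtime – verify paths before deleting anything):

```shell
RT=/tmp/steam-runtime-demo
mkdir -p "$RT"
# Stand-ins for a bundled runtime: two toolchain libs plus one other lib.
touch "$RT/libstdc++.so.6" "$RT/libgcc_s.so.1" "$RT/libSDL2.so"
# Remove only the toolchain libraries that shadow the system versions.
find "$RT" \( -name 'libstdc++.so*' -o -name 'libgcc_s.so*' \) -delete
ls "$RT"   # only libSDL2.so should remain
```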
The sad part is that this is not a Linux issue, but a package manager issue.
The .so.1 and .so.1.1 suffixes are enough for the dynamic linker to figure out that you have different libs installed (unless one is symlinked to the other).
But most package managers out there balk at having lib-1.0 and lib-1.1 installed side by side. Not because it can’t be done on the file system, but because the tracking database is limited to one version per package.
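The filesystem side of this really is trivial; only the package database is the bottleneck. A sketch with a made-up library name, showing two minor versions coexisting in one directory while the soname symlink picks the active one:

```shell
D=/tmp/sodemo
mkdir -p "$D"
# Two minor versions installed side by side (empty stand-in files).
touch "$D/libfoo.so.1.0.0" "$D/libfoo.so.1.1.0"
# The soname link decides which real file the loader gets.
ln -sfn libfoo.so.1.1.0 "$D/libfoo.so.1"
readlink "$D/libfoo.so.1"   # -> libfoo.so.1.1.0
```

Swapping the symlink back to libfoo.so.1.0.0 would "downgrade" without touching the other file – exactly what most package databases can’t express.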
All in all this push towards sandboxing is another case of the Gnome/systemd/Fedora furball “fixing” a “problem” by nuke from orbit.
If Chrome OS classifies as a desktop OS, can native executable binaries running offline inside Chrome (or in their own Chrome window) be classified as sandboxed desktop applications?
If so, then these types of Chrome OS applications are the first sandboxed Linux apps…
I’ve really missed that, especially for proprietary apps like Skype. Hopefully it will become usable soon!
For quite some time I was running Skype and Adobe Reader in a 32-bit root jail under a 64-bit Ubuntu (using schroot to manage the root jails). It is a crude form of sandboxing where the sandbox contains an entire distribution. It was working fine, but then I gave up because of the simplicity of simply installing both via apt-get from the partner repo – that works for the LTS releases, at least.
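For reference, an schroot jail like that is driven by a short definition file; a hedged sketch (the name, path, and user are made up for illustration):

```ini
; /etc/schroot/chroot.d/skype32 -- illustrative schroot definition
; for a 32-bit directory chroot on a 64-bit host.
[skype32]
description=32-bit jail for Skype
type=directory
directory=/srv/chroot/skype32
personality=linux32
users=alice
```

With that in place, `schroot -c skype32` drops you into the jail, and the 32-bit distribution installed under /srv/chroot/skype32 supplies all the libraries.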
Skype is probably not a great example.
I predict the Skype client will be dead in a couple of years. Because there is already a replacement technology.
Which is WebRTC. Which gets you everything Skype does – webcam video, audio, data, file, screen, and application sharing – in the browser.
And it’s properly peer-to-peer and encrypted again (which Skype and FaceTime no longer are!).
Microsoft is one of the companies working on the standards for this technology. Others are: Ericsson, Cisco, Mozilla, Google, to name some of the big players.
Skype, like WOW, Facebook, and a bunch of other services that touch on the social element, will remain because of social gravity.
Unless you can get the mass of Aunt Tillies to move wholesale to WebRTC, Skype will remain the go-to service.
Never mind that a big deal with Skype from day one was the interfacing to local POTS networks. It basically eliminated international call charges even if just one side used the service.
Also, MS has already concocted their own take on WebRTC, and is pushing it via a Skype web “beta test”.
Their own take on WebRTC meant a different browser API.
The proposal from Microsoft didn’t fly. No other vendor supported them.
But a number of industry players agreed a better API was a good idea. So they started a workgroup and later on companies like Microsoft and Google joined that workgroup.
And now the new API will be part of the next iteration of the WebRTC standard: https://www.w3.org/community/ortc/
So what Microsoft will use is just ‘plain WebRTC’ just like everyone else. The protocol they use is the same protocol. And they are compatible with everyone else. You’ll just have a legacy and a newer API in browsers for a while.
There are also people in the workgroup working on a JavaScript library that emulates the new API in browsers which only support the old one.
Microsoft is going to combine all their offerings under the Skype name: Skype, Lync, and something else I forget at the moment.
So I don’t see a big problem.
The real question is whether any of these networks will interact with each other. I doubt it. But having these protocols and APIs available for free means there will be other players.
It’s like GoogleTalk and Skype both supporting XMPP, but not talking to each other.