Earlier this week, we reported on the Berlin Packaging API, an effort to consolidate the various packaging formats and managers in the Linux world. Many compared this new effort to PackageKit, and today Linux.com is running an article detailing what exactly PackageKit is, with a few quotes from the project’s lead developer, Richard Hughes.
“PackageKit is a glue layer between the distro-specific parts, and some prettiness,” Hughes explains. PackageKit is effectively a universal front-end to the various distribution-native packaging systems, such as dpkg and rpm, leaving them intact. As is often the case, the tool started when Hughes asked himself a fairly simple question: “Can you make a system so that you can still use the existing tools and put a bit of cleverness on top?”
PackageKit only runs when it’s called, and communicates with the native packaging systems using the libpackagekit library. As Hughes explains:
PackageKit has no idea what it’s doing. It just takes the standard output of dbus signals or whatever, and then puts it out through the common API. So far as PackageKit is concerned, it just says do this, and the back end does it and gives the information about what happened back to PackageKit.
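To make the “glue layer” idea concrete, here is a rough sketch in Python. It only assumes that PackageKit’s pkcon command-line client is installed; the same two calls work whether the daemon is driving yum, apt, conary, or another back end, which is exactly the point of the common API. The package name is just an example.

#!/usr/bin/env python3
# Minimal sketch of PackageKit as a common front-end: pkcon talks to the
# PackageKit daemon over D-Bus, and the daemon asks the distro-native
# back end (yum, apt, conary, ...) to do the actual work.
import subprocess
import sys

def install(package):
    # Same call on every distribution that ships PackageKit.
    return subprocess.call(["pkcon", "install", package])

def list_updates():
    return subprocess.call(["pkcon", "get-updates"])

if __name__ == "__main__":
    # "zsh" is only an example package name.
    sys.exit(install(sys.argv[1] if len(sys.argv) > 1 else "zsh"))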
Again, I have my reservations. Just as with the Berlin Packaging API proposals and concepts, PackageKit simply doesn’t address the bigger issue at hand: software installation on just about any operating system is inconsistent, messy, overly complicated, and anything but transparent. Whether we’re talking about the ten billion different types of installers in Windows, the insanely complex (for less computer-literate users!) Synaptic, or yes, even PackageKit, these tools are extremely alien and incomprehensible to less computer-savvy people. Mac OS X does it better, but it lacks a whole boatload of other features that any self-respecting operating system should have, such as a system-wide application updater or a proper way of uninstalling more complex applications.
In order to fix these deeply rooted issues, we need to radically change the way we approach installing, managing, and removing applications. All present-day paradigms have their strengths, but also a whole slew of limitations. That’s why I came up with the Utopian Package System, which combines all the strong points of the various paradigms.
Let me see if I’ve got this right: PackageKit does absolutely nothing, and despite doing nothing it still treats the most (or second most) used package manager, apt, as a second-class citizen.
How exactly does that help anything?
As far as I understand it, PackageKit simply adds a new layer on top of the system’s native package manager (for example, apt), so the installation process can be controlled without having to know exactly what has to be done underneath.
Personally, I would prefer one authoritative standard instead of another layer that may bring bloat and slow abstraction to where it shouldn’t be. That should be possible, basically.
The problem is, you can’t control all the places KDE and GNOME get deployed, namely Gentoo, FreeBSD, Solaris, and even Windows. PackageKit at least provides some hope that a unified UI can be used. That being said, it seems to be inferior to Synaptic or gnome-app-install on Ubuntu, so it’ll have to mature a bit before it can be a true replacement.
Not really, though. I mean, it’s not like anyone was stopping people from porting the GUI portion of Synaptic or yum to different package managers, or, heck, porting package managers to different package formats (see apt-rpm; PCLinuxOS is RPM-based and uses Synaptic).
It’s still only unified if everyone ports PackageKit to their package manager (pacman, portage, ports, MSI, pkg, etc.) AND sticks to the “official” PackageKit UI, which, since it doesn’t support debconf or repository management, and I suspect ebuilds are right out, seems unlikely.
Porting an existing GUI is a mess in many cases; packagekit.org has a presentation detailing the problems. Also, there is nothing stopping people from running the PackageKit GUI and having their native package manager GUI installed in parallel for more advanced tasks. That is the approach used by many distributions. Note that ebuilds are becoming a supported backend via a Google SoC project:
http://www.gentoo.org/proj/en/userrel/soc/
Fedora and Foresight Linux already include PackageKit by default, and nearly all the mainstream distributions are going to do the same by their next versions.
Should installing extra packages be transparent?
I would like to think this is something that the user should have to give some thought to.
Even more so for installing unsigned packages, adding new repositories, etc.
Well, PackageKit is indeed cool, and I have great respect for Richard’s work. He did this in a very short time. For example, if you want to open an OpenOffice file and you don’t have OpenOffice installed, PackageKit can offer to install the required program.
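To illustrate that feature, here is a rough sketch that asks PackageKit to find something capable of opening an OpenDocument text file. It assumes the session-installer D-Bus interface (org.freedesktop.PackageKit.Modify) and the dbus-python bindings; whether that interface is available, and its exact signature, depends on your PackageKit/gnome-packagekit version, so treat the names below as assumptions rather than a definitive recipe.

#!/usr/bin/env python3
# Sketch of the "open a file type you have no application for" feature.
# Requires the dbus-python bindings; the interface and method names are
# assumptions that depend on the installed PackageKit version.
import dbus

bus = dbus.SessionBus()
proxy = bus.get_object("org.freedesktop.PackageKit",
                       "/org/freedesktop/PackageKit")
modify = dbus.Interface(proxy, "org.freedesktop.PackageKit.Modify")

# Ask PackageKit to find, and offer to install, something that can open
# an OpenDocument text file (xid 0 = no parent window; the last argument
# is a comma-separated list of UI interaction hints).
modify.InstallMimeTypes(dbus.UInt32(0),
                        ["application/vnd.oasis.opendocument.text"],
                        "hide-finished")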
You’re hitting the nail on the head here: PackageKit has a lot of cool features that aren’t mentioned in the article summary. PackageKit has particular benefits and particular problems it tries to solve, so it’ll fall short if judged against a different set of goals…
IIRC there are other (available / planned, I don’t know) features, such as the ability to give different users different rights with respect to doing software updates, installing new software, removing software, etc… So you can give different users more fine-grained privileges on the system software than the current “Can do anything” / “Can do nothing” setup many distros have!
I, again, can’t understand the constant fetish with packaging systems. There is an overall standard, it’s in use by everyone, and hence you can take source code written and compiled on, say, Ubuntu and build it on Fedora, or Gentoo, or even FreeBSD, and to some extent on Windows/Mac/Solaris/etc.
The standards are in the programming languages, their implementations (e.g. gcc for C/C++, Perl, Python, the JVM, and others), and make for the install. Whatever distro creator X wants to do with that is his/her choice; package maintainers for X should not concern themselves with how packages are being handled in Y. It is sufficient that Y users can take the source and repackage it themselves.
The issue is not with GNU/Linux, or with Debian/APT, or with Ubuntu, or Fedora/SUSE/Mandrake/RPM, or with Slackware, or with Gentoo. The issue is with user A complaining that his friend, user B, has “this cool app running on his Linux and I can’t find it in Synaptic/yum/whatever, and why can’t it be like Windows where everyone can run the same app, etc. etc.”
A distro isn’t a “version” or a “flavour” of Linux any more than Mac OS X is a version of BSD Unix. A GNU/Linux distro is an OS. Ubuntu is an OS, Fedora is an OS, Gentoo is an OS, and so on. Microsoft delivers the OS, but doesn’t make sure third-party apps install well, play well, or don’t do strange hijinks to your computer. GNU/Linux package maintainers do. Microsoft doesn’t make sure software that ran on SP1 also runs on SP2. GNU/Linux package maintainers do. The whole packaging system, from the packaging methods and formats to the distribution and maintenance process, is far more powerful and empowering for a user; take, for instance, the http://getdeb.net project, which offers fully compatible Ubuntu packages for those who want the latest and greatest software and are dissatisfied with the official repository.
Bottom line? There is no packaging issue. There is the issue of Linux pundits referring to GNU/Linux as if it were a homogeneous OS such as Mac OS X or Microsoft Windows, for marketing or similar reasons, when it isn’t: it’s a big, wide range of OSes based on the same tools and standards and on a UNIX-compatible kernel. Beyond that, it’s a matter of diverging philosophies, ideals, concepts, designs, and opinions. Those are hard to market, but pointing the finger at the diversity of GNU/Linux and FreeBSD as the main issue is false.
You’re missing the point. It doesn’t matter whether Ubuntu/Fedora/SuSE/etc are different operating systems altogether:
1. End users don’t expect them to be completely different operating systems. They expect them to be just variations, like Windows XP vs Windows 98 vs Windows 2003. The media has taught people over the years that different distros are just different variations.
2. You can go against end users’ expectations, but that just means they’ll get pissed off and move to, say, MacOS X. And voila: even fewer users for Linux, and therefore fewer potential developers who can help with the development of Linux and Linux-related software, less incentive for hardware manufacturers to provide drivers or hardware specs for Linux, etc.
3. For me, as a software developer, it’s just annoying as **** to not be able to provide distro-neutral packages. Why should my end users have to wait 2 months before my latest version, with important new bug fixes and features, is packaged into their distro? Why can’t I provide my own package that works on most Linux distros? Why shouldn’t I? There are technical problems surrounding the creation of cross-distribution binary packages, but the only reason those technical problems exist is politics and culture.
You say that all those distros are different OSes altogether. Why should they be different OSes? Why can’t they just be different, compatible, flavors of the same OS? There’s nothing technical that prevents distributions from becoming compatible. There are benefits to increased interoperability.
Of course there are. The distro makers do not wake up one day and decide to Think Different (TM) and break compatibility. Newer versions of software come along, and different technical solutions get applied to real problems. All of these can cause inconsistency. Add up many different packages and issues and you get to the state we are at now.
As well as disadvantages: bigger “jumps” in feature sets as distros align themselves rather than introducing incompatibilities in a natural fashion, and upstreams become stale as their latest and greatest are not picked up for a long time after release… potentially hiding quality issues for a year or two.
Even if there is perfect alignment, Packagekit will still not allow you to make universal binaries, just universal install methods – probably of little use to upstream developers.
@ whoever suggested that source is the universal install “package”: just write a source “backend” for PackageKit. Click on source file(s), click install. People might find it useful. (I do not expect it to be that simple… you can’t ignore the many install options that packages have, but I guess good defaults can be chosen for at least some of them.)
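To sketch what such a source backend might look like: PackageKit’s spawned backends are helper scripts executed per request that report status back on stdout. The keyword lines emitted below (percentage, error, finished) are illustrative only, not the exact spawned-backend protocol, and the build steps are just the usual autotools defaults.

#!/usr/bin/env python3
# Hypothetical "source tarball" backend, loosely following the shape of
# PackageKit's spawned backends (one helper process per request, status
# reported line-by-line on stdout). Output keywords are illustrative.
import os
import subprocess
import sys
import tarfile
import tempfile

def emit(*fields):
    # A spawned backend reports to the daemon on stdout.
    print("\t".join(fields))
    sys.stdout.flush()

def install_source(tarball):
    workdir = tempfile.mkdtemp()
    emit("percentage", "10")
    tarfile.open(tarball).extractall(workdir)
    srcdir = os.path.join(workdir, os.listdir(workdir)[0])
    for step, pct in (("./configure", "40"), ("make", "80"),
                      ("make install", "100")):
        emit("percentage", pct)
        if subprocess.call(step.split(), cwd=srcdir) != 0:
            emit("error", "build-failed", step)
            return 1
    emit("finished")
    return 0

if __name__ == "__main__":
    sys.exit(install_source(sys.argv[1]))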
That’s not a technical problem, that’s a social problem. The reason distros are incompatible is that they aren’t trying to be compatible. If they try, then they’ll succeed; hence, a social-political problem.
Oh really? We have these standards called “HTML 4” and “CSS”. Did standardization of the web hurt innovation? Are Firefox, Opera, WebKit and the like not innovating because they’re compatible? *
Why can’t distros innovate *and* be compatible? They can just shift the compatibility layer to the background, where it’s not user-visible but works, while still doing awesome stuff in the foreground. Being compatible with the i386 didn’t prevent Intel and AMD from innovating either.
(* Today, in 2008, all browsers with the exception of Internet Explorer are mostly compatible. If I write an HTML 4 and CSS compliant website, then 95 out of 100 times, it renders the same in Firefox, Opera, Konqueror and Safari. Only Internet Explorer is constantly giving me problems, but my point that web standardization didn’t hurt innovation still remains.)
When distros have the same goal and there is no technical difficulty, I tend to agree, but not all distros can be compatible. Do you want to install an MSI package on LFS? Or do you want RPM on Gentoo? What is the point of LFS or Gentoo if you install RPM on them? And what about GoboLinux? Or Puppy? Puppy has no System V init. That’s the point of Puppy (among others)! So what if you install an openssh RPM on Puppy and it doesn’t start when you turn on the computer? This is a technical problem. If you force compatibility on GoboLinux or Puppy or LFS, you remove the point of those distros, and beyond that, you remove innovation. Look at Windows and its compatibility with DOS! Don’t tell me that DOS doesn’t suck!
When two Linux distros have more than 20% market share each, then we’ll talk about compatibility between them. Until then, it’s all about innovation and reaching the critical mass where compatibility makes sense.
If you are a developer, you should consider using the GNU build system with autotools. Write a script to package your software with checkinstall, launch that script at each release, and the packages for the distros you support are created automatically, at the same time as the tar.bz2 tarball. If the packages are automagically created with this script at each release, your users won’t have to wait 2 months; they just have to download the right package you provided for their distro. Have fun.
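A rough sketch of such a release script, under the assumption that checkinstall’s -D/-R/-S flags (deb, rpm, Slackware tgz), --install=no, and --default behave as its manual describes; the project name and version are hypothetical, so adjust them and verify the flags against your checkinstall version.

#!/usr/bin/env python3
# Sketch of a release script: build once with autotools, then let
# checkinstall wrap "make install" into a .deb, an .rpm and a Slackware
# .tgz. Typically run as root. Flag names are assumptions to verify.
import subprocess

NAME, VERSION = "myapp", "1.2.3"   # hypothetical project name/version

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

run(["./configure", "--prefix=/usr"])
run(["make"])

for flag in ("-D", "-R", "-S"):            # deb, rpm, tgz
    run(["checkinstall", flag, "--default", "--install=no",
         "--pkgname=" + NAME, "--pkgversion=" + VERSION,
         "make", "install"])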
How can you possibly expect people like my mom to install stuff via source tarballs and checkinstall? She doesn’t even know what a terminal is.
Is your mom a developer?
You didn’t get what I wrote. checkinstall is a script to make rpm, tgz, and deb packages. It’s the developer who uses checkinstall when he wants to release his software to many people. Your mom doesn’t use checkinstall. Your mom clicks on the rpm and it installs.
Yeah, sure. But the problem is I have to create a different package for different distributions, even if they use the same packaging format. Debian and Ubuntu are probably compatible, but RedHat/Fedora and SuSE most definitely are not. Mandriva requires another separately built RPM. Slackware uses tgz, and people tell me it doesn’t even support dependency handling. Gentoo uses yet another system.
On top of that, different versions of the same distro might not be compatible either. A package built on a newer version of a distribution doesn’t always work on an older version, requiring me to use an old version via a VM. Packages built for older versions don’t always work or don’t work out-of-the-box on newer versions because of missing libraries or other issues.
Do you see where I’m going? If I am to distribute binary packages then this can quickly become a tedious and boring task. Why should people not try to make things compatible so that it’s possible to build a single package? Why should I spend 50% of my time building packages for 10 distros and versions, instead of utilizing that time to fix bugs or create new features?
You could write a script to automate this task; even better, you could use free services like the openSUSE Build Service to build your packages.
The point is that I shouldn’t have to.
So, the unknowledgeable user who uses Linux through point-and-click will be fine?
Yes!
If all this is sincere, then you’re on the wrong side of the road. I could say that if Plan 9 had ditched all the innovations it made and created nothing more than a new version of Unix, it would have been more widely used; or that if *nix OSes had ditched their multi-user scheme with all those root privileges and adopted a simpler, less complex system, more people would have used them. But that’s not the point.
The point is that GNU/Linux is not about more users = more developers = more stuff. Stallman didn’t create GNU to fill a vacancy; all the tools he wrote were originally clones of existing Unix software. This was done to answer his need for free software. For him, if the alternative is either non-free drivers or no screen, he’ll find an old teletype and hook it up.
I don’t think that either Debian or Gentoo, when they started, gave a second thought to losing the existing crowd because of their package incompatibility. I don’t think Linus Torvalds, who created Linux in the first place, thought “let’s make it 100% compatible with Unix so people will use it,” but rather “let’s make something BETTER.” Whether package incompatibilities, or the X client/server architecture, or the super-user, or whatever else GNU/Linux has will cause users to go to Mac OS is not the concern of a lot of Linux developers, any more than “use VB instead of C, more people will use it then” is.
That’s why you release the source. Everyone can install from that.
a: they don’t. See above
b: Because the distro creators have an obligation to their (non-beta) users. Because your latest and greatest might break their screen driver, or cause the mouse to stop working, or have a security bug that will cause their systems to crash altogether. The distro maker needs to be able to say “we tested this with our OS and it is approved to work.” If you, as a user, don’t care about your system, fine, package it yourself. AS A DEVELOPER, I am positive you’ll never do anything that will jeopardise your work system, so why do you expect others to do that?
The same culture that is responsible for GNU/Linux in the first place. The same culture that is responsible for Unix and C, Perl and Python, GNU and Linux, Debian, Red Hat, and so on. That spirit is responsible for nearly everything that is good about computers in the past 50 years. If it weren’t for that culture, we’d still be working on COBOL apps running on IBM mainframes, I guess. This reminds me of a startup I knew that created a service based on GNU/Linux free software and then started complaining about those pesky hackers who wanted them to release their code. Those politics and that culture created the world we’re in. The computer’s interface, graphical and non-graphical, the different hardware and peripherals, the PC, today’s operating systems: they were all created in universities and research labs, not in Microsoft’s board room or in Steve Jobs’ ego. You can’t make your bread off it and then complain about those smelly hackers’ weird politics.
You’re the tree complaining to the sun about the heat.
Linux is a kernel, GNU is a series of tools. X is a graphical protocol. GNOME, KDE, et al. are desktop environments/window managers. OpenOffice is an office suite. Firefox is a browser. GCC is a compiler collection. vi is a text editor. Bash is an interactive command-line environment (shell). None of these does much on its own, so to get them all to work in a way that will actually “do” something for a user, someone needs to package them in a way that creates an operating system. Those are called distros, short for distributions, and everyone can do it the way he/she/they think is best. These all abide by underlying standards, so however you spin it, if your source was written in a standard language, say C, with standard libraries, and compiles and runs with standard parameters on one distro, it should compile and run on others, as long as they abide by the standard. Other than that, it’s a free game.
And to your question: no one forces any user to use GNU/Linux, so if Linux doesn’t support that user’s graphics card, that user is welcome to either buy a new, supported card or try another OS. If you choose *nix as your playground as a user/developer/whatever, you’ll need to accept the rules. If you don’t want to play, you are welcome to try the other side.
Yes, I am being sincere. But how is this “on the wrong side”?
My reasons are simple. I love Linux, I don’t want to get a Mac, and I want all my hardware to work on Linux, therefore it is in my best interest for desktop Linux to gain a market share that’s as large as possible.
Example: I have a laptop, and I want to have mobile Internet on it. My phone company offers a mobile Internet subscription, but it works via a special USB device that requires platform-dependent drivers. They provide drivers for Windows and OS X, but not Linux, and a Google search indicates that nobody has even attempted to write one. Now I am forced to either boot into Windows or get a Mac. This wouldn’t have happened if Linux had a larger market share.
How does Linux get a larger market share? By properly serving end users! By being usable!
Being Free is not mutually exclusive to market share and usability. See Firefox for Windows.
The distros might not care right now, but they should, which is my point. The #1 bug on Ubuntu’s bug tracker is “MS Windows’ market share is too large.” Not doing something about cross-distribution installation compatibility doesn’t exactly help the situation, does it?
Who is “everyone”? Does “everyone” include people like my mom? I want people like my mom to be able to install software on her own, and she doesn’t know what this “source code” thing is or how to use the command line.
Now, I happen to be lucky that I’m creating software for system administrators, so my target users are fairly competent with the command line. But what if I’m developing a desktop product? Uh oh.
A correct, but useless observation. How do you sustain the current culture? By having a stable influx and outflux of people who support the culture. People who are converted to Linux are more likely to support the current culture. If you don’t keep the influx going, the culture will die off as more and more people eventually leave.
Also, people are increasingly moving from Linux to OS X:
http://apple.slashdot.org/comments.pl?sid=406154&cid=21914770 – “I ran exclusively Linux on desktop and laptop for 3 years. I ran Gentoo. I deflibberated many many cronoodleblitzen. I loved it. Still love it. Still manage 6 Gentoo servers. I currently run Leopard an a Macbook Pro.”
http://apple.slashdot.org/comments.pl?sid=406154&cid=21913414 – “I bought a Mac. It gave me the best of both worlds.”
Here we have a competitor, OS X. You can still use all your Unix open source software on it, but it has better software and hardware compatibility and has the reputation of being more usable than Linux. Is this a good thing for the culture? If you want the culture to survive, or to spread, then it is vitally important to increase your market share.
No, I’m complaining that the other trees aren’t cooperating with me to do something about the heat. Some are even getting burned by the sun without knowing it. The only reason we don’t have a building to hide from the sun in is that they’re not even trying to build one, not because it’s impossible to build one. Hence, a social-political, not technical, problem.
A correct, but nevertheless useless observation. See my point about OS X and sustaining the culture. Defining Linux as “just the kernel” is just a cover-your-ass statement, it doesn’t solve the problem, which exists.
Okay, so where do I get mobile Internet that supports Linux? Oh wait… nowhere! Every single mobile Internet ISP only supports Windows, and occasionally the Mac.
You say I need to accept “the rules”. Why do the rules say “if you use Linux and you support open source then you can’t have mobile Internet”? Why don’t people try to turn that into “if you use Linux and you support open source, then you can also have mobile Internet”? Being forced to make this trade-off is insane.
I’m waiting for the alpm backend for PackageKit to mature enough for daily use.
MSI packages are perfect.
You download it to your computer (or buy it at a store).
Run the installer
Software is installed
When you want to remove it, you go to this central area called “Add/Remove Programs” and uninstall it.
Why can’t Linux duplicate this?
True, MacOS has a simple elegant method, but that only works for simple programs.
RPM/DEB are nice, but they require you to have a repository, meaning if you want something, it has to be in a repository. Sure, there is the occasional deb or rpm that you can download, but that’s rare.
DEB/RPM do what MSI does: they have scripts to configure the package, scripts that run after the package is installed, and, the smartest part, they offer dependency tracking. The worst is when your MSI installer installs an application that is missing a DLL and you get the nice error: “System cannot find the dll: mscore.dll version 2.05.2.3” (let’s say).
If you want only .DEB packages or only .RPM packages, note that most big projects provide them (look at the Opera site, for instance, or Skype, etc.); they work like an MSI file, the only difference being that they can also act as a dependency solver.
One small thing MSI lacks, DLL hell aside, and one of the nicest features, is the delta package. RPM, for instance, offers delta packages, which patch the RPMs already on your system (look at SuSE 11.0, to be released soon): they will not download the whole package again (if you already have it) and will install only some diff files.
And one more small issue with MSI files: they need a runtime to execute, just as RPM or DEB files do; as an ordinary user you are probably just not aware of it in your preferred distro, and you will have no need to be in your Ubuntu, Mandriva, or SuSE (those are examples only, I’m not a partisan of any OS/distro). In the end I hold one opinion: your opinion is as flawed as if I said that the best threading system is on Windows because so many applications work with it. (IMHO the best is pthreads; at least compared with the Win32 threading implementation, which is limited to 64 threads per application, pthreads implement threads as LWPs – lightweight processes – and can run as many of them as your OS supports.)
Don’t troll without justification, thanks!
No, they don’t. Downloading a DEB and double-clicking it will install it. The DEB itself may or may not, depending on the person/company that created it, require additional dependencies.
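For reference, the classic command-line equivalent of “double-clicking the .deb” looks roughly like this (run as root; the package file name is hypothetical): dpkg unpacks the file and may complain about missing dependencies, after which apt-get -f install pulls those dependencies from the repositories.

#!/usr/bin/env python3
# Sketch of installing a standalone .deb and then resolving its
# dependencies from the repositories. Run as root/with sudo; the
# package file name is hypothetical.
import subprocess

# dpkg exits non-zero if dependencies are still missing, so don't abort here.
subprocess.call(["dpkg", "-i", "someapp_1.0_i386.deb"])

# Let APT install whatever dpkg left unconfigured.
subprocess.check_call(["apt-get", "-f", "-y", "install"])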