O’Reilly’s latest entry in the “Pocket” series, “Linux Pocket Guide”, bills itself as a “quick reference for experienced users and a guided tour for beginners”. The book can be broken down into two logical sections. The first 33 pages cover, in a very quick and high-level manner, basic *nix fundamentals: the shell as a concept vs. bash as the Fedora default, the filesystem, logging in and out, job control, etc. It’s hard to know whom this section targets. It covers a lot of important ground but doesn’t really delve into any of the concepts, or sometimes even the terminology, that would make it essential or properly informative for new users. At best, I imagine this section will be most useful to technically oriented Windows programmers or administrators who need to understand the Linux analogs of the things they are used to in the Microsoft world.
The rest of the book is devoted to introducing groups of essential Linux programs, categorized by function. This is the book’s best feature and will prove valuable to new users of Linux who aren’t familiar with the
standard command line tools. Some examples: The “Basic File Operations”
chapter covers ls, cp, mv, rm and ln. A chapter on “Viewing Processes”
covers ps, uptime, w, top, xload and free, and the chapter on “Network
Connections” covers ssh, telnet, scp, sftp, and ftp. There are 40
chapters in total, ending with an extremely brief introduction to bash
shell programming.
While it would be quite easy to take exception to some of the program choices, it is true that a new Linux user would get a useful
introduction to some of the most essential and oft-used programs and the
administrative functions that go with them. For someone coming from a
Windows administration background, reading this book in one or two
sittings would easily give them a start on basic tasks in Linux.
The book comes off as a bit of a mixed bag, though, when considered as a
reference tool. Too many of the entries are a very brief introduction
with a list of the most used command line options for a given program.
Often an entry simply refers you to the manpage without giving any examples. Other entries, such as those for cdrecord or host, are stronger tutorials with useful examples that give a more complete picture of not only what the program does but how to use it. These kinds of entries feel like they were taken from a “Hacks” or “Cookbook” series and should be used more liberally throughout the book.
One of the details that separates some man pages from others is the
“examples” section towards the end. The power of command line tools is
especially evident when you combine program options together to work
some magic. Some valuable entries in the book miss out on the
opportunity to show creative uses of the program and instead leave it up
to the reader to deduce what kind of output might result from the use of
multiple options together. Some of these combinations are so standard
and essential that they’ve made their way into countless alias lists
(“ls -al” or, in my case, “ls -alhF”) and their inclusion could allay
the intimidation that might develop upon reading a straight list of
available options for a given program.
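For instance, a couple of lines like these in ~/.bashrc would capture the combinations mentioned above (a minimal sketch; the alias names are just common conventions):

    alias la='ls -al'     # long listing of everything, dotfiles included
    alias ll='ls -alhF'   # same, plus human-readable sizes and type markers (/, *, @)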
It’s hard to understand why some programs get long, example-rich entries
(e.g. dc and crontab) but others get example-free or less useful entries
(e.g. chmod). The chmod entry is comprehensive in its explanation but misses the opportunity to clear up a new reader’s confusion: it could have listed some of the basic permission modes (0644, 0755) we see in the real world and explained what they mean and why we use them.
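To illustrate the kind of example such an entry might have shown (a sketch; the filenames are placeholders):

    chmod 644 notes.txt     # rw-r--r--  owner can edit; group and others read only
    chmod 755 myscript.sh   # rwxr-xr-x  owner has full access; others can read and execute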
The book itself is “tailored to Fedora Linux,” though it notes that
“most of the information applies to any Linux system.” While this is
true enough, it is unfortunate that O’Reilly didn’t take the opportunity
to expand the scope of targeted distributions. The ties to Fedora aren’t
really that essential to most of the book: the chapter on “Installing
Software” covers up2date, rpm and the .tar and .bz2 file formats in 4
pages. Each program entry also includes the filesystem location of the program (or notes that it is a shell built-in, when that’s the case). Some of those locations are already wrong for the pending Fedora Core 2 release (see the OpenOffice.org entries), while others are generic enough to apply to most distributions.
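As a quick illustration of the built-in distinction, the shell itself can tell you which is which (a sketch; the exact output varies by system):

    type cd     # reports something like: cd is a shell builtin
    type ls     # reports something like: ls is /bin/ls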
As an advanced user of many flavors of *nix (starting with Irix, and
moving through SunOS, Solaris, AIX, HPUX and now Linux and BSD), I
didn’t find a lot of new information in the book. Given its short length
and breadth of coverage, this isn’t really surprising. I did develop a
growing sympathy for what a new user to *nix is facing in terms of
understanding the arcana that has developed over its long history. *nix
is a wonderful computing system filled with ostensibly mysterious
inconsistencies and eccentricities. Writing a book to rationalize these, in very short order, is certainly a challenge. The author does point them out where they are relevant, but the book is a little lacking in the humor that could smooth them over where they might cause a reader to furrow their brow in consternation (I counted exactly one bit of humor in the
whole book, on page 152).
Users migrating to Linux are definitely in need of a book that gives
them an introduction to the most relevant tools in fundamental
functional areas. This first edition of the Linux Pocket Guide will
indeed prove quite useful to these users, but I look forward to a
slightly expanded second edition that covers more real-world examples
and basic “tricks” of our favorite and most essential command line tools.
FOR THE ERRATA: There aren’t many errors in the book, but I did find
three in the opening pages: 1) on page 27, a “\” is referred to as a
forward slash; 2) on page 37, the example in point 2 won’t work because the user is sitting in a newly created directory, not where the *.tar.* files are located;
3) and again on page 37, the example at the end of point 4 won’t work
because after the user “su -l”’s, they’ll be sitting in the root user’s
homepage and the “make install” command won’t work from there. These
last two errors are probably the result of too much brevity.
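For reference, a sketch of how the page-37 sequence would have to look in order to work (the package name is hypothetical):

    tar xjf package-1.0.tar.bz2    # unpack the source in the current directory
    cd package-1.0
    ./configure && make
    su -c "make install"           # become root for the install without leaving this directory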
About the Author:
Jason Vagner has been using flavors of *nix for over ten years. He is the founder and CTO of Rock Commerce, a web hosting and development company.
Buy “Linux Pocket Guide” at Amazon.com
It seems like half the Linux books out are “tailored to Fedora Linux”. Do the writers just assume that we are all lined up and waiting to be free bug testers for Red Hat?
This sort of backs up my belief that if Linux were ever to explode in popularity, it would probably be centered around one or two distros.
Well duh. There’s only a few commercially supported distros now.
The difference is that various distros are almost 100% compatible with each other. Yeah, if closed source app makers only choose to let the install work on certain distros (not through technical issues, but for support/laziness reasons), that’s a problem, but you’re/they’re missing the whole idea.
The goal is to be using 100% Open Source apps, so that way you don’t need a package from the app creator, the package maintainers for the distro just need to package it up. Yes, it’s different than Windows, but it allows for more freedom.
Of course the major commercial apps won’t go open source anytime soon, so they should provide a tar.gz. You should be able to just untar it to /opt/app or something. This is the best solution, because every distro has tar/gzip. I suppose there’ll be linking/binary problems, but that’ll always be an issue with closed source apps because people can’t just recompile.
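The untar-to-/opt suggestion would look something like this (a sketch, assuming a hypothetical vendor tarball, run as root):

    tar xzf someapp-1.0.tar.gz -C /opt    # unpack under /opt
    /opt/someapp-1.0/bin/someapp          # and run it from there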
I think that it is a safe bet to write a book about a Red Hat distro. Like it or not, most of the people/companies that have switched from Windows or Unix to Linux will choose Red Hat for various reasons. I know that the company I work for uses Red Hat 7.3 for our computer system. So maybe most people on OSNews who use Linux aren’t using FC, but it is somewhat the industry standard.
“Well duh. There’s only a few commercially supported distros now.”
Yeah, and there’s already guides for Red Hat 9.0 and Fedora available. But if you’re going to write a “Linux Pocket Guide”, then why not keep it general? Or why not just write a “Fedora Pocket Guide”?
The goal is to be using 100% Open Source apps
And whose goal is this? Certainly not mine. And even if I did use 100% open source apps, I’m not a programmer, so you could throw all the ‘hostile code’ you wanted in there and I’d never know the difference.
so that way you don’t need a package from the app creator, the package maintainers for the distro just need to package it up. Yes, it’s different than Windows, but it allows for more freedom.
The only problem with this solution is that you’re pretty much at the mercy of the distro maintainers to make packages for every single app you use, and to do it in a timely manner. It’s either that, or else it’s back to configure, make, make install, and dependency hell. I don’t know how many times I’ve tried using these package managers, only to find that the app I’m looking for either is not available or hasn’t been updated in ages.
IMHO, the right way to do it is to have a standard for package management. Now, notice I said standard package MANAGEMENT, not package MANAGER. So, you come up with some sort of RFC (or whatever) that explains how an app is to be packaged and installed, and let each package manager implement that standard in whatever way it chooses. That way, the end users are happy because there is a standard, and the OSS crowd (who seem to be obsessed with having at least 4,000 ways to do any one thing – I presume just so that they have this illusion of freedom) should be happy as well.
That wouldn’t work, because each distro stores files in slightly different locations.
Besides that, what system will we use? Not every user will like every system. Sure, things would be “great” if we only had one of everything, as long as it’s the things you like. Believe me, I’d hate it if I were forced to use KDE or RPM. I’d probably quit using OSS. Just because I don’t like them, though, I don’t run around saying they shouldn’t exist, like you do.
That’s just ideological. There’s also technical issues with a single package solution. Most applications can be built with different types of support. For example, GAIM can be built with or without spelling support, Mozilla with or without Xft/GTK2, Rhythmbox with a variety of audio backends, etc, etc. There are also a variety of patches that are applied to packages. Each distro chooses the best configuration for them and their users. I like to use the XINE backend for rhythmbox, so I would hate to have a gstreamer version shoved down my throat.
Every distribution isn’t trying to impress the “desktop user”. For every desktop distro out there, there are two “other” distros (advanced, server, source based, embedded, etc). Are you saying these distros should all have the same configuration/packages?
If you’ve had issues with out of date packages, you aren’t using the right distribution. Off the top of my head, I know Debian, Gentoo, Source Mage, and ArchLinux all have very well stocked and very up to date repositories. I get new versions within a week on Arch. You shouldn’t have to compile anything yourself.
And besides, if you don’t rely on the package mainainters, you have to rely on the developer. That’s stupid. He’s busy writing the code, he doesn’t have time to make/test 20 different packages.
That wouldn’t work, because each distro stores files in slightly different locations.
No shit, Sherlock. That’s why we need a standard.
There’s also technical issues with a single package solution. Most applications can be built with different types of support. For example, GAIM can be built with or without spelling support, Mozilla with or without Xft/GTK2, Rhythmbox with a variety of audio backends, etc, etc.
Doesn’t this only apply if you’re compiling from source? If the package is a binary one (as in Debian), then what difference does any of this make?
If you’ve had issues with out of date packages, you aren’t using the right distribution. Off the top of my head, I know Debian, Gentoo, Source Mage, and ArchLinux all have very well stocked and very up to date repositories.
Ok, so maybe you can find up-to-date versions of 19 out of 20 packages. But it always seems like it’s that 20th one I’m looking for.
I get new versions within a week on Arch.
New versions of what, specifically? Everything?
And besides, if you don’t rely on the package mainainters, you have to rely on the developer. That’s stupid. He’s busy writing the code, he doesn’t have time to make/test 20 different packages.
I don’t think you’re quite understanding me, so let me try again.
The developer would create only one package – say, application.pak. Whatever options are available at the point of installation could be laid out (for example) in an XML file included with the package. Since the makeup and layout of the XML file would be described by the Package Management Standard (called PMS .. I like that), you could use any package manager you wanted that supported the standard.
So, in essence, you wouldn’t be tied to any one package manager. Whether you like GUI or command-line package managers, take your pick! The only thing these package managers would have in common is that they would all work the same ‘under the hood’. Kind of like email and IRC clients: different interfaces, same standard underneath.
Since the OSS world is always crying about standards, I say when it comes to package management, either shit or get off the pot.
No shit, Sherlock. That’s why we need a standard.
Yeah, because that’s worked SO WELL for Microsoft. They have standard file system guidelines too, and how well do they work? Once again, file system structure is a matter of opinion as well. There is already the Filesystem Hierarchy Standard, but it provides flexibility. Some distros like /opt, some don’t. Some use /usr/local, some don’t.
Changing UNIX, which is over 30 years old, isn’t going to happen.
Doesn’t this only apply if you’re compiling from source? If the package is a binary one (as in Debian), then what difference does any of this make?
That’s the point: there are many features/options that are only available at compile time, and when you only distribute ONE binary package, you lose the ability to get those different options.
For example, look at the standard Mozilla binary for Linux. I believe it still uses GTK1 and no Xft. I have an LCD monitor, and un-antialiased fonts (Xft provides the AA) are unbearable. Luckily, Arch’s package has Xft/GTK2 enabled, so I’m fine. Different strokes for different folks. I wouldn’t use Mozilla if I could only use their standard package.
Another easy example is binary compatibility. The easiest way to make sure you don’t have glibc/gcc issues is for the package maintainers to rebuild things when they break (due to upgrading gcc/glibc). If we all used the developer’s package, we’d have to wait until they update it. But the problem is that all distros don’t use the same glibc/gcc version (you can’t stop this; distros update at various times). So now you end up with a bunch of different packages anyway.
Ok, so maybe you can find up-to-date versions of 19 out of 20 packages. But it always seems like it’s that 20th one I’m looking for.
Well, all the distros I mentioned earlier have a forum/mailing list/something. If a package is out of date, just make a post and it’ll be updated. Arch even has a little button next to each package on the website that lets you flag a package out of date right there.
But of course, no distro is perfect, and some packages will go out of date, but VERY few (and none of the major ones).
New versions of what, specifically? Everything?
Yes, Everything. 🙂
The only reason a new version wouldn’t come is if there is some technical issue holding it back.
And I understand how the package would work, but there is already a “standard” distribution method that you seem to be missing. That’s the source tarball. Like I said, the developer should only have to release that and the package maintainers will compile and build a package for you.
And on your email analogy, email clients STORE the mail in many different ways. Only the protocols are standard (not the same as a package). You would compare a package to an mbox file, or maildir, or mh, etc. There are about 5-6 major mail storage formats, all used by various clients. Yet somehow, we manage to survive. 😉
“Since the OSS world is always crying about standards, I say when it comes to package management, either shit or get off the pot.”
_Open_ standard, Worknman. _OPEN_ standard. And you forgot to address your opponents’ arguments.
APT is an open standard. RPM is an open standard. Ports collection [including Emerge] is an open standard. Alien is a program to convert a package using one of these *you guessed it* open standards to another.
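For instance, alien can already shuttle a package between these formats today (a sketch; the package names are hypothetical):

    alien --to-deb foo-1.0-1.i386.rpm    # emits a .deb built from the rpm's contents
    alien --to-rpm bar_1.0-1_i386.deb    # and the reverse direction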
Now, where does this OSS world of yours argue against competition? Don’t you think that if that were true, there would never have been something like Emacs, given that Vi already existed?
contrasutra
Once again, file system structure is a matter of opinion as well. There is already the Filesystem Hierarchy Standard, but it provides flexibility. Some distros like /opt, some don’t. Some use /usr/local, some don’t.
Changing UNIX, which is over 30 years old, isn’t going to happen.
Alright, good point. So how about instead of having every distro install files in the same place, as part of the standard, we have each package manager have a file called ‘paths.xml’ (or something similar) that would describe to other package managers exactly where the files and directories on a particular distro are kept? IMHO, the user should have the option to be able to choose exactly where they want to install the stuff anyway. So when an app is installed, the package manager keeps a log of exactly where it is, how it is installed, and which compile-time options are enabled.
For example, look at the standard Mozilla binary for Linux. I believe it still uses GTK1 and no Xft. I have an LCD monitor, and un-antialiased fonts (Xft provides the AA) are unbearable. Luckily, Arch’s package has Xft/GTK2 enabled, so I’m fine. Different strokes for different folks. I wouldn’t use Mozilla if I could only use their standard package.
Using one of your examples, you said that Gaim could be compiled either with or without spelling support. Does that mean every single distro that has its own package manager is going to provide me two different packages? Even if they did, the overhead would be astounding. However, if there were a package management standard, in the case of Gaim you’d have two different packages (one with spelling support and one without) that worked on every distro out there. What I’m saying is, even if you had to make 5-10 different packages to please everyone, why not do it only once instead of having every distro duplicate the others’ efforts?
Even still, shouldn’t an option like spelling/no spelling be something that can be turned on/off at runtime? What if I compile without spell checking and then decide later on that I want it? Do I have to re-compile the whole thing again? That’s just lame. Contrast this with MS Word where I can just tick/untick a checkbox in order to turn it on and off.
Another easy example is binary compatibility. The easiest way to make sure you don’t have glibc/gcc issues is for the package maintainers to rebuild things when they break (due to upgrading gcc/glibc).
Ok, so if we need multiple versions of gcc/glibc, why not provide them all as part of the distro install? Sure, we’d be wasting some disk space, but if that is necessary, I think many users would gladly trade disk space for simplicity.
Well, all the distros I mentioned earlier have a forum/mailing list/something. If a package is out of date, just make a post and it’ll be updated.
Being an end-user myself, if I’m looking for a particular package, I want it NOW – not 2 weeks from now and not 2 days from now. Call me demanding, but perhaps I’ve just been spoiled by using an OS for the past 11 years that provides this for me. The point I’m trying to make is that you can’t change the demands of the user in order to compensate for the limitations in the package management systems .. it must be the other way around.
And I understand how the package would work, but there is already a “standard” distribution method that you seem to be missing. That’s the source tarball.
Yeah, that would be acceptable to me, assuming you could execute three commands (as it is supposed to work) and actually have it work with some degree of consistency. As it stands, based on my own experience, it works maybe 30-40% of the time .. depending on whether you have the right libraries and how complex the package is.
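For reference, the canonical three commands in question:

    ./configure            # probe the system and generate Makefiles
    make                   # build
    su -c "make install"   # install as root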
Like I said, the developer should only have to release that and the package maintainers will compile and build a package for you.
Agreed. But instead of package maintainers having to make packages for each distro, why not make life easier so that whatever they do works in ALL distros?
I think the difference between you and me is our attitudes. When I mention the idea of a standard for package managers, your automatic response is to come up with a list of reasons why it can’t be done. My approach is simply to look at all the challenges facing us and try to figure out how we can overcome them.
dpi
Now, where does this OSS world of yours argue against competition? Don’t you think that if that were true, there would never have been something like Emacs, given that Vi already existed?
Alright, let’s use your example of Emacs and vi and imagine there were no standard for storing ASCII text. Each app would do its own thing, and one would probably be incompatible with the other. As it is, in the case of plain ASCII text you’ve got plenty of text editors (providing competition) that are all using the same standard, and are thus compatible with each other. Competition and compatibility – the best of both worlds. That, IMHO, is how it should be.
You don’t have to have each distro using incompatible ‘standards’ (even if they’re OPEN standards) in order to have healthy competition. I shudder to think of where the WWW would be without any kind of standard for HTML. I guess if the OSS community had anything to do with it, we’d have about 100 HTML ‘standards’ and each browser would only implement one or two of them.
Alright, good point. So how about instead of having every distro install files in the same place, as part of the standard, we have each package manager have a file called ‘paths.xml’ (or something similar) that would describe to other package managers exactly where the files and directories on a particular distro are kept? IMHO, the user should have the option to be able to choose exactly where they want to install the stuff anyway. So when an app is installed, the package manager keeps a log of exactly where it is, how it is installed, and which compile-time options are enabled.
That would require a lot of change, but would be a great solution. A lot of software tends to rely on absolute paths, i.e. /etc/configfile, instead of $SYSCONFDIR/configfile. You can change this usually with a configure script, but once the software has been built, there isn’t much changing.
Even relocatable packages, i.e. ‘soft’ path, wouldn’t be able to do this. Say the config file is found in $BINDIR/../../etc/configfile. Or the library in $BINDIR/../lib. Basically, if one distro puts their libs in /usr/lib, and another in /usr/share/lib, then changing LIBDIR (which doesn’t exist, but would be defined in paths.xml) would break the package.
There would need to be a standard naming convention that paths.xml would use, that would be built into software. I think pkg-config can do this, or is at least on the right track, but ALL software would need to adhere to this standard.
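pkg-config already answers part of this question at compile time by reporting per-system paths and flags; a minimal sketch (myapp.c is a placeholder):

    # ask where this system keeps the library's headers and libs
    gcc myapp.c -o myapp `pkg-config --cflags --libs gtk+-2.0`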
I agree, this would be really cool, but it won’t happen any time soon. In the meantime, things work pretty well, and there are so many packages that need to be brought up to modern build systems that our time would be better spent elsewhere.
Using one of your examples, you said that Gaim could be compiled either with or without spelling support. Does that mean every single distro that has its own package manager is going to provide me two different packages? Even if they did, the overhead would be astounding. However, if there were a package management standard, in the case of Gaim you’d have two different packages (one with spelling support and one without) that worked on every distro out there. What I’m saying is, even if you had to make 5-10 different packages to please everyone, why not do it only once instead of having every distro duplicate the others’ efforts?
This is where distributions create policies with what types of add-on support should be available, and what should not. A well-behaved package will merely create extra files for each feature, and the program will detect the presence of said feature on startup. That way, in this example, you could have gaim-0.7.6-i386.rpm, and gaim-spelling-0.7.6-i386.rpm.
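Installation would then be nothing more than this (a sketch, using the hypothetical package names above):

    rpm -ivh gaim-0.7.6-i386.rpm            # the base program
    rpm -ivh gaim-spelling-0.7.6-i386.rpm   # optional feature, detected at startup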
Being an end-user myself, if I’m looking for a particular package, I want it NOW – not 2 weeks from now and not 2 days from now. Call me demanding, but perhaps I’ve just been spoiled by using an OS for the past 11 years that provides this for me. The point I’m trying to make is that you can’t change the demands of the user in order to compensate for the limitations in the package management systems .. it must be the other way around.
The problem is not that software is unavailable, it’s that software is slightly out of date for two weeks. I consider software released when I can easily install/upgrade it on my system, not when the source gets updated by the developers. Software seems to get updated faster in the Linux world than in the Windows world, so this is a tradeoff.
Lots of distributions have average update times of less than 3 days. I consider that fine. A new user won’t even know the package was updated until the package gets updated. Plus, distributions are moving towards an automatic “every-night” update system, where you don’t even pay attention to software being updated.
Yeah, that would be acceptable to me, assuming you could execute three commands (as it is supposed to work) and actually have it work with some degree of consistency. As it stands, based on my own experience, it works maybe 30-40% of the time .. depending on whether you have the right libraries and how complex the package is.
More software ought to use autoconf. In my experience, the percentage is closer to 80.
I think the difference between you and me is our attitudes. When I mention the idea of a standard for package managers, your automatic response is to come up with a list of reasons why it can’t be done. My approach is simply to look at all the challenges facing us and try to figure out how we can overcome them.
It could be done, but I don’t see the reason why. If you just pick a distro and stick with it, where would the problem come from? Lots of Windows users get confused in Linux thinking that they need to go to the developers website, download, and install the software. In Linux, all of that gets taken care of for you. Waiting 2-3 days from the source release is a small price to pay for a new user having software installation made easier.
Also, having worked with many package managers, I can say the differences between them are quite elementary when you compare package managers of relatively equal feature sets.
Agreed. But instead of package maintainers having to make packages for each distro, why not make life easier so that whatever they do works in ALL distros?
The parent didn’t mean that. One maintainer doesn’t make packages for all the distros, each distro has maintainers that each make packages for several pieces of software.
Updating packages is trivial. Bump the version number, and possibly change some of the build commands. Fix bugs as they get reported.
There are some pretty good ideas in this thread. I too believe package managers should keep a small database in a standard XML file recording where things are. This would grant the FREEDOM to use 3rd party installers and possibly even more than one package manager at the same time. As it exists today, there is a different standard for every package manager, and make does not even have that.
For those that say UNIX has been that way for 30 years and will never change, or those that say all software needs to be open in order to work with all the existing broken methods of installing software: why don’t you chime in with THAT the next time someone says Linux will be on the desktop?
If this never changes, Linux will never be “desktop” material. I don’t even know how you can make Linux a desktop OS until this problem is already in the past, and right now I don’t even see an answer to this problem in the next few years.
Even after this problem is solved there are several things that need to be addressed like ease of GUI application development.
The problem is that large scale change cannot happen in Linux because there are too many people/companies that disagree and it would be impossible to get them to agree on something and coordinate. Only if Linux were closed and controlled could large scale changes (like a single /apps directory for GUI applications) actually happen.
I would PAY MONEY to see these points covered in an article by someone who is not a Linux advocate (read: religious blindness).
A well-behaved package will merely create extra files for each feature, and the program will detect the presence of said feature on startup. That way, in this example, you could have gaim-0.7.6-i386.rpm, and gaim-spelling-0.7.6-i386.rpm.
So in this case, I would assume you’d have something like this:
Red Hat/Fedora:
– gaim-0.7.6-i386.rpm
– gaim-spelling-0.7.6-i386.rpm
Debian:
– gaim-0.7.6-i386.deb
– gaim-spelling-0.7.6-i386.deb
(Plus various Debian-based distros would probably have their own as well)
Suse:
– gaim-0.7.6-i386.rpm
– gaim-spelling-0.7.6-i386.rpm
(These probably wouldn’t work in Red Hat, and vice versa)
Slackware:
– gaim-0.7.6-i386.tar.gz
– gaim-spelling-0.7.6-i386.tar.gz
Plus whatever everybody else uses. Now, under my solution, we’d have only two packages:
– gaim-0.7.6-i386.pak
– gaim-spelling-0.7.6-i386.pak
Which would work under all standard-supporting package managers.
[A package management standard] could be done, but I don’t see the reason why.
See above.
If you just pick a distro and stick with it, where would the problem come from?
Well, let’s take Xandros for instance. (And don’t tell me I’m using the wrong distro, dammit .. this is only an example.) Last time I checked in Xandros Networks (version 2.0) for packages, the pickings were very slim. You can switch to their ‘unofficial’ archive which had most of what I was looking for, but out of the 20 or so packages I searched for, only about 3 of them were up to date. If you were to switch to another apt repository, Xandros warns that you could end up breaking the distro.
Of course, there is always Synaptic in ‘vanilla’ Debian distros, which has many of the same problems (outdated packages unless you switch apt sources, the possibility of breaking something, etc.) and piss-poor organization.
Lots of Windows users get confused in Linux thinking that they need to go to the developers website, download, and install the software. In Linux, all of that gets taken care of for you. Waiting 2-3 days from the source release is a small price to pay for a new user having software installation made easier.
Whenever I’m looking for a particular app (say, an email program), it’s not unusual for me to download 15-20 of them at a time over the course of a weekend and try them all before sticking with one I’m happy with. It would suck if there were 15 different apps I was trying to download and 5 of them were out of date. So now I’ve got to write to the mailing list (or whatever) to get these other 5 updated – a royal pain in the ass, and something I’d rather not deal with.
But even under normal circumstances, if I am looking for a particular app and find that the version the distro provides is out of date (or missing altogether), that’s a frustrating experience for an end user and one that ought not to exist in the first place. Even if I went through the trouble of getting someone to update the package, and then installed it, used it for about 5 minutes, realized I hated it, and uninstalled it, I’d feel bad for having wasted the time of whoever put the package together for me.
The parent didn’t mean that. One maintainer doesn’t make packages for all the distros, each distro has maintainers that each make packages for several pieces of software.
Yeah, but that’s my whole point. In a scenario where a standard existed, we wouldn’t need separate maintainers for each distro. Every distro could have its own package manager, but all the packages would come from the same place.
So, who would create the packages under my scenario? Well, let’s use the Windows version of MAME (www.mame.net) – which is an open source arcade emulator. The developers release it in source code form, then somebody compiles it and places the binary on the MAME website, and everybody gets the binaries from there. Once the binaries are created, you’ve usually got 3-4 different binaries .. one for Windows, one for DOS, one optimized for i686, etc. But you get those binaries from the same place. You don’t have to have one binary for Windows 98, one for Windows ME, one for Windows 2000, one for Windows XP, etc.
This is a model that Linux should try to follow.
If I did the whole paste&reply thing, I’d be taking up too much space.
I see your point, but I still disagree. If a distribution is grossly out of date, my suggestion would be to switch. Updating versions is not that hard, and a distribution with a poorly maintained package collection is probably also poorly maintained itself.
Anyway, I think your point is that if there were one package format, since fewer maintainers would be required across all the distributions, updates would push faster and more consistently.
This line of thinking seems to imply that the bottleneck is constant through all the distributions; i.e. that it is having to make unique packages for new versions of software that takes time, not lengthy bureaucracy.
It is precisely the bureaucracy that is the problem. Many distributions (e.g. Debian) have really rigorous procedures to follow to get updates pushed onto the servers. Others do not. Other distributions are just slow.
Finally, .deb, .rpm, and many others can all do anything that a ‘universal’ package format would have to do. We need to separate the package manager (the backend) from the end-user frontend (e.g. apt, yum) that interacts with both the package manager and some kind of ports tree. I think that just as gcc installs a symlink to itself called cc, and well-built Makefiles call ‘cc’ and thereby ignore which C compiler is actually installed, these frontends should also have a generic symlink.
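The cc convention mentioned above, concretely (paths and targets vary by system):

    ls -l /usr/bin/cc         # on many systems: /usr/bin/cc -> gcc
    cc -O2 -o hello hello.c   # a portable Makefile just calls 'cc' like this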
Having built a lot of packages, I would much rather see source tarballs standardize to the point where making packages could be automated.
The point is that there are _hundreds_ of rebrands of Linux. A package that works for one distro, in my experience, often does not work for another, or even for a different version of the same distro. If my app has 2 packages (see the GAIM example above) and I want to build it for the last 3 versions of the 20 most popular distributions, I have to build and test 120 packages. Even with 120 packages, this can still break on systems where someone upgraded their version of Qt.
If I update my application I also have to update all 120 packages. On windows or Mac I build one package.
This _forces_ me to make my software on Linux open source. Where is the freedom in that??
And don’t tell me to switch to another distro if I cannot find packages for the one I am using. I have enough trouble finding working packages with RH and Mandrake, which are the 2 most popular distros, in case you missed the memo.
I won’t even go into library incompatibilities. The OS was simply never designed for the desktop.
Anyway, I think your point is that if there were one package format, since fewer maintainers would be required across all the distributions, updates would push faster and more consistently.
Not only that, but you would probably never run into a scenario where you can’t find a package for a particular distro. Also, this might persuade some commercial vendors to port their apps to Linux if they could package their app up only once and have it work for all distros.
If a distribution is grossly out of date, my suggestion would be to switch. Updating versions is not that hard, and a distribution with a poorly maintained package collection is probably also poorly maintained itself.
Alright, then … can someone list for me these ‘wonder distros’ where you can get the latest version of anything you want in 2-3 days?
> Alright, then … can someone list for me these ‘wonder distros’ where you can get the latest version of anything you want in 2-3 days?
– LFS
The following come close:
– Gentoo
– Debian unstable
– Arch Linux
The point is that there are _hundreds_ of rebrands of Linux. A package that works for one distro, in my experience, often does not work for another, or even for a different version of the same distro. If my app has 2 packages (see the GAIM example above) and I want to build it for the last 3 versions of the 20 most popular distributions, I have to build and test 120 packages. Even with 120 packages, this can still break on systems where someone upgraded their version of Qt.
No, pre-built binary packages won’t work between distributions. Source will, though. And native packages will.
OpenSSL is one of the few library examples where you have to upgrade any package that links to it for every version number. Apps that build against gtk 2.0+ do NOT need to be recompiled each time you upgrade gtk. Same for QT. The major version number change indicates an entirely (or partly) new and incompatible API, but those almost always can be built alongside the old version.
If I update my application I also have to update all 120 packages. On windows or Mac I build one package.
Why not just distribute the source and save yourself the trouble? Do you actually think that application developers are releasing 120 versions of each package for each release? That just doesn’t happen–that’s not reality.
This _forces_ me to make my software on Linux open source. Where is the freedom in that??
Plenty of software is written for Linux which isn’t open source. Also, there may be less freedom for you, but more for the users. Freedom isn’t exactly the same thing as choice. Suffice it to say, as a developer, there does exist pressure on you to release your work under an open license. That is hardly stifling your freedom.
And don’t tell me to switch to another distro if I cannot find packages for the one I am using. I have enough trouble finding working packages with RH and Mandrake, which are the 2 most popular distros, in case you missed the memo.
What packages are you having trouble finding? I think I did miss the memo, but if you tell me what you’re having a hard time installing, I’ll be glad to help for free – that’s what we do in the OSS community.
I won’t even go into library incompatibilities. The OS was simply never designed for the desktop.
Please do. None of the aforementioned points really conclude that Linux isn’t desktop-capable, and if you want to maintain that point, I suggest you support it.
Not only that, but you would probably never run into a scenario where you can’t find a package for a particular distro. Also, this might persuade some commercial vendors to port their apps to Linux if they could package their app up only once and have it work for all distros.
I think a standard way of writing source code is a better idea. Have all source code reference, for example, SYSCONFDIR instead of /etc. Have all source code use GNU autotools, with a set of standard naming conventions for options. That way packaging would be trivial; those wanting a lightweight package manager could use one that didn’t use all of the standards and kept things simple, while another user could use a package manager with all the nifty features.
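The directory conventions described here already exist in autoconf-based builds; a sketch of how a packager sets them:

    # standard autoconf directory options -- each distro's packagers pick the values
    ./configure --prefix=/usr --sysconfdir=/etc --libdir=/usr/lib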
I won’t deny that package management is a shortcoming in Linux for newcomers. I’ll attest that once you learn Linux, it’s not a problem, but I think our debate is implicitly dealing with new users, so I won’t really back that up.
Alright, then … can someone list for me these ‘wonder distros’ where you can get the latest version of anything you want in 2-3 days?
I use Crux, and most of the software gets updated really quickly, except for major updates. Then again, I manually update a lot of stuff myself.
I think that the trend in Linux shows a gradual progression to more standardization, more simplicity, and more ease of use.
I think a mistake often made in the Windows camp is assuming that ease of use for newcomers means a tradeoff of ease of use for the experts. In reality, the most elegant solutions make life easy on both.
they’ll be sitting in the root user’s homepage and the “make install” command won’t work from there.
Don’t you mean the root user’s home directory?
3) and again on page 37, the example at the end of point 4 won’t work because after the user “su -l”’s, they’ll be sitting in the root user’s homepage and the “make install” command won’t work from there. These last two errors are probably the result of too much brevity.
Incorrect. When you su to root, the shell maintains the directory you were in as root’s current path. When you exit, however, you return to wherever su was called from, not where you last were as root.
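For what it’s worth, a quick sketch of both behaviors (the paths are hypothetical):

    pwd       # /home/alice/package-1.0
    su        # plain su: you stay in the same directory
    pwd       # still /home/alice/package-1.0
    exit
    su -l     # a login shell starts you in root's home directory
    pwd       # /root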