Linux kernel creator Linus Torvalds last weekend released a test version of the Linux 2.6 kernel called test9, a sure sign that a production version of the next kernel is fast approaching. In this e-mail exchange with SearchEnterpriseLinux.com, Torvalds explains what kind of insight he hopes to gain from enterprises that install test9, and he reveals a tentative release date for the kernel.
And what the hell is wrong with the init process? What are you doing messing with the init process anyway? If you want to start an app at startup, just add it to your session (there is a session pref in GNOME, and a “save session” feature in KDE). The user should not have to deal with the init system at all, and the administrator should only need to turn on and off services. Again, there are GUI tools to do that both for KDE and GNOME.
I fail to see what software or dependency issues have to do with Linus’s work…
Oh, and if you have dependency issues, then that is your choice. Personally, I tend to choose distros that do not have dependency problems.
You’re long on gab, short on details. Exactly what are you suggesting, and why should anybody go to the trouble of implementing it?
— “The whole way that Linux deals with the init process needs to be worked on. It does not handle processes as it should. For a matter of fact just throw the whole mess out the ‘window’.” —
How should it handle them?
> The whole way that Linux deals with the init process needs to be
> worked on. It does not handle processes as it should.
Would you care to elaborate? The BSD system does have a number of advantages (simplicity being one of them). Also, standardised names for services between distros couldn’t hurt. Apart from these, I don’t have many complaints about Linux’s init system. If you do, tell us rather than making a completely subjective comment like this.
> For a matter of fact just throw the whole mess out the ‘window’.
I assume you were trying to make a point but it just didn’t quite materialize into what we call “english”.
(I hope I didn’t just respond to a troll……) 🙁
What in the hell is more basic than the kernel?!?
The problem is not so much the init process but all the junk the default distros place in the init process.
What is so darn wrong with the init process?
How does it not handle processes as it should?
A real complaint might be that a better backgrounding of init processes is needed. That I can understand.
A real complaint might be that while there are gui tools for editing runlevel services (which ones start when) there is no easy gui method of adding totally new init processes.
A nice inittab editor would be nice. But that is not what you said.
Maybe you could say that distros need to include more fully featured init scripts with helpful status options and other options besides just start/stop/restart.
Still this has nothing to do with Linus.
What does any of this have to do with kernel testing?
Nothing.
Also, the dependency “problem” is only a problem for those who are too used to the primitive software handling of Windows and OS X.
I certainly wouldn’t call shipping with a large standard API “primitive”. I think it’s more indicative of the flexibility versus ease-of-use issue which has divided the open source/commercial software camps for quite some time. While separating basic system components into packages that are installed as needed does allow a high degree of flexibility and allows software to share common components, it does lead to the dependency issues which plague the platform.
I am personally against this approach. I would prefer a standard, consistent systemwide API/ABI. This makes releasing software in binary form extremely easy as you can be assured that it will run on most any installation of the operating system you are targeting (if programmed properly, of course)
In Linux the general approach seems to be to release a tarball and leave users of your software on their own to unpack and place it where they see fit (although usually it will be impossible to avoid a few files landing outside the chosen prefix, probably in either /etc or /usr/local/etc) or to package it for each distribution you wish to target. The latter requires a great deal of work and greatly complicates distribution. The former is a messy approach which requires a certain degree of knowledge on the part of the user.
Windows and OS X both have standard installation mechanisms and standard packaging formats for applications. Linux doesn’t. In this respect I would call Linux “primitive”, as the only way to package applications in a cross-distribution safe format is a tarball, which is a primitive form of packaging indeed.
And what the hell is wrong with the init process?
SysV is a very messy init process. There is no means of building a dependency tree from the symlinks provided; you must know which services depend on each other before you attempt to muck around with enabling/disabling the services you need/don’t need. Because you can’t build a dependency tree, the process becomes impossible to parallelize, so if any portion of the process begins blocking on I/O of any kind, the entire system init hangs. This can be quite annoying if you include some network operation in the init process (such as ntpdate) and the remote service is down, but the blocking call is preventing something much more important from starting up (if you are administering a system remotely, for example, it could be preventing sshd from coming up). The SysV solution would be to move sshd earlier in the numbering, but is this really a solution? Wouldn’t it be better if the entire startup process were parallelized, so no single startup process could hang the system during startup?
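For the sake of illustration, this is roughly what a SysV runlevel directory looks like (the service names and numbers are made up for the example):

$ ls /etc/rc2.d
S10sysklogd  S19ntpdate  S20ssh  S99rmnologin

The two-digit prefix encodes nothing but a linear start order; nothing records that ssh does not actually depend on ntpdate, so init walks the links strictly in sequence, and a hung S19ntpdate keeps S20ssh and everything after it from starting.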
I saw a very interesting writeup on this at IBM:
http://www-106.ibm.com/developerworks/linux/library/l-boot.html
How should it handle them?
In IBM’s writeup (which I just linked) they demonstrate dependency handling using make. A tool similar to make for this purpose would be great, although I would prefer a small set of tools custom-tailored for the job, and a much different file format from Makefiles, but make accomplishes the basic goals of dependency handling and parallelization.
But, with make as an example, you would specify every service as an entry in the Makefile (all as .PHONY, of course) and then set targets for certain system mode changes. For example, you could have targets for each system runlevel which start/stop the appropriate services. You could then use the -j flag to make to parallelize startup, so that when make <runlevel_target> is called, all processes that don’t depend on each other are started/stopped simultaneously. This would decrease startup times and prevent processes which may block for prolonged periods of time from hanging the entire system during startup.
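A minimal sketch of what such a Makefile-driven runlevel could look like (service names and init.d paths are purely illustrative, not any distro’s real boot system; recipe lines must begin with a literal tab):

# Each service is a .PHONY target; dependencies are declared explicitly,
# so make can order the services and run independent ones in parallel.
.PHONY: runlevel2 syslog network sshd ntpdate

runlevel2: syslog sshd ntpdate

syslog:
	/etc/init.d/sysklogd start

network:
	/etc/init.d/networking start

sshd: network
	/etc/init.d/ssh start

ntpdate: network
	/etc/init.d/ntpdate start

Invoked as "make -j runlevel2", sshd and ntpdate would be started in parallel as soon as the network target finishes, so a time server that never answers no longer keeps sshd from coming up.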
I certainly wouldn’t call shipping with a large standard API “primitive”.
>>>>>>>>
That’s not how OS X and Windows handle dependencies. In fact, they don’t handle dependencies at all. Shipping with a large API hides the problem but doesn’t fundamentally fix it. Linux distributions with proper package managers fundamentally fix the issue.
it does lead to the dependency issues which plague the platform.
>>>>>>>>>
What dependency problems? There is the occasional broken package in APT/Portage/Yum, but there are occasionally broken Windows installers as well.
This makes releasing software in binary form extremely easy as you can be assured that it will run on most any installation of the operating system you are targeting (if programmed properly, of course)
>>>>>>>>>
Doesn’t do anything if you need to use an external library. I’ve already hit this problem in Windows. My software needed a number of small libraries. In the Windows version, I had to include packages and install instructions for all of them. In the Linux version, I just had the install script call APT to get them.
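For illustration, the relevant part of such an install script might look something like this (a sketch only: libfoo1 and libbar2 are placeholder package names, not the actual libraries involved, and it obviously assumes apt-get is present):

#!/bin/sh
# Pull in the needed libraries via APT, then copy the binary into place.
set -e
apt-get install -y libfoo1 libbar2
install -m 755 myapp /usr/local/bin/myapp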
In this respect I would call Linux “primitive”, as the only way to package applications in a cross-distribution safe format is a tarball, which is a primitive form of packaging indeed.
>>>>>>>>>
Who the hell cares about cross-distribution safe packaging? If you’re distributing binary-only software, make an RPM and a DEB and put them in a repository. Users of more unusual distributions know what they’re getting into, and can deal with using alien or rpm2targz to install the software.
“It would be nice if they started working on the basics, like software and some other way to get away from the dependencie mess.”
Do you mean the kernel developers should start working on the desktops, i.e. KDE or Gnome? And who would work on the kernel then, the KDE/Gnome developers?
“The whole way that Linux deals with the init process needs to be worked on. It does not handle processes as it should. For a matter of fact just throw the whole mess out the ‘window'”
Am I taking a wild stab in the dark, or have you never actually used a Linux distribution?
Maybe you could send your info to the kernel developers: “Something’s wrong; it’s got nothing to do with your kernel, but I thought you should know about it.”
Shipping with a large API hides the problem but doesn’t fundamentally fix it. Linux distributions with proper package managers fundamentally fix the issue.
Yes, until you start trying to release commercial software in binary form for Linux, at which point you are forced to statically link *everything* if you wish to ensure your package will run on all distributions. This includes the C library, as glibc 2.1/2.2/2.3 have broken binary compatibility in many calls.
While you are forced to do this on Windows/OS X as well, the problem is *greatly* mitigated by the fact that a considerable amount of functionality is provided by the standard libraries.
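For what it’s worth, that forced workaround looks roughly like this at build time (a sketch; file and library names are hypothetical):

# Link the C/C++ runtime and the toolkit statically so nothing is resolved
# against whatever libraries the target distribution happens to ship.
g++ -static -o myapp main.o gui.o -lqt-mt -lpthread
ldd ./myapp    # reports "not a dynamic executable" once fully static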
What dependency problems? There is the occasional broken package in APT/Portage/Yum, but there are occasionally broken Windows installers as well.
Again, step outside the bubble of open source software for a second and consider where the open source advocates are attempting to push Linux: into a realm where commercial software developers would consider releasing desktop applications. Without a standard application framework, Linux is *not* fit for this role. This is a problem that needs to be addressed.
Doesn’t do anything if you need to use an external library. I’ve already hit this problem in Windows. My software needed a number of small libraries. In the Windows version, I had to include packages and install instructions for all of them.
Sounds like you aren’t linking your application properly, or are pulling the libraries in improperly. If you’re pulling the DLLs in with associated LIBs, Windows should search the directory the EXE is located in for the respective DLLs. There’s no need to register them systemwide.
In the Linux version, I just had the install script call APT to get them.
That’s great… for distributions that provide apt-get. What about someone running a distribution that doesn’t provide apt-get? What if the package is named something different from what you specified?
Also, what form is your package in? A tarball? If so, this means the package requires command line interaction for installation. Is this really better than Windows/OS X? If anything it seems more primitive and cumbersome.
Who the hell cares about cross-distribution safe packaging?
Anyone attempting to release a commercial software package for Linux.
If you’re distributing binary-only software, make an RPM and a DEB and put them in a repository.
And what if you want to ship your software on a CD that someone can stick into their system and install, ala Windows/OS X?
Users of more unusual distributions know what they’re getting into, and can deal with using alien or rpm2targz to install the software.
And once again, the dependency issue is encountered.
The traditional approach I’ve seen is to triple release all software as rpm, deb, and tgz. As for rpm and deb, separate dynamically linked and statically linked versions must be provided.
This is truly terrible; please rationally compare this situation with that of OS X and Windows. There is no getting around the fact that releasing commercial software for Linux is a considerable hassle for the developers, and installing software is a considerable pain for the users. This is due to the disparate installation of application frameworks (owing to the lack of a consistent, coherent set of APIs) and to disparate packaging/dependency handling methods.
Conclusion: Linux is a terrible platform to release commercial software for, because of the considerable amount of fragmentation in what constitutes a Linux system.
It’s unnecessary. The application layer remains (for the most part) forwards compatible.
In Linux, packages are compiled to fit into a particular branch of a distribution, and may or may not be forwards/backwards compatible between branches due to the constantly changing APIs/ABIs of the various libraries.
The only solution to this is bundling/static linking what you don’t expect to be consistent. On Linux, this is *everything*. It would be great if you could count on the glibc developers not to change the glibc API/ABI, and not to redefine ANSI C constructs between glibc releases (e.g. fpos_t) but unfortunately the glibc developers care more about having the freedom to make changes where they see fit than keeping glibc consistent between releases.
On Windows/OS X, a large set of application frameworks is provided and forwards/backwards compatibility is largely preserved. This means that very little needs to be bundled/statically linked compared to Linux, as the system frameworks can be used instead.
On the Linux side, some of the frameworks you would have to bundle are rather large and provide extremely high levels of functionality. Let’s take, for example, streaming media. On the Windows side, your application can use DirectShow. On the OS X side, your application can use QuickTime. On the Linux side, there are media frameworks in development, such as gstreamer. But how do you release an application that uses gstreamer? The only way to do this in a cross-distribution manner is to bundle the entire gstreamer framework, including all the codecs you wish your application to support, along with your application.
You could release a RedHat/Debian package, but will your package be binary compatible across different branches of RedHat/Debian? The answer to that is a resounding no… as a user of Debian’s sid branch I have seen binary compatibility with Debian’s stable branch broken constantly. In the case of dynamically linked GUI applications I have never managed to get any to run due to missing/changed symbols. This problem is compounded by the fact that the gcc developers decided to change the C++ ABI multiple times, and most Linux distributions, not wanting to be encumbered with the legacy ABI, upgraded all of their C++ libraries as well.
Bottom line: It’s impossible to release binary packages if binary compatibility is constantly being broken.
Again, step outside the bubble of open source software for a second and consider where the open source advocates are attempting to push Linux: into a realm where commercial software developers would consider releasing desktop applications. Without a standard application framework, Linux is *not* fit for this role. This is a problem that needs to be addressed.
No comment; I agree 100%. That was the reason I was hoping BlueEyedOS would turn out to be a very good one, but I don’t know if it will.
The only thing I complain about with the Linux/BSD desktop is that it needs a standard application framework. I guess it’s almost impossible, because nobody is the same and it’s very hard to get everyone to agree.
No comment; I agree 100%. That was the reason I was hoping BlueEyedOS would turn out to be a very good one, but I don’t know if it will.
Yes, it’s a sad state of affairs when the most comprehensive application framework available for Linux is… Wine.
The only thing I complain about with the Linux/BSD desktop is that it needs a standard application framework. I guess it’s almost impossible, because nobody is the same and it’s very hard to get everyone to agree.
I think an excellent starting place would be OS X’s CoreFoundation, which has been released by Apple under the APSL and is portable across several platforms, including Linux and FreeBSD. Combining this with reimplementations of some other OS X core frameworks, in conjunction with GNUstep, would allow for a Cocoa-compatible application framework, which would provide source compatibility across Linux and OS X.
Install software on Mac OS X sometime, then talk to me about RPMs. Remember Loki’s installer? It did the trick, didn’t it? Regardless of the distro it worked the same. My only complaint was the need for a terminal to type ./install or ./setup; that’s no big deal to most of us geeks, but an end user shouldn’t have to do that. Make the setup binary double-clickable like Windows or Mac OS X. Everything else about the installer was great, so why isn’t it more widely used?
You’re not talking about dependency handling anymore, but about the entirely separate issue of binary compatibility.
Binary compatibility could be a problem on Linux, but it’s still hypothetical at this point. There is little binary-only software on Linux, and most of that (Maya, Oracle) is high-end and requires a certified platform to run (just like their Windows versions).
Going forward, I don’t think that binary compatibility will be a huge issue for Linux. Here’s why:
1) As Linux matures, it will stabilize. G++ has already adopted the multi-vendor standard, and the C++ ABI should not change save for fixing a few corner cases. Eventually, standards like the LSB will help further. Now that people like Sun have a stake in things, compatibility will be regarded more highly.
2) The advanced package managers will play a big role in managing different library versions. If an app needs gtk 2.x while the distro comes with 3.x, 2.x will automatically be installed. Sure, it will take up some extra disk space, but compatibility libraries are one of the reasons Windows XP takes up 1GB+ for the base install. This is where having a real package system pays off — you don’t have to static link your apps like you have to in more primitive OSs. Rather, you can let the OS do the work for you.
3) A future with Linux won’t just mean that Linux will be substituted for Windows. Software will be different. First, most of the day-to-day tools that people use will be open source, and thus not subject to binary compatibility. It’s not like it’s that big of a shift. People don’t pay for web browsers now, and they won’t in the future. Second, a lot more service-oriented software (encyclopedias, map programs, etc) will be built on web platforms or on platforms like Java or .NET. You’re already seeing a move to this with things like MapQuest, Amazon, etc. Lastly, if the software industry moves to a competitive market, you’ll not just have Linux with 90% marketshare, but many OSs each with some fraction of the market. Software will have to be cross platform, and software developers will simply have to spend a little more time packaging.
Of course, this is pure speculation in response to a largely theoretical problem.
@Christopher X: Does OS X’s software install require you to manually find the package (via CD, internet, etc)? Then it’s already more complicated than APT/Yum/Portage.
@Bascule: The reference to CoreFoundation is rather odd. The only thing in there that isn’t in a UNIX C library is XML and preferences handling. A much better platform would be POSIX+Qt, which provides all of the above, plus a GUI, database access, and more. And it would buy you compatibility with Windows *and* Mac.
— “In IBM’s writeup (which I just linked) they demonstrate dependancy handling using make…” —
Actually, the question you were trying to answer was directed at Deak R and his comment on how he felt the init system handled processes wrong. The question was not related to dependency issues.
Binary compatibility could be a problem on Linux, but it’s still hypothetical at this point. There is little binary-only software on Linux, and most of that (Maya, Oracle) is high-end and requires a certified platform to run (just like their Windows versions).
And what’s the cause of this dearth of commercial software for Linux? Perhaps… problems with binary compatibility and releasing software in a manner such that it can be easily installed by anyone on any Linux system?
As Linux matures, it will stabilize. G++ has already adopted the multi-vendor standard, and the C++ ABI should not change save for fixing a few corner cases. Eventually, standards like the LSB will help further. Now that people like Sun have a stake in things, compatibility will be regarded more highly.
So, bottom line: Linux is too immature at this point to consider for the purposes of commercial C++ development?
The advanced package managers will play a big role in managing different library versions. If an app needs gtk 2.x while the distro comes with 3.x, 2.x will automatically be installed. Sure, it will take up some extra disk space, but compatibility libraries are one of the reasons Windows XP takes up 1GB+ for the base install. This is where having a real package system pays off — you don’t have to static link your apps like you have to in more primitive OSs. Rather, you can let the OS do the work for you.
And how long until we see a cross distribution packaging format and packaging tools with GUI integration that provide point-and-click installation? I don’t think it’s fair to call Windows and OS X “primitive” when Linux lacks a cross-distribution packaging and installation framework that vendors can utilize to release applications.
A future with Linux won’t just mean that Linux will be substituted for Windows. Software will be different. First, most of the day-to-day tools that people use will be open source, and thus not subject to binary compatibility.
That doesn’t describe the future of Linux; that describes the present, and currently the answer to the dearth of a great number of applications on Linux is Wine with Crossover Office. Is this really a solution? Is it really worth buying Crossover Office and consequently limiting a system to a scant number of supported applications in order to use Linux instead of Windows?
It’s not like it’s that big of a shift. People don’t pay for web browsers now, and they won’t in the future.
Well, for your information, some of us do pay for web browsers (i.e. Opera)
But how about in the realm of preprint, where a handful of zealots claim Linux is gaining ground? Scribus and the Gimp will *never* have Pantone support, and there is no way around this. What about financial software that comes with a subscription service to receive yearly tax tables? These are some of the examples where open source software *cannot* provide what commercial software can.
Unless Linux matures to the point where the ABI is consistent across platforms and cross-distribution packaging/dependency handling problems can be resolved, this software will never be available on Linux, except, of course, through Wine + Crossover Office.
The reference to CoreFoundation is rather odd. The only thing in there that isn’t in a UNIX C library is XML and preferences handling.
It’s a requisite component of source compatibility with OS X. You could say the same of BlueEyedOS… all of the same functionality is present in other libraries, BlueEyedOS is just providing a consistent applications layer. CoreFoundation and GNUstep are two parts of an applications platform which would be source compatible with OS X.
A much better platform would be POSIX+Qt, which provides all of the above, plus a GUI, database access, and more. And it would buy you compatibility with Windows *and* Mac.
No, I think you have it backwards. You can *buy* compatibility with Windows and Mac, for ~$2000/developer, or ~$2500-$3000 if you wish to support X11 as well.
And what’s the cause of this dearth of commercial software for Linux?
>>>>>>>>>>
Because there isn’t a large enough market.
So, bottom line: Linux is too immature at this point to consider for the purposes of commercial C++ development?
>>>>>>>>>>>>
No, the point is that commercial software is largely irrelevant to Linux right now. Linux is structured to get the maximum growth rate out of OSS software. If commercial software does become more important on Linux (through the involvement of Sun, etc.), Linux will adapt to those needs.
And how long until we see a cross distribution packaging format and packaging tools with GUI integration that provide point-and-click installation? I don’t think it’s fair to call Windows and OS X “primitive” when Linux lacks a cross-distribution packaging and installation framework that vendors can utilize to release applications.
>>>>>>>>>>
You need to get out of your commercial software bubble. The simple fact is that commercial software is not a big factor in the (desktop) Linux market. Thus, stuff like cross-distribution packaging doesn’t really matter. The tools and technologies are all there to support such a model (APT + Synaptic, etc), but it’s not a focus for the developers. If and when commercial software becomes important, you’ll see more developers paying attention to it. You’re starting to see this already, with the recent work at Progeny to attempt to get APT to handle DPKG and RPM on the same system.
That doesn’t describe the future of Linux; that describes the present,
>>>>>>>
The point is that for the day to day apps people use, binary compatibility will not be a factor.
Well, for your information, some of us do pay for web browsers (i.e. Opera)
>>>>>>>>
You’re in the extreme minority. And thank you for demonstrating my point. Opera is a cross-platform product (it’s part of their niche appeal) and has to deal with multiple packaging anyway. Doing a few more packages for the main distributions isn’t a big deal. If Linux is successful, you’ll see more commercial software having to do the same thing.
These are some of the examples where open source software *cannot* provide what commercial software can.
>>>>>>>>>>
Right. I never claimed otherwise. I never said that all the tools people used would be open source, just the basic ones. That means for the masses of users out there who do nothing but word process and browse the net, binary compatibility will be a non-issue.
Unless Linux matures to the point where the ABI is consistent across platforms and cross-distribution packaging/dependency handling problems can be resolved, this software will never be available on Linux, except, of course, through Wine + Crossover Office.
>>>>>>>>>>>
First, you’re confusing package/dependency handling with cross-distribution support. The former is a (solved!) technical issue, and the latter is an organizational one. There is nothing stopping you from taking APT/RPM and RedHat tomorrow and declaring it the binary standard. If traditional commercial software becomes important to Linux users, you will see consolidation of this nature — nothing technical is preventing it.
However, I’m asserting that *traditional* (packaged) commercial software will be much less important than you think and also that what traditional commercial software there will be will have to deal with cross-platform issues anyway. Commercial Linux software will not be Linux-only. That would be brain-dead, and would just mean another monopoly. If Linux becomes popular enough to require commercial software, it will be in a competitive software industry with multiple OS vendors, and software makers will have to adapt to dealing with packaging for multiple platforms.
It’s a requisite component of source compatibility with OS X.
>>>>>>>>
Why do Linux users care about source compatibility with OS X? Why not use a proven, open standard that provides the same functionality and source compatibility with multiple OSs?
No, I think you have it backwards. You can *buy* compatibility with Windows and Mac, for ~$2000/developer, or ~$2500-$3000 if you wish to support X11 as well.
>>>>>>>>>
Mr. Commercial software has problems paying for his development tools? Qt pays for itself with the sheer gain in productivity, and software houses pay thousands per developer in development tools anyway. Visual Studio licenses, Office licenses, Perforce licenses, licenses for your groupware tool, your bug-tracking tool, etc, it all adds up to a lot more than Qt. If you’re just making some shareware, then you probably should rethink your business, because OSS software will take away most of your market.
>>>@Christopher X: Does OS X’s software install require you to manually find the package (via CD, internet, etc)? Then it’s already more complicated than APT/Yum/Portage.
Er, manually find? You either double click an installer icon or drag the bundle to your app folder, done deal. There is no “manually find.” What are you referring to? Pop in the CD or download the file, open, install. Done. Manually find?! You’ve really never used OS X, have you? Guess what, I can upgrade my OS and not trash my apps too. I just installed Panther on my new G5 last night, and all my apps survived just fine. I like the packaging system in theory, but it’s just too fragile. Remember, I too love Linux; I currently dig Slackware again, and have used Gentoo, Debian (ugh), and all three BSDs, among other Linux distros of varying obscurity. When I’d upgrade between Red Hat releases back when I was Mr. Linux-desktop-only, I’d back up my /home folder and simply do a clean install; that saved me a lot of headaches. A new version of OS X might break some apps (so far I haven’t had any problems), but it doesn’t, in my experience thus far, seriously trash everything.
It seems that we more or less agree that commercial software is not a viable option on Linux at the present time, although we seem to have differing opinions on its necessity.
You’re in the extreme minority. And thank you for demonstrating my point. Opera is a cross-platform product (it’s part of their niche appeal) and has to deal with multiple packaging anyway. Doing a few more packages for the main distributions isn’t a big deal. If Linux is successful, you’ll see more commercial software having to do the same thing.
Opera is an excellent example of the nightmare of packaging commercial software for Linux, and if I may say so, case in point. Opera is forced to ship:
* An RPM packaged version, dynamically linked with Qt
* An RPM packaged version, statically linked with Qt
* A Debian packaged version, dynamically linked with Qt
* A Debian packaged version, statically linked with Qt
* A tarball, dynamically linked with Qt
* A tarball, statically linked with Qt
This must be repeated for all 3 architectures (as opposed to OS X, where all 3 architectures could be accommodated in a single binary).
That said, I’m running Debian sid. I tried installing the dynamically linked version, but it requires the libqt3-mt package, and I only have libqt3c102-mt. The libqt3-mt package from Woody failed to install with dependency issues. I can hack the dependencies by extracting control.tar.gz from the .deb with ar, untarring it, and editing the dependencies manually. I can also hack the dependencies on the Opera package itself and manually change them to libqt3c102-mt, but this results in missing symbols. Calls to apt-get upgrade with the libqt3-mt package from Woody installed cause apt to want to remove both Opera and the libqt3-mt package.
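For the curious, the hack goes roughly like this (a sketch; the .deb filename is hypothetical, and apt will still argue with the result):

# Unpack the .deb, edit its declared dependencies, and rebuild it.
mkdir work && cd work
ar x ../opera_7.x_i386.deb              # yields debian-binary, control.tar.gz, data.tar.gz
mkdir ctrl && tar xzf control.tar.gz -C ctrl
vi ctrl/control                         # change Depends: libqt3-mt to libqt3c102-mt
( cd ctrl && tar czf ../control.tar.gz . )
ar rc ../opera-hacked.deb debian-binary control.tar.gz data.tar.gz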
As I mentioned earlier, if a commercial developer wants to release a package on Linux, they must release a separate package for every branch of every distribution they wish to support. This makes Linux a nightmarish developer platform for commercial software vendors, or a maintenance nightmare for users of unpackaged commercial software.
If you don’t care about commercial software, that’s fine, but there are those of us who do who are tired of seeing “Install Linux, problem solved”, or posts questioning why commercial software isn’t ported to Linux or suggesting that it should be (e.g. http://osnews.com/comment.php?news_id=4948&offset=15&rows=30#159610)
First, you’re confusing package/dependency handling with cross-distribution support. The former is a (solved!) technical issue, and the latter is an organizational one.
No, you’re simply unwilling to concede the simple fact that it’s impossible to package software for Linux in a distribution-agnostic manner and have the dependencies of that software handled in a distribution-agnostic manner. The “solved” issue has been solved in not just a distribution-specific manner, but a branch-specific manner. This makes it impossible to ensure that software is binary compatible across different branches of distributions without resorting to static linking, let alone between distributions themselves.
A solution to this problem is a prerequisite for point-and-click installers for commercial software, unless software vendors are willing to release software in distribution/branch-specific packages.
Why do Linux users care about source compatibility with OS X? Why not use a proven, open standard that provides the same functionality and source compatibility with multiple OSs?
Ugh, this is going nowhere. The point is that OS X provides a coherent applications framework, something Linux is desperately lacking. In order to cope with this, commercial software vendors need to statically link against the packages they use, or they need some way of handling dependencies in a distribution-agnostic manner.
You seem to have solved your own problem about commercial vendors’ supposed shyness toward developing software for Linux. Just as on MacOS or Windows, what stops commercial developers from providing binaries statically linked to the libraries their product needs? Of course, it’s more bloat. Of course it’s backward and primitive.
But that’s the way it’s done in other operating systems. In fact, if I’m not mistaken, isn’t this how commercial/proprietary drivers are distributed on Linux today? As far as I’m concerned, commercial vendors shouldn’t be linking to dynamically shared libraries or assuming a user has these libraries installed on their system. They should write their own libraries and toolkits, just as they would do on Windows/Mac, and link them statically to the application in question. Which, again, is backward.
Commercial vendors aren’t shy about developing applications for Linux because of packaging issues; that’s far-fetched. It’s because they wouldn’t make a dime doing so. Dynamically linked libraries exist primarily for open source products. Commercial producers should write their own libraries and kits and link them statically. After all, isn’t that what we are paying them for?
Dynamically linked libraries exist primarily for open source products. Commercial producers should write their own libraries and kits and link them statically. After all, isn’t that what we are paying them for?
Opera comes statically linked with a number of open source libraries/toolkits. Here’s a list from Opera’s help:
———————————————————-
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. Copyright © 1998-2001 The OpenSSL Project. All rights reserved.
This product includes cryptographic software written by Eric Young. Copyright © 1995-1998 Eric Young
The Independent JPEG Group
The PNG Development Group, Glenn Randers-Pehrson, Andreas Dilger, Guy Eric Schalnat and Group 42, Inc.
Jean-loup Gailly and Mark Adler
James Clark
Eberhard Mattes
Number-to-string and string-to-number conversions are covered by the following notice:
The author of this software is David M. Gay.
Copyright (c) 1991, 2000, 2001 by Lucent Technologies.
Permission to use, copy, modify, and distribute this software for any purpose without fee is hereby granted, provided that this entire notice is included in all copies of any software which is or includes a copy or modification of this software and in all copies of the supporting documentation for such software.
THIS SOFTWARE IS BEING PROVIDED “AS IS”, WITHOUT ANY EXPRESS OR IMPLIED WARRANTY. IN PARTICULAR, NEITHER THE AUTHOR NOR LUCENT MAKES ANY REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
The elektrans
Nice Graphics © by Pål Syvertsen, Flott Altså
Yes, I know that. But the keyword in that quote is “primarily”. All I’m getting at is that packaging a “commercial/proprietary” product for Linux is no stranger than for Mac or Windows. A commercial software product that requires you to install dependencies or assumes that you have certain libraries installed on your system has a silly development team and doesn’t deserve your hard-earned income, or to be taken seriously for that matter.
All I’m getting at is that packaging a “commercial/proprietary” product for Linux is no stranger than for Mac or Windows.
Windows and OS X both provide standard frameworks for graphical installation of packages, and standard package formats (.msi and .dmg respectively) which integrate with these tools.
A commercial software product that requires you to install dependencies or assumes that you have certain libraries installed on your system has a silly development team and doesn’t deserve your hard-earned income, or to be taken seriously for that matter.
Except there are several standard system facilities on Windows and OS X that are not standard at all on Linux.
The GUI APIs are standard. Every MacOS X system will have Quartz and Carbon. Every Win32 system will (obviously) have the Win32 GUI APIs. The same goes for multimedia APIs and sound APIs, as well as a host of others.
Such facilities are not standard on the Linux side, and may face fragmentation/duplication of effort issues. There’s libxml/libxml2/expat. There’s OpenSSL and GNUTLS. OpenSSH and LSH. OSS and ALSA. GTK and Qt. Gnome and KDE. Lots of choice; very little standardization.
Windows and OS X both provide standard frameworks for graphical installation of packages, and standard package formats (.msi and .dmg respectively) which integrate with these tools.
How to install a commercial binary package.
1). Launch your favorite terminal
2). Unzip or Untar the commercial package.
3). Change to the directory of the commercial package.
4). ./name_of_the_commercial_pre-compiled_binary.
5). Done
For the anti-terminal user
1). Launch your favorite file manager
2). Unzip or Untar the commercial package.
3). Click on the unzipped or untarred folder.
4). Click on the name_of_the_commercial_pre-compiled_binary.
5). Done
It doesn’t get any more standard than that. If the commercial software vendor wants a graphical installer, they can write one.
Except there are several standard system facilities on Windows and OS X that are not standard at all on Linux.
The GUI APIs are standard. Every MacOS X system will have Quartz and Carbon. Every Win32 system will (obviously) have the Win32 GUI APIs. The same goes for multimedia APIs and sound APIs, as well as a host of others.
Such facilities are not standard on the Linux side, and may face fragmentation/duplication of effort issues. There’s libxml/libxml2/expat. There’s OpenSSL and GNUTLS. OpenSSH and LSH. OSS and ALSA. GTK and Qt. Gnome and KDE. Lots of choice; very little standardization.
That has nothing to do with writing, installing and packaging commercial software applications for Linux. A commercial package shouldn’t rely or depend on other open source projects, and it shouldn’t force users to do so, period.
The availability of various packages doesn’t imply that there aren’t any standards in Unix. It only means that Unix users have a choice regarding their tools and environment of choice. This is directly related to users productivity and creativity. In addition it also empowers the users.
Standards are available in Unix; where they are needed, they are applied, and where they aren’t, they are discarded. The only standards commercial application vendors should worry about are coding standards, POSIX standards, kernel compatibility standards and, if the application is graphical, X11 standards. Tell me, what other standards do they need?
Bascule has given the best explanation of Linux’s lack of a common framework that I have ever read.
But you seem not to get it.
I wonder how many people like you are preventing the standardization of Linux.
If Linux were even a little closer to having a standardized framework, the rate of adoption would multiply by the thousands.
Mac OS X is the perfect example of what Linux should aim to be at the functionality level for a common person. And with the freedom of open source, any geek/advanced user should be able to modify it to their taste.
“If we find any issues that need attention, we’ll cut a test11 and so on, but the hope really is that we’ll be done by early December.”
Woo, I know what I’ll ask for this Christmas 🙂
And how long until we see a cross distribution packaging format and packaging tools with GUI integration that provide point-and-click installation?
This already exists. Someone earlier mentioned the Loki Installer, and of course there’s the very promising Autopackage.
Is it really worth buying Crossover Office and consequently limiting a system to a scant number of supported applications in order to use Linux instead of Windows?
Well, considering that the “scant number of supported applications” actually comprises the most popular Windows titles, the kind of programs people have invoked as “must-have”, sine qua non conditions to switching, then yes, I think it is really worth it.
A good example is Disney Animation switching to Linux, and using Photoshop with Crossover Office.
But how about in the realm of preprint, where a handful of zealots claim Linux is gaining ground? Scribus and the Gimp will *never* have Pantone support, and there is no way around this.
Actually, there is a way. The Pantone patents will expire eventually – and in fact, I think it will be sooner than later, considering they’ve been around for quite a while already. As soon as the patents have expired, you’ll have Pantone support in Free Software. In the meantime, someone can use Photoshop for graphics work, with Crossover Office (which, to refer to your previous question, makes it “worth it”).
These are some of the examples where open source software *cannot* provide what commercial software can.
On the other hand, there’s nothing preventing proprietary (please, do not confuse “commercial” with proprietary) versions of Photoshop and Quark to be ported to Linux.
Unless Linux matures to the point where the ABI is consistent across platforms and cross-distribution packaging/dependency handling problems can be resolved, this software will never be available on Linux, except, of course, through Wine + Crossover Office.
The reason such programs are not yet available for Linux has nothing to do with a consistent ABI or dependency problems. It is simply a question of market share within the target markets. I know this for a fact after talking to an ex-Adobe employee. Of course it’s a chicken-and-egg problem: Adobe won’t make PS for Linux until enough graphic artists use Linux, but graphic artists won’t use Linux until Photoshop is available for it… This is where Crossover Office can help, by making it feasible to use Photoshop on Linux and (slowly) increasing the market share among graphic artists.
Er, manually find? You either double click an installer icon or drag the bundle to your app folder, done deal. There is no “manually find.”
I’m pretty sure that he’s referring to the fact that you must first obtain the executable installer on the Web, or on CD-ROM. I.e. if I want to install program Foo for OS X, I need to find the website where it is available, or go to the store and buy the CD-ROM, etc. While with modern Linux package tools, one simply needs to type “apt-get install Foo” or “urpmi Foo” to have the necessary packages automagically downloaded from the Internet and installed, all in the same operation.
Opera is an excellent example of the nightmare of packaging commercial software for Linux, and if I may say so, case in point.
Maybe Opera just didn’t do it right: consider StarOffice (or OpenOffice, for that matter). There is only one installer to download. Same thing with programs distributed with the Loki Installer, or Autopackage.
Again, the reason proprietary (which != to commercial) software is rare on Linux is a question of market share, nothing more. It is very possible to package software so that it installs on most, if not all, distros.
Bascule has given the best explanation of Linux’s lack of a common framework that I have ever read.
You need to expatiate on Linux’s “lack of a common framework”.
But you seem not to get it.
You know what, you are right. I don’t get it.
I wonder how many people like you are preventing the standardization of Linux.
If Linux were even a little closer to having a standardized framework, the rate of adoption would multiply by the thousands.
That’s the highest order of crap I’ve read on this thread. One graphical toolkit; one browser; one media player; one window system; one platform; one desktop environment; one email client; one IRC client; one library; one package manager. That seems to be your definition of standardization, right? Assuming that is your definition, then you are right again: many people like me are preventing standardization.
Mac OS X is the perfect example of what Linux should aim to be at the functionality level for a common person. And with the freedom of open source, any geek/advanced user should be able to modify it to their taste.
God forbid! Let’s see. It’s not reusable. It’s not modular. It’s not transparent. It’s definitely not diverse (there is only one right way in the Mac world, and it is Mac’s way). It’s not extensible. It lacks composition (you have to write a library or tool for every single application you develop). It’s not portable. It definitely cannot withstand the robustness or stress of any Unix OS. It doesn’t attract casual programmers. I’m forgetting too many to mention.
So what’s great about the Mac? It creates an illusion of being pretty. An awful lot of time is spent on animation and eye candy, in the name of user interface advancement and ease of use. It’s an alternative to Windows. Linux has copied all it would and could from the Mac, and that’s its look-alike themes. Instead, I think the Mac should learn from Unix, and in fact it is.
On the other hand, there’s nothing preventing proprietary (please, do not confuse “commercial” with proprietary) versions of Photoshop and Quark to be ported to Linux.
Again, the reason proprietary (which != to commercial) software is rare on Linux is a question of market share, nothing more. It is very possible to package software so that it installs on most, if not all, distros.
Perhaps I might take the time to respond to your posts in full, however you have prevented me from doing so by goading one of my pet peeves.
FOR THE LOVE OF GOD, STOP ARGUING SEMANTICS
I *loathe* having a set of terminology dictated to me, and do not find your terminology to be any more or less accurate than mine.
Do you understand what I’m saying, or am I confusing you? If I truly am confusing you, then there is a need to raise questions about terminology. However, if I am not and you are simply arguing semantics for the sake of semantics, you are engaging in the least productive activity ever created since the dawn of humanity.
FOR THE LOVE OF GOD, STOP ARGUING SEMANTICS
It’s not my fault you use an incorrect definition of the word commercial. One can easily prove that commercial != proprietary by buying a SuSE, RedHat or Mandrake boxed set.
Whether it’s a pet peeve of yours or not, I really don’t care. The right term to use is proprietary. Now, you don’t have to use that term – you’re free to say what you want – and I’d be the only one wasting energy on the subject if you didn’t overreact in such a way…
…you’re still wrong about the reason there are few proprietary apps available for Linux (with the notable exceptions of Maya, Oracle, DB2, Lotus Domino and Smoke).
Well, no, that’s not the point. You have missed his main point.
It was a little over a year ago, I believe, when computing services at my university audited my department. They found ~$10,000 worth of unlicensed Microsoft software, consisting primarily of Windows 2000 and Office 2000.
We were given a choice: pay $10,000 for the licenses to the Microsoft software, switch all the computers back to Windows 98 (although some computers were hand-built and didn’t even have a license for that) and still shell out ~$5,000 for Office, or switch all the computers to Linux and pay ~$2,000 to transfer the licenses for other software we use over to Linux. What’s worse, computing services gave us only a week to fix the problem, or we would be handed over to the ravenous lawyers in Microsoft’s legal department. We certainly didn’t have an extra $10,000 in the budget, and no one wanted to go back to Windows 98, so the decision was made to move to Linux.
A week later all the computer systems were running Debian with KDE and StarOffice (we have since moved to OpenOffice). We had purchased the Linux version of MATLAB 5, and the installer ran without a hitch. However, when we went to start MATLAB we were informed that it couldn’t find libc.so.5. Debian made this easy to install, but when we tried to start MATLAB again we were greeted with a friendly “Segmentation fault”.
Our sysadmin thought it might be compatibility issues with Debian. After hours of manually rearranging libraries and playing with my LD_LIBRARY_PATH variable, MATLAB was finally running. We had to create a directory with a custom collection of libraries strictly for the purposes of running MATLAB. This really pointed out to me the problems companies face when trying to release software for Linux; would installation always be this big of a nightmare? I had been hearing for years that Linux was ready for the home user, but if it was this much of a hassle to install one program, how would the home user ever survive?
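For anyone facing the same thing, the workaround amounted to a wrapper script along these lines (paths are illustrative, from memory):

#!/bin/sh
# Point MATLAB at a private directory of older libraries before launching it.
LD_LIBRARY_PATH=/usr/local/matlab5/compat-libs:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
exec /usr/local/matlab5/bin/matlab "$@"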
This is just one sad chapter in the nightmare known as Linux. I collaborate on research papers, but unfortunately OpenOffice is unable to open the document templates for these papers. I’m forced to convert the template to a PDF in the Windows computer lab on the other side of campus, then reconstruct the template by hand in OpenOffice.
In general, what I have noticed about open source applications is that they are full of small problems which, when combined as a whole on a daily basis, become very frustrating. Getting the software to do what I want is a lot harder than in Windows, and often things simply don’t work.
Installing software is the biggest headache of all. Part of the problem with Debian is that all commercial software focuses on RedHat, and software made for RedHat doesn’t always work on Debian.
Overall, I would say that despite the fact most people in this department have a Unix background (most of us, myself included, started on AIX), we would be much happier on Windows. Despite being virus/worm free and feeling sad for my Windows-using colleagues dealing with things like Blaster, I would rather not have to waste hours of my time reconstructing document templates for my papers because my word processor can’t open them.
Perhaps I can convince my boss to buy Crossover Office and a copy of Microsoft Office. I would find it ironic, however, if the solution to my Linux woes was Microsoft software.
“”It’s not my fault you use an incorrect definition of the word commercial. One can easily prove that commercial != proprietary by buying a SuSE, RedHat or Mandrake boxed set. “”
Best go check your definitions again dude.
Proprietary: “protected by trademark or patent or copyright;”
Which, as I’ve been arguing for the past 3 years, makes SuSE/RedHat/Mandrake/ALL GPL SOFTWARE as proprietary as anything else.
It’s not my fault you use an incorrect definition of the word commercial.
I can’t believe you’re dragging me into a semantic argument… despite the fact that my words managed to convey the intended meaning, you insist on *continuing* to argue that the meaning is not precise enough for… what now? What harm does my using the term “commercial software” when you would prefer I use “proprietary software” do? Attempting to force me to use one term over the other simply because you do not find the particular choice of words precise enough for your own high standards is a complete waste of time. Let me say this right now… you are incapable of making me change my semantics.
So go ahead, argue semantics all you want, as you are clearly incapable of countering my points.
One can easily prove that commercial != proprietary by buying a SuSE, RedHat or Mandrake boxed set.
The fact that there are commercial distributions of open source software does not change, in my mind, the semantics of “commercial software.”
Perhaps you’d actually like to address my points instead of dodging them and engaging in semantic arguments?
Whether it’s a pet peeve of yours or not, I really don’t care.
You clearly care as you’ve broached this issue multiple times in multiple threads, and point out that my preferred set of terminology clashes with your own.
The right term to use is proprietary.
In your mind, not mine.
Now, you don’t have to use that term – you’re free to say what you want – and I’d be the only one wasting energy on the subject if you didn’t overreact in such a way…
Wake up… you’re the only one who cares. On no thread has anyone ever jumped to your aid and said “Yes! Commercial software is the wrong term to use!”
Contrast this to this thread, where the original issue broached was that Linux has packaging problems. I added that packaging is especially problematic for binary only software releases, as there is currently no standard way of packaging software for release in a cross-distribution manner (save for tarballs), and with the lack of a common application framework the only solution is to statically link the entire executable.
I’ve received a “this is being addressed” (re: Loki, Autopackage) response from you with absolutely no supporting argument as to how well either of these could work, how long until they will be production quality, or how long until they are standard across all Linux systems.
You dodged the rest of my questions by saying that what I consider problems aren’t problems to you. Well, please read through the thread and see how many people agree with me that the lack of a common application framework on Linux is a problem. Unlike you I’m not a lone minority in what I consider a problem.
So go ahead, argue semantics all you want, as you are clearly incapable of countering my points.
Actually, I’m doing both. I did counter your points: there are “distro-agnostic” ways of installing software available on Linux, and the “no standard package” argument to explain why there’s little proprietary software is groundless, as a) there are ways to distribute apps in a single package (see OpenOffice.org, the Loki installer, etc.) and b) making multiple packages of the same program is trivial, at most a day’s work.
If you want to bicker about semantics, then why don’t you discuss this with Err, who now says that all Linux distros should be considered proprietary. (Hint to Err: words can have more than one meaning.)
Yes, that was nitpicking on my part, just like saying one should use GNU/Linux instead of just Linux. Just ignore those remarks if they offend you…
I’ve received a “this is being addressed” (re: Loki, Autopackage) response from you with absolutely no supporting argument as to how well either of these could work, how long until they will be production quality, or how long until they are standard across all Linux systems.
Loki has been “production quality” for a while. It was used for their games distribution, and is currently used by Codeweavers to distribute their Crossover Office product. Proprietary software vendors can use it to package their product, or they can develop their own installer, like the one used for OpenOffice.org – which is also “production quality.”
Autopackage works very well – you can try it on their website. It might not be completely ready for all apps – on the other hand you can already install the Gimp (a rather large program) with it. So it might be sufficient for some proprietary applications as well.
You dodged the rest of my questions by saying that what I consider problems aren’t problems to you.
Actually, no. I specifically answered one assertion that you made: that the non-standard packaging for different Linux distros (what you call “packaging problems”) is what prevents major software vendors from porting their apps to Linux – which I know for a fact is not true in the case of Photoshop, and which anyway doesn’t hold once you think it through for a few minutes.
Well, please read through the thread and see how many people agree with me that the lack of a common application framework on Linux is a problem. Unlike you I’m not a lone minority in what I consider a problem.
Please. The fact that some people agree with you is no indication that you are right. Facts are facts, and in this case the fact is that no one is forcing proprietary software vendors to use RPMs/Debs. There are distro-agnostic installers available to them. Period.
Now, will you dodge my counter-arguments?
Maybe the community should start supporting and using autopackage…
http://www.autopackage.org
“”Hint to Err: words can have more than one meaning. “”
Hint to Great Cthulhu: We don’t all want to use yours :>.
***
I think part of the problem here is there’s no standard base for Linux distributions.
What would be handy is for the distribution maintainers (or possibly some collaboration between the distribution maintainers, the kernel developers and the GNU userland developers) to come out with a set of guidelines as to what makes up the set of base software that all Linux distributions should have (along with a clarified, Linux-specific FHS). Then developers would at least have some guidelines to work from when deciding which base libraries/apps will be available to their applications on any Linux distribution, and where those libraries will be.
As it stands now I could quite happily create a Linux distribution that breaks compatibility with everything up to and including “./configure && make && make install”, yet I would still be able to sell it to people as a Linux OS (of course I doubt they’d buy :>).
Choice is good, but IMO it makes a great deal of sense that choice should be the cherry on top of a standardised base.
“In your mind, not mine.”
Sigh. Check your dictionary Bascule. It exists for a reason.
Mine CLEARLY states the difference between proprietary and commercial.
They are NOT interchangeable. You’re using the definition in the wrong way. Hard to blame someone else when s/he uses it according to the dictionary, isn’t it?
Well, there is the Linux Standards Base (LSB). Right now it’s not much, but it’s a start…
>> That’s the highest order of crap I’ve read on this thread. One graphical toolkit; one browser; one media player; one window system; one platform; one desktop environment; one email client; one IRC client; one library; one package manager. That seems to be your definition of standardization, right? Assuming that is your definition, then you are right again: many people like me are preventing standardization.
If for a moment you stop the mess in your brain, sit down and think, you’ll realise what I’m talking about.
I’m not saying that we should stick to one browser, or any particular toolkit, or any other piece of software, for God’s sake.
What I’m saying is that all the linux distros should agree on a common framework, set of tools or whatever you want to name it.
I mean: SUSE, RED HAT, DEBIAN and all the others should:
Include a set of tools common to all of them, apart from allowing the user to install whatever they want to.
All the distros should share one toolkit in common, one set of libraries in common, and so on, apart from whatever the maker of the distro would like to put in it.
And please do not tell me that this is already happening because it’s not true.
Don’t you understand!? Is it so difficult?
Read this and then think.
Isn’t that a bit of a hack? (Autopackage)
Yeah, it is. Unfortunately it’s necessary, at least for now. The problem (that software on linux can be hard to install) is caused by the fact that Linux is open, and as such people tend to create differing versions of it. One of the most obvious ways in which they differ is file paths and library versions, but distros can differ in other ways too. There are two solutions to this: either ALL distros must conform to some standards such as the LSB, or a package manager must be built that is powerful enough to deal with the myriad differences.
Also, some dependencies are harder to detect than just looking for a single file. For instance the genst utility requires the dialog program, a small executable that displays a variety of screen widgets and forms using ncurses. Unfortunately, there are at least 3 different forks of dialog, and not all of them support a --version flag. Detecting them can be a bit of a game, but is easy when using scripts. You can also create dependency skeletons that don’t directly correlate to one package: virtual dependencies such as “sound server” are supported, where the script will scan for known sound servers that the app can use.
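A rough sketch of the kind of probe meant here (the fork names are just the common ones; real detection scripts do more):

#!/bin/sh
# Accept any known dialog fork rather than testing for one exact binary.
for candidate in dialog cdialog whiptail; do
    if command -v "$candidate" >/dev/null 2>&1; then
        DIALOG="$candidate"
        break
    fi
done
[ -n "$DIALOG" ] || { echo "no dialog-like tool found" >&2; exit 1; }
echo "using $DIALOG"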
Finally, it has to be said that standards are a Good Thing, and many major distros now adhere to the LSB tightly. However, even though they do, the LSB doesn’t standardise everything, and there is often still some slack in these standards. The “Filesystem Hierarchy Standard”, for instance, does not define any particular location for KDE/GNOME/Enlightenment/WindowMaker and so on. As such, distros can and do place them in different locations. As the FHS is in some sections rather vague, this problem isn’t going to go away anytime soon. So autopackage attacks the problem from the second angle, with the hope that the two solutions meet somewhere in the middle.
Original Document: http://www.autopackage.org/faq.html