Obviously there are a lot of Linux distros out there, and it’s hard to differentiate some of them from one another. Arch is “lightweight and flexible,” but it’s not the only one. Do you consider this proliferation of distros a strength or weakness of the Linux movement? Is it an unproductive duplication of effort, or an environment of teeming fecundity from which springs unanticipated innovation, or somewhere in between?
Thomas Bächler:
It’s about diversity and choice. In the Windows and Mac OS world, you have a
monoculture. The Linux world comes in many flavors. No matter what the banners
say, none of the distros out there is like Arch, which is the reason it still
exists.
Allan McRae:
I do not care what other people do with their time, much like I doubt they care
what I do with mine. So I have no issues if someone rolls yet another distro to
scratch an itch they have. You cannot tell people what to do with their time.
Aaron Griffin:
In the spirit of what the others said: this isn’t a competition. We don’t make
Arch so that we can win the most users, or get piles of cash. We make Arch
because this is the OS we want to use. Others might not agree and so will use other
distros or even make their own.
Tobias Kieslich:
The development in Linux land is evolutionary. People make efforts to improve
things. Some projects succeed, while other projects' ideas survive and get
incorporated into other projects, be they Linux distributions or actual
software projects. Yes, it creates overhead and the wheel gets reinvented every
once in a while. However, it ensures that the best ideas stay, and they usually
aren't turned down before they have at least been tried. This makes it more
interesting and arguably better tested than development that follows a given
agenda.
"Lightweight and flexible" can mean more than one thing. For one user it means
that it runs sufficiently well on older hardware. For me it means hackable: take
packages, recompile them, try stuff using the package manager, because it is
so easy. Embrace diversity; I consider it a strength.
Ronald van Haren:
I guess every distribution has a purpose; some aim to provide an average desktop
system for the average user, others have a more specific purpose. Probably most
distributions are started because those starting them see a need for them
themselves. It may just happen that some others like them as well.
Pierre Schmitz:
While we might share individual ideas with other distributions, I don't know of
any other system with the same design principles and goals as Arch. But go
ahead and name one.
In general, I don't see duplicated work as a problem, even if projects are very
similar. In the end there is some kind of evolution among distributions. And
there is a good chance that even if a project dies, some of its ideas might
be adopted by others.
I believe we all benefit from people starting forks or working on crazy ideas.
Dan McGee:
It is definitely both a strength and a weakness. The innovation aspect is exciting
to see; without the variety of distros we have today, we wouldn't see Linux
running on everything from my lowly wireless router to big academic computing
clusters. However, I think you do get some duplication of effort that is
unfortunate. Some distros heavily patch their packages without trying to get
these changes upstream where they will benefit everyone; I think Arch excels
here by not dragging around a lot of patches and getting them upstream if at
all possible.
Hugo Doria:
I think that a lot of diversity can hinder the growth of Linux. On the other
hand, limiting the number of distributions is not the solution either. I think
the key is in the balance.
I am not against the creation of new distributions if they really have a
reason to be born.
Roman Kyrylych:
It annoys me to see yet another Ubuntu clone on DistroWatch every week, with a
different set of included software and a different default wallpaper, while
distributions that offer something new are rare.
From my point of view, there is duplication of work in many areas of FOSS, but
you cannot command developers to do this and not do that. Everyone is free
to do whatever (s)he wants with his/her time.
The teams behind some (mostly early) Linux distros parlayed their projects into lucrative professional services businesses. Do you think that door has closed? Will there ever be another big “Linux company” to emerge? Why/why not/how?
Dieter Plaetinck:
New ones can still emerge, though it seems hard to compete in the world
of enterprise Linux distros; most of it seems to be just politics. There are
a lot more smaller companies that just offer Linux/open source services,
and that's a more vibrant market imho, though none of this has anything to
do with Arch.
Allan McRae:
My magic 8-ball says “maybe”.
Aaron Griffin:
I think it's completely possible. But the days of "software as goods" are behind
us. A successful business model in the current ecosystem provides a service above
and beyond what the software provides. That said, I don't know if selling
services for just an OS is feasible anymore; that section of the market is
cornered by the existing companies.
Tobias Kieslich:
That's hard to predict, and it's probably not a function of new Linux distros
coming out; it rather depends on the requirements of the market. See a demand
for something that can be addressed by a specialized Linux distro, build it, and
you have a product. Sure, it's likely that these spots will be occupied by
incumbents, but I wouldn't rule out a new company rising.
Ronald van Haren:
I suppose it can happen. It happened once when Canonical stepped in, it can
happen again. Who knows.
Dan McGee:
I don’t think you will see too many more big (e.g. Red Hat type) companies
emerge. I do see plenty of room for smaller, more specialized companies that
develop a business model around a free software base, however.
What decisions have you made in your development philosophy that have gone against conventional wisdom? Have those decisions proven important in making Arch a superior distro?
Allan McRae:
Rolling release == good. Simple build system == good.
Aaron Griffin:
Make simple choices to cover the common case. Let the edge-case users do a
little extra work to get what they want (for example, rebuilding a package via
its PKGBUILD to add more complex options). This would be one implementation
of the KISS principle.
Pierre Schmitz:
We don't provide any tools to configure your system or to make updates between
major versions of packages smooth. Our users are responsible for updating their
configuration files and making sure their scripts still work with the latest
version of PHP, for example.
This philosophy makes our lives as packagers a lot easier and makes it possible
to provide the most recent package versions with a relatively small team.
Dan McGee:
I didn’t make the decision, but when Judd started Arch and decided to write his
own package manager and tools on the basis that what was out there sucked, it
was a rather unconventional step. The standard was deb and rpm; we are one of
very few distros not using one of these two formats.
Hugo Doria:
Let users tailor the system the way they really want. For real.
I haven't read the whole interview yet (I will get to that a little later), but I have to say this distro gives new life to dependency hell. I enjoy using the distro, but the x64 build has less than the optimal number of applications in it, so they point you to the AUR, which holds user-made packages for Arch. The only problem is that sometimes one application can require 10+ dependencies, with many of those dependencies branching out further. I'll use Wine as an example, since it is the most recent one I can think of.
http://aur.archlinux.org/packages.php?ID=7915
So you start installing the package using pacman and the AUR and you hit a snag saying you need a dependency. The first thing you do is check the list of dependencies that web page says you need, download the one that is missing, and it then tells you that you need a dependency for that dependency (or three, and these can branch out further). This can go on for a while; it hit 10+ once, and one time there wasn't even an end (meaning no one had made the package).
This seems to be mostly an x64 problem though, so it might not be a big problem for most. Also, I only really ran into it while trying to get Wine going initially. So if someone who is knowledgeable in using Linux wants to give this distro a try, it is a very good and well-made distribution, apart from that slight annoyance I ran into. The only other problem I can think of is that everyone now only makes packages and designs everything for Ubuntu, so you get fewer packages that are easy to install and are mostly at the mercy of other users or your own time. Which is to say, business as usual.
The example you gave is, well, kind of annoying, as it seems the Wine project doesn't have real x64 support yet, and it depends on many other packages that need their 32-bit versions to work properly with Wine.
The problem here isn't Arch, but the third-party development, which is incomplete or has other issues.
That is what the AUR and ABS are for: edit the PKGBUILD to your own liking.
I do this all the time when I want to cut down on dependencies or want to add/modify something.
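Roughly, the workflow looks like this ("somepackage" is just a placeholder here, and you need to have run abs once to populate the tree):
$ cp -r /var/abs/extra/somepackage ~/builds/ && cd ~/builds/somepackage
$ vi PKGBUILD                  # trim depends=() or tweak the ./configure flags
$ makepkg -s                   # build the package; -s pulls missing build deps via pacman
$ sudo pacman -U somepackage-1.0-1-x86_64.pkg.tar.gz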
To be fair, the AUR is completely community-driven. You will often find many versions of the same software, some of which don't even have an owner anymore. The biggest problem is finding which version everyone else is using. I realize that there is a tool called yaourt that makes the AUR integrate seamlessly with pacman, but from the comments I've read it breaks more than it fixes.
I consider Arch's package management system (pacman) to be the best of any Linux distribution. Part of the reason is that it is flexible enough to let you decide which sub-packages you want to install. With KDEmod, for example, you can decide whether or not to install Konqueror (I don't), or whether you want Xine, MPlayer, GStreamer, etc. as a Phonon backend. It's hard to explain to those who haven't used pacman, but the elegance of Arch's package management system dawned on me a few weeks ago when I was using FreeBSD ports (which, until Arch, I had always considered the gold standard). I was installing KDE (make config-recursive), and it was pulling in all sorts of dependencies (Samba, Gnome-vfs, Mozilla Firefox) that I had no clue what they were being used for. While other package management systems may have gotten lazy with their dependencies, pacman still emphasizes minimalism.
It’s called yaourt.
“So you start installing the package using pacman and aur (…)”
That's the first problem. Start using yaourt instead:
http://wiki.archlinux.org/index.php/Yaourt
Dependencies sometimes don't work, but you only have to go and get them yourself, getting into dependency hell, when the package in question is very broken.
You can always automatically build all those packages with yaourt.
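For what it's worth, yaourt takes the same flags as pacman, so (if I remember the syntax right) it is just:
$ yaourt -S bin32-wine     # fetches the PKGBUILDs from the AUR, builds and installs them, AUR deps included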
Also, you can find bin32-wine and all its dependencies here: http://arch.twilightlair.net/games/x86_64/
The problem with your example above is that Wine is a 32-bit app and it requires a couple of 32-bit dependency packages. Since Arch's 64-bit implementation is very lean, there are few (if any) 32-bit packages in it.
But like all of Arch, if you read the forums, you will get easy instructions, complete with community repos that allow you to install it in no time.
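Adding such a community repo is usually just a couple of lines in /etc/pacman.conf, something like this (the repo name below is my guess; it has to match the name of the repo's database file in that directory):
[wine-games]
Server = http://arch.twilightlair.net/games/x86_64/
After that, a plain "pacman -Sy bin32-wine" should pull it in.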
Wine32 dependencies are the same no matter the distribution (package names may differ, but the deps are the same). Check on Ubuntu or Fedora and compare. But more importantly, why would you want a 64-bit OS to run a 32-bit Wine, with the extra 32-bit libraries running side by side, taking double the memory?
Do you have more than 4 GB? Otherwise it is a waste. Maybe a 32-bit OS is better for you.
0. bin32-wine is 64-bit; wine is 32-bit.
1. 64-bit WINE will not run 32-bit Windows apps.
2. If you have more than 3.5GB, or more than 2.5GB, with a good video card, 64-bit is very handy, even before its other performance benefits come in.
3. Why should anyone be dictated to about which OS they should run, just to use multiple layers of unsupported third-party software?
Wrong. Bin32-wine is NOT 64-bit, of course (why would it be called bin32 then?); it's 32-bit with a wrapper for x86_64 Arch. As to your second point, I have bin32-wine installed:
[molinari@Helios ~]$ pacman -Q | grep wine
bin32-wine 1.1.36-1
and I'm happily playing Morrowind on it.
The wrapping is the important part. Wine installed from the plain wine package will generally not run (though you can make that happen…).
All of the bin32 packages were 64-bit, last I knew. On a 32-bit system, there is no need for them.
It may or may not be an Arch x64 problem. But what Arch really tells you (and others hide from you) is that when you install Wine on a 64-bit Linux box, you end up installing a lot of 32-bit dependencies, most of them Xorg libraries.
While x86_64 processors can run 32-bit code side by side without penalties, when you launch Wine you are loading a pile of 32-bit Xorg libs with it, which makes it less efficient than running Wine on a pure 32-bit system.
The link below compares the same Ubuntu edition in 32-bit and 64-bit. You will notice that games do not gain much performance, but server and encoding software do benefit from the change.
http://www.phoronix.com/scan.php?page=article&item=ubuntu_32_pae&nu…
As I said in my previous post, if you have more than 4 GB, or use some software that really uses the 64-bit architecture, stay with a 64-bit Linux; otherwise a 32-bit edition is better for you.
And I’m not dictating, just telling some facts.
There are tools like yaourt that will resolve the dependencies in the AUR. HTH
This is perhaps the best description of Arch Linux. It brings awareness and transparency to its users. Whereas many other distros attempt to create an environment that is best suited for one or more particular purposes, Arch Linux provides you with the tools to design and build your own environment.
I am an Archer myself and I have a question: how do I ensure that the latest updates are well tested and won't break my system? With a rolling release, before you can fix something the next release/update is out and it could still be broken. The point is, Arch is not meant for someone who cannot fix issues for themselves. How different is Arch from, say, Debian testing/sid in terms of what to expect when one upgrades the system?
Simple answer: Don’t use the [testing] repo.
That said, it doesn't completely avoid problems. But I'm highly impressed at the relatively small amount of breakage, and its relatively minor nature, in a rolling release system like Arch's. It is orders of magnitude better than when I tried Gentoo a couple of years back (Gentoo may have improved since then).
On the flip side, using [testing] and providing feedback helps get packages better tested.
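To be concrete, [testing] is just an entry near the top of /etc/pacman.conf, and it ships commented out; leave it that way and you only ever get the stable repos. Roughly:
# /etc/pacman.conf (excerpt)
#[testing]
#Include = /etc/pacman.d/mirrorlist

[core]
Include = /etc/pacman.d/mirrorlist

[extra]
Include = /etc/pacman.d/mirrorlist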
Well, on the one hand, you just can't ensure that packages are well tested if you want the latest stable release. As the Arch devs say in the interview, Arch catches lots of gotchas because the distro is an early adopter.
On the other hand, important packages ARE tested in the testing repo. And if something slips by, the Arch devs are very, very fast in providing fixes. If you see an upgrade to a major component (X, kernel, etc.) you can wait a few days and monitor the forum for posts describing issues with the update.
And lastly, the best way is to join the testing team – which basically means enabling the testing repos, and writing useful bug reports.
That said, I'm in no way an expert. I look at a PKGBUILD and I have no clue what I'm looking at. I'm not a programmer; my day job and my education have nothing to do with computers. I just got curious about Linux some 7-8 years ago, and I've been stuck with it ever since. It suits my needs. And of all the distros I have tried, including the major "well tested" ones, Arch still seems to be the most stable of all. Well, Mandriva was fairly stable, but Kubuntu has been a nightmare since Gutsy (you can't get more UNTESTED than that), and the rest fluctuated greatly in quality.
So there are no assurances, but my experience tells me that these guys know what they are doing. And because of the simplicity, transparency, and clarity (staying as close to vanilla as possible) of the system, you get a very stable distro overall.
On two computers, I have installed Arch with KDEmod as a secondary OS. Arch is one of the distros to use if you like KDE 4. Their KDE packages are clean, stable, and up to date.
On both computers, my primary OS is Windows. I still like that best for desktop computing; its graphical layer is just better than Xorg with less advanced drivers. I like Arch as a secondary desktop OS because it allows you to experiment with Linux, and it boots up very fast.
For servers, I recommend Ubuntu Server: easy to install and administer. Arch just needs too much care if you are depending on a server, although if you have enough time, an Arch server could be quite interesting.
I wouldn’t unless you don’t mind having your system borked from an update.
My advice is to use FreeBSD or CentOS. I'd trust either team over Canonical any day. I'm still not sure what Canonical's 300 employees do most of the time. Foosball, perhaps?
I've run an Arch server for a few years now and find it needs very little care, because once you set it up how you want it (which should be done at install time), all you need are the updates from the rolling release.
Furthermore, contrary to your reasoning, I've always ranked "easy to administer" quite low on my list of requirements when choosing a server OS:
I'd sooner spend longer configuring a system, but have it streamlined for my specific requirements and understand the system from the ground up, than have a server that took 10 minutes to set up but leaves me scratching my head if/when things act abnormally.
Besides, as I've already stated above, a good administrator should only really need to spend the initial setup time and then use (custom) scripts to automate any lengthy or regular jobs to make future administration relatively easy (even on the trickier systems).
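As a rough sketch of what I mean (a hypothetical helper, and it assumes a working local mail command), something like this dropped into /etc/cron.daily covers the "what needs updating?" chore:
#!/bin/bash
# check-updates: mail root a list of pending package upgrades
pacman -Sy > /dev/null                                 # refresh the sync databases
pacman -Qu | mail -s "pending updates on $(hostname)" root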
Do you mean Ubuntu Server LTS? Because regular Ubuntu, like other fast-release distros (yes, I mean Fedora), has a very short lifespan. In a work environment you don't want an OS that you need to reinstall every year and a half because its support is over. In that scenario, the need to have the latest version of everything is not important, but having security fixes is. So either a long-supported distro or a rolling one is a must.
So, for servers, are better suited:
Ubuntu LTS
Red Hat Enterprise $$$
CentOS
Debian
Gentoo (even better with hardened profile)
Arch (as a developer said, if you know what you are doing)
I personally used Gentoo hardened for over 6 years without having to reinstall it. Can your Ubuntu do that?
Actually, I have used an Ubuntu server for some years, without reinstalling or package conflicts. And yes, always running the latest version of Ubuntu. A release upgrade is not that hard.
I’m glad it has worked for you but I don’t trust Canonical after they have broken Dell Ubuntu desktops and notebooks repeatedly with updates.
If they can’t be bothered to make sure their updates won’t break their top partner’s hardware, then why would you trust your own hardware to them?
Well, I guess I have more trust in the server edition of Ubuntu than in the desktop edition. It might have something to do with the fact that Ubuntu is largely based on Debian. The server edition is more stable, less complex, and less modified compared to the desktop edition.
That’s why I like Arch on my desktop.
I'm glad it worked for you. The problem with dist-upgrade is that there are so many big changes between distribution releases that it can bring all sorts of issues, like configuration changes, packages that no longer exist, dependency changes, etc. By using a rolling system and updating frequently you are able to isolate those problems on a one-by-one basis, making them easier to fix and shortening downtime.
After trying many distributions, Arch Linux was the one that finally made me give up completely on Windows. I haven't had Windows installed on my machine since I started using Arch Linux, and I haven't touched or used it since!
Arch is by far the best Linux distro IMO, but in terms of quality it still doesn't match OpenBSD … once you use OpenBSD and its ports and package system, which IMO is superior (in terms of consistency) to any Linux or BSD release, you won't want to come back.
I happily run Windows 7 and OpenBSD 4.6 … a great combination … Windows for .NET development/Outlook (work), OpenBSD for other dev stuff (Python, Perl, Ruby, etc.).
Horses for courses.
I too use OpenBSD and find its quality and simplicity refreshing. The tradeoff one makes (for “general” desktop use) is that the ports aren’t updated for a release (or, as of late, -stable) due to the fact that the devs there prefer to focus their time updating the ports for -current. In addition, the OpenBSD approach to security, openness of all code, etc. puts some apps out of reach (wine, high-performance VMMs, some drivers,…). If you need those things, OpenBSD is not really an option.
For what Arch does (rolling release) its quality is exceptional. I’ve been running it as a general-purpose desktop for a couple years with little (and typically minor), if any, breakage. The quality difference between Arch and Gentoo circa 2+ years ago (which I left in frustration before finding Arch) is a chasm measured in parsecs.
Another interesting (and spot-on, IMO) comment Aaron Griffin made in the interview is that he was worried about Linux getting more interdependent and intertwined. This worries me as well, as I don't want Linux to become the wipe-and-reload-once-a-year-to-clean-out-the-mystery-problems mess that Windows is. I'm keeping an eye on NetBSD and DragonFly BSD in case that happens.
True enough. Performance has always been an eternal issue with OpenBSD as well. But that was not really the question here.
Well, while the quality of this "rolling release" methodology may be good, there are still breakages, as admitted by the team. I left Arch behind for exactly this reason. If only there were a binary distribution that was rolling release but not bleeding edge.
Yeah, this was an insightful comment. And personally I think Linux already is largely that. To borrow your cheerful expression, you already have to wipe-and-reload-once-a-year-to-clean-out-the-mystery-problems with things like Ubuntu and Fedora. Except that you actually have to do it twice a year.
As the team also discussed, there is a great danger for distributions like Slackware and Arch Linux when more and more Windows-registry-like *Kits and HALs emerge as more or less necessary dependencies. Perhaps the big-name distributions have too much power over certain "upstreams".
The complexity will bite. Mainstream Linux becomes more and more like Windows, with all of its flaws, with each passing day.
Maybe because its development model is more server-oriented than desktop-oriented. Want to run a BSD desktop? Go FreeBSD.
What’s the biggest challenge facing the overall Linux movement today? What’s your prescription for addressing it?
Aaron Griffin: Attitude. People have crazy attitudes about Linux and free software in general. Some people act like it’s some sort of holy grail that’s going to save us from global warming, swine flu, and World War III. It’s just software. Get off the high horse.
This is one of the most intelligent statements I have read in a long time.
I would like to add that his opinion is valid for the software industry in general (other operating systems, web browsers, word processors, etc.). It's just software.
This is what I don’t like about Linux:
What part of the Arch Linux development is the most active?
Thomas Bächler: Definitely the package update monkeys.
Allan McRae: Packaging.
Aaron Griffin: Packaging is by far the most active part, followed closely by Pacman development.
A total waste of man-hours. Windows and OSX are much more efficient in that the developer builds a single package and it’s ready for use.
They seem like a competent team, but good lord, why do all these people want to work on building packages for yet another general-purpose distro? My god, so boring.
I don’t know what you do with your time but it’s likely a lot of people would consider it a total waste of man-hours. Similarly you might consider whatever I do with my time a total waste of man-hours as well.
The important question here is whether Arch provides something other distros don’t. IMO unlike a lot of distros it does.
As the devs say in the interview, wheels do get reinvented from time to time. So be it. On the plus side we end up with a huge choice of distros to play with til we finally find one that fits us perfectly.
Unlike the Windows and Mac OS worlds Linux users have choices. This is a good thing.
I’m not an Arch user but I’ve nothing but admiration for this distro and the people who make it.
It’s not a question of one’s hobbies, it’s a question of whether or not it makes sense to have distro teams spending most of their time preparing packages when there is a common goal of greater improvement.
There's a surplus of general-purpose distros that don't do anything to distinguish themselves from the others. Most of them might as well be theme packs. You said you liked Arch Linux because of its quality, but that is what everyone says about their favorite general-purpose distro.
That’s very true, especially true of the plague of *buntu-with-a-different-wallpaper distros. I’d say Arch does distinguish itself very well with its KISS philosophy, rolling updates and relatively vanilla packages.
While someone who’s purely an end user might not notice the difference between Arch and another distro (they might not even notice the difference between KDE/Gnome/XFCE/etc) your average OSnews reader should appreciate the differences under the hood.
Actually, for FOSS this methodology makes perfect sense:
* nobody knows the distro better than the developers behind the distro. So it makes sense that nobody is better qualified to build distro-targeted packages than the distro maintainers
* You can’t expect application developers to build dozens of packages for every single distro out there. They simply don’t have the time nor the motivation. So instead they should be concentrating on making the application itself as complete as possible
* If you expect application developers to build the packages, then you'd find some distros would lack basic applications, because the application developers happen to dislike some distros and thus give those distros lower priority in packaging.
* FOSS software doesn't just run on Linux. There are the *BSDs, OpenSolaris and even a few non-*nix OSs out there that also run FOSS software. Are you seriously going to expect application developers to port to every open source platform, including ones that they've never even run, let alone have development experience with?
The difference between Windows and Linux is that Windows does not provide any kind of package downstream, so MS is effectively washing its hands of any responsibility and expecting:
* the developers to build their own deployment packages (thankfully there are numerous tools out there to assist)
* and the users to have enough knowledge to differentiate between safe packages and malware.
Sometimes the Windows model works – sometimes it doesn’t. eg:
* the icon mess on the desktop, start menu and quick launch,
* the way standards (like where application profile settings are stored) change from one application to another,
* the fact that I have to spend as much time googling applications to find download links as I do actually installing the application.
So as much of a waste of man-hours as you might perceive it to be, I'd always prefer the OS maintainers to control package deployment (just so long as I have the option to override their catalogue should the rare occasion occur that I need to).
Well, of course I am not suggesting that application developers build for every distro. There should be a move away from the shared library system, or at least towards a standard library base that distros follow.
It also runs on Windows and OSX and yet in those cases only needs to be built once and the binary will work for the life of the OS.
This isn’t even a legitimate complaint given how easy the deployment wizards have become.
That is a problem that doesn’t require the shared library system to solve. You can have a safe repository of any type.
It's also the model that OSX uses, and it has far fewer headaches than ye old shared library system. Applications still break in Linux from library updates, which typically requires command-line meandering to fix. That's unacceptable for the general public.
The benefits from the shared library system such as a safe repository and application index can easily be added to an independent library system along with significant productivity gains.
But don’t worry most people in Linux land are like you and defend ye old shared system that was designed to save hard drive space in an era when gigabyte drives didn’t exist.
Arch Linux isn't really just a general-purpose Linux. Its base install is very compact and runs pretty well as an embedded system. Shared libraries allow different people in different locations to develop the software. This is mandatory for small developers without an army of programmers. Many of us don't even consider large gigabyte drives usable and only use little flash drives for the system.
I could say the same thing about Slackware.
The shared library system isn’t needed to allow that.
Oh, give me a break, even 8 gig flash drives are cheap these days, which is plenty of space.
And a good number of Arch users are ex-Slackware users.
However there are also a number of key differences between Slack and Arch (which is why some people like myself switched).
When was the last time you tried running Win7 or Vista on 8 GB? Not really sure about that for Mac OS, but there really aren't any embedded Macs aside from iPods and iPhones.
That’s an irrelevant question. The fact that you can install Linux on an 8GB drive is not due to the shared library system. It comes from being able to strip the system down to the components that you want.
You can also install PC-BSD on an 8GB drive and use the pbi system which doesn’t use shared libraries.
But then you lose the whole point of different distros.
Different developers and users prefer different models of package deployment. Hence the reason ArchLinux exists in the first place (I trust you read the interview?).
You can distribute Linux binaries too – so in that respect, Linux isn’t much different to Windows.
It’s just there’s usually little point in distributing stand alone binaries as package repositories do all the leg work for you.
That makes little sense. A repository /IS/ a shared library system.
Plus I thought you were arguing that you don’t need safe repositories….
Now you're talking about a completely different topic.
(Plus repositories / package managers SOLVE dependency issues, which often break systems, rather than causing them as you suggest.)
The command-line dependency has nothing to do with software repositories whatsoever!! (And more importantly, 99% of the time you don't need to touch the command line; it's just that many experts advise users to dip into it, as it's quicker and easier to list a number of commands to run than to take screenshots of the GUIs that need to be used.)
Most Linux distros give you the CHOICE of using a command line or a GUI. You DON'T have to use the command line, but sometimes it's just easier to explain on a forum than trying to navigate someone around various windows and menus.
So what you’re suggesting is to replace one software repository with another!?
Plus you’re still missing the point that sometimes packages need to be tailored specifically to that distro.
Software repositories have nothing to do with disk space savings!
Do you even know how they work? Have you actually ever used a package manager?
They exist to centralise applications, automate deployment and ease system administration.
ArchLinux could use as much disk space as Windows if you wanted it to. It's just that many ArchLinux users don't see the point in installing surplus applications that they're never going to use.
I don’t want to get into a platform war (you like what you like and I like what I do) – but please at least understand how a system works before attempting to draw comparisons.
I said at least provide a standard base that distros follow. Anyway, most of the distros are completely pointless. It makes more sense to have an OS that is modular in design and can be modified for a variety of purposes while maintaining binary compatibility.
You're being disingenuous. Linux is much different from Windows AND OSX in that regard, since you can't build a single GUI executable and expect it to work across all distros for a reasonable amount of time. The Linux ecosystem is designed with the assumption that user software is open source. If the goal is adoption by the public, then it doesn't make sense to design the system completely around open source.
The binary compatibility across Linux distros that exists is for small command line programs, and even then it is limited since the distros can’t even agree on basics like where user programs and settings should be stored.
In the Linux sense of the word. By the general definition, a repository is a storage system. You can store safe executables for the user to download. There is no reason why this must be a feature exclusive to shared library systems.
No, I'm not; it's all part of the same problematic software distribution system. Package managers attempt to resolve dependencies, but applications still get broken by updates.
Here’s the genius shared library system at work:
Skype broken after KDE update:
http://fedoraforum.org/forum/showthread.php?t=233354
I said that going to the command line is typically needed to fix dependency breaks.
Here’s an example:
http://itechlog.com/linux/2008/12/18/fix-broken-package-ubuntu/
Explain how the last example problem could have been fixed with the GUI.
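For reference (this is from memory, not from that specific article), the "fix" in those situations usually boils down to commands like:
$ sudo dpkg --configure -a     # finish setting up half-configured packages
$ sudo apt-get install -f      # ask apt to repair broken/missing dependencies
Not something you'd expect the general public to do.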
One that makes more sense.
I’m missing the point even though I already went over this? How long did you spend reading my response? 10 seconds?
The tailoring wouldn’t be needed if the distros had a common library base and directory structure.
There are other options including a standard common language interface, binary compatibility layer or even a VM solution. But shutting your brain off and defending the status quo is probably the worst option.
The shared library system was designed in a completely different era when saving hard drive space was a priority. That is no longer an issue and now the remaining benefits can be adopted within an independent system where applications can have their own libraries that can’t be broken by a system update.
Try reading my response more carefully next time instead of just skimming it and providing a knee-jerk response. It isn't a Windows vs Linux issue; it's a software engineering issue. Apple's engineers decided to ditch the shared library system, so maybe you should at least question why.
Err, Linux IS modular in design and can be modified for a variety of purposes while maintaining binary compatibility.
You can. I’ve already stated that. Stop trying to spread BS.
The problem with Linux (if you can call it that) is that it's a rolling release; whereas in Windows you have a major release every 3 to 5 years (on average), in Linux you have lots of minor releases.
Sometimes these minor releases will break things. But then I've had service packs break Windows too, let alone whole OS upgrades break apps.
So yes, Linux binaries won't work indefinitely, but then neither will Windows binaries.
Again that’s absolute BS. It makes no difference whether the source is open or not.
Plus ArchLinux and all the big user-centric distros push binaries out via their repositories. So the users never need know the source code was optionally downloadable.
Again that’s completely rubbish.
You do realise that there’s plenty of large closed source apps available for Linux? VirtualBox (not the OSE but the more feature-rich edition) is closed AND has a GUI. And given the complexity of virtualisation, I’d hardly define that as a small command line program.
Right, I get you.
You still don't get it. The package managers /DO/ resolve the issue. Sure, there are occasions when things still go tits up. But then that's the case with EVERY OS.
Operating systems are infinitely complex, so sh*t happens.
However, try and manually resolve dependencies in Linux (rather than using the "problematic software distribution") and I bet you'd instantly run into trouble.
So trust me when I say that package managers have made life a HELL OF A LOT easier on Linux.
That link has nothing to do with your argument (it details how to fix a package that was corrupted on install and has nothing to do with dependencies).
But for the most part they DO (and those that don’t, don’t because of very specific reasons and usually the same reasons why they forked to start with)
Personally I like the fact that there’s lots of different distros. Sure it complicates things, but at least I get to run the system I want without compromise.
While I get what you’re driving at – this is never an issue for the home users as package managers are bloody good these days. So I still think you’re massively overstating the problem.
Sure, the devs at ArchLinux (and other distro devs) might get fed up from time to time.
However, they're the ones in the position to make the change (as bad as it sounds, it's not my problem, it's theirs; so I'll invest my spare time developing solutions to problems I encounter).
Your initial post used Windows as a comparison and it’s just continued from there.
That’s why I said OS, as in a full operating system, not a kernel. The problem is that there isn’t binary compatibility across distros that use the Linux kernel.
No one expects Windows binaries to work indefinitely. However you can expect them to work for the life of the operating system. Both Windows and OSX see the value in offering developers a stable platform. With Linux you can’t even expect them to work between minor updates.
I was talking about user software. The software distribution systems are all designed around open source. You run into massive headaches when you work outside that system. Not just through distribution but because the distro clusterfu*ck is dealt with by releasing the source and having the package managers downstream account for the differences.
There are closed source apps available for Linux but the companies that produce them still have to account for all the differences. Companies that release a single tar file are hiding all the “poke in the dark” scripts that have to be built to deal with all the distros. Even if you release for a couple distros you still end up building multiple binaries.
Opera’s Linux section shows what supporting multiple distros really looks like. Note that some distros have multiple packages for differing versions.
http://www.opera.com/download/index.dml?platform=linux
As for VirtualBox, it is open source, while VMware is closed source. VMware has in fact been broken multiple times by updates.
http://www.netritious.com/virtualization/fix-vmware-after-ubuntu-up…
I think, for the most part, we’re going to just have to agree to disagree on this one.
However one item I can categorically prove is VBox.
See the following link and scroll down:
http://www.virtualbox.org/wiki/Downloads
^ as I clearly stated, there is an OSE (open source edition) and a closed binary.
The closed binary has more features than the OSE and is the version people typically use when downloading outside of package managers (which leads to incorrect assumptions, like yours, that they're using "open source").
Furthermore, I think you’ll find that many of VMWare’s products are open source as well:
http://www.vmware.com/download/open_source.html
(though I’d wager the licence isn’t as “open” as GPL/BSD – but that’s just a guess based on their previous business model)
??? The same thing gets done for those platforms. Have you never used a ports system on OS X, or various native FOSS ports? They get OS-specific packages, just the same.
It’s no more efficient. It’s just less work due to using more popular platforms.
No I haven’t used a ports system on OSX because it isn’t the main method of software distribution. I’ve found everything I needed at Softpedia.
Apple wisely ditched the shared library system in favor of application independence. The MacPorts project is a community effort to make compiling open source Unix utilities easier for users.
There *could* be greater compatibility across distros just as there is compatibility across different versions of Windows. Even if the distros followed a very basic common library that would cut down the repackaging time immensely. It’s an inefficient system from a software engineering perspective. Even if Windows and OSX didn’t exist it would still be inefficient since there is so much redundant work.
While Arch is a little more difficult to install than other distributions, its simplicity and lean design can work wonders.
For example, when using a GNOME LiveCD (Fedora, Ubuntu), the installed system ends up with a lot of packages and dependencies that some users may not need (or want). Sometimes those packages can be uninstalled, but sometimes they can't be without breaking dependencies. Many of those unwanted apps consume both hard disk and RAM space. For example, those two distros end up with a desktop whose normal RAM usage is 300 MB or more.
I installed a GNOME desktop on my laptop. I limited myself to C/C++ or Python apps: no Banshee, Tomboy, or F-Spot Mono apps. I had no search backend, because I can track my files in my mind fairly well. What I did do was replace the Arch network script with NetworkManager (for the laptop's wired and wireless adapters), and my RAM usage ended up slightly over 100 MB.
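For anyone curious, on the Arch of that time the swap is a one-line edit in /etc/rc.conf; mine looks roughly like this (daemon list abbreviated):
DAEMONS=(syslog-ng dbus networkmanager crond)   # 'network' replaced by 'networkmanager'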
I used to be an exclusive Debian user, until I got sick of some of Debian's flaws and decided to try Arch. I've been using it ever since on my home PC (which is actually an Alienware m17x notebook), on my Asus Eee PC, and on all my VPSs (four).
I disagree with the guy who wrote Arch gives dependency hell a new meaning. First of all, dependencies were one of the most annoying things that made me move off Debian. Arch’s package management system (pacman) is way, way better, and the AUR is a fantastic platform.
Not only does Arch have many, many binary packages, you can find most Linux applications in the AUR and easily install them from source. True, at times you'll find some packages in the AUR with an outdated or even broken build script, but these are mostly not the massively used apps out there. Wine is kind of an annoying exception; I tried to install it once on my x64 machine with bad results, but I really don't need it.
Plus, I’ve always tried to build packages in Debian and found it almost an impossible process. Building packages in Arch by yourself is easy.
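To give an idea of how easy: a bare-bones PKGBUILD is just a handful of bash variables and a build() function. A made-up example (name, URL and checksum are placeholders):
# Contributor: Your Name <you@example.com>
pkgname=hello
pkgver=1.0
pkgrel=1
pkgdesc="A toy example package"
arch=('i686' 'x86_64')
url="http://example.org/hello"
license=('GPL')
source=("http://example.org/hello-$pkgver.tar.gz")
md5sums=('...')   # generate with: makepkg -g

build() {
  cd "$srcdir/$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
  make DESTDIR="$pkgdir" install
}
Run makepkg in the same directory and you get an installable package.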
Besides the great package system, I also really like how Arch made configuration easy. Not only the system's configuration, but also package configuration. Just installing a simple, lightweight web server like lighttpd on Debian can have you trying to find out what the hell all those configuration files are for. Arch, on the other hand, usually comes with one easy-to-understand-and-modify conf file for every app, with great default values.
Performance is also great, and I love the fact that I am not forced to install the outrageous list of applications that comes with every desktop environment; I can even create my own setup entirely (on my Eee, for example, I have a very lightweight setup).
Anyway, the thing about Linux distributions is that every user can find the distribution that is most comfortable for them. I hate Ubuntu. I really do. I tried it a few times and have no idea how they managed to take Debian's already flawed package system and make it even worse. At least in my opinion. I do appreciate, though, the extensive documentation the project has written for various applications, problems, scenarios, etc. They did a great job there.
I know the basic differences:
bash vs python
precompiled vs source
But what are some other reasons why someone might choose arch over gentoo?
Personally, I enjoy seeing the code compile; it makes me feel like my computer is actually doing something useful.
It depends on personal choice. Both are light and fast, and each one has its advantages over the other.
For me personally, I got tired of compiling Xorg and KDE every time an update showed up in Gentoo's Portage. But I still recommend Gentoo with a hardened profile for a server (no GUI, btw).
Dan McGee: “Signed packages are super important and are targeted for the next major pacman release.”
This is great news. I've always been attracted to Arch Linux, but there have been some "buts" related to it still being a relatively small hobby distro lacking the security and reliability of some bigger distros. A major concern has been that there has been no way to guarantee that the Arch repositories are really safe and not hacked, possibly carrying malicious software. I'm glad that the Arch developers are now finally ready to do something about this too:
http://wiki.archlinux.org/index.php/Pacman_package_signing
http://bugs.archlinux.org/task/5331
“Instead, advanced users will appreciate a simple build system based on GNU Bash: A language they are most likely familiar with contrary to other systems using custom languages like RPM spec or python-based portage.”
The fact that Portage is written in Python has as little to do with an accessible build system as the fact that pacman is written in C. The public-facing parts (the eclasses and ebuild helpers, as well as the ebuilds themselves) are written in bash.
Yes, Gentoo's ebuild format does have a somewhat higher learning curve than Arch's PKGBUILD, but that is because there is more abstraction into custom functions, and this becomes very convenient once you have picked up on the conventions and available functionality. The kind of users who would start hacking ebuilds should have no problem doing so. You don't need to be a programmer; I'm just a hobbyist myself.
Some of the guys in the interview, especially Thomas Bächler, sounded pretty damn snarky. Which is one of the reasons I never cared for an otherwise interesting distro. Arch sounds good, but damn, don't ask them any questions; you'll just get "read the docs first" or "read them better". And then there are some guys who clearly believe that it does everything 100% perfectly. 😐
I'm interested in Arch, but it's their attitude that keeps me away from their distro. Plus, I usually have some kind of problem, but I don't dare ask on their own forums at the risk of being told the equivalent of "RTFA again and f*** off."
UZ64: Read the beginners guide: http://wiki.archlinux.org/index.php/Beginners%27_Guide
Edgarama: You think I didn’t already? Come on, seriously, you’re just confirming the Arch mindset of “STFU and RTFM” I’m complaining about. 😐
Arch isn't trying to be the distro for everyone. It is not a hand-holding distro. It allows you to make a basic installation, and then you build it up from there.
The wiki is great, especially the Beginners' Guide. Check the forums for solutions to any problems the wiki might not have helped you with, then post on the forums if you couldn't find any solutions. You do, however, need to demonstrate that you have tried to solve your problems first: list what you have tried, then ask nicely with pertinent info regarding your problem.
I am a recent convert from Ubuntu; I installed Arch first onto my Aspire One, then on my main desktop system, all from reading the Beginners' Guide. YMMV.
Again, Arch may not be for you.
nice questions/answers