Tomorrow, Ubuntu’s second ‘long-term support’ release, 8.04 or Hardy Heron, will propagate its way through the list of mirrors. OSNews took a short look at the beta release of Hardy Heron a few weeks ago, and concluded that “All in all, this release packs some interesting new features and frameworks, some of which should have been part of any Linux distribution three years ago. It is quite clearly a beta though, and definitely not ready yet to be labeled as a ‘long term support’ release.” In anticipation of the release, El Reg caught up with Mark Shuttleworth in London. Shuttleworth advocated the idea of the major Linux distributions synchronising their release schedules, which would enable easier collaboration between the various distributions. Ubuntu is willing to alter its own release schedule to make such a synchronisation a reality.
Timing your releases drives a whole bunch of things. It means a greater ability to collaborate on bug fixes. If we are on the same versions of the Linux kernel, it is a lot easier for us to say, ‘Hey, here is this patch to make this device work. Do you know any reason why we shouldn’t put it in?’
El Reg continues, confronting Shuttleworth with Martin Owens, the Ubuntu Massachusetts LoCo leader. Owens said that PulseAudio, one of the big new features in Hardy Heron, produces a lot of problems instead of fixing any of the long-standing audio issues in Linux. Just like ALSA, OSS, and ESD, PulseAudio is ‘just’ another audio system, and instead of it being a replacement for the other three, it is added as the fourth audio subsystem, making the whole audio landscape even more intricate than it already was. Shuttleworth agreed the situation is “messy”, and joked: “I am glad you are not into video editing because the story there is worse”.
Ubuntu Hardy Heron will be released tomorrow, so stay tuned for a review round-up from across the net – and there will be reviews, trust me.
I’m using Hardy RC (I had to use the alternate CD because GParted froze) and I noted with great surprise that my fav editor, jedit, is included in the packages.
It’s interesting to note that jedit by default depends on openjdk-6-jre and not on sun-java6-jre.
Netbeans 6.01 is also included.
Good job guys :-D.
I agree that synchronizing releases would be a good thing for every distro and for 3rd parties. As Linux becomes more mature, there is no longer such a need to release so often with all the new stuff and to innovate constantly. Now Linux needs to stabilize a bit, put more resources into QA, focus on normal users’ needs, etc…
For example, if all the major distros agreed to release once a year during the same month, and use the same versions of each package, things would be much easier for everyone. You would just say: “I have X problem, I’m using Linux 2008”, and that would be it. No need to say you’re using distro X that has version Y of this package and version Z of that other package. Bug fixes would run across all distros rapidly, 3rd party packages would just have to care about the year of the release, etc…
The 6 month release cycle is good for enthusiasts and for rapid development, the rolling release method is good for geeks, developers, etc… But for normal users, one release per year is better (or even one every two years, but that might be asking too much for now…)
Now, what about unifying package management too? Would the RPM distros be willing to drop YUM, URPMI, YAST and switch to deb/APT? Again, I’m afraid that’s asking too much, but who knows, maybe some day…
I am also very interested in seeing all the package management get unified, but when even the same damn format is so dependent on the individual distro, there’s not much point. Like Ubuntu debs being Ubuntu-specific and not necessarily working on other distros that use debs…
I like the way Mac OS X has solved package management though…
Dumping everything into one folder is not the answer. When Apple has more people developing for the Mac, and there is more of a reliance on core, shared components, then they’ll see why that is.
Well, I see this perhaps from a different angle: if I want a package that is NOT in the repository, how should it be installed? I don’t want to have to add a new repository for each package I want to install (which seems to be the solution for Ubuntu), and I don’t want to be limited to what the repository maintainers say I should have access to. Externally downloaded packages may or may not know which shared libraries (and versions) are present on the target system (and here is the problem with different distros). Furthermore, spreading the files “all over the system” like many Unixes do today is not very pleasant when I want to remove or otherwise modify the stuff.
All I want is a system where I can download a single file from the vendors website and just click it and it works.
For .deb based systems there’s gdebi. If I click on a *package* built for my version of Ubuntu in my browser, it’s downloaded and launched in gdebi automatically. This then checks that the required dependencies are installed, installs them from the repositories if necessary, and then installs the downloaded package itself. All with two clicks.
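For what it’s worth, here is a rough sketch of the equivalent steps gdebi automates, done with the standard tools (this is not gdebi’s actual code, the .deb file name is made up, and it needs to be run as root on a Debian/Ubuntu box):

#!/usr/bin/env python3
# Rough approximation of what gdebi does for one downloaded .deb:
# install it with dpkg, then let apt pull in any missing dependencies.
import subprocess
import sys

def install_local_deb(deb_path):
    # Try to install the downloaded package; this fails or leaves it
    # half-configured if dependencies are missing.
    result = subprocess.run(["dpkg", "-i", deb_path])
    if result.returncode != 0:
        # Fetch the missing dependencies from the configured repositories
        # and finish configuring the package.
        subprocess.run(["apt-get", "-f", "-y", "install"], check=True)

if __name__ == "__main__":
    install_local_deb(sys.argv[1] if len(sys.argv) > 1 else "example_1.0_i386.deb")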
Well, this is already possible: some applications have debs directly on their homepage which can be installed straight from the web browser.
All I want is a system where I can download a single file from the vendors website and just click it and it works.
Sounds like Windows or OS X, so it’s already available.
Exactly.
It sounds very much like OS X and nothing at all like Windows.
And PC-BSD 😉
The Apple .apps in Applications are just simple folder structures with some metadata. That way all the special things a specific app might need can be put there to override system defaults. It just LOOKS like one click-n-drag file to the user… which is cool!
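To make that concrete, here is a small sketch that peeks inside a .app to show it really is just a folder whose metadata sits in Contents/Info.plist (the bundle path is only an example; point it at any installed .app on OS X):

#!/usr/bin/env python3
# Inspect a Mac .app bundle to show it is only a folder plus a metadata file.
import plistlib
from pathlib import Path

def describe_bundle(app_path):
    bundle = Path(app_path)
    # Every bundle describes itself in Contents/Info.plist.
    with open(bundle / "Contents" / "Info.plist", "rb") as f:
        info = plistlib.load(f)
    # The real binary sits under Contents/MacOS/<CFBundleExecutable>.
    print("Executable:", info.get("CFBundleExecutable"))
    print("Identifier:", info.get("CFBundleIdentifier"))
    print("Version:   ", info.get("CFBundleShortVersionString"))

if __name__ == "__main__":
    describe_bundle("/Applications/TextEdit.app")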
Ubuntu handles repositories pretty well, judging from the handful of things I’ve had to add manually. I’m surprised they don’t support something like klik that the Knoppix guys were working on. That was similar to how .apps work, by putting a “flipped” Unix tree of symlinks in one folder and calling it one file. There’s no way on any system to get around having some dependency issues and duplication of library functions at some level. Even my iSeries has a library structure and hierarchy to handle same-name files with different versions.
Really? When does that scale tip? There are plenty of developers on Apple now.
I am also in the “really like Apple’s way” camp.
I think the trick is that the operating system and the main frameworks are provided by Apple and then the developers just write applications that use them. Whereas under Linux the people who wanted to write a decent graphics application ended up writing their own complete graphics framework.
It’s quite likely that the Linux package managers are far more advanced with all their dependency tracking than the Mac, where you just drag a folder out of a virtual disk into Applications.
Maybe you didn’t know this, but Mac uses deb underneath as well. 🙂
So when you say, I don’t like how the deb-distributions can be incompatible, but I like how the Mac handles it. You might not be making as much sense as you think you are.
As Mac is ‘just’ another very-much-incompatible deb-based ‘distribution’.
Blablabla.
Ubuntu should synchronise with Debian, and it can with Fedora (everything is open, including the schedule).
It seems it’s just Mark Shuttleworth blablabla.
Yeah, I find this annoying too. In some sense there are reasons that they deviate a bit, like the restricted manager and restricted modules package. They make it rather difficult to compile in individual drivers. Debian’s system using module-assistant makes it so much easier to install or uninstall (especially uninstall).
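For comparison, a typical module-assistant run boils down to two commands; something like the following sketch (the module name is only an example, and it needs root):

#!/usr/bin/env python3
# Sketch of the Debian module-assistant flow mentioned above.
# "nvidia-kernel" is only an example module name.
import subprocess

def build_and_install(module="nvidia-kernel"):
    # Fetch the kernel headers and build tooling for the running kernel.
    subprocess.run(["module-assistant", "prepare"], check=True)
    # Download the module source, build it against the current kernel,
    # and install the resulting package in one go.
    subprocess.run(["module-assistant", "auto-install", module], check=True)

if __name__ == "__main__":
    build_and_install()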
A lot of packages don’t have this problem, at least not in the same way as trying to use a SUSE RPM on Mandriva would. SUSE at least has gotten better. I remember when their naming scheme for RPMs was just *name*.rpm (like gimp.rpm) and didn’t have any version numbers at all.
Most Ubuntu packages will work in Debian and most Debian packages will work in Ubuntu. In fact if you pay close attention, the maintainers are often the same people.
This is already happening, at least at the end user level. PackageKit is the unifying element.
PackageKit integrates well with other elements of the modern Linux desktop such as PolicyKit and D-Bus.
It provides both CLI and GUI frontends, even though the GUI is mostly GNOME for now. A Qt version is under development.
PackageKit can handle all sorts of packaging formats, debs and rpms included. So, in essence, it will make the packaging format irrelevant to the end user.
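A small sketch of what that looks like from a script’s point of view (it assumes the pkcon command-line frontend is installed, and the package name is just an example): the same calls work no matter which backend the distro ships.

#!/usr/bin/env python3
# Drive PackageKit through its pkcon CLI so the same calls work on deb-
# and rpm-based distros alike. "inkscape" is only an example package.
import subprocess

def search(term):
    # PackageKit forwards the search to whatever backend the distro uses
    # (apt, yum, zypper, ...).
    subprocess.run(["pkcon", "search", "name", term], check=True)

def install(package):
    # Same install call everywhere; PackageKit maps it to the native tool.
    subprocess.run(["pkcon", "install", package], check=True)

if __name__ == "__main__":
    search("inkscape")
    install("inkscape")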
As for rpm-based distros dropping rpm, I would say that this will never happen. For a distro to be Linux Standard Base compliant it must be able to handle rpms; currently, Debian-based distros do this through alien.
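For reference, that alien step is essentially a one-liner; a sketch (the rpm file name is made up, alien must be installed, and it is best run as root):

#!/usr/bin/env python3
# Convert an rpm into a deb with alien, as mentioned above.
import subprocess
import sys

def rpm_to_deb(rpm_path):
    # --to-deb rewrites the rpm's metadata and payload as a .deb in the
    # current directory, which can then be installed with dpkg.
    subprocess.run(["alien", "--to-deb", rpm_path], check=True)

if __name__ == "__main__":
    rpm_to_deb(sys.argv[1] if len(sys.argv) > 1 else "example-1.0-1.i386.rpm")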
Naturally, it makes no sense to have developers learn two different, but very similar in feature set, packaging formats when it will be all the same to the end user. So, if we can’t come up with something that is significantly better than rpm and deb, I would say that the best way to go would be for everybody to stick to RPM, as it is currently part of the LSB standard.
However, what’s more important than the choice of packaging format is that a package providing functionality X has the same version number in all distros.
Perhaps distros should join forces and create some kind of metapackages, so that you could refer to a Linux distro as e.g. GNU/Linux x.xx/2.6.25.x compatible.
These metapackages should at least cover the functionality of LSB.
Doing this would be beneficial to all Linux distros including the big ones, as it would make Linux a more well-defined target for people and companies that want to port their software to Linux. With a little luck, perhaps even Adobe could be persuaded to port their software, if this happened.
PackageKit and SMART (and even Klik) are/were great ideas for unifying package management. What you’re forgetting about, however, are the large egos of the developers (and hordes of unquestioning followers) of the various disparate package management systems in use today. As an example, look at what occurred recently when the original developer of RPM tried to bring RPM from the dark ages into the modern times with RPM5. Someone at Red Hat had a snit fit and the new format ended up forking rather than being implemented downstream. Regardless, I agree that a unified package management system (if even on the surface) is essential if Linux is to continue growing on the desktop.
I was waiting for the flip. I was waiting for the sarcastic, elitist, generally anti-ubuntu proclamations that everyone should release in lockstep. It never came.
But, even tho I use ubuntu,
(and arch)
(and freebsd)
it would be a horrible thing if every distro released at the same time. The give and take of release timing and targeted features keeps up the competition between distros. E.g., branding-wise I have no reason to switch to Fedora, but Fedora 9 will ship with kernel-based mode setting for my laptop – a feature I would love to have. Had they been forced to release earlier, I wouldn’t be able to take advantage of it. In the end, even normal users benefit from differing release cycles.
The only way that will happen is if all (or any!) of the distros would get off their asses and clearly delineate what is “base OS” and what is “third-party apps” and get out of the “everything in the repos is the OS” mentality. Until they can separate things such that you can install new third-party apps with ease, based on a known-stable-base, without having to upgrade the entire distro to do so, we will not get anywhere.
And no, “backports” repos don’t cut it.
Rolling release need not be just for geeks. For instance, PCLinuxOS is pretty much a rolling release (except for a re-basing every few years), and it’s one of the most newbie-friendly distros. It seems that Mepis will go this way as well, now that it has rebased on Debian stable after the fiasco of trying to base on Ubuntu.
Anyway, I think package management is an open problem in GNU/Linux. Ideally, one should be able to add as many unofficial repositories as one likes without worrying about conflicts, about “polluting” the system. Most package management systems assume that repositories are well-behaved and don’t conflict with each other. That’s too much to assume in the Linux world.
Some people point out that proprietary operating systems like Mac OS X, or FOSS operating systems which separate the core system from the third-party programs (like FreeBSD with its ports), don’t have this problem. Well, one may say they do have to face this problem, but they hide it behind the scenes while they develop the “core system”, which is then presented to third-party developers as a finished product. When the core system is developed in a more distributed way, as in GNU/Linux, the problem shows more clearly.
I may have voiced this opinion before; sorry for repeating myself.
Ah god… This is the time when those lame reviews about Ubuntu start coming…
I *BET* someone is going to mention that Ubuntu brown theme is ugly… Move on people…
I think Mark is hinting that way already. Just make X.06 and X.11 releases.
That would really help in releasing more polished products, because a lot of Fedora/SUSE fixes could be integrated.
Releasing this early made sense in the past because it added to hype and traction… but Ubuntu has those in high amounts, and now the real problem is meeting expectations.
I really hope Mark Shuttleworth thinks this way too (BTW, does he read OSNews?).
P.S.: I think the audio issues will be fixed really soon. With aRts and ESD being dead soonish, ALSA + PulseAudio will be the de facto standard (with some Phonon or JACK added in some places).
Which kinda makes Ubuntu LTS releases irrelevant. If I stick with an Ubuntu LTS release, I have to wait 5 years for the updates!
Having used Ubuntu for the past 2 years, I’ve turned back to Debian; it’s much less trouble!
For personal use, LTS may be irrelevant, but there are some uses where you really just don’t want to change things. LTS is a blessing.
The problem with LTS releases in Linux-land is that you have to wait 3-5 years to get new versions of APPLICATIONS. Waiting 3-5 years to get a new OS isn’t such a big deal. But waiting 3-5 years to get a new version of APPX that fixes a major bug or adds a new feature that you really need is a pain! And then some.
The backports repos in some distros help a bit. But these are usually non-standard, non-distro-supported projects.
Business users like to have their base OS be stable, secure, and not-changing-very-often. But that doesn’t mean they want their app versions frozen for 3-5 years.
That is the point of LTS releases, not the problem. The main point of an LTS release is a stable platform, including application versions. This provides a stable environment for enterprise-level deployments, where anything beyond security fixes (and often even security fixes) has to be tested and approved prior to installation. The fact that it is an LTS should make next to no difference to an end user who upgrades with each release (as I do). The fact that this is an LTS release should not make a difference to stability either, except in the sense that Canonical would want it as stable as possible to reduce support issues. As time rolls on, an LTS release should become more stable, since it has a longer support time and thus the bugs get cleared out. This is not the main goal though. An LTS release is there to provide a predictable platform with an extended support cycle for environments that require such. It is also a primary target for 3rd-party commercial applications, such as Oracle and DB2. IBM would not want to support every release of Ubuntu, but can target the LTS releases, as they provide a stable set of applications to test against.
Business users most definitely do want apps frozen as well, except bug fixes. Version changes make for an unpredictable support environment.
I work at a large enterprise company. We are still using Windows 2000 and Outlook 2003. We are finally starting to phase out Win2K for XP simply because Win2K is losing security support.
Outlook 2003 is staying so far (I’m not sure when support for this will end). But most any task that any of us need to do can be done in these older versions.
I would expect the same in an enterprise environment running Linux: keep the oldest supported version until it is no longer supported. If you must have a new version of something installed, then have IT install the update or re-image the systems with the update installed (this tends to be a good idea on the Windows side anyway).
As for home users, just update with every release. I’ve seen Windows users go years without updating various software and wonder why something stopped working or why they don’t have X feature that their friends have. With a twice-a-year release cycle, if the user knows how to upgrade then they may very well be better off than on an OS that does not get updated for years and doesn’t update all your installed software.
Hmm, what business world do you work in?
All the business people I talk to, and the education segment where I work, want long-term, well supported OSes … we don’t want frozen apps. We want to install the OS, configure the OS, and then forget about the OS, while we manage the apps that run on that OS. We want to be able to install an OS released last year … and use it with apps released this year.
The problem with Linux distros, is that they aren’t OSes and apps … they are collections of packages, where the versions of all those packages are frozen. And it’s only the Linux distros that do this.
Look at Solaris, look at the BSDs, look at Windows, look at MacOS X. Do you really think they’d be popular with third-party developers if the application base was frozen every 6 months, or worse, for over 3 years? Do you think people would be happy if they had to install Windows Server 2008 SP1 in order to run MS Office 2007, instead of being able to run it on their 7-year old Windows XP?
The application stack and the OS should not be developed in perfect tandem, and should not be integrated in such a way that you are forced to upgrade them both in lock-step.
You realize that LTS releases do get major bug and security fixes during their lifetime, right?
Businesses do not want new applications, they want their current applications to continue working for a very long time.
Yes, but not new feature releases of the existing apps.
How do you install the current version of an app (released this year, has that one feature you really need for the current project), on an LTS install from two years ago?
That’s the issue we keep banging up against at work.
No it doesn’t, you just don’t understand what it’s good for.
Nothing is stopping you from upgrading from 8.04 to 8.10 when the time comes. Again, you don’t understand what and who LTS is for.
Isn’t that special.
Having used Ubuntu for the past 2 years, I’ve turned back to Debian; it’s much less trouble!
LTS releases are made every 2 years, as Mark clearly states in the article. LTS server releases are *supported* for 5 years, and desktop releases for 3 years.
> I like the way Mac OS X has solved package management though…
You have to distinguish between mpkg and appfolders here. To achieve something like mpkg, one would indeed need a unified package management system.
But appfolders can be achieved in a more simple way, because they are not packages in the sense the term is used in Linux – they need not be installed. Instead, you’d just need support to “run” an appfolder, which could be added to all distros regardless of their native package manager. Some things would still have to be sorted out (e.g. how a storage place for configuration data and other, non-document data is assigned), but that’s much easier than installing (system) packages.
Ultimately, I envision a solution where most OS components can be installed similar to appfolders, e.g. installing device drivers just by moving a “driver-folder” into the correct system folder. But that’s a long way to go.
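Just to make the idea concrete, a toy launcher could look something like this – the folder layout, the AppInfo metadata file, and the config location are all invented for the sketch, not any existing standard:

#!/usr/bin/env python3
# Toy launcher for the hypothetical "appfolder" idea described above.
# The layout (an AppInfo file naming the entry point, a bundled lib/ dir)
# is made up purely for illustration.
import configparser
import os
import subprocess
import sys

def run_appfolder(folder):
    # The folder's own metadata says what to execute.
    meta = configparser.ConfigParser()
    meta.read(os.path.join(folder, "AppInfo"))
    executable = os.path.join(folder, meta["App"]["Exec"])

    env = dict(os.environ)
    # Let libraries bundled inside the folder override system ones, so the
    # app can carry anything the base OS does not provide.
    env["LD_LIBRARY_PATH"] = os.path.join(folder, "lib") + ":" + env.get("LD_LIBRARY_PATH", "")
    # Give the app a per-folder place for configuration and other non-document data.
    env["XDG_CONFIG_HOME"] = os.path.expanduser("~/.appfolders/" + os.path.basename(folder))

    subprocess.run([executable], env=env)

if __name__ == "__main__":
    run_appfolder(sys.argv[1])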
> Dumping everything into one folder is not the answer. When
> Apple has more people developing for the Mac, and there is more of
> a reliance on core, shared components, then they’ll see why that is.
I do not see a problem with that. Of course, you’d put only those things into the appfolder that are not included in the base OS. With more and more advanced versions of the base OS, the core will grow, and appfolders will shrink.
Continuing the thoughts in my previous posting, non-core shared libraries could be handled by “libfolders” which are just as easy to deal with as appfolders. They’d be installed by placing them in a central “libraries” folder.
Note that none of these ideas is new. They are, in fact, older than package management. Traditional unix did not need package management because it was as simple as appfolders: Each application is represented by a single file, and you install it by moving it to /bin. Each library is a single file, and you install it by moving it to /usr. And so on. Things became complex when people started distributing applications as several files (instead of extending the executable format to something equivalent to appfolders such that the single-file approach could be continued).
I tried the HH beta a week ago. It looks like a good improvement over the previous version, but I’m still holding out for using it with 2 large monitors out of the box, no fuss or muss, up to 2kx1.5k on each head, just like, say, W2K did 5-8 years ago.
For single head the nVidia restricted driver worked fine, but on twin-head cards, both nVidia & ATI, the setup app for that is complete crap. It basically hosed the graphics system so that I had a pair of 640×480 windows into the bigger desktop, and after that I lost all control over video resolution.
It’s ironic that even on BeOS I got support for ATI twin head a few years ago from a lone developer (Rudolf), so why can’t the Ubuntu people, with 1000x more resources, support this? Twin head is pretty common; almost every video card out there at $50 and up has 2 heads.
Question, what’s the best way to get full support for twin head without messing with text files. Another distro?, a different twin head card?, 2 separate cards? Does anybody have this working?
“Question, what’s the best way to get full support for twin head without messing with text files. Another distro?, a different twin head card?, 2 separate cards? Does anybody have this working?”
I have it working on Fedora and openSUSE. If you have installed the drivers from nvidia for an nvidia card, use the nvidia control panel that is also installed to set up TwinView.
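For reference, this is roughly the kind of Device section nvidia-settings ends up writing to xorg.conf for TwinView – the identifier, orientation, and resolutions below are just placeholders, so adjust the MetaModes to your monitors:

Section "Device"
    Identifier  "Videocard0"
    Driver      "nvidia"
    # Example TwinView setup; resolutions and orientation are placeholders.
    Option      "TwinView" "True"
    Option      "TwinViewOrientation" "RightOf"
    Option      "MetaModes" "1680x1050,1680x1050"
EndSection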
Did you use the “Screens and Displays” utility? I think it’s not supported anymore (and usually breaks xorg.conf); look into the “old” resolution-setting app, it can configure dual displays and layout now. I have only tried it on my notebook with Intel graphics (which are usually very up to date with new Xorg stuff) and it worked OK – it ignored the layout but worked. It might not be as good on Nvidia, but it’s worth a try.
As I understand it from testing the betas (as of yesterday), when you browse a Windows PC with shares that are protected by a username and password, Hardy doesn’t ask you for login details; it just displays nothing. This worked in the previous version and will be fixed at some point with an update…
Not being able to view Windows shares like this isn’t the best for those of us who want Windows users to try it out.
I personally think that what Mark is suggesting here is a good idea. If all the distros were to sync their releases, that doesn’t necessarily mean that they would all be the same. Some distros may want to go with older kernels, some distros may want to go with older, more stable packages; you have different DEs supported, and different takes on how these DEs look and work. However, when it comes to bug fixes, software and hardware support, they should all be on the same page. They should all be sharing info and trying to ensure that all Linux flavors work as they should in terms of hardware.
I would like to see Ubuntu and the other distros do yearly releases instead of every 6 months. This would ensure that with every release the packages are at least tested and implemented properly. One of the major issues I have with Ubuntu and other distros is regressions (though sometimes it’s not their fault – what the hell happened to removable drives and media in GNOME 2.22?) or things that aren’t implemented well due to time constraints (PulseAudio). A yearly release with a service pack every 6 months would essentially be the same thing they have now, but the service pack should try to update everything without breaking anything from the previous install.