According to a Novell confidential memo dated June 14, Novell is delaying its next release of both the server and desktop versions of SUSE Linux Enterprise 10 “to address final issues with our new package management, registration, and update system and also fix the remaining blocker defects.”
They can race Microsoft to see who can delay the longest 🙂 (*although MS has a pretty convincing lead)
Seriously, SUSE 10.1 and Ubuntu 6.06 were both rushed out the door, with less than stellar performances as a result. Better to get it right first time than to have to release SP1 a month down the road.
Interesting thread this is going to be. I’m really looking forward to all the excuses for why it’s okay for a Linux company, but not for MS.
They don’t try and spin webs around it. Novell have given us a reason, and one that isn’t shrouded in mystery. In fact, you can go watch their Bugzilla site and see why for yourself firsthand.
And Ubuntu did the same thing just a while back. They shipped precisely on the new date, as promised.
And they didn’t pull major features from a ‘feature complete’ beta product.
Interesting thread this is going to be. I’m really looking forward to all the excuses for why it’s okay for a Linux company, but not for MS.
It’s not okay for any company (FLOSS or even proprietary) to delay a release of product MANY MANY times.
It’s understandable if they do it once or twice.
MS has delayed so many times, and on top of that they are pulling out features. (Sorry, I don’t have the exact duration. Does anyone remember how many years Vista has been delayed?)
Depends on how you look at it. I came across a statement by someone at MS saying after Windows 2000 was released they should/would follow a faster release cycle, much like OSX is now. If you go by that schedule, they’re like…5 years late.
>>”It’s not okay for any company (FLOSS or even proprietary) to delay a release…”
Why not? They don’t have to tell us anything about their development process and their release plans. They owe us nothing. They’d be justified in keeping their mouths shut until the product was ready to ship.
It’s all about the magnitude of the delay. Vista should have been out, what, three years ago? And in its present form it has had tons of features cut.
Not that I’m complaining, though. Anything that’s bad for Microsoft is good for the competition and hence for the computing world.
Three years ago?
Nice attempt at a troll, but it sucks.
Actually the only troll is you.
Vista has kept slipping. XP and Win2K3 were only released to fill the gap. And Vista _was_ aimed for 2003, but it slipped, and it slipped and it slipped and it slipped and it slipped and it slipped and … you get the picture, right?
Actually the only troll is you.
Vista has kept slipping. XP and Win2K3 were only released to fill the gap. And Vista _was_ aimed for 2003, but it slipped, and it slipped and it slipped and it slipped and it slipped and it slipped and … you get the picture, right?
No, you’re a cracksmoking moron if you think that XP was only released to fill the gap. But even in that time, desktop Linux couldn’t make up any ground and still sits at 2%. XP must not be so bad. You lose.
Linux market share appears to be above 4% (at least outside USA).
Linux on the Desktop is gaining on Windows year over year, so you lose, to use your “logic”.
“Cracksmoking moron”? I might be a moron, but I don’t smoke crack. Anyway, calling people names is quite offensive, wouldn’t you say so?
It may be that XP isn’t so bad, but could it be that most people don’t know better and just use whatever is preinstalled on the PC? And could it be that resellers have to pay a higher price for Windows if they don’t preinstall it on the PCs they sell?
When Windows98 was new it took quite a market share, and it wasn’t because it was the best desktop OS at that time.
And what’s all that lose-thing about? I didn’t know this was a competition.
Linux market share appears to be above 4% (at least outside USA).
Since you zealot fanboys are incapable of anything except living in your little fantasy worlds, we’ll call it 2-2.5% as the most realistic figure.
“Cracksmoking moron”? I might be a moron, but I don’t smoke crack. Anyway, calling people names is quite offensive, wouldn’t you say so?
To say that XP wasn’t supposed to happen or something is just so laughable. But you and the other clowns around here live in your little osnews circle jerk fantasy worlds.
It may be that XP isn’t so bad, but could it be that most people don’t know better and just use whatever is preinstalled on the PC? And could it be that resellers have to pay a higher price for Windows if they don’t preinstall it on the PCs they sell?
Oh, the old “god, look at me I put in a Ubuntu CD and I’m so smart. If only Linux was pre-installed”. Can’t you clowns come up with new material, or are you so brain dead you have to regurgitate the same crackhead nonsense everyday?
When Windows98 was new it took quite a market share, and it wasn’t because it was the best desktop OS at that time.
NT would have been a better desktop. OS9 didn’t even have pre-emptive multitasking and was a complete joke. Linux was still nowhere close to being desktop worthy.
Since you zealot fanboys are incapable of anything except living in your little fantasy worlds, we’ll call it 2-2.5% as the most realistic figure.
I’m not a zealot. I’m not fanatic about using one or the other system. I’m not even fanatic about using F/LOSS. I use proprietary as well as Open Source (as in OSI-approved license) and Free (Libre) Software. I’ve even written software incompatible with the GPL.
To say that XP wasn’t supposed to happen or something is just so laughable. But you and the other clowns around here live in your little osnews circle jerk fantasy worlds.
I never wrote it wasn’t meant to happen. I wrote that it was created to fill the gap before the release (later renamed Vista) originally aimed at 2003.
Oh, the old “god, look at me I put in a Ubuntu CD and I’m so smart. If only Linux was pre-installed”. Can’t you clowns come up with new material, or are you so brain dead you have to regurgitate the same crackhead nonsense everyday?
WTF!? Who says I’m using Ubuntu? Trust me, most Windows users cannot even figure out how to burn an .iso file to DVD or CD. And most of them are afraid even of running Linux live CDs (even the wannabe geeks running XP and IE at home are afraid of live CDs in general).
Fact is, Windows has been preinstalled on most PCs for over a decade, and most of the Windows market share has been gained that way, by being preinstalled on PCs bought by complete newbies. Very few PCs are shipped with no OS installed.
Why do you keep calling people brain-dead, crackheads, morons, etc.? Don’t you see it only makes you look like the fool? It’s much easier to have a debate when both parties agree to a certain standard. I’m looking forward to a debate with you on equal terms. But it won’t happen until you grow up and leave that 14-year-old attitude behind.
NT would have been a better desktop. OS9 didn’t even have pre-emptive multitasking and was a complete joke. Linux was still nowhere close to being desktop worthy.
NT would surely have been a better desktop than Win98. I used Win NT4 as a desktop and, apart from some issues with DirectX and certain games (and a crappy onboard sound card), it actually worked reasonably well. Not too stable, though, but much better than Windows 98. I could actually have the PC running overnight.
True that OS9 didn’t have pre-emptive multitasking. Nor did Win9x/ME. (OS/2 did, and does, however, and it’s still superior to Windows, IMHO.) But it didn’t prevent Windows 9x from dominating the desktop entirely.
NT4 was a better desktop than Win98, but it wasn’t better than Red Hat 6.0. Windows 2000 gave newbies a better desktop experience than Linux did, but pre-Win2K Windows versions were no better on the desktop than Red Hat 6.
No, you’re a cracksmoking moron if you think that XP were only released to fill the gap.
There was going to be a major new version of Windows after Windows 2000 that was going to end up being Vista, or even Blackcomb (the version after Longhorn). We got XP and 2003 instead, and Microsoft was confident we would get Longhorn in 2003 or 2004. You and they lost.
There was going to be a major new version of Windows after Windows 2000 that was going to end up being Vista, or even Blackcomb (the version after Longhorn). We got XP and 2003 instead, and Microsoft was confident we would get Longhorn in 2003 or 2004. You and they lost.
And once again, you people that live in these osnews fantasy worlds, are only too happy to make up whatever little stories that can pop into your demented little heads.
And once again, you people that live in these osnews fantasy worlds, are only too happy to make up whatever little stories that can pop into your demented little heads.
Whatever. Anyone who knows anything about Windows knows that since 1995 there was going to be a version with WinFS-like technology along with various other enhancements that Microsoft has continuously wheeled out for each version of Windows since. Instead we get versions that are a pale shadow of their former selves, and end up being the version of Windows we should have had to start off with. 2000 -> XP, for example.
Microsoft were absolutely dead set for Vista in 2003 or 2004, features were lumped in on the fly, features were cut back that were never going to be feasible and there were numerous codebases.
See, you always lose because you’re just another loser that lives in his basement that fantasizes himself in some battle against Microsoft.
It must just burn you losers up that all these years have passed since XP was released and desktop linux is nowhere.
See, you always lose because you’re just another loser that lives in his basement that fantasizes himself in some battle against Microsoft.
I don’t know whether to laugh or feel sorry that you are so desperate about this issue. It doesn’t take much reverse psychology to understand that you are a person sitting in his basement who feels strongly about this for some reason.
I don’t follow Windows religiously like many Microsoft watchers, but anyone who knows anything about Windows knows that for the past 10+ years various, supposedly ground breaking features, have been wheeled out by Microsoft only to be put on the back burner. The result is that we end up getting versions of Windows that have nothing hugely significant in them, and are at best the versions of Windows with the quality we should have got one or two versions back.
It must just burn you losers up that all these years have passed since XP was released and desktop linux is nowhere.
And now another mental complex comes out. This is true yes, and I’m happy to admit it, but this has nothing to do with what has been discussed in this thread.
It’s summer. Relax and have a beer. Your dinky little operating system is still number one, which is what this has all really been about.
Lay off the koool-aid.
Longhorn was originally considered a stepping stone between XP & Blackcomb (Vienna). And it was originally given a release date of 2003.
Quit reinventing history 🙂
TomK has been molested by Penguins too btw. Contact him if you need help.
Lay off the koool-aid.
Longhorn was originally considered a stepping stone between XP & Blackcomb (Vienna). And it was originally given a release date of 2003.
Quit reinventing history 🙂
TomK has been molested by Penguins too btw. Contact him if you need help.
Oh please. You and the other dorks have been circle-jerking each other for so long, you wish you could have a little kool-aid.
But it’s so funny. Some kid in his basement can put together a distro with a decent package management system, but Novell fails once again.
Novell doesn’t even produce anything. All they do is repackage. There’s no hope for desktop Linux with losers like that leading the effort.
Oh please. You and the other dorks have been circle-jerking each other for so long, you wish you could have a little kool-aid.
But it’s so funny. Some kid in his basement can put together a distro with a decent package management system, but Novell fails once again.
Novell doesn’t even produce anything. All they do is repackage. There’s no hope for desktop Linux with losers like that leading the effort.
I didn’t even comment on Novell, I just corrected your revisionist view on Vista’s original release date / purpose.
Seek help.
Linux can’t compete, so it needs all the delays from Microsoft it can get
The main difference is, IMO, that MS has had 5 years to develop Vista, whilst SUSE releases twice a year.
Why is that? Because Vista is an attempt to square the circle, with its backwards compatibility (which is far from working all the time, BTW).
There is nothing wrong with delaying a product. There is something wrong with promising a product with X features, then delaying said product and stripping out X features, then delaying again, stripping out more features, then delaying again and stripping out even more features. This is especially bad for a company as large as Microsoft and with delays in the years. I can deal with 6 weeks or even 6 months if that’s what it takes to get a product up to standards, including all originally announced features. I’m having a hard time convincing myself that Vista is worth the wait after two of the biggest features (in my opinion) are no longer going to be included. Those features being WinFS and Monad. At least it’s good for competition as many Linux distributions are now far outpacing Windows with features, like XGL and Beagle.
*Sigh*
Thom, really, this is getting annoying and leads to me (and I’m quite sure this also applies to others) visiting this site, especially the discussion area, less and less.
I’m not even going to start to argue your point about MS and Novell. For anyone with half a brain it’s plain to see that there’s a huge difference between delaying a product for some weeks and the incredible trouble MS has had getting a new product out.
Be that as it may (and who cares, really), why is it that you, as one of the editors of this site, try to start a stupid and unnecessary flamewar? Aren’t the usual stupid flamewars taking place here already more than enough?
Learn to use the internet. There’s a SMILEY in the post. You know, a smiley indicates, depending on which one, humour, sadness, or whatever. In this case, it was a smiling smiley; as such, I was just JOKING.
Thom, you knew pretty well that smiley or not your comment would lead to a flame war, so don’t give me that lame “but I used a smiley” excuse.
Other than that, your childish insistence on personal attacks speaks for itself.
Grow up.
They can race Microsoft to see who can delay the longest 🙂 (*although MS has a pretty convincing lead)
and Duke Nukem Whenever beats them all
what problems have you seen on 6.06?
I don’t want to list my own problems with the new Ubuntu (lots of them); suffice to say there were about 100 MB of updates around Friday and I’m still experiencing some bugs.
In Kubuntu the control centre applets become unresponsive sometimes, I then try them in the traditional KDE control centre and they also don’t work there. If I kill the related processes and then restart either one of the control centres the applet will work again until the next time it happens. This is a problem I’ve only ever experienced in Kubuntu, and it happened in both 6.06 and 5.10. I have not discovered the reason why it happens, but you did ask what problems there are.
like:
KDE 3.5.3
Gnome 2.14.1
Compiz/XGL improvements
Cups 1.2
Inclusion of TaskJuggler as a default install App
Koffice 1.5.1
etc
or is the package list frozen from 10.1 except for critical bug-fixes?
Have you ever heard of additional repositories?
Official updates from SUSE are bugfix/security fix *only*. But with additional repositories added you can upgrade to, for example, KDE 3.5.3 and KOffice 1.5.1.
I advise using Smart for this, because of the broken package management in SUSE 10.1.
Download Smart from GURU, do a
smart update
smart upgrade
And there you go! The latest greatest!!!
Regards Harry
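For anyone following along, the repository step glossed over above looks roughly like the following; this is only a sketch, the channel name and URL are placeholders rather than a real repository, and the exact channel options can vary by Smart version:
# register an extra rpm-md repository as a Smart channel (alias and URL are placeholders)
smart channel --add kde353 type=rpm-md name="KDE 3.5.3" baseurl=http://example.com/suse/10.1/
# refresh channel metadata, then pull in everything that has a newer version
smart update
smart upgrade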
or is the package list frozen from 10.1 except for critical bug-fixes?
It’s frozen; SLED/SLES and SUSE 10.1 are using a common base.
This is strictly about getting that monofied bastard child of the former SUSE and Ximian package management systems working, since it’s the same system SUSE Linux 10.1 users have been wrestling with. Patches were released for 10.1, but all they did was elevate package management/updating from totally unusable to absolutely horrible to use.
In fairness to the SUSE devs, they never tried to deny there was a problem with the new system and have worked like gangbusters to fix it, but the underlying understanding was that this was inflicted by Novell after version freeze specifically to burn in and test the new system for their enterprise offering.
Now that it’s blocked their enterprise release, there should be a “drop everything and get it working now” attitude at Novell that should finally fix things for opensuse users as well.
I really like SUSE and continue to use it, but am extremely disappointed by the way the PM system was mishandled; there are politics involved beyond any drive to streamline and improve efficiency of the subsystem. The new system looks good on paper and certainly adds benefits, but when it became apparent early on just how broken the implementation was, it should have been shelved for the next version release.
I’m not entirely confident Novell will get this fixed in time for the gold master. This could be a big boost for the Smart Package Manager, as that could be the only way paying enterprise customers are able to manage updates on their systems…
“I really like SUSE and continue to use it, but am extremely disappointed by the way the PM system was mishandled; there are politics involved beyond any drive to streamline and improve efficiency of the subsystem. The new system looks good on paper and certainly adds benefits, but when it became apparent early on just how broken the implementation was, it should have been shelved for the next version release.”
That is exactly the way I feel about it, and is the reason that I wanted Novell to delay OpenSUSE. Releasing a (heretofore excellent) distribution with a broken package manager is patently unacceptable in my opinion, especially considering that they ALREADY HAD a working package manager in prior releases.
Yeah I have to totally agree. I installed 10.1 on a desktop and a laptop. I love it EXCEPT for the new package management. Oh my, what a mess. I have two other computers to upgrade, but not until that disease has been cured.
not an acceptable solution.
There is no reason for SUSE not to have used the time between 10.1 and 10.0E to update some core packages with minor updates.
They may have chosen not to, which is their choice, fair enough, but adding stuff from repos is not what I asked about and certainly not what I want.
cheers
R3MF
Creating releases is not about being bleeding edge, but about being stable.
Better slightly older releases that work than bleeding edge that doesn’t.
There is no reason for SUSE not to have used the time between 10.1 and 10.0E to update some core packages with minor updates.
Actually, there is a very real reason: A single codebase.
It’s been Novell’s intention at least since 10.0 to base the next enterprise release (SLED10/SLES10, or 10.E as you call it) on the 10.1 codebase. That’s why 10.1 got the semibroken package management so late in the development cycle, because they wanted it there for the upcoming enterprise releases.
So, no version changes.
There have been so many significant packages released in the past month. GCC 4.1.1 is ready for general use and can dramatically improve C++ software in particular; there is also a new GLibC, KDE, KOffice, et al.
I suspect – but may be wrong – that they want to recompile the whole code base in the new compiler against the latest GLibC. If they’re smart, they will.
That would be the stupidest thing they could do. You don’t switch to a new gcc and glibc a couple months from release and try to compile the entire distro. They’re probably delaying to try to make Xgl suck less.
>That would be the stupidest thing they could do. You
>don’t switch to a new gcc and glibc a couple months from
>release and try to compile the entire distro.
Oh really? Why have Debian, Gentoo and others done it, then? Debian has been using 4.0x in Testing and Unstable for a long time now, and 4.1.x is a much better compiler from most accounts – certainly for Java and C++. Debian Unstable has gone to 4.1.x now. Some Debian Testing packages have been showing up compiled with 4.1.2-prerelease.
Moving from 4.0x to 4.1x isn’t a big leap and brings in the performance for most apps that 4.0x lacked. It brings stability, better compiled code and lower memory usage for C++ apps.
One might expect that they want to get kernel 2.6.17 into the distribution, now, too. Since most modern dists install the root file system with EXT2/3, a performance improvement of up to 50% might be of interest?
This isn’t cutting-edge stuff. It’s modern. Only the latest kernel and GLibC in my list could be considered cutting-edge, IMHO. The rest are just the current evolved and matured revisions.
Are any of those distro’s you just listed being touted as enterprise class software and sold with a 5 year support promise? Errrr No.
SLES and RHEL need to be stable; funnily enough, most enterprises could not give a stuff about the latest packages, because above all they value stability and certification of third-party software.
And that last point is also why there will be no last-minute changes of the magnitude you suggest; third-party software support can make or break an enterprise platform.
OpenSUSE 10.1 suffered because of important last-minute changes. Novell will not want to piss off customers with a shoddy and unreliable release of SLES 10.
You should think about the business implications of what you suggest.
Oh, and while you talk about Debian UNSTABLE: Debian STABLE (3.1) was released in June 2005, so if the previous release schedule is anything to go by, GCC, Glibc and the other packages won’t go mainstream until 2008, when their stability is assured.
Are any of those distro’s you just listed being touted as enterprise class software and sold with a 5 year support promise? Errrr No.
The GCC in there now is hardly enterprise class, and 4.0 was a bit of a step backwards. In reality, upgrading the compiler and toolchain would be a fairly low risk for quite a few new benefits. However, if the release were perhaps a few months further away it would have been more realistic.
OpenSUSE 10.1 suffered because of important last-minute changes
Different thing. The last-minute change to 10.1 was the package management system, and it was absolutely pointless.
Novell will not want to piss off customers with a shoddy and unreliable release of SLES 10.
Oh don’t worry. They’ve been making a great job of that for some time across all of their products.
Where did you get this 50% number from? SUSE uses ReiserFS by default, by the way.
Gentoo is… Gentoo, so it has the obvious excuse for doing such things.
Debian, as you mention, does no such thing on Debian Stable. Testing and Unstable are… just that.
Including Xgl in a serious distro as anything other than an optional experimental toy during 2006 is bound to end in disaster. Xgl/Compiz crawls on quite a few systems, fails to start on many, and crashes as often as Win95 on most. Last time I tried it, it also multitasked horribly when the CPU was busy.
Novell, please, for once let us have a truly properly tested and stable Linux distro. I had high hopes for Ubuntu 6.06 but alas…
Xgl is more than just a toy; it sounds to me like you have not tried the latest, because a lot of issues are gone now. CPU load is minimal, and I’d rather have the GPU take the load than the CPU.
We’re not talking about kiddies playing around with eye candy. This is supposed to be for the enterprise. Xgl is way far away from being stable enough for that. Plus, Xgl is really just a hack.
Xgl is more than just a toy; it sounds to me like you have not tried the latest, because a lot of issues are gone now.
On graphics cards like nVidia it tends to work OK with some strange problems here and there that need to be tracked down and sorted out. On other graphics cards the quality can vary wildly. It will take some time to make it stable on a widespread basis, if ever.
Must be Compiz. Use it with XFCE’s internal damage control instead of Compiz and it’s rock steady.
Xgl/Compiz has never given me any trouble. At all.
I would never have it turned on by default with someone where reputation / first experience mattered, though. Just in case.
They’re probably delaying to try to make Xgl suck less.
XGL is the absolute rock bottom least of their worries. They need a working package manager, build up Zenworks and make sure that the technology they build all that stuff up with is complete and stable enough at the same time. No mean feat.
According to the article it’s supposed to come out on July 22. Novell always said it would come out in the Summer, so I don’t see where the problem is.
Of course, had they promised it for 2003 and we were still waiting then I’d see your point.
I have been testing SUSE since v7 and I never liked any of the released versions of it. One simple reason for that: quality is not a concern with this company; all their software was rushed to market while still in beta, though they marked it as a gold release.
I know that there are tons of packages on the SUSE DVD, but what should I do with them if they are unstable and buggy, and the X server crashes from time to time?
I have tried all the GUI interfaces only to discover the same problem persists: on KDE, GNOME, all the window managers… and other exotic ones.
This move from SUSE might be the first turn towards looking for quality rather than features for marketing reasons! I hope they wake up and solidify Linux’s appeal.
Hi all…
Interesting… but I think that it’s the right approach… this is an enterprise release, and it HAS to be right or they will completely blow their reputation.
I have been VERY HAPPY with all of the SUSE releases since 9.1, as I guess I have been lucky NOT to have all of the “problems” that a lot of other people have… maybe I don’t exploit the distros as much, as I tend to be somewhat conservative… but I do compile apps and use the libraries etc. a lot and don’t seem to have trouble.
But I think that it has to be understood that the release philosophy of SUSE vs. SUSE ENTERPRISE is going to be different. In my view, the early release of the standard community version of SUSE is fine being a little buggy, as it’s pretty much experimental anyway… much like Fedora Core is. I’m not bothered by this philosophy. The release philosophy of Enterprise MUST be different, as it must work right out of the box damn near perfectly… I personally think that they are trying to do this. Much like the Red Hat Enterprise version… older, more tested software, but stable as all heck (my only experience is CentOS and Scientific Linux, but I consider these pretty much the same as RHEL).
So… the last comment about XGL I agree with… I personally think that it’s pretty much stupid to advertise EARLY BETA software as a “feature”, and they would be well advised to tone that part down a bit if they release it with SUSE Enterprise, as it’s too unstable for enterprise use anyway. This was dumb, letting that cat out of the bag so early.
OK..that’s it for now…thanks for letting me chime in
hey what can I say, they took leaps and bounds and made a really cool product….now they need to go back and clean up the details. At least the big work has been hammered out now they need to go tweak everything. Sounds reasonable.
OK, yeah, they should have done this before rolling with it…
Linux distributions work well only through collaboration. I have not seen Novell do that much lately. Rather than using/enhancing yum or apt, they wrote something different. Rather than using/enhancing SELinux, they wrote something different. Rather than using/enhancing gcj, they pushed Mono. Rather than working with the Xorg committee, they went and wrote XGL from scratch. The problem with Novell is that they are trying to do too much all by themselves. They just need to start working with the community again.
might have a point!
I would like to see more “togetherness” from a lot of distros and players in the Linux field, but I am afraid it may never happen. I can’t fault JUST Novell on this, and at least they kicked some ass with all the stuff you mentioned and got something accomplished and done in a short period of time. But if wishes were horses, I would like to see more “working together” instead of so much “working apart” going on nowadays.
You are forgetting that the problems are attacked from different approaches. When Red Hat created rpm and then yum, you could have argued that they didn’t cooperate with apt and instead created a new tool. But it’s a matter of design and all that, and it’s not as easy as you say to cooperate.
Keeping your line, we could say that Gnome folks didn’t cooperate with Kde, and you could also say that Gentoo people didn’t cooperate with the Debian world, and the examples continue.
It’s very easy to say that a group or another isn’t cooperating. But the fact is, you don’t know the reasons behind it.
Please, take a deeper look before saying things like this.
(For example, Novell didn’t cooperate with gcj and instead created Mono because of a huge number of reasons that have been debated here long enough.)
Carlos.
You are forgetting that the problems are attacked from different approaches. When Red Hat created rpm and then yum, you could have argued that they didn’t cooperate with apt and instead created a new tool. But it’s a matter of design and all that, and it’s not as easy as you say to cooperate.
Yum was not written by Red Hat. It was written by some good folks at Duke University, and was based on the Yellow Dog updater (hence its name: Yellowdog Updater, Modified).
It’s very easy to say that a group or another isn’t cooperating. But the fact is, you don’t know the reasons behind it.
That’s a crucial point. E.g. AppArmor has been around for a while, and was developed by Immunix. Novell bought the assets of Immunix, including AppArmor.
And in the end it’s not too bad. A good Mandatory Access Control framework or X server is not something that you write in two weeks, or fork and maintain without sufficient manpower. So we’ll end up with two major MAC frameworks, giving room for some fair competition (and choice).
“When Red Hat created rpm and then yum, you could have argued that they didn’t cooperate with apt and instead created a new tool.”
You just made the point for me. RPM was created by Red Hat, while Yum (Yellowdog Updater, Modified) was not. RPM was also a small project that was much needed and one that Red Hat chose to take on, and one that everyone now shares in maintaining.
“Keeping your line, we could say that Gnome folks didn’t cooperate with Kde”
But again, you are comparing the Gnome community to the Novell company. Gnome is a community project and has contributors from IBM, Sun, Red Hat, and many, many more companies and individual open source contributors.
Competition among the open source community is not a bad thing. However, one company trying to compete against the entire open source community is going to be difficult regardless of whether or not they keep their source code open or closed.
“Novell didn’t cooperate with gcj and instead created Mono, because a huge number of reasons that have been debated here long enough”
And again, Novell has taken full responsibility for getting the bloated mess to work with a limited number of their employees and a complete lack of community. On the other hand, while Red Hat is fairly involved in developing gcj, they are merely one company out of an entire community of developers working to fix it.
The reason for the success of open source projects is not due to open source itself. It is the collaboration and mutual work of an entire community in sharing the burden of development and then reaping the benefits. If the community does not get involved, it does not really matter whether the project is open source or closed source.
The problem with Novell is that they are trying to do too much all by themselves. They just need to start working with the community again.
You have a fairly major point there. Novell are looking like the proprietary company they always were by trying to take software in-house and work on it there. The problem is that the nature of open source software is very ill-suited to this, and they’ll end up working on things mostly by themselves that give them no payback for their effort at all.
Linux distributions work well only through collaboration. I have not seen Novell do that much lately. Rather than using/enhancing yum or apt, they wrote something different. Rather than using/enhancing SELinux, they wrote something different. Rather than using/enhancing gcj, they pushed Mono. Rather working with the Xorg committee, they went and wrote XGL from scratch. The problem with Novell is that they are trying to do too much all by themselves. They just need to start working with the community again.
Novell has actually been better to the community than most commercial distros. When they acquired SUSE they GPLed YaST. They did a great job with XGL and released a much better (IMO) solution than AIGLX. XGL had been in the works for a long time and no one really contributed, so I give praise to Novell for continuing its development. The best part about XGL is that it is much more graphics-card agnostic than AIGLX. As for GCJ and Mono, Novell bought Ximian, the developers of Mono, and continued with its development because Mono is a good technology and it fits in with Novell’s attempts to be compatible with Microsoft. Mono applications tend to fit in better than Java apps on a Linux distro anyway. The only point you made that I halfway agree with is SELinux. I would like to see SELinux mainstream, but AppArmor is a simpler and easier-to-implement security technology. In the end the decision to develop AppArmor might have been a good one, considering the work that must be done to SELinux to make it viable on widely varying setups.
The only point you made that I halfway agree with is SELinux. I would like to see SELinux mainstream, but AppArmor is a simpler and easier-to-implement security technology. In the end the decision to develop AppArmor might have been a good one, considering the work that must be done to SELinux to make it viable on widely varying setups.
AppArmor was a technology developed by a third party (Immunix) that used the same kernel infrastructure that SELinux did.
As you point out, AppArmor is simpler to use for most people and it’s simpler to create arbitrary sandboxes for applications; it does not require a lot of expertise to create new profiles.
When Novell bought AppArmor, they open sourced the technology which before was proprietary.
With lots of things here there is community on one side and business attitudes, interests, politics, etc. on the other.
But IMO it’s good they are fixing things up, because Zen is still not completely happy on my system.
Just IMO
>any of those distro’s you just listed being touted
>as enterprise class software and sold with a 5 year
>support promise? Errrr No.
So you advocate instead that they go with older packages of all the software, despite the fact that the feature sets and bug fixes of the latest packages such as KOffice tend to be considerable in comparison with previous revisions? You sound like the maintainers of Debian Stable, which make the OS obsolete before they are even stabilized.
I don’t advocate going bleeding-edge with *.0 releases of things, mind you. Hey, isn’t XGL not even a full x. release?
>And that last point is also why there will be no
>last minute changes of the magnitude you suggest,
>thirdparty software support can make or break an
>enterprise platform.
3rd-party software almost always ships with its own libraries or with statically compiled binaries. Distributions generally frown on the practice anyway.
>You should think about the business implications of
>what you suggest.
I’ve been using 4.x and 4.1.x compilers for some time now. I’ve considered what I am suggesting quite a bit. The latest GLibC and GCC are extremely valuable contributions to the future of Linux, as are the latest versions of KDE and KOffice, if that is your favoured environment. If Novell isn’t planning that far ahead, I’ll be concerned for them, especially given the focus on Java and the improvements in GCJ/AWT.
>Oh and while you talk about Debian UNSTABLE, as
>Debian STABLE (3.1) was released June 2005 if the
>previous release schedule is anything to go by then
>GCC, Glibc and the other packages won’t go mainstream 2008 when their stability is assured.
Debian is probably the most conservative distribution around. Yet their Testing distribution is using 4.1.x for new binaries and they have 4.1.2-pre available already in Testing. I strongly suspect that they will deprecate 4.0 within days. Talk to most Debian users and they will tell you that Testing is the distribution they use, because Stable is obsolete. Unstable is a gamble, but they have already standardized on GCC 4.1.x and the latest GLibC (Testing is still at GLibC 2.3.6).
In Gentoo, my issues with the new (to Gentoo) compiler have been next to zero, and the benefits in memory usage, software startup, etc. have been noticeable compared to the 3.4.x toolset they were using before. Give it a shot with some of your code, if you haven’t.
“So you advocate instead that they go with older packages of all the software”
Look, let’s clear things up a bit. I don’t disagree that some of the packages will be a bit old when SLES 10 finally ships. Assuming SLES 10 is derived from OpenSUSE 10.1, then the packages are, what, 2-6 months old from when they froze the packages for Release Candidate testing.
Let’s face it, when SLES 10 ships it will take most 3rd-party vendors 3 months to announce certification and support for SLES 10. A lot of places (which favour support and stability) will not introduce SLES 10 until their middleware vendor (SLES 3rd-party vendors) announces that certification… which means by the time SLES 10 can be integrated/upgraded into an existing environment the packages will be quite old.
So from all that I agree with you. Right or wrong, that is their preferred test and release schedule with which they attempt stability.
The point you raised which I don’t agree with is your suggestion that changes of this kind can be made this close to the release date. It would invalidate a lot of testing that has taken place, not only by Novell but by ISVs. As stable as the latest gcc/glibc/packages may be, Novell cannot take the risk of releasing an unstable SLES.
Enterprises are slow-moving, risk-averse places, and they are who Novell (and Red Hat) are targeting.
The main reason the company I work for use SLES9 is because we MUST use a certified platform for support from our middleware supplier. No certified platform == no support, and that is NOT an option.
So old packages are not preferred, but stable is the name of the game with SLES.
“In Gentoo,….”
I like Gentoo, but most 3rd-party software vendors don’t like a fast-changing platform – it’s a support nightmare. And that’s why SLES and RHEL are the preferred distros for most businesses (suppliers and users). I/we don’t have to like it, but that’s the way it is.
So you advocate instead that they go with older packages of all the software, despite the fact that the feature sets and bug fixes of the latest packages such as KOffice tend to be considerable in comparison with previous revisions? You sound like the maintainers of Debian Stable, which make the OS obsolete before they are even stabilized.
I suppose it’s a balance thing, but people always equate older with more stable. In the open source world this isn’t really the case, as more and more bugs get fixed in later versions and improvements are made, especially between minor versions. Debian Stable isn’t actually more stable or more bug-free in terms of the software it uses at all; it’s just that more bugs tend to be known about.
There’s one big advantage in the whole “package management sucks in 10.1” issue:
Smart gets a huge boost! Personally I believe that Smart (especially considering its age) can become a major player in package management across distributions. The fact that it can handle (and does, beautifully!) many different repo types is really very cool!
SUSE 10.1 rocks, it really, really rocks, if you forget the pm issue and install Smart.
>Assuming the SLES 10 is derived from OpenSUSE 10.1
>then the packages are what 2-6 months old when they
>froze the packages for Release Candidate testing.
Therein lies the problem, in my mind. Linux is all about pushing the boundaries of computing and providing features in a linear series of updates, rather than setting a feature freeze and catching up later. What they should be doing is setting a list of software they want to include, the package manager they want to include, and some extra value-added features that sets their distribution apart.
But freezing code at obsolete revisions is pointless, to my thinking. When the OS ships, it should ship with the latest non-point-zero revisions of the software minus any packages that are *known* to cause problems (or packages that would cause a major distribution upheaval in terms of configuration).
I’m OK with them not shipping GLibC 2.4.0, but not shipping GCC 4.1.1 would be a big mistake. Linux is also largely about development, and 4.1.1 has features that cannot be overlooked – stack protection, C++ performance, dramatic GCJ improvements, much better non-x86 architecture optimization support, et al.
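To make the stack-protection point concrete (a hypothetical one-liner only: demo.c is a placeholder, and I’m assuming the compiler is installed side by side as gcc-4.1), turning it on is just a compile flag once GCC 4.1 is on the box:
# build with stack canaries; the program aborts if stack smashing is detected
gcc-4.1 -O2 -fstack-protector -o demo demo.c
# or instrument every function rather than just the risky ones, at a small cost
gcc-4.1 -O2 -fstack-protector-all -o demo demo.c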
Anyway, the other side of that argument is something I’m not blind to as well. If they want SUSE to be known as the distribution that is stable at the cost of feature set and are not concerned with known bugs which exist in obsolete software, then so be it. They won’t have me as a customer, and I guess that isn’t a concern if that’s where they are headed. Remember, as well, that you don’t have to ship binaries compiled with the newer compiler in order to ship the compiler.
>So from all that I agree with you. Right or wrong
>that is their preferred test and release schedule to
>which they attempt stability.
I understand, and I can see that point of view. I just hoped for better from Novell, especially given some of the more important projects they have funded.
>As stable as the latest gcc/glibc/packages may be
>Novell can not take the risk of releasing an
>unstable SLES.
Maybe. While I think otherwise, I’m not going to debate this point, since I don’t know what sort of facilities they have for testing. But… Debian and Gentoo have done the testing and released 4.1.x for some time now (in Gentoo 4.1.1 is now the ~arch compiler of choice).
>Enterprises are slow moving risk averse places and
>they are who Novell (and Red Hat) are targeting.
Understood, but somehow I doubt the enterprises will be using applications that are 100% GPL. If I’m wrong, I am entirely happy to admit ignorance!
>The main reason the company I work for use SLES9 is
>because we MUST use a certified platform for support
>from our middleware supplier. No certified platform
>== no support, and that is NOT an option.
I feel your pain. We have a RH9 system that we have to support, and it’s a nightmare. Luckily there is a project out there to support RH9 with bugfixed packages, but RedHat has abandoned its children. RedHat is a menace to the Linux way of things, IMHO – they have done to customers exactly what people went to Linux to escape – planned obsolescence.
> So old packages are not preferred, but stable is
> the name of the game with SLES.
I suspect this will get them left behind in enterprises with IT staff that are really on the ball. But time will tell, and perhaps my perspective isn’t as good as yours – I’m used to dealing with the latest/greatest and cleaning up as I go, but I realise not everyone has the same drive or the same time budget.
“I’m OK with them not shipping GLibC 2.4.0 but not shipping GCC 4.1.1 would be a big mistake. Linux is also largely about development, and 4.1.1 has features that cannot be overlooked – stack protection, C++ performance, dramatic GCJ improvements, much better non-x86 architecture optimization support, et al.”
In a way, yes. However, I know for a fact that in the industry I am in, simulation software, we require the use of GCC 3.4 due to high-profile applications that use it and do NOT work with GCC 4. We still use SUSE 9.3 for that very reason, as it is much easier than setting the machine up to use multiple GCC versions. Until the other software catches up, we are stuck down here in order to do what we do.
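For what it’s worth, when both compiler generations are installed side by side, a per-build selection is sometimes enough; a rough sketch, assuming the legacy binaries are packaged as gcc-3.4 and g++-3.4 (names vary by distribution):
# point just this build at the older toolchain, leaving the system default alone
export CC=gcc-3.4
export CXX=g++-3.4
./configure && make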
Therein lies the problem, in my mind. Linux is all about pushing the boundaries of computing and providing features in a linear series of updates, rather than setting a feature freeze and catching up later.
It just doesn’t cut it. When you deploy an enterprise application you want:
1. The APIs to be stable.
2. The system to be supported for many years.
And that is what RHEL and SLES are all about: stable APIs and long release/support cycles. Maybe it doesn’t fit for you, but when it comes to supporting customers it is ideal. You can focus on two platforms that are not moving, and that’s it.
I’m OK with them not shipping GLibC 2.4.0 but not shipping GCC 4.1.1 would be a big mistake. Linux is also largely about development, and 4.1.1 has features that cannot be overlooked – stack protection, C++ performance, dramatic GCJ improvements, much better non-x86 architecture optimization support, et al.
Quite often new features are already backported to the stable enterprise version. E.g. RHEL4 has stack protection techniques integrated into gcc and glibc. And sometimes new features are backported (especially in the kernel) if they do not result in API and ABI changes.
Maybe. While I think otherwise, I’m not going to debate this point, since I don’t know what sort of facilities they have for testing. But… Debian and Gentoo have done the testing and released 4.1.x for some time now (in Gentoo 4.1.1 is now the ~arch compiler of choice).
I can’t comment on Gentoo, I have never used it. But Debian Testing is not Debian Stable. And Debian usually does long freezes before renaming the current testing branch to “stable”. So, the fact that it is Debian Testing now is not really relevant. In the Red Hat and SUSE scheme of things it is like saying gcc so and so is in Fedora Core 6 Alpha or SUSE Linux 10.2 Alpha.
I feel your pain. We have a RH9 system that we have to support, and it’s a nightmare. Luckily there is a project out there to support RH9 with bugfixed packages, but RedHat has abandoned its children. RedHat is a menace to the Linux way of things, IMHO – they have done to customers exactly what people went to Linux to escape – planned obsolescence.
Red Hat 9 is not an enterprise version. If your company used RHEL3 (which was originally based on RH9) they would still get updates through their subscription, and would continue to get them until October 2010. Or if you don’t want to pay: CentOS 3 will do the same thing.
> 1. The APIs to be stable.
The = key in dselect will solve that for Debian.
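That dselect “=” hold can also be scripted; a minimal sketch, with libfoo1 standing in for whatever runtime library the application depends on:
# mark the package held so routine upgrades leave its version (and ABI) alone
echo "libfoo1 hold" | dpkg --set-selections
# list the packages currently on hold to confirm
dpkg --get-selections | grep hold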
Rarely will a kernel or a C compiler change that unless your application needs to directly address the kernel, but C++ etc. with their runtime libraries certainly can. Luckily, Linux will, like most *nixen, support multiple versions of a library, and problems can be solved that way. Or the apps can be statically compiled, which contrary to popular thought is *not* a performance/memory loss in every case.
GLibC can be an issue.
Ironically, one of GCC 4.x biggest features is that it is much more strict and complete in terms of supporting the C++ ABI/specs. Yet another reason to recommend it, since the runtime libraries are unlikely to greatly change from this point onwards.
>Quite often new features are already backported to the
>stable enterprise version. E.g. RHEL4 has stack
>protection techniques integrated in gcc and glibc.
Yes, I have seen that in many cases. I think it’s kind of ugly to do that, but what the hey…
> But Debian Testing is not Debian Stable.
Most people who use Debian are using Testing, because Stable is obsolete. Not, mind, as obsolete as the old Stable was. If your application is vertical in nature and requires Stable, more power to you.
> Red Hat 9 is not an enterprise version.
Tell that to the application vendor. In this case, they also require kernel 2.4, which doesn’t run worth a fiddler’s damn on current-generation machines (no or little support for many new chipsets).
Overall, I guess what I was trying to say is that I hope Novell is progressive with their new distribution instead of becoming another Red Hat, with planned obsolescence and a required forklift upgrade (or a nail-biting upgrade process) between major revisions rather than continuous improvement. Ubuntu was supposed to provide us with that, but my experience is that the stability of Dapper is dismal even in comparison with my Gentoo boxen running unstable arch profiles.
Luckily there is a project out there to support RH9 with bugfixed packages, but RedHat has abandoned its children. RedHat is a menace to the Linux way of things, IMHO – they have done to customers exactly what people went to Linux to escape – planned obsolescence.
You’re spot on there.
>people always equate older with more stable. In the
>open source world this isn’t really the case as more
>and more bugs get fixed in later versions and
>improvements are made
You’ve made the thrust of my argument clearer than I could. Thanks for that.
2005 – Windows Server Unit Sales up 12.9%
2005 – Linux Server Unit Sales up 14.3%
Linux used to grow at 50% or more, then it was 30%, now it’s just above Windows.
2006 it will be half that of Windows. Cherry picking Unix on x86 upgrades only works for so long.
(Yeah yeah, your servers are bought without an OS and you downloaded Linux later. Me too. 200+ Dell Servers with no OS. We buy our Windows Server OS from a reseller because we get an Academic discount).
Working in a Novell environment, I just want them to “get it right”. I don’t care about bleeding edge technology. I want to be able to get actual work done on my Novell technology based network. If it takes six more months, no worries here, as long as it’s high quality.
Ok Mr Slate, that’s been enough insults for now. Now, where’s that drop-down ban menu?