2CPU.com has posted an article comparing the performance of Linux kernels 2.6.4 and 2.4.25 under a variety of server-oriented benchmarks. Their Apache and MySQL results were especially interesting. It’s a good read for those of us debating whether or not to upgrade our production machines from 2.4 to 2.6.
He’s probably referring to the monolithic kernel vs. microkernel debate… look up the history of Linux, particularly the Torvalds-Tanenbaum debate…
By all means, enlighten us. How would you have done better than Linus?
I never claimed that I could have done better myself.
But a more modular architecture, such as a modified microkernel, would have been a better starting point than a monolithic kernel.
Of course, everyone has their own opinion on the matter. This is mine. The Linux developers have done a fantastic job regardless.
just because some of the stuff in Linux when it first came out was wrong (e.g. intimately tied to the 386 architecture) doesn’t mean that it will always be wrong and will never be corrected or addressed
Don’t get all upset. I use Linux on quite a few of my boxes, and wouldn’t if I thought badly of it.
Are there any benchmarks out there on desktop performance between 2.4 and 2.6? I know the desktop is pretty subjective and I’m not sure how you test it, but maybe someone has come up with some kind of objective test. I’m running Dropline Gnome 2.4 on a fast Intel with a gig of RAM, but you can always use better performance – especially with freaking gtk+ being the way it is. The only reason I haven’t switched over yet is my damn wireless card. Realtek won’t open up the drivers, and they won’t run on anything over 2.4.21. I guess I could go Linuxant, but if 2.6 isn’t going to give me that much of a performance boost on the desktop on a fast machine with lots of RAM, then I might as well stick it out with 2.4.21 and just wait to see if GTK 2.4 and Gnome 2.6 give me a speed bump (once they’re out for Dropline).
Personally, I think your situation would qualify as one of the few truly legitimate reasons not to upgrade to 2.6.
You should try Linuxant. The 2.6 kernel is noticeably nicer on the desktop, especially when the system is under load. I’m listening to Shoutcast right now (nothing like French electronica while coding!) and I just tried updating my locate database, compiling the kernel, and running ‘ls -R /’, and didn’t hear a single skip.
If you do upgrade to 2.6, make sure to remove any hack your distro might have done to get X to run at higher than normal priority — it’s not necessary anymore.
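If you’re not sure whether your distro does this, one quick way to check is to look at the X server’s nice (NI) value; a small sketch in shell (the process names are guesses, since distros run the server as X, Xorg, or XFree86):

```shell
# List any running X server processes with their nice (NI) values.
# A negative NI means the server runs at raised priority; reset it
# with `renice 0 -p <pid>` as root. Prints a note if no X is running.
ps -eo pid,ni,comm | grep -E ' (X|Xorg|XFree86)$' || echo "no X server running"
```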
multithreading is already here. ACLs as well. a metadata fs is coming (Reiser4). the original flaws are being mended.
i hope there will be one common standard for metadata, though the chances for this look slim *sigh*
now it’s just a matter of waiting for userspace to start using those.
i really don’t understand all this fuss.
there is still no native 2.6 linux os available. that means a system where you cannot run a 2.4 kernel because it’s compiled with NPTL instead of LinuxThreads, with glibc compiled against 2.6 headers.
if there is one i’d really like to know. what they tested was an interim solution on the way to 2.6
…is that they are difficult to debug. Which is one reason why, ideally, they are much smaller than monolithic kernels. Nonetheless, why would we be better off with a micro kernel? Simple fact is, just because it’s a buzzword doesn’t mean we’d be better off with it. In fact, if you check, I think you’ll find that it is seemingly easier to build a poor-performing system via a micro kernel approach than it is to build a fast system with a monolithic kernel.
Simple fact is, just because it’s a buzzword
‘Microkernel’ hasn’t been a buzzword for ages.
And you are right about it being quite easy to build a poorly performing microkernel (look at the Hurd). But having as much of the kernel as possible outside of kernelspace does make the kernel more robust, simple fact. That’s why QNX is so often used in medical devices etc.: there is so little that is fundamental to the kernel that can fail in such a way as to really bork the box.
“…is that they are difficult to debug. Which is one reason, why ideally, they are much smaller than monolithic kernels.”
A micro kernel is much easier to debug than (most) monolithic ones. By forcing all sub-systems to communicate over one IPC path, the kernel code can be verified pretty easily.
“Nonetheless, why would we be better off with a micro kernel? Simple fact is, just because it’s a buzzword, doesn’t mean we’d be better off with it.”
As “Debian” wrote, it hasn’t been a buzzword for a very long time. The reason micro kernels can be better is that they are (forcibly) modular, and that makes it easier to implement module hot-swapping, software redundancy and other advanced features.
“In fact, if you check, I think you’ll find that it is seemingly easier to build a poor-performing system via a micro kernel approach than it is to build a fast system with a monolithic kernel.”
Yes, that is true. When designing a monolithic system one can use shortcuts to speed up communication between different sub-systems; while this is also possible with a micro kernel based system, it goes against the design philosophy. Adding specialized IPC paths makes the kernel harder to debug, increases code size, and often makes the sub-systems using those paths more tightly coupled to the kernel.
as hardware gets better, the higher-level benefits of better-designed micro-kernels will outweigh the perceived performance of monolithic kernels.
this is a trend seen in all software domains over the last twenty years. (good) software designs which were impossible to run on hardware 15 years ago are now commonplace, from image processing to network protocol stacks.
i look forward to a time when the GNU Hurd is ubiquitous!
“this is a trend seen in all software domains over the last twenty years. (good) software designs which were impossible to run on hardware 15 years ago are now commonplace”
So – why does that mean monolithic kernels can’t survive? I still don’t get it. Linux, FreeBSD, commercial unixes etc. are not “unmaintainable pieces of code”. You don’t need to have a message-passing thing and run drivers as independent processes to make a good and maintainable design. You won’t get the advantages/disadvantages of them, but that doesn’t mean traditional kernels can’t have a good design…
true – it doesn’t mean that there isn’t a place for monolithic kernels. i just can’t think of one right now, other than embedded and other constrained environments. but even these “monolithic” kernels have become modular over time, with kernel loadable modules for linux, freebsd and netbsd, and possibly solaris (i’m not sure). asymptotically approaching a fully segregated design.
but yes, as always, the right tools for the right job, but i believe microkernels will appear in ever greater areas, including the desktop and server space.
“there is still no native 2.6 linux os available. that means a system where you cannot run a 2.4 kernel because it’s compiled with NPTL instead of LinuxThreads, with glibc compiled against 2.6 headers.
if there is one i’d really like to know. what they tested was an interim solution on the way to 2.6”
Gentoo can be a native 2.6 Linux system. I’ve got a 2.6.4 kernel and glibc compiled with NPTL and against 2.6.4 headers; a simple USE flag (“nptl”) is sufficient.
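For anyone curious what that looks like in practice, a minimal sketch of the Gentoo side (the package names are assumptions from the 2004-era portage tree, so check your own tree):

```shell
# /etc/make.conf — enable the nptl USE flag
USE="nptl"

# then rebuild the 2.6 headers and glibc against them (as root):
# emerge --oneshot linux26-headers glibc
```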
At the _very least_, the HURD deserves far more attention than it is getting right now. Whether it becomes the de facto GNU system is another matter (I’d like to think of Linux as a “steward” in the HURD’s absence… Linux is the John the Baptist to the HURD’s Jesus — eh, never mind, you get the picture).
If you mean a box set with 2.6? Try Turbolinux, and I believe SuSE now.
Gentoo supports 2.6 very well. There may be some compiling, but the system is VERY fast (might be due to the different tweaks Gentoo can do). Gentoo is 2.6-native as much as any distro is, and has been since before 2.6 came out. (Much as devfs is liked, udev & sysfs are supported just fine.) Not to mention any number of packages.
Oh, and there are multiple patch sets of 2.6 to choose from: the official 2.6 (development-sources), Andrew Morton’s patch set (mm-sources, mostly stuff that will go into the kernel after a bit of testing), and at least one other. So it’s supported, even the variants.
Linux *has* been ported to a Microkernel:
http://os.inf.tu-dresden.de/L4/LinuxOnL4/
but there really doesn’t seem to be much interest in it since even the microkernel advocates don’t seem to know about it.
this guy needs to have a look here: http://www.bitbenderforums.com/vb22/showthread.php?threadid=58650
Hey ya. I have boosted my network @ home by close to 25% with 2.6, same driver, same hw.
I also have much less load on the server. Look here, gentoo users that need to tweak proggies that wait around for user input etc: http://www.bitbenderforums.com/vb22/showthread.php?postid=310232#po….. that is a kernel tweak.
This is not to flame gentoo, but to flame this guy for judging something he has no clue about
The micro vs. mono debate is lame. It’s usually “theoreticians” vs. people who have actually done work or an implementation. It’s retarded, and if microkernels were the wave of the future, don’t you think the Hurd or similar efforts would be more popular with developers, and not just with freshly-graduated IT students?
I wonder which architectural changes allowed the rather startling performance increase for file serving?
It’s retarded, and if microkernels were the wave of the future, don’t you think the Hurd or similar efforts would be more popular with developers, and not just with freshly-graduated IT students?
Microkernel implementations are used plenty. And this isn’t even mentioning embedded or real-time applications, where microkernels are used almost exclusively.
XNU/Darwin, and the most widely used operating system in the entire world (take a wild guess) are modified microkernels (but then again, there are very few “pure” mk’s).
So you have a world of embedded and real-time applications, and 2 popular operating systems following a microkernel approach. So if you didn’t know, microkernel “research” has been implemented outside of academic circles for some time now.
the most widely used operating system in the entire world (take a wild guess) are modified microkernels
Nope. It’s not a microkernel. NT was a microkernel in version 3.51 or something like that. People keep propagating this myth that Windows uses a microkernel. It doesn’t. If it did, it would be much slower, and a lot more stable. Even OS X doesn’t take advantage of Mach enough to be considered a microkernel OS. The implementation of BSD does not ride on top of Mach, but rather alongside it.
“Nope. It’s not a microkernel. NT was a microkernel in version 3.51 or something like that. People keep propagating this myth that Windows uses a microkernel. It doesn’t. If it did, it would be much slower, and a lot more stable. Even OS X doesn’t take advantage of Mach enough to be considered a microkernel OS. The implementation of BSD does not ride on top of Mach, but rather alongside it.”
Are you implying that it was rewritten that much between NT 3.51 and NT4? Seems like a lot of work, and pretty unlikely.
Personally I don’t care if it’s micro or mono, I’ll leave implementation to people who actually know what they are doing until I understand it better.
I’m sure there are advantages and disadvantages to both. And from all I’ve heard, Windows NT is a modified microkernel. Please show us evidence that it’s not; your word means nothing as you are anonymous.
Can we see a comparison for which kernel series has had more remote root vulnerabilities? I know 2.6.x is off to a rocky start, but I’m guessing there are still more in 2.4.x. Maybe even a proportion of remote root vulnerabilities per day since release would be interesting.
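The per-day proportion is trivial to compute once someone collects the counts; a sketch with placeholder numbers (the vulnerability counts below are made up purely for illustration, only the release dates are real):

```shell
# Hypothetical vulnerability counts divided by days since each
# series' release (2.4.0: 2001-01-04, 2.6.0: 2003-12-18), measured
# from roughly March 2004. The counts (5 and 2) are placeholders.
awk 'BEGIN {
  printf "2.4.x: %.4f vulns/day\n", 5 / 1170
  printf "2.6.x: %.4f vulns/day\n", 2 / 80
}'
```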
Hurd might be a nice microkernel. I have no idea, but since it’s doubtful that it’ll ever allow binary drivers, it’s doubtful that it will ever be anything besides a hobbyist OS on the desktop unless ATI and/or Nvidia open up their drivers. By the time Hurd is ready, the desktop will most likely mandate some sort of 3D acceleration via OpenGL or something.
Hi
we have no remote holes in 2.6.x at all and possibly one in 2.4.x.
regards
Jess
Abraxas (IP: —.37.3.239.adsl.snet.net) is right about NT 3.x being a true microkernel, and later releases being modified microkernels. After 3.x Microsoft moved a good deal of the window system into the kernel, while keeping the rest of the kernel pretty much as it was. Hence a modified microkernel.
Mac OS X is also a modified microkernel, but Apple included BSD bits instead of a window system in Mach.
Make no mistake, monolithic kernels are quite often faster and simpler to implement initially, but microkernels (and better yet, their ‘impure’ modified microkernel relatives) are the better bet in the long run, as they are easier to debug, easier to extend, and easier to maintain (due to the compartmentalization inherent in these message-passing systems). They are also easier to understand (I cannot fathom folks who swear the monoliths are easier to follow), again due to the compartmentalization: small, bite-sized chunks of code, with well-defined interfaces between them (like all things, this varies with implementation. Some are really bad).
Linux advocates (Linus aside, IIRC) will swear up and down that this is not the case (and the Linux folks have done a fantastic job of pushing the monolithic kernel idea to its limits), but monoliths are just not built with future-proofing in mind. It takes legions of developers and corporate interest to keep it going. With a better, saner, more modular, and more maintainable architecture, corporate help wouldn’t be quite so necessary, lawsuits aside (not to mention that the corporations are the largest reason for highly dangerous lawsuits, generally speaking).
Even now Linux and FreeBSD are seeing the first headaches that this design philosophy has in store. Some would argue that Linux’s popularity is due entirely to its quality and technological excellence, and that the big corps are falling in line left, right and center due to this fact. Linux is pretty damned good, for sure, but it really is stretching the limits of its fundamental architecture.
But you don’t have to take my word for it. Linux and FreeBSD aren’t the only free, open source OSs out there, and in the coming years some of the others are going to give them a run for their money. It’s going to be a fun decade in OSS land.
Hi
Microkernels may in theory be easier to maintain, but look at HURD to understand why the idea fails miserably in practice. operating systems are not object oriented, and IPC cannot be used to compartmentalise them. each of the components should be tightly integrated and optimised for its architecture.
Moreover, the success of Linux is because it’s a clone of the tried and tested unix model, not because it’s superior or whatever
regards
Jess
The Hurd is ‘maintained’ by a small group of people that argue endlessly, and code little.
operating systems are not object oriented, and IPC cannot be used to compartmentalise them. each of the components should be tightly integrated and optimised for its architecture.
That argument doesn’t make much sense. There is absolutely no reason why an OS can’t be OO, and not all microkernels use IPC exclusively in the ways that you imagine.
Moreover, the success of Linux is because it’s a clone of the tried and tested unix model, not because it’s superior or whatever
Also a very simplistic argument, which covers none of the real reasons (there are many, many of them) that Linux is the most well known of the free, open source OSs. FreeBSD is not a clone, but a direct descendant of the ‘tried and true.’ It’s not quite so popular now, is it? Is that because it’s a descendant instead of a clone? What a silly thought.
“this guy needs to have a look here: http://www.bitbenderforums.com/vb22/showthread.php?threadid=58650
Hey ya. I have boosted my network @ home by close to 25% with 2.6, same driver, same hw.
I also have much less load on the server. Look here, gentoo users that need to tweak proggies that wait around for user input etc: http://www.bitbenderforums.com/vb22/showthread.php?postid=310232#po……. that is a kernel tweak.
This is not to flame gentoo, but to flame this guy for judging something he has no clue about”
hhrmmmfff… it seems you didn’t understand anything. You can recompile the kernel on every existing distro, but we weren’t talking about recompiling the kernel; we were talking about the (N)ative (P)osix (T)hread (L)ibrary, which requires a 2.6 kernel (unless you have a RedHat distro and a patched kernel) and doesn’t work with 2.4, instead of LinuxThreads. And now, by the way, I’m using cfq.
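As an aside, if you want to verify which thread library your glibc is actually using, glibc reports it via getconf (this variable may not exist on very old glibc builds):

```shell
# Prints the active thread implementation, e.g. "NPTL 2.3.2"
# on an NPTL system or "linuxthreads-0.10" on a LinuxThreads one
getconf GNU_LIBPTHREAD_VERSION
```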
“At the _very least_, the HURD deserves far more attention than it is getting right now.”
Why?
“Linux is the John the Baptist to the HURD’s Jesus”
You’re right, HURD is like Jesus. He’s supposed to be coming, but he’s never going to. Just like HURD.
You’re right, HURD is like Jesus. He’s supposed to be coming, but he’s never going to. Just like HURD.
That made my day
I was thinking of saying something along the lines of John the Baptist being real… but I thought better of it
You’re right, HURD is like Jesus. He’s supposed to be coming, but he’s never going to. Just like HURD.
Heh, my analogy is strictly first-century here. Equating second-comings with first-comings doesn’t make sense. Good try though.
Anyways, the main reason for the HURD’s poor performance is that people are too busy ignoring it because of that poor performance. I don’t understand why the HURD, out of all the open source projects out there, is rejected on the basis of alpha status, when all it takes to improve it is contributing.
I don’t understand why the HURD, out of all the open source projects out there, is rejected on the basis of alpha status, when all it takes to improve it is contributing.
Let’s not forget the fact that the most important Hurd developer was essentially fired by RMS for speaking his mind. I found that quite disturbing coming from someone supposedly so big on freedom, as in freedom of speech…