And no, the microkernel debate is not over yet. In a reply to various comments made over the past few weeks, Andy Tanenbaum has written an article to address these. He first clearly states he respects and likes Torvalds, and that “we may disagree on some technical issues, but that doesn’t make us enemies. Please don’t confuse disagreements about ideas with personal feuds.” The article states: “Over the years there have been endless postings on forums such as Slashdot about how microkernels are slow, how microkernels are hard to program, how they aren’t in use commercially, and a lot of other nonsense. Virtually all of these postings have come from people who don’t have a clue what a microkernel is or what one can do. I think it would raise the level of discussion if people making such postings would first try a microkernel-based operating system […]. Has a lot more credibility.”
I’m sorry, but just because Linus wasted his entire life developing a monolithic kernel in C doesn’t mean he is actually qualified to talk about operating system design.
Would you ask a visual basic programmer something about type theory?
Linus just doesn’t have what it takes. All he has done is monolithic kernels and programming in C. That’s it.
I’m sorry, I didn’t quite get that? Someone who wrote the most popular free Unix kernel is not qualified to talk about operating system design? Gee, I wonder who is then.
Of all people, Linux enthusiasts should know that being popular != being the best. Windows is more “popular”, yet you’d rarely equate it with being good.
I believe what the OP was trying to say is, that Linus has only done work on a monolithic kernel. It’s a very successful monolithic kernel, but that doesn’t automatically mean that everything he says about operating systems is the gospel truth.
It also doesn’t mean that he is not qualified to talk about operating systems, which the OP clearly suggested. As far as I’m concerned, the people not qualified to talk about operating systems are the people who aren’t involved in designing or writing them. And even then, a lack of qualification doesn’t mean that nobody but qualified people may discuss operating system design.
Discussing is one thing, but claiming superior knowledge on the basis of your own kernel’s popularity is another.
Mystilleef: I’m sorry, I didn’t quite get that? Someone who wrote the most popular free Unix kernel is not qualified to talk about operating system design? Gee, I wonder who is then.
Success doesn’t mean brilliance. By your logic, Microsoft is the one that should teach the world how to write an OS, since they have the most popular OS in the world.
Right?
Does failure mean brilliance?
I would say experience above everything
Actually I can agree with JohnX in a sense.
I know that I will be hated and flamed for this one but… When I first read the Tanenbaum-Torvalds debate several years ago I was actually stunned by Linus’ arrogance and presumption. I mean, at that time Linus was just a student and certainly wasn’t qualified to argue with professor Tanenbaum about OS design, just on the basis of his experience with a hobby project. Nowadays… O.K., but back then… whuh!
Anyway, I really can’t believe that people don’t see the essence of this argument. Probably because they tend to get emotional about these things, like they are defending their own opinions (even if they are not qualified enough to have one), while they are just defending their choice of OS – which is nothing to be defended.
Microkernel vs monolithic kernel is a design philosophy comparable to OO vs structural programming. C++ code will never be as fast as C code, but on the other hand it is much more readable, reusable, etc…
With well-defined object (service) interfaces you can fine-tune methods/permissions and definitions of “who can do/request what”. That can be extended to almost infinite complexity, which will be required for the more sophisticated computer systems of tomorrow. That’s how all biological systems function – through well-defined interfaces and (biochemical) message passing – if you think about it. Just imagine what would happen if any external factor (virus, bacterium, etc.) could trigger apoptosis (programmed cell death), or if we would start inheriting the genetic material of everything that we eat.
So there you have it, performance vs security/integrity (health).
It’s one thing whether some product/technology has market potential, and completely another whether it makes sense from an engineering standpoint. That said, with industrial/technological progress, things that were niche or enterprise-level will eventually scale down to the consumer market.
Was VHS better than Beta? Was Rambus wrong going serial with its memory interface? Do those engineers who research microkernels really not know what they are doing?
I know I’m probably oversimplifying things, but that’s just out of a desire to reconcile the disputants. Tanenbaum was just explaining the merits of the current mainstream approach in the hope of raising awareness of the issue.
Microkernel vs monolithic kernel is a design philosophy comparable to OO vs structural programming. C++ code will never be as fast as C code, but on the other hand it is much more readable, reusable, etc…
Both statements are invalid.
uK vs monolithic is about address space separation. It has nothing to do with modularity.
And C++ code can be, and often is, faster than C (you would not be able to write those inline template optimizations by hand in C…)
Both statements are invalid.
uK vs monolithic is about address space separation. It has nothing to do with modularity.
Actually it is more than the address space separation. It’s about well-defined interfaces between the uK and the servers around it. Capability-based systems are all about that – defining which component is capable of what. It’s like making tens or thousands of different address spaces with their own permissions.
And C++ code can be, and often is, faster than C (you would not be able to write those inline template optimizations by hand in C…)
I was referring to large projects, like an OS (tens of millions of lines of code or so), not some special-case situations or algorithms. As I said, I’m aware of the oversimplification, but I just couldn’t come up with a better analogy.
Actually it is more than the address space separation. It’s about well-defined interfaces between the uK and the servers around it.
Wrong. It is ONLY about address space isolation. Nothing prevents a monolithic kernel from providing clean interfaces. Most of them do.
uK means you are going to write the scheduler and VM as servers in separate address spaces.
Mirek
Wrong. It is ONLY about address space isolation. Nothing prevents a monolithic kernel from providing clean interfaces. Most of them do.
You obviously didn’t understand the concept behind capability-based systems, or even the implications of having two processes in the same address space communicating by shared memory.
A monolithic kernel doesn’t provide you with the ability to fine-grain the permission control of different system components, because it utilises much faster and much simpler memory-sharing techniques rather than IPC. Of course, you can implement access control lists, but then you are back at the beginning with performance problems. It’s not so simple as you think.
Do some research on capability-based systems like Coyotos, CapROS or KeyKOS.
Bottom line, neither approach (uK vs MK) is better per se for all applications. (A rough toy sketch of the capability idea follows below.)
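To make the capability point a bit more concrete, here is a toy sketch in C. The names and structures are entirely made up (loosely in the spirit of systems like KeyKOS/EROS, but not code from any of them); it only shows the basic idea that a component can act only on objects it holds an explicit token for, and that every invocation is checked:

    #include <stdio.h>

    /* Rights bits a capability may carry. */
    enum { CAP_READ = 1, CAP_WRITE = 2 };

    /* A toy "capability": a reference to one object plus the rights granted
     * on it. Real systems hand out opaque, unforgeable handles; components
     * never see raw pointers to the objects themselves.                     */
    struct capability {
        const char *object;   /* which resource this capability names        */
        int rights;           /* what the holder may do with it              */
    };

    /* "Kernel-side" invocation: the only way a component touches an object. */
    static int invoke(const struct capability *cap, int wanted, const char *who)
    {
        if ((cap->rights & wanted) != wanted) {
            printf("%s: DENIED on %s\n", who, cap->object);
            return -1;
        }
        printf("%s: allowed on %s\n", who, cap->object);
        return 0;
    }

    int main(void)
    {
        /* The audio driver was granted write access to the sound device,
         * read access to a log, and nothing at all for the disk.            */
        struct capability audio_dev = { "sound-device", CAP_WRITE };
        struct capability log_file  = { "log-file",     CAP_READ  };
        struct capability no_disk   = { "disk",         0         };

        invoke(&audio_dev, CAP_WRITE, "audio-driver");   /* fine              */
        invoke(&log_file,  CAP_WRITE, "audio-driver");   /* read-only: denied */
        invoke(&no_disk,   CAP_WRITE, "audio-driver");   /* no token: denied  */
        return 0;
    }

In a monolithic kernel the audio driver and the disk code share one address space, so nothing structural stops the former from poking at the latter; with capabilities the request cannot even succeed without the right token.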
“A monolithic kernel doesn’t provide you with the ability to fine-grain the permission control of different system components, because it utilises much faster and much simpler memory-sharing techniques rather than IPC. Of course, you can implement access control lists, but then you are back at the beginning with performance problems.”
Hum, you do realize that shared memory and message passing are both ways to do IPC (Inter-Process Communication), right? Monolithic kernels DO IPC when they use shared memory among 2 or more processes.
I’ve got to admit that Windows’ way of dealing with IPC is interesting. For example, when a process needs to communicate a small amount of data (<= 256 bytes) it uses the message-passing IPC facility, but when the amount of data to be transferred exceeds 256 bytes the shared-memory IPC facility is used. It’s a fast and intelligent way of doing IPC. (A toy sketch of that size-based dispatch follows at the end of this post.)
“It’s not so simple as you think.”
I completely agree with you on that one !!! 😀
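For what it’s worth, here is a toy, single-process C sketch of that kind of size-based dispatch. It is not the real Windows LPC API – the names, the structures and the way the 256-byte threshold is handled are all made up for illustration – it only shows the decision: copy small payloads into the message, hand large ones over through a shared region and pass only a descriptor.

    #include <stdio.h>
    #include <string.h>

    #define SMALL_LIMIT 256            /* threshold taken from the post above */
    #define SHARED_SIZE 4096           /* pretend shared-memory segment size  */

    static unsigned char shared_region[SHARED_SIZE]; /* stands in for real shm */

    /* A toy message: either it carries the data inline ("message passing") or
     * it only carries an offset/length into the shared region.               */
    struct toy_msg {
        int inline_len;                /* >0: data is in buf; 0: look in shm   */
        size_t shm_offset;
        size_t shm_len;
        unsigned char buf[SMALL_LIMIT];
    };

    /* Hypothetical sender: picks the mechanism by payload size.
     * (No bounds checking: toy code only.)                                   */
    static void toy_send(struct toy_msg *m, const void *data, size_t len)
    {
        if (len <= SMALL_LIMIT) {
            m->inline_len = (int)len;          /* message passing: copy inline */
            memcpy(m->buf, data, len);
        } else {
            m->inline_len = 0;                 /* shared memory: copy once     */
            m->shm_offset = 0;                 /* into the shared region and   */
            m->shm_len = len;                  /* pass only a descriptor       */
            memcpy(shared_region, data, len);
        }
    }

    /* Hypothetical receiver: reads from wherever the sender put the data.    */
    static const unsigned char *toy_receive(const struct toy_msg *m, size_t *len)
    {
        if (m->inline_len > 0) {
            *len = (size_t)m->inline_len;
            return m->buf;
        }
        *len = m->shm_len;
        return shared_region + m->shm_offset;
    }

    int main(void)
    {
        struct toy_msg m;
        size_t n;
        char small[64] = "hello";
        char big[1000];
        memset(big, 'x', sizeof(big));

        toy_send(&m, small, strlen(small) + 1);
        printf("small payload via %s: %s\n",
               m.inline_len ? "message copy" : "shared region",
               (const char *)toy_receive(&m, &n));

        toy_send(&m, big, sizeof(big));
        toy_receive(&m, &n);
        printf("big payload via %s, %zu bytes\n",
               m.inline_len ? "message copy" : "shared region", n);
        return 0;
    }

The attraction is obvious: small messages avoid the cost of setting up shared mappings, while large transfers avoid copying the bulk of the data more than once.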
Hum, you do realize that shared memory and message passing are both ways to do IPC (Inter-Process Communication), right? Monolithic kernels DO IPC when they use shared memory among 2 or more processes.
Yes, sorry. I was actually referring to message passing.
I understand why a microkernel would want to keep drivers out of kernel space (lots of potentially untrustworthy and unreliable third-party code), a file system (very complex code, large data structures, need to be loaded on demand when mounting drives), even the cache (massive amount of memory, lots of dynamic allocation) could probably benefit from being in user mode, but why worry about the scheduler and VM?
It seems like those are parts that would rarely need to be changed at run time, and could be kept small and thoroughly debugged during development. The scheduler and VM seem like ideal components to be kept in kernel mode.
How often do you need to update the scheduler itself rather than its policy? And surely a VM contains far too much of the running state of the system for it to ever survive a crash of the VM server.
Minix3 sounds interesting, but I don’t think it would scale very well. All system calls are synchronous and it is entirely single-threaded.
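To illustrate what “synchronous and single-threaded” means in practice, here is a toy C sketch of the kind of server loop I have in mind (this is not actual MINIX code – the message layout and primitives are invented): the server receives one request, handles it to completion and replies before it will even look at the next one, so a single slow request stalls every other caller.

    #include <stdio.h>

    /* Made-up message format, loosely in the style of a MINIX-like sendrec()
     * protocol. Real MINIX messages look different.                          */
    struct message {
        int source;      /* who sent the request                              */
        int type;        /* which operation is being asked for                */
        int arg;         /* a single toy argument                             */
    };

    /* Stand-ins for the kernel IPC primitives.                               */
    static int toy_receive(struct message *m, int i)
    {
        m->source = i; m->type = 1; m->arg = i * 10;   /* fake a request      */
        return 0;
    }
    static int toy_reply(int dest, int result)
    {
        printf("reply %d -> process %d\n", result, dest);
        return 0;
    }

    int main(void)
    {
        struct message m;

        /* The whole server is this one loop: receive, handle, reply. While it
         * is busy with one request, every other client blocked in its
         * synchronous call just waits -- the scalability concern above.      */
        for (int i = 0; i < 3; i++) {
            toy_receive(&m, i);
            int result = m.arg + 1;                    /* "handle" the request */
            toy_reply(m.source, result);
        }
        return 0;
    }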
Your statement is just foolish. If you start *designing* an OS with a microkernel approach, you will see: it’s OO.
Every server will have its IDL, and if you don’t go with an OO approach, your system dies soon. I suggest you read about any microkernel-based OS design.
“Linus wasted his entire life”
– That’s a sour judgement on your part. If you enjoy what you do, it’s never a ‘waste’ – only to others. If I were Linus, I’d be damn proud of my accomplishments. How are you ‘wasting your life’? hmmm?
“Would you ask a visual basic programmer something about type theory? ”
– You could. They might not have a perfect answer, but they would have AN answer. And what if they also program in C/Java/Fortran/etc? I assume you’re referring to someone who has never had a programming class and just jumped-in on VB from scratch.
“Linus just doesn’t have what it takes.”
– Eh? According to whom? Maybe he is a jerk, but he’s allowed to be a jerk if he wants to. I sincerely doubt you’ve ever met the man (neither have I), but you’re probably not too qualified to say whether he’s “got what it takes” – and to do what exactly? Write a worldwide-accepted enterprise-quality open-source kernel? Gosh, I wonder who wrote the Linux kernel? You? Me? Oh, wait – no, it was Linus….
I don’t agree with many of Linus’ arguments; many of them do seem to be biased. But when you’ve poured your heart and soul into constructing something, you want to defend it to the end. I don’t fault him for that, I give him credit for his convictions – and skills. Of anyone, Torvalds and Tanenbaum are two of the most credible authorities on this subject. And I doubt Tanenbaum would say Linus has ‘wasted his entire life’.
Just like to point out that you know what he’s published, not what he’s done. I really have to doubt there is such a thing as a guy who’s only written one program.
I recognize the work of Linus Torvalds and his Linux and all the merit that he has (and the merit of the huge army of developers around the world that helped to turn his OS into something successful), but there is no comparison between Torvalds’ knowledge and Prof. Tanenbaum’s knowledge. The academic level of Tanenbaum is grades above Torvalds’, and giving support to Torvalds’ arrogant comments rather than to Tanenbaum’s well-demonstrated, scientific approach seems like nothing but emotional behavior.
None of you people have the foggiest notion what the “knowledge” of these people actually is. If you have anything to say on the topic then use actual facts relevant to the merits of argument to discuss it instead of encouraging pissing-contests by proxy. That is an emotional behavior, and has nothing to do with science.
No, but I’d ask the person who wrote Visual BASIC about it. That would be a far more applicable analogy; although a better analogy would be to ask the people who wrote C about type theory.
You ask someone that has done research in type theory about type theory. Asking any particular programming language’s designer(s) about type theory has no intrinsic value because that isn’t necessarily their area of expertise. This VB type theory thing is mind-numbingly stupid. VB is a programming language; programming with it says absolutely nothing about your level of education in any particular subject. It’s just plain out there, like so many of the comments in this discussion.
They both despise Slashdot.
They both despise Slashdot.
Then besides attending the VU as well, I also have something else in common with Tanenbaum (and Torvalds).
That’s kind of funny, Thom, considering OSnews.com is a mini-Slashdot. I mean you’ve got pretty much the same articles they do and a similar modding system. How much difference is there really except the number of posters?
Yes, such people aren’t generally big fans of any site like this or Slashdot, where the unwashed hordes are experts in subjects on every specialty (including those outside of computing) because they’re big-time IT hotshots.
“Yes, such people aren’t generally big fans of any site like this or Slashdot, where the unwashed hordes are experts in subjects on every specialty (including those outside of computing) because they’re big-time IT hotshots.”
LOL. The unwashed hordes? You certainly are being presumptive.
-Mak
More to the point I’m communicating the sentiment from first-hand experience. Were I that disdainful of this community, I wouldn’t post here. Though people like you certainly do talk out of your rectums, I find it amusing for the most part. It’s tantamount to public exhibition of ignorance. It’s funny how embittered you’ve become as a result of not being willing to look at yourself critically. Notice of course that you don’t offer any sort of intelligent criticism that rests on your own credentials. Perhaps you can continue adding childish responses to my comments.
“More to the point I’m communicating the sentiment from first-hand experience. Were I that disdainful of this community, I wouldn’t post here. Though people like you certainly do talk out of your rectums, I find it amusing for the most part. It’s tantamount to public exhibition of ignorance. It’s funny how embittered you’ve become as a result of not being willing to look at yourself critically. Notice of course that you don’t offer any sort of intelligent criticism that rests on your own credentials. Perhaps you can continue adding childish responses to my comments.”
You question everyone else’s “education” and make suggestions as to how others should be discussing this topic, yet provide virtually no insightful comments or opinions. Furthermore, in my case, you even have the audacity to suggest that I need to be introspective with regards to my discourse, setting a new record (at least in this forum) for unbridled arrogance.
I made my point of view very clear in my original post. That you don’t like my opinion, or materially disagree with it, is perfectly valid. That you attempt to invalidate my opinion by rudely questioning my qualifications and education under the thinly veiled pretense of “enlightening me” is insipid at best.
Out of curiosity, what exactly qualifies you to criticize other people’s qualifications and views regarding the topic at hand?
-Mak
Apparently you want to act thicker than I’m giving you credit for. My comment about your lack of formal education was speculative and dismissive, like your ad hominem regarding Dr. Tanenbaum’s expertise. Other than various Microsoft certifications, I have no idea what the credentials of Edwin Gnichtel are. You are attempting to invalidate Dr. Tanenbaum’s opinion using the reputations of people you really don’t know, by dismissing him as an “academic” that “writes good books” but otherwise has no idea what he’s talking about. Your opinion was worthless, because it was contentless and flat out ignorant. I have since asked you repeatedly to make a personal critique of a technical subject on the facts rather than attempting to dismiss Dr. Tanenbaum’s opinion, but since you don’t have one you continue to explain to me why I have said what I said, when I have made it perfectly clear why I have said it. In effect you are wasting time acting defensive instead of giving someone a reason to care what Edwin Gnichtel thinks about Dr. Tanenbaum’s experience.
You can attempt to obfuscate the situation all you want, but the truth is that you aren’t willing to make an argument for yourself about this on technical merits based upon your own expertise, but you’re perfectly willing to attack others using the names of others. That is cowardly and intellectually dishonest.
You can attempt to obfuscate the situation all you want, but the truth is that you aren’t willing to make an argument for yourself about this on technical merits based upon your own expertise, but you’re perfectly willing to attack others using the names of others. That is cowardly and intellectually dishonest.
Had you simply said that my “opinion was worthless, because it was contentless and flat out ignorant”, I would have been more than willing to engage in a technical discussion. Instead, you chose to act like an insecure and pompous academic troll by attempting to “school me”.
This has nothing to do with my using the names of well-known kernel architects and it has nothing to do with my having to prove a damn thing. This has everything to do with the fact that you were incensed that a member of “The Unwashed Horde” didn’t hold academic theory and knowledge to be quite the equivalent of practical experience. Your own insecurity regarding your status as an (admitted) academic prompted you to lash out at me under the disingenuous guise of defending Tanenbaum.
Yes, such people aren’t generally big fans of any site like this or Slashdot, where the unwashed hordes are experts in subjects on every specialty (including those outside of computing) because they’re big-time IT hotshots.
Sheesh, who is this “Get a Life”…???
At least they’re contributing to the technology debate. All you’ve done is bang on about the style of the posting and berate people because they’ve mentioned someone’s name but don’t know them personally.
Perhaps you should read some of the articles linked to from this site and add something valid instead of wasting everyone’s time with your lame postings. It’s ironic that your nick describes exactly what you need to get… 🙂
That’s kind of funny, Thom, considering OSnews.com is a mini-Slashdot. I mean you’ve got pretty much the same articles they do and a similar modding system. How much difference is there really except the number of posters?
Don’t talk crap. Slashdot reports on a lot more stuff than we do, they have an unusable commenting engine (I don’t want to read a damn manual just to read comments), and far too high a density of FSF zealots/anti-MS zealots.
Unstable commenting engine? You mean as opposed to “Oops, our database has gone down” (again). Be real. There are plenty of fanatics here too, plus you have to keep in mind that Slashdot is part of the Open Source Technology Group. Of course they’re biased towards open source. If all you wanted was rosy pictures of MS, you could just go to their forums.
A bit off-topic, and I have to say that I don’t read Slashdot because of most of the reasons you stated in your comment, but: Funny // Insightful // Informative mod points are just Great!
Wish I had them here…
I noticed Andy lists OSX in his list of u-kernels, but later correctly points out that although Mach was a microkernel, since the BSD subsystem now runs in kernel space this is no longer the case.
The Ars Technica article he links to makes an interesting point as well: there is really no need to take a performance hit by placing the BSD subsystem in user space, since “if BSD goes down, Mac OS X is basically hosed anyway”.
If the whole system crashes what benefit is there to saying “Oh look, the Mach kernel is still fine”?
Sure, buggy drivers can be a problem, but in the grand scheme of things drivers, network I/O, etc. are not unreliable enough to justify the cost of constant message passing just to place them in user space.
I won’t bet my lunch on it, but moving these subsystems to kernel space may even prove to be more reliable as your kernel subsystem is also in protected memory and not sharing user space with unreliable applications (which are to blame for most problems I assume).
Mac OS X is not a microkernel. The fact that Tanenbaum includes it in his “microkernel” list is laughable
http://www.usenix.org/publications/library/proceedings/bsdcon02/ful…
“xnu is not a traditional microkernel as its Mach heritage might imply. Over the years various people have tried methods of speeding up microkernels, including collocation (MkLinux), and optimized messaging mechanisms (L4)[microperf]. Since Mac OS X was not intended to work as a multi-server, and a crash of a BSD server was equivalent to a system crash from a user perspective the advantages of protecting Mach from BSD were negligible. Rather than simple collocation, message passing was short circuited by having BSD directly call Mach functions. While the abstractions are maintained within the kernel at source level, the kernel is in fact monolithic“
If you had continued to read for about three more paragraphs then you would have seen him qualify his statement.
Mac OS X is sort of microkernelish. Inside, it consists of Berkeley UNIX riding on top of a modified version of the Mach microkernel. Since all of it runs in kernel mode (to get that little extra bit of performance) it is not a true microkernel, but Carnegie Mellon University had Berkeley UNIX running on Mach in user space years ago, so it probably could be done again, albeit with a small amount of performance loss, as with L4Linux. Work is underway to port the Apple BSD code (Darwin) to L4 to make it a true microkernel system.
Please RTFA before claiming the author is ‘laughable’
I DID read it.
It’s still laughable – Mac OS X runs the filesystem, the TCP/IP stack and all the drivers in kernel mode. IOW: it does precisely THE CONTRARY of what a microkernel is supposed to do.
Ironically Tanenbaum doesn’t mention NT as a real microkernel and says instead that it’s more monolithic, despite the fact that NT was written from scratch and based on Mach; while Mac OS X took a lot of code from a monolithic kernel – BSD, something that NT didn’t do (IOW, NT is more microkernel-ish than Mac OS X, it doesn’t have monolithic blobs in it). Certainly it’s laughable to include Mac OS X and not mention NT in the same category… maybe Tanenbaum doesn’t like to suppose that there’s any relation between microkernels and a kernel with a bad reputation like NT? Dunno, but it’s not what I’d expect from serious research.
“Ironically Tanenbaum doesn’t mention NT as a real microkernel and says instead that it’s more monolithic, despite the fact that NT was written from scratch and based on Mach; while Mac OS X took a lot of code from a monolithic kernel – BSD, something that NT didn’t do (IOW, NT is more microkernel-ish than Mac OS X, it doesn’t have monolithic blobs in it)”
The NT family is now a hybrid kernel (Win XP and Win 2003) that is neither a microkernel nor a monolithic kernel.
I don’t know why, but technically-wise, Tanenbaum is much more interesting, brilliant and talented than you are… 😉
Work is underway to port the Apple BSD code (Darwin) to L4 to make it a true microkernel system.
That makes no sense to me. If Darwin is ported to L4 as is, then it will be a hybrid kernel based on L4, just as right now it is a hybrid based on Mach.
The only way to make Darwin a true microkernel-based system is to separate out the bits of BSD from one another, which can (presumably) be done on either Mach or L4.
“Mac OS X is not a microkernel. The fact that Tanenbaum includes it in his “microkernel” list is laughable”
The fact that you pretend Tanenbaum wrote that Mac OS X is a microkernel IS laughable… Read the article, then talk.
I find it odd that he mentions Singularity as being a microkernel OS.
It does not rely on any hardware protection (it uses managed code to isolate the processes), does not have kernel or user space modes, and does not rely on messaging to pass data between the kernel and the processes. Actually, one of the benefits of Singularity is that you can freely pass objects (together with the ownership!) between processes and the kernel, which is exactly the opposite of microkernels.
Well, for me Singularity is a microkernel: it uses small independent parts which are protected from each other by code management instead of hardware protection, but there is still protection.
>It does not rely on any hardware protection (it uses managed code to isolate the processes), does not have kernel or user space modes
True for current version, but they talk about the possibility of using hardware protection to allow the use of legacy applications.
> does not rely on messaging to pass data between the kernel and the processes
In their documentation they describe their communication channels as ‘message passing’, so I don’t know why you’re saying this…
Singularity has a small kernel that interfaces with the hardware. This is written in C/asm and is assumed to be correct. The fact that protection is provided by software doesn’t remove its microkernel nature.
Passing references instead of copying is still message passing; the semantics are just different. In many cases it is smarter to pass references, but doing so with hardware protection can be a huge overhead. (A toy sketch of the difference follows below.)
The ideas of Singularity are actually very old. Take a look at http://rmox.net. It’s basically the same except that it uses occam instead of C#.
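Here is a toy C sketch of that difference – copying the message versus handing over ownership of the buffer. Singularity enforces the handoff with its exchange heap and compiler checks; here the “enforcement” is just the convention of nulling the sender’s pointer, so treat it purely as an illustration.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A one-slot toy channel. */
    struct channel { char *slot; };

    /* Classic copying message passing: the receiver gets its own copy, both
     * sides may keep using their buffers, but the data is copied every time. */
    static void send_copy(struct channel *ch, const char *msg)
    {
        ch->slot = malloc(strlen(msg) + 1);
        strcpy(ch->slot, msg);
    }

    /* Ownership-transfer message passing: only the pointer moves. After the
     * send, the sender's reference is gone, so exactly one side owns the
     * block at any time (Singularity has the compiler enforce this; here it
     * is just a convention).                                                 */
    static void send_move(struct channel *ch, char **msg)
    {
        ch->slot = *msg;
        *msg = NULL;                 /* sender gives up its reference         */
    }

    int main(void)
    {
        struct channel ch = { NULL };

        char *big = malloc(64);
        strcpy(big, "large buffer");

        send_move(&ch, &big);        /* no copy, just a handoff               */
        printf("receiver now owns: %s\n", ch.slot);
        printf("sender's pointer after send: %p\n", (void *)big); /* NULL     */
        free(ch.slot);               /* the receiver, as the owner, frees it  */

        send_copy(&ch, "small note");                   /* copying variant    */
        printf("receiver's copy: %s\n", ch.slot);
        free(ch.slot);
        return 0;
    }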
The never-ending story
I agree with Tanenbaum here that discussions or debates about these things should only revolve around technical aspects.
I admit that my knowledge of how kernels function is severely limited (I’m much more interested in how computers can process human language) but I found this article to be pretty insightful. If I wasn’t neck deep in my final year project in college, I might have downloaded MINIX and tried it out.
It’s pretty impressive that Tanenbaum and 3 other people could design and build an entire operating system!
Virtually all of these postings have come from people who don’t have a clue what a microkernel is or what one can do. I think it would raise the level of discussion if people making such postings would first try a microkernel-based operating system […]. Has a lot more credibility.”
I would try a microkernel based OS if there was one that met my needs. That’s the problem and that’s where a lot of the arguments come from. If microkernels are so great how come there isn’t a viable implementation for us to use? There are research projects, and RTOSs, and half baked solutions, but not anything usable on a daily basis. Until there is something for all of us to play around with I’m willing to agree with Linus that microkernels are too clumsy and difficult to program for a desktop operating system.
I was a huge fan of the concept of a microkernel based OS until I found out that there really isn’t one in a usable form yet and the only one I know of that is even attempting to take that role is HURD and most of us have lost faith that it will EVER be completed. Give me a microkernel to drop in place of my Linux kernel and I’ll try it, but I doubt I’ll see a big enough positive difference to switch. Until then my opinion on microkernels will be the same because there isn’t any hard proof to the contrary.
In this case a microkernel OS that “meets your needs” is one that gives you the experience and knowledge to comment on this discussion, instead of discounting projects in total ignorance of their design, performance, and advantages.
OOP spent 20 years as a “research project” before modern language developers were finally converted and created the likes of Java and C#. It didn’t suddenly become a good idea when Java was launched: it was always a good idea, but huge inertia in the industry meant people were unwilling for a very long time to look into it.
The same is true of functional programming: it’s been around since the 60s, but it’s only with Ruby, Python and the upcoming C# 3.0 that many of its concepts are hitting the mainstream.
Microkernels today are in the same place OOP and functional programming used to be. It’s a decent idea that’s chipping away at the status quo piece by piece. Indeed, it’s already made significant progress. Had you read the article, you would see that where there are serious reliability concerns, microkernels are hugely successful.
The military is using microkernels in a big way. If you recall, the Internet started out as something the military looked into as well, before its time came and someone came up with a product (the web and Netscape) that made it come alive for the broader public.
In this case a microkernel OS that “meets your needs” is one that gives you the experience and knowledge to comment on this discussion, instead of discounting projects in total ignorance of their design, performance, and advantages.
Name one usable microkernel for general usage then. Just one.
OOP spent 20 years as a “research project” before modern language developers were finally converted and created the likes of Java and C#. It didn’t suddenly become a good idea when Java was launched: it was always a good idea, but huge inertia in the industry meant people were unwilling for a very long time to look into it.
While that is true it means nothing when it comes to microkernels. Also languages like Java were much too slow until recently. The same could be true for microkernels. They may not be feasible today, but perhaps there is a future for them. I didn’t discount this idea entirely. What I said was that if they are so great now then give me one I can use. I know that microkernels are used today but not for general use. If people want to argue about general use operating systems using microkernels then they have to have a usable microkernel to make their argument. So far we don’t have one. General use computers like desktops are a lot more complex than the domain of current microkernels. There is too much variation in hardware and usage patterns.
Microkernels today are in the same place OOP and functional programming used to be. It’s a decent idea that’s chipping away at the status quo piece by piece. Indeed, it’s already made significant progress. Had you read the article, you would see that where there are serious reliability concerns, microkernels are hugely successful.
Again, this has no real relevance to my point. OOP isn’t the same thing as microkernels and we cannot assume that the technology will travel the same path. We also cannot assume that just because the technology works in some scenarios that it will be useful in all scenarios. There has yet to be a useful microkernel based operating system for general use and until there is one THAT HAS ACTUAL BENEFITS any point made about them is moot.
We can argue all we want about the superiority of microkernels compared to Linux but if there isn’t a viable microkernel to replace Linux then there really isn’t anything superior to it, is there? It’s fine and dandy to talk about these “wonderful” research operating systems but it means squat in the real world.
In this case a microkernel OS that “meets your needs” is one that gives you the experience and knowledge to comment on this discussion, instead of discounting projects in total ignorance of their design, performance, and advantages.
Name one usable microkernel for general usage then. Just one.
With respect, read the article and my comment again. The point was that before one comments on whether or not a micro-kernel is a good idea, one should try one for oneself. This is not about “general usage”, this is about whether a particular design strategy works for one component of an overall operating system. Whether or not you can run bittorrent and watch the latest episode of the O.C. is entirely irrelevant to the subject at hand.
However, if you are desperate for examples, QNX does have a desktop system for developers which used to have a lot of free software available for it at one point. Minix supports X11 now so you could do development using twm and some basic editors. Further L4/Linux supports the full Linux software stack. That’s three examples taken right out of the article.
The point I was making with my OOP argument is that the absence of a substantially popular implementation – while possibly an indicator of some fault – is not proof of a fault. OOP was long derided due to perceived performance limitations: however once software began to approach a certain level of complexity, people realised what the researchers had been saying all along: that it was a markedly improved way of developing software. And of course, once the initial leap was made, a lot of work went into optimising OOP languages, so now the performance hit, while it still exists, is negligible.[1]
Likewise, most people deride micro-kernels for the same performance related reasons. However if your focus is on reliability and security, then they are clearly the superior alternative. If the general public moves in that direction, and I think they will, then microkernels, or some design based on microkernels, will succeed.
Of course the other point is that lots of research projects, and successful commercial implementations (most notable, but not exclusively, QNX) have already mitigated the performance-hit enormously, so that performance isn’t nearly as large an issue now as it was before.
—————–
[1] The performance of contemporary Java implementations is perfectly okay. GCJ can compile to native code. C++ is an OOP language with no performance issues. C# likewise does pretty well. Python and Ruby are not noticeably slower than Perl. Don’t make the mistake of conflating poor implementations with poor theory.
With respect, read the article and my comment again. The point was that before one comments on whether or not a micro-kernel is a good idea, one should try one for oneself. This is not about “general usage”, this is about whether a particular design strategy works for one component of an overall operating system. Whether or not you can run bittorrent and watch the latest episode of the O.C. is entirely irrelevant to the subject at hand.
It is entirely relevant. I don’t doubt that microkernels are useful in some scenarios and I have stated that previously. What’s bothering me is all the chatter on OSNews about microkernels and how superior they are when there isn’t even a general-usage microkernel in existence. I guess my point is that they are not superior outright. It all depends on the situation. For general use they seem to suck badly, as we don’t have one good implementation.
However, if you are desperate for examples, QNX does have a desktop system for developers which used to have a lot of free software available for it at one point. Minix supports X11 now so you could do development using twm and some basic editors. Further L4/Linux supports the full Linux software stack. That’s three examples taken right out of the article.
You’re not listening to me. QNX isn’t a general usage kernel. QNX for x86 is for development and doesn’t do all that much. Trust me, I’ve tried it. Running a couple of text editors on Minix isn’t very useful and L4/Linux isn’t beneficial in any way unless you like higher latencies. Those 3 operating systems don’t really convince me that microkernels are superior in any way.
The point I was making with my OOP argument is that the absence of a substantially popular implementation – while possibly an indicator of some fault – is not proof of a fault. OOP was long derided due to perceived performance limitations: however once software began to approach a certain level of complexity, people realised what the researchers had been saying all along: that it was a markedly improved way of developing software. And of course, once the initial leap was made, a lot of work went into optimising OOP languages, so now the performance hit, while it still exists, is negligible.[1]
Likewise, most people deride micro-kernels for the same performance related reasons. However if your focus is on reliability and security, then they are clearly the superior alternative. If the general public moves in that direction, and I think they will, then microkernels, or some design based on microkernels, will succeed.
I didn’t say that the lack of a useful microkernel meant that it would never catch on. I am just basically saying if they are so great then shut up, code one that works, and show me the benefits. So far it’s all talk about how crappy Linux is compared to microkernels, and how all our kernels should be microkernels, and that Linus is stupid for using a monolithic kernel. It’s making me sick.
You’re not listening to me. QNX isn’t a general usage kernel. QNX for x86 is for development and doesn’t do all that much.
I beg to differ. Read my article on QNX, and see how it can do so much more than just serve as a dev. platform.
http://www.osnews.com/story.php?news_id=8911
Saying QNX ‘does very little’ is very, very, very shortsighted, and maybe even offensive to someone (read: me) who used QNX as primary desktop for months.
Saying QNX ‘does very little’ is very, very, very shortsighted, and maybe even offensive to someone (read: me) who used QNX as primary desktop for months.
You’re being a bit over-sensitive if you are offended by an opinion about an operating system.
No-one said Linux was crappy; indeed a couple of years ago Andrew Tanenbaum went to great efforts to clear Linus’s name after the author of a rather sensationalist book tried to smear him by saying Linux was a simple, uninspired copy & paste job of Minix, and that Linus wasn’t particularly talented.
The argument Tanenbaum and others are making is that as stability and reliability become more and more important, and as the functions an OS is expected to provide become more and more complex, it becomes easier to fulfill these requirements if you use a microkernel-based design.
You’ve missed my point twice in a row now: the absence of a good or popular micro-kernel implementation does not necessarily mean that the idea is flawed, it could equally mean that no-one has yet got around to creating a good and/or popular implementation. I lean towards the idea that no-one has created a popular implementation: QNX is a very good system, and if they opened it up a software ecosystem would grow around it rapidly using the standard Unix software stack (GNU Utils, X11, GTK/Qt, Gnome/KDE). However they have no desire to do so.
In some way your argument mirrors the old creationist’s “eye” argument. A creationist would argue that the eye is so complex that its very existence proves that an intelligent designer exists. However what it really proves is they know very little about eyes: for someone who knows how eyes work – and knows about the variation in how eyes work among different species – it’s very easy to see how they could have naturally evolved.
Likewise, you say that because you personally have never seen a desktop based around a micro-kernel, that they are not suited for the desktop. But the fact that you have not seen one does not necessarily mean that they’re not suited. In fact a substantial number of the OS kernels written in modern times are microkernels, especially where reliability is a hard constraint. There are lots of successful microkernels out there, you just don’t know about them.
You seem to be constantly upping the ante on what a kernel is expected to do. What you’re looking for is the user-space software stack (display, desktop environment, full application suite etc.). This is totally irrelevant to a kernel: the only consideration for a kernel is how good it is at hardware access, file access, network access, memory management and scheduling; in short, the things that enable the desktop stack. QNX used to do quite a lot, I used to use it myself (it even had RealPlayer for a while). But the desktop was of no interest to QNX, so they dropped it. The kernel was still able to enable a desktop though, so clearly it is a valid example.
I am just basically saying if they are so great then shut up, code one that works, and show me the benefits. So far it’s all talk about how crappy Linux is compared to microkernels
As the article shows, lots of people have coded successful microkernels, and Andrew Tanenbaum and his team are in the process of creating another (and for a four-man team, they’re doing great work). You should at least give them the chance to talk up what they’re doing: how else are they going to encourage volunteers to come along? Linux is simply brought up as it’s the best known example of a monolithic kernel, there’s nothing personal about it at all (at least not between Tanenbaum and Torvalds).
It’s making me sick.
It shouldn’t; most people (certainly not the two personalities at the centre) don’t take this personally.
These kernels sometimes do evolve into mainstream operating systems.
The point is if you are building a small research OS and you want to make it stable, you separate the kernel from the drivers etc.
If your project grows to the scale of ~10 million lines of code, it no longer makes sense to make the protected mode/user mode separation between the kernel and drivers.
Both OSX and NT kernels began as a microkernel implementation and the subsystem/drivers were _later_ moved into kernel space.
Why do you suppose that is?
The microkernel, or should I say “capability-based microkernel”, approach is not just about bringing everything into userspace so that it will not bring the system down when something breaks. Rather, it’s about making well-defined interfaces between the microkernel and all other OS components (drivers/services), so that it becomes a lot harder to compromise the whole system by exploiting a vulnerability in one of its components. It’s about making error and/or security exception propagation harder and thus the whole system safer.
Just my $0.02.
I disagree with your point about the millions of lines of code: the reason why Linux is so big is the total number of drivers created, but for one specific PC, the size of the kernel used isn’t so big as the kernel only loads the necessary modules.
“I disagree with your point about the millions of lines of code: the reason why Linux is so big is the total number of drivers created, but for one specific PC, the size of the kernel used isn’t so big as the kernel only loads the necessary modules.”
You are correct, but when I said “~10 million lines of code” I meant an entire OS (like desktop), not just the kernel. I also made no mention of Linux anywhere in the post.
I still don’t see why a big code size makes the difference between user and kernel space irrelevant, as you claim…
I still don’t see why a big code size makes the difference between user and kernel space irrelevant, as you claim…
I didn’t say user and kernel space were irrelevant. What I am saying is that when you are talking about a full, complete OS (XP, OSX, a Linux distro etc.) it does not make sense to have things like disk I/O, memory management, drivers etc. placed in user space separate from the kernel.
Most of the many reasons for this have already been covered, but I made a post on the subject here: http://osnews.com/permalink.php?news_id=14612&comment_id=124754
Both OSX and NT kernels began as a microkernel implementation and the subsystem/drivers were _later_ moved into kernel space.
Why do you suppose that is?
NT was at one time closer to a microkernel than it is today but it was slow as hell so they had to move more stuff into the kernel. OSX was NEVER a microkernel. OSX is based on NeXTSTEP which is a hybrid.
“OSX was NEVER a microkernel. OSX is based on NeXTSTEP which is a hybrid.”
Uh, NeXTSTEP (the operating system) used the Mach kernel. Wiki it.
Uh, NeXTSTEP (the operating system) used the Mach kernel. Wiki it.
I don’t need to Wiki it. I know the history. NeXTSTEP used the MACH kernel and so does OSX but that doesn’t mean anything because MACH isn’t the only thing in kernelspace in either of those operating systems. Like earlier versions of MACH the BSD subsystem is in kernelspace which completely destroys the microkernel concept.
“I don’t need to Wiki it. I know the history. NeXTSTEP used the MACH kernel and so does OSX but that doesn’t mean anything because MACH isn’t the only thing in kernelspace in either of those operating systems. Like earlier versions of MACH the BSD subsystem is in kernelspace which completely destroys the microkernel concept.”
Sigh, the quote you argued with was this:
“Both OSX and NT kernels began as a microkernel implementation and the subsystem/drivers were _later_ moved into kernel space. ”
Mach was a microkernel, but in XNU (which later became the core of OSX) the BSD layer was moved into kernel space.
So the OSX core /did/ begin as a microkernel project and it /was/ later modified into a hybrid or mono (depending on your def).
Get it now?
So the OSX core /did/ begin as a microkernel project and it /was/ later modified into a hybrid or mono (depending on your def).
OSX was never a microkernel. That’s what I said and I was right. Just because you want to change your argument now doesn’t make you right. We all know MACH is a microkernel and we all know OSX uses MACH. The fact is OSX never used pure MACH, hence they never had a microkernel based OS.
Maybe I’m just not understanding your point of view because as I see it your comment only supports what I said earlier, that there is no general usage microkernel that is very useful and there never was. It seems that in order for MACH and the original NT kernel to be useful the developers had to move most things to kernelspace. This is a clear indication that the benefits outweighed the drawbacks.
It seems that in order for MACH and the original NT kernel to be useful the developers had to move most things to kernelspace. This is a clear indication that the benefits outweighed the drawbacks.
Reread my original comment and you will see this was exactly my point.
Reread my original comment and you will see this was exactly my point.
Ok so you are agreeing with me then. There are no useful microkernels used for general purpose operating systems. What exactly was your point then?
I would try a microkernel based OS if there was one that met my needs. That’s the problem and that’s where a lot of the arguments come from. If microkernels are so great how come there isn’t a viable implementation for us to use? There are research projects, and RTOSs, and half baked solutions, but not anything usable on a daily basis. Until there is something for all of us to play around with I’m willing to agree with Linus that microkernels are too clumsy and difficult to program for a desktop operating system.
QNX is also a full-blown desktop environment. I haven’t tried it, but it seems to support graphics hardware acceleration, mp3 and other multimedia playback, a complete graphical environment, …
Perhaps not what you are looking for, it is intended for people developing for QNX and not as a competitor for Windows desktops.
It isn’t a drop in replacement for Linux either because it has its own graphics system, so I don’t expect all X Windows System applications to work. The OS is however POSIX compliant and has a full graphical shell…
QNX, in its desktop version, is a fully POSIX-compliant OS, with its own windowing engine but also with a rootless XWindows server. Porting linux apps to QNX is really just a matter of recompilation, unless those apps depend on linux-specific features.
If microkernels are so great how come there isn’t a viable implementation for us to use?
There are viable implementations for use, it’s just that the uses they are designed for are not your uses.
The simple thing is — most people don’t need the kind of reliability most microkernels are designed for. Stuff like EROS or QNX is designed for situations where downtime is unacceptable and a security leak could be a national crisis. Most people don’t need that level of robustness for their user desktop, and thus gravitate towards less robust and less secure systems that are nonetheless more polished for desktop use.
Exactly, but it would be nice to dual-boot just in case.
A PDA OS doesn’t need it.
I’m also still expecting the MicroUnix OS for my Intel 386 computer (1987), and I’m not interested in academic quarrels between digital programmers.
Both have failed to fulfill such a promise: Linux has grown bigger and bigger, and Minix has stagnated without a suitable block terminal (X Window system) able to run on 2MB of RAM.
Computers now have, as before, ¡many possibilities! – which, thanks to those quarrels, will indeed remain ¡possibilities!
Some components, such as the reincarnation server itself, the file server, and the process server are critical, and losing them crashes the system, but there is absolutely no reason to allow a faulty audio, printer, or scanner driver to wipe out the system.
The interesting thing here is that Linux can survive a crash of a printer driver, a video driver and a scanner driver as all three are in user-space (CUPS, X.org and SANE). It can also recover from audio driver errors assuming the audio driver deals gracefully with the error itself: i.e. the root cause of the error is hardware.
What’s more, the FUSE project could ultimately see the VFS implemented in user-space, which would be one more driver system out of the kernel.
The big problem, as ever, is performance: it’s unlikely people will move audio drivers out of the kernel as most audiophiles are still clamouring for better performance as things stand, particularly real-time performance. Likewise, putting the network stack in user-space without incurring a performance hit would be a big undertaking.
Note that it’s not impossible to do this: QNX is a realtime microkernel system, so the performance can be achieved; it’s just a lot of intricate work.
All in all, it’s interesting the way things are moving. Personally I think bunging everything bar the VM and scheduler in user-space would be a neat way of doing things, but I have very little idea of OS development, and so I’m not particularly well qualified to comment. (A toy sketch of the restart-a-crashed-driver idea follows below.)
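Here is a toy user-space sketch (plain POSIX, nothing to do with the actual MINIX 3 reincarnation server) of the core trick being discussed: run the driver as an ordinary process, wait on it, and simply start a fresh copy when it dies instead of letting it take the whole system down. A real implementation would also have to restore device state, re-register with clients and rate-limit restarts.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Stand-in for a user-space driver: it "works" briefly and then crashes,
     * so the supervisor has something to restart.                            */
    static void run_fake_driver(void)
    {
        printf("driver %d: handling requests...\n", (int)getpid());
        sleep(1);
        abort();                     /* simulate a driver bug                 */
    }

    int main(void)
    {
        /* A toy "reincarnation server": keep the driver alive by restarting
         * it whenever it exits abnormally.                                   */
        for (int restarts = 0; restarts < 3; restarts++) {
            pid_t pid = fork();
            if (pid == 0) {
                run_fake_driver();   /* child: the "driver"                   */
                _exit(0);
            }
            int status;
            waitpid(pid, &status, 0);
            if (WIFSIGNALED(status) ||
                (WIFEXITED(status) && WEXITSTATUS(status) != 0))
                printf("driver %d died, reincarnating it\n", (int)pid);
            else
                break;               /* clean exit: nothing to do             */
        }
        return 0;
    }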
The interesting thing here is that Linux can survive a crash of a printer driver, a video driver and a scanner driver as all three are in user-space (CUPS, X.org and SANE). It can also recover from audio driver errors assuming the audio driver deals gracefully with the error itself: i.e. the root cause of the error is hardware.
Actually, I’m pretty sure most X.org drivers either are kernel drivers or have a kernel driver counterpart. This can be pretty obvious when, whilst running a game, your entire machine just stalls. Hmm, I wonder whose fault that was?* *glares at nvidia*
* This happens about once every couple days up to once every hour.
Only the two proprietary ones do, ATI and NVidia. The normal “nv” driver works entirely in user-space, using some well-defined interfaces such as XShm and DRI to do the fancy stuff it needs to do.
Incidentally, I’ve been using the proprietary NVidia driver for years and I’ve never had any machine instability as a result: I’ve played Doom III on Debian Sarge and it’s been fine. Are you sure there’s not something else going on?
What’s more, the FUSE project could ultimately see the VFS implemented in user-space, which would be one more driver system out of the kernel.
so it would be much easier for virus developers to inject their code into critical system components!
way to go…
You call it “drifting”, but I call it being compatible.
A monolithic design does not stop you from adding further modularity, whereas a micro design does stop you from merging things back in, even if you have good reason.
The “drifting” trend does not show the advantage of the micro-based design. Instead, it just shows the inherent advantage the so-called monolithic design has had since day 1.
It is totally unnecessary to make the whole kernel micro-based. Just make a micro-driver whenever you want.
Well, there you go; before making any more unqualified suggestions let’s go and download and give Minix 3 a try… (really try to knock it over!)
http://www.minix3.org/index.html
“Please be aware that MINIX 3 is not your grandfather’s MINIX. MINIX 1 was written as an educational tool; it is still widely used in universities as such. Al Woodhull and I even wrote a textbook about it. MINIX 3 is that plus a start at building a highly reliable, self-healing, bloat-free operating system, possibly useful for projects like the $100 laptop to help kids in the third world and maybe embedded systems. MINIX 1 and MINIX 3 are related in the same way as Windows 3.1 and Windows XP are: same first name. Thus even if you used MINIX 1 when you were in college, try MINIX 3; you’ll be surprised. It is a minimal but functional UNIX system with X, bash, pdksh, zsh, cc, gcc, perl, python, awk, emacs, vi, pine, ssh, ftp, the GNU tools and over 400 other programs. It is all based on a tiny microkernel and is available right now.“
Also, with the various BSDs and Minix 3 released under the same licence, this could be followed by a greater amount of interest in combining the best of both.
I wonder if that’s enough to topple Linux’s position?
I wonder if that’s enough to topple Linux’s position?
Probably not. The name Linux is something many people have heard about, and it has a lot of bandwagon behind it.
Besides Minix might be a wonderfully functioning system, but it just doesn’t have the driver support to compete with Linux in a serious way.
Nowadays Linux has many many drivers and still people complain that they can’t use their shiny toys! Minix would have to catch up a lot to be in a similar state.
“Academic fights are vicious because there is so little to lose” is particularly apt regarding this argument.
Since there is, clearly, not a single aim when it comes to writing an operating system, nor a single niche to fill, nor a single task to accomplish, nor a single user to satisfy, nor a single kind of developer to interest in order to get involved in the project…
I could state that between Tanenbaum and Torvalds the one who is right is…
both of the two!
Yes, the Linux kernel is more widely used than microkernel systems when we are speaking of servers (and more used in desktop computing, if we don’t take NT* into account), mainly for historical reasons (in the early 90s the BSDs were in legal trouble, and Tanenbaum was interested in keeping Minix an educational project rather than adapting it to mainstream usage) AND for the fact that a monolithic kernel is easier to understand (and so it’s easier to find developers). However, there are niche markets that attract less attention from the average Slashdot poster but are not for that reason less important.
As pointed out, in many of those niches (one for all: hard real-time controllers, such as in dangerous devices) microkernel systems, often written or heavily customized ad hoc, are more widely used than Linux and other monolithic kernel systems, because that kind of approach is better, or more properly more logical, for developing those kinds of solutions.
So, we should not put the technical micro/monolithic kernel debate (both have downsides and upsides) in the field of popularity, since both parties can claim victories in different niches. Moreover, as Windows teaches us, it is not always the better one from a purely technical point of view that wins; often it is a matter of creating interest or debate, attracting developers, or having the luck to find powerful investors that believe in the project.
My position is:
when performance is critical, more things will be embedded in the kernel (as in monolithic and some hybrid kernels, originally microkernels adapted for the mainstream desktop/server market);
when the cleanness of the project is more important – for deeply modifying it or debugging it, for giving provable stability and security to special-purpose devices, or for arbitrarily and provably scaling down the complexity and the points of failure of the system – then come the microkernels;
when it is important to attract more heterogeneous developers, a monolithic kernel is more readily understandable for newcomers and may create a critical mass of active community quickly, while in the long run a microkernel is more maintainable… and the monolithic kernel may borrow some of the microkernel concepts, even if implemented in a quite different way, to bring in many of the benefits of that approach, as happened with Linux kernel modules.
There is plenty room for different niches and I suppose that monolithic kernels and microkernels will live happily together for long time!
Right, I'm with Tannenbaum on this one. The reason for this is that I know Tannenbaum means pine tree and I don't know if Torvald means anything.
As I am not qualified to really have an informed opinion on this debate, this will be my basis for choosing sides.
(If anyone can tell me what Torvald means then maybe I will revise this position, thanks.)
Well, it seems that Torvald comes from Old Norse and means "Thor's ruler". Given that Thor is a god, I guess Linus must be more than a god. Does that change your position?
That's a pretty good article.
Makes me want to give Minix 3 a try. I did a class on Operating Systems at uni last year; it was really interesting but DAMN complex!
Hmmm… the only ukernel OS that I've ever used for any extended period (that I know for sure was one) is OS X.
Context calls are a little slower, as the ukernel layer does just that: it adds another layer to a wide variety of calls.
OS X memory management really and truly sucks. It is inefficient and slow, which makes comparisons difficult as everything goes through the memory manager.
All of this being said, Apple has managed to make user-level operations at least appear to be faster revision over revision, although I suspect that more of this has to do with rewriting or doing other major overhauls of existing application code (e.g. Finder.app) than the base efficiency (or lack thereof) of the underlying kernel.
Of course, all of this is based entirely on observation of OS X release behaviors. I haven't bothered to look at (or check out) Darwin code in, oh, 4 years or so…
Monolithic kernels: prime example Linux (mainly PPC Linux, which isn't quite up to snuff): doesn't seem to be much faster from the GUI perspective, but then again there are a lot of additional layers there as well (KDE/GNOME/etc.). (Although Fluxbox wasn't bad, a mainstream user wouldn't touch it…)
I haven't run Linux-based servers that get any kind of heavyish load in some time, so I can't really comment on apparent performance from that POV ATM either.
Memory management under Linux appears to be more efficient in operation on the same machine.
I'd go into FreeBSD as well, but again I lack servers that bear enough of a load enough of the time to really be able to make comparisons here, and the *BSD servers would be on x86 vs. PPC for the others…
(If GUIs were dragged into this, and they shouldn't be, OS X would be the clear winner even given its horrific memory management, which I think is OS X's biggest problem by FAR…)
Technically, I think that Tanenbaum is correct. OS X still DOES have a ukernel; it just layers another kernel on top of that layer, which is what APPEARS to handle most system operations…
For Tannenbaum to include Coyotos in his list of microkernel examples is laughable. Coyotos doesn't work – heck, it doesn't even build in any meaningful way. For any reasonable purposes of comparison it simply doesn't exist.
Giving the uK approach credit for inspiring interesting theoretical software that doesn’t exist (Coyotos), or software that violates key uK principles (Windows, OSX), illustrates Linus’ point better than Linus does.
For a fast microkernel, it is best to have hardware that can be programmed to pass messages quickly.
Ask your usual Cray distributor for more information:
http://www.cray.com/products/xt3/
Edited 2006-05-15 14:30
This thread, and the laughable comments in it, really proves that Tanenbaum is right: too many people talk about it without knowing anything about the subject.
Personally, Tanenbaum holds my respect for his knowledge of operating systems and the learning material he has supplied the world with, while Linus really hasn't done anything that commands any kind of respect in this regard.
/stone
Well, I was bored, so I found NetBSD 1.1, and Lites, and got it booting!
OK, so it's a microkernel, but it feels like NetBSD. Honestly, once you have seen one you have seen them all. On the other hand, it is sort of cool to have set up. Now, Lites is from 1996; I wonder whether anyone would even be remotely interested in updating Lites/Mach4 since it's all 'cool microkernel'. Part of me says this is a fad and that nobody will largely care.
ftp://flux.cs.utah.edu/flux/mach/
http://ftp.riken.go.jp/pub/NetBSD/NetBSD-archive/NetBSD-1.1/i386/
For the adventurous. Also, boot it with GRUB; the NetBSD 1.x loaders don't like the Mach kernels.
Now for a little useless trivia: the NetBSD 1.1 mirrors are all but broken. Is it a sad/bad thing to see old BSD getting wiped out?
The argument over whether 'This OS' or 'That OS' is an authentically 'perfect' example of a micro or macro kernel is stupidly trivial. I equate this argument to racial bigotry, where someone says "You're not as white/black as me – so you don't know what being white/black is." If you examine a diagram of what these different programming models actually describe, it is almost always colored blocks with pseudo-modules partitioned in either user or kernel space. Very little in the term 'micro' or 'macro' actually documents the location of the code itself – this is left to the programmer's implementation for THAT OS. NT, OS X, BeOS, and other 'hybrid' kernels have the traits of BOTH types but often market themselves as microkernels. It's not a lie, it's just not the complete picture. Of all the systems I've ever personally touched, the only TRUE microkernel I've experienced is QNX – and it is DAMN reliable, but it has its limitations when used as a desktop system – this is why most microkernels get modified to perform better on/as desktops and servers. Many traffic light systems use QNX, complex medical equipment controllers use QNX, and as an embedded OS there is basically no equal. But when you put it on a desktop/PC, the performance factor changes the focus to the 'user experience' and not 'kernel modularity'. End-users don't care about the internal architecture (and neither do programmers, for the most part); the only factors that matter are reliability, ease of use/programmability, and responsiveness. In these three categories, BOTH kernel models have their ups and downs and NEITHER is a perfect design. The best design is the hybrid model; as Charles Darwin pointed out, 'purebreds are weaker in nature', and it is only fitting that software operates in a similar fashion. Take a little of A, and a little of B – and the result can often be the best of both.
On a completely different point, I am sad to see that development of the VOID/Unununium (non-kernel) system seems to have come to a halt. This was (to me) the most interesting and unique approach to OS development in decades (or ever). Removing the kernel altogether (and proving both camps wrong) would be a great way to end this type of biased debate once and for all. One can still dream…
http://unununium.org/
I wonder how it is that JohnX's (first) post went from +4 to +1, when one shouldn't vote negatively just because one disagrees with someone's opinion? I believe he wouldn't have been voted down if he had questioned the competence of any disputant other than Linus.
I see lots of people here actually agree with his opinion that Linus is not omnipotent and all-knowing, even though Linux is such a success. Does that mean they should all be modded out of OSNews discussions so that the rest of you can enjoy your "my (favourite) OS rulez" discussions and feel good about yourselves?
I couldn’t agree more with JacobMunoz’s previous posts. And if you are so sensitive about Linus being “right” or “wrong”, then you should go and check your head.
Edited 2006-05-15 16:15
“And if you are so sensitive about Linus being “right” or “wrong”, then you should go and check your head. ”
– Or write your own OS.
Thanks.
1992:
Tannenbaum: “Dude, you are such a loooser writing monolithic kernel in 1992. They are like totally out, you know.”
Linus: "You are just jealous because no one wants to hang out with you and your micro-kernel. Ough, ough… and my kernel can beat the sh*t out of your kernel, with its super powers and stuff."
15 years later…
Tannenbaum: “I was sooo right all the time, you know. Your kernel will die eventually. The clock is ticking. Tick-tack! Tick-tack! …”
Linus: “No, I was right! Look where the Linux is now and where’s your stuuupid Minix. I was so clever that I included sh*t-proof feature in its design.”
Tannenbaum: “But you did it all wrong, so it will die from cancer.”
Linus: "No, it won't! You're just mean! I don't care what you or anybody else is saying, my kernel is the best. I'm so smart that I can do anything my way, and it will still be better than anybody else can do."
Tannenbaum: “Well, we’ll see in 15-25 years.”
To be continued…
:)))
I’d love to see these two get into a shot-for-shot drinking contest!
Most people who are new to this debate don’t know that Tannenbaum and Torvalds are actually good friends. It’s less of a ‘fist-fight’, and more of a “Yo mama’s so fat…” contest. Something today’s kids are more familiar with…
We should all be so lucky as to be able to constructively debate without taking everything so personally. (sigh)
You couldn't be more right. Don't get me wrong, I was just mocking this whole subject.
I was about to explain to my girlfriend what I was writing about, when it struck me… this is sooo geeky! :)))
Edited 2006-05-15 17:03
Tanenbaum didn't list any microkernel systems that could be used as servers, desktops or workstations. Additionally, some of his examples aren't true microkernel designs (i.e. Singularity and Mac OS X). The remaining operating systems on his list are all special-purpose operating systems, and some of them are highly experimental research OSes. What did he want to show with this list?
We all know that it's impossible to design an operating system which is perfectly suited for all purposes. Regarding servers, desktops and workstations (which, I guess, is what most OSNews readers are mainly interested in), today's top priority is definitely security. Proactive security can sometimes be in conflict with stability. In times when I'd have to choose between stability and security, I'd prefer security.
In my opinion, lack of security is the biggest problem of current "mainstream" operating systems. Tanenbaum's argument is mostly about stability and, at least for servers, desktops and workstations, this isn't the main problem today. Even Windows XP is sufficiently stable for many requirements, and rebooting a system from time to time also isn't a problem in real life. In network environments, there are also alternative techniques for redundancy and failover available, e.g. CARP or VRRP; the common idea is to achieve redundancy and failover by deploying multiple PCs or clusters which behave like a single entity. The same goals can also be achieved by virtualization techniques. For server/desktop/workstation stability, we simply don't need microkernels, because current monolithic designs are stable enough for most purposes. I'd argue that defective hardware can be a bigger problem in real life.
Tanenbaum didn’t clearly explain why microkernels are a necessity from a security perspective. I understand that the more fine-grained separation of address space in microkernel designs could be a security advantage, but this doesn’t automatically mean that you won’t pay a higher overall price (security wise) because of the increased messaging complexity. Separating (read-only) executable code from writable (but not executable) code has already been done without involving microkernels at all. So are there really any crucial advantages of microkernels from a security perspective, which would justify the pain?
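To make the "messaging complexity" concrete, here is a minimal C sketch of what a single read request looks like when the file system lives in its own server process. Everything in it is made up for illustration (ipc_call, FS_SERVER_ENDPOINT, the message layout); it is not MINIX's or any other kernel's real API, and the IPC primitive is stubbed with a local function call so the sketch compiles and runs on its own.

/* Sketch of a message-passing read(): the caller packs a request into a
 * message, "sends" it to the file-system server, and blocks for a reply.
 * In a real microkernel the ipc_call() below would cross an address-space
 * boundary twice per request; here it is a plain function call so the
 * example is self-contained. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

enum { MSG_READ = 1 };

struct message {
    int     type;     /* which operation the server should perform  */
    int     fd;       /* file descriptor, interpreted by the server */
    size_t  nbytes;   /* how much to read                           */
    char   *buffer;   /* where the reply data should land           */
    ssize_t result;   /* filled in by the server on reply           */
};

/* Stand-in for the FS server's request loop (normally its own process). */
static void fs_server_handle(struct message *m)
{
    if (m->type == MSG_READ) {
        const char *fake = "hello from the FS server\n";
        size_t n = strlen(fake);
        if (n > m->nbytes)
            n = m->nbytes;
        memcpy(m->buffer, fake, n);
        m->result = (ssize_t)n;
    } else {
        m->result = -1;
    }
}

/* Stand-in for the kernel's IPC primitive: send, block, receive reply. */
static int ipc_call(int endpoint, struct message *m)
{
    (void)endpoint;          /* hypothetical server endpoint number */
    fs_server_handle(m);     /* real version: two context switches  */
    return 0;
}

#define FS_SERVER_ENDPOINT 3 /* assumed value, for illustration only */

static ssize_t my_read(int fd, char *buf, size_t nbytes)
{
    struct message m = { .type = MSG_READ, .fd = fd,
                         .nbytes = nbytes, .buffer = buf };
    if (ipc_call(FS_SERVER_ENDPOINT, &m) != 0)
        return -1;
    return m.result;
}

int main(void)
{
    char buf[64];
    ssize_t n = my_read(0, buf, sizeof buf - 1);
    if (n >= 0) {
        buf[n] = '\0';
        printf("read %zd bytes: %s", n, buf);
    }
    return 0;
}

In a monolithic kernel the same read is one trap into kernel code that calls the file system directly; in the message-passing version every call pays for building the message plus the extra scheduling, and that per-call cost is exactly what the isolation benefit has to justify.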
Furthermore, Tanenbaum seems to ignore the common release cycle of software development. (Monolithic) kernel code doesn't get heavily changed all the time: there's always a development phase, followed by a feature freeze, a testing phase and then, finally, a release. This scheme works well in practice. I suppose a top-down designed microkernel which never changes after its first implementation is quite unrealistic.
I will keep it short:
Mac OS X and NT are both hybrid kernels. From a modern perspective, NT (2k3, XP, Longhorn/Vista) is closer to a true microkernel in that application subsystems still exist in user mode (Win32 CSRSS, Interix) and the UMDF does provide a framework for non-interrupt-driven drivers to run in user mode. Even so, neither NT nor OS X's Mach-based implementation is a microkernel (not even close). From a CS standpoint they are modular monolithic kernels.
As for Tanenbaum, I am sorry, but he is your typical academic. He writes good books, but he doesn't have the practical experience that folks like Linus Torvalds, Dave Cutler, Avie Tevanian and Rick Rashid have in trying to build a usable, stable and scalable kernel. Minix doesn't run on millions of computers, doesn't scale to many dozens of CPUs and doesn't have to run real applications. There is a reason why Linux, NT and XNU, even though they started out at different ends of the architecture spectrum, look more alike than different.
-Mak
Like most people with little formal education [1], you praise Dr. Tanenbaum's books while clearly not having read them, as if writing the texts must be noted to hedge your bet; but since you haven't read them, you assume that they're insignificant and betray a lack of any practical knowledge of the subject at hand. If only Andy had any experience with scalable [2], reliable [3] operating systems that run real programs [4].
P.S. Don’t name drop. Avie Tevanian and Dave Cutler don’t know you from Adam. If they care to make comments about this discussion they can all by themselves.
1. The point of this post is to be presumptive and arrogant to mirror your own behavior. Don’t take it personally.
2. http://www.cs.vu.nl/pub/amoeba/
3. http://www.cs.vu.nl/pub/globe/
4. http://www.minix3.org/software/
P.S. Don’t name drop. Avie Tevanian and Dave Cutler don’t know you from Adam. If they care to make comments about this discussion they can all by themselves.
1. The point of this post is to be presumptive and arrogant to mirror your own behavior. Don’t take it personally.
My goodness, don’t you think highly of yourself. How kind of you to point out my “presumptive and arrogant” behavior.
The real point of your post was to be as rude as possible under the premise of “mirroring” my behavior. You could have just told me to go screw myself, but that would have required some intellectual honesty on your part.
-Mak
Don't take your ignorance out on me. My post did exactly what it purported to do. It took your own presumptive, arrogant stick to you. You don't speak for any of the people involved, but you certainly wish to present what you believe to be their experiences or beliefs, and apply them to assault someone else's credentials.
If I wanted to be as rude as possible I could have inserted a few choice expletives to describe someone that speaks out of his rectum about other people. If you care to actually present your own credentials and something resembling an argument for yourself, then be my guest. What you’ve done is attack another’s in a cowardly and amusing way, given the work that they’ve done.
Finally a good discussion on OSnews
Not just trolling – but as said before :
Linux is neither a pure monolithic kernel nor a microkernel.
So it is not a thing about Linux, but purely about theoretical OS kernel design.
It's for the thinking folk.
Post scribbles: and there is no reason at all to get emotional in any way – it is theory.
It's NOT "Person A sucks – Person B rules".
Edited 2006-05-15 17:41
Actually, MINIX 3 and my research generally is **NOT** about microkernels. It is about building highly reliable, self-healing, operating systems. I will consider the job finished when no manufacturer anywhere makes a PC with a reset button. TVs don’t have reset buttons. Stereos don’t have reset buttons.
I know he's just giving a broad goal here, but it's a pretty silly one. Manufacturers don't make laptops with reset buttons…
Unless there’s some reset button on laptops I can’t see on the outside of the case?
Desktops probably have reset buttons today simply so as not to scare users. The motherboards are all made with a plug for it, so they ship all the cases with a button. It hasn't been a necessity for a long time, though! And it has been handled by the BIOS since ATX.
It seems to me that what Dr. Tanenbaum is saying is:
1.) muK is a better design and is much more OO. It’s more reliable, blah blah; I think we all pretty much buy into that (although the degree of extra reliability may be argued).
2.) It’s not slower.
OK, so it's an extra abstraction that gives you safer data via thin interfaces and all; but it's not slower.
Well, it's got to be slower. Even if it's only a constant factor of a thousandth of a percent, it's doing more, so it's got to take more time.
Maybe some hard numbers on how much slower it is would be good? I don't think anyone would cry over 1%, but 10% might cause some annoyance for the moment.
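For anyone who wants a rough number of their own, here is a tiny C sketch that times a cheap kernel round trip (getppid) on whatever Unix-like system it is built on. It only measures raw kernel entry/exit cost, not microkernel IPC as such, so treat it as a proxy; the honest comparison would be running the same I/O-heavy workload on a monolithic and a microkernel system side by side.

/* Rough timing of a cheap system call, as a proxy for per-call kernel
 * overhead.  Build with e.g. "cc -O2 bench.c" (add -lrt on older glibc). */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const long iterations = 1000000L;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iterations; i++)
        (void)getppid();   /* one kernel entry/exit per call; getppid is
                              used because some C libraries cache getpid() */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_ns = (end.tv_sec - start.tv_sec) * 1e9
                      + (end.tv_nsec - start.tv_nsec);
    printf("average per call: %.1f ns\n", elapsed_ns / iterations);
    return 0;
}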
Actually, they *do*. It’s called the ON/OFF switch and it *is* sometimes needed.
More than a few times, I've had high-end phones from well-known companies (Sony, Panasonic, Nokia, Nortel) freeze completely for no apparent reason. The only way to fix the problem is to turn the thing off and on (thus flushing the buffers).
I have had a case, though, where a phone just drove me crazy because it kept restoring the original crash state. It wanted to ensure data integrity by restarting, but that was exactly what I did *not* want. It took a while of digging through the manual, but I eventually discovered the phone contained a little "restore to factory settings" button that could only be pressed with a pin.
Granted, consumer and enterprise phones are normally reliable, but they are not immune to needing a good kick in the transistors sometimes.
“and has nothing to do with science.”
Nor does enumerating micro-kernel success stories. That's marketing.
Ironically, Tanenbaum doesn't mention NT as a real microkernel and says instead that it's more monolithic, despite the fact that NT was written from scratch and based on Mach.
Quote: http://en.wikipedia.org/wiki/Kernel_(computer_science)#Hybrid_kernels_.28aka_modified_microkernels.29
Hybrid kernels are essentially a compromise between the monolithic kernel approach and the microkernel system. This implies running some servers in kernel space in order to cut down on the amount of system calls occurring in a traditional microkernel. Nevertheless, there is still kernel code running in their own memory space.
People are conflating a lot of subjects.
Modularity, encapsulation, polymorphism, object-orientation, microkernels, exokernels, monokernels, fault-tolerance, reliability, and so forth are all being tossed around by different people as if there were exclusive pairings, or as if they were all the same thing.
People need to really decide what they mean when they believe that they’re making distinctions between implementation strategies in kernel architecture.
No matter that monolithic kernels are faster and all that, the microkernel is the future.
Performance: future machines will be fast enough to run microkernels smoothly, and if the programmers are strong they can make them faster by optimizing the code.
Mac OS X is the biggest example, which gives greater performance than Linux. OS X has a lot of 3D graphics in its interface, but it still gives good performance.
Modularity: a modular approach is always considered the right way in software engineering to create software. Microkernels are more modular than monolithic kernels. No matter how much they make a monolithic kernel logically modular, a microkernel will always be more modular than a monolithic one.
Modularity means you can create and maintain the kernel easily. It also becomes much easier to work in teams on different components of the kernel (process, memory, file system, etc.), and easier to find bugs.
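As a purely hypothetical illustration of that point (this is not MINIX's actual interface, just a sketch of the idea), the "contract" between teams in a message-passing design can be as small as a header describing the messages one component accepts:

/* Hypothetical message protocol for a memory-manager server.  The names
 * and fields are invented for illustration; the point is that the whole
 * interface between components is an explicit message format that one
 * team can implement and another can program against. */
#ifndef MM_PROTOCOL_H
#define MM_PROTOCOL_H

#include <stddef.h>

enum mm_request {
    MM_ALLOC   = 1,  /* map nbytes of anonymous memory for the caller */
    MM_FREE    = 2,  /* unmap a previously returned region            */
    MM_GETINFO = 3   /* report free/used totals                       */
};

struct mm_msg {
    enum mm_request type;
    int    caller;   /* endpoint of the requesting process             */
    size_t nbytes;   /* request size for MM_ALLOC                      */
    void  *addr;     /* region address for MM_FREE, or the reply value */
    int    status;   /* 0 on success, negative error code otherwise    */
};

#endif /* MM_PROTOCOL_H */

The file-system or process-management team only needs this header; how the memory-manager team implements the server behind it (and how many bugs it has) stays on their side of the message boundary.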
——————-
The concept of the monolithic kernel will slowly vanish, just like structured programming has no importance today in big application projects.
μμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμμ
Fix that shit dammit
Edited 2006-05-15 19:46
> Modularity: a modular approach is always considered the right way in software engineering to create software. Microkernels are more modular than monolithic kernels. No matter how much they make a monolithic kernel logically modular, a microkernel will always be more modular than a monolithic one.
Bullshit.
Modularity is completely irrelevant to this debate. The difference between a microkernel and a monolithic kernel lies in their different dynamic behavior (heavyweight in the case of a microkernel vs quite lightweight for monolithic kernels), not in their static organization.
… and you have it.
AT refers to clueless Slashdot posts, etc… I couldn't agree with him more. There are too many convenient experts on Slashdot, but they don't know what they are talking about.
Mr. T. repeatedly implores his readers to try Minix, it being the simplest way to have a look at a microkernel in action (and also being his baby). For those who are just way too lazy, here's a torrent of a MINIX 3 QEMU image, courtesy of the free OS zoo:
http://free.oszoo.org/ftp/images/minix3_1_1_x86.tar.torrent
Now you have no excuse. Of course, how useful it is to try out a microkernel in emulation on top of a monolithic kernel is questionable…
I became interested in Minix several weeks ago, when this endless debate was in its early stages. So, I downloaded Minix and installed it in VMware. I played around with it, skimmed through the sources and then left it running for a couple of days. Returning to continue my study, I found this:
http://public.cplusplus.se/images/minix.png
An inet server panic due to an unknown message. The kernel is still running, but the system is somewhat useless since it's impossible to log in. But any other important software (not relying on inet) that I might have been running would have continued to run. This is the reliability which should be the focus of this debate, as stated by Tanenbaum. Choice of implementation is not the key issue.
I’m sure some people will post comments saying that Minix and microkernels in general suck because of this and I don’t care. Please remember that Minix is young. Linux wasn’t perfect back in its early days either. All new operating systems have bugs.
Monolithic kernels are ‘it’ in the world today because of the practicalities, and limits, of modern computers and the functions being performed.
And no, I don’t believe in the hybrid kernel either, because this generally consists of bunging some stuff into userspace. You tend to find, however, that the core is still very much monolithic. If you have a userspace in your OS you can pretty much make a ‘hybrid’ kernel any time you like. It really doesn’t mean anything. You either have a monolithic kernel, or a microkernel one where some pretty well documented methods have been employed to compartmentalise the entire kernel’s functions.
However, on a desktop or server, performance is much more of a priority than reliability and distrust of the code in your own system. The (possibly conservative?) figure of 10% slower that Tanenbaum gave for a microkernel (yes, he admitted it in the last article) is an absolute country mile, and under no circumstances is modern hardware capable of giving people the performance they demand along with some hypothetical benefits of extreme reliability (and paranoia), which 99.99% of people will never feel the benefit of.
Systems like QNX have their place, and certainly, it is clear in no uncertain terms what QNX is designed for. It has a specific purpose. However, if you were to try and move QNX over to perform some of the functions that Linux performs, for example, compromises over speed, performance and getting the best out of the hardware would inevitably follow to the point where the microkernel architecture would be rocked to its foundations.
The problem is, for the foreseeable future anyway (and probably for our lifetimes and those of our children), in places like the desktop and server worlds software will always fill out and take advantage of the hardware to the fullest extent possible. There is no room for redundancy that hardly anyone will ever get to see. The only way this will change is when we get a proper Turing machine, or something like a quantum computer with unlimited resources and the ability to perform any number of functions with negligible, or no, penalty.
Even then though, the premise of a microkernel is that each part of it does not trust any other part that it interacts with. When you have a clear set of limited requirements, like with QNX, that’s OK. However, once you go beyond that it is a very difficult, and almost an impossible, thing to square with. A kernel is called a kernel for a reason.
People can argue as much as they like, but that’s where we are today.
Dear Andy,
I’ve read your “Tanenbaum-Torvalds debate, Part II” web page at http://www.cs.vu.nl/~ast/reliable-os/ and found it interesting.
I believe I am qualified to comment on the great microkernel brouhaha, as I have been developing operating systems since 1975, and have experience with both QNX and Mach.
I'm surprised you still believe that a lack of a common time reference is a hindrance to distributed algorithms. I was pretty sure that Leslie Lamport put that to bed twenty years ago with his two-part paper on relativistic time.
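For readers who haven't run into the idea, it is easy to sketch: Lamport-style logical clocks order events consistently across machines without any shared physical clock. The few lines of C below are just the textbook rules, not code from the paper or from any particular system.

/* Minimal sketch of Lamport logical clocks: each process keeps a counter,
 * ticks it on every local event and send, and on receive jumps past the
 * sender's timestamp before ticking.  Shown only to illustrate ordering
 * events without a common time reference. */
#include <stdio.h>

struct lclock { unsigned long t; };

/* Local event or message send: advance the local counter. */
static unsigned long lclock_tick(struct lclock *c)
{
    return ++c->t;
}

/* Message receive: take the max of local and received time, then tick. */
static unsigned long lclock_recv(struct lclock *c, unsigned long msg_ts)
{
    if (msg_ts > c->t)
        c->t = msg_ts;
    return ++c->t;
}

int main(void)
{
    struct lclock a = { 0 }, b = { 0 };

    lclock_tick(&a);                    /* A: local event          (A = 1) */
    unsigned long ts = lclock_tick(&a); /* A: send a message       (A = 2) */
    lclock_recv(&b, ts);                /* B: receive that message (B = 3) */
    lclock_tick(&b);                    /* B: local event          (B = 4) */

    printf("A=%lu B=%lu\n", a.t, b.t);  /* the receive sorts after the send */
    return 0;
}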
Actually, heartbeat issues, (your comment about determining whether a remote process is dead or slow) *do* come up on a single machine; especially in soft real time systems. A lot of the complexity in modern cell phone operating systems, even on single-core systems, revolves around precisely that issue. Multimedia systems exacerbate the problem.
Also, the block or character device models may have finally reached their limit. They were a good idea when Ritchie et al introduced them in Unix, but we’re finding in complex embedded devices that we have need for interactions that don’t model well as byte streams. This is especially true with respect to power management.
It is good that Minix 3 has few servers. That makes it, in my experience, atypical even for a desktop machine. Out of curiosity, I just typed 'ps aux' on one of my Linux desktops and it seems to be running on the order of 50 servers, for in excess of 100 processes, as many of the servers, such as httpd, show up as multiple processes (really threads) on Linux.
I will grant you that Linux is prolific in its introductions of servers, but I can easily see an order of magnitude more than you’ve suggested, once you include all the power management, database, web and other servers one finds on a development system these days.
I was mildly disappointed in your list of micro-kernels. As you know, operating systems for embedded applications have a rather different set of requirements than those for general purpose systems, and Linus argues about general purpose systems. Of your list, not one is a successful general purpose system. (Here, I mean technically successful, not market share.)
Unfortunately, none of those kernels, nor your IEEE paper, nor even this amusing page, makes the case that microkernels somehow address the reliability problems.
The problem is that a microkernel is an implementation technique, not an architectural approach, and reliability is addressed by architectural approaches.
I’ve built large scale systems requiring high reliability, and have found that a few key architectural positions are far more important than the implementation technique for ensuring their success, including, but not limited to:
1) Autonomy is your friend
2) Competition is superior to cooperation
3) Know your fault model and design to it
4) The unexpected always happens.
5) Replication is good, until it becomes part of the problem.
6) Peer-to-peer applications are not a good fit for a client server model. (We discussed this in comp.os.research approximately 14 years ago.)
There’s a lot more, of course, but this is a letter, not a textbook.
All of the successful systems I've built over the last thirty years have had monolithic, but modular, kernel designs, except one, and it, like Chorus, and unlike QNX, took a much bolder approach than the "microkernel". Actually, Chorus has come the closest of any commercial offering to getting it right: modularity is an artifact of design, not implementation, and whether a particular service belongs in "the kernel", meaning supervisor space, or not is an artifact of the hardware requirements of the system, not a reliability feature of the software.
Cloudy
(P.S. In answer to the opening paragraph of your IEEE article, I’ve had TV sets crash, and you’re misleading your audience. Digital TV sets crash less frequently than general purpose computers, because they have less to do. The law of requisite complexity is not an OS designer’s friend.)
Dear Cloudy,
I have almost 8 years of experience with the Linux, Solaris and Windows NT/XP kernels, and with whatever experience I have in desktop and embedded systems, I would like to say that yours was the only reasonable and rational post I could find in this lengthy thread of comments.
+10 from me
And, yes, that TV set analogy was the most stupid thing on that web page. With due respect, I think Dr. Tanenbaum should really think about real systems with a touch of common sense.
And yes, I tried his MINIX 3.
Edited 2006-05-16 07:16
(P.S. In answer to the opening paragraph of your IEEE article, I’ve had TV sets crash, and you’re misleading your audience. Digital TV sets crash less frequently than general purpose computers, because they have less to do. The law of requisite complexity is not an OS designer’s friend.)
That sums it up in a nice, concise, and far less messy way.
(P.S. In answer to the opening paragraph of your IEEE article, I’ve had TV sets crash, and you’re misleading your audience. Digital TV sets crash less frequently than general purpose computers, because they have less to do. The law of requisite complexity is not an OS designer’s friend.)
+1
Linux is and will remain a monolithic kernel. End of debate. Why didn't the microkernel folks start their own community effort and build a rock-solid and wildly popular microkernel? Bashing the huge work that has been done leads nowhere; it doesn't matter which kernel matches "the best design".
Btw, who cares whether a buggy audio driver takes down the kernel or not? On the next run I will disable it (and report the error or google for similar bug reports). Anyway, most often it either refuses to work at load time, or it works.
So… after all this time, after Linux has been maturing, and all the performance benchmarking and all that.
There have been other u-kernels out there as well.
So what is the verdict???
I mean, it's been quite a number of years since the last argument…
So what is the verdict?
That a microkernel is a usable implementation technique, as demonstrated by QNX; but that it is a mistake to base a design philosophy on an implementation technique.
Either that, or there simply is no verdict and won't be for the foreseeable future, and people just like going round and round on this issue as they do so many others, rehashing the same points and considering themselves lucky if they convert even a single person to their point of view. Oh yeah, and to take a break from a boring job periodically throughout the day.