The security benefits of keeping a system’s trusted computing base (TCB) small has long been accepted as a truism, as has the use of internal protection boundaries for limiting the damage caused by exploits. Applied to the operating system, this argues for a small microkernel as the core of the TCB, with OS services separated into mutually-protected components (servers) – in contrast to “monolithic” designs such as Linux, Windows or MacOS. While intuitive, the benefits of the small TCB have not been quantified to date. We address this by a study of critical Linux CVEs, where we examine whether they would be prevented or mitigated by a microkernel-based design. We find that almost all exploits are at least mitigated to less than critical severity, and 40% completely eliminated by an OS design based on a verified microkernel, such as seL4.
This is only true as long as security is the only requirement.
However, operating systems also need to be usable and performant.
Reminds me of this story a couple years ago:
http://www.osnews.com/story/29028/Microkernels_are_slow_and_Elvis_d…
Drumhellar,
This has been debated many times, haha. Performance is always viewed as the microkernel’s Achilles’ heel. There are a couple of things we need to consider, however:
1. The overhead of syscalls has gotten much better over the years. It’s really not the barrier it used to be.
http://arkanis.de/weblog/2017-01-05-measurements-of-system-call-per…
2. Batched IO allows us to aggregate the costs of syscalls across multiple requests (the “fread” graph at the above link shows this). Microkernel drivers that are written to take advantage of this model can benefit from the security & stability benefits of isolation while largely mitigating the overhead of syscalls. Consider this fictitious example of a socket daemon (aka web server).
Userspace:
write(socket1, “data1”)
write(socket2, “data2”)
write(socket3, “data3”)
…
write(socket20, “data20”)
With a conventional monolithic kernel and conventional API syscalls, there would be 20 syscalls to the monolithic kernel.
With a microkernel, one approach would be to pass those IO requests one at a time to the network stack running as a child process, making a total of 40(ish) syscalls. But, this is a naive approach. With batching, all of these requests can be combined into one or two microkernel syscalls for a total of 21(ish) syscalls. Much better.
We can actually do better than this by using a batching IO API at the userspace boundary…
io = {{socket1, “data1”}, {socket2, “data2”}, …}
iosyscall(io)
The entire operation from userspace to microkernel driver would only require 2 syscalls total, which could perform even better than the common approach used in conventional monolithic kernels today!
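Sketched out a bit more concretely in C (everything here is invented for illustration; neither struct io_req nor iosyscall() exists in any real kernel, and the stub simply falls back to plain write() so the sketch compiles):
#include <stddef.h>
#include <unistd.h>
/* Hypothetical batched-write request; invented for illustration. */
struct io_req {
    int         fd;     /* target socket descriptor */
    const void *buf;    /* data to write */
    size_t      len;    /* number of bytes */
    ssize_t     result; /* filled in by whoever services the batch */
};
/* Stub standing in for the single batched syscall: in the scheme above,
 * this whole array would cross the kernel boundary once and be handed
 * to the network-stack server in one message. */
static long iosyscall(struct io_req *reqs, size_t count)
{
    for (size_t i = 0; i < count; i++)
        reqs[i].result = write(reqs[i].fd, reqs[i].buf, reqs[i].len);
    return (long)count;
}
/* The 20 writes from the example become one submission. */
void send_all(const int sockets[20])
{
    struct io_req batch[20];
    for (int i = 0; i < 20; i++) {
        batch[i].fd  = sockets[i];
        batch[i].buf = "data";
        batch[i].len = 4;
    }
    iosyscall(batch, 20);
}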
I don’t expect this debate will be settled any time soon, but there are software solutions to the syscall overhead.
With Meltdown and Spectre mitigations syscalls have gotten much worse again.
The security advantages of a microkernel, however, can help mitigate the exploitation of Spectre and Meltdown, leading to a more secure OS.
I don’t believe that is the case. If you can read memory on a different thread you don’t have access to (i.e. a user thread reading memory from the kernel), you can read memory on a different thread (a user thread reading any part of a microkernel, or any module providing any service).
It depends on the type of attack. Microkernel probably wouldn’t have protected against Spectre, but as I recall MINIX (and other microkernels) were not affected by Meltdown. It really depends on the style of CPU attack and whether context switching is a factor.
Microkernels could make it harder to induce targeted branch mispredictions, but they don’t stop them. I say the same about AMD Ryzen, which has a branch predictor that is harder to manipulate, but not (theoretically) impossible.
Meltdown’s impact depends on the design of the OS rather than the type; for instance, my toy kernel would be vulnerable to the Meltdown attack as it maps all memory into the address space of each process, readable only by the kernel.
L4 kernels use registers for IPC, physical or virtual (a small buffer per process address space), so it’s likely they aren’t affected. It has, however, been ages since I looked at the actual code of any of those kernels.
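For anyone curious what register-based IPC looks like on seL4 today, here is a minimal sketch against libsel4, assuming an endpoint capability ep has already been handed to the thread by its parent (the ping() wrapper itself is invented for illustration):
#include <sel4/sel4.h>
/* Send one word to a server over endpoint `ep` and return one word from
 * its reply. Short messages like this travel entirely in (virtual)
 * message registers, so nothing is copied through shared buffers. */
seL4_Word ping(seL4_CPtr ep, seL4_Word value)
{
    /* label 0, no capabilities transferred, 1 message register in use */
    seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 1);
    seL4_SetMR(0, value);        /* payload goes into MR0 */
    info = seL4_Call(ep, info);  /* send and block for the reply */
    return seL4_GetMR(0);        /* reply payload comes back in MR0 */
}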
panzi,
With Meltdown, yes, but only because Intel arguably had a bug by allowing code to speculatively cross privilege boundaries unchecked, which is obviously very bad, at least in hindsight. AMD has no Meltdown performance loss because they never did this in the first place. Once Intel fixes its CPUs, we can take Meltdown off the table.
Spectre is clearly the more complex issue to solve, but it is unrelated to syscalls.
Your batching example assumes however that:
1. Each I/O request is not by itself significant (this is generally true for I/O to stream sockets, but not for datagram sockets, and may or may not be true for other types of I/O).
2. Latency is irrelevant.
If the first assumption doesn’t hold, then you need each individual call to the driver, period, and can’t do any batching. If the second doesn’t hold, you’re usually better off not batching, and thus still save nothing.
Also, there’s absolutely no reason that a monolithic kernel can’t offer a batched I/O API too. So either way you look at this, you’ve got more syscalls in a microkernel and, more importantly, more context switches, which account for most of the overhead of system calls.
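Linux already has batched I/O in places, for what it’s worth: sendmmsg(), for example, submits multiple datagrams on one socket with a single syscall (not quite the cross-socket batch from the example above, but it makes the point). A rough sketch:
#define _GNU_SOURCE
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>
#include <stdio.h>
/* Send n small messages on one UDP socket with a single syscall. */
int send_batch(int sockfd, const char *msgs[], unsigned int n)
{
    struct mmsghdr hdrs[n];
    struct iovec   iovs[n];
    memset(hdrs, 0, sizeof(hdrs));
    for (unsigned int i = 0; i < n; i++) {
        iovs[i].iov_base = (void *)msgs[i];
        iovs[i].iov_len  = strlen(msgs[i]);
        hdrs[i].msg_hdr.msg_iov    = &iovs[i];
        hdrs[i].msg_hdr.msg_iovlen = 1;
    }
    int sent = sendmmsg(sockfd, hdrs, n, 0); /* one kernel entry for n messages */
    if (sent < 0)
        perror("sendmmsg");
    return sent;
}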
ahferroin7,
That’s true, there is a trade off. While batching is great for distributing the cost of syscalls across many concurrent requests, on an unloaded server each batch contains only a single request. In this scenario batching offers no benefit over the individual syscall approach. So individual requests will incur 100% of the extra microkernel syscall overhead (~100ns).
So there is additional latency, but it becomes less and less important as the load goes up.
Agreed, only if security is your only requirement… or modularity, or verifiability, or maintainability, or resilience to driver crashes, or …
Which isn’t a problem with microkernels or variants (exokernels etc). Or do you have a specific example of something that would degrade significantly with a microkernel design?
> scaling of hardware
Remind me which OS has sole occupancy of the TOP500 list of supercomputers again?
tidux,
I don’t know what Megol meant with “scaling of hardware” as it pertains to microkernels. But as for your post, I don’t think it’s fair to judge the merits of microkernels from a sample they’re not even represented in.
To put this a different way:
If we look at fortune 500 companies, only 24 have female CEOs:
https://www.cnbc.com/2018/05/21/2018s-fortune-500-companies-have-jus…
Q: What does that say about the quality of female CEOs?
A: Absolutely nothing one way or the other.
To draw an inference here disregards all the underlying reasons for a gap.
I tried to find some performance benchmarks comparing Genode or seL4 or any microkernel directly to Linux, but I’m having no luck finding any. I imagine this is something most of us would be interested in; does anyone have a good link showing directly comparative benchmarks?
lol, I don’t think your example says what you think it does. Women CEOs are disasters 99% of the time.
tidux,
This overlooks all the men who are disasters. You are making assertions that lack well-defined and testable criteria. It’s fine to say that’s your opinion, but for objective facts, evidence does matter.
Anyways, sharing our opinions is fine, but having hard data comparing the performance of modern monolithic and microkernels would be enlightening.
Not to mention that you have to take into account the risk that the data will be intentionally skewed to cast women in a bad light.
Guys certainly aren’t above arranging for women to get opportunities to “be the fall guy”, whether it’s in business or politics.
For example, while it’s a bit more complex than that in her case, Canada’s only female prime minister to date, Kim Campbell, got the position through an internal party election after Prime Minister Brian Mulroney resigned in the lead-up to an election, after he and the party had done unpopular things.
(We don’t directly elect the prime minister but, rather, the leader of whichever party gets the most seats gets the job and, if they resign, someone else gets picked without needing to involve the electorate.)
Note that they were members of the Progressive Conservative party, our right-wing party and precursor to our equivalent to the U.S. Republican party… though, at the time, it was closer to what the Democratic party is now. Our politics have shifted U.S.-ward since the Reform (right-wing fundamentalist) party merged with the Progressive Conservatives to produce the Conservative Party.
LOL! But just in case you are serious:
- Supercomputer clusters execute specialized, compute-intensive code.
- Supercomputers use user-space, hardware-supported, high-performance communication – MPI/InfiniBand etc.
- Supercomputers run compute-intensive code in user space.
- Supercomputers are optimized for their purpose; placement of data close to nodes etc. is often done, unlike in normal computers.
- It is common for compute nodes not to use multitasking, as that reduces maximum performance (less time actually computing).
So what use is the OS in that context? IO – massive amounts of IO streamed from parallel arrays in IO nodes. There, Linux is actually very good.
The compute nodes have very small kernels. Some earlier ones (from the time I actually looked at this stuff actively) had things like Compute Node Linux, which, while severely stripped down, still had scaling problems and other disturbances, e.g. timer interrupts making the user-space message passing a little bit more variable and unpredictable, and so lowering compute performance.
No that’s not a good example.
When did the kernels of Windows and MacOS become monolithic? They weren’t monolithic at their introduction.
Always?
XNU is definitely a monolithic kernel, despite its Mach origins.
Same with Windows. It’s still only fairly recently that some drivers have moved out of kernel space in Windows, but it’s still definitely monolithic.
NT had its graphics subsystem outside of the kernel; they moved it in during the Win2k/XP days, and then back out during Vista. It’s always had that capability. I wouldn’t call it monolithic, but it isn’t a microkernel either. It’s a hybrid, and very modular.
I use Linux now, but I remember back when I used Windows I once had a crash of the graphics driver. So what happened? An error message popped up and Windows kept running in lower resolution and color mode. That’s some impressive shit.
Linux has always done that too: if X11 crashed, you would usually get dumped out to the text console (assuming you still had the text consoles enabled, which isn’t a given in modern distros). You can usually still log in remotely via ssh too, if that’s turned on.
X11 isn’t the graphics driver though. If the actual GPU driver crashes on Linux, you’ve usually got a useless system until you reboot. At best, the rest of the system still works, but your video output does not. At worst, you just had a full kernel panic.
Going to text console when X crashes isn’t the same as maintaining a working (even if in lower resolution) GUI and otherwise all apps continuing to run normally…
And what’s impressive about that? I fail to see anything impressive about that, unless the only OS you’ve seen your entire life was Windows 98, where OS crashes could be caused by farting too loudly and were considered a normal part of using a computer.
No other OS can recover like that. I don’t know if you are dissing Windows for crashing in the first place, but robust error recovery is always the hallmark of good engineering.
It’s 2018, it’s OK to say Windows is a great engineering feat.
As soon as you move the video driver out of the kernel, there’s no reason why the OS can’t gracefully recover after a video driver crash. That is what I am trying to say. There’s nothing “magic” about that: simply restart the video driver.
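For illustration, once the driver is an ordinary user-space process, “restart it” can be as simple as a supervisor loop like this (a hypothetical sketch; the /usr/lib/drivers/video path is made up):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
/* Minimal supervisor: run the (hypothetical) user-space video driver and
 * restart it whenever it dies, instead of taking the whole OS down. */
int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            execl("/usr/lib/drivers/video", "video", (char *)NULL);
            _exit(127);                /* exec failed */
        }
        if (pid < 0)
            return 1;                  /* could not fork at all */
        int status;
        waitpid(pid, &status, 0);      /* block until the driver exits or crashes */
        fprintf(stderr, "video driver died (status %d), restarting\n", status);
        sleep(1);                      /* avoid a tight respawn loop */
    }
}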
XNU is not monolithic. It is a hybrid. Please check your facts before posting something like this.
Hi,
Yes.
I’d characterise XNU as “hybrid with micro-kernel roots”; and I’d characterise modern Windows and Linux as “hybrid with monolithic roots”.
The difference is clear if you look at their respective device driver models – for XNU, device drivers use “event loops”, while the device driver models for Windows and Linux are designed around direct function calls.
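To make the contrast concrete, here is a rough sketch of the two shapes (all names and stubs are invented; this is not real XNU/IOKit or Linux driver code):
#include <stddef.h>
/* Direct-call model (monolithic roots): the kernel invokes the driver
 * through a function pointer table, in the caller's context. */
struct nic_ops {
    int (*transmit)(const void *frame, size_t len);
};
static int nic_transmit(const void *frame, size_t len)
{
    (void)frame;
    return (int)len;                 /* pretend the frame was sent */
}
static const struct nic_ops nic = { .transmit = nic_transmit };
/* Event-loop model (micro-kernel roots): the driver is its own task,
 * waiting for request messages and replying over IPC. */
struct drv_msg { const void *data; size_t len; int status; };
static int  ipc_receive(struct drv_msg *m) { (void)m; return -1; } /* stub IPC */
static void ipc_reply(const struct drv_msg *m) { (void)m; }        /* stub IPC */
static void driver_event_loop(void)
{
    struct drv_msg msg;
    while (ipc_receive(&msg) == 0) {          /* block for the next request */
        msg.status = nic.transmit(msg.data, msg.len);
        ipc_reply(&msg);                      /* send the result back */
    }
}
int main(void) { driver_event_loop(); return 0; }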
– Brendan
“Hybrid kernel” is just marketing speak for a monolithic kernel with module support. Using the usual vague definition even Linux, which is always touted as monolithic, is a hybrid kernel.
Ever since they were put into real production. They were based on microkernels in their original conception, and perhaps in their very first deployments.
I have been around for much of the debate between monolithic kernels and microkernels, without having skin in the game personally.
It strikes me that the reliability and security gains of microkernels have been clear from an early stage. Also clear has been the performance cost of microkernels as one factor, and the necessary re-engineering as another. AFAIR Linus Torvalds pointed this out at the time.
Collectively, rabble that we are, we have allowed those two factors to keep microkernels on the back burner (for the mainstream). In practice we have collectively bought into a compromise on reliability and security, mostly for the advantage of a bit of performance.
One thing on this landscape that irks me is Intel taking advantage of MINIX 3 in its motherboards without feeding back useful code (like USB support) that is really needed. Frankly, that ponks. It is parasitic in the short term and stupid in the medium term.
See https://itsfoss.com/fact-intel-minix-case/
Except that Minix 3 falls under the BSD license so Intel is perfectly within their rights to do this. I don’t mean to start a “religious” war here but there are probably better reasons to be annoyed with Intel.
It’s a shame they don’t want to share but that behavior is obviously condoned by the author based on his choice of license.
AST responded:
https://www.cs.vu.nl/~ast/intel/
Any argument towards security in the applications processor is complete bunk as long as the baseband processor can own the whole device on a whim.
Also, that paper pushing microkernels probably has a decent amount of confirmation bias… microkernels are potentially harder to break into because fewer people are familiar with them, so attack methods against them are less mature.
That said, as long as a microkernel has comparable performance to its monolithic counterpart, why not give it a shot?
When I saw the authors of the paper (the same people responsible for seL4) I was expecting bias, and read through the paper with a critical eye trying to find something to criticize. I failed to find any bias and failed to find anything significant to criticize (other than the title of the paper – it’s a little “hyped”, but that seems to be common practice for everything now).
– Brendan
Funny you would say that here, considering that ~most baseband processors run microkernels – IIRC Qualcomm uses seL4, which is under discussion in the linked paper.
No actual juries were used in the making of this paper. Their methodology is really iffy. “Look at some CVEs and decide whether or not they could be mitigated by microkernels in principle”.
Also, they’re not really testing monolithic kernel design. They’re testing an implementation. Monolithic kernel design is flawed, yes, but in the same way everything has flaws of some kind or another when actually implemented.
“These could’ve been avoided in a microkernel” is not much better than an opinion.
Agreed. This is very similar to the case for rewriting everything in Rust. (E.g. whether you abuse “unsafe” to violate Rust’s invariants is an implementation detail too.)
I like Rust, and Rust’s ability to encode many safety-critical invariants into compile-time checks could really help kernels, but you don’t see me writing “the jury is in” posts linking to equivalent Rust papers. (And at least two have shown up in /r/rust/ in the time I’ve been lurking there.)
Additionally, there’s a bias in the fact that essentially all popular kernels are monolithic (or partially monolithic) and so that skews all the vulnerabilities they’re examining.
If the situation was reversed – if the major operating systems were all microkernel, I could almost guarantee you could make the reverse case – that monolithic kernel design could have been used to mitigate the attacks – because the attacks would have been designed for and optimized for microkernels.
The major operating systems for some markets (smartphone, desktop, server) are monolithic; but the major operating systems for other markets (various embedded – e.g. automotive) are micro-kernels. This means that you can try to make the reverse case.
To get you started: https://www.cvedetails.com/vulnerability-list/vendor_id-436/QNX.html
Don’t forget to follow the same methodology – e.g. start by selecting all CVEs from 2017, etc.
– Brendan
My point was that the major OSes get the bulk of the investigation into exploits etc because they’re the greatest targets.
The same reason the majority of nastiness targets Windows for example (or did until recently)
Embedded devices aren’t a sufficiently large class of internet-connected devices with general purpose software to attract the same kind of attention.
So no, I can’t at this time.
Hi,
The alternative would be to find a market where there’s a monolithic kernel and a micro-kernel that are both targeted equally; then do the comparison both ways (e.g. CVEs from the monolithic kernel that won’t affect the micro-kernel vs. CVEs from the micro-kernel that won’t affect the monolithic kernel).
Of course what you’ll find is that all of the vulnerabilities that affect the micro-kernel (with its extra isolation) will affect the monolithic kernel (without the extra isolation); and some of the vulnerabilities in the monolithic kernel won’t affect the micro-kernel because of the extra isolation.
Mostly, the opening paragraph of the paper is correct – (“The security benefits of keeping a system’s trusted computing base (TCB) small has long been accepted as a truism, …”) because a micro-kernel is just the practical application of one of the corner-stones of secure systems ( https://en.wikipedia.org/wiki/Compartmentalization_(information_secu…) ).
The problem is that (unlike performance) security is incredibly difficult to quantify; and people tend to compare based on measurements (performance) and not things that can’t be measured (security). Essentially, the goal of the paper is to measure something that people already believe, to enable more accurate comparison.
Sadly, in a world where decisions are often made based on economics and not technology, these kinds of comparisons rarely matter anyway. People will happily use “worse” if it saves them $2 today (even if it costs them $20 tomorrow).
– Brendan
But it still doesn’t say anything about design. It’s still measuring the implementation. To say a design itself is flawed, not just the implementation, requires formal verification, funnily enough.
Hi,
They addressed that issue in the paper (the first point in the section called “3.4 Threats to validity” on page 4).
– Brendan
They didn’t address it. They mention it. They mention many things, but just like how they classify the CVEs:
“We perform the classification by having two authors independently examine each exploit and assign a mitigation score.
Where the assessments differ, the third author examines the exploit to determine the final score.”
– it’s all very weasel-wordy. And they only have three authors doing the classification. As my original comment implies, they would actually have benefited (though not by much) from a proper jury.
And in the paragraph you mention, they don’t address it. They’re still talking about implementations. Not design.
Hi,
If all implementations are poo (despite 20+ years of fixes), don’t you think that says something about the design itself being “prone to poo”?
What if I create a list of 1000 different ways to poke yourself in the eye (different angles, different objects, different speeds)? How many times would you poke yourself in the eye before you started thinking that maybe poking yourself in the eye is a bad idea, and it’s not just 1000 bad implementations of a good idea?
– Brendan
Brendan,
Haha, +1 for the original analogy
I do understand why kwan_e wouldn’t be satisfied with the report; this is one of the most divisive topics in computer science! It may not change anyone’s mind, but I still think their methodology is an interesting new way to approach the debate. Thanks for posting this, Thom.
I don’t see how anyone should be satisfied with the report, no matter where you sit on the debate. It is a bad methodology, worse than what you’d find in sociology. The only people who would be satisfied with the report are people who have already decided their position and will agree with any bad argument that supports it.
I personally lean towards microkernels, but this paper really does nothing to answer anything. And I’m pretty sure in other comments when microkernels come up, I’ve spoken in favour of microkernel performance improvements of things like L4 and such.
Such visceral reactions and red herring arguments only serve to show how willing they are to accept any bad argument as long as it supports their cherished position.
But that’s not what this paper purports to show. The paper would have been a better argument if it tried to argue this point. But it didn’t. It used a very sketchy method. That’s been my point from the beginning, and yours is nothing but a red herring.
It amazes me how many people are incapable of understanding simple figures of speech. Do they no longer teach about metaphors in schools?
I mean, did anyone actually read this headline and think to themselves “oh, there was a real jury involved”? Absurd.
You talk about figures of speech, yet completely miss the fact that the very phrase I used was referencing the joke about movie or TV credits saying “no animals were harmed in the making of this movie/show.”
The whole “No X were Y in the making of this Z” is a very common snowclone.
Did not occur to you at all, genius?
And then, at least one other person replied knowing the point I was making was in the rest of my comment, which by word count alone dominated the whole comment. If other people could figure that out, why can’t you?
I mean, seriously, it’s got its own TVTropes entry: https://tvtropes.org/pmwiki/pmwiki.php/Main/NoAnimalsWereHarmed
Maybe people like you need school approved figures of speech to understand?
Or maybe your references are neither as clear nor as funny as you like to think.
Also, excuse me, but I am not as big a fan of TV shows or popular culture as you are.
This is what you did in your comment:
{Sarcasm} {Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}{Serious criticism}
It’s much easier when it’s:
{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}{Sarcasm}
It’s kind of tough when you hide the sarcasm like that. It makes it kind of unclear whether or not you understood the sarcasm to be sarcasm. Timing and delivery are everything in comedy. Of course there are sub-genres that deliberately play with the timing to tell really funny jokes with bad timing and delivery, but if that’s what you were doing, I give up. Here is your prize: { insert prize here }
Well, as grub will tell you, since he’s the self-appointed gatekeeper of English on this site now, that snowclone was not sarcasm because it wasn’t mocking anything.
And it wasn’t tough, or hidden, since literally no one else needed to be told that I wasn’t really expecting a jury. But then, there’s a reason why they don’t put the best and the brightest on gatekeeping duty.
Seems like no one likes other people being creative with their use of language these days. Everything has to be made super obvious and spoon fed to people, lest they have to read something closely. They say Western education is supposed to foster creativity, but evidently not, since it seems to just breed self-appointed gatekeepers who see it as their duty to stamp out any difference from the norm.
I’m basically saying you didn’t fit the pattern of expected humor. It’s ok, not the end of the world. It’s just the way you put together jokes. I understand why he didn’t get it, and I understand why you’re confused about why he didn’t get it.
Memes are really popular because they remove the ambiguity that text leaves in. Wonder if Thom can add them @thom? @david?
Masking your own errors as “creativity” is really low.
Just accept that you’re neither as universally funny nor creative as you pretend to be. Just because someone (me) didn’t think your “joke” was funny/creative should not be such a blow to your ego as to invite huge flamewar.
In fact, it was totally unfunny and very cliche.
And unfunny jokes lead to misunderstandings quite often, just as it did this time.
Also, if you were just a little bit smarter, you would notice that with my reply to your first post I simply replicated another conversation we had under another topic (where I sarcastically criticized the headline), only here we switched places: I gave you a taste of your own medicine.
Give it up already.
You wish… If I wasn’t actually getting to you, you wouldn’t be replying trying to justify yourself as “funny” and “creative”.
🙂
The same is true anywhere in security: the smaller your footprint, the less risk of vulnerability and the less overhead of patching.
This is why a general-purpose OS is usually a very poor choice from a security perspective, as it will contain support for all manner of features that you’re not using, which increase risk for no benefit.
So where are all those microkernel OSes? For more than 20 years we have been lectured that a microkernel is a so much better architecture, when in practice all the viable operating systems are monolithic. Somebody please go and build a usable OS with a microkernel! Until then it is no more than an academic concept.
I totally agree with L. Torvalds in dismissing it as practically not viable.
You must have heard of Hurd (pun intended) !
Yes. And I would say it proves my point. Heard it does not happen.
Hi,
It probably won’t be too long (a few years) before Google starts replacing Android with Fuchsia. We’ll see what Linus says after that. 😉
Until then, OS X is more micro-kernel than monolithic, and for lesser known OSs about half are micro-kernels (Haiku, AROS, MorphOS, Minix, QNX Neutrino, …); but if you want to try something kinky Classic Mac OS switched from monolithic (on 68K CPUs) to nano-kernel (on PowerPC) about 20 years ago.
– Brendan
All the future operating systems are world beaters. And the main killer feature is typically that they are micro kernel. That has been the case for more than 20 years.
In the meantime we are writing this on some monolithic machine. But not for very much longer…
ThomasFuhringer,
I hope you realize the fallacy in judging practicality by popularity!
Unfortunately that seems to be where we are though, not only does the buying public seem determined to have extremely skewed markets at the expense of alternatives, but we’re actually judging the merit of alternatives by how popular they are.
I really enjoy discussing kernel design. Having different opinions is good, we should have all kinds of kernels! But boiling it down to a popularity contest just reinforces the incumbents without bringing anything meaningful to the discussion.
As for Torvalds, he won the unix popularity contest, but that in and of itself doesn’t imply Tanenbaum was wrong. Heck, I do quite a lot to promote Linux to my clients, but you’d be mistaken to use that fact as evidence for my view of macrokernels over microkernels. I suspect there are many linux/macos/windows/android/ios users like me who use their respective platforms despite their flaws.
For a few years now Intel has shipped Minix with all its CPUs; it might at this point be more widespread on x86 machines than Windows. Qualcomm ships a microkernel OS on its baseband processors, and that’s most of them I think. Symbian is a microkernel OS, and not long ago it dominated smartphones… Also, feature phone platforms (not what’s most sold these days, but probably still what’s most used / largest installed base), such as “Nokia OS”/Series30/Series40 or Sony Ericsson A200, are typically microkernels.
And still you are writing this on a machine with a monolithic kernel, right?
Like all the other proselytizers lecturing me on the superiority of the micro kernel.
I do not care about popularity, but to me it is just a flawed concept because it requires an awful lot of messaging between the (micro) kernel and what is then user land, which is obviously not worth the benefit. If it were, after so many years it would have emerged as the winning concept in practice.
Linux uses a design dating back to the ’60s; Windows NT is a rewarmed design from the ’70s combined with parts coming from the ’80s.
Microkernels as a concept date back to the ’80s, with significant work done in the ’90s. At that time the other systems were already established, and significant amounts of software had been designed for and around the existing OS designs.
Which makes most attempts a microkernel with upper layers emulating legacy OS design – adding overheads with the only benefit being reliability and fewer bugs. Hardware design was often geared towards monolithic kernels.
That’s why.
Today we have another world. The Internet is absolutely everywhere, exploits are absolutely everywhere, people are less accepting of crashes. Standard operating systems are increasingly using extra protection around programs or subsystems – e.g. “jails” and virtual machines.
Microkernels have added overheads and emulating legacy designs adds even more. But in a world where there are useful programs programmed in Javascript(!!) ranting about microkernel overheads seems quaint.
ThomasFuhringer,
Ah good, this makes for a better technical discussion
I’d say it depends: if every single request is sent individually, then you are right. But you don’t have to design a microkernel this way. You can use batching or message queues, which need much less context switching.
I wish I could provide recent benchmarks for a microkernel based on this model, but I couldn’t find any. Maybe I’ll have to do the legwork myself to get the data, but I don’t have time now.
In addition to keeping failures isolated, there are other benefits to microkernels. It’s easier to upgrade individual components without taking down the whole system. It’s easier to track down faults. It’s easier to enforce security. Some people would appreciate these things, but I think it’s fair to say that the vast majority of users are just along for the ride and don’t care one bit either way.
I think there’s more promise to your other idea about verified binaries. Verified binaries would make microkernels unnecessary.
kwan_e,
Is this referring to the discussions I had with Neolander about enforcing isolation through language semantics? Wow that goes way back… Yeah, I like that idea. Alas, I think it’ll be difficult for any newcomer to make significant inroads in the operating system market regardless because we’re so entrenched in the current oligopoly.
I really wanted to work on building an operating system around the time I was in college, and though I floated the idea to employers & investors, nobody was all that interested in financing my work. I think the best shot at change would be for one of the major tech companies to promote it, but ironically enough I think they’re held back by their own fear of displacing their existing cash cows. *Shrug*
I think your best shot would be at something for the ~embedded OS world; it seems relatively open to newcomers, with fewer factors making any OS too entrenched… plus it’s probably more viable for, at least initially, a part/free-time project, essentially a hobby OS, because embedded OSes tend to be ~simpler.
There would still need to be code handling the basics of keeping the system running, probably including basic memory management and task switching unless the hardware does it. That’s a microkernel still.
The above also indirectly mentions another alternative: letting hardware be the microkernel.
It can handle multitasking (like the Transputer), communication (like the Transputer) and memory protection (segmentation, capabilities).
A long time ago I read about someone that combined the idea of software protection with hardware protection in a proof of concept system.
It used x86 segments combined with code scanning to make a relatively elegant system, segments provided most protection with the scanning detecting the few cases hardware didn’t protect.
https://sci-hub.tw/10.1109/pccc.2000.830360
But go ahead and try to move the goalposts; now you “don’t care about popularity” after I showed that microkernels are quite the thing – in fact, again, possibly more popular on PCs than Windows; on mobile, after a likely relatively short detour from micro to monolithic with Android, we will probably be back to microkernels with Fuchsia; it is the winning concept.
And you’re doing what you accuse microkernel proponents of doing, making theoretical arguments – while in practice, Symbian and feature phone OSes are quite speedy on limited hardware, and it’s also no problem for Minix on Intel CPUs or seL4 on Qualcomm baseband processors – they get the job done quite successfully.
zima,
http://www.osnews.com/thread?661374
(osnews won’t let me post in that discussion any longer, sorry)
Ah, I did address a different point, but then “compiling” was the wrong word to use to describe programming the FPGA.
Without power an FPGA loses its programming, so typically there’s another CPU to program it at power-on. I’ve only used one FPGA and it was USB powered; I never had to wait after plugging it in, but I wasn’t really testing for this, so I’m not sure.
This link suggests 200-300ms, which depends on the size of the FPGA and its bus speed.
https://forums.ni.com/t5/Multifunction-DAQ/How-long-does-it-take-to-…
This might be good enough for “alt-tabbing” between applications as is, but I would think an FPGA inside of the CPU (or even on the PCI bus) could do better. Also while off the shelf FPGAs today aren’t typically multitasked, I’d expect a purpose built FPGA to incorporate better support for OS multitasking.
Thanks for the reply; I thought it could take more time to program an FPGA, but at less than a second, as you write, it should be OK as it is.
There does not need to be any degradation in anything.
That was what high-level was to begin with. And the timer tick in the kernel is that too, to reduce the overhead of high-level calls.
As long as such structures are used, you get good performance.
One may argue whether high-level languages have had their time though, and one might as well make a more sophisticated low-level language that looks more like high-level. JSR with arguments, etc.
A lot of the time high-level is only the less masochistic approach.
Last time I checked Linux it had less than 200 µs of jitter, when configured well, and a 90 Hz timer. Which is good. With a low-jitter design one gets the best performance.
You should know that when Linus wants a “monolithic” kernel (to me: all the code in one place), it’s because of “idol”, not knowledge. His 1000 Hz timer and 10 ms filter in sched.c are also this. If one really wanted available source to work, one could implement fair pay in it, and let specialists take care of their parts, which available source excels at; the parts could be included over the net at compile time, checked in the kernel, and fair pay allotted. There is no need for a “monolithic” kernel.
Peace.
90Hz seems low. Debian defaults to 250Hz and Arch/Manjaro to 300Hz these days. Most worthwhile distros also offer a PREEMPT_RT / 1000Hz kernel in the repos.
Ah yes, there was a poster also obsessed about jitter in OS among some weirder things / peculiar beliefs…
For some things jitter _is_ important, though I’d suggest going for a realtime configuration if that’s the case.