Redox OS, a microkernel OS written in Rust, has just released version 0.0.6, which includes bug fixes and an update to Rust.
From the project’s 2016 in review post:
Today, we have a pretty mature project. It contains many core, usable components. It is already usable, but it is not yet mature enough to serve as a replacement for Linux (like BSD is), though we’re slowly getting there.
The kernel was rewritten, a memory allocator was added (taking libc out of the dependency chain), several applications were added, a file system was added, a window manager and display server were implemented, and so on.
It’s called Redox…
There’s an interesting side discussion over on the Rust subreddit about which license the OS should adopt, MIT or GPL:
https://www.reddit.com/r/rust/comments/5klu34/funding_redox_os_devel…
I personally work on many commercial projects that reject GPL libraries, for better or worse. However, that resistance is usually to userspace libraries. For operating system code, I don’t see much resistance, since it doesn’t impose limits on userspace.
oheoh says:
I kind of agree with him; it seems unfortunate that Apple has not given back as much as it took from the BSDs. Of course they’re under no obligation to because of the license, but it does seem fair to say the license enriched Apple more than the BSD projects. There could be more companies that took BSD code and never contributed back, because they’re not obligated to.
While I understand why many people dislike the viral nature of copyleft, it still seems evident to me that the GPL has benefited Linux greatly, with all kinds of companies submitting code back to Linux. The mistake I think Linux made is in licensing as “GPLv2 only” instead of “GPLv2 or later”, since that limitation is having negative repercussions and makes it virtually impossible to upgrade to GPLv3 at any point in the future.
Hopefully I’m not stirring the hive too much with a religious licensing debate, but I think some people here on OSNews might have some informed opinions.
I, personally, think it’s a matter of opinion. I contribute to free software projects and prefer to contribute to GPL-licensed software for a very simple reason: I put the work in, and these are my terms. I don’t ask for money, or recognition, or any such thing; instead, I want my work to remain available to anyone who wishes to use it. However, I can totally understand why projects like libogg/libvorbis, libopus, etc. choose a BSD/MIT-style license. They are not writing code as a hobby; these are reference implementations.
As for what Redox should do, I think it’s up to the developers. It’s definitely one of those choices that will stay with you ’til the end (see your example about Linux).
teco.sb,
I’m interested in what ways Apple hasn’t given much back to the BSDs.
On the one hand, they use a totally different driver setup than any BSD, and a lot of their very low-level OS stuff is more Mach than BSD.
On the other hand, they do supply all the code you need to get a working XNU kernel (which can replace the one OS X ships with), including all the BSD code needed, and all of the BSD portions of their userland. Not to mention that the ZFS port in FreeBSD is in part Apple’s handiwork.
FlyingJester,
OS X (and NeXTSTEP before it) has always been severely misunderstood. It has always been a Mach-like system, with BSD elements borrowed mainly for the Unix personality of the system, and plenty of GNU stuff thrown in for good measure (although they’ve moved away from a lot of it in the past few years). Yet many people, especially in the FreeBSD camp in my experience, see it as a mutated FreeBSD.
There are still developers who intentionally close sources or restrict freedoms of permissive software derivatives. Even if it gives them zero noticeable advantage, they can’t be swayed. However, a GPL project that is non-portable, hard to build, and unmaintained isn’t much better off either.
If someone wants to help a project, they’ll find a way, regardless of license. And similarly, if someone wants to remove functionality or kill a port to an unpopular platform, nobody can stop them. The notion that the GPL somehow makes us better, more cooperative people isn’t justified.
Software licensing should be pragmatic and generous, hands-off, aimed at solving a concrete problem, not serving as an excuse to annoy others.
So, long story short, I’m largely indifferent to licensing because it shouldn’t matter (although I prefer free licenses). If there’s no obvious advantage to being restrictive, there’s no reason to be. I’m not a lawyer, and I’m not interested in arguing. I want to solve actual problems and help actual people, not have a legal standoff.
Maybe I expressed myself badly, but my point is that licensing is usually the least of our worries. It’s an uphill battle no matter what you choose.
What do you mean? All the BSD components that Apple used ended up in their open-source releases, including the XNU kernel, which is still open. While it has been a while since they merged a recent version of the FreeBSD kernel into XNU, the two have diverged enough that any changes Apple makes wouldn’t necessarily be easy to port back to FreeBSD anyway.
There are a number of other pieces of software they created that are open source under permissive licenses, too. LLVM/Clang is one (not technically created by Apple, but they’ve been footing the bill for its creator and a whole team to work on it full time since 2005), as are mDNS, GCD, and others. A lot of OS X-specific tech is open source too, such as launchd.
Writing kernels is WAY beyond my level of knowledge; however, I think microkernels are a much better choice than the monolithic/modular kernels we have to deal with today. I just don’t understand why that design won out over something simpler and more manageable, like a microkernel. I sure hope this project moves forward.
I’m still hoping we will, one day, see a seL4-based general purpose OS.
Hi,
The reason for this is that most micro-kernel developers are incredibly stupid and make decisions that guarantee failure before they even write a single line of code.
To understand this; first you have to understand (what I’ll call) “The OS Developer’s Dilemma”.
An OS without applications is useless. If you port applications from some other OS then it’s virtually impossible to show that your OS is better in any meaningful way – it ends up being the same applications with all the same (user-visible) features as a more mature OS (that is faster and more stable and has better hardware support because it is more mature). If you don’t port applications from some other OS then you have to write a huge number of applications and that’s going to take multiple decades. This is the “The OS Developer’s Dilemma” – you have to choose between being screwed and being screwed, and it’s extremely difficult to avoid the consequences regardless of your choice.
For micro-kernels this becomes worse. The isolation between pieces causes additional communication overhead. If you port software from some other OS, then that software (and the APIs and libraries, etc it uses) will not have been designed to mitigate the communication overhead. What you’ll be left with is poor performance, plus a trivial way for people to do “apples vs. apples” benchmarks (e.g. same application on 2 different OSs) that “prove” that the micro-kernel sucks because performance is worse; combined with no sane way to show any advantages, partly because the advantages can’t be benchmarked and compared easily (how can you benchmark security?), and partly because ported software won’t take advantage of any of the OS’s advantages itself.
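To make the overhead concrete, here’s a rough sketch of the same read operation under both designs (the names – ipc_call, fs_msg, fs_driver_read, FS_SERVER – are invented for illustration and are not any real OS’s API):
`
#include <stddef.h>
#include <string.h>

/* Hypothetical message format and IPC primitive for the sketch. */
struct fs_msg { int op, fd; size_t len; char payload[4096]; };
extern void ipc_call(int server, struct fs_msg *m);
extern size_t fs_driver_read(int fd, void *buf, size_t len);
#define FS_SERVER 1
#define OP_READ   2

/* Monolithic: the FS driver is linked into the kernel, so a read
   is one ordinary function call. */
size_t read_monolithic(int fd, void *buf, size_t len)
{
    return fs_driver_read(fd, buf, len);
}

/* Micro-kernel: the FS driver is a separate server, so every read
   is a message out, an address-space switch, a message back, and
   an extra copy. */
size_t read_microkernel(int fd, void *buf, size_t len)
{
    struct fs_msg m = { OP_READ, fd, len, {0} };
    ipc_call(FS_SERVER, &m);
    memcpy(buf, m.payload, m.len);
    return m.len;
}
`
Software ported from a monolithic OS assumes the first cost model and happily makes thousands of tiny calls; under the second model, each of those calls pays the round trip.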
Essentially; for micro-kernels “The OS Developer’s Dilemma” becomes a choice between being extremely screwed and being screwed. That is where most micro-kernels fail – they choose to avoid being screwed (the need to write a lot of applications, etc) and instead choose to be extremely screwed (and port applications and support APIs designed for *nix that are inappropriate for a micro-kernel); and the end result is that they are doomed before they start.
– Brendan
This was the weirdest thing I’ve read today.
A micro-kernel has far fewer lines of code, so you can reason about it mathematically. You cannot do that with a large kernel. That is the point of the L4 kernels: you cannot prove things unless it is a micro-kernel, because you need to reduce the complexity. So you are wrong. A micro-kernel is simpler, because it has fewer lines of code. The more code, the more complexity.
Hi,
A micro-kernel on its own is far less code than a monolithic kernel; but a micro-kernel on its own is also completely useless.
A micro-kernel plus all the drivers, file systems, services, etc. that are necessary to make it actually useful adds up to more code (and more complexity) than a monolithic kernel with equivalent functionality.
– Brendan
Brendan,
This is a very interesting discussion; I like the way you lay out “The OS Developer’s Dilemma”. I definitely agree: a new general purpose OS isn’t going to be terribly useful if it doesn’t support existing software APIs, but once it does those APIs will get used almost exclusively and all the other native features that make the OS unique will go largely unused regardless of merit.
Can I just ask why you assert that a microkernel involves more code/complexity? I’m not claiming it’s right/wrong, I’m just having trouble seeing why it would be intrinsically true. Obviously microkernel calls will have to traverse the kernel barrier, but it doesn’t seem self-evident to me that this in and of itself increases the programmer’s code/complexity compared to macrokernel calls that don’t traverse the barrier.
Clearly, for performance reasons, a micro-kernel favors message passing over numerous function calls, which has an impact on complexity. But that’s not necessarily exclusive to the kernel type: conceivably one could develop kernel module APIs that work under both macro- and micro-kernels with nothing more than a recompile, in which case they’d be of equal complexity. So in this way complexity isn’t dictated so much by macro/micro kernel as by the design of the API.
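For instance, a hypothetical module API could hide the transport behind a compile-time switch (all names here are invented for illustration):
`
#include <stddef.h>

struct blk_request { unsigned long sector; void *buf; size_t len; };

#ifdef MICROKERNEL
/* Out-of-kernel build: the call is marshalled into a message. */
extern int ipc_call(int server, int op, void *arg, size_t arg_len);
#define BLK_SERVER  3
#define BLK_OP_READ 1
static int blk_read(struct blk_request *req)
{
    return ipc_call(BLK_SERVER, BLK_OP_READ, req, sizeof *req);
}
#else
/* In-kernel build: the call is a direct jump into the driver. */
extern int blk_driver_read(struct blk_request *req);
static int blk_read(struct blk_request *req)
{
    return blk_driver_read(req);
}
#endif

/* Everything written against blk_read() is identical either way. */
int read_first_sector(void *buf)
{
    struct blk_request req = { 0, buf, 512 };
    return blk_read(&req);
}
`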
What do you consider kernels where the isolation is enforced by managed-language semantics rather than CPU memory barriers (such as Singularity or JX)? Is that a micro-kernel, a macro-kernel, or something completely different? It’s an interesting question because such a system has all the isolation properties of a micro-kernel yet runs in a single address space with no CPU transitions.
Hi,
As far as I’m concerned, for most OSs there’s some sort of isolation between “user-space things” and “kernel”, and that could be hardware isolation (e.g. CPU’s MMU) or software isolation (e.g. managed language) or some mixture of both. The difference between monolithic and micro-kernel is which pieces are isolated (e.g. if drivers are isolated from kernel or not) and has nothing to do with how they’re isolated (MMU, managed language, …).
– Brendan
To follow up on Alfman’s question and your responses: how is any of what you described different from the current crop of modular kernels? True monolithic kernels no longer exist; all the large projects have moved to a modular design, where the kernel loads code at runtime.
The second problem you describe also exists when a modular kernel loads a module, except that there the code will be running in kernel-space, as opposed to user-space. How is the micro-kernel approach more complex than the modular-kernel approach?
The boot problem you describe also exists on modular kernels. As a matter of fact, the Linux kernel has shipped with initramfs support for years exactly so it can work around this problem.
As for the other two problems you mention, I think you have valid points. However, I would argue that the problems you describe do not add complexity to the individual pieces.
Since micro-kernels are separated into smaller pieces, each one of those pieces is independently simpler. As an example, take a look at the DragonFlyBSD kernel and what they’ve done in order to better support SMP (https://www.dragonflybsd.org/docs/developer/Locking_and_Synchronizat…). How is all that mess less complex? As another example, the Linux kernel, until recently, had the big kernel lock (https://kernelnewbies.org/BigKernelLock).
I just don’t think the issues you listed are any better in modular kernels. Maybe different, but not better.
What about context switches? From what I’ve heard, they used to be the main thing hurting micro-kernel performance, because of the high cost of message passing between kernel mode and user mode. I know that a modern CPU does lots of tricks to mitigate the cost of a context switch, but most of those optimizations are for things that don’t cross address spaces, and micro-kernels, unfortunately, cross them constantly.
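A crude way to get a feel for the cost is to time forced round trips between two user-space processes over pipes. It’s nothing like a tuned micro-kernel IPC path, but the pattern (tiny message, switch address space, switch back) is the one in question:
`
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int up[2], down[2];
    char byte = 0;
    pipe(up);
    pipe(down);
    if (fork() == 0) {                   /* "server" process */
        close(up[1]);
        close(down[0]);
        while (read(up[0], &byte, 1) == 1)
            write(down[1], &byte, 1);    /* echo each byte back */
        _exit(0);
    }
    enum { N = 100000 };
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {        /* N forced round trips */
        write(up[1], &byte, 1);
        read(down[0], &byte, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (t1.tv_nsec - t0.tv_nsec);
    printf("%.0f ns per round trip\n", ns / N);
    close(up[1]);                        /* lets the child exit */
    return 0;
}
`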
As far as I understand it, context switch overhead was only a real issue for first-generation micro-kernels, like Mach. Newer micro-kernels have negligible IPC costs.
Paragraph 3 of this section explains a little about what they do now:
https://en.wikipedia.org/wiki/Microkernel#Inter-process_communicatio…
Hi,
How about if I chop your arms and legs off, then isolate them by mailing each one to a different country, then build an international system of pumps and control logic to keep them working while they’re separated from your body, and then tell you it’s simpler like this (because if I ignore everything except one leg, that leg seems simpler than an entire “monolithic human”)?
For Linux SMP and locking, they added it to every piece: the memory management, the scheduler, the VFS, the file systems, the entire network stack, and 500+ drivers. You’re comparing that work to adding SMP and locking to a micro-kernel alone (and not the VFS, not the file systems, not the entire network stack, and not 500+ drivers), and then saying “doing only a tiny fraction of the work is easier”.
– Brendan
As a quick example of why modular kernels are not at all the same as microkernels: in kernel-space, something to the effect of this is possible:
`
for (unsigned long i = 0; i < 0xFFFFFFFFul; i++)
    ((unsigned char *)NULL)[i] = 0u;
`
You could load code that does something similar to this in any kernel module. If you don’t know C, that simply sets all memory for the first four gigabytes to zero.
A microkernel would issue a segfault to the module, and then either the system would gracefully shut down or, if the module was not essential (say an FS driver, a networking or audio stack component, etc.), it would be restarted.
Either way, there is no recovering from something equivalent to those two lines of code running in kernel space.
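You can play with the same idea in user space, where every process already gets the isolation a microkernel would give a driver. A sketch using plain POSIX calls (the “driver” here is just a deliberately crashing function):
`
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void faulty_driver(void)
{
    *(volatile unsigned char *)0 = 0;   /* wild write: SIGSEGV */
}

int main(void)
{
    for (int attempt = 1; attempt <= 3; attempt++) {
        pid_t pid = fork();
        if (pid == 0) {                 /* "driver" in its own space */
            faulty_driver();
            _exit(EXIT_SUCCESS);
        }
        int status;
        waitpid(pid, &status, 0);
        if (!WIFSIGNALED(status))
            return 0;                   /* driver exited cleanly */
        fprintf(stderr, "driver died (signal %d), restart %d\n",
                WTERMSIG(status), attempt);
    }
    fprintf(stderr, "giving up on the driver\n");
    return 1;
}
`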
FlyingJester,
You’ve got an off by one bug there. The loop only sets four gigabytes minus a byte because it terminates before the byte at 0xffffffff is set.
Just being pedantic
(I often resent that C/C++ don’t support numeric overflow flags.)
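(Though for what it’s worth, GCC and Clang do expose checked arithmetic as builtins these days; a minimal example:)
`
#include <stdio.h>

int main(void)
{
    unsigned int sum;
    /* Returns true if the mathematically correct result
       doesn't fit in 'sum'. */
    if (__builtin_add_overflow(4000000000u, 400000000u, &sum))
        puts("overflow detected");
    else
        printf("sum = %u\n", sum);
    return 0;
}
`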
Hi,
Yes; and a micro-kernel can be far easier to debug because of this.
More fun would be a dodgy/uninitialised pointer in code that is only executed occasionally. For a monolithic kernel, it could take weeks just to figure out which module is actually causing the problem (while everyone blames other things, e.g. the things that get their data mangled by the bug). For a micro-kernel, at a minimum you’d know exactly which module to blame, and there’s a much higher chance that you’ll get a nice page fault telling you which instruction is writing where it shouldn’t.
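Something like this hypothetical fragment, buried in one module among hundreds:
`
#include <stddef.h>

struct packet { unsigned char *data; size_t len; };

/* Only runs on a rare error path; 'dst' is never assigned, so the
   copy lands wherever the stack garbage happens to point. */
void salvage_packet(struct packet *p)
{
    unsigned char *dst;                 /* oops: uninitialised */
    for (size_t i = 0; i < p->len; i++)
        dst[i] = p->data[i];            /* silent corruption in a
                                           monolithic kernel; a page
                                           fault in the module's own
                                           address space under a
                                           micro-kernel */
}
`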
– Brendan
Brendan,
I’m thrilled we have a technical discussion going, you have no idea.
The interesting thing is that with the managed-language solution, there’s no overhead for the isolation. It’s a free by-product of how managed languages work.
Hi,
That’s a myth.
What you actually end up with is “overhead of hardware checks” (e.g. TLB misses) vs. “overhead of managed-language checks plus the overhead of hardware checks” (e.g. checks that couldn’t be omitted by the compiler, plus TLB misses caused by long mode requiring paging and by the total working set not fitting in caches/TLBs, forcing “least recently used” TLB eviction).
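For example, this is roughly what a managed runtime has to emit for a plain array store whenever it can’t prove the index is in range (a hypothetical lowering, sketched in C; runtime_trap is made up):
`
#include <stddef.h>

extern void runtime_trap(void);         /* made-up trap for the sketch */

/* What 'arr[idx] = val' costs when the bounds check can't be omitted. */
void checked_store(int *arr, size_t len, size_t idx, int val)
{
    if (idx >= len)                     /* the check the compiler keeps */
        runtime_trap();
    arr[idx] = val;
}
`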
The other thing you end up with is an extremely complicated optimising compiler (more complicated than a normal compiler without that additional responsibility) that must be perfect to guarantee the system is secure, yet must also be expected to have several thousand bugs (e.g. one bug per 1,000 lines of code over several million lines of compiler code).
The other thing you end up with is hardware issues (temporary glitches, CPU errata, things like rowhammer attacks, etc) where even if your compiler is “impossibly perfect” you still have no guarantee of security because there’s nothing to shield you from “modified after proven secure”.
Mostly (at least in my opinion), “managed” is a researcher’s fantasy that just makes things worse in practice.
– Brendan
Brendan,
Every single one of these criticisms can impact a regular kernel like Linux in the same way. GCC’s optimizing compiler is very complex and can have bugs, CPU errata can affect any kernel, and TLB misses happen regardless of kernel. Temporary hardware glitches (injected deliberately or accidentally) can break any kernel.
So while I’ll acknowledge that all of these things can be potentially buggy, I definitely don’t think it’s fair to portray them as exclusive to Singularity. And if you acknowledge this, then I’m not sure what it is we’re supposed to disagree on.
Even allowing for occasional bugs in the compiler toolchain, I think managed languages are much safer on average, because they don’t silently ignore problems the way C does. Numeric and buffer overflow bugs should have been exterminated decades ago, yet they still exist because we keep using C.
http://techreport.com/news/29627/serious-bug-in-linux-kernel-allows…
First you said that a micro-kernel is more complex than a monolithic kernel, which is not really true. But what you actually meant was that the micro-kernel plus all its subsystems is more complex than the monolithic ditto?
Well, that is another discussion, and one I don’t really know too much about. But OTOH, if you have a micro-kernel, you cleanly separate the kernel from the subsystems. That should mean you can examine each subsystem locally, without having to consider the others. In other words, you reduce complexity: you look at one simple system at a time, instead of one large system where many subsystems interact.
For instance, you could mathematically reason about one subsystem at a time, because they are encapsulated. This is not possible with a monolithic kernel, where there might be side effects in other subsystems.
So I really doubt that a micro-kernel with all its subsystems is more complex than a monolithic ditto. To me this is a weird statement. If you have independent, encapsulated subsystems, then you reduce complexity through divide-and-conquer. You need to provide some links, or a better explanation of why independent, separated subsystems are more complex than a monolithic ditto.
UPDATE: Now I see what you mean:
“…I think that’s where you’re going wrong. You’re looking at the complexity of one little piece and comparing it to the complexity of a whole system; and you’re not comparing the complexity of “all pieces plus the communication necessary to form a whole system” to a whole system….”
And this could be true. I see your point now. Ok.
Kebabbert,
Yeah, it’s the amount of aggregate work for all the pieces when all is said and done. The micro-kernel’s pieces are clearly separate execution units that enforce a strict API to intercommunicate, and this API might add complexity for developers, but I’m still on the fence about that. The modular kernel will also require its own API(s) (and in the case of Linux, an unstable one), so the question is whether the micro-kernel API has to be more complex than the modular-kernel API; I haven’t seen enough evidence to convince me that it is.
A pure monolithic kernel might theoretically be built without APIs at all, with zero encapsulation and every piece of code modifying shared state as it needs to, but this is the antithesis of the object-oriented programming principles designed to simplify software. This is why it can make sense to use an API with strong encapsulation even in a monolithic kernel, in order to reduce complexity. A case could be made that well-designed APIs actually reduce complexity even if they add code. And in a micro-kernel, if I add a bit of code to the kernel proper and it ends up simplifying thousands of drivers outside the kernel, I would consider that a net reduction in complexity.
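To illustrate with a made-up example: one carefully written, validated helper in the kernel proper can replace hand-rolled checking in every driver that copies data in from callers (both names here are invented):
`
#include <stddef.h>

/* A single kernel-side helper that validates the source range once. */
extern int copy_in_checked(void *dst, const void *src, size_t len);

/* Every driver's input path then shrinks to something like this: */
int driver_set_config(const void *user_ptr, size_t len)
{
    char config[64];
    if (len > sizeof config)
        return -1;                      /* reject oversized input */
    return copy_in_checked(config, user_ptr, len);
}
`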
For me it’s hard to come to a 100% solid general conclusion on this topic. Complexity is just one factor that has to be balanced against security, robustness, efficiency, etc. This debate is interesting because the usual criticism of microkernels is the low performance of synchronous calls, but I think it’s the first time we’ve discussed complexity on its own. Good discussion.