“Since early 2009 NetBSD and rump has supported execution of stock kernel module binaries in userspace on x86 architectures. Starting in -current as of today, kernel modules will automatically be loaded from the host into the rump kernel. For example, when mounting a file system in a rump kernel, support will be automatically loaded before mounting is attempted.”
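As an illustration of what the announcement describes, on NetBSD this might look something like the following session (a sketch: the image path is made up, and it assumes NetBSD's rump_ffs(8), the puffs-backed userspace FFS server, so the file system code runs in a rump kernel inside the process rather than in the host kernel):

```
# Mount an FFS image with the file system code running in userspace.
# With the module autoloading described above, the rump kernel pulls
# the ffs module from the host's stock kernel modules automatically.
rump_ffs /path/to/ffs.img /mnt

ls /mnt        # requests are served by the userspace rump kernel
umount /mnt    # tears down the rump file server
```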
I hope someday all the drivers run and driver development is done in user-space (even if it's slower).
I agree that this is great news for NetBSD, but I do believe that there are items that should be left in the kernel as well. Now it’s time to install the world’s most portable operating system on a Sun Netra T1.
I heard that OpenBSD is doing much better than NetBSD on hardware using SPARC processors. But don’t ask me for details; I didn’t dig into this deeply enough.
One of the reasons AFAIR was that Theo de Raadt is fond of SPARC.
Everyone should be fond of SPARC, not just because I say so of course, but because SPARC is a standardized design (IEEE) with no licensing required… SPARC is what Loongson claims to be, and I’m sorry to say Richard Stallman was duped when he got that Loongson netbook, as its hardware isn’t open at all; in fact it’s all extremely proprietary.
SPARC has inroads in space computing (high-radiation environments)… mobile (S1 core) and desktop/server/workstation with the T1 and T2 processors… if only companies would realise the advantage of using a design people can’t sue you for using, versus ARM or MIPS, which require expensive licensing.
Interesting what NetBSD is doing here; I wonder if it works on SPARC32? This sounds a lot like exokernel ideas being implemented (optimised userspace libraries for usually-kernel tasks)… there was a comparison with MIT’s exokernel work that showed mind-boggling performance increases when libraries were optimised to the workload.
Loongson is MIPS-based, not SPARC-based.
“SPARC is what loongson claims to be” …
He wasn’t saying that it was MIPS at all. Read his comment properly.
Well, actually I fully agree – being an owner of a Sun Ultra 10.
But ARM’s new processors look interesting as well.
In what way does Loongson “claim to be” SPARC? From what I read the instruction set is clearly MIPS-based.
I didn’t think it was all that proprietary; the OpenBSD guys have a port already, and they usually don’t have much fondness for closed hardware. Reading some of the devs’ posts, it seems that the problems are that documentation is sparse (at least English documentation), and that certain aspects of the implementation are buggy.
All that said, for the $500 it costs to buy one in the US, I’ll take a cheap i386 laptop.
I think he’s trying to say that SPARC truly is the open platform that Loongson claims to be. Clearly cb88 thinks Loongson’s claim to be open is false.
The whole “rump” (what a name…) deal is really interesting.
I would like to see it being used even more. Push many more things to user space; that’s the way to go (unlike in Linux, where the kernel absorbs more and more user-space tasks). Even traditional Unix kernels can benefit this way from a kind of L4 approach.
Linux does absorb a lot, but it’s funny to see someone complaining about that rather than the far more common “Why are udev and dmix not in the kernel?” complaints.
That’s what I was thinking. You can’t please everyone.
But, what has the kernel absorbed lately?
Ingo Molnar was involved in a long flamewar on the KVM mailing list because he wanted QEMU pulled into the kernel.
KVM did make it into the kernel, but that’s not pointless kernel bloat; virtualization legitimately requires bits of code performing low-level hardware manipulation in a time-sensitive fashion, and needs bits in the kernel to achieve decent performance.
And, that’s one. Keep going.
KVM == kernel-based virtual machine. It would be pretty silly to have that outside of the kernel.
Ingo wanted the whole of the QEMU userspace split in two – the KVM bits and everything else – and the KVM bits of QEMU pulled into the kernel. In effect, forking QEMU.
Thankfully, the KVM guys were able to convince him of just how wrong that would be.
I’m not entirely sure I’m following you here. What of QEmu did he want pulled in, that wasn’t already covered by KVM?
(If you don’t wanna type a long, dry answer, you can just give me a link to the thread and let me read it myself.)
Search the archives for the thread: [RFC] Unify KVM kernel-space and user-space code into a single project
If I’m reading that right, he didn’t want QEMU integrated into the kernel itself, he wanted a minimal version of QEMU maintained in the kernel’s repo (as part of the project, not as part of the resultant binary kernel image). He wanted something like a minimal reference front-end for KVM.
His motives in doing so, based on the little bit of that thread that I’ve read, are at least reasonable, if not compelling. And it’s not really a flame war, either (again, judging from just the little bit of that thread that I’ve read).
If I remember correctly, he wanted to put the QEMU code in the kernel source tree, not to run the whole of QEMU in kernel space. Hardly the same thing.
Linux isn’t absorbing more and more into the kernel. Things continue to move out of the kernel, or remain out of the kernel.
I’m guessing you are complaining about KMS. Well, hardware resource allocation and management SHOULD be in the kernel and should have never been in userspace.
Was not previously aware of rump. Very cool. I wonder what the disadvantages of this kind of setup are.
* Duplication of code in memory?
* Slower execution speed when going through rump?
* SMP performance hit, maybe?
It’s like a mutant between micro- and macrokernels. You get the macrokernel benefit of easy kernel-aware development with some of the isolation of microkernels.
I’d really like to see some (rigorous!) performance measures. It sounds technologically nifty, but I’d also expect a significant performance hit (from wandering back out of the kernel and into userspace again, and then probably at some point crossing the kernel/userspace boundary the other way several times while the driver operates). I’d really love to know what the real stability gain vs. performance penalty trade-off that we’re talking about is.
Me too. I am not sure where they are heading with this, but on the other hand, many things are not performance-critical. Anyone know if they are using this by default for… I don’t know, say FAT file system or USB sticks or something like that?
Really interesting… I’d love to see this kind of innovation in the (boring) Linux world.
BSDs are like Linux 10 years ago… hot and sexy.