GNU Hurd 0.7 and GNU Mach 1.6 have been released.
The GNU Hurd is the GNU project’s replacement for the Unix kernel. It is a collection of servers that run on the Mach microkernel to implement file systems, network protocols, file access control, and other features that are implemented by the Unix kernel or similar kernels (such as Linux).
Since day one of the GNU project, Hurd was supposed to be its kernel – as we all know, of course, it turned out Linux provided a far better kernel with a much faster pace of development, and it’s been used as the de-facto GNU kernel ever since. Those with an appreciation for history will love the lingering, mildly dismissive tone of “…such as Linux”.
“It turned out Linux provided a far better kernel.” I wouldn’t sign off on that. Linux was not the technically better approach, but a far more pragmatic one. It was worse in the technical sense that it kept the old Unix approach of running the kernel, file systems, and network protocols all in ring 0, but at the same time better in compatibility and performance, for the same reason.
Better is only better if it ships. Better is only better if people use it.
Linux user here, but I’m not going to defend it here. To this day, Hurd has not decided what it is going to be.
Uncompromising sadly works both ways.
Yep. It’s true that Hurd has a more elegant design which in theory makes it superior to Linux.
But well… one of these is a design which has proven itself on many millions of servers, desktops, tablets and phones around the world. The other is an academic experiment that doesn’t even boot on most modern hardware…
1987: Minix (https://en.wikipedia.org/wiki/MINIX)
1991: Linux (https://en.wikipedia.org/wiki/Linux)
1992: Hurd (https://en.wikipedia.org/wiki/GNU_Hurd)
Now, who predates whom? While Linus took the pragmatic route given the level of hardware available at the time, Minix is still superior on paper, and with current CPUs (multiple cores, PMMU, …) it should really shine, provided the drivers were ready.
The Hurd kernel is very useful, well, to computer science students. What users need is a usable kernel, not a well-designed but unusable one. I’d go for ReactOS any time over a touted better-designed kernel that is unusable.
The problem is, very few userspace apps will run on a theoretical kernel. Linux has always been the better kernel in reality.
No, Linux was far better. My money isn’t safer if I transform it into a form that can’t be used as money (like setting it on fire, or buying floose). My operating system isn’t better if it never actually does any of the things an OS should do (which Hurd-based operating systems couldn’t do until recently).
Anyone who disputes the above doesn’t know the history of it. GNU Hurd wasn’t ready in 1989, 1991, 1996, 2000, or 2006, and maybe became workable in 2008.
You can’t say something is better because of an unprovable theory and some half-working code. The Hurd went from Mach to L4 to Coyotos to Viengoos and back to Mach, and it wasn’t stable until after it got back to Mach. I wasn’t just a casual observer; in the early 2000s I went through the various machinations to try and get some of those crazy things to work.
Linux ran with a known, understood approach, whereas Hurd was relatively exploratory. So, with hindsight, Linux was always going to move faster, and perhaps the more curious comparison is between Linux and BSD.
Success often seems to be more a reflection of how rapidly a project reaches a critical mass of mindshare than of any technological merit; VHS versus Betamax, for example.
I am glad the Hurd is still moving forward, and given that both Linux and OS X crash on me from time to time, I look forward to that reliability and resilience whenever it happens!
Keep in mind that the Linux/BSD balance worked out as much for legal reasons as for technical ones: The CSRG started removing AT&T code from BSD in 1988, and considered that to be basically finished by NET-2 in 1991. Still, USL sued BSDi in 1992, and up to their settlement and the release of 4.4BSD-Lite in 1994 the legal state of the BSD codebase was unclear.
During that period, Linux appeared and gained a lively developer base. Some of that was probably from it being an easily accessible new fun thing, but I suspect the balance would have been different if 386BSD had been a viable choice a few years earlier.
Is that the entire Changelog? Have I missed something?
Linux’s monolithic kernel was ‘uglier’ but a more practical approach: easier to develop and more performant. Microkernels impose overhead both in development (sorting out communication) and in performance (since various services run as ‘independent’ processes, some form of IPC is required). Early-1990s PCs with small memories and slow single-core, single-threaded CPUs would struggle with that. Moreover, while microkernels contain low-level failures (thanks to separated address spaces), the system may still fail at a higher level. For example, a non-recoverable crash of one service may trigger a cascade of failures in dependent services.
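As a rough illustration (a minimal sketch assuming POSIX fork/socketpair; this is not Mach IPC, just a toy stand-in), compare calling a “service” directly in-process with doing a request/reply round trip to the same service running in a separate process:

```c
/* Toy comparison: direct function call vs. round trip to a "server"
 * process over a socketpair. Illustrative only -- not Mach IPC. */
#include <stdio.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

static uint64_t now_us(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000000u + tv.tv_usec;
}

/* The "service": stands in for, say, a file-system or network operation. */
static int service(int x) { return x + 1; }

int main(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) return 1;

    if (fork() == 0) {               /* child = user-space "server" */
        int req;
        while (read(sv[1], &req, sizeof req) == sizeof req) {
            int rep = service(req);
            write(sv[1], &rep, sizeof rep);
        }
        _exit(0);
    }

    enum { N = 100000 };
    int x = 0;

    uint64_t t0 = now_us();
    for (int i = 0; i < N; i++)      /* monolithic style: direct call */
        x = service(x);
    uint64_t t1 = now_us();

    for (int i = 0; i < N; i++) {    /* microkernel style: IPC round trip */
        write(sv[0], &x, sizeof x);
        read(sv[0], &x, sizeof x);
    }
    uint64_t t2 = now_us();

    printf("direct calls: %llu us, IPC round trips: %llu us\n",
           (unsigned long long)(t1 - t0), (unsigned long long)(t2 - t1));
    close(sv[0]);                    /* child's read() sees EOF and exits */
    return 0;
}
```

Real microkernels use far cheaper IPC primitives than Unix sockets, but the basic trade-off (a context switch and a copy per request, instead of a plain call) is the same, and that is exactly what hurt on early-1990s hardware.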
Basically, most of the time it doesn’t matter whether the file system crash is in the kernel or in the fs module, if the file system is unusable either way.
The operating system may still “work” for anything that doesn’t need the file system. Maybe the file system can be restarted by some sort of pid 1 init monitor like systemd?
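Something along those lines is easy to picture; here is a minimal sketch of such a monitor, assuming a POSIX system and a made-up server binary (SERVER_PATH is hypothetical, not an actual Hurd translator or systemd behaviour):

```c
/* Toy "restart the crashed server" monitor, in the spirit of a pid-1
 * style supervisor. SERVER_PATH is a placeholder for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define SERVER_PATH "/usr/sbin/example-fs-server"   /* hypothetical binary */

int main(void) {
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            execl(SERVER_PATH, SERVER_PATH, (char *)NULL);
            perror("execl");          /* only reached if exec fails */
            _exit(127);
        }

        int status;
        waitpid(pid, &status, 0);     /* block until the server dies */

        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            break;                    /* clean shutdown: stop supervising */

        fprintf(stderr, "server died (status %d), restarting...\n", status);
        sleep(1);                     /* simple backoff before restarting */
    }
    return 0;
}
```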
A microkernel can really make a difference when it comes to crashing various components of the OS. For example, if it is a non-root file system that crashes, the rest of the filesystem may still be okay. Or some microkernels can have per-user non-global filesystem namespaces. A crash in one namespace will not affect other namespaces served by different instances of the file system server(s).
On the other hand, have you ever experienced a kernel panic in an IPv6 networking stack while working on your text document? I have. Or have you ever experienced a kernel panic while trying to connect to your smartphone via Bluetooth? I have. In both cases with a supposedly stable and reasonably up-to-date mainstream Linux distro. These are both examples in which a microkernel would beat Linux. That’s the whole main point of having a microkernel: to have non-essential parts of the system (file systems, networking, Bluetooth, USB, you name it) out of the kernel and isolated from each other.
Not really. I haven’t had a kernel panic in a long, long time (maybe 5 years?). Now Bluetooth screws up occasionally, true. But I’ve never had it screw up my entire system.
You have been lucky so far 🙂
Maybe lucky. I also source my components based on recommendations of the community. So I never get anything bleeding edge and always use drivers integrated into the upstream kernel.
I realize that’s very different from someone who just takes whatever hardware is lying around, or grabs the newest hardware and throws Linux on it with drivers that aren’t upstream.
I kind of wonder whether it’s better to have this pressure to upstream for future compatibility, or the microkernel methodology of just restarting crashed services and hoping everything will be cool.
I am not a big fan of restarting crashed components. The fear is the bug will hit again. But smaller components may also be easier to study and perhaps even formally verify. Even without restarting, I can see the benefit in better damage control. You lose a failing server process? Probably not a big deal, depending on what server crashes. You may still be able to save your work, file a bug report and try to fix the bug or upgrade to a newer version. Decomposing the system into components may also have positive effects on its overall structure in that explicit interfaces (IPC) and process isolation make it harder to interact with a component in an undesired, unsupported or undocumented way.
For development work, a microkernel design with driver isolation is convenient and safer, although plenty of crashes can be avoided in monolithic kernels with appropriate debug code.
For users, the kernel code should be stable whether it is a microkernel or a monolithic kernel, which actually takes away the reason for the isolated microkernel design.
So for users, what’s left is the management of queues between processes in microkernels, which takes a bit more time than in monolithic kernels.
But this “should” is not aligned with reality. As I have written elsewhere in this discussion, the monolithic kernel does and will continue to crash (IPv6, USB, Bluetooth, whatever less-tested corner of the kernel). The Linux kernel currently has around 18,000,000 lines of code. Even with a very conservative estimate of 10 kernel bugs per 1,000 lines of code, that is well over 100,000 kernel bugs. Surely, not all of those are exploitable and not all of those will result in a kernel panic, but some of them definitely will.
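Spelling those numbers out (18 million lines at 10 bugs per 1,000 lines):

$$18{,}000{,}000 \;\text{lines} \times \frac{10\;\text{bugs}}{1{,}000\;\text{lines}} = 180{,}000 \;\text{potential bugs}$$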
“as we all know, of course, it turned out Linux provided a far better kernel with a much faster pace of development, and it’s been used as the de-facto GNU kernel ever since.”
I think this kind of comment belongs on TheVerge, not on OSnews. It sounds a bit too much like “AHAH THOSE LOSERZ”.
I give them credit for sticking to their guns and still developing it.
<3
It is a hobby project that is going nowhere.
So?
Your avatar is so perfect with this post.
Well, for a couple of years there wasn’t anyone really seriously working on it, back when they were just investigating the right microkernel to use. But it’s always kinda been there, with much more activity since they went back to Mach and Debian GNU/Hurd started.
Even in the field of microkernels, Hurd is pointless. The L4 family is more advanced, with (relative) high IPC performance and (absolute) high security (seL4).
The point of HURD is that it will be the best kernel ever. Sort of. GNU is not interested in competing with Linux or other kernels; there is no point in releasing a kernel with characteristics similar to Linux. No, GNU wants HURD to be totally superior, in all aspects. And that is a goal worthy of pursuit, right? For instance, you can hot-swap drivers, etc. etc.
http://www.informit.com/articles/printerfriendly/1180992
Sounds like the Wankel rotary engine. It was great in theory but abysmal in practice. It even bankrupted the company (NSU) that designed it.
Sounds like a really bad analogy
It is a very good analogy. The Wankel engine was considered an extraordinary design when it appeared in 1957, and it was licensed by dozens of vehicle and aircraft manufacturers. Five decades and billions of R&D dollars later it has been almost entirely abandoned because it is thirsty, highly polluting and has a very limited life. [However, it is great for racing engines, where fuel economy, emissions and lifespan are largely irrelevant.]
They wasted billions on the rotary engine? I had no idea.
The Wankel engine has some advantages over the Otto engine as long as fuel consumption is not a high priority.
So, when can we use GNU/Hurd?
Otherwise, I hope there are more microkernel OSes in the near future.
There will be.
Now. https://en.wikipedia.org/wiki/Debian_GNU/Hurd
It’s obviously not very stable. But it can be played with.
It is a pity they do not use IOKit and the userland driver tools. They could run IOKit in user space as an app and connect the userland driver utilities to it through Mach, just like normal applications (…with patches, of course).
A lot of people keep repeating the myth that microkernels inherently have major performance issues. It’s basically only Mach-like kernels that suffer from them (for several reasons). Minimalist microkernels like QNX (which actually predates Mach by a few years) and L4 have relatively little overhead. For that matter, Xen could be considered a microkernel (albeit a somewhat atypical one, highly specialized for virtualization). It seems like a lot of people assume all microkernels are like Mach when nothing could be further from the truth.
I wonder if things would have turned out differently had QNX been a research project at a big-name university rather than a somewhat obscure commercial OS. In that alternate history, Mach might not even have been developed and microkernels might have become the dominant kernel architecture.
I need your educated guess about this. Can the Mach microkernel function decently in the near future?
Future? There have been quite a few Mach-based systems in production in the past 3 decades.
Doesn’t Mac OS use the Mach kernel? How is its performance?
Is it just me, or has Genode gone ahead and done what GNU/Hurd meant to do? What’s HURD doing that Genode isn’t?
GNU
Very insightful; I’ve thought the same thing in the past. It’s very similar to the original idea behind Hurd.