Cisco has made public the details of its new super-high-end router, the HFR. It is using a new OS, IOS XR, which is based on the QNX Neutrino microkernel.
This rocks. QNX partnering with such a high profile company can only be a Good Thing.
Sweet…
NT, OSX, JunOS, and now IOS are on microkernels (or hybrids). Anyone else notice a trend here?
Neither NT nor OS X is a microkernel. NT originally claimed to be one, but never was (things like filesystems and drivers were in kernel space from day one). After large parts of userspace were moved into kernel space between NT4 and Windows XP (like the GDI), NT became even less of a microkernel than Linux. OS X was never a microkernel either: one of the biggest changes between NeXTSTEP and OS X was moving the BSD layer into kernel space, and switching the Mach -> BSD interface from messaging to procedure calls.
“NT, OSX, JunOS, and now IOS are on microkernels (or hybrids). Anyone else notice a trend here?”
First of all, NT doesn't follow the microkernel design. There was an earlier attempt at it, and they dropped that claim long ago.
Darwin isn't a microkernel either.
JunOS and IOS aren't mainstream and probably never will be.
All the mainstream OSes have been either monolithic or modular. The microkernel isn't the bulletproof solution people think it is.
IOS is very mainstream, as is QNX. Mainstream != Desktop OS. Mainstream depends on the target market. IOS is *everywhere* in its target market of network hardware OSs. QNX is very common in its target market of embedded systems. VxWorks (also a microkernel) is everywhere in the embedded industry.
What are the advantages of a microkernel? I have tried QNX 6.2.1 and really liked it, apart from not being able to connect to the Internet (due to the lack of a driver for my Ethernet card).
“IOS is very mainstream, as is QNX. Mainstream != Desktop OS. ”
If I had meant that, I wouldn't have included Linux. Maybe microkernels do work better with embedded systems. The important point was that the microkernel wasn't, and hasn't become, a generally better solution.
There are lots of advantages to microkernels (as well as some disadvantages). In the embedded space, microkernels have some great properties. Specifically:
1) Microkernels can have extremely low latency. Since most code is in userspace, there is very little code that could hold a kernel spinlock too long (which prevents the kernel from being preempted). It then becomes possible to verify all the code-paths that hold such locks, and then make some guarantees about maximal latency.
To understand why low latency matters, consider something handling gigabit Ethernet packets. Typically you get 1500-byte packets, which means roughly 80,000 packets (and thus up to 80,000 hardware interrupts) per second. Those packets need to be handled immediately, so having some guarantees about latency is a big help. Such guarantees are also important for lots of embedded systems, where government regulations specify maximal latencies for particular operations.
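A quick back-of-the-envelope check of those numbers (assuming full-size 1500-byte frames and ignoring Ethernet framing overhead such as the preamble and inter-frame gap, so the real rate is slightly lower):

```python
# Packet rate and per-packet time budget on gigabit Ethernet,
# assuming every frame is a full 1500-byte packet.

LINK_BPS = 1_000_000_000      # 1 Gbps
FRAME_BYTES = 1500            # MTU-sized packet

packets_per_sec = LINK_BPS / (FRAME_BYTES * 8)
budget_us = 1_000_000 / packets_per_sec       # microseconds available per packet

print(f"{packets_per_sec:,.0f} packets/s")    # 83,333 packets/s (the "80,000" above)
print(f"{budget_us:.1f} us per packet")       # 12.0 us to fully handle each one
```

So each packet leaves only about 12 microseconds of processing time, which is why bounded worst-case latency matters so much here.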
2) They can be made very reliable, because the only code that can crash the system is the (relatively small) microkernel. A well-coded application can gracefully handle the filesystem server going down, or the graphics driver going down, but can’t do anything about the kernel going down. The smaller the kernel, the less likely that is to happen, and the more possible it is to verify that that doesn’t happen.
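As a sketch of that graceful-handling idea (this is not QNX's actual API; the `FlakyService` class and `robust_call` helper are made up for illustration), a well-coded client can retry across a server restart instead of crashing along with it:

```python
import time

class FlakyService:
    """Stand-in for a restartable userspace server (hypothetical)."""
    def __init__(self, failures_before_recovery):
        self.failures_left = failures_before_recovery

    def request(self, msg):
        if self.failures_left > 0:
            self.failures_left -= 1
            raise ConnectionError("service is down (being restarted)")
        return f"ok: {msg}"

def robust_call(service, msg, retries=5, backoff=0.01):
    """Retry across service restarts instead of dying with the server."""
    for _ in range(retries):
        try:
            return service.request(msg)
        except ConnectionError:
            time.sleep(backoff)   # wait for a supervisor to restart the service
    raise RuntimeError("service never came back")

svc = FlakyService(failures_before_recovery=2)
print(robust_call(svc, "read request"))   # prints "ok: read request"
```

In a real microkernel system the "service" would be a separate process (a filesystem server or driver) reached via message passing; the point is only that the failure is visible to the client as an error it can handle, rather than a kernel panic.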
Gigabit NICs aren't interrupt driven; they're polled.
If I had meant that, I wouldn't have included Linux.
You didn’t.
Maybe microkernels do work better with embedded systems.
Maybe that’s why microkernels are so mainstream?
The important point was that it wasn't and hasn't become a generally better solution.
Your original statement was: junos and ios arent mainstream and probably never will be
IOS is by any definition of the word mainstream. It doesn't make any sense to say that IOS isn't mainstream because it's not generally better, because it wasn't designed to be generally better. It was designed to be a kick-ass network hardware OS. Saying it's not mainstream because nobody uses it on their desktop is like saying Photoshop isn't mainstream because granny doesn't use it to edit her photos.
Anyway, there are 30 million copies of the VxWorks microkernel in the wild. That’s bigger than the OS X userbase, and most people consider OS X to be mainstream.
I understand that in a “pure” microkernel, literally everything (e.g. the file system, drivers) must be separate from the primary kernel and use message passing (inefficient for desktops).
I was, however, under the impression that OS X was a hybrid design, and my Google searches seem to confirm this. Most of the information I have on OS X being a microkernel design is a bit dated; do you have info supporting the claim that OS X is a monolithic operating system?
What is the typical size of a microkernel (on x86)? I have a few Linux kernels in my /boot which weigh up to 1 MB. Thanks.
Some drivers do poll the GigE card, but most do not. Nearly all GigE drivers support generating one interrupt per packet, and the good ones support generating one interrupt per several packets. That's the mode Windows and Linux usually use (if it's available). Sun has a particularly advanced setup that mixes polling and interrupts, but polling alone is not generally a good method, because the polling interval has to be extremely short (100 µs or so).
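The arithmetic behind coalescing is simple: at roughly 83,000 full-size packets per second on gigabit Ethernet, generating one interrupt per several packets divides the interrupt rate accordingly (the per-interrupt counts below are illustrative, not any particular NIC's settings):

```python
# Effect of interrupt coalescing on gigabit Ethernet at full-size frames.
packets_per_sec = 1_000_000_000 // (1500 * 8)      # 83,333 packets/s

for per_irq in (1, 4, 8, 16):
    irq_rate = packets_per_sec // per_irq
    print(f"{per_irq:2d} packets/interrupt -> {irq_rate:,} interrupts/s")
# Goes from 83,333 interrupts/s down to ~5,208 at 16 packets per interrupt.
```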
OS X uses Mach, a microkernel, but not as a microkernel. BSD rides atop Mach in kernel space, and both function as a single unit.
More about microkernels:
http://www.mega-tokyo.com/osfaq2/index.php/Microkernel
A microkernel can be any size. The “micro” in microkernel is misleading; it just means that the bulk of the OS (drivers, networking, etc.) is in userspace.
I had heard that this was under development way back when. It seemed that Cisco was seeing the light: to keep going the way they had, writing IOS largely in microcode, was becoming an unmaintainable mess.
From what I’ve been told, writing microcode is EXTREMELY hard to do, an order of magnitude more difficult than assembly.
@Niice: From Apple’s developer docs:
http://developer.apple.com/documentation/Darwin/Conceptual/KernelPr…
However, in Mac OS X, Mach is linked with other kernel components into a single kernel address space… it is much faster to make a direct call between linked components than it is to send messages…
So everything is in one address space and it's all linked via procedure calls; that makes it a monolithic kernel. The “hybrid” label refers more to the combination of Mach and BSD, but there are no “microkernel” aspects, because Mach isn't used as a microkernel.
@X: VxWorks can go down to 60-80 KB, though TCP/IP + IDE will get it up to ~250 KB.
I have said it once and I will say it again: the microkernel is a clean design, and the real power behind the design will be seen once more and more distributed systems begin to utilize its model.
Excellent choice, Cisco.
Linux 2.6 (and possibly backported to 2.4) supports interrupt mitigation through the New Network API (NAPI). This allows the driver to switch from interrupt-driven to polled mode if data starts to arrive at a rate high enough to make the switch worthwhile.
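A toy model of that switch-over logic (this is not the actual Linux NAPI interface, and the thresholds are arbitrary assumptions), with hysteresis so the driver doesn't flap between modes at a borderline packet rate:

```python
class NapiLikeDriver:
    """Toy model: interrupt-driven when idle, polled when busy, with hysteresis."""
    HIGH_PPS = 20_000   # hypothetical rate above which polling pays off
    LOW_PPS = 2_000     # hypothetical rate below which interrupts are cheaper

    def __init__(self):
        self.mode = "interrupt"

    def observe(self, packets_per_sec):
        # Switch to polling only past the high watermark, and back to
        # interrupts only below the low watermark, to avoid mode flapping.
        if self.mode == "interrupt" and packets_per_sec >= self.HIGH_PPS:
            self.mode = "polled"
        elif self.mode == "polled" and packets_per_sec <= self.LOW_PPS:
            self.mode = "interrupt"
        return self.mode

drv = NapiLikeDriver()
for rate in (500, 50_000, 10_000, 1_000):
    print(rate, "->", drv.observe(rate))
# 500 -> interrupt, 50000 -> polled, 10000 -> polled (hysteresis), 1000 -> interrupt
```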
Just in case someone isn't familiar with the old Cisco IOS (through the 12.x releases): it was a sorry mess. No parallel processes, no threads; everything is essentially the same process, sort of, meaning a bug in OSPF will crash everything. At some point I was working for an ISP and we were working very closely with Cisco developers (as one of the testing grounds for their new features), and they kept telling us that release 12 would be the last ‘monolithic release’: the forthcoming release would be highly structured, with separate processes or threads, so a crash of one component wouldn't affect the rest of IOS. Of course, no details were available at that point. The old IOS was also very hard to maintain, again due to its lack of modularity. It's good they are starting from scratch, at least in their high-end offering. Hopefully this new design will propagate down to mid-range and low-end routers as well.
A sorry mess it is/was indeed. I don’t know how many times my old company had to upgrade the customer edge equipment because of random reboots due to one “Bus error” or another.
Some versions of IOS 12.0 were so fragile they made Windows NT look stable. One feature of IOS that I always hated was that the ROM monitor or mini-IOS would rewrite the config to exclude statements it couldn't process. This meant that if you had to drop into these modes to fix a problem or implement a workaround (a rather frequent occurrence a couple of years ago), you had to redo the router config once you rebooted back into IOS.
>What is the typical size of a microkernel (in x86)?
Depends. I've seen many that are 0.5-1.0 MB.
Remember, though, that your Linux kernel is compressed, so it's actually a lot bigger. And you probably load quite a few modules as well, making the kernel even bigger.
“I understand that in a ‘pure’ microkernel literally everything (e.g. file system, drivers) must be separate from the primary kernel and use message passing (inefficient for desktop).”
It seems to be rather efficient in Amiga OS, which is fully usable as a desktop OS, even on a very slow processor. This may be because there is only one address space, and the messages are passed as pointers.
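Python's in-process queues illustrate the same trick: within a single address space, a message hands over a reference to the object rather than a copy, which is what makes pointer-passing so cheap (this is an analogy, not AmigaOS code):

```python
import queue

mailbox = queue.Queue()
message = bytearray(b"render this frame")   # some larger payload

mailbox.put(message)        # enqueues a reference, not a copy of the bytes
received = mailbox.get()

print(received is message)  # prints True: sender and receiver share one object
```

Crossing address spaces, as a protected-memory microkernel must, means either copying the payload or remapping pages, which is where the cost comes in.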
“NT, OSX, JunOS, and now IOS are on microkernels (or hybrids). Anyone else notice a trend here?”
JunOS being based on FreeBSD, I'm not sure it qualifies as a microkernel, does it?
Here's an interesting page on microkernels:
http://cbbrowne.com/info/microkernel.html
There's also an interesting comment here relating to their performance; it seems a fast microkernel is quite possible, but not in Unix.
http://groups.google.com/groups?selm=hfn7eb-uj4.ln%40cohen.pays…
That's a Huge Funking Router.
Doesn't BeOS run on a microkernel? To date it's the most efficient desktop OS that I've seen.
>>>Doesn’t BeOs run on a microkernel.
BeOS is NOT a microkernel.
Excuse me, why does it matter whether something is a microkernel or not, as long as it works?
As I recall, for QNX 4, the kernel itself was around 8k. Everything else was a loadable module that you could choose to bind in or not. A kernel with a full set of modules for filesystems/networking/etc was in the 200k range.
Because “works” is a slippery term. No one is arguing one is superior to the other. Monolithic and microkernel OSes have different advantages for different applications (meaning uses, not programs). It’s hard to argue that monolithic kernels haven’t dominated the desktop OSes, but there are many microkernel OSes that are vital in embedded and mission-critical systems.
The discussion isn’t about which is “better,” because there is no better; the discussion is more about how OS underpinnings are evolving.
Personally, I would love to see a decent exokernel OS in some form. Yes, the idea is full of problems, but some people are giving it a go:
http://www.pdos.lcs.mit.edu/exo.html
Excuse me, why does it matter whether something is a microkernel or not, as long as it works?
Quite a bit, actually, but like everything, it really depends on your requirements and particular situation. The biggest benefit of microkernels is that they are inherently fault-tolerant: most of what would be part of the kernel in a monolithic system has been broken up into separate processes, each running in its own protected memory space. If one goes down, it's very unlikely that the borked process will negatively affect other components. These individual parts are also easier to debug, as they are smaller, isolated chunks of code.
This is why QNX is so often used for things that just can’t fail. You’d not see Linux, Windows or BSD in some of these (say, medical) devices, because they simply are not designed to be reliable enough to trust somebody’s life to.
Microkernels, however, are quite often slower than monolithic systems, due to the extra context switches taking place.
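One rough, platform-dependent way to feel that overhead: compare a plain function call against a pipe round trip, which crosses into the kernel twice per iteration. A full message pass to another process would add context switches on top of even this (the helper names below are made up for the sketch):

```python
import os
import time

def call_roundtrip_us(n=10_000):
    """Average cost of a plain in-process function call, in microseconds."""
    f = lambda x: x
    t0 = time.perf_counter()
    for _ in range(n):
        f(b"x")
    return (time.perf_counter() - t0) / n * 1e6

def syscall_roundtrip_us(n=10_000):
    """Average cost of entering the kernel twice: write + read on a pipe."""
    r, w = os.pipe()
    t0 = time.perf_counter()
    for _ in range(n):
        os.write(w, b"x")
        os.read(r, 1)
    elapsed = (time.perf_counter() - t0) / n * 1e6
    os.close(r)
    os.close(w)
    return elapsed

print(f"function call:   ~{call_roundtrip_us():.2f} us")
print(f"pipe round trip: ~{syscall_roundtrip_us():.2f} us")
```

On typical hardware the kernel crossings cost one to two orders of magnitude more than the direct call, which is exactly the gap a hybrid design tries to avoid for its hot paths.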
It's likely that a better approach is a hybrid: basically a microkernel (complete with a message-passing system, etc.) that has some performance-critical code built into the kernel. Mac OS X's kernel does this, the Windows NT/2K/XP/2K3 kernels do this, and in their own ways they try to get the best of both worlds.
One would hope that someday Linux too will do this, but we’ll have to see.
The kernel itself doesn't even have a memory manager. The fact that they run in protected space is just a choice they made to get extra performance on x86; they could easily move them over to user space, which would be impossible with a monolithic kernel like Linux.
Since most microkernels and monolithic kernels run on CPU hardware that has little or no support for scheduling or message passing, it's no wonder microkernels get criticized for using those features heavily at their lowest level and having to carry the cost.
Since a number of CPUs are coming out that are threaded in their basic design, usually running 8-32 fixed threads round-robin in hardware with near-zero context-switch cost (like Ubicom), it will be interesting to see what kinds of OS are designed to run on them.
Even more so if the processor can support an arbitrary number of processes with zero or near-zero-cycle context switches and also includes message passing and memory allocation directly in the hardware. It was done once before (the Transputer), but that was two decades ago, it was more in the 20-100-cycle league, and it ran Helios, a Unix-like OS. If these sorts of CPUs become really common again, especially in embedded use, I can see a lot more shift to message-passing microkernels as the more natural way to describe parallel processes.
I work at the hardware/CPU level, not the OS layer, so I don't see things the same way as the OS guys, but it also seems that OS design needs to be much closer to the CPU design stage from the start.
regards
johnjakson_usa_com