As stated in the previous OSNews article, our project, Yamit, is trying to address some shortcomings of Mach that are mostly the result of insufficient tuning (sometimes simply because the code dates from the 1980s). In the current alpha release we are addressing the issues that seem most critical to overall performance. We are working either on hardware-independent parts of the code or on x86-dependent parts. The modifications include tuning of existing subsystems as well as support for hardware that was not sufficiently supported until now. We are also trying to address some bugs that were crippling the system.
On the userland side we are planning to release at least one server, either 4.4BSD-Lite or Linux. The Linux server is a slightly modified MkLinux, available in source form from their web site. The modifications are simple fixes for bugs that were introduced after the project left OSF. We currently have a server based on the Linux 2.0 kernel, and if there is interest in the community we could update it to the 2.4 kernel and possibly even produce a complete distribution on CD(s).
The BSD server is a modified Lites. Lites required much more work, but it is almost complete and could be finished soon.
In general we don’t think we will be continuously upgrading these servers; rather, we want to design our own multithreaded single-server as well as multi-server operating systems. However, since the software is under open source licenses, the community could go its own way and continue developing the existing code base.
After the alpha release we are planning to roll out a beta. It will be based on feedback from the alpha, and the architecture-dependent optimizations will be ported to other hardware platforms, for instance PowerPC. We are also planning to put up more optimizations for review: ones that are not yet ready for prime time, but that hopefully will be sufficiently tested within several months.
Once that is done we are going to do mega-commits of the tested code and roll out the first release of our OS.
On the non-technical side, we are considering a name change and looking for some hardware contributions.
Namely we are interested in:
1) x86 SMP machines
2) Dual-CPU Macs (servers and workstations)
3) 64-bit PowerPC machine
We think the progress we are making could benefit not only our effort but the Mach community at large. The same optimizations that make our system work better could help Darwin and even GNU/Hurd (at least until they switch to L4 as their microkernel).
About the Author:
Igor Shmukler is a systems programmer with a bit over ten years of industry experience. He started working on PDP-11 machines and has been doing low-level programming on and off ever since. He is very happy to have been born in the computer era; otherwise he does not know what he would be doing for a living.
Why not SPARC?
hmm, maybe because the developer has no access to such machines?
SPARCs are fine CPUs, but usually people who buy SPARC based hardware buy it from Sun Microsystems along with Solaris.
x86- and PPC-based systems are manufactured by many different vendors. Thus it was deemed that these are the first ports to work on.
If you are interested in porting our system to SPARC, you could download our sources along with the Mach 4 SPARC port and start working on the machine-dependent files. That’s the beauty of open source systems: the code is widely available.
I don’t know why they cite OSF and Tru64 as being microkernels. They may have used Mach as a starting point, but those kernels are decidedly monolithic.
Well, first, let me say the goal of Yamit is admirable. I think it’s great to see people trying to make Mach work instead of hacking it into something completely different (i.e. Apple and Darwin/XNU)
> SPARCs are fine CPUs, but usually people who buy SPARC based hardware buy it from Sun Microsystems along with Solaris.
The same could be said for Irix/MIPS, yet it is listed on your project status page as one of the platforms you’ll be doing an alpha release for (but oddly enough, MIPS isn’t even mentioned on your front page).
> I don’t know why they cite OSF and Tru64 as being microkernels. They may have used Mach as a starting point, but those kernels are decidedly monolithic.
The same goes for Darwin/XNU, which replaced Mach servers with (Free)BSD components.
Err, there is no such thing as a “Mach server.” Mach is just the microkernel. Darwin runs 4.4BSD on top of Mach, with some code from the filesystem and networking layers replaced with code from FreeBSD and NetBSD. In particular the vaunted FreeBSD VM (or the less vaunted but academically interesting NetBSD UVM) is not in there.
> Err, there is no such thing as a “Mach server.” Mach is just the microkernel. Darwin runs 4.4BSD
So what is your point?
If err is short for error, I think you may need to reread the article.
It does not say anywhere that Darwin has servers.
The point made was simple. They tuned the system, and they claim the optimizations could be ported to Darwin.
What is wrong with that?
> Err, there is no such thing as a “Mach server.”
Jesus Christ, some people sure like to argue semantics. However, I don’t get where you come off saying there’s no such thing as a Mach server. For a thorough explanation of what a Mach server is, I will present you with a source I hope you consider authoritative on all matters Mach: Carnegie Mellon. This is the Mach server writer’s guide:
ftp://ftp.cs.cmu.edu/afs/cs/project/mach/public/doc/osf/server_wri…
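To make that concrete: at its core, a Mach server is just a task that holds a receive right for a port and loops on mach_msg(). Below is a minimal, untested sketch, assuming a Mach-derived system with <mach/mach.h>; the message layout is simplified and error handling is mostly omitted:

    #include <mach/mach.h>
    #include <stdio.h>

    int main(void)
    {
        mach_port_t port;

        /* Holding a receive right for a port is what makes this task a
           "server": clients holding send rights can message it. */
        if (mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE,
                               &port) != KERN_SUCCESS)
            return 1;

        /* Simplified buffer: header, inline data, and room for the
           trailer the kernel appends on receive. */
        struct {
            mach_msg_header_t  header;
            char               body[256];
            mach_msg_trailer_t trailer;
        } msg;

        /* The classic server loop: block until a client sends to the
           port, then dispatch on the message id. */
        for (;;) {
            if (mach_msg(&msg.header, MACH_RCV_MSG, 0, sizeof(msg),
                         port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL)
                != KERN_SUCCESS)
                break;
            printf("got message id %d\n", msg.header.msgh_id);
            /* ... handle the request, optionally send a reply ... */
        }
        return 0;
    }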
> Mach is just the microkernel. Darwin runs 4.4BSD on top of Mach, with some code from the filesystem and networking layers replaced with code from FreeBSD and NetBSD. In particular the vaunted FreeBSD VM (or the less vaunted but academically interesting NetBSD UVM) is not in there.
Yes, the FreeBSD UBC was grafted onto the Mach VMM, and from there the FreeBSD VFS could more or less be linked directly into a Mach kernel. But… you’re only reiterating what I already said…
The article seems to think that Linux is not good on multiple CPUs.
I thought that had been fixed. Kernel 2.6 is supposed to scale to 128 CPUs. Who’s wrong, who’s right?
> I thought that had been fixed. Kernel 2.6 is supposed to scale to 128 CPUs. Who’s wrong, who’s right?
The issue isn’t how many CPUs the kernel can handle, but how well the kernel itself is designed to scale across multiple CPUs. In a monolithic architecture, this is accomplished by locking various portions of code if they’re in use on another processor. The courser the locking, the worse the kernel will scale across multiple processors. This was a huge problem in Linux originally, which wasn’t designed for use on multiple processors at all. Linux was first made SMP-worthy through the Big Kernel Lock (BKL), which would only let the kernel execute on a single processor at a time. Over time various subsystems of the kernel were given separate locks, but by most standards the Linux kernel overall remains coarsely grained.
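A toy illustration of what BKL-style coarse locking amounts to, in user-space C with a pthread mutex standing in for the kernel’s spinlock (all names here are invented for illustration):

    #include <pthread.h>

    /* One global lock guards every kernel entry point, so only one CPU
       can execute kernel code at a time, however many CPUs exist. */
    static pthread_mutex_t big_kernel_lock = PTHREAD_MUTEX_INITIALIZER;

    void sys_read(void)
    {
        pthread_mutex_lock(&big_kernel_lock);
        /* ... the entire filesystem path runs under the global lock ... */
        pthread_mutex_unlock(&big_kernel_lock);
    }

    void sys_send(void)
    {
        pthread_mutex_lock(&big_kernel_lock);
        /* ... networking serializes against sys_read() on other CPUs ... */
        pthread_mutex_unlock(&big_kernel_lock);
    }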
For optimum scalability in a monolithic kernel, what a developer wants is fine-grained locking. This means that kernel code is broken up into as many independent pieces as possible, each with its own separate lock. Probably the best example of this is Solaris, which is widely regarded as one of the most scalable kernels out there. FreeBSD is attempting fine-grained locking with its SMPng project in FreeBSD 5.0. Fine-grained locking requires an extraordinary amount of developer effort, however, as debugging locking bugs can consume enormous resources. FreeBSD 5.0 DP2 has been delayed for several months now due to locking issues.
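The fine-grained version of the same toy sketch: each subsystem (ideally each object within it) carries its own lock, so work in one subsystem no longer serializes work in another:

    #include <pthread.h>

    /* Separate locks per subsystem: a CPU faulting in a page does not
       block a CPU processing a packet. Finer still would be one lock
       per address space, per socket, and so on. */
    static pthread_mutex_t vm_lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t net_lock = PTHREAD_MUTEX_INITIALIZER;

    void vm_fault(void)
    {
        pthread_mutex_lock(&vm_lock);
        /* ... handle the page fault ... */
        pthread_mutex_unlock(&vm_lock);
    }

    void net_receive(void)
    {
        pthread_mutex_lock(&net_lock);
        /* ... runs concurrently with vm_fault() on another CPU ... */
        pthread_mutex_unlock(&net_lock);
    }

    /* The cost: any path touching both subsystems must take both locks
       in a fixed order or risk deadlock -- this is where the debugging
       effort mentioned above goes. */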
Mach takes a radically different approach to the SMP problem. Instead of relying on locking, it breaks the various duties of the kernel up into what are essentially different processes with different tasks, and all the pieces communicate by message passing. This makes for a scalable kernel without all the hassle of fine-grained locking, though up until now at a serious performance cost. A microkernel architecture is a very nice general design, much more so than a monolithic kernel with fine-grained locking, and it would be wonderful if some of the performance issues with Mach could be remedied by the Yamit people.
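A sketch of the message-passing alternative, again in the same toy style (all names invented; real Mach uses ports and mach_msg()): each service owns its state outright, and the only lock left is the short one protecting its message queue:

    #include <pthread.h>

    struct message { int op; int arg; };

    /* A crude stand-in for a Mach port: a bounded queue of messages.
       The lock covers only the queue, never the service's own state. */
    struct port {
        pthread_mutex_t lock;
        pthread_cond_t  nonempty;
        struct message  slots[16];
        int head, tail, count;
    };

    void port_init(struct port *p)
    {
        pthread_mutex_init(&p->lock, NULL);
        pthread_cond_init(&p->nonempty, NULL);
        p->head = p->tail = p->count = 0;
    }

    void port_send(struct port *p, struct message m)
    {
        pthread_mutex_lock(&p->lock);
        /* Sketch only: assumes the queue never fills; a real port would
           block the sender or return an error here. */
        p->slots[p->tail] = m;
        p->tail = (p->tail + 1) % 16;
        p->count++;
        pthread_cond_signal(&p->nonempty);
        pthread_mutex_unlock(&p->lock);
    }

    void port_receive(struct port *p, struct message *m)
    {
        pthread_mutex_lock(&p->lock);
        while (p->count == 0)
            pthread_cond_wait(&p->nonempty, &p->lock);
        *m = p->slots[p->head];
        p->head = (p->head + 1) % 16;
        p->count--;
        pthread_mutex_unlock(&p->lock);
    }

    /* The VM server, say, then loops: port_receive(), handle the request
       against state only it touches, send a reply. No cross-service
       locking discipline is needed. */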
> The courser the locking…
s/courser/coarser/
Ah, the realms of theory and practice. If you want the big fight of academia vs. pragmatism, I’ll refer you to the infamous Tanenbaum-Torvalds dispute.
A microkernel takes very little effort to make scalable on a large number of processors; a monolithic architecture has considerably more difficulty achieving this.
That aside, Linux is quite good in this area right now. It has more problems with more exotic SMP configurations, but the usual servers should work quite well. AFAIK the bigger problem is the huge memory systems mainframes have, as it’s quite hard to get the kernel to perform equally well on a normal, “low-memory” desktop system and on something several orders of magnitude bigger.
So you’d probably have to test the systems to see whether the microkernel has any advantages that don’t get cancelled out.
I think Linus feels that, currently, the locking situation in Linux is under control, or at least getting that way. He’s focusing future scalability work on NUMA architectures, and at one point mentioned that good performance on NUMA memory systems was one of his prerequisites for a 3.0 kernel release.
Good to see that Yamit is still alive. I looked at it a long time ago, but I wish the build system were easier to use. I even tried to work on it some, but it was too overwhelming at the time. I’d like it if it had a BSD-style build system. I’ve been subscribed to the list for a long time, but the volume is so low that it looks like either Igor is the only one working on it or all the development is taking place off-list. Anyway, what kind of improvements have been made to Mach to reduce overhead?
Hey, this is a great starting point. I’m actually looking at scalability and locking for my senior research paper; can anyone point me at a good resource about this? I’m especially interested in how Linux does this and how it compares to other OSes.
Thanks
so what happened to XMach? how does this compare to xmach etc?
Why use Mach? It’s a dead microkernel that CMU has not developed for 10 years. Its design is also completely archaic compared to modern microkernels like L4.
Concretely, this results in horrible I/O performance, really slow context switches (on x86), and a 10-year-old VM… In fact it should even be called a macrokernel, as it’s so big, containing all the low-level drivers.
This is why the GNU people are moving away from it, and this is why the Apple people have completely hacked Mach, breaking its microkernel design, to try to get more reasonable performance.
So why focus on Mach?
I think one idea they had for solving some of the scalability problems in Linux was to experiment with optimizing the kernel for 1-4 CPUs; when the system goes past that level, it starts up an extra kernel for each set of 4. They still needed to run experiments on whether it would work right and, if so, whether it is worthwhile, but it sounds kind of interesting, since too many locks make bugs harder to find and cut back on throughput.
Though I am a fan of microkernels. Hmm, since the Hurd wants a certain microkernel independence (supporting L4 will help get this, as it is one of the more barebones microkernels), it would be interesting to compare Mach vs. L4 (all implementations) vs. Yamit vs. VSTa.
xMach died a swift death for several reasons. The bulk of the core team (including myself) jumped ship due to “unresolvable differences” with the “team lead”. Then said “team lead” got his dream position as a FreeBSD committer (although I don’t think he’s done much committing) and let the project die.
> It’s a dead microkernel that CMU has not developed for 10 years.
IIRC, the University of Utah did Mach development for a while after CMU basically killed the program. I don’t know of anyone who is actively developing a standalone, new Mach kernel (not that I’ve looked real hard, either). OSKit (aka GNUMach 2.0) is based on Utah code.
> Its design is also completely archaic compared to modern microkernels like L4. … And in fact it should even be called a macrokernel, as it’s so big, containing all the low-level drivers. This is why the GNU people are moving away from it,
Using GNUMach 1.3 as an example… it’s certainly smaller than Linux these days. I think the source tarball is something like 3 MB. Doing a “du” on gnumach.gz on my Hurd partition reveals a 700 KB kernel, compared to my Linux kernel, which is 1.3 MB bzipped. BTW, I don’t think the GNU people are going to get away from Mach. It appears that the goal right now is to make the Hurd microkernel-independent, meaning the servers can run just as easily on L4 as they do on Mach, or any other microkernel. There was quite a bit of Mach-specific stuff in there that they’re getting rid of. The L4 port enables that work, but no decision, as far as I can tell, has been made about which microkernel will be used for the next release. It could very well still be GNUMach with the OSKit drivers.
The size of the source tarball is misleading. Linux has a far larger set of drivers and filesystems than GNU Mach; if GNU Mach had these drivers, they’d be in the kernel too. The kernel proper also has many features (like IPv6 and whatnot) that GNU Mach doesn’t have but that would be in kernel space if it did. Even though Linux has grown beyond several million lines of code, the core of the kernel is still some 70,000 lines in linux/kernel, linux/mm, and linux/fs.
Thanks for the info.
Yeah. xMach was killed by the developers’ differences with the project lead. In my one year on the core team, we went through a great many very high-quality developers (many much better than I), who one by one jumped ship because of problems with the development lead. I guess at the end there weren’t any people left interested in xMach who had not been burned yet. So I sold the 6-way SMP system that I got to work on xMach SMP and went on to other things.
Jan
Everything further is just IMHO.
I would not want to flame anyone. However, I think that what happened to the xMach team was sort of a bigger problem than just JM’s lack of experience.
First, the project was started by a person who was in his mid-teens, had no or *very* little software development experience, and probably zero experience leading projects. When developers joined this effort, they should have done a bit of homework to figure out who they were working with.
Second, our project has been open source for almost two years now. We have over forty people on our mailing list, but not one line of code has been contributed by any of the volunteers who have contacted me in the past. (One developer actually did contribute by editing my startup document to make it more complete. That is important too.) For a project lead, the only goal is the success of the project. Unless a particular developer really makes a difference in terms of contributions, feedback in the form of ideas will have little influence; a leader cannot shift project goals several times per week. Otherwise the effort will just die in the planning stage.
Someone here (and others in private emails to me) suggested that our project could change build tools. My question to them is as follows: if you are capable of migrating the makefiles to a better environment, why don’t you do that and then offer your result to the moderator?
Otherwise, enjoy the results and complain about bugs, but don’t think the moderator *should* listen to your ideas.
🙂
> IIRC, the University of Utah did Mach development for a while after CMU basically killed the program.
And they also lost interest in it ages ago.
> I don’t know of anyone who is actively developing a standalone, new Mach kernel (not that I’ve looked real hard, either). OSKit (aka GNUMach 2.0) is based on Utah code.
In GNUMach 2.0 only the drivers were updated; everybody is afraid of modifying the Mach core, the Hurd people just like the people from Apple (for their own OS, not for GNU).
> gnumach.gz on my Hurd partition reveals a 700 KB kernel
I would not call that “light” for something that is supposed to be a microkernel.
> but no decision, as far as I can tell, has been made about which microkernel will be used for the next release
Porting to L4 is really a lot of work, so it will certainly take time, but they really want to get rid of Mach as the “default” kernel because it’s too slow and too old.