Way back in 2006, to celebrate the introduction of MINIX 3, Andy Tanenbaum, the operating system’s legendary creator, published an introductory article about the new version here on OSNews. I’ve followed development ever since, with the last item we posted dating from 2015. Over the weekend, a link to the MINIX 3 git repository made the rounds, noting that the last change is dated 14 November, 2018.
It seems MINIX 3 has pretty much stalled, and digging through the Google Groups group isn’t of much help either. There’s certainly interest in the platform, but even the people frequenting the list state, in a post from 2021, that while MINIX 3 isn’t dead, since open source projects technically rarely die, it is in a “coma”. There have been various proposals for improvements or new directions – notably this very detailed one – but nothing has come of them.
It probably does not help that MINIX’s creator and steward, Andy Tanenbaum, retired in 2014 from VU University, my alma mater, where he and a team of doctoral students worked on MINIX 3 for a long time. Without its main creator, who is now 79 years old, and without the funding and manpower of the university, it makes sense that MINIX’s development petered out in the years after 2014.
But did it?
As many of you already know, MINIX’s development isn’t actually dead at all – one of the biggest technology companies adopted the platform, and MINIX currently runs on every processor that company has sold since roughly 2015. Yes, every reasonably modern Intel processor runs MINIX on its Intel Management Engine, complete with a networking stack, storage drivers, and more. This was first discovered in 2017, but Intel has kept the source code to its version of MINIX entirely closed, so this fact is of little use to anyone interested in revitalising the platform.
The fact of the matter is that MINIX 3’s development has halted, and this effectively means that MINIX is, for all intents and purposes, dead. With the last commit being almost five years old, even simply picking up where development left off would be a big undertaking, and would require some seriously bright minds and dedication. I’d love for it to happen, but I have my doubts.
From what I heard, Intel is running a different software stack, and has moved to a proper RTOS within their management cores nowadays.
As a person who had to deal with an OS class that used MINIX: it was a neat teaching platform, but I don’t think it was ever supposed to scale into a full-blown OS with a life of its own.
Incidentally, Andy Tanenbaum published some great, seminal teaching books for networking and (distributed) operating systems classes, which I think had more impact than MINIX itself.
Interesting. Most probably its biggest impact was in helping Linus get Linux off the ground.
javiercero1,
Interesting.
When I search for “minix inside cpu” it still shows older articles mentioning it being inside Intel CPUs. But I could not find a reference to it being replaced.
Could it be a special version of Minix that they use?
@sukru
For the most part, Intel (or any other SoC manufacturer) doesn’t release any details about the internals of their system management cores, as that is very, very sensitive information. So any information you’re likely to find is from old systems.
Last I heard from some colleagues, they were moving to RISC-V cores + some RTOS; obviously they didn’t share any further details. So I don’t know if it is MINIX-based or not (although I don’t think MINIX had the trappings for real-time scheduling). Controlling the source for the management core tends to be a priority, since it has a lot of time-sensitive scheduling, and the memory footprints of the OS and its processes have to be very tailored to the available resources within the SoC. None of these details are visible to the external world, as it is all very proprietary, especially when it comes to power and system limits.
The thing is, almost nothing actually needs a real-time OS… especially not a hypervisor monitoring system.
@cb88
Lots of things, controls for example, require an RTOS.
The system controller is not a “hypervisor.” It’s deeply embedded firmware that runs and controls a lot of the interactions between the IPs in the SoC. The NoC control, for example, is extremely sensitive to deadline misses. Plus, stuff like the thermal/power limit control loops definitely requires real-time scheduling, especially when it comes to systems that run off battery.
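To make the deadline sensitivity concrete, here is a minimal sketch (plain C, with hypothetical register accessors and constants; real firmware of this kind is proprietary and SoC-specific) of the sort of fixed-period control loop such a management core runs:

    #include <stdint.h>

    /* Hypothetical hardware accessors -- real SoC firmware talks to
     * proprietary, undocumented interfaces. */
    extern int32_t read_die_temp_millicelsius(void);
    extern void    set_power_limit_milliwatts(int32_t mw);
    extern void    sleep_until_next_tick(void);  /* e.g. a 1 ms RTOS timer */

    #define TEMP_TARGET 95000  /* throttle point: 95 C, in millidegrees */
    #define P_GAIN      4      /* mdeg of overshoot per mW of limit cut */
    #define PLIMIT_MIN  5000
    #define PLIMIT_MAX  45000

    void thermal_control_task(void)
    {
        int32_t limit = PLIMIT_MAX;

        for (;;) {
            int32_t error = read_die_temp_millicelsius() - TEMP_TARGET;

            /* Simple proportional controller: pull the power limit
             * down when the die overshoots the target, let it creep
             * back up otherwise. If this task misses its period, the
             * chip keeps running at a stale, too-high limit -- which
             * is why such loops want real-time guarantees rather than
             * best-effort scheduling. */
            if (error > 0)
                limit -= error / P_GAIN;
            else
                limit += PLIMIT_MAX / 100;

            if (limit < PLIMIT_MIN) limit = PLIMIT_MIN;
            if (limit > PLIMIT_MAX) limit = PLIMIT_MAX;

            set_power_limit_milliwatts(limit);
            sleep_until_next_tick();
        }
    }

The point is not the controller itself but the period: miss enough ticks while on battery or near thermal limits and the hardware is being driven with out-of-date commands.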
Nah… that low-latency stuff should be running in either hard or soft logic, not a microprocessor, at least in my opinion.
MPUs should only be doing things like loading firmware to those parts of the chip, and shouldn’t be part of real-time logic loops, for power consumption reasons.
You might be right. There is tangible competitive advantage in the fine tuning of the system management, especially with the chip market really heated now that AMD and Apple’s M2 are in the mix.
That is true for MINIX 1 and 2. MINIX 3 was supposed to be a real OS people could use, and it made significant efforts in that direction, such as support for more advanced filesystems and NetBSD ports.
Remember the system requirements/compatibilities and the development toolchain?
Now try to connect this with “a real OS people could use”.
Even Linux fails at that to some extent.
So Minix 3…
How about you do that first.
MINIX 3 is built with Clang and maybe GCC these days, AFAIK… and they did some fairly deep integration in some projects there to do rebootless code updates and such. It wasn’t completed, but they were working on it.
ACK is pretty much retired.
It’s an interesting story indeed, on how this type of operating system is used in the wild – that is, a microkernel-type, more special-purpose operating system, with a permissive open source license on top of that. Intel ME is a good example. The operating system has access to the whole of your hardware and software stack, and you have no control over or access to it whatsoever. It’s basically an undocumented proprietary blob, usually advertised as a security feature. On top of that, you know it’s vulnerable, due to CVEs, and you can’t do anything about fixing the vulnerabilities embedded in your hardware; you can just wait. Knowing that, and if you actually care about all the things I wrote about, you more or less must support a license like the GPL, mandatory root access, and open source device drivers – not just an open source microkernel. So things like single-company-controlled open source microkernels with permissive licenses are a no-go. They don’t meet the minimum criteria.
Geck,
Much of the same hardware locking happens with Linux + GPL2 on a whole slew of internet devices. After all, the practice which became known as tivoization was based on Linux; same deal with Roku and others. I agree with you that it’s frustrating to not be allowed into your own hardware, but GPL2 is just as complicit. If you really want to stop this, you need an anti-tivoization clause such as the one present in GPL3. But for better or worse, Linux is stuck with the incompatible GPL2.
I thought this was apropos, given how wildly more popular Windows and, to a lesser extent, Linux are:
https://dreamsongs.com/WorseIsBetter.html
History has shown us that usability and feature richness most often trump correctness and consistency.
I have great respect for Prof. Tanenbaum, but we live in a harsh utilitarian world.
devloop,
Thanks for linking that story, I had never heard of him, but it is interesting 🙂
Yeah, you need to be in the right place at the right time with the right people. While MINIX is in some ways better than Linux, it never really became that relevant to the technology landscape.
To paraphrase another Tanenbaum quote…
“The nice thing about standardised UNIX is that you have so many to choose from!”
Whilst, fundamentally, MINIX, Linux, the BSDs, et al. are all similar enough that code portability is pretty trivial, there’s probably one out there that fits your desired use case – be it embedded, scalable, permissively licensed, small, or large and feature-filled.
MINIX was primarily developed as a teaching tool. Sure, it was fundamentally a true OS, but ultimately it succeeded in its goal of educating students on OS design. After all, that’s what Linus Torvalds learned kernel development on, and ultimately the reason Linux exists in the first place.
I don’t think this is surprising. Computing has become fully pervasive in the last 2 decades. For an OS to have any general purpose use, it needs to be fleshed out enough to run a full fledged browser with all the bells and whistles and it needs a large enough software catalog to be useful day to day. These days it also needs to be able to cater to most entertainment purposes.
By the time it was announced that MINIX 3 would become a full-fledged general-purpose OS, it was already in catch-up mode. Being Unix-like in this scenario is a detriment: why go for underdeveloped MINIX 3, with less of everything, when Linux costs $0 and delivers most of what you would want? While MINIX has interesting technology under the hood, it probably wasn’t enough of a differentiator to spark massive interest. In practice, a microkernel doesn’t deliver significantly more than a well-written monolithic kernel.
r_a_trip,
That’s the case with every alt OS in existence. It’s almost irrelevant what they bring to the table if they cannot solve the chicken-and-egg problem of users and apps. Not only is it critical for desktops, it has killed off all alternative mobile platforms too. It takes too many resources to break into the market. If not for the heavy lifting done by Mark Shuttleworth and Google, even Linux itself would have faced a far more difficult struggle.
Obviously it does get you better isolation, which is something both macOS and Windows have acknowledged is important, at least as far as hybrid kernels go.
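For anyone who hasn’t seen what that isolation looks like in code, here is a simplified sketch, loosely modeled on MINIX 3’s message-passing style (the names and the sendrec() signature are illustrative, not the exact MINIX API): drivers and file servers run as ordinary user-space processes, and other components reach them only through kernel-mediated messages.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative message format -- MINIX 3 uses fixed-size messages
     * with a type field; the layout below is made up for the example. */
    typedef struct {
        int      m_type;      /* request code, e.g. DEV_READ */
        int      m_minor;     /* minor device number */
        uint64_t m_position;  /* byte offset on the device */
        size_t   m_nbytes;    /* transfer size */
        void    *m_buffer;    /* caller's buffer */
    } message;

    /* Hypothetical IPC primitive: send a request to another process
     * (identified by its endpoint) and block until it replies. */
    extern int sendrec(int endpoint, message *msg);

    #define DISK_DRIVER_EP 7  /* endpoint of the user-space disk driver */
    #define DEV_READ       1

    /* A file server asking the disk driver to read some bytes. The
     * driver is a separate, unprivileged process: if it crashes
     * mid-request, the kernel survives, the caller gets an error
     * reply, and (in MINIX 3) the reincarnation server can restart
     * the driver -- no kernel panic, unlike an in-kernel driver. */
    int read_from_disk(uint64_t offset, void *buf, size_t len)
    {
        message m = {
            .m_type     = DEV_READ,
            .m_minor    = 0,
            .m_position = offset,
            .m_nbytes   = len,
            .m_buffer   = buf,
        };
        return sendrec(DISK_DRIVER_EP, &m);
    }

The trade-off the thread keeps circling is that every such request is an IPC round trip instead of a function call, which is where the “doesn’t deliver significantly more than a well-written monolithic kernel” argument comes from.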
“If not for the heavy lifting done by Mark Shuttleworth and Google, even Linux itself would have faced a far more difficult struggle.”
I think IBM has been more instrumental to Linux than Google, especially in its earlier years. I don’t really think Mark Shuttleworth/Ubuntu has contributed that much to the real success of Linux on servers. The impact was mainly on the desktop, which was very big, but at the end of the day Linux on the desktop is still kind of niche.
Bill Shooter of Bul,
Yes, I was talking about desktop (and mobile), but I also agree with your point on servers. Linux achieved organic growth on servers mostly thanks to being able to siphon off customers from Sun, who, despite having a dominant position with ISPs & hosting and being responsible for creating a great number of innovative FOSS technologies, could not compete with cheaper commodity hardware and free clone operating systems.
In terms of IBM specifically, I haven’t knowingly come across any of their Linux products in the field (at least before they bought Red Hat). I’ve heard that their mainframes have good support for Linux, but IMHO mainframes don’t add much value over commodity enterprise servers that officially support Linux.
@ Bill Shooter of Bul
I don’t know why it has to be an either/or issue.
IBM owns Red Hat and has done a lot to commercialize Linux, mainly to sell their HW/services. And Google has contributed a lot to the Linux code base and runs most of their infrastructure on it. Plus, you know, Android.
In the early years of Linux, DEC was a big supporter. IBM was pushing AIX and not sympathetic.
To be fair, most computer vendors had their own proprietary UNIX. DEC adopting Linux was more likely due to issues porting their own Unices to x86 rather than any true benevolence.
I still have the 1987 version of “Operating Systems: Design and Implementation” in my bookcase. Almost half of the book is the source code :).
In 1987 hardly anybody had the eBook version to open on a 720×348 Hercules screen.