Maestro is a lightweight Unix-like kernel written in Rust.
The goal is to provide a lightweight operating system able to use the safety features of the Rust language to be reliable.
↫ Maestro’s GitHub page
The state of this project is actually kind of amazing – roughly 31% of Linux system calls are more or less already implemented, and it also comes with a daemon manager and a package manager, and can already run musl, bash, various core GNU utilities, and so on. It has kernel modules, a VGA text mode terminal, virtual memory, and a lot more.
It is genuinely incredible. And almost entirely the work of one person.
It goes to show that if there were the will, Linux could be replaced quite quickly.
The only scenario in which I see that actually happening is if enterprise grew tired of Linus himself and wanted to manage their OS kernel under a different model.
But I look at the Rust code compared to C and wonder if this is a future Wayland vs X argument in a decade or two…
Adurbe,
I agree, linux could be replaced by code written in a far more robust language like rust, and IMHO we’d probably be better off for it, IF you could get people to use it. That’s the challenge.
Realistically a successful kernel needs a lot more than a base kernel that implements syscalls. Drivers are critical for an OS to be practical for the wider community. A small project can either dictate the hardware users must buy & use, or it has to provide working drivers for the hardware users already have. The latter may take decades or even centuries of developer time, which doesn’t lend itself to a small dev team. Even with linux-scale resources, hardware support isn’t perfect. Long term hardware support is quite exceptional under linux, though: once a driver is supported, it generally stays supported (unlike windows). I’d expect this to be true with other alternative operating systems as well, but it would take a very long time to build up a solid hardware base.
I agree. I’ve seen many of these kinds of projects with potential but it’s just hard to take market share away from established incumbents. Linux would have to misstep fairly badly to incentivize people to look for something better. For me, rust is appealing but not necessarily enough. However things could get more interesting if it were able to tackle longstanding linux gripes, such as the unstable ABI, the weak asynchronous IO implementation, or other features for highly scalable clustering, automatic failover, distributed file systems, etc. These all make for a more interesting OS.
I think this is something that all OS devs have to contemplate: is it better to create a common-denominator OS, where all your decisions are guided by compatibility with incumbents, or do you maximize innovation and accept that your new OS may not be very compatible with existing software? Personally I find clones boring, but historically speaking I have to concede that compatibility has been one of the most important factors for success. The popularity of linux itself is almost entirely down to unix compatibility. If linus had set out to build something more innovative than unix, it probably would have failed. 🙁
Yeah, it makes sense for you to say this. I prefer to see more external competition than internal competition, if this makes sense. But network effects generally tend to make competing from outside extremely difficult.
Luckily virtualization is widely used, and the virtio drivers are the only thing new OSes need to worry about initially. GPUs might be the exception to this though.
Importing drivers from a BSD or Linux and implementing shims is widespread. FreeBSD uses shims for various Linux drivers. Haiku uses shims for the drivers they’ve ported. Linux itself only recently removed the NDIS wrapper dating from long ago.
He could have, but that wasn’t his goal. Also not what IBM wanted when they dropped all that cash.
This was the time when microkernels were the future. NT was originally a microkernel. NeXT, which became macOS, debuted a microkernel in their OS. Minix, macOS, QNX, and probably others have proven *nix could be based on configurations other than a monolithic kernel.
POSIX/Unix compatibility only mattered for a little bit while Linux marketshare was building, and it doesn’t matter at this point. Linux is Unix, and it can go anywhere it likes.
The BSDs have the excuse of being old, old. Linux was a hipster’s retro project.
Flatland_Spider,
I agree that virtualization makes sense for early development. However running on bare metal should be a goal IMHO. Having to run maestro in a VM on top of another operating system isn’t ideal for making the case that it can replace linux.
I know it wasn’t his goal, his goal was to clone unix as best he could. On the one hand I find it unfortunate he didn’t try to design a better OS. But on the other hand, if he had, it probably would have gone defunct in the 90s like the rest. In other words, *not* pushing new innovation is what made linux attractive for replacing unix.
Linux compatibility was so good that it basically made unix redundant. Whether or not a company needed service contracts, they could typically save a ton of money replacing expensive Solaris servers & workstations with commodity x86 boxes while running clones of the same software and toolchains. Linux took over the unix marketshare and today linux leads the “unix” pack by such a wide margin that the onus is now on everyone else to copy linux APIs for compatibility.
If I were attempting to usurp Linux I’d probably not target generic “bare metal” like desktops but instead pick a niche to build from. Personally I’d target Docker containers or cloud computing, but you could argue ARM would be another opportunity.
I’d use the same logic that people will start using at home what they use at work.
The other obvious aspect to success/failure would be the choice of licence… But that’s a whole different flame war haha
There are a lot of VMs out there running services. Something smaller, more focused, and Linux compatible could have a niche.
IBM kind of changed the direction by dumping money into creating a Unix server clone it could install on low-end x86 servers.
Linux could have evolved a different way had it stayed focused on being a **desktop** Unix-like OS. It could have maintained Unix compatibility while doing some more interesting things. Like macOS.
Odds are it wouldn’t have, considering Linus opted for a monolithic kernel, but optimistically, it could have.
GNU tools also deserve some credit. They were installed on many of the expensive Unix boxes, and being the defaults on Linux was a boon.
Adurbe,
Docker containers? Container host?
I could see K8s needing something thinner than Linux that it could integrate with to make K8s a full OS.
I could also see a very specialized container OS with better networking options than Linux.
A few years ago, only running in a virtual machine would have been crippling. I mean, if I have to use another OS to run yours on my machine, why should I bother?
These days though, the situation is quite different. If I count OS “instances” that I run, most of them are run as VMs or containers. Those instances are mostly running Linux ( though I have a few FreeBSD instances in the mix ). Do I really care about the hardware support available in an instance that is hosting a DNS server, a VPN, or even a game server? Not really.
In fact, my attitudes towards running a desktop via virtualization have completely changed too. Like many, I have a machine on my network that exists exclusively to host virtual machines and containers. With that in place, creating other “machines” on my network is trivial.

I host a Windows instance in a VM that I use only for a few purposes, including running the tax software that I have been using forever. I have a VM that has all the JetBrains IDEs installed that I can pull up from whatever machine I am on. I have a Linux instance that I use only for reading eBooks ( I can pull up that desktop anywhere, even over VPN, and be right where I left off ). I even have a networked VM dedicated solely to building and checking in on SerenityOS and Ladybird. I used to have the SerenityOS source on multiple machines ( waste of space ). More and more I am creating purpose-built VMs. I can pull up those desktops whenever I want ( from wherever I am ).

I move between macOS, Windows, and Linux and these VMs are the same when run from any of them. I have become so comfortable running remote desktops that, when they are full screen, I sometimes almost forget they are remote. I frequently realize that I have been using a web browser ( doing research or checking email ) in a remote VM for hours without even thinking about it.
It does not work for everything obviously but running a VM fullscreen overtop of another OS is totally normal for me now.
tanishaj,
I think you are misreading my gripe about the VM solution. I actually use virtualization quite a lot; that isn’t my criticism. Rather, it is that a robust OS stack shouldn’t rely on unsafe C code. This is one of the main selling points of rust: eliminating large classes of memory & threading faults caused by human error, and I am sold on this. We deserve a robust host OS, and IMHO we should be getting away from C-lang for all privileged code. However, obviously running a rust-lang guest on top of a C-lang host falls short.
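To illustrate with a toy example (plain std rust, nothing Maestro-specific): sharing mutable state across threads only compiles once the synchronization is spelled out, which is exactly the class of human error that slips through in C.

```rust
// Toy example: unsynchronized sharing of `counter` across threads simply
// does not compile in rust; stating the sharing explicitly (Arc + Mutex)
// is what makes it build, and the result is race-free by construction.
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1; // serialized, checked access
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    assert_eq!(*counter.lock().unwrap(), 4);
}
```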
I can see Linux becoming more of a spec than an OS after Linus steps away and industry gets tired of dealing with the subsystem maintainers. “Linux” separates into 3 projects in order to serve the embedded, desktop, and server markets better and keep the code manageable. The 3 work together to make a mostly portable spec. At some point, the abstractions need to be flattened.
People also speculate Linux will become a vertically integrated OS like the BSDs because people are tired of having to deal with generic user land tools.
Not really. C is pretty steadfast in being what it is, and not changing. Aside from people who see better ergonomics as a fad, most people understand something like Rust is necessary for improvement.
Rust might be a waypoint to something better than C and C++, but it’s a necessary evolution to prove the concepts.
If it becomes possible to run OCI containers on Maestro, this could become practical quickly. You can run an entire Linux distribution in something like Distrobox on top of any Linux kernel. If this is Linux compatible enough, you could do it here too.
The biggest roadblock will be drivers of course. Then again, Linux is allowing Rust in the kernel now and drivers written in Rust seem likely to be some of the first code to appear. The GPU for Apple silicon Macs is written in Rust already and I think there is a Rust NVMe driver.
Sadly, the GPL is going to cause problems sharing code back and forth. Maestro is MIT licensed while Linux is GPL2 of course. It would be quite interesting if Maestro and Linux could share code and converge over time. Unfortunately, the licensing will get in the way.
tanishaj,
I know it sounds ironic, but the rust drivers in linux aren’t written to take full advantage of rust; they’re forced into being compatible with linux’s C APIs, global structures, and threading, which aren’t protected by rust’s safety mechanisms. And while rust explicitly supports this, it does mean that rust’s safety checks aren’t used throughout most of the linux kernel environment. Unless those linux drivers are rewritten as native rust using safety primitives, supporting them could actually be detrimental to the goals of a real native rust OS.
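Roughly speaking, the boundary looks something like this (the names below are made up for illustration, not the actual rust-for-linux bindings): every call into the C side has to go through `unsafe`, and the safety argument lives in a comment rather than in the type system.

```rust
// Hypothetical sketch of a rust-for-linux style binding (the C symbol and
// struct are invented for illustration). The borrow checker cannot see what
// the C side does with the pointer, so the call must be wrapped in `unsafe`
// and the safety reasoning is just a human-written comment.
#[repr(C)]
pub struct CDeviceState {
    pub flags: u32,
}

extern "C" {
    // Imaginary helper exported by the C side of the kernel.
    fn legacy_device_poke(state: *mut CDeviceState) -> i32;
}

pub fn poke(state: &mut CDeviceState) -> Result<(), i32> {
    // SAFETY: we assert, but cannot prove to the compiler, that the C code
    // neither retains the pointer past this call nor mutates the state from
    // another thread.
    let ret = unsafe { legacy_device_poke(state as *mut CDeviceState) };
    if ret == 0 { Ok(()) } else { Err(ret) }
}
```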
This is a separate point, but the linux kernel support for asynchronous code is limited (where you continue execution without blocking). This is because *nix was originally written to use blocking primitives with explicit multithreading to provide parallelism. When asynchronous primitives were added to userspace, this created a fairly large compatibility gap between what userspace was doing and what the kernel drivers could do, since they didn’t (and still don’t) support asynchronous calling. So now asynchronous actions in userspace are converted into multi-threaded blocking calls in the linux kernel. This is not ideal, but it was the easiest way to have asynchronous applications use the existing non-asynchronous drivers.

Furthermore Linux doesn’t support asynchronous file system calls at all. Again this is due to kernel limitations, but it means the POSIX asynchronous API cannot be implemented directly on linux without being converted into blocking multithreaded calls, which is exactly what the POSIX implementation on linux does. While it does make the POSIX API work on linux, frankly this approach is pretty bad for performance. It also introduces new challenges since killing threads is a notorious source of bugs. This is also why network shares, usb disk drivers, etc under linux can end up locking up user space processes in the “D” state.
https://stackoverflow.com/questions/1475683/linux-process-states
They can’t be killed because there are still kernel threads belonging to them, which themselves are locked up inside the drivers. Linux mitigates this by adding arbitrary timeouts for file system operations, but it’s very frustrating when these have to be relied on.
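For what it’s worth, the emulation I’m describing boils down to something like this userspace sketch (roughly what glibc’s POSIX AIO does internally): the “asynchronous” read is really a blocking read parked on a helper thread, and if the driver wedges, that helper thread is the one stuck in “D”.

```rust
// Userspace sketch of async-emulated-by-threads: the caller gets a
// completion channel, but a thread is still parked in a blocking syscall.
use std::fs::File;
use std::io::Read;
use std::sync::mpsc;
use std::thread;

fn fake_async_read(path: String, len: usize) -> mpsc::Receiver<std::io::Result<Vec<u8>>> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let result = File::open(&path).and_then(|mut f| {
            let mut buf = vec![0u8; len];
            let n = f.read(&mut buf)?; // the blocking call, hidden in a thread
            buf.truncate(n);
            Ok(buf)
        });
        let _ = tx.send(result);
    });
    rx
}

fn main() {
    let pending = fake_async_read("/etc/hostname".into(), 4096);
    // ... the caller is free to do other work here ...
    println!("{:?}", pending.recv().unwrap().map(|b| b.len()));
}
```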
Anyway, the point of all of this was just to say that Maestro may be better off with non-blocking drivers rather than copying the linux blocking driver model. This may be taboo, but one can look to windows “overlapped IO” primitives as a source of inspiration for a more consistent approach to async IO.
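Just to sketch what I mean by a completion-based driver model (purely hypothetical, not Maestro’s actual API): the driver takes ownership of the request, returns immediately, and signals completion later, instead of parking a kernel thread per outstanding operation.

```rust
// Purely hypothetical sketch of a completion-based block driver interface,
// loosely in the spirit of windows overlapped IO. The driver queues the
// request and returns; the completion callback is invoked later (e.g. from
// the driver's interrupt handler) rather than blocking a thread until then.
#[derive(Debug)]
pub struct IoError;

pub struct IoRequest {
    pub lba: u64,
    pub buf: Vec<u8>,
    // Invoked exactly once when the transfer finishes or fails.
    pub on_complete: Box<dyn FnOnce(Result<Vec<u8>, IoError>) + Send>,
}

pub trait BlockDriver {
    // Queue the request and return without blocking; ownership of the
    // request, including its buffer and callback, moves to the driver.
    fn submit(&self, req: IoRequest) -> Result<(), IoError>;
}
```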
Agreed, these FOSS license incompatibilities are an issue.
I’m not sure if writing a new driver using linux sources as a reference without a verbatim copy would be frowned upon or not.