Sortix is a small self-hosting Unix-like operating system developed since 2011 aiming to be a clean and modern POSIX implementation. There’s a lot of technical debt that needs to be paid, but it’s getting better. Traditional design mistakes are avoided or aggressively deprecated by updating the base system and ports as needed. The Sortix kernel, standard libraries, and most utilities were written entirely from scratch. The system is halfway through becoming multi-user and while security vulnerabilities are recognized as bugs, it should be considered insecure at this time.
Sortix 0.9 was released on December 30, 2014. It is a considerable improvement upon Sortix 0.8 and contains significant improvements all over the base system and ports. The previous release made Sortix self-building, and this release works hard towards becoming fully self-hosting and installable. Several real-life prototype self-hosting installations of Sortix exist right now, and I expect the upcoming 1.0 release to make real Sortix installations available to the general public.
The unique selling point for this is just not yet clear. It is hosted in a university – is this undergrads flexing their muscles? Or postgrads putting a platform in place to do something peculiar?
Maybe something startling will emerge but it kind of feels like another one in a line.
The low KLOC figure may reflect excellent internal design (leading to higher performance through, say, better use of CPU caches). Alternatively, it may be that gaps in the feature set are all that the KLOC figure shows.
Either way, I hope it progresses rapidly…
I suppose it doesn’t really matter. Some of the most influential projects today were “just undergrads flexing their muscles”. In either case, glad to see the hobbyist OS world isn’t completely dead.
I’m a university student. They provide personal directories and public www directories on their machines. I’m simply recycling those locations because I can and to save on hosting. This project is unaffiliated with any university and is a personal project.
You’d better rethink that… if you use university resources for hosting your stuff, they could claim ownership of it due to your use of those resources.
cb88,
I’ve heard rumors that some universities do this, but I never knew if it was true.
sortie – I might be able to donate some (US) hosting if that’s helpful.
Perhaps not ownership, but once you quit being one of their students, they delete your storage and the URL associated with it, hence orphaning links you shared for a long time.
That happened with Stelios Xanthakis’ Lightweight C++ :
http://students.ceid.upatras.gr/~sxanth/lwc
http://lists.gnu.org/archive/html/lwc-list/2003-09/msg00005.html
Hence beware of obsolescence.
It’s missing the /usr directory. I guess this makes Thom especially happy.
Does it already have some sort of compositing windows?
Going by the official screenshots, yes, it seems so, or at least transparency.
https://users-cs.au.dk/~sortie/sortix/screenshots/sortix-display-ear…
Not yet. Those screenshots are, as the filename suggests, merely tests. A GUI is not as important as being properly installable and developable on real hardware.
This is hobbyist-unix-clone #851.
I wonder when a non-unix hobbyist OS with some original ideas is going to appear. SkyOS was the only one, and we couldn’t even see the source code.
There’s still MenuetOS, which is pretty nice. Latest version was released less than a week ago.
People writing POSIX operating systems are driven by three things: the current best textbook is about a *nix OS; POSIX (Single UNIX these days) is well documented and that documentation is freely available; and there’s a huge amount of freely available software that will run on top of anything vaguely POSIX.
That last one is a huge motivator. You can write an OS that isn’t POSIX compatible, but then you have to write literally everything for it yourself. Want some code to handle compression? Tough; either port Zlib or write your own. Now, if you had a POSIX API, you could just recompile Zlib and wouldn’t that save you a lot of work?
There’s nothing fundamentally wrong with having a POSIX API. You can have other non-POSIX API’s alongside it, if you like. Having a common well known API is a heck of an advantage when you’re trying to pull yourself up by your bootstraps, though.
Yeah, it’s just that once you start porting posix “stuff”, you drag along other assumptions, ending up with an OS with zero room for creativity. It’s all just another Unix clone. I can’t for the life of me figure out why they do this year after year.
I wrote a small OS a decade ago that was not POSIX compliant. Great fun, until you realize that being non-POSIX means every tool must be rewritten. Also, writing the 50th device driver got tedious. On the upside, working on the design from a clean slate was great fun.
And yeah, I forgot about MenuetOS. Nice little OS.
Because it’s a good learning experience?
Well, that answered that question?
That is why Spin, written in Modula-3, had a POSIX interface.
Sadly the ongoing Olivetti, DEC, and Compaq mergers killed Modula-3 and the Spin effort.
For the record, I implemented POSIX and rewrote zlib anyway. I just released Sortix libz, a much cleaner fork of zlib, at https://sortix.org/libz/ yesterday. The lesson here is that third party software is often of surprisingly bad quality compared to what you can do yourself in the long run. Yes, by doing POSIX you can port stuff, but do you actually want to if you want to make a good operating system, not just an operating system?
Since you said operating system and not kernel, “good” is judged by the number of high-quality drivers for graphics cards, wifi dongles, etc. No one-man project can provide that anymore, and this is what is plaguing “large” projects like Haiku. The only way out is at least a compat layer against BSD or Linux.
Any plans for that?
Whether something is good depends on its use. I would rather have Sortix do well what it does than have it do more. A lot of effort goes into improving the base system to be of higher, simpler quality. This is reflected in correctness and security.
Developing drivers for everything is a losing game. The winning move is not to play. I’m developing an operating system for the computers I own. I rather like standard interfaces like AHCI, IHDA, and so on, because it allows you to target a large set of devices with a single driver. This is design. As for everyone else’s computers, if we assume this project succeeds in the distant future, why not just tell people to buy a particular desktop/laptop model with hardware I approve of, and then make my OS work really well on that hardware. Like Apple does.
That makes NO sense. You’ve been watching War Games too many times.
Since you’re into Posix, the winning move is to build upon what others have done instead of “inventing” yet another posix clone. We have enough of those.
ryak,
I’m not sure there is “a winning move” anymore, posix or not. All hobby operating systems falter once their developers have to go get a job. The operating system domain is consolidating, not expanding.
You stated that writing yet another posix clone results in lost opportunities for creativity (and I agree 100%). It’s good for compatibility, that’s true, but hobby OS devs often aspire to build a better mousetrap. Given how shoddy posix is at times, this is not terribly difficult even for a hobby dev. But then you end up living in no-man’s land: a shiny new OS with good design but no software and no hardware support.
There’s definitely lots of room for improvement in the OS space, but realistically its chances of succeeding are slim to none without billionaires to promote real world adoption. I’m also hypocritical: I recognize deficiencies in the operating systems I use, but I keep using them because they are already well supported. This logic, repeated by almost everyone on a global scale, produces a cyclic pattern which all but ensures that new non-clone operating systems ultimately fail to gain widespread support.
The creator of a hobby OS is often the sole driver of progress.
Once real life takes hold, progress usually slows down or even stops entirely… unless others who encountered it and played with it are excited enough to join in and contribute.
Nevertheless, the lack of applications used in daily life by the “common user” is generally the hurdle facing a hobby OS trying to gain traction.
Historically, people have adopted a new OS to use an application they needed which was available only on that OS.
Unfortunately, we don’t seem to be getting that kind of excitement about applications anymore.
I guess there is also the argument that writing something clean means you can rethink things from the ground up, without carrying over the bad decisions of the past or dealing with unfamiliar code that could harbour nasties underneath the layers of changes that have been made. Reminds me of the decision Apple made to go with KHTML/KJS rather than Gecko: although KHTML/KJS was less feature-complete, the small size of the code base meant that they weren’t having to wade through a mountain of code to turn it into something useful.
Right, zlib has encountered some problems in the past.
http://www.codeproject.com/Messages/640005/ERROR-XUnzip-cpp.aspx
sortie,
Thank you for bringing interesting news to osnews!
Congrats on this development. I’m always excited by homebrew operating systems. I really enjoyed working on my own some 15 years ago (Oh crap! What happened to my youth?). I’ve become friends with others over the years who also did homebrew OS dev. All of us have moved on, regrettably; you do enough to learn that you are able to do it, and then you join the workforce and never do it again. I think we’re mostly driven by the same desire to build an OS from the ground up, to implement our own ideas and see them come to life. It’s all so much fun when things are coming together, solving tough problems; I just love that feeling. It also made me feel proud at the time. Although for me that’s faded over the years, there’s not much demand for OS devel skills, and I’m not happy about being overqualified in the workforce.
I enjoy discussing highly technical things. Otherwise I find things get dull… at work and even in social interactions. Once you leave university, a lot of that’s gone. I feel it’s even faded away here on osnews; comments have become more of a place for pointed debate rather than the immersive technical discussions we used to have. I’d love to talk more about OS tech; you should add a place to contact you on your website.
Thank you.
I do worry I can’t apply these osdev skills in a professional capacity. I have not done an exhaustive search for opportunities for when I graduate, though. I don’t plan on stopping with Sortix. It’s an excellent hobby.
I do have contact information on the website. You can find my email or pop up in #sortix on freenode IRC. Or even join #osdev which is a nice generic place for osdev.
Haha! Okay, poor choice of example (I was originally going to say “decode a PNG”) but I think you get my drift.
There is plenty of rubbish but there’s plenty of high quality open source software you can use; I doubt you’d consider writing your own replacement for GCC or LLVM/Clang, or ffmpeg and libjpeg “just because”.
Vanders,
Although I agree with your general point, I think it’s flawed to itemize individual things (mpeg/jpeg/gcc/png/avi/mp3/svg/aes/ssl/ssh/etc) and suggest someone wouldn’t want to write their own replacement. People can and do write their own implementations for all these things; it’s a great way to challenge oneself and realistically we can often do a better job building a cleaner design, more efficient code, better interfaces, etc. There is an endless list of things to be fixed. The dilemma with os-dev is that there’s so much ground to cover and there are only so many productive years in our lives. If one doesn’t compromise in making the OS compatible with other code, it leads to very slow progress, lapses in functionality, and burnout.
For example, at the top of my wish list for an OS would be correcting one of the big shortcomings of conventional software libraries – the lack of asynchronous interfaces for blocking tasks, things like DNS resolution. This has resulted in projects like nginx implementing their own internal asynchronous DNS resolver, which is anything but ideal. An indie OS could easily offer it natively. The conundrum is that there would not be any software to use it, not even nginx, unless you committed to maintaining a fork. It summarizes my mixed feelings towards standardization: it enables lots of software to be ported to new operating systems (yay), but it hinders adoption of better design (bah humbug). Still, I don’t see what the alternative is.
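To make the resolver point concrete, here’s a minimal sketch of the userspace workaround that an OS-native asynchronous DNS interface would render unnecessary: hiding the blocking getaddrinfo() behind a thread pool (the name resolve_async and the pool size are my own illustration, not any real OS API):

```python
# Userspace workaround for the lack of async DNS in POSIX:
# wrap the blocking socket.getaddrinfo() in a thread pool and
# hand the caller a future it can poll or wait on.
# (resolve_async is a made-up name, for illustration only.)
from concurrent.futures import ThreadPoolExecutor
import socket

_pool = ThreadPoolExecutor(max_workers=4)

def resolve_async(host, port=80):
    """Start a lookup without blocking; returns a Future of getaddrinfo results."""
    return _pool.submit(socket.getaddrinfo, host, port,
                        type=socket.SOCK_STREAM)

future = resolve_async("localhost")
# The caller is free to do other work here; nginx's internal
# resolver exists because POSIX offers no primitive like this.
addrs = future.result(timeout=10)
print(addrs[0][4])  # first resolved sockaddr
```

An OS-native interface could instead deliver the completion through the same event queue as socket readiness, avoiding the extra threads entirely.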
Sure, but not all of them, and probably not as a general effort to write an entire operating system, and probably not to the same level of functionality. That’s how you get the GNU project.
Vanders,
That’s the point, actually. A single developer can work on specific micro-domains to make those pieces exceptionally robust. But it’s going to take a very long time to develop something as complex and diverse as GNU from the ground up. Any theoretical improvements over the GNU tools have to be weighed against the additional effort and the opportunity cost of not having a fully functional system until later, if ever.
Which approach makes the most sense depends on the purpose of the project. If the goal is to learn, experiment and be original (as I think is the case for many hobby OS devs), then to be honest I’d have more fun designing my own interfaces than mucking around with posix ones – it’s been done thousands of times, it’s the least original thing an os-dev can do.
If the goal is to achieve functional parity as quickly as possible, then reusing GNU tools is obviously better than reinventing them.
That’s the thing. Osdevers are generally infected with Not Invented Here, and I’m definitely infected. I do wish to write my own replacements just because. Obviously I’m crazy.
Even if I correct for the irrational bias towards implementing things myself, I often find supposedly high-quality open source software is not of the advertised quality. You’ll often find software takes really bad approaches to compatibility, where it is dumbed down to the worst common denominator (see the OpenSSL approach), which results in vulnerabilities and worse code. You’ll find unsafe coding practices, fundamentally broken interfaces, portability issues, various bugs, complexity for no earthly reason, lack of robustness, old cruft, and so on. That’s the kind of stuff I see, and I believe something should be done.
Yes, it’s too much effort for me, but I can lead by example and do my part making things better. When I do want to rewrite software, for real, it’s for good reasons. Whether I do it is a matter of scheduling priorities.
sortie,
What a weird coincidence that you’d bring up OpenSSL; I’ve never liked that code. It does a lot, but if you have simple needs it’s just way too complex. I’ve implemented many crypto primitives using clearer code. Simpler code is less likely to contain the serious problems that can plague complex code bases. Which way is safer depends on where you stand on the “many eyes make all bugs shallow” theory, but the OpenSSL vulnerability controversies might increase the merit of code simplicity.
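As a small illustration of the simplicity argument (my own example, not code from OpenSSL or anyone in this thread), compare a naive digest check, which leaks timing information by bailing out at the first mismatching byte, with the handful of lines needed to do it safely:

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    """Leaky comparison: returns as soon as a byte differs,
    so an attacker can measure how far the match got."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_compare(a: bytes, b: bytes) -> bool:
    """Constant-time comparison via the stdlib; examines every
    byte regardless of where (or whether) a mismatch occurs."""
    return hmac.compare_digest(a, b)

print(ct_compare(b"secret-mac", b"secret-mac"))  # True
print(ct_compare(b"secret-mac", b"secret-maC"))  # False
```

The whole primitive fits on one screen, which is exactly the auditability being argued for here.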
I suggest you examine what OpenBSD did to OpenSSL to turn it into LibreSSL. It’s a major inspiration of sanity. The OpenBSD developers have a competent culture of pruning and polishing as discussed in (Unangst 2015) http://www.openbsd.org/papers/pruning.html
The problem with OpenSSL was not so much the number of eyeballs, but rather that the code was so horrible that anyone who looked at it threw up their hands: “I assume someone else can maintain this code.” Then all those vulnerabilities happened.
Oh God, I certainly wasn’t. “Is there a solution out there that works and isn’t complete crap?” was my second priority (after “Does it fit my design?”). This may explain why I’m an Ops guy and not a software developer?
I concur. It makes sense to use the POSIX API to quickly get to explore and improve what one wishes to.
Nevertheless, the real fun appears to begin once one moves into a GUI API, and there appear to be as many variants as there are hobby operating systems.
MonaOS ? HouseOS ?
MikeOS
Visopsys
Details at http://visopsys.org/. It’s actually fairly useful, and is part of the program Partition Logic.
Now, that’s what I’m talking about. I love these small OS’s that go their own way to make a coherent little universe instead of patching up 40 years of cruft.
Visopsys’ partition manager even seems to be pretty useful
Yes. Another Unix Clone. http://wiki.osdev.org/User:Sortie/Yes_Another_Unix_Clone
The points there are really saying Noooo, Not Another Unix Clone, to any sane person.
But hey, it’s you wasting your life, not ours.
I hope they will merge the AHCI driver before the last PATA drives die.
Apart from that, 1 GB of RAM is quite a lot for a system lacking nearly everything required to do something beyond booting.
Maybe not – the real test of usability might be the capability to boot a netbook (typically 1 GB of RAM) from USB.
I remember BeOS having an upper limit around 512 MB that prevented it from booting if you had more. Strange indeed…
1) Contrary to what has been said before, I argue that today is actually a much better environment and a much more fertile ground for hobby OS development:
Virtualization can be a life-saver for hobby and research OSes. It is perfectly fine for a hobby OS or research OS to run only inside a hypervisor or a virtual container.
Targeting a tiny set of virtual hardware is even better than targeting “hardware APIs” like AHCI, IHDA, etc. to keep the code base small and maintainable with limited developer resources.
As a bonus, targeting virtual hardware keeps the OS lean, bloat-free and easy to deploy in the cloud for running some application.
Running a stock Ubuntu server inside a virtualized environment, where 99% of the praised device drivers are unused, is a lot of overhead. Maybe the AMI Linux etc. cuts down on that overhead, I do not know, but please understand the point.
I say, the “driver argument” does not hold any more against hobby OS and research OS development.
Let the host OS worry about drivers and let the virtualization software do the translation to a tiny set of virtual hardware. That is a pretty good abstraction over a lot of physical hardware.
2) Also, Linux is a great springboard for hobby OS development, I would even say research OS development – compilers, cross-compilers, libraries, all there to bootstrap from.
It’s not worth it to write an OS with the goal of dominating the world and displacing Linux or Windows or whichever. They are great springboards.
Also, a hobby OS does not automatically have to be a general-purpose OS trying to displace mainstream OSes. It is fine to serve a niche and do it well.
3) BTW, you (Sortie) will be judged less by what you have achieved now than by whether, in 5 or 10 years, you – or rather your project – are still there and actively maintained.
That will also be the time when some people may consider using it to build applications depending on the foundation that is your OS – if your OS offers some distinguishing features by then.
FireballAT,
I’ll address your point about virtualization in a moment, but in terms of hardware, I have to disagree. It was much easier to access hardware directly back then, because the reality is that DOS and its software didn’t have many drivers. Consequently, hardware was designed to be compatible with software instead of the other way around. Hence the “SoundBlaster compatible”, “SVGA compatible”, “NE2K compatible”… At that time even normal applications included their own drivers. Having programmed each of these first hand, I will attest that this made things much easier for os-dev back then. Of course there were rough spots, but from that point forward the increase in hardware complexity brought about by Windows drivers would only bring more difficulties to alt-os development.
Point taken, targeting a virtual machine is plausible. When Bochs became available, I made good use of it to debug OS code, but it actually emulated standard hardware, such that your OS code would simultaneously run in Bochs and on real hardware for the same effort. That’s not a luxury we have anymore. Honestly, my expectation would be for an OS that can run both inside a VM and on bare metal. Do you think this is unreasonable nowadays?
It’s relative though, while the VM is running outside your OS, it is still arguably a big source of bloat. One of the appeals of alt-os is running on bare metal to eliminate bloat:
http://www.returninfinity.com/baremetal.html
I do get your point, and it has some validity. But part of the appeal of alt-os is the philosophy that you’re not tied to another OS. It’s like running an emulator: it doesn’t feel authentic like running on original hardware. Part of it is mental, and the other part is technical. If you’re going to accept up front that your software needs to run inside another OS, then what functional purpose does the VM layer serve at all? It would be much more efficient to build your software as a process inside a logical isolation container rather than inside a relatively heavy VM that performs worse in all respects.
For some while I have suspected that we will end up with a minimum of two operating systems running on our desktops, laptops and servers. One will handle hardware drivers (and crash recovery) and the others will sit on top and focus on providing productive capabilities (with a modest selection of drivers).
The current phase with VirtualBox, VMWARE, et al; may turn out to be a transition in hindsight. We may end up with a landscape that involves proprietary operating systems with closed source drivers (and a microkernel?) providing virtual machine hosting for (predominantly) open source operating systems.