Maybe its pervasiveness has long obscured its origins. But Unix, the operating system that in one derivative or another powers nearly all smartphones sold worldwide, was born 50 years ago from the failure of an ambitious project that involved titans like Bell Labs, GE, and MIT. Largely the brainchild of a few programmers at Bell Labs, the unlikely story of Unix begins with a meeting on the top floor of an otherwise unremarkable annex at the sprawling Bell Labs complex in Murray Hill, New Jersey.
I acknowledge the importance of UNIX – who doesn’t – but I hate how it has become a huge roadblock to any meaningful rethinking and improvement in lower-level operating system design. The best we can do seems to be to hide the ’60s guts underneath ever more layers, instead of addressing the actual shortcomings of such an old design.
But hey, I’ve learned over the years that criticizing UNIX is akin to drowning kittens, so maybe I should just fall in line and parrot the party line – UNIX is great, UNIX is perfect, and UNIX needs zero modernisation because it was instantly perfect.
You don’t enumerate or otherwise detail the “shortcomings” of Unix.
So I’m curious. One problem is that UNIX in essence is the minimal set of calls: open, fork, exec, etc. Usually criticisms are directed at some layer built (badly or not) atop that core, or at microkernel vs. monolithic, or at something else that means “a particular implementation of…” as opposed to the essence.
Also there are tradeoffs – well known. You might want something different, but it is like wishing most fast food wasn’t founded on hamburgers of some sort. Where’s my McSaladBar with drive-thru? And there is the terrible tyranny of mathematics and complexity. The tradeoff is art – aesthetics – more than engineering.
I normally do embedded and RTOS work and see, if anything, a dozen RTOSes that do the same things with only slight variants on the calls; almost all can be mapped to the POSIX …4 standard.
POSIX is NOT UNIX, but PNU isn’t like GNU.
(I’m self-taught in this area, so please take what I say here in the spirit in which it’s given.)
Unix has many shortcomings, but its best feature lies in acknowledging them. Look at how many man(ual) pages have sections titled “BUGS”!
By publishing the stuff that doesn’t work in “sensible” ways, Unix opened the path to improvement. Even when Bell Labs was forbidden to support Unix after the tape & manual were delivered, Unix users found ways to improve it, to make it faster and more capable, with tools like sed, awk, cut, vi. The users also developed libraries to assist in diverse environments, like VT102 terminals, home PCs running Kermit, or telnet clients with unknown capabilities.
As Ken Olsen stated in 1984, at least according to
$ fortune -m VMS
the beauty of VMS is that “it’s all there.” Yes, VMS was a powerhouse operating system. In my day, I managed to use barely 0.1% of its capabilities for my purposes.
He also stated that the beauty of Unix is its simplicity. It has lots of small parts that can be built up for a user’s purposes. So I’d contrast the systems this way:
VMS is a very large sequoia tree, from which you should carve out a house, including the door and windows, plus the plumbing. “It’s all there,” right?
Unix is a set of oak, pine, and cedar trees, which give you enough material to frame a house, and then you fill in the cracks yourself, after you figure out the water issue (but someone else already did that).
The Unix viewpoint might be simpler, or more simple-minded, but I’d argue that it’s also far more practical. It lets more people accomplish a lot more work.
I’m curious too – can you think of a single piece of Unix that is not improvable?
The people responsible for standardization (e.g. the Single UNIX Specification, the Austin Group) fell asleep at the wheel several decades ago, and have mostly only mumbled in their sleep since then. It’s reached the point where an incompatible and uncertified hobby project (full of people who can’t even agree on how “init” is supposed to work) is more of a de facto industry standard than the official standard. It’s like whipping a dead horse until maggots crawl out, putting a saddle on the maggots, and entering the maggots in a horse race.
Multics made use of CPU protection rings.
Unix and Windows did not (not really).
There is something to be said in favour of protection rings…
https://en.wikipedia.org/wiki/Protection_ring
I agree that conventional Unix (Linux, BSD, SysV) is obsolete and has been for a while, but I definitely don’t think Unix-like OSes in general are obsolete. There’s no reason why it would be impossible to write a Unix-like OS that is both modern and reasonably compatible with legacy code, and in fact I am doing just that. My OS will have advanced features like a capability-based microkernel architecture (it won’t be a pure capability system at the user level but will have a security model that is nearly equivalent), extensive use of safer languages and verified code, flexible containerization that is fully integrated rather than being a middleware layer, functional package management, and a well-integrated full-featured GUI based on a compositing window server, while still following the ideals of early Unix well (in some ways, it will actually take the Unix philosophy further than Plan 9 does, and will certainly follow it a lot better than any mainstream Unix since 4.1BSD) and also being compatible with most Linux applications and drivers.
andreww591,
One of the issues being alluded to is why does it have to be Unix-like? What is it about “Unix” that makes “-like” such a desirable property?
“Everything is a file”? That’s one of the common markers of Unixness. Well, not everything is a file. Just like how not everything is an object, or whatever flavour of “the one thing” it is this week.
“The Unix philosophy”? Where everything is supposed to be so “do one thing and do it well” that only a minority of Unix variants, e.g. MINIX, actually apply it to the kernel. Not even the original BSDs did it – most non-MINIX kernels are monolithic.
———————————————–
For what it’s worth, I think Unix is fine. Not because it’s perfect, and I don’t agree with Thom that it stops rethinking of low-level OS concepts. The truth is, once you get into the business of an OS – communicating with hardware and managing hardware resources – there isn’t much that’s going to be very different from one system to another. The techniques for scheduling, hardware abstraction, and interprocess communication are pretty much going to be the same. Whether “everything is a file” or “everything is an object”: memory protection is going to be the same; preemptive scheduling is going to be the same; there’s always going to be some generic way to interface with classes of hardware; filesystems are mostly going to be hierarchical.
As you know, but perhaps some here may not, “everything is a file” is a concept that really means that the data pertaining to any object must be streamable or accessible through file descriptors (including via pipes); it does not mean that things must be present on disk. What is so powerful about this concept is that it does not matter where the data really resides or how the control structures are designed: the content, the data sans the metadata, will still be available in a simple way. This is precisely what Plan 9 tried to fix in original Unix, since distributed objects were not contemplated there, and I really wish they had succeeded.
The problem with the current Unix approach, and we had a discussion about it in a previous thread, is that sometimes we also need a simple way to access the metadata. But contrary to what many may think, this limitation is not intrinsic to Unix but more related to our shells and tools. I, for one, would like to have an XML or JSON stream when invoking our current tools with an option like --structured-* and also, to Alfman’s happiness, an optional SQL-like interpreter. 😉
That’s just a shell. You can run Xonsh for a Pythonic shell, or even Microsoft PowerShell if you’re crazy, on Linux right now. That doesn’t necessitate a redesign of the entire OS. What is it about the design of the OS itself that you don’t like?
I didn’t say that I don’t like it, quite the opposite. I was just pointing out that what some may see as a weak point of Unix is not intrinsic to it but a “deficiency” in the shells and other tools we use. I also pointed out that I would like to see “everything is a file” extended, as Plan 9 planned.
There are, though, things I would like to see resolved on Linux (not all of them kernel-related): a default IPC/RPC mechanism (D-Bus is the de facto, unloved, solution used right now); better file system access granularity (somewhat addressed with ACLs); a good RDP solution; and better coordination between the big distros toward a sane minimum layout for services (dirs and structure of configuration files; part of this is addressed by systemd consolidation).
Some others like to complain about package management (which, even if far from perfect, is way better than what is found on other platforms), container support (being addressed), and individual, out-of-distro-tree package installation (this isn’t particularly easy to solve, as there are concerns about security versus convenience). I have a relatively lax view toward security, i.e., if you own the system you should be responsible for what gets installed (and its failures), but many take a stricter position, perhaps because of concerns about the implications for corporate deployments.
Most routers run fine, using a cut down version of Linux.
Yes, this is a well known fact but how is it relevant? What is it a reply to?
He said that Linux is outdated or obsolete in some aspects. Routers are running fine on Linux, so the obsolete/outdated claim is not really true. Code can be updated or done differently, yet if it is secure on a router (not that anything is truly secure), then it is doing its job and is not obsolete or outdated.
brostenen,
Well, I’d say it depends on how you define obsolete. If it’s defined as unsupported, then millions of Linux routers are obsolete and millions of Linux routers are not, same with phones and a whole lot of other things. If you define obsolete as using obsolete paradigms, that’s where it gets very tricky due to subjectivity. Take the IBM mainframe: it’s definitely supported, but many would consider it obsolete tech anyway. The same could be said about bits and bobs in Unix. IMHO there are certainly Linux hacks that we would never do that way today if it weren’t for legacy & backwards compatibility. Same goes for the PC and x86 architecture. Is that obsolete?
There are clearly things we can do better given the opportunity, but in the real world we have to contend with very slow momentum, and we often have to make do with legacy designs having far more critical mass regardless of merit. It’s why AMD64 won the 64-bit desktop over other architectures: it was an extension rather than a replacement. New designs often lack the critical mass needed to make them popular regardless of merit. Through luck or foresight, Linux became popular because Linus set out to develop a Unix clone rather than to fundamentally improve on Unix. This allowed Linux to inherit critical mass from Unix. Plan 9 was technically worthy of succeeding Unix, but it had no market share and fell by the wayside.
So, for better or worse, I find myself having to concede that extending Linux is a more viable strategy forward than trying to replace it outright with something cleaner. Still, even Linus himself would acknowledge that Linux carries a lot of legacy baggage. Coming back to the notion of Linux and Unix being outdated or obsolete in some aspects, I’d say that’s true, but I don’t see them going anywhere: everything is pretty much based on them, and nobody has the marketing muscle and financial resources to move the entire industry to something else when what we have today, despite any shortcomings, is already good enough.
I think the biggest problems with Plan 9 were that it needlessly breaks compatibility with traditional Unix even where breaking compatibility has little benefit (e.g. making user IDs, and errors returned from functions, strings rather than integers) and that it focuses on minimalism to the point of being austere and uncomfortable for most people to use. I believe any OS that is trying to be a successor to conventional Unix should retain compatibility as much as possible (this can often be done by reimplementing old APIs on top of new ones), and while I do think minimalism is important, it needs to be balanced with extensibility and generality (there’s nothing wrong with being full-featured as long as it’s easy to add and remove features).
andreww591,
Yeah, in theory that might be possible and we could try to clean it all up. I think the scope could be of similar magnitude to the Wine project, except instead of reproducing Windows APIs, it would be reproducing Unix APIs. To me this seems similar to the x86 model, where we keep accumulating complexity and cruft to support multiple sets of primitives. Don’t get me wrong, I get why this is often necessary, but it can get ugly, and there are instances where old APIs can’t be supported without being closely tethered to and polluting the new APIs.
Whether it’s CPU architectures or OS design, users tend to be isolated from the underlying mess; consequently there’s not actually that much end-user demand to improve things. But when you look under the hood, some parts of Linux are pretty bad. IMHO Unix terminal control was terribly engineered, and Linux inherited a lot of that bad engineering. These ancient Unix terminal-control APIs are so full of hacks and inconsistencies that they should be thrown away.
https://stackoverflow.com/questions/4968529/how-to-set-baud-rate-to-307200-on-linux/7152671#7152671
For UX/RT, compatibility with Linux and other conventional Unices will be of a much lesser magnitude than Wine. For the most part, the API will be a close match to that of conventional Unix, with the cleanup consisting mostly of breaking traditional Unix primitives down into simpler primitives with the old APIs implemented in terms of simplified ones (e.g. fork() will be broken down into several calls to create a new process, set up its state, and start it running; all process-state-related system calls like getpid(), getuid(), kill(), etc. will instead be pure library functions using procfs; and anything to do with anonymous memory will instead use memory-mapped files in a per-process tmpfs). The main area where UX/RT will break compatibility will be for stuff dealing with authentication, login, and session management (the security model will be an approximation of a pure capability model that is still compatible with most regular Unix applications, but anything dealing with logins or sessions will have to use the new API to log in users and set up sessions since the UID won’t be the only thing that determines permissions).
As for the terminal interface, I may just start out with the legacy interface implemented at the driver level, but I eventually want to re-introduce the concept of a listener from Multics as an intermediary between the terminal and applications. It would provide services like line editing and completion (no need for stuff like libreadline anymore; such features will be implemented externally to the core listener itself, of course, since as much as possible in UX/RT will be extensible) and a cleaner API, with the actual driver-level API used by the listener itself greatly simplified. There will of course still be a server that implements the legacy API, but it will be reasonably self-contained.
andreww591,
…
The thing is sometimes there’s no clean way to make a break without polluting the new API with legacy concepts.
For example, I consider the Unix file permission bits to be legacy remnants of a less flexible way that Unix did things in the early days. Ideally you could get rid of them; however, some modern software and even protocol stacks still depend on those bits. Trying to fake them in the API translation layer can be messy, because POSIX-compliant behavior may require those bits to actually be implemented, otherwise software can break.
I don’t know if your project will get far enough for it to matter, but I expect that you’ll struggle to be fully compatible with linux drivers and software without implementing many of the hacks that linux has.
One significant area where I find Linux to be deficient is overlapped IO/AIO. Unix and Linux were designed around blocking calls; Windows, in contrast, was designed to handle overlapped IO in the core. Support for this in Linux is lacking, and consequently the POSIX implementation of AIO uses threads to overcome the kernel limitations, which is extremely inefficient and therefore not scalable. There’s no easy way to fix this in Linux without reengineering kernel IO mechanisms to use AIO natively instead of blocking threads, but that changes the whole driver model. So if you are writing a new OS, you should take the opportunity to address these limitations up front; however, if you plan on being compatible with Linux drivers, then you’ll probably end up falling into the same trap.
Anyways, keep us apprised of your progress 🙂
If you look at how the OS is structured and the code it is built from, then yes, it is obsolete if you say that backwards compatibility and other stuff like that makes it obsolete. Yet what is obsolete? I will argue that a new and updated Linux, BSD, macOS, or Windows 10 is not obsolete, because they are in use daily and just about everywhere: Linux as the Android flavour, Unix as the iOS flavour, or the newest Kubuntu/Ubuntu/Ubuntu-Mate/any other Ubuntu derivative and flavour. None of that is obsolete, as we are using all of it on various machines in various places all over the globe. In that sense, they are not obsolete at all. Yet if you look at the code behind them, then yes, there are parts of the source code that are obsolete and other parts that are not. “Obsolete” actually needs a deeper explanation of what exactly you mean is obsolete. As a whole, the newest operating systems are not obsolete; they are pretty much the newest bells and whistles that people enjoy for work and play. Running Windows 98 is obsolete in nearly every way, except in the field of vintage computing, where it is a nice tool to use when you are teaching someone about how computers have evolved.
I think the conventional Unix architecture is showing its age and has been for quite a while now. It was perfectly fine when Unix ran on a PDP-11 with 256K of RAM but it’s not very well-suited for modern OSes, the biggest problems being inadequate security and limited extensibility.
What is this OS of yours? Website? Blog post about it? Has this “OS” actually progressed beyond the idea set forth in your comment?
I do have some code and it is bootable but it’s still quite early and it’s a ways away from running actual user processes. There’s no website or blog post for it, but I do have a GitLab project (https://gitlab.com/uxrt/) and there is a file with a somewhat disorganized list of notes on my planned architecture in the top-level repository (I have almost the entire design planned out). My progress so far is basically:
Utilities for generating a QNX-like execute-in-place FS boot image
An in-memory chain loader using Multiboot2 with extensions for booting from an XIP image
Patches to the seL4 microkernel to support the Multiboot2 extensions
An enhanced version of the feL4 Rust-on-seL4 framework with ports of the mid-level libraries from Robigalia
The beginnings of a root server linked with the feL4 libraries (currently it starts to initialize the capability allocators and then deliberately hangs)
Ars Technica should be aware by now, after all these years, that Linux is NOT Unix. Every Unix user will laugh at you once you claim that Linux is a Unix OS. It is like saying that Windows 7 is DOS, or that some DOS games require Windows 3.11.
A dog is no wolf.
What? Where did Ars mention Linux in the article or say it’s Unix? Yes, Windows 7 has little to do with DOS, and while Linux is not Unix, it is certainly modeled on Unix and is in fact Unix-like and (mostly) POSIX-compliant.
“…OS that powered smartphones…” right there. In the headline. Plus the message it gives me, as written, is that they are talking about the past (the word “powered”).
Now…
It can be that the author comes from a different galaxy or dimension in which there is nothing else but Apple. Or it can be that the author comes from a background in which they were told by a teacher that Linux is Unix (I think the SCO court filings set that record straight). Or simply the author comes from a Windows world, has taken a look at Bash, and said that it looks like Unix.
I don’t know… All I know is that iPhones are the only smartphones (except really specialised ones, if they are on the market at all) that actually use a cut-down and modified version of Unix. The rest use Linux as a basis, and that is not Unix. Hence the word “smartphones” in the headline is kind of wrong, unless you count iPhones as the only smartphones. And Apple does not really have a patent or copyright on that word, do they?
Do you follow me?
Here is the full quote (with CAPS by me) because you are leaving the relevant part out of it for some reason…
“Maybe its pervasiveness has long obscured its origins. But Unix, the operating system that in ONE DERIVATIVE OR ANOTHER POWERS NEARLY all smartphones sold worldwide, was born 50 years ago from the failure of an ambitious project that involved titans like Bell Labs, GE, and MIT.”
I don’t read that as “Linux is Unix” but as “a derivative of it”, which is true; nor did they say ALL smartphones. Given that, Android and Apple both qualify to me. Are we really arguing over which is MORE Unix-like now? LOL
It entirely depends on your definition of Unix. Linux is not a “genetic Unix”, but it is very much a “functional Unix” since it is architecturally close to BSD and SysV (not necessarily in the specifics of the implementation but the broad design is close). A few distributions also qualify as “trademark Unix” since they have the certification from the Open Group (which doesn’t require an OS to be a genetic Unix and basically just depends on paying them enough money and the OS passing the test suite). I usually tend to use the “functional Unix” definition since that’s the one that has the most relevance to the real world.
Unix has its shortcomings nowadays. That’s not because of bad design itself; it’s because of the way people code now as opposed to earlier times. It used to be straightforward, simple, organized. Now, in the days of object-oriented programming, a lot of code is full of bloat, which gives way to holes, out-of-memory conditions, exploits, etc. Can’t blame UNIX itself for that. My school of thought is that I’m a fan of whatever OS fits the given purpose at hand.
UNIX was designed for a different era of computing. In our current era we need software and OSes that are scalable, adaptable, dynamic, deployable, secure, and safe. The design goals for the original UNIX were:
* a pleasant environment in which to write and use programs (The UNIX Time-sharing System – A Retrospective)
* simplicity over efficiency
* written in ‘C’; Actually ‘C’ was invented specifically to code UNIX
* an interactive, general purpose timesharing OS
* files are a uniform stream of bytes; the system makes no assumptions as to their contents; no records
* Integration of I/O devices into the file system
Not part of the original UNIX design:
* no general interprocess message facility
* no synchronization facilities like mutexes or semaphores
* security
* safety
UNIX is the OS of choice to operate the cloud. This was a combination of laziness, necessity, practicality, and a lack of creativity and vision. I do like UNIX. I use it every day for work. It is a very good interactive system for developing and running programs. But it is showing its age.