But today’s breakthroughs would not have been possible without what came before them – a fact we sometimes forget. Mainframes led to personal computers, which gave way to laptops, then tablets and smartphones, and now the Internet of Things. Today much of the interoperability we enjoy between our devices and systems – whether at home, at the office or across the globe – owes itself to efforts in the 1980s and 1990s to create an interoperable operating system (OS) that could be used across diverse computing environments: the UNIX operating system.
[…]
As part of its standardization efforts, IEEE developed a small set of application programming interfaces (APIs). This effort was known as POSIX, or Portable Operating System Interface. Published in 1988, the POSIX.1 standard was the first attempt outside the work at AT&T and BSD (the UNIX derivative developed at the University of California at Berkeley) to create common APIs for UNIX systems. In parallel, X/Open (an industry consortium consisting at that time of over twenty UNIX suppliers) began developing a set of standards aligned with POSIX that consisted of a superset of the POSIX APIs. The X/Open standard was known as the X/Open Portability Guide and had an emphasis on usability. ISO also got involved in the effort by taking the POSIX standard and internationalizing it.
A short look at the history of UNIX standardisation and POSIX.
POSIX brought many benefits, like source code portability – the importance of which can never be overstated. With that said, what a headache it can be; sometimes I wish POSIX could be replaced with something more modern and less quirky. The standards bodies would standardize existing APIs without asking how much merit those APIs actually had for standardization.
I wonder what the view on that was in those days – would people back then also have thought those were bad choices?
So how many “POSIX” code bases are still maintained for lesser platforms? Everything nowadays is effectively Windows or “POSIX” (Linux or Mac), whether it would run elsewhere or not. Only very few still care about lesser platforms, unfortunately. Which means most code is (indirectly) platform-specific.
I feel like portability is a false promise, never living up to its ideals. It’s very discouraging. Even simple things aren’t portable (unless you kick the tires until your foot falls off).
Keep in mind that AutoTools still relies (unwisely, IMO) on a POSIX Bourne shell. Can you imagine Linux having to use a 4DOS/4NT clone just to configure and build sources? It would be ridiculous, yet here we are.
I’m not really complaining about any one part, just saying that the ideals are often loftier than the reality.
Rugxulo,
I hate autotools with a passion! Its whole reason for existence is poor standards, and the way it goes about brute-forcing everything by reinvoking a compiler hundreds or thousands of times is absolutely grotesque. I just hate how they addressed one problem with another.
I’m not sure what direction you want this discussion to go in, but yeah there’s no disagreement from me that things could be much better
Hi,
That article is extremely biased (so biased that it can be considered “almost pure bullshit”).
The breakthroughs we’ve seen would’ve been made regardless of whether Unix existed or not; and without *NIX and POSIX stifling innovation we would have seen far more breakthroughs.
Much of the interoperability we enjoy between our devices happened despite Unix/POSIX, not because of Unix/POSIX; and could more accurately be attributed to the existence of multiple very different OSs (Unix vs. VMS vs. Novell vs. Windows vs. …).
All OSs may or may not be known for their stability and reliability, regardless of whether they are an implementation of *nix. Without any correlation between “implementations of Unix” and “stability and reliability” you can’t pretend there’s causation. If you look at OSs for mission-critical servers (NonStop OS, z/OS, etc.) you’ll see the opposite – most aren’t Unix.
There’s a huge amount of evidence to show that Unix/POSIX failed to evolve. For every new type of device that has been introduced in the last 30 years (mouse, sound cards, touchscreens/tablets, scanners, cameras, 2D graphics, 3D graphics, virtualisation, …), every single “Unix OS” has had to add non-standard (and typically incompatible) extensions. If you write a modern application (anything that doesn’t make the end user vomit), portability (even just portability between “*nix clones” alone, and even just portability between “Linux running Gnome” and “Linux running KDE”) can only be obtained through non-standard libraries that have nothing to do with POSIX. Almost all of the portability that actually does exist comes from programming languages and not the OS (e.g. being able to run Java applications on almost everything).
Apple’s OS X, the first HTTP server, the establishment of the WWW, IBM’s Deep Blue chess computer, DNA and RNA sequencing and Silicon Graphics’ digital effects all owe their fame to talented application developers and/or marketing people and/or other factors (porn!); and Unix/POSIX is not even slightly responsible for any of these things.
The fact is that (despite the pure drivel that this article is) for every possible use case, Unix is either dead (e.g. anything involving modern user interfaces) or irrelevant (e.g. anything where the only thing that matters is the application and programming language and not the OS).
– Brendan
Lennie,
I see what you mean, but I usually give that distinction to MSDOS. At least it eventually became obsolete.
POSIX is considered by many to be “good enough” and continues to displace better alternatives in exchange for basic compatibility. Despite this, gaps in functionality and scalability have led to fragmentation anyway; just take the basic socket polling mechanisms for example:
select
poll
ppoll
epoll
kqueue
io_getevents
/dev/poll
Did I miss any? The functional overlap is ridiculous; they all do the same thing in different ways to address the limitations of the original select call. I’ve come across countless examples of this over the years and IMHO it’s one of the banes of bad standards.
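To make the overlap concrete, here’s a rough sketch (my own toy code, not taken from any of the standards) of the same job – waiting for a single socket to become readable – done once with the original POSIX select() and once with Linux’s epoll. kqueue, /dev/poll and io_getevents would each need yet another variant of the same thing:

    // Rough sketch: wait for 'fd' to become readable, POSIX select() style.
    // select() rescans the whole set on every call and is capped at FD_SETSIZE.
    #include <sys/select.h>
    #include <sys/epoll.h>   // Linux-only; the BSDs would use kqueue for the same job
    #include <unistd.h>

    bool wait_readable_select(int fd, int timeout_sec) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        timeval tv{timeout_sec, 0};
        return select(fd + 1, &readfds, nullptr, nullptr, &tv) > 0;
    }

    // The same wait with epoll: scales to huge descriptor counts, but it is a
    // Linux extension that POSIX never standardized.
    bool wait_readable_epoll(int fd, int timeout_ms) {
        int ep = epoll_create1(0);
        if (ep < 0) return false;
        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.fd = fd;
        epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
        epoll_event out{};
        int n = epoll_wait(ep, &out, 1, timeout_ms);
        close(ep);
        return n > 0;
    }

Same semantics, two incompatible APIs, and a portable program has to carry both (plus the kqueue and /dev/poll variants) behind its own abstraction layer.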
Unquestionably we could develop better standards, but since POSIX isn’t realistically going away, this is what happens in practice:
https://xkcd.com/927/
Look at all that innovation that was somehow “stifled!”
zlynx,
Are you suggesting that POSIX has not stifled the industry because over the years different implementations have solved its shortcomings in different ways?
I can’t tell if you are being sarcastic or not, haha
IMHO it would be far better to collectively push for a unified standard that actually fixes the problems everyone has with the earlier standard. Although I fully appreciate that doesn’t happen and so we end up in the xkcd cartoon I linked to earlier: standards = standards + 1
But as a historical perspective, almost everything we use today came out of the fact UNIX/POSIX is a relatively clean and simple interface, which does well to keep out of your way. The wealth of tools, libraries and applications on top of UNIX is, I think, testament to the original UNIX design.
And without the critical mass afforded by multiple vendors supporting a single API, we might well have continued living in a proprietary networked world, or perhaps not even had a single internet as we know it at all. We might have all been stuck with AOL/CompuServe/MSN BBS-like systems if no single system had gained momentum.
MS/DOS should serve as a warning of what the world would have been like without UNIX. I, for one, thank my UNIX/POSIX-creating overlords. Once PCs were powerful enough to run a UNIX(-like) OS, MS/DOS and Windows were forced to catch up. Without UNIX, we’d all be running Windows on our PCs, servers and phones. And a monoculture is not good for anyone.
The main reason why those companies, and universities like Berkeley and Stanford, chose UNIX was that it was free of charge.
Why spend tons of resources developing their own OSes, when AT&T was obliged to charge just a symbolic price for the license?
Had AT&T been allowed to charge for UNIX something like the price of VMS, history would have been quite different.
christian,
That right there is another example of legacy that’s holding us back.
There’s no denying that interoperability is extremely important, which is why we continue to use IPv4, and yet there’s also no denying that these legacy standards are tearing the modern internet apart at the seams. We are in a scenario where we simultaneously desperately need to replace the existing standards, and yet we are unable to due to legacy compatibility.
I’m not going to pretend I would have foreseen today’s problems 4/5 decades ago when IP communications were invented – maybe I would have, maybe not. But we have to take our heads out of the sand and at least admit there have been several negative long term consequences to their early decisions. What we have to decide on now is whether we want to continue to live with bad IPv4 standards for today’s internet or whether we want to break them to get something better, like IPv6. Both choices are painful and will have negative repercussions.
Left to its own devices, the industry has avoided short term transition costs at the expense of continued long term problems.
IMHO it’s pretty clear that in the long term, breaking away from inadequate legacy standards is the right thing to do. Nobody wants to go through the transition, but the alternative violates the end-to-end connectivity principle that the internet founders took for granted. So in order to uphold their vision for how the internet should work, we HAVE to replace their legacy protocols. It’s ironic.
The long slow transition from IPv4 to IPv6 is not due to any technical limitations. The problem is, as always, money: companies don’t want to pay to upgrade network kit that can’t handle IPv6, ISPs don’t want to spend an extra 10 cents on every consumer router for IPv6 support, and so on.
So I’m really not sure what you think is being “held back” by IP or UNIX.
Vanders,
The old legacy IPv4 protocol is clearly no longer adequate, but the fact is that IPv4 is so critical (aka mandatory) for so much technology that it has become an impediment to its successor, IPv6.
The official plan was a universal dual-stack deployment of IPv6 with the eventual intent of phasing out IPv4. Well, we’re nearly six years past “World IPv6 Day” and that first phase of IPv6 deployment is MIA. I don’t even have the option, since my monopoly ISP doesn’t offer IPv6; I can only test it through a tunnel broker that carries my IPv6 traffic over IPv4. Even then, most of the web doesn’t support IPv6.
Don’t take my word for it: go ahead, turn on IPv6 and disable IPv4, and the majority of the web goes black, including some properties from big companies like Microsoft. If you are an IPv6 user and don’t have IPv4 to fall back on, you are effectively a second-class user on the web, and it’s quite likely many of your websites and P2P games/file sharing/telephony won’t work for you. This catch-22 is a large part of the problem for IPv6. This is what I mean when I say legacy incumbent standards can end up displacing better ones indefinitely.
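If you want a rough idea of how much of the web even publishes IPv6 addresses, a quick getaddrinfo() lookup restricted to AF_INET6 shows whether a host has AAAA records at all (a small sketch of my own; it only checks DNS, not actual reachability, and the hostname is just a placeholder):

    // Sketch only: does this hostname publish any AAAA (IPv6) records?
    // Checks DNS via getaddrinfo(); it says nothing about actual routing.
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <cstdio>

    bool has_ipv6_address(const char* host) {
        addrinfo hints{};
        hints.ai_family = AF_INET6;      // ask only for IPv6 results
        hints.ai_socktype = SOCK_STREAM;
        addrinfo* res = nullptr;
        if (getaddrinfo(host, "443", &hints, &res) != 0)
            return false;                // no AAAA record (or lookup failed)
        freeaddrinfo(res);
        return true;
    }

    int main() {
        const char* host = "www.example.com";   // placeholder hostname
        std::printf("%s %s an IPv6 address\n",
                    host, has_ipv6_address(host) ? "publishes" : "does NOT publish");
    }

Run that over the sites you actually use and you get a feel for how patchy IPv6 adoption still is.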
Don’t get me wrong, I’m in total agreement with you that we have to blame the industry for not investing in IPv6 to make the transition happen, but you have to concede that so long as IPv4 support is much better than IPv6, that’s an obstacle to creating critical mass for IPv6 such that people and companies would naturally demand it.
So I think we’ll both agree on the merit of IPv6, but without some kind of artificial incentive I’m afraid that because of our complacency with legacy standards, we could end up with carrier grade IPv4 NAT as a permanent fixture of our infrastructure.
Hi,
Unix should serve as a warning of what the world would have been like if people weren’t smart enough to distinguish between idiotic hyperbole and a logical argument.
If history were different it’s impossible to predict what would have happened; but increased competition between different OSs in the 1970s and 1980s (rather than too many people settling for “bad but convenient Unix”) is a more likely possibility than most, and in that case maybe Microsoft would’ve had stronger competition in the 1990s and might not even exist now.
– Brendan
Dear lord, the lack of self awareness…
DOS and OS/2 and Win9x/NT all had POSIX toolsets (MKS, gnuish, DJGPP, EMX, Cygwin). It did help tremendously … for a while, but eventually everyone mostly gave up on those (probably excepting Cygwin, but even MinGW is more popular, and that’s not very POSIX at all).
So I don’t think “POSIX” is a savior for anything. You can certainly praise certain specific tools or APIs or OSes, but overall everything in software is chaotic. Thus, success isn’t tied to anything else but hard work and dedication. (Or luck, timing, inertia, marketing, money, licensing. But I prefer to dream that good software will always rise to the top. FPC, FTW!)
Background maybe, but the cases where something like Qt can use a POSIX standard are vanishingly few, and increasingly so. The biggest pro left is probably filename encoding and shell escaping, and I am not even sure those were standardized by POSIX. The filesystem and memory management implementation does use POSIX APIs, but they have diverged so much that the POSIX implementation is in practice split four ways: Linux, Darwin (Mach kernels), BSDs+Solaris and QNX. For a long time pthreads was a proud POSIX API, but it has now been deprecated in favor of C++11 threads. Add to that that the mobile offspring of these systems all removed POSIX APIs (iOS, Android and BB10), and a cross-platform toolkit can rely even less on anything POSIX.
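The pthreads-to-C++11 move is easy to illustrate. Here is a minimal sketch (my own, not Qt code) of the same worker spawned through the POSIX API and through std::thread, which works on any conforming compiler whether the platform is POSIX or not:

    // Sketch: the same worker via POSIX pthreads and via C++11 std::thread.
    #include <pthread.h>
    #include <thread>
    #include <cstdio>

    void* worker_posix(void*) {              // pthreads wants a C-style signature
        std::puts("hello from pthreads");
        return nullptr;
    }

    int main() {
        // POSIX way: explicit handle, raw function pointer, manual join.
        pthread_t tid;
        pthread_create(&tid, nullptr, worker_posix, nullptr);
        pthread_join(tid, nullptr);

        // C++11 way: no POSIX needed, works on any conforming compiler.
        std::thread t([] { std::puts("hello from std::thread"); });
        t.join();
    }

Once the language runtime gives you the second form everywhere, the POSIX form is just extra baggage for a cross-platform toolkit.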
Still, I do appreciate that filesystem conventions are the same. Just had to fix a new build system for Chromium to integrate with Windows. Baah.
MSDOS was basically a clone of CP/M (though on an API level) and later integrated some POSIX-inspired stuff. The first IBM PC had 64 KiB of RAM – try to fit a Unix-type system plus an application into that.
PCs could run Unix-type operating systems when they were released: Xenix, QUNIX (later QNX), etc.
Microsoft planned to evolve MSDOS and Xenix into one system, as Xenix was popular (for a long time the Unix with the most installations); however, interest in Xenix shrank while interest in MSDOS exploded. Microsoft prioritized as their customers wanted.
Later MS and IBM collaborated on a modernization of (IBM/MS) DOS called OS/2. Different development practices, different goals (e.g. IBM wanted to run on 80286 systems, which caused extreme problems*, while MS wanted to optimize for 80386 systems) and the increasing sales of Windows made MS hire the main designer behind DEC’s VMS and create Windows NT.
*: one example: OS/2 ran in protected mode on 80286 processors and was to support multitasking even for MSDOS real-mode (no protection features, 1 MiB memory max.) programs. This meant that the system had to switch between protected mode and real mode at preemption time – but the 80286 was braindead in that, by design, it couldn’t switch back to real mode once running in protected mode. IBM solved this by patching the BIOS code so that, when a flag was set by software, it would jump to a programmer-specified location and bypass the normal BIOS setup. The operating system could then specify a routine to handle the real-mode task and send a reset command to the keyboard controller, which would reset the processor and return it to real mode. That was _slow_.
Megol,
I remember all that; it was such a hack, and it probably still exists in today’s keyboard controllers too, just like abusing the keyboard controller to gate the physical A20 memory address line. What a glorious mess we made of things!
It’s so irrelevant that Microsoft have invested quite some effort into Linux compatibility on Windows.
POSIX/Single Unix is a programming interface. There are quite literally billions of devices out there that support POSIX. That’s quite an odd definition of “dead”. Is there going to be a film about it at 11?
Hi,
Wrong. The Single UNIX Specification is a set of standards that define the design of an OS, including things like shells, commands, utilities (and their command line arguments and behaviour), etc. It does include APIs (for one language only), but that’s only part of a larger whole.
– Brendan
How did I know someone would come along and attempt to split hairs over “Linux” vs. “UNIX”?
PROTIP: Linux is, for all intents and purposes, Single Unix Specification compliant. It’s Unix.
Hi,
Linux distributions are “Unix like”, but are not “100% Unix compatible” and not “Unix(tm)”. However this is missing the point.
Even if Linux was both “100% Unix compatible” and “Unix(tm)”; “Linux compatible” (including support for the extensions Linux added on top of POSIX that software designed for Linux has grown to depend on) would still matter more than “Unix compatible”.
– Brendan
a) The trademark is UNIX (see I can do it too)
b) That is exactly the point.
Every Unix (and every UNIX) has its own extensions. They always have. Those extensions sometimes get folded back into the SuS and become new features of SuS; sometimes those features are not adopted wholesale but are used to create a generic standard that everyone can implement. Sometimes they’re never merged into SuS.
So unless you are claiming that Linux is not a Unix because it has non-SuS extensions, you’re insane.
Because, in the specific example you cited, it is extremely important.
WSU was a UNIX-like POSIX environment, originally designed, in large part, so Windows NT could be used on government production systems where POSIX compatibility was mandatory, even if the software being written for it wasn’t POSIX.
It didn’t run binaries from other Unix systems. You couldn’t drop in an iBCS binary and expect it to work. You had to build software from source.
However, WSL is different – it isn’t just “Linux compatible” with source code, it runs actual Linux binaries – the exact same binaries that you download in Ubuntu. WSL isn’t meant to run production systems; it exists so Linux developers can run their development stack on Windows.
This isn’t splitting hairs. It is a significant and important difference between the two.
…but it isn’t a significant difference in the context of this discussion? WSU and the Linux compatibility layers implemented POSIX APIs for Windows. That the Linux compatibility goes further by offering ABI compatibility is irrelevant in a discussion about APIs.
The point is incredibly simple: POSIX, the Single Unix Specification and Unix are not dead, nor are they in any way irrelevant. They’re relevant for multiple reasons, not the least of which is that Linux implements them, and it’s clear how important Linux is to the world: so important that even Microsoft considers compatibility with the APIs (and ABI) that Linux implements – namely a large portion of the Single Unix Specification!
I’m not sure if I’m understanding your point, but indeed “portability” is a ruse. Most compatibility is really just avoiding compiler bugs, tiptoeing around dialect differences, and tons of preprocessor #ifdef magic (or separate modules, libs, etc.). Rarely is anything automatically supported except for simple stuff. Heck, compilers themselves are rarely portable, irony of ironies.
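A tiny sketch of the kind of #ifdef juggling I mean (my own toy example; nothing exotic, just the usual platform split for something as mundane as finding the user’s home directory):

    // Toy example: even "where is the user's home directory?" needs a platform split.
    #include <cstdlib>
    #include <string>

    std::string home_directory() {
    #ifdef _WIN32
        const char* home = std::getenv("USERPROFILE");   // Windows convention
    #else
        const char* home = std::getenv("HOME");          // POSIX convention
    #endif
        return home ? home : ".";
    }

Multiply that by every path, locale, threading and networking quirk and you get the real shape of “portable” code.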
They are POSIX, though.
z/OS is actually a bona-fide UNIX:
http://www-03.ibm.com/systems/z/os/zos/features/unix/
You mean, like your post?
Biology is full of legacy nonsense: you have more or less the same hox genes as a fruit fly – that’s 670 million years of legacy. Cytochromes are ubiquitous and represent over a billion years of legacy. In evolved systems legacy doesn’t seem to stop innovation; evolution is capable of absorbing bad design and mitigating its consequences.
I’m not saying redesign isn’t a good idea, just that maybe legacy isn’t the brake on innovation that one would expect.
Gone fishing,
That’s a very insightful comparison; however, I also think there’s a crucial difference: our DNA evolved by means of a (near-)infinite amount of entropy as input, evaluated over hundreds of millions of years using the natural fitness selector we call surviving on earth. In effect, the allowable complexity in DNA is practically unbounded when physics itself is the computer.
Focusing now on human programmers, we do have limits to the complexity that we can handle. It’s the reason we have to actively fight spaghetti code or suffer the consequences of complexity overload. It’s the reason we have to refactor overly complex code. When our systems become too complex, it does actively impede our mental abilities to work on them, even if physics itself wouldn’t otherwise have a problem with it.
Mind you I think it’s a very interesting point and this is merely my initial gut reaction, there’s a lot to think about.
Alfman
I think it is an interesting thought too.
I’m not sure how much of what we call thought or design isn’t actually evolutionary in a Darwinian sense: design initiating an almost sexual reproduction of ideas and then an initial culling process.
The biological process is heavily constrained. Redesign is almost impossible; working bad solutions (such as RuBP carboxylase) are kept because fitness cannot be reduced even temporarily; and although there is a “(near-)infinite amount of entropy as input evaluated over hundreds of millions of years”, the initial inputs are extremely modest changes. They always need to maintain backwards compatibility (at least initially) and are generally the cumulative product of point changes.
Do you think these constraints are similar to design constraints (with less famine and death)?
Gone fishing,
In the sense that we are biological systems, and as designers we are able to decide when to ‘cull’ complexity from designs of our own, it would seem rational to conclude that biological systems are technically capable of culling, at least indirectly. Does it follow then that a biological process could somehow encode the tools & processes & knowledge for managing “planned” changes to its own DNA within its own DNA?
I definitely have questions about the limits of protein folding. If DNA can produce a human body & brain, I would think that maybe organs that deliberately alter DNA could be theoretically possible too?
If so, then the question would be whether there could ever exist a natural (or artificial) fitness function that would produce DNA for such an organ. Would there be a meta-fitness-function for this intelligent DNA to control its self-altering capabilities?
It all has the feel of a bad scifi movie
Alfman
Well, the cell does use proteins, even ribozymes (RNA that has catalytic functionality and may be a legacy of pre-protein biology), to change and manipulate DNA. Proteins are also essential factors in gene expression. However, the central dogma tells us that information always flows from code to phenotype and not the other way, though code (genes) can change code (genes) indirectly. If Richard Dawkins is right, all your genes are in competition with each other and you represent a temporary alliance within this competition. The problem is that the gene works for the gene, not for you. You might think a characteristic is good for you, society or the planet, but the gene acts for the gene, not for you, society or the planet; and obviously genes, being inert code, aren’t looking into the future – consider cancer as an example.
Gone fishing,
Sexual selection only works within a species though. Would there be any way for the process to make a dramatic change in one go? If not, then that seems to constrain the possible paths of evolution. Evolutionary paths from A to B that require sub-optimal intermediary steps (i.e. less desirable for the sexual/natural selection process) would end up being culled, which makes B unattainable even if it would be preferable to A.
So, with the above in mind, my unqualified answer to your earlier question would be that these constraints seem different from design constraints, which don’t have to have this limitation if we make a conscious decision to reach point B.
I guess you could make sexual selection a “conscious decision to reach point B” as well, but it would require tens of thousands of generations acting in unison based on artificial data – I don’t think it would actually work in practice.
Well put! This cannot happen in biology – redesign in this sense cannot happen, and so biology is full of ad hoc fixes to sub-optimal design. With all this legacy code, and often significant redundancy, modest redesign can occur and we can have evolutionary experimentation, but the redesign cannot significantly reduce fitness.
I know that in human design this is theoretically possible, but is it in reality? I started visiting OSnews because I was interested in BeOS; it seemed a system with more potential than Win9x. However, it was less fit, and Windows with all its legacy problems is here and BeOS isn’t (not making a comment about Haiku). We, as this thread points out, have significant legacy systems in IT, and redesigning these systems seems problematic. That “sub-optimal intermediary” seems problematic in IT and not just biology.
Gone fishing,
I see what you mean now; I wasn’t thinking quite in these terms. I guess we were fortunate to be around during the earlier years.