We have ended up in a world where UNIX and Windows have taken over, and most people have never experienced anything else. Over the years, though, many other system designs have come and gone, and some of those systems have had neat ideas that were nevertheless not enough to achieve commercial success. We will take you on a tour of a variety of those systems, talking about what makes them special.
In particular, we’ll discuss: IBM i, with emphasis on the Single Level Store, TIMI, and block terminals; Interlisp, the Lisp Machine with the interface of Smalltalk; OpenGenera, with a unique approach to UI design; and TRON, Japan’s ambitious OS standard.
This is an hour-long watch, but I’m getting some coffee and snacks ready this weekend. This seems like total OSNews bait.
Bloody hell… resetting the password on this old account of mine was a PITA 😀
Anyway… I just wanted to say: Thom… you were right about this being a total OSNews bait… so much that I HAD to comment on that (after a gazillion years of lurking from the RSS side).
Regarding the video… too bad that the audio quality hampered my ability to properly understand portions of it (my “English” being self-taught certainly didn’t help with that). Still, seeing the first speaker chuckle (twice!) at that issue with C compilers and the size of ints on that Lisp machine… made my day 😀
Ah, yes, that was a couple of years ago now, I think, but a great talk.
It was on Lobste.rs a year ago:
https://lobste.rs/s/3yhn8s/what_have_we_lost
And Hackernews last April:
https://news.ycombinator.com/item?id=26723886
(Links in case the discussion should prove interesting.)
Can’t watch now, but the AS/400 would be my pick. IBM was awful to deal with in a lot of ways, but there is some really cool tech they had built in and it ran on really reliable servers, yet it’s so different from Unix. I think you could still find useful features of it to grab and throw into Linux or a completely new operating system; maybe you’d get sued, IDK, I’m not a lawyer.
Miss the BeOS days, and even more I miss TOS, and even even more I miss the old days of 8-bit microcomputing here in Europe, when it was like a Choose Your Own Adventure book in terms of the quantity and quality of computers and systems…
Ahh BeOS, that was great, and OS/2 Warp was a blast as well. They ran so fast even on the shite old hardware we had back then. MSFT really needs to take a page from those guys, as I swear my R5 3600 with 32 GB of RAM and a 2200 MB/s NVMe drive doesn’t feel as snappy as those did back in the day.
Of course, the catch back then was they would crash just as fast and easily, whereas I haven’t seen a BSOD since the days of Vista. But those were very “by the seat of your pants” days.
Gosh, that wasn’t my experience. OS/2 2.x and 3.0 had a lot of technical similarities with Win95, but were available years earlier, so Win95-era hardware wasn’t available to run them on. At different times I used 386 systems with 4 MB RAM and a 486 with 8 MB RAM; on 4 MB Win-OS/2 wasn’t usable, and on 8 MB the additional RAM overhead of OS/2 made it less capable than the same system running DOS + Windows. This wouldn’t have mattered if OS/2 had lots of native applications rendering Win-OS/2 pointless, but alas…
There’s plenty in OS/2 that would belong in a “what have we lost” talk. I loved the per-file extended attributes allowing for per-file associations (without file extensions); per-file icons and an integrated icon editor; per-folder customization, etc. The “true” object-oriented Workplace Shell meant it was possible to subclass system-provided object types and extend them trivially. Templates were just user files with a flag, so they could be anything or anywhere (unlike ShellNew, where you get one template per file type). REXX allowed for interacting with these objects in a way that’s arguably now possible with PowerShell, 15 years later.
16 MB RAM and a 486 CPU running at 100+ MHz was the sweet spot for OS/2 3.0.
I had a 486DX4 running at 133 MHz with 16 MB of RAM and switched between Windows 95 OSR2.1 and OS/2 3.0.
The feature I really miss from OS/2 was shadow linked files. Windows 95+ had shortcuts to files that let you open/access a single file from multiple file paths. However, the links were set-and-forget, meaning that if you moved or renamed the destination file, all the shortcuts pointing to it broke. Shadow links were bi-directional, meaning if you renamed the destination file, the shadow was updated to point to the new file. Essentially, UNIX hard links, but with per-link extended attributes (so you could set icons, colours, fonts, permissions, etc. on the shadow separately from the destination file).
At the time, it seemed like magic and not really useful. 25 years later, after using hard-links and soft-links in FreeBSD and Linux, I miss shadows. 🙂
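For anyone who hasn’t played with the UNIX equivalents, here’s a minimal POSIX sketch of the behaviour I’m comparing against (my own toy example with made-up filenames, nothing from OS/2): the hard link keeps working after the target is renamed, while the symlink is left dangling.

```c
/* Toy example: hard link vs. symlink behaviour after the target is renamed.
 * Filenames are made up for illustration. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void) {
    /* create a target file */
    int fd = open("target.txt", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);
    close(fd);

    link("target.txt", "hard.txt");      /* hard link: another name for the same inode */
    symlink("target.txt", "soft.txt");   /* symlink: stores the path "target.txt" */

    rename("target.txt", "renamed.txt"); /* move the original name away */

    /* hard.txt still resolves (same inode); soft.txt now dangles (its stored path is gone) */
    printf("hard link: %s\n", access("hard.txt", R_OK) == 0 ? "ok" : "broken");
    printf("symlink:   %s\n", access("soft.txt", R_OK) == 0 ? "ok" : "broken");
    return 0;
}
```

Shadows gave you the rename-survival of the hard link plus per-link attributes, which neither of these quite matches.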
Have a look at filesystem junctions (they work about the same as symlinks in Linux/OSX). I wonder why MS has not exposed this in the GUI, but I digress.
BeOS is still around in the form of Haiku, and TOS is still around in the form of EmuTOS, which reached v1.0 recently.
https://www.osnews.com/story/129006/emutos-a-free-operating-system-for-atari-computers/
We have lost… hardware diversity.
There were Amigas, Ataris, Amstrads, and many many more on the market competing with very different architectures.
Today we have AMD64 and ARM64, both using DDR/PCIe/NVMe, and that is about it. Yes, there are RISC-V, Atmel AVR microcontrollers, and other “niche” archs, but they are not seen on the desktop, in our hands, or in our pockets.
And CPU design goes hand in hand with OS development. AMD/Intel and MS/Linux strongly influence each other’s designs. While there are some interesting things going on at the ARM end (Apple M1/M2), they are limited to a single developer.
And that leads to all alternative OSes having to play catch-up, forever.
Can you design a LISP machine today?
No.
It will not have the specific instructions to speed up garbage collection, for example. It will run on one of these two archs (AMD64, ARM64) and have to deal with all the intricacies of device drivers, memory models, and I/O systems, all while emulating the hardware operations it needs. It will be slow by default compared to native C++ code.
It does not stop there. For practical reasons you have to reinvent Linux, or design your system on top of a Linux kernel. And then you become just another program running in GNOME, like a million others.
I don’t think we can bring back hardware diversity anymore. The required investments are huge, and only a few companies can actually finance them. And being stuck with two primary architectures dictates the design of all other software intended for general-purpose use.
sukru,
This is an interesting topic!
I would push back on this assertion though, not because it’s necessarily wrong, but because it seems to make a lot of assumptions. To the extent that the same essential program logic can be represented in different forms (i.e. programming languages), there shouldn’t be a performance gap in principle. However, in practice we are obviously very dependent on the quality of optimizers, and that’s where C & C++ have a sizeable resource advantage. This may make them hard to beat for reasons that have nothing to do with the language.
Still, C does have some weaknesses. It suffers from pointer aliasing, which can limit optimization opportunities.
The “restrict” keyword was introduced to help out the optimizer, but few people use it in practice.
https://www.geeksforgeeks.org/restrict-keyword-c/
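A quick hand-written sketch of what I mean (my own toy example, not taken from that page): in the first function the compiler is allowed to assume the two pointers never overlap, so it can keep values in registers and vectorize freely; in the second it has to assume they might alias and emit more conservative code.

```c
/* Toy aliasing example (mine, not from the linked article). */
#include <stddef.h>

/* With restrict: the compiler may assume dst and src never overlap,
 * so loads and stores can be reordered and vectorized aggressively. */
void scale_restrict(float *restrict dst, const float *restrict src,
                    float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = k * src[i];
}

/* Without restrict: a store to dst[i] might change src[j],
 * so the compiler has to be conservative about reordering. */
void scale_plain(float *dst, const float *src, float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = k * src[i];
}
```

Whether that actually changes the generated code depends on the compiler and flags, of course, but it illustrates why aliasing rules matter so much to C optimizers.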
Apparently languages with “better” semantics than C by default can produce significantly better optimizer output…
https://github.com/nim-lang/RFCs/issues/204
Ultimately I don’t feel we should be drawing conclusions without benchmarks in hand. The tradeoffs are complex. At the same time, though, I must concede that reality is going to be very biased in favor of the incumbent technology that has been getting all the resources and investment from the industry for many decades. I do agree that today’s hardware, which has evolved symbiotically with C, is likely less optimized for things like garbage collection and Lisp-like call frames.
I agree, many markets evolve into duopoly and don’t recover after that without some kind of intervention. It isn’t that alternatives don’t have merit, but that they aren’t taken very seriously once the dominant platforms have the throne.
Alfman,
I agree, high level C is not always effortless to optimize. It is very easy to produce slow C or C++ code. However it is possible to write very efficient code as well. That is why this cycle continues.
I am not sure intervention is even possible anymore.
How can a government entity say: “too many devices use PCIe as their bus; in order to get diversity, from now on Acer, Dell and Apple are required to come up with an alternative”? Or something along those lines to force migration to third-party CPU architectures, to stop using POSIX APIs, or to stop using C/C++ for system development?
Any such mandate would be a crippling limitation on the target companies’ competitive ability.
sukru,
To be clear, I’m not expecting there will be an intervention. But hypothetically one of the better arguments to do it might be around security and it could take the form of new requirements to use safe languages for important contracts. But there would definitely be blow back to do it at scale.
That’s true, but it’s also true that incumbents hold back progress. Once in control, the main goal is staying in control and the innovation is steered towards that goal even (especially) at the expense of competition. We’re unlikely to see rapid innovation as in decades past before the market solidified. New markets can give us temporary glimpses of innovation, but to really see an era of rapid innovation across the board we would somehow need to break up the power structures that are strangling competition and blocking innovation opportunities for those with new ideas.
Alfman,
I would agree more competition is better. However, regulation might actually be counter-productive. Take:
An upstart can now tap into existing talent pools from Python, C/C++, Java, JavaScript or one of the other “popular” languages. However, they will have more difficulty building a team specializing in Rust, Ada or another safer language.
Anyway, that is a minor detail compared to the big picture. The real shakeup will probably come from somewhere we don’t expect. Like IBM / Microsoft, or later Microsoft / Linux+Internet.
sukru,
No doubt, there would be a lot of resistance, haha. However the reason I used government contracts as an example is because I thought it was more plausible than the outright regulation of private enterprise. In other words, they’re not restricting your rights on your own time, but the requirements could be stipulated when you’re working on the government’s dime.
BTW it’s unclear to me if you already know this: the government actually did require contractors to use Ada in the 80s & 90s, though there were plenty of exceptions. This is why Ada is common in military platforms (I think they made the right choice not using C haha).
I don’t know…I’ve been advocating for changes inside of both microsoft and linux for such a long time and it seems like neither really wants to change. It actually seems to me like apple is willing to embrace changes more so than the other giants, which makes them more interesting. Still, I think apple’s selfishness may be stifling innovation there just as much.
If not for wall street poisoning their objectives, I think apple could be uniquely positioned to shake up the industry in a number of positive ways. They could open source macs and make major contributions to FOSS, create a free market for apps, open hardware, do a 180 and wholeheartedly endorse our right to repair, etc. Do I expect this to happen? No, of course not. But if they did, it would really shake up the industry, setting a new baseline of expectations and opening the doors for new competitive innovation.
“They could open source macs and make major contributions to FOSS, create a free market for apps, open hardware, do a 180 and wholeheartedly endorse our right to repair, etc.”
I think this wouldn’t change much. And it won’t benefit Apple, since their clients don’t care about such things.
Marshal Jim Raynor,
My social circles may not be representative, but I know several people who are mac users and very into FOSS. Linux user groups are dominated by mac users. In fact, one of the people who introduced me to FOSS was a mac user. I actually think it would be extremely well received by those mac users using and promoting FOSS. But *if* apple used its influence to genuinely promote openness & FOSS, then it would become a mainstream talking point in ways that linux hasn’t been able to. I think it really could be transformational.
Of course it’s hard to see how this could happen with apple. But who knows: if the antitrust courts come down hard enough on them, significantly ruling out the “benefits” of vendor lock-in and anticompetitive restrictions, then instead of resisting and dragging their feet apple might take it on as an opportunity to improve in other ways. Taking closed & anti-competitive models off of the table liberates apple to consider alternatives that are positive for consumers. They could whole-heartedly embrace the competitive aspects of openness in a way that hasn’t aligned for apple before.
I’m not sure Tim Cook would be the right CEO to do it, but under the right leadership…maybe.
I understand the nostalgia for diversity, and I fondly remember the optimism surrounding the different architectures we went through in the ’90s. Back then, progress seemed imminent. Now, innovation is stifled, delayed or postponed due to markets evolving into duopolies. Both hardware and OS platforms have stagnated over the last 20 years or so. Unfortunately, this is all quite understandable – even inevitable.
What I did not expect was the lack of progress – even regression – in user interface design.
tomchr,
There were some interesting UI concepts that started with Smalltalk, which were then streamlined into NeXTSTEP by Steve Jobs’s new company. However, those were rolled back into Apple as Mac OS X, and have been almost completely forgotten in its latest iterations.
Linux also had very interesting ideas, like tiling window managers. But the tight integration of everything under the DBUS/system-session/compositor umbrella made their maintenance more difficult, and today they are becoming increasingly obsolete.
Yes, I agree we need more diversity in the UI / UX land as well.
Smalltalk is a language, not a user interface.
The only connection between NeXT and Smalltalk is that the original creators of Objective-C took a few concepts from articles they read about Smalltalk when designing their language.
Tiling window managers were a thing well before Linux.
True,
Smalltalk is a programming language, but it is also a system, almost “hermetically” contained.
I’d recommend looking at the modern open source version, Squeak [ https://squeak.org/ ], or the JavaScript emulator: https://squeak.js.org/demo/simple.html
Everything, including the UI itself, shares the same object ecosystem, and you can modify it all on the fly.
Objective-C ties into this by being essentially “Smalltalk in C syntax”, albeit missing the runtime environment: https://en.wikipedia.org/wiki/Objective-C
(There is also Scratch, their cousin for teaching programming, but that is just a side note.)
Although the authors of Objective-C were inspired by some of the features from Smalltalk, I wouldn’t go as far as saying that ObjC is “Smalltalk in C Syntax”
ObjC most definitely has a runtime.
I’d say, if anything, Java is a more direct descendant of Smalltalk, as they are both contained within their virtual machine targets.
javiercero1,
Going on a real tangent here… but I like a good technical discussion…
Anyway, Smalltalk is
+ class based, but classes themselves are live objects you can modify at runtime
+ message passing instead of method / function calls
+ can intercept invalid messages
+ null does not crash, is a valid target
+ expressions use named syntax, and don’t follow “math syntax” (no operator precedence, so 1 + 1 * 2 = 4)
Objective-C is
– class based, but classes can be replaced during runtime
+ message passing
+ can intercept invalid messages
+ nil is ignored (optional)
+ expressions use named syntax
Java is
– class based, with no (easy) mechanism to replace them
– uses method calls instead of messages
– cannot intercept invalid calls (except for runtime exceptions)
– null causes a crash
– expressions use math syntax, like C
I think even JavaScript is more similar to Smalltalk than Java would be.
I’d say object orientation targeting a virtual machine represents a far bigger commonality than specific implementation details regarding the handling of null.
I’d say in spirit Java and Smalltalk are very similar on what they were trying to achieve, while Objective-C and Smalltalk bear some similarities in how they implemented their achievements.
But to each their own.
javiercero1,
actually…
the history goes like
smalltalk + c -> objective-c
and
objective-c + simula -> java
at least according to the designers of the Java language themselves:
https://cs.gmu.edu/~sean/stuff/java-objc.html
http://archive.computerhistory.org/resources/access/text/2019/10/102781105-05-01-acc.pdf
anyway, sorry for dragging this out.
The irony is that right now there is far more HW diversity in the marketplace than ever before. And that you can actually make your own LISP machine if you want with an FPGA, which will be orders of magnitude cheaper and faster than anything that came from that era.
Some people tend to look at the past with undue nostalgia, while they contemplate their present with unfair hostility.
javiercero1,
Not really in mainstream though. The vast majority of our computing devices have consolidated to relatively few architectures. Granted many different companies are selling them but under the hood there isn’t a whole lot of diversity. I think globalization had a lot to do with this. Who knows, maybe if world events could reverse globalization long enough, we could see a rise in the popularity of locally developed computers 🙂
I imagine that’s true, but to be commercially relevant it probably has to be competitive with devices in the present rather than those of the past.
Well, LISP machines were never commercially relevant even in their heyday.
People really need to stop confusing instruction encoding with the entirety of a computing architecture.
Even within the same x86 vendor, there are significant differences between microarchitecture families. And AMD, Intel, and Apple PCs are very different system architectures internally, even if their interfaces have been standardized (and that is a very good thing, BTW).
You have even more architectural variety among SoC vendors. Apple, Qualcomm, Mediatek, Huawei, etc. all offer very different systems architecturally.
You can now walk into a BestBuy and can buy dozens of different architectures in all sorts of configurations and form factors. Back in the 80s, most of the “differences” in stuff you could buy were actually far more superficial, on account of the systems being far far simpler, than modern products.
Heck, you can now buy yourself a PC which supports at least 3 different programming models right off the bat, and you can program it with tools that are basically free. And the PC itself will be running at least 3 different ISAs.
javiercero1,
It’s because the syntax is just awful, haha. But actually its functional nature was powerful, and the ability to use the language itself to naturally manipulate code makes it useful for things like AI even to this day.
Ok, but nobody here has confused those.
Yes, there are many vendors licensing and/or implementing the same or highly derivative architectures, but I think it’s safe to say the rest of us were talking about outward-facing diversity in computing, especially the user experience, and not just microarchitectures.
> Today we have AMD64, and ARM64, both using DDR/PCIe/NVMe, and that is about it.
I don’t disagree with the high-level point, but note we didn’t really have ARM64 or NVMe a decade ago, or AMD64/PCIe two decades ago. Somehow we went from fairly fragmented ARM32 phone chips to ARM64 on phones to ARM64 on desktops. The optimist in me thinks that if the ubiquitous things we have today are really fairly recent developments, the ubiquitous things we will have in the future will not look the same as today.
Sure, Windows has taken over the world. But Unix??? No, no and NO. Not Unix but Linux has taken over the world. The entire internet is more or less driven by this operating system, because it can do the same as Unix but is free to use and modify however one likes, for something like a router, for example. Sure, there are 900 million phones running a Unix kernel, but you also have 2.5 billion running a Linux kernel.
brostenen,
You’re technically right, Linux isn’t Unix™. Regardless, Linux was designed to be a unix clone from the start, and in taking this path it has inherited both its strengths and weaknesses. Eventually linux would grow and take over the role of unix in setting de facto standards for *nix. This isn’t always appreciated by others like the BSDs, who often end up having to copy linuxisms to be compatible with the mainstream, but that’s the way it goes.
Linux *is* UNIX™.
I am so tired of this story. It is not true and it has not been true since 2 years after Linux was invented.
Up until 1993 “this OS is a UNIX” meant “this OS is based on licensed AT&T code.”
But then Novell bought AT&T Unix Labs. Novell kept the OS (as Unixware) and donated the *trademark* to the Open Group.
The Open Group owns the Unix trademark. Anything that passes Open Group testing is entitled to call itself a Unix™.
Here is the list:
https://www.opengroup.org/openbrand/register/
Note: a Linux distro is on the list.
https://www.opengroup.org/openbrand/register/brand3622.htm
Huawei EulerOS passed the testing. It is a RHEL clone. It is a UNIX™.
Therefore, Linux is a Unix.
EulerOS is not alone. Inspur K-UX also passed testing, but Inspur doesn’t market K-UX much anymore, so it has not paid the annual license fee.
https://www.opengroup.org/openbrand/register/brand3617.htm
What in the 1980s was called “POSIX testing” has for 29 years been called “Unix testing”; anything that passes is Unix-compatible, and if the vendor pays for the trademark it’s allowed to call the OS “a Unix”.
Linux is a Unix. Has been for years.
The AT&T code thing has been irrelevant since anyone in their 40s now was a small child. It is time to let it go.
lproven,
Unix is a trademark, and I’m glad you acknowledge it, but then you should also acknowledge that to be “Unix” or not is NOT merely a matter of capability and compatibility, it’s a matter of licensing and legality.
That’s kind of the point, the majority of linux distros have specifically not passed the official requirements (whether we agree they should be there or not). Even your wording “is unix compatible” conveys a logical distinction between being unix and being unix-compatible. And you are also right that a vendor needs to pay a license fee to be allowed to call something “unix”, which linux distros in general are not doing.
Your links don’t refute these points; they don’t even mention linux at all. I think you were going for this logic: a RH linux derivative was certified as unix, therefore linux is a unix. But it doesn’t work that way. While I’m not privy to the unix certification requirements, just as a point of logic: a RH derivative being certified does not automatically mean RH, centos, ubuntu, debian, slackware, etc. would pass those requirements without any changes.
I won’t pretend to be an authoritative source on compatibility since I haven’t touched a real unix in years, but there were times in school when code & makefiles needed to be “ported” between my linux computer at home and solaris unix at school.
It seems that you are bothered by people who are being pedantic about it, and I suppose that’s fair, you can push back on the pedantry itself for being tiresome. But that still doesn’t invalidate the technical arguments being made.
Ok but for the record I never brought that up, you did.
If I were in your shoes, my response to “linux isn’t unix” might be “who cares, for all intents and purposes linux is an industry accepted substitute for unix”, which is hard to argue with. Anyway hopefully we get more interesting stuff to talk about soon 🙂