The story of the Amiga family of microcomputers is akin to that of a musical band that breaks up after one incandescent, groundbreaking album: the band may be forgotten by many, but the cognoscenti can discern its impact on work produced decades later.
So the Amiga 30 event held at Silicon Valley’s Computer History Museum in late July was more than a commemoration of some interesting technology of the past. It was also a celebration of the Amiga’s persistent influence on personal computing.
The Amiga was easily 10 years ahead of its time. Too bad the good ones rarely win. This is also a good moment to repost the 8-part series on the Amiga at Ars.
I just…I have something in my eye, okay!
Let me have a guru meditation for a couple of days…
I made audio codecs for Amiga extension boards back in the day, and in many ways it was clearly well ahead of its time.
But software-wise, in many respects, it had an extremely primitive OS that caused endless problems for our users (no real memory protection, for one). Commodore was pretty good at putting lipstick on the thing, but the OS kernel was pretty bad.
That said, I’m still booting up the workbench for a thrill every year.
The OS didn’t have memory protection because the 68000 processor doesn’t have memory protection. That’s not the fault of the OS or the developers.
That was just an example, and while the lack of an MMU was the real reason, it still made for a primitive OS.
Not really true; there might be other reasons, like:
http://www.fultonsoft.com/category/atari-st/revisiting-gem-for-the-…
Parts 8 to 1 (oldest)
Also, memory protection takes more memory. Windows didn’t even have proper memory protection until the NT line, which had pretty steep hardware requirements at the time.
The apologist fleet has arrived, I see. Point is, AmigaOS was anything but perfect, and it was ahead of its time in only some respects.
The problem is, you’re calling the OS “primitive” because it didn’t have memory protection (for example). Except back in 1985, it was up against MacOS & DOS/Windows, which didn’t have memory protection either.
The fact that you’re comparing it to far more modern systems, which came quite some time after Amiga OS was developed, goes to show that for the standards of the time, it really was ahead of its time.
No, I did not. I gave that as an example. As a device driver developer for the audio extension boards, I know exactly in what ways it was primitive. The “awesome preemptive kernel”, for instance, had major issues with spurious IRQs screwing up the entire system.
Was it primitive compared to its contemporaries? Inserting IRQ handlers under DOS wasn’t exactly a laugh a minute. It’s easy to call something “primitive” 30 years after the fact, but it was still well ahead of what the others were doing 30 years ago.
The AmigaOS was awesome back then. It is not awesome today. AmigaOS had preemptive multitasking; MS-DOS did not, and neither did MacOS, Windows 3.11, etc.
I think you are being unfair. How many OSes even had preemptive multitasking at the same time as the Amiga? Sure, mainframes had it, and Unix, but both were very, very expensive.
Thom’s “10 years ahead” claim wasn’t qualified by cost.
It was 10 years ahead of its time for an operating system for a £400 computer. And much better than what you could get for some £1,000+ computers too.
But it was so ugly with these psychedelic colors on screen (Atari fan teasing)
10 years ahead means it could rival a 486 running Windows 95 and a Gravis Ultrasound.
The Amiga certainly had some cool things, especially its audio hardware and graphics acceleration, but 10 years ahead it certainly was not.
Being pedantic: AmigaOS was released in July 1985 and Windows 95 in August 1995, so Windows 95 was over 10 years away from AmigaOS. I’m kidding of course…
Haha, OK I stand corrected.
Heh, I was there, right in the middle of it, and I’m afraid you are wrong.
AT&T had an official port of Unix for the Amiga in 1990.
Google AMIX
AT&T System V Release 4
Yes, Commodore even sold the A3000UX which came with a copy of AT&T SVR4 (As “Amiga UNIX”). Sun even talked to C= about licensing it as a Sun workstation; in typical C= management style it never went anywhere.
I don’t think people are being apologists, just questioning the use of the word ‘primitive’.
Obviously by the standards of today it is primitive, or if you try and cherry-pick the best features from Unix and other OSes of the time, but that’s not entirely fair either. It deserves to be compared to its competition – things like System 6-7, MS-DOS, Windows 1-3.x, GEM, RiscOS, etc.
True, AmigaOS wasn’t perfect. If they’d had the time to finish the foundations instead of having to use TripOS, it could’ve been really special. Still, for me, it’s one of the all-time great OSes. It’s a shame we never saw an NT or OS X-style version.
They could have added an MMU with memory protection.
The Atari ST had a custom MMU, but they did not bother to add memory protection logic. Today it is obvious that memory protection is important, but back in 1984 there were more important things about a computer.
“How the Atari ST almost had Real Unix”: http://www.dadhacker.com/blog/?p=1383
I used Amigas pretty heavily in my early production days as both an animation tool and a television-graphics workstation. I remember about 4-5 different models of them still around lower-budget studios in the early ’90s when I started.
It could definitely do some things that no other computer of the time could. Crashing on nearly anything was on that list.
This was the time of the Mac’s System 7 and Windows 3.11 and early Win95, so things crashed all the time. I used to live with my hands on the save command.
That said, my experience with the Amiga can be summed up with “as soon as some other platform adds this feature, this thing is done”. They stuck around because of their strange feature set, but they were so unstable in a production or live environment.
Lightwave…
I remember in 1992 opening a box of stuff I’d sent myself by surface mail from Taiwan and finding a few things in there I didn’t recognize, including one copy of NewTek’s Video Toaster (the origin of LightWave 3D) for the Amiga. Customs must have messed up when they checked the box and were putting things back in.
I’d never heard of Video Toaster but was later told it was the premier *desktop* video editing & effects software in the industry at the time (used in Babylon 5 etc., IIRC) and worth thousands of dollars. I think I just left it in my dad’s basement!
Great article, complete with obligatory ’90s references to Wayne’s World; a “borgasm”; and a cheesy video starring Wil Wheaton, Penn Jillette and Tony Hawk that successfully heralded the coming user-generated video revolution that would break the stranglehold of the TV networks, hosted unironically on YouTube.
http://www.8-bitcentral.com/blog/2014/kikiNewtek.html
I’ll always have a soft spot for the Amiga. It was my computer of choice whilst growing up from the age of 7 until finishing secondary school at 16. Amiga 500, 600 (with SCSI CD drive), 1200, CD32 (with SX1 expansion module), 3000… They all served me extremely well. How different the world would be now if Commodore had become the dominant PC manufacturer…. Ahhhhhh…. Sigh…..
With Tramiel and Gould at the helm it would not necessarily have been a better world.
Too true. After reading ‘On the Edge’ you’d think they were more hell-bent on destroying the company than making it successful.
In 1985, my high school evaluated three new computers as potential replacements for the lab full of Apple IIe computers.
The IBM PC Portable was a reliable workhorse, but had a limited display; it was primarily used by the yearbook staff, as their software ran on PCs.
The Apple Macintosh worked well, but was eye-wateringly expensive, and no longer benefited from the “creative” academic sales programs of the Apple II series.
Then there was a new computer– the Amiga. Shiny. Pretty 3D bouncing ball demo. Compatible with absolutely nothing, practically no applications, and could be crashed by moving the mouse too fast.
Now, this was 1985. This was one of the first Amigas shipped to dealers in Florida. I know the Amiga went on to be a much, much better computer than the initial impression we got evaluating the first model, but that initial impression lasted a long time.
By comparison, the Apple IIgs came out the following year, and was compatible with all the software our lab already had invested in.
I really miss my 3000T :’-)
Primitive indeed, humph! Off now to play Superfrog and Speedball 2 on the A1200.
AFAIK, the OS kernel never changed much between 1.0 and 3.9, so it never gained that many features, other than the ability to suspend programs, which only half-worked.
I think that if Carl Sassenrath had stuck around for a rewrite (he wanted resource tracking in the OS at the very least, but never had the chance to do it), he probably could have gotten those things in there for a much more stable OS. Alas, Commodore never embraced an MMU as a standard thing for Amigas.
But for 1985, it was pretty damn good and performance was through the roof. Reboot times were fast for the time, which was perfect if you had a crash, or if there was a power outage.
The architecture of the OS was fun too: Build your own Workbench laboriously from scratch with exactly the features and programs you need. Did that many times. Try that with Linux.
I used my Amigas until 2004, and at the end they could still do things my PCs at the time couldn’t, but they were left in the dust with regard to CPU power.
Most of the BCPL was replaced with C in 2.0, but the functionality didn’t really change that much, no.
The worst part of MMU handling was all Motorola’s fault. In order to sell more parts, Motorola would mark CPUs whose MMU failed in ANY WAY as an EC part, and sell it as a “CPU without an MMU”. Except it DID have an MMU, just a faulty one. Many people found their EC030 or EC040 processor ran memory management code anywhere from 75 to 95 percent okay before crashing. There was simply no way for developers to tell with software whether or not a CPU had a working MMU, so if you supported the MMU, you had to write two versions of the app – one with MMU support and one without.
If you look in any of the 030/040/060 hardware manuals, the EC versions do “unspecified actions for an unspecified length of time” when they encounter MMU opcodes. In other words, different processors will have faults in different parts of the MMU, so you can’t count on any particular opcode or page entry working in a set way. Most often, code to test the presence of the MMU passes just fine. But when you try to use page tables, things don’t map right and stuff crashes.
My own code would have a user control in the GUI that turned on MMU handling when it was part of my app. It would default to off, and the user could turn it on if they knew they had a properly working MMU. Then I’d have two sets of code for using or not using the MMU where applicable.
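Roughly, the pattern was as simple as this; a minimal sketch in plain C with hypothetical names, not my actual driver code:

/* User-controlled switch: defaults to off, and the user only enables it
 * in the GUI if they know their (non-EC) MMU actually works. */
static int use_mmu = 0;

static void process_with_mmu(void)
{
    /* page-table / remapping path; only safe on a fully working MMU */
}

static void process_without_mmu(void)
{
    /* fallback path that never touches MMU opcodes */
}

void process_block(void)
{
    if (use_mmu)
        process_with_mmu();
    else
        process_without_mmu();
}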
Motorola SHOULD have had an easy-to-burn integrated fuse for the MMU so that if it tested bad, they could burn the fuse and the MMU status opcode would generate an exception. A simple method to tell whether there was an MMU or not.
I’m not sure it’s fair to blame Motorola for binning parts. The real issue was that the original 68000 didn’t even support the concept, and the 68010+68451 sucked. Other microcomputer OSes didn’t have memory protection so it wasn’t seen as a priority.
By the time the (non-EC) 68020 was available AmigaOS was already in place and the design explicitly assumed no memory protection was in use, so adding it would have been a major re-design.
Also let’s not forget this is Commodore we’re talking about; given the option between the 68020+68851 or the cheaper 68EC020 and no MMU, they were always going to pick the cheaper part.
Sure it’s fair to blame Motorola. From the beginning, the 680x0 MMU was based on Line-F opcodes for coprocessors (just like the FP coprocessor). On ALL processors in the family, the use of Line-F opcodes without the coprocessor generates a Line-F exception. Without exception! So if the MMU is truly missing, you ALWAYS get the Line-F exception. Motorola chose to save a couple of bucks by selling chips with faulty MMUs as chips without an MMU, which makes it impossible to look for an exception to determine whether an MMU is present: the MMU really IS there, it simply fails at some unknown point of operation.
They didn’t have this problem with the floating point – the LC series (has an MMU, but no FPU) properly generates a Line-F exception when checking for the FPU. So you can check for floating point in code and rely on the result. That should have been how the MMU was handled as well.
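For what it’s worth, on the Amiga that reliable FPU check ends up being trivial, because exec does the Line-F probe at boot and records the result. A minimal sketch, assuming the standard AmigaOS exec headers and a 68k Amiga C compiler:

#include <exec/types.h>
#include <exec/execbase.h>

extern struct ExecBase *SysBase;

/* exec sets these AttnFlags bits at boot if the coprocessor answered;
 * there is no equivalent trustworthy flag for a *working* MMU on an EC part */
BOOL have_fpu(void)
{
    return (SysBase->AttnFlags & (AFF_68881 | AFF_68882)) ? TRUE : FALSE;
}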
The OS wasn’t primitive, it was limited. OTOH it took its limits and ran with them.
Yes, there are all kinds of things wrong with Exec et al., and an MMU would not have changed much, as it would have required a redesign of the OS into something totally different from what it became.
So instead let me point out what I think are the flaws of the OS. Note that these could all have been fixed with 20/20 hindsight, and later versions tried to offer workarounds:
– public OS structs that are directly readable (and writable!). There should have been some sort of OS call needed to access them so that the struct size was not set in stone.
– public OS structs that are forever set in stone size-wise.
– application programs can allocate public OS structs and so lock down what the OS can do. This should never have happened; there should have been OS calls for allocating all structures so they could change (expand) at the OS’s whim.
– a global address (memory) pool exposed to all processes. Each process should have been given a default private memory pool (there could have been a field for a custom size somewhere in the executable header) that the OS worked with and which was expanded if it was too small. The global pool hurt speed, caused fragmentation, and made it easier for a single task to bork the whole machine.
– some kind of resource tracking and ownership should have been defined.
– code segments should have been defined as read-only even if that was impossible to enforce.
None of these would have been enough to make it a “modern” OS with all the bells and whistles that you need, but these things made it an uphill struggle for the OS developers to expand and add features while staying compatible. While the graphics parts of the OS are forever tied to the planar architecture, the use of set-in-stone structs makes new screenmodes and architectures (even AGA, which was just 2 extra bitplanes and 24- vs. 12-bit colour) an exercise in jumping through hoops to get anywhere.
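To make the struct points above concrete, here is a rough sketch in plain C with hypothetical names (not the actual AmigaOS API) of a public fixed struct versus an opaque handle behind OS calls:

#include <stdlib.h>

/* The AmigaOS way: a public, fixed-layout struct that applications read,
 * write and even allocate themselves, so its size is set in stone forever. */
struct PublicScreen {
    short Width, Height, Depth;  /* adding a field here breaks existing binaries */
};

/* The alternative argued for above: an opaque handle plus OS calls.  The OS
 * owns both the allocation and the layout, so it can grow the structure
 * (new screenmodes, deeper colour) without breaking old programs. */
typedef struct OSScreen OSScreen;

struct OSScreen {
    short width, height, depth;
    /* future fields can be appended here; callers never see the layout */
};

OSScreen *OS_OpenScreen(short width, short height, short depth)
{
    OSScreen *scr = malloc(sizeof *scr);  /* allocated by the OS, not the app */
    if (scr) { scr->width = width; scr->height = height; scr->depth = depth; }
    return scr;
}

short OS_GetDepth(const OSScreen *scr) { return scr->depth; }
void  OS_CloseScreen(OSScreen *scr)    { free(scr); }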
But the exposed nature of the OS made it exciting for tinkerers, who have been able to do all kinds of interesting stuff both under and over the hood.
If some of the FPGA remakes can get some traction (I have been waiting years for that, though), I think people will be very happy, but then after a while I think (hope?) there will be a few “what if?” approaches that will try to add some zest to both the hardware and software side and make it interesting again (though I won’t claim “relevant”).