This is a quick demonstration of the QNX 1.44 megabyte floppy disk demo.
QNX is an advanced, compact, real-time operating system. This demo disk, released in 1999, fits the operating system, the "Photon microGUI", and the HTML 3-capable Voyager web browser all on a single 1.44 MB disk!
So far no emulator or virtualizer I have tried will run this QNX demo 100%, so this is running on real hardware. The video is captured with a VGA capture device.
QNX is one of the most intriguing operating systems of all time. This demo disk is one of those things that, even today, blows my mind. Be sure to watch through the whole video, especially the part where extensions are downloaded and run from the web, all on a single 1.44 MB floppy.
So QNX made a fully functioning graphical OS fit on a floppy in 1999… Shame Microsoft can’t make Windows fit on a CD in 2013…
Yeah, for whatever reason Windows went from like 1 GB installed (XP) to around 15 GB installed (Vista). I know they didn't pack 14 GB of new features in there, so where the hell did all that space go? If we go back further, I'd have about 60 MB of used disk space after installing Win95.
Vista Enterprise 64-bit is 8.4 GB installed, not counting the swap file and the hibernate file.
Where does all that space go? Well, now there are both 32- and 64-bit system libraries; all the drivers that Windows ships with are copied to disk; and all of the extra Windows features that aren't installed by default also reside on disk, so that when changes are made to the install, Windows doesn't ask for the install DVD.
And none of that crap can just be compressed as small as possible until it's actually needed by someone? I don't mind having the drivers on hand, but do they all need to be installed by default? It'd make more sense to compress them and, if that hardware is ever detected, uncompress that driver and install it, or not, since I, like most people, will just go to the manufacturer's site and download the most up-to-date driver.
Do drivers really take up that much space? I’d have thought it would be the .NET runtime support.
That wonderful bit of MS software.
48 MB download (v4).
200+ MB of patches to it.
Says it all, really.
Isn't it time for a re-master?
(All sizes approximate.)
I used to save 4 GB on the Mac OS X Leopard install. So yes.
I have several hard drives with a terabyte's worth of space each, and an internet connection that can download a DVD in about 20 minutes.
Given this situation, I would rather they package as much as possible into the OS from the get-go than have it constantly nag me for the install DVD.
That internet connection and the fact that mobile storage has shrunk, thanks to SSDs, are two reasons why it makes no sense to have all the possible drivers.
However, I don’t believe that it’s the drivers that take up that much space.
I updated my GPU drivers on my work machine and the download from nvidia.com was about 200-250 MB.
Removing the printer drivers took the Leopard install ISO from 7-8 GB to less than 4 GB, so I didn't have to burn it to a dual-layer DVD. Plus I expect there are loads of libs for backwards compatibility, artwork, help files, etc.
I recently wrote a small web app, and after I included libraries from Amazon and the usual JS and CSS libs, I already had a solution that was about 12 MB.
Most of those 178 MB are various GUIs and support for several dozen GPUs.
Artwork and libraries would explain it.
I was more commenting on how these things can quickly add up.
I couldn't believe it when my 5-page web app was at about 3000 files (I use libraries a lot, due to time constraints) and 11 MB in size (though the actual page load was quite light … lots of server-side stuff).
To be honest, that size is idiotic. I bet the lion's share of that is not really drivers at all, but control panels and god knows what crap they put in there to reach these ridiculous sizes.
4GB…say no more.
I would expect, and hope, that’s what really takes up space.
So did I. I wrote one to visualize OpenTSDB data. It's ~800 KB. That's with Python code, JS, CSS and Bootstrap.
Fast/reliable connections aren’t always available, just as the Windows CD isn’t always available.
Plus, a lot of users might not know to go to the Internet when their printer, or some random USB device, doesn’t work.
I’d wager that these use cases are far, far more common than “Boy, I wish I had that extra 4 gigs, since the use of that space bothers me aesthetically.”
Weird how your argument refutes your own point. Your internet connection is 3+ MB/s. The software you need is probably less than 100 MB, which is 30 seconds of download time. At best you gain 30 seconds of precious time by having everything you might ever need installed and updated.
So wouldn’t it be nice to give users the option of not installing all those extras and only downloading them when needed?
Wouldn't it be nice if backup stuff like WinSxS (11.5 GB), Installer (4.6 GB) and DriverStore (4.6 GB) could be placed on my 2 TB hard drive instead of my 0.120 TB SSD?
Putting WinSXS on your 2TB drive would defeat the purpose of having Windows on the SSD, since it gets accessed a lot for compatibility reasons, such as when a program needs an older version of a DLL. It also can’t be on another drive, since it’s made up of hardlinks, which can’t link to files on other file systems. At the same time, that means that it doesn’t actually take up 11.5GB of space – Explorer doesn’t handle hardlinks intelligently when calculating file size and disk usage. Interestingly, Explorer does calculate disk usage for sparse files correctly.
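For the curious, here is a minimal, purely illustrative Python sketch of that difference, assuming Python 3.5+ (where os.lstat() exposes the volume serial and file index on Windows); the helper name and the WinSxS path in the comment are just examples, and this is of course not how Explorer itself works:

import os

def dir_size_counting_hardlinks_once(path):
    # Naively adding st_size for every entry counts a hardlinked file as
    # many times as it has links; track (device, file id) pairs instead,
    # so each physical file contributes to the total only once.
    seen = set()
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                st = os.lstat(os.path.join(root, name))
            except OSError:
                continue  # unreadable entry, skip it
            key = (st.st_dev, st.st_ino)
            if key not in seen:
                seen.add(key)
                total += st.st_size
    return total

# Hypothetical usage:
# print(dir_size_counting_hardlinks_once(r"C:\Windows\WinSxS") / 2**30, "GiB of unique data")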
Still, it is possible to shrink it (and quite easy), but you’d lose the ability to uninstall updates or service packs.
As for the other things, well, sure it’d be nice, but it really isn’t that big of a deal. For the majority of use cases, the convenience and predictability of having those parts stored locally outweighs the benefits of saving 8 or 9 gigs.
It's even more of a shame that this type of simplicity, integration, and functionality is still missing from graphical Linux-based desktops in 2013.
But any GNU/Linux distribution is just a collection of parts put together to make an OS. It's hard to make any sort of consistent OS when there are, for example, various windowing systems with a multitude of window managers developed for each.
There is some truth to that; Linux distributions do tend to be a collection of parts more than a whole system. But the parent's argument could be made for the BSDs too. They are just as badly off with integration and features, despite being developed as a whole system.
You’d get some who’d argue that’s the “UNIX philosophy” at work… lots of small independent pieces, glued together to make something bigger.
No, not for the part, i.e. the base system, that is actually developed as a whole. That part is very well integrated and documented.
Now, when you start adding X and crummy apps developed with the “eh..that works on Linux, who cares about anything else” mindset then the integration obviously suffers a bit.
Care to share any specifics? I'd have to disagree with several of those points, especially functionality. This from someone who used the QNX demo floppy on a regular basis back then. There is no way it's more 'functional' than a modern Linux desktop.
Functionality? There’s no Linux distribution available with as little functionality as there is on that QNX floppy.
The only Windows that sort of fits on a CD is WinPE, which is only used for installation and running a handful of utilities.
As far as I remember, none of the “lite” after-market tools to shrink the Windows XP or Vista install got below a 2 GB recommended USB flash drive.
And as for Linux distributions, there are a few which can still fit on a mini-CD (135 MB), and one of the better-known ones is Puppy Linux.
There are a number of hobby OS projects which pack a lot of functionality into a floppy disk. MenuetOS is one which has been discussed on OSNews a few times.
Could it be that the installed disk requirement of an OS is exponentially proportional to the number of programmers in the team?
BeIA could live in a 16 MB image with only file system compression and give a fullish OS experience when booted to the desktop. I believe it could be made a lot smaller when crushed (which is what Be Inc called the process of creating compressed ELF binaries by removing common symbols into a common dictionary loaded by, IIRC, the kernel). Crushing the ELF binaries to CELF (the magic symbol in the binary header goes from ELF to CEL), and using the CFS file system, I think one could get the entire OS down to circa 8 MB. If more was stripped out, I think it was possible to get it under 8 MB total, i.e., you'd have the OS and actual disk space left in the image file.
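For anyone wondering how a shared symbol dictionary saves space, here is a rough, hypothetical Python sketch of the general idea only; the function names and layout are made up and this is not Be's actual CELF format:

def build_symbol_dictionary(symbol_tables):
    # symbol_tables: one list of symbol-name strings per binary.
    # Store each distinct name once, in a shared dictionary.
    dictionary, index = [], {}
    for table in symbol_tables:
        for name in table:
            if name not in index:
                index[name] = len(dictionary)
                dictionary.append(name)
    return dictionary, index

def encode(table, index):
    # Each binary then keeps only small integers pointing into the dictionary.
    return [index[name] for name in table]

# Example: two binaries sharing most of their (libc-style) imports.
tables = [["open", "read", "write", "malloc", "app_main"],
          ["open", "read", "write", "malloc", "server_main"]]
dictionary, index = build_symbol_dictionary(tables)
encoded = [encode(t, index) for t in tables]
print(len(dictionary), "shared names instead of", sum(len(t) for t in tables))

The win comes from storing each repeated name only once across all binaries; real-world executables share a huge number of identical import names, which is presumably where the bulk of the savings would come from.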
IIRC the QNX4 disk (which I used back in the day, circa 1998/1999, when it was pretty new still) was not writeable in any way. Every boot the user had to set the OS up. No data was saved to the disk.
Was writable LiveCD technology common then at all? What OSs allowed that?
Are you referring to InCD, allowing for the use of CD-RW disks like a floppy or flash drive?
As far as I know there were no CD-based operating system distributions that used that or similar. Some, however, could load configuration files and user settings from a floppy.
http://web.archive.org/web/20010608092643/www.be.com/products/beiaf…
Which has what to do with BeIA? The most common way BeIA was distributed was on flash, but it would run from any device that Be had written block device drivers for. For example, the DT300 Webpad has a compact flash card internally. Other units used different flavours of flash storage. None used any kind of CD-based medium. CFS was a compressed version of BFS, more or less. The version I used had spotty support for attributes, as I remember it. The built-in Search and live queries flat out didn't work, at least from Tracker. CFS definitely had nothing to do with CDFS.
The floppy was just not writeable due to the size restrictions of what they managed to cram in.
Hmmm :
http://www.tuxboard.com/photo.php?large=2013/11/Infographie-Combien…
Kochise
Heh. I’m highly amused to see this come back again. I still have it in my retro software archive. It was quite nifty in ’99 ..and it’s even more so in ’13.
Bloat anybody?
Same here, I was impressed by this demo disk at the time, and most importantly: it is a realtime OS 🙂
… meaning that, in the end, who cares about the GUI?
Dunno about QNX, but the Amiga-style music had me pumped the whole way to the end lol
Only that very first version was floppy sized, as I recall. All of the subsequent releases were CD ISO files, which I probably still have filed away.
More amazing, a fully working MenuetOS + software still fits on a single floppy.
“QNX is one of the most intriguing operating systems of all time.”
Holy hyperbole Batman!
I worked with QNX, it was not that intriguing of a platform. It was very well documented, and the programming paradigm was pretty well understood.
It was a very well engineered system, perhaps that’s what some may find “intriguing?”
Well, that certainly qualifies it as unusual…
I agree.
When it comes to most intriguing, my vote goes to Amoeba.
A distributed OS.
http://en.wikipedia.org/wiki/Amoeba_(operating_system)
Sounds like Erlang's distribution mechanism, except Erlang comes with a suitable functional and concurrent programming language, unlike Amoeba with Python.
Kochise
Another very interesting one was Tao's TAOS:
http://c2.com/cgi/wiki?TaoIntentOs
Binary distribution? Source code?
Kochise
As far as I could find, Tao Group closed and its IP was sold in 2007, unfortunately.
The concepts themselves are very interesting and I think they were quite ahead of their time. I can only imagine what could be done on today's hardware.
The same could be said about GeOS. But unfortunately, it was closed source and disappeared, poof!
Kochise
How about the fact that it is a working microkernel RTOS? The Hurd, Mach, L4 and Minix guys keep going on about it, but QNX actually makes it happen.
L4 usage is actually pretty widespread, with more units in distribution than QNX.
If you have a Qualcomm SoC in your phone, you're probably using L4. It is the kernel that runs their modem stack.
Yeah, Mach never went anywhere… /s
tylerdurden,
“Yeah, Mach never went anywhere… /s”
You know what’s funny, I didn’t even think of OSX when I read kwan_e’s comment. I’m guessing most of the people who upvoted that post overlooked it too!
That’s because OSX isn’t fully Mach. It’s a hybrid kernel.
I’m guessing people who upvoted that post have a better memory of details and are generally more intelligent.
That’s funny because I didn’t say anything about OSX. During the 80s and 90s there were many commercial/working OSes based on Mach, that were both actual microkernels and had nothing to do with OSX.
A working microkernel really is not that big of a deal, since it has been done many times.
That’s funny, I was replying to Alfman, who did mention OSX. Learn to read, and stop pretending you’re the only person on this site.
And that most of them didn’t continue to now means they went nowhere. That’s kind of the definition of “went nowhere” – when something dies off after a while.
This is an open discussion forum, with multiple people involved. I don't need your permission to post wherever I feel necessary. I was simply pointing out that both of you were going off on OSX while I had never mentioned it, and I was simply providing the context that there is more to Mach than just Apple's OS.
For what it is worth, Mach is a three-decade-old project, so obviously it is of little relevance TODAY. But it was the basis for a few commercial and academic OSes. So it definitely went places, which is why I was pointing out that your including it in the list of failed microkernels was a bit uninformed.
I’ll just let this little bit of cognitive dissonance sink in, shall I?
So basically you think that old == failure. I see what the problem is; your comprehension skills are basically nil. Which would explain things, I guess.
I was simply pointing out that Mach is a very old project, it did its thing, it had a lot of influence in the field, and a bunch of products sprung out of it. You know, the opposite of being a failure. 3 decades in the computing field are an eternity however, and a lot has happened since then.
Do you get it now, or do I have to write more slowly?
No, it is a failure, as it isn't used – and for good reasons. The overhead of Mach meant that anyone using it had to have much better hardware than the competition.
It's also a failure in that it set back microkernel research for a long time, as many people thought microkernel = Mach-level performance.
Those products are dead. Even when Mach was used, IBM and Apple designed their own kernels to improve performance – and skipped the resulting products when they performed much worse than anticipated.
OS X isn't a Mach-based operating system; it incorporates Mach, yes, but:
. it isn't a microkernel design.
. drivers aren't in user space.
. drivers don't use the Mach model.
. most system calls don't touch Mach code.
Now let’s look at some other old operating system designs that inspired modern systems:
VMS – still going as the core of Windows NT, also as itself.
UNIX – still going in a variety of versions including one project strongly inspired by it – you may have heard of Linux?
Ironic, as it's you who just doesn't understand.
So you do need me to write it slower. Sigh. OK then.
Let's see if you can grasp the point I was trying to make: I'm not saying Mach took the world by storm, or that it was the best thing since sliced bread. I'm simply pointing out that labeling it as a failure is a bit uninformed, since it did influence plenty of OS designs, and some commercial products based on it were released. It's an old academic project, so it has little relevance currently. But saying it went nowhere is indeed uninformed given how it lives on, partially, as part of the 2nd most popular desktop and smartphone OS.
Had its share of issues? Absolutely. But guess what? So does QNX. It'd be equally disingenuous of me to label QNX a failure because it has very little market penetration, or because it has had to be rewritten from scratch a few times during its lifespan due to obvious scalability issues.
So I must ask again; are you grasping this (kind of) basic concept yet, or do you require office hours?
LOL, this was my favorite part of your post. The level of misunderstanding is hilarious, and yet a bit sad at the same time given how OpenVMS was just EOL’d.
That you picked NT is interesting. Some people who worked on VMS were on the NT team initially and indeed influenced the design. I'm assuming that's what you're using to claim that VMS is somehow the core of NT. Incidentally, there were a few people from CMU's Mach (Rashid's team) working on NT as well. So in a sense, if VMS is supposedly the core of NT, so is Mach. Especially given how NT started as a microkernel (something that VMS definitely is not).
Wait, now you switch to using "inspired by" as the benchmark for success? Well, in that case, given how OSF/xx, Digital Unix, NextStep, MkLinux, etc. were all Unix (or Unix-like) variants running on Mach, then by your own metric Mach was a success after all.
OK, then.
kwan_e,
“That’s because OSX isn’t fully Mach. It’s a hybrid kernel.”
Still, you gotta admit that it *did* go somewhere in that case. Even if it changed, it’s still relevant to make that connection because others may have overlooked it too.
“I’m guessing people who upvoted that post have a better memory of details and are generally more intelligent.”
Hey now, no need to be a jerk, mine was just a lighthearted comment.
Mach itself didn’t change. It was copied and re-engineered onto a new kernel. I would argue what made it go somewhere was the BSD part of it.
A lighthearted comment that assumed someone would have missed the bleedingly obvious.
kwan_e,
“A lighthearted comment that assumed someone would have missed the bleedingly obvious.”
This still doesn’t warrant the earlier response… but no biggie. We good?
Well, I didn’t think my earlier response was all that bad. Lighthearted, even.
You said those people who upvoted the comment missed the bleedingly obvious.
So I said those people who upvoted the comment probably took that into account already and didn't think they needed to spell out the blindingly obvious – or as my mother likes to say, "draw intestines on a stick figure" – and are thus generally more intelligent than you gave them credit for.
Maybe not as lighthearted as yours, but warranted.
kwan_e,
“Maybe not as lighthearted as yours, but warranted.”
Ad hominem attacks are never warranted.
So you’re saying the people who upvoted my comment were not more intelligent than you gave them credit for? Isn’t that an ad hominem attack?
To be fair XNU was based on Mach, but isn’t strictly a microkernel.
The fact that Thom can’t make it work on emulators or VMs probably tells you something about the kind of black-magic hackery that was required to make something like this possible…
Uh, none at all?
It’s not my video…
I tried that QNX demo back in '99 or 2000 on an IBM PS/2 Model 80 (386DX 16MHz) and was blown away by how much they managed to cram on that floppy, and how fast it was on such old hardware.
QNX has given me a nerd boner since.
+1 for plan9 mascot.
Now, seriously, there is a reason for something to be fast, and in my personal experience QNX was not (the ever-recurring, personal, statistically insignificant source of mockery).
Oh yeah, boot to GUI was fast, and clicking around the GUI apps/settings was fast, but as soon as you really started to use the system for "real" you could feel the trade-off involved.
Perhaps the hardware was not ready to deliver the needed performance back then; being a microkernel and an RTOS at the same time probably did not help, as it had to deal simultaneously with problems related to context switching and resource reservation.
These problems are fixed now on current hardware (actually they were years ago). Of course, non-RTOS/non-microkernel systems benefited too, so there is even less incentive to opt for one kind of OS or another based only on it being an RTOS, for example.
There are particular cases where you still may have to think about what would give you the best results, be it maximum performance (e.g. numerical simulations, rendering and other computationally intensive applications) or well-defined timing (like processing events on tightly constrained schedules), but for most of us these highly specialized constraints really do not apply, and any reasonably well-designed OS can handle our tasks well on current hardware.
Besides doing a lot of stuff in low level language (which explains in part the speed and size), QNX did not have to deal with all the DRM crap MS not only champions, but poorly implements.
-_-
Have you ever tried to get access to QNX's source code? You'll hit a lot of DRM before you can. Not to mention their "extension packages" like media codecs for QNX. No license? No codecs, no OpenGL, no whatever…
QNX 6.1 was the last interesting community edition.
Kochise
I still have that disk. It was a cool demo, but pretty much useless.
Later I got that free QNX 6 developer demo CD, or whatever it was called, which could be installed and was actually useful. And it had a much better GUI.
Ah the good old days… helping Dan Hildebrand (RIP) build that demo (at least a little, zlib was still new in 1999) is one of my favourite memories from working at QNX.
“Ah the good old days…”
Yea, it doesn’t feel the same any more. I worked on an alternative OS as well. Although I didn’t know it at the time, another alternative OS had many of the ideas I had.
http://en.wikipedia.org/wiki/Sprite_%28operating_system%29
None of the alternative OSes have much viability. Some, like QNX, have managed to take root in a niche, but most are lost in time. It's not because they weren't any good; they were just victims of network effects making them redundant. Consolidation rewards the established market winners at the expense of the little guys. Nowadays the OS scene is a sad sight.
What is troubling is that the same consolidations that have killed the OS scene are taking place in regular application and web space. Fifteen years from now in hindsight, we might all be looking back with nostalgia for today’s alternatives that we will have lost by then. That’s progress for you.
See also mobile software; we’ve got Windows and OS X again in the form of Android and iOS, and a tiny handful of other systems with minuscule market share.
Homogeneous computing environments are bad news for innovation, not to mention resistance to exploits and malware…
QNX is now owned by Blackberry (formerly RIM). QNX is the foundation of BlackBerry OS 10.
It’s also about the only asset that Blackberry has, that seems to really have a future….lots and lots of licenses sold every year for those using QNX as the basis for their embedded devices.
For instance…all of the newer slot machines from IGT are using a slot OS based on QNX. Some other major slot makers use a Linux based system.
Yeah, I really, really wanted to use QNX back when I was working on embedded systems (mainly because of that demo, actually). But more senior engineers had something else up their sleeve that was also very good (Green Hills), but lacked a demo floppy.
I bought QNX back in the 1980s. At that time it was one of the very few UNIX-like systems you could buy for a PC. Its main competition at that time was Coherent by Mark Williams Company. Linux and BSD systems may have existed back then but had no publicity and most in IT had never heard of them on PCs.
QNX came on a 3.5″ floppy and ran off that. If I recall correctly, it had a second floppy with some utilities on it.
I ran it on a 286 (PC-AT). It was command line only (no GUI) and ran pretty well, even off the floppy.
I bought QNX specifically as a way to learn UNIX better from a home PC. It was a good product and did its job. I was able to learn UNIX much better without having to find access to a shared system like a VAX or PDP.
Fun times.
The reason you could get a reasonable experience on most PCs on a single floppy was that most PCs were basically identical in terms of input and graphics – PS/2 and 4:3 up to 1024×768. Nowadays there’s so much oddball crap out there that you need a few megabytes of drivers (and that’s just the kernelspace stuff!) to make it work properly.
Nope buddy:
http://www.returninfinity.com/baremetal.html
Put an SDL or Allegro lib/GUI on top and voilà!
Kochise
That just proves my point. It uses PS/2 input, VESA output, and only has drivers for two ethernet chipsets, so of COURSE it can be tiny. Shit, I think a 3.12 Linux kernel plus busybox still fits on a floppy if you use xz compression and strip out all but the most rudimentary drivers.
How many people, even advanced computer users like readers of OSNews, even have floppy drives on their new computers? I’d guess most of us build our own machines, but how many have bothered recently to include a floppy drive?
Can’t remember the last time I saw a laptop with a floppy drive.
I won a copy of QNX years ago. It's version 4, I think… I never played with it much though; I should still have it somewhere. Maybe time to go hunting for it this weekend and see how it works on an old laptop or something.