Since the last article on the text-based IDEs of old, I’ve been meaning to write about the GCC port to DOS, namely DJGPP. As I worked on the draft for that topic, I realized that there is a ton of ground to cover to set the stage so I took most of the content on memory management out and wrote this separate post.
This article is a deep dive on how DOS had to pull out tricks to maximize the use of the very limited 1 MB address space of the 8086. Those tricks could exist because of the features later introduced by the 80286 and the 80386, but these were just crutches to paper over the fact that DOS could not leverage the real improvements provided by protected mode.
↫ Julio Merino
The DOS memory story is a string of hacks upon hacks that somehow managed to work – and that still work today.
Poor DOS gets the blame, but perhaps the real hack was the x86 hodgepodge.
stereotype,
You’re not wrong about the hodgepodge. But with the introduction of protected mode (kind of rough on the 286, but well established on the 386), all the functionality needed for modern operating systems was there, and DOS itself became the limiting factor rather than the hardware.
DOS could have been modernized. MS competitors did introduce protection & concurrency features to DOS, for example:
https://en.wikipedia.org/wiki/Multiuser_DOS
IIRC microsoft had a more advanced version of DOS too, but MS didn’t release it because they didn’t want it competing with windows.
Alfman,
I think DOS “lost” the race when you had to include a bunch of drivers with no standard APIs.
There were TSRs, but they did not work with DPMI or other protected mode programs, or usually even with each other (a TSR was essentially a hack on top of the interrupt mechanism, usually taking over the RTC and/or the keyboard interrupt).
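To make that concrete, here is a minimal sketch of what such a TSR looked like, in Turbo C-style real-mode C: it hooks the keyboard interrupt (INT 09h), does a tiny bit of bookkeeping, chains to the previous handler, and stays resident. The counter and the resident-size calculation are deliberately simplified illustrations, not production code:

```c
#include <dos.h>

void interrupt (*old_int9)(void);          /* previous INT 09h handler   */
volatile unsigned long keypresses = 0;     /* illustrative bookkeeping   */

void interrupt new_int9(void)
{
    keypresses++;       /* our "feature"                                  */
    old_int9();         /* chain to whoever hooked the keyboard before us */
}

int main(void)
{
    old_int9 = getvect(0x09);
    setvect(0x09, new_int9);
    /* Terminate-and-stay-resident: keep roughly the whole program image.
     * Real TSRs compute this size carefully; this is a crude estimate.   */
    keep(0, (_SS + (_SP / 16) + 1) - _psp);
    return 0;           /* never reached; keep() does not return          */
}
```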
Add in the lack of access to memory and the proliferation of viruses, and Windows, as you said, seemed like a better replacement. Especially when it could support most of the existing DOS programs, usually better than bare DOS itself (built-in network drive access, separate address spaces, no more TSRs, and overall more stability).
sukru,
I’d say there were quite a few useful APIs provided through software interrupts, which did work as advertised, but were expensive both in terms of memory and CPU overhead.
Technically both TSRs and the BIOS could provide protected mode services; VBE 3.0 did, for example. But DOS software in general was real mode, whereas protected mode software often relied on DOS extender functions.
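As an illustration of that last point: under DJGPP (the DOS port of GCC mentioned at the top), a protected mode program typically reaches real-mode DOS services through DPMI. Below is a minimal sketch using DJGPP's <dpmi.h> and <go32.h> helpers that prints a string through the old INT 21h, AH=09h service:

```c
#include <dpmi.h>
#include <go32.h>
#include <sys/movedata.h>
#include <string.h>

/* Print a '$'-terminated string through real-mode DOS (INT 21h, AH=09h)
 * from a protected-mode DJGPP program, via the DPMI "simulate real-mode
 * interrupt" service and the extender's conventional-memory transfer buffer. */
static void dos_print(const char *msg)
{
    __dpmi_regs r;

    dosmemput(msg, strlen(msg) + 1, __tb);  /* copy below 1 MB              */
    memset(&r, 0, sizeof r);
    r.h.ah = 0x09;
    r.x.ds = __tb >> 4;                     /* real-mode segment of buffer  */
    r.x.dx = __tb & 0x0f;                   /* real-mode offset             */
    __dpmi_int(0x21, &r);                   /* drop to real mode and back   */
}

int main(void)
{
    dos_print("Hello from protected mode\r\n$");
    return 0;
}
```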
You’re right, DOS software did this, but usually it was because developers opted for doing direct I/O. For example, there were standard services for accessing the console and serial ports, etc., all of which worked. But back in the days of DOS it was quite normal to write your own drivers instead of using DOS/BIOS services.
Up to (and even including) Windows 95, many of those “Windows” services you mention were actually provided through DOS. But the Windows NT line was a pure Windows OS. Compatibility was typically worse on Windows NT, at least until more developers started supporting it; I tried playing DOS games on NT, for example, and compatibility was often bad.
Obviously direct access and multiuser/concurrent operating systems don’t really mix well.
Alfman,
Yes, there were standard services, at least for basic I/O. However, they were also very slow. I remember writing my own code even just to print strings in text mode (B800 segment, plus cursor manipulation). The reason was that the BIOS and DOS routines that did the same were much slower in comparison.
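For illustration, this is roughly what the two paths looked like in Turbo C-style real-mode C; the attribute byte and the 80-column width are just the usual color text mode defaults:

```c
#include <dos.h>

/* The "slow" path: ask DOS to print a '$'-terminated string (INT 21h, AH=09h). */
void dos_print(const char far *s)
{
    union REGS r;
    struct SREGS sr;

    r.h.ah = 0x09;
    sr.ds  = FP_SEG(s);
    r.x.dx = FP_OFF(s);
    int86x(0x21, &r, &r, &sr);
}

/* The "fast" path: write character/attribute pairs straight into the
 * 80x25 color text-mode buffer at segment B800h, bypassing DOS and the BIOS. */
void direct_print(int row, int col, const char *s, unsigned char attr)
{
    unsigned far *cell = (unsigned far *)MK_FP(0xB800, (row * 80 + col) * 2);

    while (*s)
        *cell++ = ((unsigned)attr << 8) | (unsigned char)*s++;
}
```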
But my concern was a bit different, more about “non-standard” stuff.
Like adding a network drive. Novell NetWare, for example, had IPX drivers, which had a “well known but proprietary” interface. On top of that, they would map network drives, which more or less required hacking into DOS internal data tables and overriding interrupts.
https://en.wikipedia.org/wiki/NetWare#:~:text=Novell%20NetWare%20shares%20disk%20space,letter%20to%20a%20NetWare%20volume.
That meant if you had multiple things that gave you drive mappings (say, an ATAPI CD-ROM driver, or a memdisk too), the interactions of all these TSRs were pretty much non-deterministic.
Good read, but I thought they would finish with the “UNREAL” mode.
Unreal mode is basically 16-bit 8086-style code running with segment limits raised to 4 GB. (Yes, that is a *massive* increase from the usual 64 KB.)
https://en.wikipedia.org/wiki/Unreal_mode
How does it work? The software enters and exits protected mode once, but does not reset the segment registers’ hidden descriptor caches afterwards. DS, ES (and friends) still carry the 4 GB limit loaded from the protected mode “selectors”, so real mode code can essentially view the entire memory in one go.
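A heavily simplified sketch of that switch, assuming a 16-bit real-mode C compiler with MSVC/Watcom-style inline assembly and 386 instructions enabled (the GDT layout and selector number are illustrative; real code also has to handle A20, interrupt handlers that reload segments, and the fact that writing CR0 faults under a V86 monitor such as EMM386 or Windows):

```c
#include <dos.h>   /* FP_SEG / FP_OFF */

/* Minimal GDT: a null descriptor plus one data descriptor with base 0,
 * limit 0xFFFFF and 4 KiB granularity, i.e. a 4 GiB flat data segment. */
static unsigned char gdt[16] = {
    0,    0,    0, 0, 0, 0,    0,    0,   /* 0x00: null                  */
    0xFF, 0xFF, 0, 0, 0, 0x92, 0xCF, 0    /* 0x08: flat writable data    */
};

static struct {
    unsigned short limit;
    unsigned long  base;
} gdtr;

void enter_unreal(void)
{
    gdtr.limit = sizeof(gdt) - 1;
    gdtr.base  = ((unsigned long)FP_SEG(gdt) << 4) + FP_OFF(gdt);

    _asm {
        push  ds
        push  es
        cli
        lgdt  fword ptr gdtr
        mov   eax, cr0
        or    al, 1              ; set PE -> protected mode
        mov   cr0, eax
        mov   bx, 8              ; selector 0x08: the 4 GiB data descriptor
        mov   ds, bx             ; hidden descriptor caches now hold 4 GiB
        mov   es, bx
        and   al, 0xFE           ; clear PE -> back to real mode
        mov   cr0, eax
        pop   es                 ; reload real-mode values; the cached
        pop   ds                 ; 4 GiB limits survive the reload
        sti
    }
}
```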
Downside?
It loses compatibility with a bunch of things, including DPMI (so no running under Windows 3.1 or another protected mode operating system), and even the DOS interrupts can no longer be accessed safely (at least not without adding shims).
sukru,
It was common to keep FS and GS as 32-bit flat segments, since these were untouched by normal DOS and BIOS interrupt calls.
Wasn’t flat real mode a function of DPMI itself? If so, it might have worked under windows with the caveat that the MMU was exposing virtual addresses instead of physical ones. Not sure about this one.
Obviously, programs that entered protected mode directly on their own were not allowed under Windows.
IBM f#$ked up the design of the original IBM PC.
1) They set the hardware interrupts to map to CPU interrupt vectors that were reserved by Intel. So, for example, ISA IRQ 0 was mapped to CPU interrupt vector 8, directly violating the Intel 8086 manual, which clearly said vectors 0–31 are reserved. This was a clear mistake by the IBM engineers, and extremely disappointing.
2) They failed to design the ISA bus properly. From a quick Google search, I believe Apple II slots were plug and play. Yes, the Apple II, released in 1977, had plug and play slots; you didn’t need to set jumpers on the expansion cards. IBM should have designed the ISA bus to be plug and play as well.
3) Following on from 2, I think IBM made a mistake with the 640 KiB boundary. IBM should have reserved 512 KiB – 960 KiB for ISA “slots” (plug and play). That allows a maximum of 7 slots, with 64 KiB of address space reserved for each slot. I/O ports should also have been reserved per “slot”. The highest 64 KiB is reserved for the BIOS.
IBM’s priority was quickly getting the PC to market, instead of designing it properly. This caused kludges and problems for many years…
To give the IBM engineers some credit, none of the developers expected the IBM PC to be any more than a stopgap product to fill the niche of low-end business machines. I can imagine most of the engineers expected to be pushed onto another, unrelated replacement product sometime in the future. The fact that the PC became somewhat of a de facto standard wasn’t in any way planned or expected. Any bugs or issues would have been fixed in the next product line that the engineers would be tasked to build, once the IBM PC turned out to be the flop everyone was expecting.
The IBM engineers have no excuse for issue 1: the wrong mapping of ISA IRQs.
I believe the IBM engineers programmed the PIC to map hardware IRQ 0 to CPU interrupt vector 8, directly violating the instructions provided by Intel.
Clearly they should have programmed the PIC to map the hardware IRQs to CPU interrupt vector 32 or higher.
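For reference, this is the kind of 8259 PIC reprogramming that later protected mode systems perform to undo that choice; a small sketch using Turbo C-style port I/O, with the standard master/slave initialization sequence. Calling remap_pic(0x20, 0x28), for example, moves IRQ 0–15 up to vectors 32–47, clear of the range Intel reserves:

```c
#include <dos.h>   /* inportb() / outportb() in Turbo C */

#define PIC1_CMD  0x20
#define PIC1_DATA 0x21
#define PIC2_CMD  0xA0
#define PIC2_DATA 0xA1

/* Re-initialise both 8259 PICs so hardware IRQ 0-7 land on vectors
 * base1..base1+7 and IRQ 8-15 on base2..base2+7, instead of the
 * 08h-0Fh / 70h-77h defaults the IBM BIOS programs. */
void remap_pic(unsigned char base1, unsigned char base2)
{
    unsigned char mask1 = inportb(PIC1_DATA);  /* save current IRQ masks  */
    unsigned char mask2 = inportb(PIC2_DATA);

    outportb(PIC1_CMD, 0x11);    /* ICW1: begin init, ICW4 needed         */
    outportb(PIC2_CMD, 0x11);
    outportb(PIC1_DATA, base1);  /* ICW2: vector offset, master (IRQ 0-7) */
    outportb(PIC2_DATA, base2);  /* ICW2: vector offset, slave (IRQ 8-15) */
    outportb(PIC1_DATA, 0x04);   /* ICW3: slave attached on IRQ 2         */
    outportb(PIC2_DATA, 0x02);   /* ICW3: slave cascade identity          */
    outportb(PIC1_DATA, 0x01);   /* ICW4: 8086/88 mode                    */
    outportb(PIC2_DATA, 0x01);

    outportb(PIC1_DATA, mask1);  /* restore the masks                     */
    outportb(PIC2_DATA, mask2);
}
```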
Not sure why you are defending the IBM engineers; it is an undeniable fact that they made a bad mistake, and there is no excuse.