I’ve spent the past several months trying on and off to make Linux run on the Presario. The 486SX is the oldest CPU Linux still supports! I was quite hindered by my lack of any floppy disks – fortunately, I managed to get my hands on a few working ones for Christmas this year and made some headway, first getting MS-DOS 6.22 installed on the new hard disk, then messing with the Linux kernel configuration until I got it to work.
And yesterday I finally got it! Here are the steps for configuring a basic kernel with Linux 5.14.8.
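For the curious, the important bits look roughly like this – a sketch with option names from memory, not a verbatim copy of the .config:

    # Start from the smallest possible baseline, then add only what the machine has
    make ARCH=i386 tinyconfig
    make ARCH=i386 menuconfig

    # Key options for a 486SX:
    CONFIG_M486SX=y             # the SX has no FPU...
    CONFIG_MATH_EMULATION=y     # ...so the kernel must emulate one
    CONFIG_ISA=y                # a Presario of this vintage is ISA-only
    CONFIG_ATA=y                # the old IDE layer is gone as of 5.14,
    CONFIG_PATA_LEGACY=y        # so legacy ISA IDE ports go through libata

    make ARCH=i386 bzImage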
A lack of usefulness should not be a hindrance to having fun.
I gave away some 10-year-old machines to people who needed them and could use them during the pandemic. Still perfectly usable at that age, especially if not for gaming. But I’ve had to throw away older machines because nobody wanted them. I’d say they’re good for tinkering, but IMHO there are better options for tinkerers today, like the Raspberry Pi.
Yeah, and there’s a certain something to be said for that 2002-2008 period of really uninviting econoboxes that aren’t great for modern usage but aren’t retro enough to be interesting either. It’ll probably change, I guess, but a machine without ISA slots just isn’t much of a curiosity.
Sort of… I think the sweet spot for the ‘retro’ gamer is a 486DX with ISA and PCI slots. This way you can get a PCI 2D VGA card, a Voodoo 1 or 2, and then an AWE64 (or the new Orpheus card).
Amusingly, I ended up with an Atari PC3, which is a 286 @ 12 MHz. I added an FPU, then managed to get one of those memory upgrades for my AWE64 (so it has 24 MB of RAM). The system currently has 5 MB of RAM: 1 MB soldered on in the form of 256 KB SIPPs (missing the sockets), plus 4x 1 MB SIPPs. Still need to see if I can track down some slanted SIMM slots to replace those and get 8 MB of RAM…
Unfortunately there is a LOT of software that requires a minimum of a 386. Linux supported the 386 until fairly recently – it was only dropped in kernel 3.8. It ran quite smoothly (Debian Bullseye) on my Pentium III @ 1 GHz setup. Still very usable, after all these years.
Yeah, I guess so 🙂 I still have a P2-300 machine that I use for DOS gaming. Way overkill for the most part, but it has a few ISA slots, so aside from the speed and the occasional incompatibility it plays pretty much everything.
I’m more interested in the functionality and end-user accessibility issues. That’s partly why my frameworks were portable and scalable. My cut-off point was a P200-class machine, but the code could be extended to run on a 486. One advantage of testing against slow machines is that your code runs fast and light! I know the world moves on, but I think bad design, and losing sight of designing for the lowest level of a general class of problem, is a bit of a mistake. Well-designed, well-abstracted, and well-layered code is not “cruft”, nor does it suffer from “bitrot”. When it does happen it’s almost always due to “kludges” and “forced obsolescence”.
Anything newish is likely to be smaller and faster and much more energy efficient than old kit but for some people old kit is serviceable. It may be all they have. They may simply be happy with it.
Those are more observations than an opinion… tbh, I’m more interested in the philosophy, psychology, and sociology of it. Take, for instance: what is modernity? Modernity is a collective delusion shaped and limited by our psychology and society. The so-called Overton window is a subset or aspect of this.
Going backwards in time a little further:
https://www.nytimes.com/2019/05/17/science/math-physics-knitting-matsumoto.html
https://www.sciencenews.org/article/how-one-physicist-unraveling-mathematics-knitting
Like Elisabetta Matsumoto, I first knitted with my mum when I was a teenager. It was only a small scarf, so nothing special. I’d like to take it up again, but I don’t know – so many things to do, and I’m lazy. Materials and colours and textures are fascinating. One of the biggest things that motivated my coding was graphics. I think doing computing was a mistake, so in a funny sort of way I’m coming full circle.
Well, I’ll say… good job(?) 🙂 Another thing I’ll say is the “oldest” (i.e., it was new back then) computer I’ve installed Linux on (I think it was Slackware) had a Cyrix M1 with honestly-don’t-remember-how-much RAM and HDD (not too much), when that Cyrix was still new (I remember going down to the corner computer store to buy it). And all I can say about the experience is that although it was usable and I liked it, no way would I want to re-experience it today. No nostalgia wins over 64+ cores 🙂
Do you actually have 64 cores, or is that just posturing? I’m genuinely curious what you use it for 🙂
A 24-core Xeon is the highest I’ve worked with, for real-time SDR + transcoding. But it’s a trade-off, because they usually end up having lower clock speeds. For this reason consumer CPUs are often faster than server-grade ones for games & desktop applications, where having fast threads can be more beneficial than having many threads. The obvious use case for so many cores tends to be hosting, and perhaps compilation, assuming you have an absolutely huge code base.
I built a dual-socket Xeon machine 5 years ago with a pair of quad-cores with hyperthreading. Frankly, the lacklustre single-core performance is the real dampener on using it; it’s rare that all 16 threads are pushed to the max in the workloads I use it for. The graphics card is a bit “meh” for it too – it’s like putting a Miata engine in a Ferrari.
My system has dual E5-2699 v4s, 22 cores each, so 88 threads, and 256 GB of quad-channel RAM. I can tax them pretty well with my CAD renderings. There are plenty of nice tasks you can do to use that power.
NaGERST,
AutoCAD or something else? Any FOSS CAD you would recommend? I’ve tried Blender but it is so quirky… I used to use Midnight Modeller with POV-Ray, but I haven’t touched that stuff in years.
I actually use Inkscape for a lot of technical diagrams, because I’ve become fairly proficient with it from using it at work. But at the same time I know it’s the wrong tool for the job. Also, Inkscape’s renderer is very badly optimized, so it can crawl even on a high-end machine.
Maybe they mean a different CAD app. From what I have read in AutoCAD’s own documentation, not much of it supports multi-core outside of “2D regeneration”. In their forums, Autodesk representatives argue that multi-threading would actually be less efficient and that it’s better to offload to the GPU than to use more CPU cores. There are other things, like 3D point cloud generation with Trimble, where many cores are hugely useful.
“Due to the lack of multi-threading, AutoCAD can’t use more than 50% of the CPU on a dual-core computer. So, there is no significant performance gain over a single CPU computer, except in 2D regeneration.” https://knowledge.autodesk.com/support/autocad/learn-explore/caas/sfdcarticles/sfdcarticles/Support-for-multi-core-processors-with-AutoCAD.html
“If you are using AutoCAD on a multi-processor system (i.e., more than two cores), you may see a slight performance improvement, but only as much as the operating system is taking advantage of the multiple processors.”
But why? 486SX won’t even be able to run systemd, let alone anything modern.
I can imagine it will be able to run busybox, maybe elinks and that’s it.
And to be honest, the modern kernel has become somewhat bloated. The Completely Fair Scheduler in 5.14 will be a ton slower than, e.g., the much simpler O(n) process scheduler that existed in Linux 2.4.
Is this a joke? Windows 95 booted into a full-featured GUI in the same amount of time.
Exactly. The modern Linux kernel needs at the very least 128 MB of RAM to be somewhat functional in text mode. A 486SX system most likely didn’t even support that much RAM.
https://en.wikichip.org/wiki/intel/80486
Apparently the 486 theoretically supports up to 4 GB, but I don’t think anyone produced boards or memory sticks to achieve that.
Yes it looks like the 486 has the full 32 address lines for access to a physical 32-bit address space, so it’s theoretically possible. I’m surprised they did that actually, they could have reduced costs with a smaller pin count, which is done on modern 64-bit chips that often have a reduced physical address space e.g. 48-bit. Maybe it wasn’t worth it, since dropping only 6 pins for example brings you down to a 64MB address space which is probably not big enough.
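For anyone who wants to sanity-check the arithmetic in a shell:

    $ echo $(( (1 << 32) >> 20 ))   # full 32 address bits, expressed in MB
    4096
    $ echo $(( (1 << 26) >> 20 ))   # dropping 6 pins leaves 26 bits, in MB
    64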
The 486 has 30 external address lines A2-A31 because, from the CPU’s point of view, the “local” bus for high speed I/O (VESA Local Bus video, scsi, and disk controllers) is also the memory bus. This is long before integrated memory controllers for desktop CPUs. The memory controller lived in the chipset on the motherboard.
The 72-pin SIMM was the highest-capacity memory form factor late in the 486 era. The maximum capacity of a 72-pin SIMM is 128 MB. You would need eight 72-pin SIMM slots just to reach 1 GB. No chipset or server motherboard ever attempted this, as far as I know. Four slots was the usual limit, and frequently those slots were limited to 32 MB.
No OS of the time expected a 486 to have much memory. Even Windows 98 SE falls on its face with more than 512 MB installed. I’d be curious to hear what the Linux 1.2 or 2.0 kernel theoretically supported on x86; I know that Xenix only supported 16 MB, which is the maximum on a 286.
LightStruk,
4 GB of RAM could be addressed on the 486, and the CPU doesn’t care what is behind those addresses. However, in practice the motherboard would have partitioned addresses between memory SIMMs and PCI devices, and may not even have implemented all the bits.
http://www.informit.com/articles/article.aspx?p=130978&seqNum=28
https://en.wikipedia.org/wiki/I486
So in theory the 486 could support 4 GB minus a little bit for PCI devices, but asking how much was actually achieved is a good question. I see many systems had a maximum of 128 MB, but many people achieved 256 MB, and some say 512 MB could be possible on some equipment, but I couldn’t find any examples.
http://www.vogons.org/viewtopic.php?t=56970
This JEDEC standard lists 72-pin configurations up to “512M”:
http://www.ele.uri.edu/iced/protosys/hardware/datasheets/simm/Jedec-Clearpoint-8MB.pdf
However the largest I found on ebay was 128MB per slot. Maybe you could stick four or more of them in the right motherboard, not that I found one that officially fits the bill. I find it funny the things people want to do with ancient hardware when commodity hardware is better in every way, haha.
Heh, I learned from this that it is now once again possible to compile a kernel small enough to fit on a floppy disk. I seem to remember it was sometime back in the 2.6 series of Linux kernels that, even with everything turned off, any kernel image would still be larger than 1.44 MB. I really didn’t expect things to have actually improved since then.
That was incidentally about the last time I ever cared about floppy disks, so that worked out OK.
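If you want to try the size check yourself, a 1.44 MB floppy holds 1440 * 1024 = 1474560 bytes, so something like this tells you whether a freshly built image fits:

    $ du -b arch/x86/boot/bzImage   # fits on a floppy if under 1474560 bytes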
A Gateway 2000 keyboard on a Compaq!?!?! What blasphemous nonsense is this????