Thanks to open source, no technology ever has to become obsolete, so long as a community remains to support it. You can sync Newtons and Palm Pilots with modern desktops, download web browsers for long-discontinued operating systems, or connect vintage computers like the Apple IIe to the modern internet via WiFi. Every year, new cartridges are released for old-school video game consoles like the Nintendo Entertainment System and Game Boy.
People keep old software and online platforms alive as well. The Dreamwidth team forked an old version of the early social network LiveJournal’s source code and built a community around it. The dial-up bulletin board system software WWIV is still maintained and there are plenty of BBSes still around. Teams are working to restore aspects of early online services like AOL and Prodigy. And you can still use Gopher, the hypertext protocol that was — for a brief period in the early 1990s — bigger than the web.
↫ Klint Finley
Retrocomputing is about a lot of things, and I feel like it differs per person. For me, it’s a little bit of nostalgia, but primarily it’s about learning, and experiencing hardware and software I was unable to experience when they were new, either due to high cost or just general unavailability. There’s a lot to learn from platforms that are no longer among us, and often it helps you improve your skills with the modern platforms you do still use.
The linked article is right: open source is playing such a massive role in the retrocomputing community. The number of open source projects allowing you to somehow use decades-old platforms in conjunction with modern technologies is massive, and it goes far beyond just software – projects like BlueSCSI, or very niche things like usb3sun, highlight that there are also hardware-based solutions for just about anything retro you want to accomplish.
And we really can’t forget NetBSD, which seems to be the go-to modern operating system for bringing new life to old and retro hardware, as it often runs on just about anything. When I got my PA-RISC workstation, the HP Visualize c3750, I couldn’t find working copies of HP-UX, so I, too, opted for NetBSD to at least be able to see if the computer was fully functional. NetBSD is now a tool in my toolbox when I’m dealing with older, unique hardware.
Retrocomputing is in a great place right now, with the exception of the ballooning prices we’re all suffering from, with even successful mainstay YouTubers like LGR lamenting the state of the market. Still, if you do get your hands on something retro – odds are there’s a whole bunch of tools ready for you to make the most of it, even today.
UI design was much better in the past, and so was social media. MSN was unobtrusive, and users had a lot more control compared to Facebook. E.g. you could be anonymous with handles and have anonymous contacts, or you could rename contacts and control their appearance/grouping by category. Facebook is awful at all of that, and Discord doesn’t keep your nicknames for people across channels, only on your friends list. Desktop interfaces were laid out better and designed more around keyboard/mouse usage and less around touch screens. Even Steve Jobs said that touch screens are a fad which would end when the last generation born without computers died off. Desktop applications had more functionality and power, and everything wasn’t jammed into a low-performance webpage or web app.
A lot of people have a hard time adapting to change, so they end up associating the “best” version of something with the one they first came in contact with or had more familiarity with in the past.
Old GUIs and applications were terrible and had their own sets of warts as well.
I would agree with that to a point.
I think the larger issue is not just touch screens but that other basic HCI concepts have been lost. No one counts clicks anymore. There were several issues with the Windows 95 Start menu, but also several things right about it. Everything stayed in the same place. A user could navigate it without looking if they took the time to organize it. Everything could be done with keyboard presses. Now, the Windows 11 menu is so useless you have to use search to get to anything quickly. Computers used to have keyboard shortcuts for important things that were easy to find. Now, they just assume you will use a mouse or a touch screen for everything.

Notifications are a nightmare. They get in the way and don’t pay attention to context. If I’m actively clicking on one, I don’t want to lose context when a new one appears. I don’t want one to appear when I’m in a menu and disrupt my work. I don’t want one when my mouse cursor is already near that area doing something else. This isn’t just a Windows issue. Consider Ubuntu. The notifications are at the top of the screen and frequently get in the way of browser tabs in the middle of the screen. If you have a chat or email client open, it’s constantly spammed with messages. You can even force-clear one and another will appear. Apple at least puts them opposite the window controls, in an area that fills up with tabs last in full-screen browsers. Most alerts are useless. Nobody has figured out a use for AI to filter them intelligently yet. It’s even worse on smartphones. On some platforms you can turn off notifications completely per app, but what if you actually want to see some of them, or want a list of them that you can reference without having them in the way?
I was a kid when Win95 came out, but I remember plenty of people complaining about it being an atrocious ergonomic mess and what not. And now a few decades later, it is held as some sort of GUI panacea. 🙂
I personally have found myself to be much more productive with Win11 than with any previous version of Windows. I’d rather just type the app or document I am looking for than have to rely on muscle memory. I have no idea what you’re going on about with the notifications. I happen to manage just fine with them on Windows, Gnome, OSX, iOS and Android.
Typing stuff is for intermediate or advanced users. It is completely unintuitive for new users. It presumes you already know what you need and don’t need any form of discovery. For a newbie it is a nightmare. They are vastly helped by a well-structured menu that doesn’t overwhelm them with umpteen options or irrelevant info.
Windows 95 is extremely barebones in comparison to later systems based on the same concept, but the core of a very usable workflow is already contained in it. I don’t think anyone is advocating going back to the literal Windows 95 implementation. When Windows 95 is held up, it is to point out the origin of a workflow that appeals to many people. It is space-saving yet directly discoverable: it gives important info at a glance, and “hidden” info is easily found.
You assume that win95 is the “correct” way of doing things because that is what you were used to. But when it was released it was far from intuitive, and there was a whole cottage industry of “computer training” classes/videos etc. to explain to people why, for example, to shut down the computer you had to press the “Start” button.
No, Win95 is not THE “correct way” of doing things. It is A way of doing things that appeals to many people who like to work graphically. I have worked with the Commodore C64, Amiga Workbench 3.5, DOS 6.22, Windows for Workgroups 3.11, Windows 95, Windows 98, Windows NT 4.0, VAX/VMS, BeOS, Haiku, Windows XP, Windows 7, Windows 10, FVWM95, Window Maker, KDE, Gnome 2.x, Gnome 3.x. So it certainly is not a lack of exposure to different paradigms. I am not afraid of a TUI either.
The taskbar, menu, window list and system tray combination is just one of those paradigms that feels very comfortable quickly. For general-purpose use, it is very serviceable. Are there other ways of interacting with a computer? Certainly, and some ways are better suited to particular needs. That doesn’t take away from the fact that the Win95 paradigm is one that has stood the test of time.
But again, that’s the thing: you’re extremely biased towards a specific type of GUI paradigm, very much set in what made sense in the late 80s/90s. Because that is what is familiar to you and your experience.
If you put young kids in front of a Windows 95 machine, they recoil in horror at how awful and counterintuitive it is to them. Because GUI paradigms have evolved and expanded.
From my experience, a lot of the older folk were in the computer-geek sort of crowd. They want as much of the computer to be exposed as possible, so to speak. They want the mechanics of the OS/app to be exposed, because they learned a specific method of interaction with the computer that required a lot of input to perform tasks. Whereas younger folk have encountered technology at a much more commoditized, pervasive, and performant/capable level, so they don’t really care as much about the black box making the “magic” happen. They expect the OS to do a lot of the contextual work for them. It is just a tool to get to their apps or the information/entertainment they are seeking.
Xanady Asem,
Kids are adaptable. They learn quickly, and I honestly don’t think they’d have a problem.
As someone who played games, browsed the web, played music, did word processing, etc., I don’t follow why you are saying this. Do you have a more specific example? You are talking about Windows 95 and not DOS, right?
Kids also present a clear snapshot of what is and is not “intuitive” in terms of interfaces now that we live in a reality where computing is pervasive and commoditized.
When presented with a Win95 interface, a lot of kids marvel at how kludgy and unintuitive it is for them.
We’ve gone from having to adapt to the interface to interfaces being more adaptable to the user, which is, ironically, counterintuitive for many people who had to adapt to older interfaces.
I just tried putting a 3+ year old relative in front of a Windows 2000 PC; he had never used Windows in his life, only Android phones and tablets. With a little guidance he understood it fairly quickly, at least on a superficial level. In the beginning he wanted to touch the screen to navigate, but the concept of the mouse was not hard for him to learn.
As you said, the 95/98/ME/2000/XP-classic UI is no panacea, but it is a lot better than many other options around nowadays.
NaGERST,
Yes, this is exactly it. It’s really not that hard to pick up. Not that there aren’t any differences, but the foundational desktop interaction principles haven’t changed much.
“3 year old relative” using Windows 2000. That is just too much internet bizarreness for me. Sorry.
Xanady Asem,
I take it you don’t have kids then? If you did, you’d see that their brains are sponges and they pick these things up very quickly. It’s not the obstacle you’re making it out to be. Windows operating systems are mostly used the same way today as they were back then, and honestly I just don’t see much merit to the claim that kids will “recoil in horror about how awful and counter intuitive it is to them”. You accuse everyone else of bias against new technology, but to the extent that you consider this a fair accusation, how do you know you’re not biased against old technology?
I don’t think the issue is as black-and-white as you’re suggesting. Nobody is booting their modern system into DOSBox to live like it’s 30 years ago. The question is what positive lessons we can learn, or even what negative lessons we can learn, from past experiences since the industry is no longer young.
By way of bio, IMO “peak UI” happened about 15 years after I started using computers. It’s not the thing I first used; trying to edit command lines in pre-doskey DOS was horrible, although still better than edlin.
I was one of the people who criticized Win95 at launch, because things like modifying a Start menu shortcut’s properties were so obtuse and painful compared to the File/Properties system in Windows 3.1. At the same time, the Taskbar was a great idea, so I started writing one for 3.x. The epilogue, though, is that Start menu properties got a lot easier to change in Win98 (just right-click the shortcut and select Properties).
Once the elements can be distilled, it’s not difficult to identify things that work well and things that work badly. Windows 95 was not perfect, but Windows 95’s New menu still works better than OS/2’s templates folder.
So now, almost 30 years later, the issue is which things that worked well have been dropped for bad reasons. Personally I’m having no difficulty finding things that worked better in the past and can be transposed into the future. They still need lots of adaptation though (my Win95-like taskbar needs to support multi-monitor, per-monitor DPI, monitor configuration changes, battery life, and all sorts of things Win95 didn’t). But the core UX remains shockingly usable, even now.
*chuckle* BlueSCSI? Try BlueSCSI, PiSCSI, and GBSCSI, with PiSCSI having whichever PiStorm-like powers SCSI enables, while GBSCSI aims to be a less expensive expression of the same open hardware sources that BlueSCSI is based on.
…and that’s not even counting lesser-known things like the MacSD which is more expensive but can do CD audio out.
SCSI drive emulators have become a surprisingly fertile field in recent years.
The part that entices me most about retro computing is how much was achieved despite hardware limitations.
These days so much code is written without even considering the resources required to run it.
The number of times the decision is to buy more server hardware instead of optimising the code it runs... Waste creates waste.
More of the good ol’ “back in the day we was tough and learnt better because we had to walk to school barefoot in the snow uphill both ways.”
A lot of modern software complexity comes from added functionality and performance. Some people miss how much optimization modern tools achieve. The level of productivity is definitely higher than back in the day.
There’s a lot more rush in completing projects in software development these days. Programmers aren’t given time to optimize code and often we don’t have requirements for target hardware. Sometimes we do target a configuration and then management tells us that we need to cut costs in half but they don’t clear the schedule for a few months so we can actually tune or optimize code to make that happen. It’s got to be instant. Then we get punished with constant bug reports and nightmare on call weeks that could have been prevented.
Some programmers are lazy. Some are overworked. Some simply don’t get buy in from management or product to fix issues even though we know about them. All of the pressure is for us to get something done as fast as possible and move on to the next story. That leads to garbage software.
laffer1,
+1
I believe this is a very common experience.
Sometimes we blame it all on bad developers, but good developers under this kind of pressure will struggle too. It is stressful.
That is a management issue, not a modern software and/or systems issue IMO.
Some organizations and/or divisions/projects are very badly managed, and that is a whole other can of worms that applies to anything really, not just the software industry.
In the end, garbage software that gets shipped now is infinitely better than some mythical perfect software that never makes it to the customer.
The quest for perfection and super-optimization can be equally dangerous; it is what leads to projects like Hurd being stuck in development hell for four decades.
Xanady Asem,
There is a balance to be had, but IMHO we’re on the wrong side of it today. Our software can afford to be more optimized. In the push for ever more profits, we’re expected to do more with less, and something has to give. For software, that something is often optimization, and I’m afraid to say even debugging can be a casualty of extreme cost cutting, leading to exploits, etc.
On one hand we do have more frameworks, which are meant to save time but also add more layers of bloat. Another consideration is that native APIs used to have a consistent look and feel. Nowadays, especially after Microsoft did away with classic interfaces in favor of Metro and the ribbon, a lot of the UI consistency and best practices were thrown away. It’s a matter of opinion whether these new interfaces look good, but in terms of consistency and discoverability I’d say things actually have a harder learning curve without the ubiquity of WIMP interfaces. I think these went out of fashion not because they were bad for desktop users, but because they didn’t work well for mobile phones and other devices without a mouse and keyboard. Microsoft (and others) desperately wanted to make one platform to rule them all across form factors. Since the desktop UI didn’t work well on mobile, they brought a mobile-first UI paradigm to the desktop. Despite much effort to convince us this was the future, the new Metro apps were so widely shunned by desktop users that the legacy Win32 software Microsoft intended to deprecate still remains the dominant standard for software today. It turns out that software designed for one form factor doesn’t work as well on others.
Sure, but there’s also a gradient between the extremes. Optimization isn’t an all-or-nothing proposition.
What are we talking about here, application optimization or GUI aesthetic guidelines? It’s hard to follow your point.
Xanady Asem,
These are both reasons why some people might prefer the way old software and operating systems used to be written vs modern.
Personally I’ve found it easy and fun to write efficient software for my own needs in my own time. When you don’t answer to an organizational management structure, the strategies haven’t changed much.
It’s not so much that. I develop code and build infrastructure for a living. I rarely, if ever, need to optimise code; in many cases it’s simply not cost effective. Take for example a simple database: will I spend months optimising MySQL, or just up the RDS instance size to cope with the additional load?
The hardware available is well in excess of (most) needs, so we opt for the easy option first.
Compare that to the development of the hblank trap for scrolling on the NES for Mario Bros, or how Duck Hunt used its light gun.
With a hardware limit you have to work around those limitations, and that created space for innovation.
Adurbe,
Obviously this all depends on specifics, but very often code that doesn’t factor in any optimization will just scale poorly. Experience gives us an eye for designs that are likely to perform better. I feel it is worth keeping developer optimization skills honed in everyday coding rather than making optimization a huge task later in the project (where it inevitably gets de-prioritized).
Just as an example, I worked with some Magento websites. Its developers clearly had the philosophy that software optimization doesn’t matter. They dynamically generate SQL to build pages, which in itself isn’t a problem, but they issue so many queries with so many table joins to produce a single page of output. Someone obviously felt this gave them flexibility, which it may, but the performance is atrocious. Once you start adding extensions, the performance is so bad that it’s just taken for granted that everything needs to be cached rather than generated on the fly, because every cache miss causes a massive spike in load. For read-only pages, full page caches are effective at masking performance issues in generating those pages, but problems start to creep in when it comes to dynamic pages. Changing any number of tables (like product availability) is meant to be reflected on the live website, but now you need to invalidate and regenerate multiple levels of cache, and that involves all sorts of complexity with tricky caveats.
The full page cache masks Magento’s performance inefficiencies UNTIL someone logs in (requiring the pages to be custom rendered for each user). The whole thing was such a mess. Given the inefficiency, they will have to start scaling out servers much sooner than efficient alternatives that run comfortably in a VM.
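To make the full-page-cache point concrete, here is a minimal Python sketch (not Magento code; the query count and per-query cost are made-up assumptions) showing how a shared page cache hides an expensive render path until a request has to bypass it, for example for a logged-in user:

import time

PAGE_RENDER_QUERIES = 200          # assumed per-page query count, illustrative only
QUERY_COST_S = 0.005               # assumed 5 ms per query

page_cache = {}                    # url -> rendered HTML

def render_page(url, user=None):
    """Expensive path: every query adds latency (simulated with sleep)."""
    time.sleep(PAGE_RENDER_QUERIES * QUERY_COST_S)
    return f"<html>{url} for {user or 'guest'}</html>"

def get_page(url, user=None):
    # Logged-in users get per-user content, so the shared cache is bypassed.
    if user is None and url in page_cache:
        return page_cache[url]     # cache hit: fast, hides the real render cost
    html = render_page(url, user)
    if user is None:
        page_cache[url] = html     # only anonymous pages are cacheable
    return html

if __name__ == "__main__":
    for label, user in [("cold miss", None), ("warm hit", None), ("logged in", "alice")]:
        t0 = time.perf_counter()
        get_page("/product/42", user)
        print(f"{label:10s}: {time.perf_counter() - t0:.2f}s")

The cold miss and the logged-in request both pay the full render cost; only the anonymous warm hit is fast, which is exactly how a slow backend can look fine right up until the cache stops applying.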
So yes, you can rely on hardware and more expensive hardware. But even given your opinion, I’d expect that your experience provides some sense of efficient design before you even write a single line of code. Right?
Depends on the job. There are still a lot of jobs that are essentially: parse this file; for each entry in it, do the thing; record the result. For some cases, that was just as easy if not easier with COBOL, FORTRAN or Pascal as it is today with Python/Ruby/Go/whatever. Sometimes I come across problems that are 100% better suited to an older language, but of course you can’t use one, so it takes you maybe an hour to code something that would have taken under a minute with an older language. But yes, there are also a lot of modern problems that are downright trivial five-minute coding jobs that would have taken months with older languages, even if you ignore their built-in memory constraints.
Software devs do not “consider the resources” because resources are effectively unlimited. And I do not mean that they are actually unlimited but rather that pretty much all software can just assume that the resources required to run it will be made available.
These days, the right market decision is hardly ever to optimize for CPU time ( or other HW resource ). Rather, you optimize for developer time. The biggest way that this is done is to use library upon library and framework upon framework. Whatever you are building, the end developer builds only a fraction of it and relies on mountains of code already written. There are layers and layers and layers of abstraction. This is where all the resource utilization goes. The end dev is probably only truly in control of a fraction of the resources expended.
To truly reduce resource consumption, you have to either do stuff yourself ( much, much less time efficient for the developer ) or at least choose frameworks which are themselves much lighter. Typically, being lighter is their big feature which means they lack other features which will make the end-devs more productive or the end-applications more feature rich ( for “free” ).
There is little incentive to take longer to make worse apps ( worse other than resource consumption ) when the market will not reward it. The reason the market will not reward it is that the resource pain is not felt that badly by the end-user. It is not felt because the application will run in an environment that has an embarrassment of riches resource wise.
I am typing this on a 2013 MacBook Air ( running Linux ) that I got for a recent family vacation. I was back-packing for part of it so I bought this computer for $70. It is light. I did not care if it got broken or stolen ( well, cared less than if it was something better ) and I thought it might be “good enough”. It is surprisingly excellent and, though we have been back a few days, I have continued to do almost all my daily computing on it. Don’t get me wrong, it would not take much to overpower it. But even with all the “waste” in software, I find it perfectly reasonable to use this decade old machine. The only thing that I have not gotten to work well on it is Davinci Resolve. I even played a bit of Steam on it the other day ( surprising even me ).
tanishaj,
I agree this is often what happens, although I think the logic can become somewhat faulty once you factor in the cumulative costs and time lost by all the users of said software. Say there are a million users: just a few pennies per user could go a very long way to make the software a lot better for everyone. It’s hard to make the case that this isn’t well worth the price and effort! The problem isn’t that this isn’t justifiable, but that it’s competing with executive pay. Paying for these types of improvements means somebody’s going to have to give up a third home with a second yacht.
Agreed, there can be many layers of bloat. Many projects call for their own abstractions, which is a sensible thing to do, but since they’re not writing the original implementation it can become an abstraction of abstractions.
@Alfman
We agree.
I should be clear that my personal preference would be that we take more pride in our software and engineer it to maximize performance and minimize resource use. I am a vintage hardware enthusiast and hang onto or even purchase hardware that many people would be throwing in the trash bin. I need software to still work well on older hardware.
I also totally agree with you about the cumulative costs of our current software culture. At the very least, it is hastening the death of our planet by wasting resources. The key point is that these are externalizations. These are not costs shouldered by the dev or company creating the software. They are also typically the cumulative result of a long chain of decisions made by many different parties. From the perspective of their own immediate self-interest, each of these parties is making the right decision.
As a human, I would do things differently. As a Product Manager, I would be directing my dev team using the 80 / 20 rule and a focus on monetizable value. In other words, as an industry participant, I would be part of the problem precisely because I would be acting rationally. In a way, I would also be acting ethically as my direct responsibility is to my employer and its stakeholders.
In many ways, modern software “bloat” is a Tragedy of the Commons.
tanishaj,
Yes, this is insightful, but at the same time disappointing, haha.
This has been a long-term problem. We put a lot of faith in capitalism to solve our problems, but when we look closely, nobody takes responsibility. You are absolutely right about incentives. We even idolize a philosophy of “greed is good”. We don’t seem to be able to act in our collective long-term interests when there are short-term personal gains.
So many social problems could be solved by working together, but we don’t seem capable of it. At the extreme we spend trillions on warfare, which isn’t only tragic in its own right, but the opportunity cost is mind-boggling. I don’t know how to fix it; it’s just a shame.
Yes, retro computing was fast.
In DOS, we booted in seconds.
Before that the BIOS had a BASIC interpreter that just came on.
However all of those had drawbacks, and except for dedicated devices, it is quite difficult to go back.
I will say something that is not discussed as much as it needs to be.
I believe the main reason is storage: neither RAM nor SSDs have kept up with CPU speeds.
How come?
In the past you’d have 64KB of RAM, and a 360KB storage (5.25″ floppy).
A 10MHz 8086 could read about 5MB/s, which would get through the entire RAM in about 13ms, and the floppy controller would finish reading the entire disk in under 11.8 seconds.
Compare that to today.
We have 64GB of RAM on modern desktops, along with ~2TB of NVMe storage.
For a modern i7 CPU, reading through all of that RAM would take about 830ms with high-quality RAM (more than 63 times slower!).
And the SSD? More than 5 minutes! (Compare that to 12 seconds.)
We might not be reading the entire storage every time, but this should give a meaningful idea of how much slower storage access has become relative to computation speed and storage sizes.
We can no longer do “instant” computing because of this.
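Those back-of-envelope figures are easy to sanity-check; here is a tiny Python sketch where the bandwidth numbers are rough assumptions (not benchmarks), chosen just to reproduce the ratios being discussed:

# Rough "how long to read ALL of RAM / ALL of storage" comparison.
# Bandwidth figures are ballpark assumptions for illustration only.

KB, MB, GB, TB = 1024, 1024**2, 1024**3, 1024**4

systems = {
    "1980s PC (8086 + floppy)": {
        "ram": 64 * KB,   "ram_bw": 5 * MB,      # ~5 MB/s memory reads
        "disk": 360 * KB, "disk_bw": 31 * KB,    # ~250 kbit/s floppy
    },
    "modern desktop (i7 + NVMe)": {
        "ram": 64 * GB,   "ram_bw": 77 * GB,     # ~77 GB/s dual-channel DDR5
        "disk": 2 * TB,   "disk_bw": 7 * GB,     # ~7 GB/s NVMe
    },
}

for name, s in systems.items():
    ram_t = s["ram"] / s["ram_bw"]
    disk_t = s["disk"] / s["disk_bw"]
    print(f"{name}: read all RAM in {ram_t:.3f}s, read whole disk in {disk_t:.0f}s")

With these assumptions the old machine reads all of its RAM in roughly 13ms and its floppy in about 12 seconds, while the modern one needs roughly 0.8 seconds for RAM and close to five minutes for the whole SSD, which is the ratio shift being described.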
I think you’re ignoring the massive changes in storage capacity. Yes, I can drain a pint in a minute, but the bar might empty a keg in a night. The beer pumps might be faster than I can drink, but the barrel is much larger than my glass.
A 360k disk might hold the OS and a few often-used programs. It’s totally possible to read the entire disk in one session, because it doesn’t hold that much. But I’m not going to read the entirety of my SSD, because there are hundreds of games in my Steam library, all many gigabytes big, and I’m not going to play all of them at the same time.
The way we use computers has changed, along with what we use them for.
The123king,
I think I already mentioned the storage capacity changes.
However, capacity is still a good proxy for the amount of data processed.
The older DOS screens were text mode, and contained 80×25 (or even 40×25) characters. That is 4,000 bytes in color mode.
Today, we have 4K monitors almost as standard. That is 24,883,200 bytes in RGB format. 6,200x increase!
Same with the documents we process. In DOS you could use a word processor to edit your resume. It would be akin to a binary RTF file and would be measured in a few kilobytes. Today, my resume is several hundred times that size (and it doesn’t even include graphics).
The DOS booting process required touching only a few files (IO.SYS, MSDOS.SYS, CONFIG.SYS, COMMAND.COM and AUTOEXEC.BAT, which loaded HIMEM.SYS and maybe EMM386.EXE). You see, I can still list them 20+ years later.
Today? The modern OS would literally need thousands of files to come online, and I don’t think anyone can list them anymore.
Yes, we don’t read the entire disk or the entire memory. But those sizes have increased for a reason (I don’t see anyone complaining “I have too much RAM, I can’t even fill half of it!”). Actually, thinking back, we read the entire RAM many, many times during regular processing.
sukru,
The problem is that a 2TB disk isn’t comparable to a 360k floppy.
Even back in the day, one disk wasn’t enough. You had disks for every application/game. That floppy couldn’t even hold a single MP3. By contrast, the 2TB disk can hold countless MP3s and photos, or a thousand movies/games, etc., in addition to the OS.
I think modern operating systems are doing too much at boot. A well optimized OS could get boot times much lower.
It’s not just bootup either, I find that both windows and normal linux distros take too long to turn off too!
By comparison my linux distro boots up in well under 10s and shuts down instantly because I took out all the unnecessary bloat. Maybe it’s just me but I like a minimalist OS.
Alfman,
The point is data grew over time. You mentioned MP3s, but during the floppy times we were playing MOD files, and I could fit my entire library on a single disk (or a few).
Just for this I tested my Windows system (Intel NUC 12 Enthusiast)
From cold boot 9 seconds to first BIOS screen.
From BIOS 8 seconds to Windows login
From Windows desktop 7 seconds to complete power off
My Mac would be even faster.
(Though I am not sure what the point would be here)
True. But our data sizes really exploded.
AxelF.MOD is 64KB in size and can be played on an 80386SX.
The FLAC version is ~70MB.
The audio quality is of course non-comparable.
Again, I mentioned screen buffers (4KB vs 24MB). What is more, direct rendering is no longer viable. For a multi-monitor, multi-DPI setup you have to have a desktop compositor. That also helps with security, as you can have partial views rendered by another application with a different authentication context. That means your basic memcpy is replaced with texture formats, GPU acceleration, remote procedure calls, and whatnot.
Fonts? No more bitmaps, but modern scripted ones with ligatures and other features for a nicer (and scaling/multi-DPI compliant) world.
Disk access? No more INT 13h, but complex scheduling of NVMe command queues.
And so on…
Yes, if I take my i7-12700H and run DOS on it, it would be really fast. So fast that programs would crash, being unable to count cycles. However, our requirements are much greater than what we had back then.
sukru,
It sounds like a very small collection to fit on a single floppy, but then I had hundreds. Anyway, the point is floppies were very limiting. Later versions of DOS needed several floppies, and I don’t remember exactly, but games like King’s Quest needed several too.
We had an LS-120, which offered a nice capacity upgrade in the same form factor, but those never became popular.
https://en.wikipedia.org/wiki/SuperDisk
Obviously CDs became the standard distribution medium for a long time. In theory CD-RAM could have been used as a working disk, but that didn’t become popular either.
Is this M.2 media? At least modern storage reduces latency bottlenecks, which were very significant back in the day.
My distro runs from a squashfs image, which loads very quickly even from relatively slow media like cdrom because it is just a sequential file. Many operating systems are guilty of random reads even though booting is extremely predictable. It’s an obvious area for optimization. If this were sufficiently optimized, I honestly think we could do away with sleep/hibernation. Cold boot in 2s FTW!
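A rough way to see the sequential-versus-random gap described above is a small Python timing sketch like the one below; the file path is hypothetical, and you would want to run it against a large file that is not already sitting in the page cache, otherwise you are mostly measuring RAM:

import os, random, time

PATH = "/path/to/large_test_file"   # hypothetical: any multi-hundred-MB file not in page cache
CHUNK = 4096

def timed(fn, *args):
    t0 = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t0

def read_sequential(path):
    with open(path, "rb", buffering=0) as f:
        while f.read(1 << 20):      # 1 MiB reads, front to back, like a squashfs image
            pass

def read_random(path, n=20000):
    size = os.path.getsize(path)
    with open(path, "rb", buffering=0) as f:
        for _ in range(n):
            f.seek(random.randrange(0, max(size - CHUNK, 1)))
            f.read(CHUNK)           # scattered 4 KiB reads, like a seek-heavy boot

if __name__ == "__main__":
    print("sequential:", timed(read_sequential, PATH), "s")
    print("random    :", timed(read_random, PATH), "s")

On spinning disks or optical media the difference is dramatic; on NVMe it shrinks a lot, which is part of why boot-time random I/O hurts far less than it used to.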
I’d like to optimize the bios to be faster, but that’s out of my hands.
Yes, clearly.
Alfman,
Of course. The single NVMe is not enough for me today either. The point was about data sizes growing faster than data access speeds.
I think those were the worst times for data size / speed ratios... or second worst, after the period before SSDs became mainstream but spinning drives had already gone past a terabyte. (A RAID5 scrub could easily take days!)
I still remember getting faster CD-R drives, all the way from 2X (300KB/s) to much faster ones later in their life:
https://en.wikipedia.org/wiki/Optical_storage_media_writing_and_reading_speed
Yes,
As shown in the wiki link above for optical storage, every new medium gives us some reprieve, but not for too long.
Floppy -> ATA -> IDE -> SATA -> SSD -> NVMe
In parallel, we had CD-ROM, DVD-ROM, and Blu-ray, but those have become a dead end, except for distributing licensed read-only content.
Each starts out with roughly good total read speeds but very small sizes (I remember using fast 40GB SATA SSDs for boot drives; today they would be slower than an average USB stick). Over time the technology matures and sizes increase, but speed not as much.
sukru,
For me it’s because of work and VMs and backups and security cams and everything that piles up, but we may be outliers. A lot of people I know have their phone and that’s it. My dad prints everything he wants to keep. Even his email folder is clean, haha. I appreciate he is at the other extreme.
This seems rational to me. A given interface may be able to address petabytes of data, and no updates to the standard are needed for manufacturers to sell increasingly larger storage capacities. When it comes to interface speeds, however, manufacturers can’t just add bandwidth; that implicitly requires computers to implement new, faster hardware standards (including NVMe and USB). These speeds have increased very dramatically, it’s just that the progression isn’t as smooth because the standardization process takes time.
sukru,
I suspect there will be many of us who take issue with this comparison. Just because an old OS might take up a whole 360KB floppy, it does not follow that a new OS should take up a whole 2TB drive. God help us then, haha.
Given the same amount of work though, it’s no contest at all. Modern hardware performance beats the pants off older hardware performance: cpus, memory, storage, networking, all of it. To me the problem doesn’t lie with the hardware but rather the software. It’s gotten so much bigger and more bloated…sometimes counteracting the hardware gains. 🙁
Old hardware was slow, but it was thanks to developers who took optimization seriously that it performed as well as it did. They optimized out of necessity, but modern generations of developers have lost this optimization mentality. I’ve heard it plenty of times in my field: developer time costs more than a hardware upgrade, so don’t try to optimize when a hardware upgrade will do. Though I understand this rationale, as a user, slow and inefficient software is really frustrating! The inefficiencies get multiplied across the user base. For software with a large number of users (i.e. millions), one or two people spending a week optimizing an inefficient process could save millions of man-hours, not to mention electricity and hardware costs. But it’s a low priority for modern CS grads and their employers who would rather save the money. Meh, oh well.
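The “multiplied across the user base” arithmetic is easy to sketch in Python; every number below is a hypothetical assumption, just to show the shape of the trade-off:

# Back-of-envelope: developer time spent optimizing vs. cumulative user time saved.
# All numbers are hypothetical assumptions for illustration.

users = 2_000_000             # assumed active users
seconds_saved_per_day = 10    # assumed per-user saving from one optimization
days_per_year = 250           # assumed days per year the software is used

dev_weeks = 2                 # assumed optimization effort (two person-weeks)
dev_hours = dev_weeks * 40

user_hours_saved = users * seconds_saved_per_day * days_per_year / 3600
print(f"developer cost : {dev_hours:,.0f} hours")
print(f"user time saved: {user_hours_saved:,.0f} hours per year")

Even with modest assumptions, a couple of person-weeks of optimization is dwarfed by the user time it can save each year; the catch, as noted above, is that the saved time belongs to the users while the spent time belongs to the vendor.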
We can most definitely do “instant” computing. You could hack your own FORTH interpreter that would boot from within the SoC firmware.
The thing is that those “instant” basic interpreters really didn’t do much, like the aforementioned FORTH interpreter.
Dynamic CMOS logic has increased in speed at a much faster pace than DRAM and NVMe (and by implication HDD and network) speeds. Thus the need for byzantine memory architectures with all sorts of caching along the way.
There has also been a tremendous increase in design complexity all over a modern computing architecture, from the out-of-order, extremely speculative cores to the ridiculously high-performance accelerators on the same SoC die (GPU, NPU, video, network, modem, USB, etc.).
On the other hand, the amount of compute power per dollar increase is also remarkable. Now you can have a Raspberry Pi for peanuts, which has better CPU performance than a Cray from the 90s, and better graphics than a high end SGI from the same vintage. All running on a tiny board using just a few watts.
Yes, that is my point.
There is a fundamental limit thanks to the speed of light. As you move further away, distance imposes hard limits on latency (at 1GHz, the speed of light limits you to ~300mm per clock cycle, and electrical signals are even slower, so you cannot place anything further than ~15cm away and expect signals to get there and back within a single clock cycle).
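For reference, here is a quick Python calculation of how far a signal can travel per clock cycle; it uses the vacuum speed of light, and real propagation in copper or on-die wiring (very roughly half of this, depending on the medium) makes the budget even tighter:

# Distance a signal can travel in one clock cycle at various frequencies.
# 3e8 m/s is the speed of light in vacuum; real propagation is substantially
# slower, so these figures are upper bounds.

C = 3.0e8  # m/s

for freq_ghz in (1, 3, 5):
    cycle_s = 1 / (freq_ghz * 1e9)
    one_way_mm = C * cycle_s * 1000
    print(f"{freq_ghz} GHz: cycle = {cycle_s*1e9:.2f} ns, "
          f"light travels {one_way_mm:.0f} mm one way, "
          f"{one_way_mm/2:.0f} mm there and back")

At 1GHz the one-way figure is about 300mm and the round trip about 150mm, matching the ~15cm rule of thumb above; at multi-GHz clocks the reachable radius shrinks to a few centimetres.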
Yeah, but a lot of the complexity in terms of microarchitecture comes from trying to find workarounds for those latencies and physical transmission limits.
I don’t know if I am following your argument.
Okay, the original argument was that RAM and physical storage speeds did not ramp up as fast as computation speed or storage sizes.
Hence, we are bottlenecked with I/O transfers from memory, SSD, ethernet, GPU, and whatnot.
(Back then even 100 megabit ethernet, which was common, was faster than local hard drives. Today, however, if I want something faster than a local SSD, I need to go for 50GbE+ fiber, which is extremely rare and expensive. And yes, that SSD is also slow relative to how the rest of the system has grown.)
I don’t remember early PC boot times being fast though. You’d turn on a PC, it’d slowly spin up its hard disk, the BIOS would check the floppy disk is still there which made grinding noises, then it would check all the RAM, then it would start booting the OS, which required many disk seeks so the disk would jump around…
Agree with the point about ratios. We don’t check RAM at boot anymore because it would take too long.
I think https://datagubbe.se/stupidslow/ is a good read, and somebody wrote a nice elaboration on it showing that video playback is using hand-optimized assembly with GPU offload that’s blazing fast (so you can get that 4K video rendered) but then navigating available videos is using higher level JS and DOM and is generally sluggish.
malxau,
I agree, your link hit the nail on the head!
As the author mentions, things that are computationally hard, like gaming and 4k video are heavily optimized and we’ve achieved amazing speedups. But things that should be simple can feel sluggish despite the benefit of faster hardware.
The web is a good example. Another might be Blender. The raytrace performance is downright jaw-dropping on modern hardware, but at the same time some parts of the UI are painfully slow and in desperate need of optimization to improve feedback latency. Sometimes I even think it’s crashed if I haven’t waited long enough (and sometimes it does actually crash).
malxau,
Thank you for the read. Yes, modern software can be really bloated (that is another reason for the slowness; and yes, it is also affected by the memory limits, as it hits either cache invalidation or even swap).
Alfman,
We don’t have enough engineers for all the work that has to be done.
You mentioned game engines being heavily optimized. That is usually the case (but definitely not always so; remember Cyberpunk 2077 at launch?), because they usually have a sufficient number of highly skilled engineers to optimize the heavily used paths.
For UI, however, we don’t normally expect a senior engineer to do the job. Even though, as you alluded to, it is a very important surface (literally the part that interacts with humans), it is commonly an afterthought.
So, you have the junior engineer fresh out of college (or even just finished a boot camp), building a UI under a very stressful, short deadline.
There are exceptions of course. In Adobe Lightroom, for example, the UI does not feel sluggish, but the operations are. Then again, we are usually managing terabyte sized collections of media, which would naturally be slow to access.
sukru,
I think this lets companies off the hook too easily though. Oftentimes we’re already there working on the projects, willing and able to finish the job well, but they won’t let us because the higher-ups want to cut costs, which is the real culprit IMHO. Not for nothing, but I want my work to be good as a matter of pride. It bugs me when companies don’t let us do a good job.
I think it depends. Sometimes a heavily optimized game engine is the whole product, and it needs to be optimized or it won’t sell. But the game studios who buy these game engines are notorious for rushing projects with unreasonable deadlines, even when it’s obvious to the employees working there that the end product will suffer. From what I’ve heard about the games industry, this is the norm. It’s what you should expect if you go into gaming.
Most senior software engineers I know personally stopped coding a long time ago. You can be a great heads down coder, but management opportunities are generally more lucrative than coding. I stuck with coding because that’s what I like, but honestly I kind of regret it. It’s a difficult living. Managers who don’t do any coding are always higher up the food chain and with local companies there’s never enough money. Ideally one would land a unicorn company that you grow with, but in reality many end up shutting down.
When interfaces are good, we take those for granted. It’s just when they’re bad that we notice anything. So I guess if it weren’t for bad UIs we wouldn’t appreciate the good ones. Haha.
I made a new little Game Boy game in 2023, written in Assembly language, and it was SO MUCH FUN. It’s hard to put into words why… The feeling of powering on the console, knowing the exact address in the cartridge ROM that it’ll begin reading from, and understanding every bit of every byte that’s read and processed, is so refreshing and so cool.
Compared to, say, running an application using a library built on Python read by an interpreter running on an OS running on a CPU. Almost any single layer is too complex for any one person to fully understand, and there’s just layers upon layers.
drcouzelis,
That is cool. You actually did this on authentic hardware? Even cooler 🙂
Thanks! 🙂 You’re welcome to browse the code here.
https://github.com/drcouzelis/icecreamcastle-gb
I actually wrote the soundtrack to another game that just came out today. I didn’t do any of the programming on this one, but the person who did also writes everything from scratch in Assembly language.
https://snorpung.itch.io/pixel-logic
Fortunately, almost everything can be emulated today. This means that on a moderately powerful computer from 2024, you can run basically any configuration of a home or office computer from the 70s, 80s and 90s. Considering this, buying e-waste for nostalgia’s sake is completely incomprehensible to me. You won’t buy your youth back this way. Besides – back in the day – you didn’t use thirty-year-old barely functioning junk. Those devices were brand new and fully functional back then.
Educating yourself in the field of systems that in the old days you only knew from magazines or didn’t know at all is a different matter. For example, in the 80s and 90s I never touched Amstrad systems, 16-bit Ataris or 8-bit Commodore computers other than the C-64. Discovering and learning about these systems today, comparing them with other retro systems that I knew back then is also an interesting experience for me. Except that it can be done cost-free and rationally using emulation.
But at the end of the day – it’s your business and your choice what you do with your money and your home environment.
A 2024 Toyota will get you from A to B quicker, more safely, and more comfortably than a Ford Model T will. What’s the point of collecting 100-year-old junk when a modern car is much better? Why not drive that Model T in a driving simulator instead?
Same goes for computers. Yes, a modern PC will emulate all these systems, but it’s just not the same.
People have all sorts of irrational emotional attachments to all sorts of things.
It’s fascinating to read posts claiming that the very same systems from back when I was a kid are now held up as examples of good designs/systems/products while modern designs/systems/products get trashed.
Back in the day I remember the contemporary old farts bitching about those systems being crap, and about how things were better back in their day. The arguments are almost verbatim. It’s like people really become their parents when they get old. Ha ha.
Not just with tech, I have seen the same dynamics with music, sports, politics, etc. It’s the same cycle over and over.
Some people just love the nostalgia. Turns out a lot of people don’t like change. It is what it is.
Xanady Asem,
Well, sometimes things that are well received gradually become worse. People don’t necessarily perceive it immediately because it doesn’t happen overnight, but we have to consider that sometimes criticisms actually turn out to be true over the long term.
A lifecycle similar to the one we’re experiencing has already happened to radio, cable TV, the web, and game consoles, and now that it’s the desktop computer’s turn, do you think we will fare better? No, probably not. All of our tech giants have a vested interest in capitalizing on anti-features and anti-patterns that aren’t designed for the user’s benefit at all. They won’t say it, but we know it and they know it. It’s just the reality that once they have all the users they are going to get, the original focus on attracting new users through innovation is no longer enough to satisfy investors. There are two ways to increase profits once market growth peaks, and generally neither is good for the consumer: platform enshittification and layoffs.
Just like the radio stations that have already gone through this lifecycle: we’re experiencing this lifecycle in computing today. It’s obvious that ads and tracking are getting worse, but also some of the other cost cutting aspects are happening too, like laying off dev teams and then not having enough developers for the work that has to be done as discussed in another thread.
https://www.osnews.com/story/140653/what-we-can-learn-from-vintage-computing/#comment-10443164
Not for nothing, but these aspects of business are the basis for some of the criticism from consumers.