I expect our global supply chain to collapse before we reach 2030. With this collapse, we won’t be able to produce most of our electronics because they depend on a very complex supply chain that we won’t be able to achieve again for decades (ever?).
[…] Among these scavenged parts are microcontrollers, which are especially powerful but need complex tools (often computers) to program them. Computers, after a couple of decades, will break down beyond repair and we won’t be able to program microcontrollers any more.
To avoid this fate, we need to have a system that can be designed from scavenged parts and program microcontrollers. We also need the generation of engineers that will follow us to be able to create new designs instead of inheriting a legacy of machines that they can’t recreate and barely maintain.
This is where Collapse OS comes in.
That’s one way to introduce an operating system. Collapse OS is a unique project aimed at creating an operating system that can run on microcontrollers and can self-replicate.
This is such an interesting idea! I like it; however, as it stands the supported equipment is a bit on the exotic side even for today: a Sega Genesis?! That won’t be helpful if everything collapses. It might make more sense to stockpile something with cheap commodity parts. I never laid my hands on one, but I’m reminded of the One Laptop Per Child, which was designed to be highly serviceable and low power.
In this hypothetical collapse, I think the biggest shock to modern society would be the loss of telecoms. With no cell towers or functioning ISPs, we would likely have to revert to telegrams and fax machines (not at home, but at regional facilities).
Sometimes the doomsday preppers talk about wireless mesh technology, which isn’t a bad idea. However, the viability depends on massive numbers of people possessing the necessary equipment and software before a collapse. Once a collapse happens, it’s too late. It just sucks that we’ve already got the hardware for potentially billions of mesh nodes today (existing routers, phones, etc.), yet the firmware is too proprietary and locked down to repurpose them as mesh network nodes. Ideally, all network-capable devices would be upgradeable by the user to a mesh networking firmware in the event of a calamity.
LoRaWan is almost the embodiment of a post-apocalypse mesh, isn’t it?
I’ve been watching this grow in recent years, and it’s accelerated in the last year or so due to widely available maker solutions being published openly and free. With a little persistence your grandma can probably get one up and running for $20.
I may be mistaken, but most (all?) LoRaWAN deployments use backhaul internet to connect nodes. The first hop may be over long-distance low-bandwidth LoRa, but once it hits the gateway it’s just TCP/IP over the usual internet.
AFAIK there’s no mesh routing in LoRa, but I’d be delighted to find resources that indicate I’m wrong.
Yes, most stations are just connected to the internet. Some have radio uplinks, especially in remote areas.
However, LoRaWAN really isn’t a good candidate for general-purpose internet use. The very low duty cycle means low bandwidth, and unpredictable latency when you need anything more than minimal throughput. It’s meant mostly for telemetry, sensors, smart city stuff, etc.
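To put rough numbers on that claim, here is a back-of-envelope sketch. The data rates are the nominal LoRa PHY rates for a 125 kHz channel and the 1% figure is the typical EU868 duty-cycle limit; this is purely illustrative, not a full airtime model.

```python
# Why LoRa duty-cycle limits rule out general-purpose use: even the
# fastest common setting collapses to a few dozen bytes per second sustained.
RATES_BPS = {
    "SF7/125kHz  (short range)": 5469,  # nominal PHY rate at coding rate 4/5
    "SF12/125kHz (long range) ": 293,
}
DUTY_CYCLE = 0.01  # typical 1% regulatory limit on EU868 sub-bands

for name, rate in RATES_BPS.items():
    sustained = rate * DUTY_CYCLE
    print(f"{name}: {rate} bps raw, ~{sustained:.0f} bps sustained")
```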
Things like NPR (New Packet Radio), or even good old AX.25 for low bandwidth, are a better match. Even plain Wi-Fi meshing with increased power for the backhaul can span large areas.
I think we are debating two different things.
I’m discussing a solution suitable for a post-apocalypse low-bandwidth communications network, on the assumption that 3G, 4G and 5G plus the internet backbone as we know it are foobar! A low-bandwidth, long-range, SMS-level network to advertise or find some grub or a doctor might be more than useful! People might beg for 256 kbps if it all hits the fan; there are a lot of resources within a 5 mile radius!
@bartgrantham, the hardware is already available for Ad-hoc LoRa without a need for an internet node. But of course nobody would bother to implement it, they all want a node to access the internet.
But if there is no internet….?
Can’t reply below so here I go 😀
256 kbps isn’t practically achievable on the LoRa devices I’ve tried… I couldn’t get more than a few tens of kbps at a range of under 1/4 mile. Some of the ham radio inspired long range, low power communication methods would probably be more appropriate for long range… otherwise sneakernet would probably make the most sense. The original floppy disk hardware would not be impossible to replicate; it’s fairly low tech (magnetic paint on a mylar disk in a protective sleeve) and a constant-speed drive… etc… nothing crazy. MFM hard drives are basically arrays of floppy drive heads on platters… that would get you into the 10-100 MB range with relatively low tech.
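For a sense of scale, a rough sneakernet comparison. The drive capacity, walking speed and radio rate below are my own assumptions, just to show the orders of magnitude:

```python
# Sneakernet vs. a low-rate radio link: carrying a scavenged 40 MB MFM
# drive 5 miles on foot still beats a ~20 kbps link in raw throughput.
drive_bits = 40 * 1024 * 1024 * 8    # assumed 40 MB drive
walk_seconds = (5 / 3.0) * 3600      # 5 miles at ~3 mph
sneakernet_bps = drive_bits / walk_seconds
radio_bps = 20_000                   # assumed sustained radio rate

print(f"sneakernet: ~{sneakernet_bps / 1000:.0f} kbps equivalent")
print(f"radio link: ~{radio_bps / 1000:.0f} kbps")
```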
Discrete transistors are not difficult to manufacture, and many computers have been designed with them. ECL logic would be the fastest way to get back to a fairly high amount of engineering CPU horsepower… 50-100 MHz etc… especially with a vector unit like a Cray 1.
People have made integrated circuits in their garage, so… it would not be like starting from zero.
cpcf,
I understand your point that it’s not just about bandwidth; however, from what I understand of LoRaWAN, it’s designed around a very specific type of use case for intermittently connected sensors. It’s highly optimized for low power consumption and for synchronizing many devices around a centralized gateway connected to a backbone, which is great for sensors. But in terms of mesh/backbone connectivity I don’t think it would do a good job, because it wasn’t designed to be a backbone technology. It’s not clear to me that it could be easily re-purposed for that role in the field.
IMHO WISP technology might be a better candidate for an alternative adhoc backbone.
LoRaWan is a proprietary technology, but I don’t know if this matters during a collapse?
802.11ah could be an option.
https://en.wikipedia.org/wiki/IEEE_802.11ah
The thing is, in this hypothetical scenario, we wouldn’t have the luxury of choosing an ideal tech. We couldn’t order new stuff; we’d have to rip stuff out of homes and businesses, and it would be a struggle for most people who just aren’t prepared and never bothered to select user-serviceable hardware.
For all of its other potential weaknesses, one major benefit of 802.11 gear is that, off the shelf, it’s already designed to support packet forwarding networks. A lot of equipment can be used as-is and wouldn’t need to be hacked to rebuild a network, which could be an important factor to consider.
cpcf,
“A low-bandwidth, long-range, SMS-level network to advertise or find some grub or a doctor might be more than useful! People might beg for 256 kbps if it all hits the fan; there are a lot of resources within a 5 mile radius!”
This has existed for many many years. Packet radio started becoming popular in the 70s.
There were (and still are) a ton of stations online that you can reach over AX.25. They are all text-based BBS systems, and you access them through a TNC. You can connect nodes together, send messages (mail), do live chats, post bulletins, etc. Over HF, it’s easy to span the entire planet with just a few stations.
As for the SMS-type text, position reports, etc.: this has also existed for about as long. The system used by HAMs is called APRS.
Have a look here: http://www.aprs.fi
It’s really very slow, but since you only exchange small amounts of information at a time, this is not really a problem. On 2 meters and 70 cm, APRS and plain AX.25 generally run at 1200 bps, but can be run at 9600 bps reliably. HF packet is usually 300 bps to 1200 bps depending on band conditions, etc.
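For the curious, here is roughly what such an APRS position report looks like in TNC2-style text form, and how little airtime it needs at 1200 bps. The callsign, coordinates and comment are made up, and AX.25 framing/FCS overhead is ignored:

```python
# A made-up APRS position report and its airtime at 1200 bps.
callsign = "N0CALL"                  # hypothetical station
lat, lon = "4903.50N", "07201.75W"   # APRS uses ddmm.mm / dddmm.mm
comment = "have food, need doctor"

packet = f"{callsign}>APRS,WIDE1-1:!{lat}/{lon}-{comment}"
bits = len(packet.encode("ascii")) * 8   # payload text only, no framing

print(packet)
print(f"~{bits / 1200:.2f} s on air at 1200 bps")
```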
Then there is stuff like the winlink (hate the name) network, etc. That allows you to send email among a few other things.
There are many many modes that can be used to send data over the airwaves, ranging from the old school (RTTY, PSK, etc.) to the new (FreeDV modem, GMSK, etc.).
Projects like NPR can do a reliable 2 Mbit/s over 50+ km easily. Pseudo full duplex.
As has been demonstrated many many times before… when mainstream communication networks fail, HAMs come in and are able to get information out and help very effectively.
This is mainly due to the fact that we do not rely on infrastructure for most of our comms. There are many services that do make use of the internet (echolink, DMR, dstar, fusion, etc), but they are “optional”. Just convenient ways of instantly connecting with people from all over the world. It takes away many of the variables like band conditions (space weather, solar flares, etc) and … nonsensical neighborly disputes (what’s that big antenna doing there, it’s giving me cancer …) etc.
I can go sit in a park somewhere with a small lead acid battery and a small portable radio and reach half of the world with no use of any infrastructure whatsoever. My radio to their radio, nothing in between.
And guess what … CB can do the same!
p13,
Ah yes, good to mention the HAM operators; their radios predate centralized networks and they can operate independently. It’s a great low tech option. But I do have a question: what is the practical upper limit on simultaneous active users, say if hundreds of millions of “normal” people wanted to keep in touch with their friends/family? In a way it might be good that not too many people are HAM radio operators, because it would result in too much noise on the airwaves.
I guess at some point during a collapse, HAM operators might start disregarding their frequency boundaries and start broadcasting on any open frequency, which gives them a lot more spectrum. I don’t know if that would help or hurt things?
Regardless, you are probably right that HAM operators would play a key role!
“But I do have a question: what is the practical upper limit on simultaneous active users, say if hundreds of millions of “normal” people wanted to keep in touch with their friends/family? In a way it might be good that not too many people are HAM radio operators, because it would result in too much noise on the airwaves.”
Well, the bands would definitely get crowded 🙂
It kind of depends on what you set out to achieve. If you look at it in terms of mail or email, with a delay of a few hours at most, then yes, the whole world could definitely be connected for text-only email. HAM modes on HF are usually very “narrow” in the spectrum, at most 3-ish kHz wide. The 40 meter band, for example, is 200 kHz wide (for most of the world). The 70 cm band is 30 MHz wide for the US. 23 cm is wider still, 13 cm even wider, etc.
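A crude channel-count estimate based on those figures (no guard bands, no propagation or interference effects, purely illustrative):

```python
# How many 3 kHz-wide narrowband channels fit in the bands mentioned above.
bands_khz = {
    "40 m (200 kHz)":     200,
    "70 cm, US (30 MHz)": 30_000,
}
CHANNEL_KHZ = 3  # typical narrow HF mode bandwidth

for name, width in bands_khz.items():
    print(f"{name}: ~{width // CHANNEL_KHZ} simultaneous channels")
```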
It all depends on the purpose. Everything above HF needs line of sight, and propagation will go down as you go down in wavelength.
You could easily envisage 70 cm local coverage (say 30-50 km) with an HF uplink for stuff that needs to go out of country or overseas.
With proper planning, quite a bit of capacity can be created, relatively speaking of course … suffice to say, you won’t be streaming 4K netflix over this.
Also, quite a few people transmit out of band … even if you’re not supposed to haha. There is quite a sizeable community that uses repurposed military gear (echo charlie band, look it up).
Essentially, there is no one really stopping you from doing this, so in a real crisis, people will just do whatever they want, or whatever works.
High speed data is also possible, of course, but that will just consume more bandwidth. I believe a 1 Mbit/s transmission rate on NPR is 750 kHz wide.
The thing is, we already have a truly vast network in place with the ability to relay information worldwide. Just not many people know about it today 🙂
HAMs regularly exercise these scenarios. We set up events called field days, where we set up a mobile station and attempt to make as many contacts (QSOs) as possible. It’s perfectly normal for a single-radio, single-operator station to log over 500 QSOs in 24 hrs.
These mobile stations are set up off the grid in a remote location (hence the name field day). We run off of batteries, solar cells, generators, etc. The rules are that you cannot be connected to the grid in any way. It’s fun 🙂
I was also part of the team that set up communications with the ISS here in Belgium for a school. You can watch a recording here: https://www.youtube.com/watch?v=8DWLphGXnhg (interesting stuff starts at around 19:20)
Talking to the ISS was pretty magical.
It’s a fun hobby that might turn into a real life saving necessity some day.
And yes, it might happen, and we will be ready to help in whatever way we can 🙂
This kind of reminds me of TempleOS in how it has a purpose that has more to do with ideas than computers.
It also reminds me of TempleOS. More specifically, the part where someone with a severe mental illness (and/or a conspiracy theory crackpot) can still get attention from people that should know better.
Some of those people are rejects of society from that one site who have no obligation to be better. It does strike me as either a child’s idea or that of someone with a mental illness. The idea that we’ll suddenly run out of silicon, trillions of used x86 computers, the ability to repair them, the expertise to create new chips, etc. is so ridiculous that it should be pointed out before this guy becomes a target of that group.
…and before 2030 it is much more ridiculous.
As interesting a premise as this is for an OS, without semiconductor manufacturing it’s not particularly valuable. It feels like the retrocomputer version of a prepper who can metalsmith a gun from raw material, but has no idea what to do if they run out of gunpowder.
> Sure, modern technology is generally fragile and unfixable, but there is plenty of robust modern computer hardware out there, and machines lucky enough to run a self-contained operating system will continue to provide a modern computing environment for decades.
>
> Therefore, Collapse OS won’t be actually useful before you and I are both long dead.
Either hardware survives, in which case why bother with this OS? Or it doesn’t, and the first thing to do is build a new hardware platform, which is _MUCH_ harder than hacking together a simple Z80 assembler. This whole idea is a commodity. There have to be tens of thousands of software engineers who could hack together a project like this.
On the other hand, there are about 50 different resources that will become scarce long before the knowledge of how to build Z80 programs disappears. His time would be better spent researching and publishing how to do makeshift semiconductor manufacturing without the aid of modern chemical supply chains. In a post-collapse society that would be a FAR more valuable piece of knowledge than yet another Z80 bootstrap.
And if you truly had to start from zero, analog electronics would be a good first step back to modern civilization, not getting a Sega Master System to run some bootstrap SD driver.
> I’ve spent some time doing software archeology to see if something that was already made could be used. There are some really nice and well-made programs out there, such as CP/M, but as far as I know (please, let me know if I’m wrong, I don’t know this world very well), these old OSes weren’t made to be self-replicating. CP/M is now open source, but I don’t think we can recompile CP/M from CP/M.
SMDH. CP/M could almost certainly self-host development; I just don’t know if anyone has bothered. It would probably be far easier than building something from scratch, and once it worked you could run the wide variety of existing CP/M software, which is more than what his OS will have. FFS. Dude should try spending some quality time with the CP/M archives; it would be a good wakeup call.
If he really wanted to build a bootstrap-from-nothing roadmap to etch into stone tablets for our post-collapse ancestors, Forth is a far better starting point. Not glamorous or ergonomic, but the bang-for-buck makes it a better bootstrapping system.
I get the feeling this person hasn’t really done their homework.
The Late Bronze Age Collapse (~1177 BCE) might be considered a collapse of a global supply chain, http://etc.ancient.eu/interviews/what-caused-the-bronze-age-collapse/
After a collapse we don’t need to be able to make new CPUs, we just need to be able to keep them running. Most of what breaks in computers is not the complex logic; it’s batteries, capacitors and power supplies (largely a capacitor problem). Then we also have issues with solder whiskering. All of these have very low tech solutions. I guarantee you that if everything collapsed tomorrow, there’d be millions of modern Intel-based computers still running 100 years from now, as long as there’s power and a good supply of very low tech spare parts.
Yeah, no. While this is true for older machines, it just isn’t true for newer ones. It used to be that things like CPUs, ROM, etc. would last forever, but that just isn’t the case anymore with the way feature sizes keep shrinking. Don’t expect a 7 nm process CPU to last for 20 years. It just won’t.
Board repair isn’t very straightforward anymore either. Old boards used fairly large components, and on really old boards, the components that are likely to fail (like capacitors) are even through-hole. That’s an easy fix, sure, but replacing 0201-sized SMD components at home … yeah … not so much. Filter caps and such are easy enough though, and the power supplies still use through-hole components. Older boards / logic last a long time, and will continue to do so. Modern machines suffer from utter bullshit like BGA solder joints failing, components desoldering from what is basically just overheating (and that POS RoHS compliant solder). Add to that the huge vendor lockdown that’s about to get a whole lot worse. You can’t reinstall your OS without an internet connection if the original drive fails …
Having a bodge wire on something that has a clock of a few tens of MHz might not be a big deal, but good luck trying that with, say, a PCIe 4 lane …
Also flash storage has a limited life span. Mechanical drives are much better in that regard, but they will still fail, especially high capacity drives. They are filling them with helium now … that seal won’t last forever.
Computers really have become a limited life cycle product. Sure, a well built home-built PC tower with good cooling will last a lot longer, but that’s just a tiny part of the actual market. Most “PCs” nowadays are laptops, ultrabooks, tablets, etc. with severely compromised thermals. Just look at Apple … smh … those machines die after just a few years of use. Hell, many of them won’t even turn on when the battery goes bad.
If this scenario ever really becomes a reality, then our best bet is old old tech.
I’d like to see some evidence backing up the claim that CPUs and RAM will fail simply because they are smaller; I haven’t seen anything like that yet.
As for board repairs, while harder to do, not impossible. Macs die not because of just bad thermals, but mostly because of bad engineering. I believe the most recent incidents are “flexgate” and the butterfly keyboard that Apple is quietly giving up on (but the actual board design itself isn’t much better). Apple is simply a really bad example as they don’t make products designed to last (and should be called out for it mercilessly)
Yes, lead-free solder isn’t ready yet. One would think that in a time when a gel that can replace tooth enamel is being tested, material science would have also provided us with a lead-free solder that can tolerate heat cycling.
dark2,
It’s simple really. Voltages go down with feature size. They have to; otherwise, you will just jump the transistor and/or exceed its breakdown voltage.
Let’s do some dead reckoning here. Nothing scientific, just ballpark figures. Grains of salt advised.
Nowadays, CPUs are running at <1.2-1.5 volts.
Let's take a Ryzen 3700X as an example. Stock vcore will be below 1.4 V. Stock TDP is said to be 65 watts. In reality, it will go higher than that, but let's stick with that. Let's be generous and say 60 of those watts go to the cores. That's almost 43 amps being fed in there at the highest advisable vcore.
A Pentium II 400 (just as an example) has a TDP of somewhere around 25 watts at a vcore of about 2 volts. That's 12.5 amps.
This is why old stuff lasts a long time, and new stuff won't last nearly as long.
You could make the case that newer chips have 2-4-8-whatever cores, but … at a macro level, this does not matter. It is the same die. If even one of the billions of transistors breaks, everything dies.
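A quick check of those figures with nothing more than I = P / V (the 60 W core-power split is the assumption from the comment above, not a measured value):

```python
# Current drawn by the cores at a given power and core voltage: I = P / V.
examples = {
    "Ryzen 3700X (assumed ~60 W to cores @ 1.4 V)": (60.0, 1.4),
    "Pentium II 400 (~25 W @ 2.0 V)":               (25.0, 2.0),
}

for name, (watts, vcore) in examples.items():
    print(f"{name}: ~{watts / vcore:.1f} A")
```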
dark2,
This is all anecdotal, but my older devices lasted decades. My old nokia never stopped working after many years. My decades old 286 & 386 laptops never stopped working (though the battery was crap). I obviously replaced these because they became obsolete, but the hardware lasted multiple decades. More and more I’m having to replace new computers and phones “prematurely” due to component failures. I’m on my 3rd phone in the past 4-5 years due to non-voluntary replacements. So while I lack definitive proof, I think it’s plausible that new devices are being built for a shorter lifespan.
In the case of storage, I think it is well understood that when you’ve got cells using more matter/energy to represent a state, they’re naturally more robust. The official specs for NAND flash make it fairly clear that new products have much lower durability (i.e. fewer P/E cycles) than older products.
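To illustrate that flash endurance point, here are the usual ballpark per-cell program/erase figures. These are rough industry rules of thumb, not datasheet values:

```python
# Ballpark P/E cycle endurance by NAND cell type: newer, denser cell
# types store more bits per cell but tolerate far fewer write cycles.
pe_cycles = {
    "SLC (1 bit/cell, older)": 100_000,
    "MLC (2 bits/cell)":        10_000,
    "TLC (3 bits/cell)":         3_000,
    "QLC (4 bits/cell, newest)": 1_000,
}

for cell_type, cycles in pe_cycles.items():
    print(f"{cell_type}: ~{cycles:,} P/E cycles per cell")
```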
While I don’t actually have good statistics for CPU durability, in theory the smaller manufacturing process and tighter tolerances could mean faults happening earlier.
Granted my hand waving isn’t a good substitute for a scientific analysis, however given my experience I personally have more confidence in older hardware outlasting newer hardware. It’s also interesting to note that NASA uses much older tech for its mission critical space missions.
@p13, thanks for the rundown. I’m not into HAM or other techniques; I was more talking about what you can do with an SBC or other microcontroller and not much else. Tiny, cheap hardware with on-board 800 MHz parts like Particle.IO or Espressif. Stuff you can scavenge and build on with recycled passive components, not a benchtop LW station.
I had a similar experience when I played a bit with LoRaWAN. I concluded 18 kbps seemed to be all that was achievable, then I fluked a search result on the web that gave me an insight into the problem, not really a why or how but a pointer to a why. I stopped using generic development platforms (mostly open source plugins for Eclipse or Atom) and went to the chip-maker’s proprietary IDE. I should have noticed this earlier, but the code being uploaded by the generic platforms was massive, the overhead burdening the system with housekeeping tasks, great if you have school kids building breadboards to light an LED. A tiny routine took up 23k of space; from the proprietary IDE the same was 4k, and a side effect was that throughput rose to 67 kbps! Hard to believe but reality. I realised that to do this seriously and get anywhere near the theoretical limits means one thing and one thing alone: old fashioned hand written machine code, lean and mean!