In 2014, “60 Minutes” made famous the 8-inch floppy disks used by one antiquated Air Force computer system that, in a crisis, could receive an order from the president to launch nuclear missiles from silos across the United States.
But no more. At long last, that system, the Strategic Automated Command and Control System or SACCS, has dumped the floppy disk, moving to a “highly-secure solid state digital storage solution” this past June, said Lt. Col. Jason Rossi, commander of the Air Force’s 595th Strategic Communications Squadron.
These are incredibly difficult systems to upgrade, so this is no small feat.
Thom Holwerda,
Actually I imagine the contrary is true: it’s likely easier to upgrade than to continue maintaining the legacy components. The article alluded to some of these reasons: OEMs no longer support the original computers, replacement components are no longer available, media is no longer available. These old computers require regular repairs. Engineers and technicians with incredibly niche skills are needed for maintenance and operation. While modern components are often not field repairable, they are laughably cheap and much easier to work with. All of the computing power from that era combined has less power than a single mobile phone today. Granted, that’s just an example of how far we’ve come, and clearly computing power isn’t all that matters to the mission, but I wouldn’t be surprised if the original systems are rather deficient in areas of crypto, at least by modern standards.
Not to mention the most obvious problem of all: how the hell are HAL 9000 and Skynet going to run on floppy disks?
🙂
It has nothing to do with cost or availability. The budget for nuclear weapons is huge. The cost of the computers is not an issue; rather, the systems are validated and are part of a system architecture that has to meet certain criteria that are beyond most control/embedded systems.
That the stuff is old is beside the point. The issue is that it is a system in which every single detail/component was part of an architecture/implementation that was fully validated and correct at levels that are probably some of the highest ever implemented. Changing a single element could have side effects that were not considered in the original design, and a tremendous level of validation/study has to be performed.
You can upgrade to some cheaper/higher-performance system. But in the long run the cost of the hardware is basically a non-issue compared to the effort that has to go into re-architecting and re-validating.
It’s the same with nuclear stations. A lot of the hardware involved in the control is ancient, but that is beside the point. The point is that it works, not that modern HW is cheaper or more performant.
There is an interesting industry dedicated to keeping and maintaining ancient HW like PDP systems, which are absolutely outdated by any metric. And yet they fetch a tremendous amount of money.
javiercero1,
I have to disagree with you here. While it’s often true that cost isn’t the biggest factor, I only brought it up in response to Thom’s statement. So even if I agree that cost savings isn’t a motivating factor for the government, it doesn’t mean there aren’t other reasons to upgrade. When it comes to electronics, a lot of these ancient technologies predate major advances in microprocessors, which brought about improvements not merely to cost, but also to reliability & quality. It’s neat to see old tech being brought back to life, but we have to be honest with ourselves: it is a liability today. We had a lot of skilled engineers back then doing their best by hand, but it doesn’t really hold up well to the accuracy and consistency of modern fabrication techniques. The increasingly niche skills needed to maintain and operate these systems are concerning. Component availability is concerning. Ancient crypto protocols are concerning, etc.
We hear that argument a lot in some corporate circles too, which is one of the reasons there are so many legacy systems around. But one of the ways I’ve seen this argument break down is that as time passes, the ancient system becomes inadequate and more and more effort & complexity is needed to bridge the gap and fit in modern requirements. For example, it’s highly doubtful that our nuclear arsenal today meets the security standards that the government requires of its IT contractors.
https://en.wikipedia.org/wiki/Federal_Information_Processing_Standards
Does it really make sense to hold our nuclear arsenal to a lesser standard?
Sure, but I already specifically said “computing power isn’t all that matters to the mission”. Also, your example of nuclear stations is ironic. Several catastrophic (and many other near-catastrophic) events have all been the result of misunderstandings, inadequate backups, inadequate safeguards, and inadequate training on old reactor designs. We have learned to engineer more reliable nuclear systems over time because the industry has learned from experience and evolved. New reactors are much safer than old reactors. The older designs put us in greater danger regardless of their former certifications.
So I guess to sum up: we should try to learn from past mistakes and should not take positions that impede our own progress. At the same time, it is unproductive to rebuild constantly. Ideally we should recognize that there’s a balance to be struck that avoids either extreme.
An anecdote tangentially relating to your debate. This is from a member of the Bristol Hackspace, who was an IT consultant back in the day:
He got a gig to sort out apparently lax security at a British nuclear power station. In his meeting with the top brass they told him there had been curious systems faults consistent with a virus, so his job was to install the appropriate firewall to stop it happening again.
Off he went, into the bowels of the station, going from manager to manager, down the chain, trying to work out where he was meant to be implementing this firewall. None of them knew.
Until eventually he found the technicians. He told them he had been hired by the bosses to install a firewall to stop the viruses that had been causing system faults.
They laughed, said “knock yourself out,” and showed him the computer.
It ran off punch cards.
The system faults were caused by build-ups of paper dust in the mechanism. The reason there were an increasing number of system faults was that the bosses kept reducing their budgets, so the technicians had fewer man-hours to spend cleaning out the dust.
They had reported this but the hierarchy was so dysfunctional, and the management had such fixed ideas about ‘IT’, that it never got taken seriously, or even read.
So this consultant went back to the top brass and told them he had successfully installed a firewall, and that the technicians now needed an increased budget to “manage the firewall”.
They happily agreed; our consultant got paid handsomely; the technicians spent the increased budget on more man-hours cleaning paper dust out of the mechanism; and, to this consultant’s knowledge, everyone in the nuclear power station lived grudgingly ever after, continuing to completely ignore and willfully misunderstand each other, right up to the present day.
The End.
Great story, and although my first instinct would be to yell “citation needed”, I am just going to store this in my mind as “based on a true story”.
Except that these components were engineered to the highest military specifications of the era. They were designed to not fail, and the QA would be vetting each component to ensure they are the highest quality and unlikely to fail. Part of the reason this kit has lasted so long is because it was designed to.
However, the process of vetting components is a long and expensive one. You can’t just go down Radio Shack and grab a bunch of microchips, and build a military grade computer. Those chips have to be vetted for failure rates, radiation tolerances, performance in regards to temperature, etc etc. And that’s just the hardware. Software goes through an equally long and expensive vetting process too.
One of the main reasons these ancient hardware systems are still in use is that it’s just too expensive to re-engineer the hardware and software to replace them, compared to just keeping them ticking along as is.
If you read the actual specification (https://csrc.nist.gov/csrc/media/publications/fips/200/final/documents/fips-200-final-march.pdf), you’d know that it very much does meet those standards. In fact, those standards apply to all military digital equipment, regardless of age. There are no grandfather clauses.
Also, woefully outdated doesn’t automatically mean insecure. If anything, it’s more secure, as the hardware to read/write 8″ floppies is pretty hard to find these days.
Name me one, ONE, instance where the failure of a computer system led to a catastrophic nuclear disaster.
Every disaster has been caused by some other external factor. Chernobyl being caused by overridden safety protocols, Three Mile Island being a terrible UI, Fukushima being an earthquake and total loss of power.
If you can find a single report of an outdated computer system directly leading to a nuclear disaster, I’ll buy you a coke.
The123king,
“Highest military specifications of the era” really translates to commercially available minicomputer in the shape of IBM Series/1.
The article suggests the boards are failing, and repairing them under microscopes is extremely difficult. I concede the manufacturing industry overall had higher standards before “the great cheapening” in the consumer space, but the military still builds new tech, and one can buy very high-spec, hardened components today. Medical equipment and hospitals face the exact same thing: a lot of electronic components say right in the manufacturer specs whether they are certified for use in life-critical devices. Many are not, and you have to pony up for components that are, which is not a bad thing.
The thing about building logic out of discrete components, regardless of quality, is that old circuits require so many of them that when you multiply together the probabilities of individual components surviving, the odds of a system failure actually go up simply because there are so many of them. And that’s before we even bring up human consistency versus machine consistency.
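To illustrate the point above, here’s a minimal sketch of the reliability math. The numbers are purely illustrative, not specs for any real component: if each of n independent parts survives with probability r, the whole system survives with probability r**n, so failure odds climb quickly with part count.

```python
def system_failure_probability(per_component_reliability: float, n_components: int) -> float:
    """Probability that at least one of n independent components fails.

    Assumes independent failures, each component surviving with the
    same probability -- a simplification, but it shows the trend.
    """
    return 1.0 - per_component_reliability ** n_components

# Even with 99.99%-reliable parts (illustrative numbers only),
# a board with 100 parts has roughly a 1% chance of a failure,
# while one with 10,000 parts fails more often than not.
print(system_failure_probability(0.9999, 100))
print(system_failure_probability(0.9999, 10000))
```

This is why a modern design that integrates the same logic into a handful of chips can be more reliable than a discrete-component board, even if each individual old part was built to exacting standards.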
I agree. Alas, I’m sorry to say that radio shack went bankrupt about 4 years ago. Sadly, for me that means there are no longer any brick and mortar stores carrying electronic components I sometimes need 🙁
That was exactly my point about changing requirements. Yes it’s *supposed* to be in compliance, but a lot of our old systems from 4+ decades ago are not even capable of complying. The new engineering required to add such functionality to an obsolete minicomputer would not only require a huge amount of novel re-engineering for equipment of that time period, but it would automatically require re-certification and I’m not even sure the US government has enough old equipment to undertake new testing & development. Even if it did, it just wouldn’t make sense not to take advantage of the improvements over the past several decades.
Well, this is rebutting something I didn’t say. It actually sounds like we are in agreement, based on what I literally said: “Several catastrophic (and many other near catastrophic) events have all been the result of misunderstandings, inadequate backups, inadequate safeguards, inadequate training on old reactor designs.”
I never claimed that old hardware couldn’t have done the job, however recall what javiercero1 said: “The point is that it works, not that modern HW is cheaper or more performant”. Yes, it works, but there have been major disasters, and most of those disasters would have been preventable in hindsight. Our understanding and processes have improved over time. Newer reactors are designed with more safety mechanisms and backup systems to protect against the catastrophic failures that older reactors were vulnerable to. So don’t you agree that it was an ironic choice to highlight nuclear reactors as something that just works and never benefits from being updated?
I’m going to hazard a guess that their “highly-secure solid state digital storage solution” is basically an encrypted USB thumb drive plugged into an industrial floppy disc drive emulator.
I bet that industrial floppy disk drive emulator is a GoTek, maybe with a custom FPGA converter to attach it to the IBM Series 1
They should have upgraded it to receive Tweets. Then US President Trump could launch using his favorite mode of communication