As reported on CNET today:
A huge proportion of internet-connected imaging devices at hospitals run outdated operating systems, according to research released Tuesday by Palo Alto Networks, a cybersecurity firm. The company found that 83% of these devices run on outdated software that can’t be updated even when it contains known vulnerabilities that hackers can exploit.
This is such a serious issue, but most people are oblivious to the problem of critical legacy systems that cannot be upgraded. Most critics just make uninformed statements like “upgrade to a modern OS”, but it’s usually a cocktail of ageing hardware and legacy software requirements that stops upgrades from happening.
This isn’t news. It’s well known that hospitals are stuck with aging and outdated OSes.
Windows XP is extremely common in hospitals. Windows 7 is pretty common too. Windows 95/98/2k isn’t as common, but can still be found. I’m sure that in certain settings even older machines can be found, such as DOS-based machines, and older minicomputer systems such as the Data General Nova and DEC PDP-11 can still be found controlling certain medical hardware. I’m almost certain that many medical records systems are still VMS-based, and I wouldn’t be surprised if some of them are still running on VAXen.
Yes, it’s a problem. No, it’s not news.
The123king,
You’re right on all counts.
Not only do they run outdated software, but arguably the bigger problem is that they don’t run on isolated networks.
They ought to be following the NSA’s procedures for secure computers and public computers, with no connections between the two. Updates don’t always fix the problem either: witness zero-day exploits such as EternalBlue. Last time I criticized Thom for putting all the blame on administrators while not recognizing that updates to life-saving medical equipment HAVE to go through a certification process. Administrators have no wiggle room here.
http://www.osnews.com/story/29815/hit-by-wannacry-no-one-to-blame-but-yourself/
Thom: I’m glad you dropped the “I shed no tears for you. It’s your own fault.” line this time round 🙂
Of course you should update secondary equipment, like staff PCs, but automatic updates cannot be applied to life-saving equipment without validation. The risk of a faulty update can be higher than the risk of being compromised, and people forget that the update mechanisms themselves are literally “back doors”. This back door should be blocked at the firewall along with everything else. Perhaps there’s an opportunity here, NOT to make it easier to update, but to make it much, much harder to update, as in using read-only file systems and read-only media. That way genuine updates could only take place offline and would require a deliberate manual process.
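To illustrate what I mean, here’s a minimal sketch of the kind of precondition check such a deliberate offline process could start with. This is purely my own illustration for a hypothetical Linux-based device (the “eth0” interface name and the mount table path are assumptions, not anything a vendor actually ships):

```python
# Illustrative precondition check for a deliberate, offline update process.
# Purely hypothetical: assumes a Linux-based device; "eth0" is just an example name.
import sys

def root_is_readonly(mounts_path="/proc/mounts"):
    """Return True if the root filesystem is currently mounted read-only."""
    with open(mounts_path) as mounts:
        for line in mounts:
            fields = line.split()
            if len(fields) >= 4 and fields[1] == "/":
                return "ro" in fields[3].split(",")
    return False

def link_is_down(iface="eth0"):
    """Return True if the network interface is down (or absent entirely)."""
    try:
        with open(f"/sys/class/net/{iface}/operstate") as state:
            return state.read().strip() != "up"
    except FileNotFoundError:
        return True

if __name__ == "__main__":
    if not root_is_readonly():
        sys.exit("Refusing to proceed: root filesystem is not mounted read-only.")
    if not link_is_down():
        sys.exit("Refusing to proceed: network link is still up; disconnect it first.")
    print("Preconditions met: continue with the manual, validated update.")
```

The point isn’t this particular script, it’s that the update path should fail closed unless someone has deliberately put the machine into a maintenance state.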
I understand people’s gut reaction may be that disabling updates is irresponsible, but this applies more to consumers who aren’t taking any mitigating steps. Think about a mission critical operating environment like NASA launches, nuclear reactors, or Cathy’s minesweeping endeavors. It can be better to mitigate plausible threat vectors than to potentially disrupt the computer’s mission critical tasks by introducing new update variables.
VMS is still supported on Itanium and there’s a port to x86_64 in the works. It’s a much better choice than its bastard stepchild Windows NT for anything critical.
I wasn’t really saying that VMS itself was inherently bad. I’m aware it’s still under development, and an x86-64 port is due soon. No, I’m more critical of the 25-year-old machines it’s running on, and it’s likely the VMS install is just as out of date.
Hahaha, bastard stepchild. I like that.
Not news to those in the know in the industry, but I suspect that Joe and Jill Average, or your local politician, would be shocked to find that the expensive high-tech medical machine hopefully saving their life is controlled by hardware and software many would consider vintage.
The same applies beyond medical applications to a host of infrastructure control systems and high tech scientific research devices.
Another worrisome issue is that many of those same devices are actively connected to the internet to upload data or be remotely calibrated and serviced. I admit I’m hypersensitive to this, because one of my roles is to keep such systems running, which is in itself an issue because my ilk and I are not getting any younger.
This is all true… not news, especially since these systems were deployed in the era of proprietary OS/hardware combos, when security through obscurity was more prevalent. With hospitals’ slim margins, what is an easier and more cost-effective solution? You have imaging systems for X-ray, MRI, CT scans, ultrasound, etc. There are medical records systems that still rely on the old Novell NetBIOS/IPX/SPX stack. Funny how one hospital still uses pcAnywhere for support.
This is one of those industries that should have embraced open source software long ago. Some of the software is ancient, and the companies behind it either no longer exist or do nothing more than maintain the equipment without fixing software issues.
The sciences, and medicine in particular, should definitely embrace it as much as possible.
leech
+1
It’s not clear if you are referring to the operating system or the application software, but either way I agree it should be open source. Alas, manufacturers would rather it all stays proprietary, since proprietary software and vendor lock-in help them maximize profits. For better or worse, this hardware can outlive the manufacturer’s support window and even its existence, leaving hospitals with millions of dollars invested in proprietary black boxes with no support. How can we actually fix this? Whether it’s medical equipment or cell phones, capitalism doesn’t provide us the right incentives to develop long-lasting, maintainable devices 🙁
The debate here highlights the problems. Some people want the connectivity plug pulled, yet if their results needed transmission to a specialist, they haven’t told us how they’ll do that without connectivity. Maybe your life-saving specialist is in another county, state or country! A USB stick? Security fail, game over, go back to making uranium centrifuges! Maybe you are a remote doctor needing advice from a cardio-thoracic surgeon in life-saving minutes, not hours or days; no USB stick will do!
Others demand modernisation without intrinsically safe, traceable and tested outcomes; open source is often cited as a pathway. It’s all a bit naive: successfully updating and unplugging little Johnny’s game station is not the same as upgrading a multi-million-dollar critical system (critical in that lives might be at stake). Open source is mostly “patch it up after the community finds a fault”; tell that to Boeing.
I wonder if the same people would be so blasé about upgrading and patching the air traffic control system in charge of their next plane journey, or do they not give that a thought? Better not to know!
cpcf,
Calling for isolated networks does not mean engineers cannot come up with other safer methods of achieving external interoperability goals. Quoting my past self…
http://www.osnews.com/story/29815/hit-by-wannacry-no-one-to-blame-but-yourself/
The point being these problems can be solved, but we must get away from this notion that the only way to keep a platform secure is through constant updates down the road. Instead of relying on future updates, mission-critical product engineers should assume that the OS and software are vulnerable even if they aren’t known to be; this forces them to implement mitigations for threats that don’t yet exist, but probably will. Block all access to absolutely everything from the get-go, and then devise simple protocols that don’t have any means of modifying the system, escalating privileges, or breaking under fuzzed inputs. Ideally this should be built on a mathematically provable state machine. Extremely critical machines can implement redundant security measures like hardware firewalls. Developers hate to hear this, but languages that are vulnerable to overflows and corruption (you know who you are) should be avoided. Such a system can remain robust against remote penetration even as vulnerabilities are uncovered, since those vulnerabilities aren’t exploitable over limited protocols that have no means of triggering them.
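To make that a bit more concrete, here’s a minimal sketch of the kind of deny-by-default protocol handler I mean. The opcodes, field sizes and port number are all made up for illustration; the point is that requests are fixed-length, carry no writable payload, and anything outside the tiny allow-list is silently dropped:

```python
# Sketch of a deny-by-default, read-only device protocol.
# All names and numbers here are hypothetical, chosen only to illustrate the idea.
import socket
import struct

ALLOWED_OPCODES = {0x01: "GET_STATUS", 0x02: "GET_LAST_READING"}
REQUEST = struct.Struct(">BH")  # 1-byte opcode + 2-byte sequence number, nothing else

def handle(datagram):
    """Validate one fixed-length request; anything unexpected is rejected."""
    if len(datagram) != REQUEST.size:      # fixed length: no room for payloads
        return None
    opcode, seq = REQUEST.unpack(datagram)
    if opcode not in ALLOWED_OPCODES:      # default deny
        return None
    # Replies are derived from device state only; requests carry no data that
    # could modify the system, so fuzzed input is at worst ignored.
    body = b"OK" if opcode == 0x01 else b"42"
    return REQUEST.pack(opcode, seq) + body

def serve(host="0.0.0.0", port=9000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, addr = sock.recvfrom(64)
        reply = handle(data)
        if reply is not None:              # stay silent on anything malformed
            sock.sendto(reply, addr)
```

Wrap something like that behind a hardware firewall and you have multiple independent layers that all have to fail before an attacker can even talk to the interesting parts of the system.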
Alas, I’m well aware that security is more often than not an afterthought. It’s not taken very seriously by project managers or even clients until after the damage is done. Unfortunately, until this attitude changes, we all know it’s going to be more of the same going forward.
Was Boeing’s software open source? As I understand it, the fault was the lack of input redundancy rather than a software bug, right? Boeing tried to pass it off as pilot error, but any system that ignores pilot input and causes a crash is seriously flawed. Their initial fix was to patch the software to ignore the pilot’s input less aggressively and persistently, but the truth is the faulty MCAS unit rendered 737 MAX aircraft far more dangerous than they would have been without it. Ironically, MCAS was introduced to eliminate the need to retrain pilots; it was a costly mistake. /OffTopic
Some people here are not realising just how much work is involved in updating equipment that has such critical functionality. I work in the medical industry designing this sort of thing, and one machine I was working on many, many years ago was conceived when XP was still current. Early in development, Vista was released and so we moved to support Vista, given that that was the most modern Windows offering there was and we wanted to have the longest period of stability possible. We all know what happened then, but anyway, by the time 7 came around, the development still wasn’t finished, but was at a stage where switching to 7 required a significant amount of testing – so much so that it would delay the launch of the machine by up to a year. So the machine was launched with Vista anyway since hospitals were eager to get the machines. They’re now running 7, with a project expected to move them to 10 hopefully next year.
That’s the situation with “current” machines. These devices, with 10-year development cycles and an expected 30-year service life, simply don’t fit the rapid development cycle of consumer PCs. Legacy machines that we built in the past, or OEM from other manufacturers, are still out in the field running Windows XP, Windows 2000, MS-DOS, and one of our machines even runs an old QNX build. Interesting hardware though – we have one machine that’s a sort of dual 386 (multiple SBCs on a backplane), which was due to be phased out years ago but is still in the field in some countries, usually in hospitals that can’t afford the €50,000 to replace it.
Open source doesn’t really solve anything in this regard. We have other OEM machines that run on Linux, but you can’t just upgrade the kernel or software whenever you like. There’s a six-month project involved in that, by which time there’s probably another kernel update available. So key updates are selected every few years and bundled along with a user software update. None of that is taken care of by the OSS community; it’s done by paid engineers and contractors, so there’s really no significant difference compared with proprietary software from companies. Kernel updates occasionally go wrong, causing issues with graphics cards, sound cards etc., typically less common hardware. That simply cannot be allowed to happen with the usually very obscure, custom hardware these machines use.
daedalus,
I agree very much with your post overall. Obviously, simply being open source doesn’t by itself make updates more reliable, and when it comes to certified machines, you can’t apply updates without re-certifying. However, I strongly disagree that there’s no difference. You’ve completely overlooked the fact that proprietary platforms are heavily dependent on specific vendors. Whether it’s Microsoft or some other vendor, it’s problematic that you’re stuck with no alternatives. This is one of the main differentiators of open source: it eliminates the dependency on proprietary vendors. While I don’t work with medical equipment, I’ve certainly been impacted by dead-end proprietary devices with limited support options. Regardless of who you are, it’s a lousy situation to be in. This is where the freedom of an open source platform shines.
I’d be willing to compromise and agree with you that many people in the industry may not be aware of the benefits that open source brings, but that’s a matter of education. I will not agree with you that there’s no significant difference between proprietary and open source when it comes to legacy support options. Open source = flexibility, proprietary = dead end.
Daedalus is right on the money!
Unfortunately, open source would be more like Russian roulette if used in the operation of many critical systems. It’s not even reliable enough to be successful in a ubiquitous sector like cellphones, which accumulate millions of real user testing hours in just weeks or months!
That’s not to say there won’t be closed systems built on open source seedlings. But there is no way the ongoing maintenance and operation will revert to their open source roots.
cpcf,
This is illogical to me. There is a “Russian roulette” problem when you install Linux on random hardware and expect it to just work. However, when you buy purpose-built turnkey equipment, you’re getting hardware and software that were explicitly selected and certified to be compatible with one another. Even in the case where users are stuck on a specific Linux kernel for their hardware (a very common problem with proprietary ARM devices), it’s completely illogical to blame open source for this when the real culprit is the proprietary (i.e. non-open-source) bits. Your predicament obviously does not get better when the whole platform is proprietary; you are still stuck.
This depends on how committed all parties are to open source in the first place, and I realize this can be a problem. Many vendors see great benefit in using open source themselves and do embrace it, but prefer not to extend those benefits to their users. This violates the principles of the GPL and leaves end users with many of the same downsides as proprietary systems. I concede this is a real challenge today. Nevertheless, it still doesn’t support your notion that “open source” is unreliable. A code base being reliable or not has everything to do with the engineering that went into it and absolutely nothing to do with it being open source.