On Thursday, tech giant Juniper Networks revealed in a startling announcement that it had found “unauthorized” code embedded in an operating system running on some of its firewalls.
The code, which appears to have been in multiple versions of the company’s ScreenOS software going back to at least August 2012, would have allowed attackers to take complete control of Juniper NetScreen firewalls running the affected software. It also would allow attackers, if they had ample resources and skills, to separately decrypt encrypted traffic running through the Virtual Private Network, or VPN, on the firewalls.
[…]
The security community is particularly alarmed because at least one of the backdoors appears to be the work of a sophisticated nation-state attacker.
Merry Christmas, everybody.
“The weakness in the VPN itself that enables passive decryption is only of benefit to a national surveillance agency like the NSA,” says Nicholas Weaver, a researcher at the International Computer Science Institute and UC Berkeley. “You need to have wiretaps on the internet for that to be a valuable change to make [in the software].”
There, fixed it for them!
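To make the passive-decryption part concrete: if the random number generator that produces a VPN endpoint’s key material is rigged so that its output can be reproduced from something the attacker already knows, an eavesdropper who merely records traffic can recompute the session keys offline. Here is a deliberately simplified sketch of that idea in Python – not Juniper’s actual code; every name and the “known state” below are made up for illustration.

import hashlib

def toy_prng(state: bytes, n: int) -> bytes:
    # Toy deterministic generator: its output depends only on 'state'.
    out = b""
    while len(out) < n:
        state = hashlib.sha256(state).digest()
        out += state
    return out[:n]

# Firewall side: derives a "random" 256-bit VPN session key.
planted_state = b"value the attacker already knows"  # the rigged part
session_key = toy_prng(planted_state, 32)

# Eavesdropper side: never touches the device, just records the encrypted
# traffic and recomputes the same key offline from the known state.
recovered_key = toy_prng(planted_state, 32)
assert recovered_key == session_key

That is why this kind of weakness only pays off for someone who already has wiretaps: the rigged generator yields nothing unless you can capture the encrypted traffic in the first place.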
This is where a user like me throws his hands in the air and says “If the experts can’t keep me safe, what chance do I have”?
This goes back over 3 years. And not from just any company, but from Juniper. This is the tech equivalent of VW cheating massively on their emissions tests, only worse, because emissions tests are a side issue for VW whereas security is the main business for Juniper. Just like in the VW case, I expect others to announce they “had similar issues” and “are working on a solution for the future”.
It remains to be seen how the code got into the code base. It might have been a mole. Not that it matters, because we can’t really trust Juniper’s or the US authorities’ answers to these kinds of questions.
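For illustration, this is roughly how small a planted credential check can be – a hedged Python sketch of the general technique, not ScreenOS code, with a made-up magic string:

import hashlib, hmac

def verify_password(password: str, salt: bytes, stored_hash: bytes) -> bool:
    # Normal path: compare a salted hash of the supplied password.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    if hmac.compare_digest(candidate, stored_hash):
        return True
    # Planted path: one fixed string, disguised as debug scaffolding,
    # lets anyone into any account. (String is invented for this example.)
    if password == "<<< debug(%s) = %u":
        return True
    return False

A couple of lines like that in a codebase of millions are easy to miss in review, whether they came from a mole or from an outsider who gained commit access.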
If the business model is not to be fully open there will always be trust issues. Nothing you can do.
Lennie,
There’s no sure thing, though. Open source is sometimes compromised too; the theory is that problems will be found, but as we’ve seen in the past, open source projects can be insufficiently funded and lack qualified volunteers examining the codebase. It’s unfortunate that proprietary software gets most of the money.
It took Juniper 3 years to find it, and they could have kept it quiet. That is why it is more likely a mole?
Yes, keeping open source projects active and healthy takes a lot of time (read: money).
Well, most of the money is in: platforms and services.
Not in selling software: http://redmonk.com/sogrady/2015/06/03/software-paradox/
And I would prefer all platforms to be open platforms.
The Internet is a platform, the web is a platform, but so is Windows and so is AWS. But also Linux (distributions).
And if you look at it from the view I’ve mentioned above, your biggest risk is the smartphone you probably have.
You probably always have it nearby, it has access to almost all your communication. It probably already uploads stuff to the ‘cloud’ (a bunch of computers some company owns and you might not even be a paying customer).
A cellphone actually has 2 operating systems: the Android, iOS or Windows Phone you see, and the software running on the baseband processor. The baseband processor has access to everything on your phone (more than, for example, Android has) and is fully controlled by your mobile network operator. Thus they control your phone, you don’t. And none of it is out in the open.
“The baseband processor has access to everything on your phone”…Are you trying to tell me that the baseband processor knows about the Encrypted filesystem on my phone…or knows how to access an SD-Card?
Of course I know that everything that gets transmitted back and forth to the provider is suspicious, but as far as I know the (encrypted) storage on my phone is safe.
Also, carriers/providers are basically the entire security hole of the internet and have been completely indemnified from everything. Worrying about them is like worrying about the software inside your internet facing router/firewall….oh wait
It depends: who has access to the encryption key?
The key is kept in hardware? But what part of the hardware has access to that key?
If it’s an iPhone with encrypted storage, is a cleartext (not encrypted) version of that data kept on iCloud or whatever it’s called this time? Yes, probably. If I’m not mistaken: when you get a new iPhone, you’ll get everything restored on the new one.
Let’s make it even clearer: there is a (small) risk that when you connect your smartphone by USB, your computer can be attacked:
http://www.wired.com/2014/07/usb-security/
The encryption only secures its content while it is not decrypted, i.e. when the device is off. When the encrypted filesystem is decrypted and mounted, it’s of course fully accessible.
It is fully accessible to the OS that has decrypted it, not automatically to another OS (the baseband OS). The question is how advanced a baseband (or SIM OS) is. Can it actually access the memory of the other OS (like a hypervisor can probably access the memory of its guest OSes), or does it simply not have such functionality built in? Would a baseband OS actually have knowledge of filesystems?
Logic would say that the writer of a baseband OS or SIM OS wouldn’t bother to put in such code. On the other hand, paranoia would say that this would be the perfect place for a (mandatory) backdoor. Without access to the source code we will probably never know (the reverse engineering of Rex OS didn’t show anything, but that is also not something reverse engineering would normally uncover).
It would be very interesting to know more about the capabilities of these ring-zero OSes. If stealth access is an objective, I would expect them to tap into the client OS. But I agree, that would need to happen, because just accessing the storage would not help.
What is the problem? I thought the government wanted back doors in all software.
/S
Security enforcing tech must be open source.
The issue of ‘how many qualified eyes” looking at the code is valid but secondary.
If stuff is open there are whole classes of bad behaviour that won’t be attempted.
project_2501,
I agree we need to demand that infrastructure systems are open source, but we really need more than that, we need to demand that the hardware is open also.
Moving Linux to GPLv3 would generally solve this problem for Linux firmware, but alas, that’s a different debate…
absolutely agree – open means open.
half open is not open.
“baseband” software on mobile devices, for example, is scary – it runs at a privileged level beyond “software root” and can do anything, hidden, undetectable by normal software. and it is on every smartphone …
http://www.osnews.com/story/27416/The_second_operating_system_hidin…
Are we all forgetting about the state that OpenSSL got into, or Dual_EC_DRBG, or that the NSA have been attempting to get backdoors into Linux for some time now?
Just being Open Source isn’t enough. It needs to be Open Source and actively maintained and audited on a regular basis by a wide range of people. Which is expensive and time-consuming, which is why it doesn’t tend to happen until something bad happens – and then suddenly people are wringing their hands over how terrible it is and how we should all have worked harder to stop whatever it was from happening in the first place.
Are you sure the NSA is still just *attempting* to get backdoors in? You do know they write and maintain SELinux. Just saying.
You’re right that just because something is open doesn’t mean it is automatically secure.
What we’re saying is subtly different – that if something is open, there are whole classes of bad behaviour that are not attempted, and others become less likely.
Which has got to be better than giving up and “throwing the baby out with the bathwater” as we say here in England
Wait…what?
Have they not done a code review since 2012? I find that hard to believe. If true, what kind of “security” company only reviews its software every few years?
On the other hand, have they been doing regular code reviews and missed this vulnerability for three years? That doesn’t sound good either. For a “security” company?
What am I missing?
so much for open source not having enough eyes… it is a myth that corporate code is somehow better reviewed or audited.
the difference is this – it is easier to hide bad code in proprietary closed code than in open code.
that isn’t the same as saying open code is perfectly reviewed – it doesn’t need to be to discourage whole classes of bad behaviour…
Once documented, it’s an official feature.