Researchers discovered that a Transmission Control Protocol (TCP) specification implemented in Linux creates a vulnerability that can be exploited to terminate connections and conduct data injection attacks.
The flaw, tracked as CVE-2016-5696, is related to a feature described in RFC 5961, which should make it more difficult to launch off-path TCP spoofing attacks. The specification was formulated in 2010, but it has not been fully implemented in Windows, Mac OS X, and FreeBSD-based operating systems. However, the feature has been implemented in the Linux kernel since version 3.6, released in 2012.
A team of researchers from the University of California, Riverside and the U.S. Army Research Laboratory identified an attack method that allows a blind, off-path attacker to intercept TCP-based connections between two hosts on the Internet.
Researchers noted that data cannot be injected into HTTPS communications, but the connection can still be terminated using this method. One attack scenario described by the experts involves targeting Tor by disrupting connections between certain relays so that users are forced to use attacker-controlled exit relays.
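(For readers on affected systems: the stopgap mitigation published alongside CVE-2016-5696 was to raise the kernel's shared challenge-ACK counter, `net.ipv4.tcp_challenge_ack_limit`, so the side channel becomes impractical to probe. A minimal Python sketch to inspect the current value follows; the path is Linux-specific, and later kernels randomize the counter or drop the knob entirely.)

```python
from pathlib import Path

# Linux-only sysctl; on patched kernels the counter is randomized,
# and on newer kernels this knob may be absent entirely.
SYSCTL = Path("/proc/sys/net/ipv4/tcp_challenge_ack_limit")

def challenge_ack_limit():
    """Return the global challenge-ACK limit, or None if not exposed."""
    try:
        return int(SYSCTL.read_text())
    except (OSError, ValueError):
        return None

limit = challenge_ack_limit()
if limit is None:
    print("tcp_challenge_ack_limit not exposed on this kernel")
elif limit < 1_000_000:
    print(f"limit={limit}: a low shared counter is an observable side channel")
else:
    print(f"limit={limit}: raised per the published mitigation")
```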
More proof that open source isn’t as secure as the evangelists make out.
As much as I like Linux in some ways, the constant spouting of evangelists about how secure Linux is compared to other OSes is enough to put you off it at times.
Also if they spent more time testing for vulnerabilities and bugs instead of trying to re-invent the wheel every day, things would be a heck of a lot better!
Sauron,
This particular issue has more to do with internet standards than with open source.
I wouldn’t say open source code is automatically more secure, but it IS automatically more transparent. Consider this scenario: you are a government in a country with concerns about espionage by US agencies. Do you opt for a proprietary platform developed by a US company? Or do you opt for an open source solution?
The advantages for open source become undeniable once you factor in the options that source code affords you. You can cut down the attack surface by eliminating components you don’t need. You can audit those you do need. You can monitor exactly what’s changing and even who’s changing it. With proprietary solutions you have no idea what’s in those binary blobs; you have to take the company at its word. Tell me, honestly, which approach do you think makes it easier to sneakily inject nefarious code?
Granted, you were probably referring to ordinary users who can’t afford the same level of due diligence with source code. And I concede if you are just going to blindly trust the software with no audits or checks of your own, then having the source code isn’t as helpful for you – you are just leaving it to trust. However note that there is still a fundamental difference in the nature of trust in the open source model: trust is scattered over a much larger group of people and it only takes one of them to raise a concern. With a proprietary solution, all your eggs are in one basket, they can target and deceive you for years, possibly under gag orders, without your being any the wiser.
Notice the part about it being implemented only in the Linux kernel?
That is what I was commenting on. It’s been there for at least four years unnoticed? Speaks for itself. It has nothing to do with standards, internet or otherwise, if only Linux uses it and is affected by it.
It probably doesn’t affect most people, but still, it’s there. Most developers are real about these things and they are bound to happen from time to time. But it’s still one in the eye for the sermon spouting evangelists!
That’s all much of a muchness. What about the ones in closed systems that were not found? It’s pointless arguing either way about it.
Knew it wouldn’t take long!
I’m not a “Linux evangelist” though. I use Linux because it’s what I’m used to. I’m thinking of switching my desktop to FreeBSD anyway.
Sauron,
Of course Linux has had faults; all software has. But if you are looking for a smoking gun to use against open source “evangelists”, this isn’t it.
Edited 2016-08-30 15:24 UTC
“… the spec itself is at fault, and we know it’s at fault.”
Thanks for the explanation, Alfman. It’s very rare for a properly negotiated open spec with a long history to be at fault. And that is a good thing :~)
dionicio,
Well, the information leakage needs to be fixed in the spec at least, because it makes known attacks easier. But the problem of TCP control packets being untrustworthy is a rather fundamental one because TCP control packets are able to cause a denial of service even with encrypted protocols on top.
Ideally, a denial of service shouldn’t be possible even with a passive network splice. The only way to fix this in the long term is to add crypto to the TCP layer (or lower).
Interestingly, MD5 signatures for TCP were proposed to improve BGP security way back in 1998.
http://www.ietf.org/rfc/rfc2385.txt
Edited 2016-08-30 16:24 UTC
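For the curious, the digest computation in RFC 2385 is straightforward. Here is a sketch in Python with illustrative field values (not a real BGP segment): per the RFC, the keyed MD5 covers the pseudo-header, the TCP header with its checksum zeroed (options excluded), the segment data, and finally the connection key.

```python
import hashlib
import socket
import struct

def rfc2385_digest(src_ip, dst_ip, tcp_header, payload, key):
    """Keyed MD5 per RFC 2385: pseudo-header, then the 20-byte TCP header
    with the checksum field zeroed (options excluded), then the data,
    then the connection key."""
    seg_len = len(tcp_header) + len(payload)
    # Pseudo-header: src addr, dst addr, zero byte, protocol, segment length.
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, socket.IPPROTO_TCP, seg_len))
    # Zero the checksum field (bytes 16-17 of the TCP header).
    hdr = tcp_header[:16] + b"\x00\x00" + tcp_header[18:20]
    return hashlib.md5(pseudo + hdr + payload + key).hexdigest()

# Dummy 20-byte TCP header: ports, seq, ack, data offset, flags,
# window, checksum, urgent pointer. Values are made up for illustration.
header = struct.pack("!HHIIBBHHH", 179, 54321, 1000, 2000, 5 << 4, 0x18,
                     65535, 0, 0)
print(rfc2385_digest("192.0.2.1", "192.0.2.2", header, b"bgp-update", b"secret"))
```

Both endpoints compute the same digest from the shared key, which is exactly why the scheme needs pre-configured keys on each side.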
Those MD5 signatures are in pretty wide use for BGP. But it’s not a general solution.
You need to configure both sides of the TCP connection with the same key. That obviously doesn’t work for visiting the average website.
Lennie,
On the one hand, I understand the motivation for ditching TCP with its legacy problems. On the other hand, I’m not sure I really like the idea of software re-implementing TCP equivalents at the application layer of the network stack. There’s a lot that can go wrong if we leave things like network throttling to application protocols instead of the operating system. UDP protocols can easily take over all network resources if they’re not careful. Should TCP really be discouraged, or should it be fixed?
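To make the throttling concern concrete: the kernel’s TCP stack paces every application for you, while a UDP-based protocol must police itself. Below is a hypothetical token-bucket pacer, a minimal sketch of the kind of self-restraint an application-level protocol has to reimplement (real userspace stacks such as QUIC implement full congestion control, not just pacing; names and parameters here are invented for illustration).

```python
import time

class TokenBucket:
    """Allow sending at most `rate_bps` bytes/sec on average, with bursts
    up to `burst` bytes. Kernel TCP does this (and much more) for free;
    an app-level UDP protocol has to bring its own."""

    def __init__(self, rate_bps: float, burst: float):
        self.rate = rate_bps
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def try_send(self, nbytes: int) -> bool:
        """Spend tokens for a datagram; False means the caller must back off."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

A sender without such a limiter will happily saturate the link, which is exactly the “take over all network resources” failure mode described above.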
Totally agree of course.
The reasons they are doing it are several, among them we have these:
1. OS kernels are slow to implement and adopt new protocols, and when they do, kernel implementations can’t easily be used to test new protocols in the field, at scale, or be updated quickly.
2. Nobody uses the OSI stack in real life. We just layer whatever we want.
3. Newer protocols want to merge several properties into one single layer: HTTPS-like security, very few round trips, and another: multipath. As an example, try coming up with an API where applications determine how validation of certificates is handled and everything else is in the kernel.
The solution? Create protocols as standards as normal, and implement them in userspace.
Edited 2016-08-30 19:20 UTC
Lennie,
I have an example of a layer violation for you: remember that old protocol SCTP that didn’t get deployed because of problems with routers and firewalls not supporting it? It has actually been deployed on a large scale in the last few years. It’s in your browser. It’s tunnelled over encrypted UDP, and it’s called WebRTC data channels.
Lennie,
I am in favor of officially supporting SCTP atop IP, but vendors (especially Microsoft) have been ignoring it for over a decade, so I remain skeptical that it will ever be supported as a proper IP protocol. Of course one can tunnel it over existing protocols like UDP, but you are right, it’s a layer violation, and ideally it would only be a temporary solution. This has legacy hack written all over it and will probably become permanent; future generations will wonder why the hell the technology was built the way it is, haha.
Edited 2016-08-30 21:51 UTC
You mentioned legacy. The touchscreen keyboard on my smartphone is a qwerty keyboard. So I think we have a lot of that already. 🙂
“Old SCTP…It’s in your browser. It’s tunnelled over encrypted UDP and it’s called WebRTC datachannels”.
In which case any possible vulnerabilities of old SCTP would be at the browser level. Being where it is now, it does not get the same level of oversight as TCP or IP.
Will try not to have any browser instances open when unneeded.
“…The solution ? Create protocols as standards as normal and implement them in userspace”.
:~)
Yes. As commented before on the Rust thread, this is a good path to anything new. On reversing the SUN philosophy: only the best of the breed should be allowed to slowly permeate, from hardened sandboxes down to the kernel, and beyond.
“…The only way to fix this in the long term is to add crypto to the TCP layer (or lower).”
An AI router can quickly establish an attack pattern without the need to complicate [and weaken] the TCP spec.
I feel uncomfortable about the idea of putting security below, because it would just move the problem to address spoofing.
[There should be additional mitigation strategies].
Open source versus proprietary is not the issue here, lack of proper auditing of code is, as is the standard in question. Both are issues that are independent of how the code is licensed.
Open source is not any more inherently secure, but it allows for a higher degree of assurance of security without requiring trust in a third party. The fact that bugs like this get publicly disclosed is a good thing, it means you can mitigate the impact before it gets fixed completely.
One of the caveats of building “secure” protocols on top of TCP is that TCP’s features (including reliability) are inherently insecure, and therefore vulnerable. In other words, even though the data bytes can be authenticated at the application layer (e.g. with SSL or SSH), the stream itself can be broken by spoofing.
Ideally, the reliability features of a tunnel should also be protected with crypto – all packets should be ignored/dropped if they are not authentic. It should not be possible to interfere with existing sessions using spoofed packets – even if an attacker has access to the legitimate TCP packets. While it’s possible to run TCP underneath IPSEC, this is usually reserved for VPN use cases, and IPSEC won’t work with many routers anyway – applications are usually built with ordinary TCP sockets instead, in accordance with the OSI model.
http://www.webopedia.com/quick_ref/OSI_Layers.asp
Of course it’s possible to use UDP instead of TCP, taking care to encrypt the session data, but then we end up having to re-engineer much of TCP, which is far from ideal.
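The idea of authenticating control information, not just payload bytes, can be sketched in a few lines. Below is a hypothetical frame format (the key, header layout, and names are all invented for illustration) in which an HMAC covers the sequence number and flags as well as the data, so a spoofed control packet simply fails verification and is dropped rather than resetting the session.

```python
import hashlib
import hmac
import struct

KEY = b"session-key-derived-at-handshake"  # hypothetical pre-shared key

def seal(seq: int, flags: int, payload: bytes) -> bytes:
    """Frame = header (seq, flags) + payload + HMAC over all of it,
    so the control information is authenticated, not just the data."""
    hdr = struct.pack("!IB", seq, flags)
    tag = hmac.new(KEY, hdr + payload, hashlib.sha256).digest()
    return hdr + payload + tag

def open_frame(frame: bytes):
    """Return (seq, flags, payload), or None if the frame is not authentic."""
    hdr, body, tag = frame[:5], frame[5:-32], frame[-32:]
    expect = hmac.new(KEY, hdr + body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        return None  # spoofed or corrupted: drop, unlike a bare TCP RST
    seq, flags = struct.unpack("!IB", hdr)
    return seq, flags, body
```

An off-path attacker who cannot produce the HMAC cannot forge a reset or inject data, which is exactly the property plain TCP control packets lack.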
Resilience was the foundational aspiration. ‘Security’ was not part of the concept, or was presumed to be already embedded in the message. Multi-hash reports between routers (through enforced alternative paths) should be axiomatic to TCP.
When you use TCP for HTTPS/SSH it’s not that bad; the only thing an attacker can do is reset the connection (as long as the server signature is verified properly, of course).
Also, TCP connections are already being replaced, ‘as we speak’. If you use Chrome to talk to, for example, google.com, it can already be using QUIC, which implements a TCP-like protocol with HTTPS-like security built in on top of UDP. In the IETF they are standardizing something similar, which will probably end up using TLS/1.3. If I’m not mistaken, Firefox already supports TLS/1.3, but I believe that is the crypto part and not the UDP part? I haven’t checked yet.
Soon you’ll end up with:
HTTP/2 over TLS/1.3 over UDP
Which will give you: “zero-round-trip connection establishment”
http://blog.chromium.org/2015/04/a-quic-update-on-googles-experimen…
Edited 2016-08-30 16:44 UTC
And this is why I stick with Windows. It is more secure than open source software. It is why I would never use an Android phone, as you get hacked left and right. Windows on the PC and iPhone on the phone are the best.