This morning, WireGuard founding developer Jason Donenfeld announced a working, in-kernel implementation of his WireGuard VPN protocol for the FreeBSD 13 kernel. This is great news for BSD folks, and for users of BSD-based routing appliances and distros such as pfSense and OPNsense.
Not something I personally use, but good news for those who do.
You’re behind the curve on that one. They’re not going to release it with 13.0, instead dropping it for now, to be re-added at a later date. See this thread: https://lists.zx2c4.com/pipermail/wireguard/2021-March/006504.html
ajs124,
Interesting. Funnily enough, I took issue with the WireGuard code for hardcoding things and its lack of future-proofing.
http://www.osnews.com/story/131452/wireguard-gives-linux-a-faster-more-secure-vpn/
The original WireGuard FreeBSD kernel module was written by someone from Netgate who didn’t bother getting any input from anyone involved with WireGuard.
The new WireGuard FreeBSD kernel module has been written by the original WireGuard author, Jason Donenfeld; Matt Dunwoodie, who worked on the OpenBSD implementation; and FreeBSD core developer Kyle Evans. This implementation has a much better provenance than the last one.
The hardcoding isn’t as bad a thing as it seems.
What they are doing is reducing complexity, and the plan is: want a new version? Then that’s a separate piece of code, with a protocol that is not compatible with the old one.
That way complexity is reduced in a big way.
I agree with your other comments in the old post; I think out of the kernel is easier to deal with. You could maybe even run it unprivileged (for example, dropping privileges after start).
OpenBSD does this for their routing software using privilege separation: one process talks to the kernel and only receives simple messages from another process that talks to the network, etc.
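Roughly, the skeleton looks like this minimal sketch (illustrative only, not OpenBSD’s actual code; the socketpair plumbing and the “nobody” user are just assumptions for the example):

```c
/* Privilege-separation sketch: the parent keeps the privileged side and
 * talks to the kernel; the child drops to an unprivileged user before it
 * ever handles network-facing work, and can only send simple messages
 * back over a socketpair. */
#include <sys/socket.h>
#include <sys/types.h>
#include <pwd.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }

    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {                       /* child: unprivileged side */
        close(sv[0]);
        struct passwd *pw = getpwnam("nobody");  /* assumed unprivileged user */
        if (pw == NULL || setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
            perror("drop privileges");
            _exit(1);
        }
        const char req[] = "need-route";  /* only simple requests go upward */
        write(sv[1], req, sizeof(req));
        _exit(0);
    }

    close(sv[1]);                         /* parent: privileged side */
    char buf[64];
    ssize_t n = read(sv[0], buf, sizeof(buf));
    if (n > 0)
        printf("privileged parent got request: %.*s\n", (int)n, buf);
    return 0;
}
```

Even if the network-facing child is compromised, the attacker only gets the unprivileged process and the narrow message interface, not the kernel-facing side.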
Lennie,
This makes sense to do to an extent, but there are people who want/need more security & future-proofing today, and hard-coding things like cryptographic key sizes is a classic case of planning for the present and not being future-proof. Sure, technically the problems with V1 can be fixed with V2, but the net result will add even more complexity and bloat than having done it right in the first place. Realistically speaking, consumers are going to end up with phones, routers, IoT devices, etc. that will be stuck with the old manufacturer kernels indefinitely, and that’s going to hold back deployment of the V2 standard potentially for years or even decades. This is why it’s so important to have future-proofing up front.
IMHO the BSDs have a better reputation for planning and engineering than the Linux community and could have done a better job coming up with something for long-term stability; maybe I’m biased, though.
“This makes sense to do to an extent, but there are people who want/need more security & future-proofing today, and hard-coding things like cryptographic key sizes is a classic case of planning for the present and not being future-proof. Sure, technically the problems with V1 can be fixed with V2, but the net result will add even more complexity and bloat than having done it right in the first place.”
Yes, it adds some bloat, but it simplifies the code a lot. There is no ambiguity when doing code review. More complex implementations usually run to tens of thousands of lines of code or even more, while the simpler ones are in the thousands.
Which is something the people building cryptographic code and protocols want to do after the experience with SSL/TLS and, obviously, IPsec.
And it’s the philosophy behind how TLS 1.3 was developed as well.
The trick is to release V1 and then start work on V2 immediately after it. They actually have a pretty good idea of what the state of the art in cryptography will be in four years or so.
Again, this isn’t my idea; it comes from the experience of the people building the protocols and code, etc.
Remember: most of the security issues in cryptographic network protocols are in the code, pretty much never in the protocols/design. So you end up shipping updates for the code to fix them anyway (aka security patches).
Lennie,
I’ll tell you why I disagree: I’ve worked on so many projects where the sum of hard-coded implementations over time gets to be more bloated and confusing than a single, more flexible implementation would have been. Hard-coding is conceptually simple, but in the long run it’s usually better to have a future-proof design. Seriously, I find this to be one of the most frustrating things about maintaining old code, since it increases the spaghetti-code factor.
And even if the kernel developers wanted to be lazy, it might have been OK to reject unsupported sizes at this time, but that is NOT a good reason not to have a future-proofed protocol up front, especially when things like key sizes are easily anticipated.
It’s like Intel’s descriptor tables, where they hard-coded 24-bit structures, and by the time they went to 32-bit they ended up throwing the additional bits anywhere they could fit them. Did Intel need to implement all 32 bits up front? Not necessarily, but they definitely should have anticipated 32 bits for the future, which ended up being needed the very next CPU generation. Instead they really messed up the future with obnoxious hacks that wouldn’t have been necessary. Fortunately, AMD did a better job of future-proofing with 64-bit: I believe only the lowest 48 bits were implemented, but they didn’t do something stupid like hard-coding the design around 48 bits; they made it possible to grow into 64-bit in the future without having to increase complexity. When you want to engineer something for the long term, future-proofing is the responsible thing to do. WireGuard isn’t an exception.
SSL/TLS is extremely complex and has tons of bloat, and I do not think it belongs in the kernel. But that should not be used to justify a lack of future-proofing.
Consider what I said, though: it’s not all or nothing. You can implement hard-coded functionality today while still accommodating future requirements. WireGuard failed to do this, which leads to more complexity down the line.
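To make that concrete, here’s a hypothetical sketch of the middle ground (the header layout and names are invented for illustration, not WireGuard’s actual wire format): reserve an algorithm identifier and key-length field up front, implement exactly one suite today, and reject everything else cleanly.

```c
/* Hypothetical versioned header: only one cipher suite is implemented
 * today, but the wire format leaves room to add more without a protocol
 * redesign. All names and layout are invented for this example. */
#include <stdint.h>
#include <stdio.h>

struct msg_header {
    uint8_t  version;      /* protocol version, fixed at 1 for now */
    uint8_t  cipher_suite; /* 1 = the only suite implemented; rest reserved */
    uint16_t key_len;      /* key length in bytes, not baked in as 32 */
};

enum { SUITE_CHACHA20POLY1305 = 1 };

static int validate_header(const struct msg_header *h) {
    if (h->version != 1)
        return -1;  /* unknown version: drop */
    if (h->cipher_suite != SUITE_CHACHA20POLY1305)
        return -1;  /* reserved but unimplemented suite: drop, for now */
    if (h->key_len != 32)
        return -1;  /* only 256-bit keys accepted today */
    return 0;
}

int main(void) {
    struct msg_header h = { 1, SUITE_CHACHA20POLY1305, 32 };
    printf("header %s\n", validate_header(&h) == 0 ? "accepted" : "rejected");
    return 0;
}
```

The implementation stays just as simple as the hard-coded version, but a V2 suite becomes a new enum value and a new branch instead of an incompatible protocol.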
Of course nothing I say can be consequential since the choices have already been made, oh well.
WireGuard is still being developed. Compared to something like OpenVPN it is very barebones, and it’s still working toward feature parity with older VPNs.
For instance, it doesn’t support DHCP, and client IPs have to be managed manually. This is fine for server networks, but for clients it would be better if DHCP pools with DDNS were an option.
There are lots of other things, like not being able to read keys from files, and not having a mechanism for key rotation.
Granted, most of these are problems with the tooling, and not necessarily problems with the protocol. Tailscale has solved quite a few of these problems (https://tailscale.com/), but that’s not a FOSS product.
I think we need to remember this is a v1.0 product, and the bare minimum was implemented. It’s possible it will look more like OpenSSH in the future; it already acts quite a bit like OpenSSH.
When is v2 expected, though? I’d have to look up the exact specs of each crypto algo, but if they survive for a decade, that’s a good run. With all crypto, it’s about surviving until the info isn’t useful. All crypto is going to get cracked; it’s just a matter of when.
Flatland_Spider,
To me this is exactly why we need future-proofing. Everyone with crypto experience understands that crypto needs change, sometimes without much notice. Consider that we are able to upgrade the crypto in SSH, use stronger keys, etc., without requiring a new version of the SSH protocol. I am pretty sure WireGuard is going to need this flexibility anyway, so why not do it right and future-proof it the first time around?
I’m actually OK with the developers only wanting to implement one kind of encryption at this stage, but I think it would have been reasonable to build the protocol and interfaces to anticipate the need for different crypto in the future without requiring a new protocol or design.
Say V1 of the protocol had actually supported multiple crypto algorithms and keys; I honestly don’t believe anybody here would have objected to that future-proofing. And that’s the essence of my point: making it future-proof would not have been controversial.
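The kind of flexibility I mean is essentially SSH’s list-based negotiation, which boils down to something like this toy sketch (purely illustrative; the algorithm names are made up): each side advertises an ordered preference list, the first mutual entry wins, and adding a new algorithm later requires no new protocol version.

```c
/* Toy algorithm negotiation in the spirit of SSH's key-exchange lists:
 * pick the first client preference the server also supports. */
#include <stdio.h>
#include <string.h>

static const char *negotiate(const char *client[], int nc,
                             const char *server[], int ns) {
    for (int i = 0; i < nc; i++)
        for (int j = 0; j < ns; j++)
            if (strcmp(client[i], server[j]) == 0)
                return client[i];
    return NULL;  /* no common algorithm: fail closed */
}

int main(void) {
    const char *client[] = { "cipher-v2", "cipher-v1" };  /* prefers new crypto */
    const char *server[] = { "cipher-v1" };               /* older peer */
    const char *chosen = negotiate(client, 2, server, 1);
    printf("negotiated: %s\n", chosen ? chosen : "none");
    return 0;
}
```

New deployments get the stronger suite immediately, while older peers keep working until they’re retired.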
As mentioned, any feature you add is its own risk; just see the first bug:
https://arstechnica.com/gadgets/2021/03/openssl-fixes-high-severity-flaw-that-allows-hackers-to-crash-servers/
Notice how TLS 1.3 has no renegotiation, etc.
Lennie,
To be fair, I’ve specifically said that OpenSSL is way too complex for use in the kernel. I don’t mind nailing some things down for simplicity’s sake, but it doesn’t change my mind about the importance of future-proofing.
For a user-space WireGuard implementation, there is BoringTun from Cloudflare. https://github.com/cloudflare/boringtun
Pairing it with a user-space network stack would be interesting. https://github.com/F-Stack/f-stack
Alfman:
We do need future-proofing, but historically that hasn’t been coupled with disabling the old insecure options, leaving people to continue using less secure connections than they could be. I have no suggestions on how to make products future-proof while also gracefully cutting off support for older insecure protocols; that sounds hard. I understand why WireGuard did what it did.
Bill Shooter of Bul,
I would say that the ability to rapidly switch to new crypto without changing our protocols has saved us in the world of HTTPS: we could just switch to different crypto without needing a new protocol or new code to be developed.
I concede that many admins will neglect to fix things anyway. Still, I think we ought to agree that a protocol capable of switching algorithms is superior, even though we are flawed humans who might need more prodding, haha. Having a mono-culture that might take weeks to fix and redeploy (assuming one’s vendor even provides an update at all) is much more dangerous than being able to flip a configuration switch and be up and running with alternative crypto in less than an hour.
OK, but my criticism was more fundamental than that: let’s set the bar higher than cryptographic mono-culture protocols for the kernel. The real irony is that the Linux kernel already supports more diverse crypto today; WireGuard itself is the limiting factor.
I’ve implemented things like this myself, and I agree with those who say an SSL stack is way too complex to have in the kernel. But since they had the opportunity to start from scratch, IMHO it’s regrettable that they didn’t do a better job of future-proofing and went with a mono-culture design from the outset, especially considering the cryptographic resources are already in the kernel. With just a bit more planning, it could have been something I would have gotten behind without these reservations.
There are those who say this is just V1 and they can improve it with V2, and that is completely true. As an end user, maybe that’s good enough. As a developer, however, it’s disappointing, because this failure to plan ahead leads to more bloat and complexity down the road. It would have been one thing if these changes couldn’t have been anticipated, but in this case we’d be lying to ourselves, because qualified devs knew (or should have known) exactly the kinds of problems that come up with crypto. That we still failed to plan for it is…ugh.
Anyways, I’m surprised not a single person agrees with me. Oh well, that’s ok, I’ll take one on the chin and resign to my corner in defeat, haha.
Correct. Part of the problem was that the new implementation was based on 13, which is in its RC phase right now, rather than CURRENT, and everyone felt it would be better to rebase the code on CURRENT and then merge it, which is the normal workflow.
The expectation is that the new WireGuard module will be part of 13.1.
Also, there will probably be a port to build the new module in the near future. FreeBSD’s kernel ABI is very stable, so out-of-tree modules are less of a problem than with, say, Linux.
Oh good, it wasn’t my imagination. There really are differences between the FreeBSD WireGuard implementation (OPNsense) and the Linux implementation (OpenWrt).
The current FreeBSD WireGuard implementation can be weird and has to get kicked when changes are made. It works fine once it’s connected, but it’s not as smooth as the Linux implementation.
https://twitter.com/mjg59/status/1371961576308142080?s=20
Seriously? LOL Netgate are a bunch of clowns. LOL
Here is Scott Long, presumably someone in the C-suite, whining about how Jason wrote a better version and how he’s attacking Netgate by doing so.
https://www.netgate.com/blog/painful-lessons-learned-in-security-and-community.html
https://www.reddit.com/r/PFSENSE/comments/m5shda/wireguard_in_freebsd_13/gr6adaq/