Many older VPN offerings are “way too huge and complex, and it’s basically impossible to overview and verify if they are secure or not,” says Jan Jonsson, CEO of VPN service provider Mullvad, which powers Firefox maker Mozilla’s new VPN service.
That explains some of the excitement around WireGuard, an open source VPN software and protocol that will soon be part of the Linux kernel—the heart of the open source operating system that powers everything from web servers to Android phones to cars.
I’ve always been wary of the countless VPN services littering YouTube and podcast sponsor slots, since you can never be quite sure if you can trust them. Luckily I don’t need a VPN, but I’m glad Linux is getting it built-in.
The VPN services provide you with servers to connect to that serve as the exit point for your data. WireGuard can’t magically replace that.
The “way too huge and complex” VPN offerings that WireGuard provides an alternative to are the software those services run on top of, primarily OpenVPN. So until the services start to offer support for it, WireGuard will only be useful for VPNs where you control both ends of the connection (e.g. securely connecting to your home LAN from somewhere else).
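For the control-both-ends case, the whole configuration fits on one screen. A minimal sketch for the roaming side, with placeholder keys, addresses, and hostname (none of these come from a real deployment):

    # /etc/wireguard/wg0.conf on the machine that roams (placeholder values throughout)
    [Interface]
    PrivateKey = <this machine's private key>
    Address = 10.0.0.2/24

    [Peer]
    PublicKey = <home endpoint's public key>
    # hypothetical dynamic-DNS name for the home router
    Endpoint = home.example.org:51820
    # tunnel subnet plus the home LAN
    AllowedIPs = 10.0.0.0/24, 192.168.1.0/24
    # keeps NAT mappings alive while the link is idle
    PersistentKeepalive = 25

The home end carries a mirror config with a ListenPort and this machine’s public key as its peer.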
As for not trusting the services, TorrentFreak surveys VPN services and posts a run-down of the best anonymous VPNs every year. Here’s the 2019 one.
Some VPN service providers have added WireGuard support as part of their offerings, e.g. https://www.azirevpn.com/wireguard
That’s a very simplistic view of WireGuard. It has more in common with mesh VPNs like ZeroTier, Tinc, and NeoRouter than with traditional point-to-point VPNs.
People really shouldn’t trust VPN services. Quite a few are run by shadowy organizations with potential ulterior motives; take NordVPN’s ties to data mining company Tesonet, for example.
I’m always torn about the utility of a VPN: does it keep the nice people safe, or the bad people hidden?
My gut feeling is to work like the high-security industries, central/reserve banks, intelligence organisations and so on, where nothing of extremely high security travels over the internet.
You will not get better security; you just add new, complicated, pwnable parts to your security chain, and complexity makes things insecure. This trend of using a VPN for so-called safe browsing is a strange one.
VPN stands for Virtual Private Network, and was never intended for that usage. The web has never been private.
All it really does is hide your ISP-assigned IP address and replace it with a VPN-provided one. That is fine if you want to avoid blocking (or warning letters) for using BitTorrent or whatever, but it is otherwise pretty useless, as there are numerous other ways you can be identified through all the data leakage from browsers, operating system telemetry, etc. It also doesn’t stop the VPN provider from being forced by a government to reveal the identity behind an IP, just as ISPs are, even when services claim not to keep logs.
WireGuard really is awesome, and it is a significant upgrade to the current VPN landscape.
There are still features it needs, like DHCP support, but it’s usable right now. I’m running my own personal VPN on a VPS, and it’s been great.
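For anyone wondering what “running my own personal VPN on a VPS” amounts to in practice, it is roughly this (a sketch; wg0 and the config path are just the usual conventions):

    # generate a key pair for this machine, keeping the files private
    umask 077
    wg genkey | tee privatekey | wg pubkey > publickey
    # bring up the tunnel described in /etc/wireguard/wg0.conf
    wg-quick up wg0
    # confirm the peer has completed a handshake and traffic is flowing
    wg show

Repeat the key generation on each peer, exchange the public keys in the configs, and that is essentially the entire setup.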
So it sounds like it’s ideal for me to give it a try for a point-to-point link between my NAS and my remote location.
You would probably be better off with a PiVPN than turning your NAS into a death star hosting pattern and exposing it directly to the internet.
Definitely. It should work well for that.
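If you do terminate it on the NAS itself, the listening side is only a few lines; a sketch with placeholder values, using the conventional UDP port 51820:

    # /etc/wireguard/wg0.conf on the NAS (placeholder values)
    [Interface]
    PrivateKey = <NAS private key>
    Address = 10.0.0.1/24
    ListenPort = 51820

    [Peer]
    PublicKey = <remote machine's public key>
    # only the remote peer's tunnel address is routed back to it
    AllowedIPs = 10.0.0.2/32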
Ultimately, I’m going to use it to build a management network between various VPS machines and link several locations together. I just need to work out the firewall rules for the routers at the sites. This would be for home and work.
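For what it’s worth, the router-side firewall rules for that kind of setup tend to be short. Assuming the conventional UDP port 51820 and an interface named wg0 (both are just common defaults, adjust to taste), an iptables sketch would be:

    # let WireGuard's UDP traffic reach the tunnel endpoint
    iptables -A INPUT -p udp --dport 51820 -j ACCEPT
    # allow packets arriving from the tunnel to be forwarded onward
    iptables -A FORWARD -i wg0 -j ACCEPT
    # allow packets heading into the tunnel to be forwarded as well
    iptables -A FORWARD -o wg0 -j ACCEPT

plus whatever static routes each site needs toward the other tunnel subnets.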
WireGuard is just the transport layer. Even OpenVPN doesn’t have a DHCP server; it just uses dnsmasq.
tidux,
An added note: although OpenVPN can use an external DHCP daemon, it doesn’t require one, since OpenVPN provides a “similar” mechanism of its own that is not based on DHCP.
https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
This is the configuration my own network uses, and it makes it easy for OpenVPN clients to get addresses from their own IP range without having to mess around with a separate DHCP server (or client).
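Concretely, the mechanism is the address pool built into OpenVPN’s server mode; an excerpt like the following (the addresses are illustrative, not taken from a real network) is all it takes:

    # server.conf excerpt
    dev tun
    # hand each connecting client an address from this pool; the "server"
    # directive is a convenience macro that sets up mode server, tls-server,
    # the ifconfig-pool and the related pushed options
    server 10.8.0.0 255.255.255.0
    # remember which client got which address across restarts
    ifconfig-pool-persist ipp.txt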
Anyone taking bets on just how long it’ll be before this company goes belly-up or otherwise vanishes from the face of the earth, along with its subscribers’ money, like so many of these other outfits have done?
The WireGuard protocol is now part of the Linux kernel; there is no dependency on any external service unless you want one.
The company behind WireGuard could go “belly-up”, but the software is open source and has code and funding provided by many other people and organizations. I have no idea what “subscribers’ money” you are talking about…
Jason A. Donenfeld is the person behind the project. ZX2C4 is Jason A. Donenfeld. Edge Security is the company Jason A. Donenfeld works for.
There is no company; there is only a person.
OK? I pointed out that doesn’t matter. LOL
Here is a list of the companies that are making significant yearly monetary contributions: https://www.wireguard.com/donations/
I have worked on similar projects before, but I worry that this approach of hardcoding algorithms potentially limits the protocol’s ability to adapt to the future. This is why existing protocols are more complex. Obviously complexity is not ideal, but in developing software for the real world there’s a tradeoff between simplicity and future-proofing (say, being able to transition from a vulnerable algorithm to an uncompromised one). So while it may be tempting to hardcode everything to avoid complexity in WireGuard, that likely comes at the expense of future-proofing and an increased risk of having to tack on breaking changes later.
Another point that warrants discussion is whether these services even belong in the kernel. Nearly everyone, including Linus, thinks the kernel is too bloated; how monolithic do we want the kernel to become? Obviously Linux is “modular”, which allows it to load code into the kernel dynamically, but which side of the kernel barrier should these things go on? It’s becoming somewhat arbitrary, and technically a lot of OS functionality could exist in either the kernel or userspace. For example, SSH is an extremely useful security protocol that could be implemented in userspace or kernel space, and it will likely remain far more popular than WireGuard will ever become. Is there any logical reason the kernel should include one extremely niche protocol over a more widespread standard?
I mention these things for discussion’s sake. Personally my opinion is that network facing daemons ought to be fairly detached from kernel development. In other words, I should be able to upgrade the software that provides a network service without having to obtain a new kernel. Although this is somewhat tangential to the discussion, just imagine you want to connect your embedded ARM device to a network service that requires version 2 of a protocol that’s hardcoded in the kernel, so instead of just updating the software in userspace yourself, you are forced to wait for your ARM device vendor to release a new kernel. I’ve been in this boat many times: being dependent upon manufacturers for kernel support is awful, and IMHO hardcoding protocols into the kernel could exacerbate this.
I would think having this in kernel space would enhance security compared to VPN solutions that operate in userspace. But then why not have SSH in kernel space as well?
As to future protocol changes, I think the Linux kernel maintainers will make sure it gets upgraded when needed, with some sort of backwards compatibility solution in place.
pepa65,
Well, the problem is that in practice, unless you start with a future-proof design up front, you often end up unnaturally cramming things wherever they fit, which can be much more convoluted than planning for the future in the first place. It used to be normal for the industry to build computers with virtually no future-proofing. As an example, Intel hardcoded a structure in the 286 with no room for future memory expansion whatsoever, which became immediately obsolete. And while technically we can say that Intel went on to “upgrade it as needed” with the 386, the new structure was terribly awkward and more complicated than if they had future-proofed the design in the first place.
https://en.wikipedia.org/wiki/Global_Descriptor_Table
See how some fields are split up and stuffed wherever they fit; this is indicative of the lack of future-proofing that permeated those early years.
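To make that concrete, here is roughly what a 386 segment descriptor looks like when written out as a C struct (a sketch based on the documented layout, not code lifted from any particular kernel):

    #include <stdint.h>

    /* One 8-byte 386 segment descriptor as stored in the GDT. The wider
     * 32-bit base and 20-bit limit had to be squeezed into whatever bytes
     * the original 286 layout left unused, so both end up in pieces. */
    struct gdt_descriptor {
        uint16_t limit_low;        /* limit bits 0..15 */
        uint16_t base_low;         /* base bits 0..15 */
        uint8_t  base_mid;         /* base bits 16..23 */
        uint8_t  access;           /* type, privilege level, present bit */
        uint8_t  limit_high_flags; /* limit bits 16..19 in the low nibble,
                                      granularity/size flags in the high nibble */
        uint8_t  base_high;        /* base bits 24..31 */
    } __attribute__((packed));

The base is scattered across three non-adjacent fields and the limit across two, purely because backwards compatibility with the 286 layout dictated where the new bits could go.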
Back then computers were ostensibly simple, but this also caused countless struggles for computer owners over the years, as those of us who lived through that period can attest. Fortunately, today we are much better about future-proofing, and we rarely hit the type of limits that plagued early hard drives, BIOSes, video adapters, etc. So while it’s often easier to start with hardcoded structures for early prototyping, sometimes that can leave you with very clumsy code and workarounds down the line. If the future implementation is forced to be backwards compatible, the result can be a kernel standard that’s the worst of both worlds (both complex and inflexible). So with this in mind, the simplest hardcoded implementation today may turn out to be more complicated and clumsy in the future, and IMHO this is why future-proofing is important: it reduces complexity later.
Anyways, just my 2c.
I wouldn’t worry about the protocol’s “ability to adapt to the future.” It’s something made to work *now*. When it’s no longer sufficient, someone (else?) will put together a replacement.
gus3,
Haha, well that’s one way to look at it.
The thing is, some developer like me inevitably ends up having to maintain and support these legacy code bases. I think maybe you are just looking at it as something that can be developed for the present and then thrown away and replaced in the future, but this is not the way protocols and code typically evolve in practice. The main problem won’t be coming up with something more flexible in the future, but doing so in a way that satisfies backwards compatibility with the existing code and protocol; this is where the ugly workarounds and code start to come into play.
In my experience, two of the main causes of complex/confusing code are 1) inexperienced developers who were themselves confused at the time they wrote the code and 2) an ad hoc development process that lacked a strategic plan and left a lot of evolutionary baggage in its wake. I’ve reviewed parts of the WireGuard code and I can comfortably cross off #1, but for #2 I can see that he does take shortcuts with hardcoded structures and crypto buffer sizes in the protocol that will clearly be problematic in the future.
I understand that you don’t care, but the end users who will build networks around this kernel protocol & feature will be dependent on this “legacy” V1 functionality and almost certainly will expect a level of compatibility across future linux kernel versions. Ergo, we’ve created the very conditions that produce the most evolutionary baggage. 🙁
The protocol is versioned and the Linux kernel userspace ABI is very stable, so you can just drop a new kernel and new userspace tools on a machine to get a newer protocol version if it comes to that. This is only an issue for people wishing to use WireGuard in combination with out-of-tree binary kernel modules, but really, who uses those anymore? DKMS is a joy to use by comparison.
tidux,
The protocol may be versioned, but having to support multiple protocol structure versions is precisely the sort of evolutionary baggage I’m referring to. Rather than having to support a multitude of structures under conditional code like “if (version==1) /* use v1 structure */ else if (version==2) /* use v2 structure */ …”, it can be better to start with a more flexible structure that can be reused. For example, hardcoding specific key sizes is extremely short-sighted IMHO (see the sketch below). Oh well, I’m just pointing out the negative consequences of not future-proofing, but it is what it is, haha.
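To illustrate the sort of thing I mean, here is a hypothetical sketch (not WireGuard’s actual wire format): a fixed-size field bakes today’s key size into every future version, while a small length-prefixed field absorbs algorithm changes without needing a second structure:

    #include <stdint.h>

    /* Hardcoded: the key size can never change without defining a whole
     * new message layout and carrying both forever. */
    struct handshake_v1 {
        uint8_t public_key[32];
    };

    /* Length-prefixed (TLV-style): a later version can carry a different
     * algorithm or key size inside the same structure. */
    struct handshake_flexible {
        uint16_t key_type; /* which algorithm this key belongs to */
        uint16_t key_len;  /* number of key bytes that follow */
        uint8_t  key[];    /* flexible array member, key_len bytes */
    };

The few extra bytes of framing are the price of not having version-conditional parsing code scattered everywhere later.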
I want to be clear: when I was talking about version compatibilities, that was really directed at what gus3 said above.
The point being that I didn’t want him to trivialize the need for compatibility between today’s versions and future replacements, since users will be running different versions of Linux and will still expect to be able to interconnect (this may already be obvious to you, but I wanted to make it explicit).