
WireGuard as a Kubernetes sidecar

Normally, if your VPN provider offers WireGuard services, you’d install the provider’s app and configure your application to use the network interface that the app creates. I’m running my server in Kubernetes, though, not on my desktop, so I needed to manually take the same steps a provider app would.

Sidecars

Containers in a Kubernetes pod share certain resources, and the “sidecar” pattern takes advantage of this: a multi-container pod runs a helper container alongside the main container. This helper can perform any number of tasks; for example, if the main container writes a log or a cache file to an emptyDir, the sidecar might perform log rotation or cache flushing to avoid hitting the emptyDir’s size limit. (I’ve actually implemented this exact solution to manage my local DNS resolver’s cache.)
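
As a sketch, that cache-flushing arrangement might look something like the following (the names, images, and timings here are hypothetical, not my actual setup):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app                 # main container; writes to the shared cache
      image: registry.example.com/app:latest
      volumeMounts:
        - name: cache
          mountPath: /var/cache/app
    - name: cache-flusher       # sidecar; periodically prunes old cache files
      image: alpine:3
      command: ["/bin/sh", "-c", "while true; do find /var/cache/app -type f -mmin +60 -delete; sleep 300; done"]
      volumeMounts:
        - name: cache
          mountPath: /var/cache/app
  volumes:
    - name: cache
      emptyDir:
        sizeLimit: 1Gi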

For this use case, the shared resources we’re interested in are the pod’s network interfaces.

WireGuard

WireGuard is implemented as a type of network interface. In fact, the implementation ships as a kernel module, and WireGuard interfaces are added via the normal ip command:

ip link add dev wg0 type wireguard
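
Note that because the implementation lives in the kernel, the wireguard module has to be available on the host kernel; for a pod, that means the node’s kernel, not the container image. WireGuard has been in mainline Linux since 5.6, and on the node you can check for it with:

modprobe wireguard     # no-op if the module is already loaded or built in
lsmod | grep wireguard # lists the module if it was loaded (built-in kernels won’t show it here)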

The WireGuard-specific settings are managed with the separate wg utility. For example, you can configure the interface to act as a client, a server, or both; your private key and each peer’s public key are specified this way as well (WireGuard authenticates peers with key pairs rather than passwords).
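
As an illustration, a client with a single server peer might be configured with something like this (the key file, public key, and endpoint are placeholders):

wg set wg0 private-key /etc/wireguard/privatekey \
  peer '<server public key, base64>' \
  endpoint vpn.example.com:51820 \
  allowed-ips 0.0.0.0/0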

Because the individual wg invocations can get a bit tedious, the utility can also read a configuration file and apply all of its settings in one shot. For example:

wg setconf wg0 /etc/wireguard/wg0.conf
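
The keys here are placeholders, but a setconf-compatible file from a provider looks roughly like this. (Note that setconf files carry only the WireGuard settings; addresses, MTU, and routes stay with ip, which is exactly what the script below does.)

[Interface]
PrivateKey = <your base64-encoded private key>

[Peer]
PublicKey = <the server’s base64-encoded public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0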

There may be some additional setup, depending on how you want traffic to be routed. Should all traffic be routed through the WireGuard interface, or only some? Or should routing be configured on an application-by-application basis?

Implementation

Since the wg0 interface is managed by the kernel and requires no long-running daemon, the “sidecar” label is applied here somewhat loosely; the actual implementation only adds an initContainer to create and configure the interface.

The container’s command looks like this:

# delete `wg0` if it already exists
if ip link show dev wg0 >/dev/null 2>&1; then
  ip link delete dev wg0
fi

# create and configure `wg0`
ip link add dev wg0 type wireguard
wg setconf wg0 /etc/wireguard/wg0.conf
ip address add "$(cat /etc/wireguard/wg0.address)" dev wg0
ip link set mtu 1420 up dev wg0

# route all traffic except for defined routes (below) through `wg0`
wg set wg0 fwmark 51820
ip route add default dev wg0 table 51820
ip rule add not fwmark 51820 table 51820
ip rule add table main suppress_prefixlength 0

# route all local traffic through `eth0`
ip route add 10.96.0.0/12 via 10.244.0.1 dev eth0 || true
ip route add 10.244.0.0/16 via 10.244.0.1 dev eth0 || true

This routing configuration is implemented via an alternate routing table. In general:

  • Encrypted packets emitted by wg0 itself are “marked” with the fwmark (so they can’t loop back into the tunnel)
  • The default route on an alternate routing table points through wg0
  • Packets that are not “marked” (i.e. ordinary traffic, rather than wg0’s own encrypted packets) are looked up in this alternate table
  • Any packet matching a main-table route with a prefix length greater than 0 is exempt from the above rule

Then, we are free to explicitly define more specific routes that use the CNI default gateway for local traffic. Because they have a prefix length greater than 0, they are exempt from the rule that forces packets through wg0.

This routing pattern, along with some others, is described in more detail in the WireGuard documentation: https://www.wireguard.com/netns/#improved-rule-based-routing
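
Once the rules are in place, you can sanity-check them from inside the pod with ip route get, which reports the route the kernel would actually pick for a given destination (the service IP below is just an example address inside the 10.96.0.0/12 range):

# public traffic should be routed via wg0...
ip route get 1.1.1.1

# ...while cluster traffic should still use eth0
ip route get 10.96.0.10

# list the policy rules created above
ip rule list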


Some things to note:

  • I used an Alpine image for this container, and BusyBox’s built-in ip command didn’t support some of the above commands, so I needed to install iproute2, along with wireguard-tools-wg for the wg utility.

  • /etc/wireguard is a volume pointing at a pre-configured Secret containing wg0.address and wg0.conf, both referenced in the script above (see the wiring sketch after this list).

    • wg0.address is the tunnel IP address assigned to you by your VPN provider; the script adds it to the wg0 interface
    • wg0.conf is the WireGuard config file (provided by your VPN provider)
  • If you’re using Kubernetes Pod Security Admission, the container must both run as root and have the NET_ADMIN capability to enable this low-level modification of network resources. For example:

    initContainers:
      - name: init-wireguard
        # snip
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add: ["NET_ADMIN"]
            drop: ["ALL"]
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
  • With the routing configuration, I wanted all traffic except packets destined for the local cluster to be routed through the WireGuard interface. This isn’t the most secure solution, as it can leave you vulnerable to DNS leaks. To mitigate this, I explicitly configured my applications to bind to the wg0 interface.

  • Your cluster’s default gateway may be different depending on your CNI plugin; I’m using Flannel, which uses the 10.244.0.0/16 subnet and has a default gateway of 10.244.0.1.
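
Putting those notes together, the wiring between the Secret and the initContainer might look roughly like this (the Secret name and image are illustrative, and the inline script is the one shown earlier):

spec:
  initContainers:
    - name: init-wireguard
      image: alpine:3           # illustrative; install iproute2 and wireguard-tools-wg on top
      command: ["/bin/sh", "-ec"]
      args:
        - |
          # ...the setup script shown above...
      volumeMounts:
        - name: wireguard
          mountPath: /etc/wireguard
          readOnly: true
      securityContext:          # as shown in the note above
        allowPrivilegeEscalation: false
        capabilities:
          add: ["NET_ADMIN"]
          drop: ["ALL"]
        runAsGroup: 0
        runAsNonRoot: false
        runAsUser: 0
  volumes:
    - name: wireguard
      secret:
        secretName: wireguard   # pre-configured Secret containing wg0.conf and wg0.address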


Now, all traffic from all other containers in the pod will be routed through your VPN provider’s WireGuard server! 🙌

Automation

A common pattern with sidecars is to run a mutating admission webhook: a custom admission controller receives pod-creation events over a webhook and patches the pod spec to inject the sidecar automatically.
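
For reference, the registration half of that pattern is a MutatingWebhookConfiguration pointing at a webhook Service; the server behind it (which actually patches the pod spec) is the larger part of the work. Everything named below is hypothetical:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: wireguard-injector
webhooks:
  - name: inject.wireguard.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      # in practice this also needs a caBundle for the webhook’s TLS cert
      service:
        name: wireguard-injector
        namespace: kube-system
        path: /mutate
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]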

Since I only needed WireGuard injected into one Deployment, I decided it wasn’t worth the hassle, but I’d highly recommend doing some research in this area if you’d like to use WireGuard in multiple services across your cluster!

@ezra