WireGuard Notes

When I was first exposed to WireGuard, it took a while before its simple-yet-powerful nature started to dawn on me. By "simple" I mean simple from an end-user's perspective; the implementation, while supposedly a few orders of magnitude less hairy than e.g. OpenVPN or IPsec, isn't something that I can claim to have grasped. But the ascetic, no-bells-and-whistles, do-one-thing-well approach is something that resonates with me, and it got me to invest some time playing with the thing to see how it works in some common scenarios.

Getting packets to travel from A to B to C and back doesn't happen automagically with WireGuard. In fact there's no magic going on behind the scenes to begin with, which can be A Good Thing™ or not, depending on circumstances. The only thing WireGuard provides is making packets flow between the two ends of a link in a secure manner; everything else is left to be taken care of by whatever means is appropriate. For setting things up there's the wg program, which helps with key generation and with setting up the WireGuard interface, and the wg-quick script, which does a bit more and sets up the interfaces, peers and routes based on an INI-format file. The latter also ships with a handy systemd service for running things permanently. These tools can be installed on Debian-like systems by saying apt install wireguard-tools. Getting the thing running on Raspberry Pis requires a few extra steps, see here.
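
For example, a keypair for a peer can be generated with wg like so; the private key goes into the peer's own [Interface] section and the public key into the other end's [Peer] section:

umask 077
wg genkey > privatekey
wg pubkey < privatekey > publickey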

Before going into the example configuration, a refresher about configuring cryptokey routing. When peers are configured, the "allowed IPs" settings for the peers mean that:

  • when a packet is sent to the WireGuard interface (e.g. wg0), the packet gets passed to the peer that has a prefix matching the packet's destination address
  • which means that the prefixes can't overlap between the peers
  • when a packet is received from a peer, if the peer has a prefix matching the packet's source address, the packet is allowed in
  • when either sending to or receiving from a peer, if no matching prefix is found for a packet, the packet is dropped
Or, as it says in the cryptokey routing section linked to above:
In other words, when sending packets, the list of allowed IPs behaves as a sort of routing table, and when receiving packets, the list of allowed IPs behaves as a sort of access control list.
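
As a concrete example, take the first peer from the server configuration below. After

wg set wg0 peer D+GcHTk8uRiggEj79IhbbsLWHSdZynYjUVPWcP8aJFg= allowed-ips 10.1.1.11/32,192.168.1.0/24

a packet sent to wg0 with destination 192.168.1.5 gets encrypted to that peer, and an incoming packet from that peer is only accepted if its source address falls within 10.1.1.11/32 or 192.168.1.0/24.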

Connecting two sites, both behind NAT

This example connects two sites that are both behind NAT, which requires that there's a publicly accessible host running in between:

+-----------+      +-----------+       +-----------+
|           |      |           |       |           |
|    NAT    +------+ WireGuard +-------+ Network 1 |
|           |      |           |       |           |
+-----+-----+      +-----------+       +-----------+
      |
      |
      |           +-------------+
      |           |             |
      +---------->+ example.com +<-----------+
                  |             |            |
                  +-------------+            |
                                             |
                                             |
+-----------+      +-----------+       +-----+-----+
|           |      |           |       |           |
| Network 2 +------+ WireGuard +-------+    NAT    |
|           |      |           |       |           |
+-----------+      +-----------+       +-----------+

Suppose that the public host's network is 10.0.0.0/24, Network 1 is 192.168.1.0/24, Network 2 is 172.16.1.0/24, the WireGuard network is 10.1.1.0/24, and that all hosts on both behind-NAT networks should see each other.

The WireGuard hosts on the behind-NAT networks connect to example.com:51820, which has the following configuration:

[Interface]
Address = 10.1.1.1/32
ListenPort = 51820
PrivateKey = kKELbYxqmwHGUyjdHiVhQ/lzyiLep2kLgAocLF4CR3Q=

[Peer]
PublicKey = D+GcHTk8uRiggEj79IhbbsLWHSdZynYjUVPWcP8aJFg=
AllowedIPs = 10.1.1.11/32,192.168.1.0/24

[Peer]
PublicKey = up9LDZjYw8/LHH29ZQdp7Mg9bB+LIE7T4OsYLlEXLng=
AllowedIPs = 10.1.1.12/32,172.16.1.0/24

WireGuard host on Network 1 (192.168.1.0/24):

[Interface]
Address = 10.1.1.11/32
PrivateKey = 4DQYFpL2kkVd/rjEYLTES8Ah6K2BMOrH504TXRQyv0E=
Table = off
PostUp = ip -4 route add 10.1.1.0/24 dev %i
PostUp = ip -4 route add 172.16.1.0/24 dev %i

[Peer]
PublicKey = 2LbLqgg0hGjsQ+Y15l+mPhEtGN53Uhvzj8n9dpxVqDQ=
AllowedIPs = 10.1.1.0/24,192.168.1.0/24,172.16.1.0/24
Endpoint = example.com:51820
PersistentKeepalive = 25

WireGuard host on Network 2 (172.16.1.0/24):

[Interface]
Address = 10.1.1.12/32
PrivateKey = CPZBHHLywkMqgW70MIgnvJRculKKGyYaBP7rIUJbpXs=
Table = off
PostUp = ip -4 route add 10.1.1.0/24 dev %i
PostUp = ip -4 route add 192.168.1.0/24 dev %i

[Peer]
PublicKey = 2LbLqgg0hGjsQ+Y15l+mPhEtGN53Uhvzj8n9dpxVqDQ=
AllowedIPs = 10.1.1.0/24,192.168.1.0/24,172.16.1.0/24
Endpoint = example.com:51820
PersistentKeepalive = 25

Table is set to off and the routes are added manually in PostUp because otherwise wg-quick would also set up a route for the local network's traffic, since the local prefix is included in AllowedIPs.
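
To illustrate, without Table = off, wg-quick on the Network 1 host would add a route for each AllowedIPs prefix, roughly:

ip -4 route add 10.1.1.0/24 dev wg0
ip -4 route add 172.16.1.0/24 dev wg0
ip -4 route add 192.168.1.0/24 dev wg0

and that last route would steer the local LAN's traffic into the tunnel, which is exactly what the manual PostUp routes avoid.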

All three hosts should enable forwarding:

sysctl -w net.ipv4.ip_forward=1
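
To make forwarding survive reboots, the setting can also be persisted via sysctl configuration (the file name here is arbitrary):

echo 'net.ipv4.ip_forward = 1' >/etc/sysctl.d/99-forward.conf
sysctl --system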

In order to route traffic to WireGuard from other hosts in the networks, those hosts need a route. If fiddling with the gateway isn't an option, the route needs to be set on each host separately, for example (assuming the WireGuard host on each network is at .1.10):

# Network 1
ip -4 route add 10.1.1.0/24 via 192.168.1.10
ip -4 route add 172.16.1.0/24 via 192.168.1.10

# Network 2
ip -4 route add 10.1.1.0/24 via 172.16.1.10
ip -4 route add 192.168.1.0/24 via 172.16.1.10

Then, supposing that each of the configurations is in /etc/wireguard/wg0.conf, one can say wg-quick up wg0. To make the WireGuard configuration come up automatically at boot, the systemd service should be enabled:

systemctl enable wg-quick@wg0.service
systemctl daemon-reload
systemctl start wg-quick@wg0.service
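
Whether traffic actually flows can then be checked with wg, which lists each peer's latest handshake time and transfer counters:

wg show wg0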


Building virtual machines with vmbuilder

After installing qemu-kvm, libvirt-bin, bridge-utils and ubuntu-vm-builder, set up a bridge in /etc/network/interfaces that virtual machines can attach to:

auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet manual

auto br0
iface br0 inet static
      address 192.168.0.8
      netmask 255.255.255.0
      gateway 192.168.0.1
      dns-nameservers 127.0.0.1
      bridge_ports enp2s0
      bridge_stp off
      bridge_fd 0
      bridge_maxwait 0

Then /etc/init.d/networking restart.
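
Whether the bridge came up with the physical port attached can be checked with the bridge-utils and iproute2 tools:

brctl show br0
ip addr show br0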

Install apt-cacher and give it the following config in /etc/apt-cacher/apt-cacher.conf:

group = www-data
user = www-data
daemon_addr = 192.168.0.8
path_map = ubuntu archive.ubuntu.com/ubuntu
allowed_hosts = 127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
ubuntu_release_names = trusty
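
With the path_map above, the cache answers on apt-cacher's default port 3142; a quick sanity check is to fetch a release file through it:

curl -I http://192.168.0.8:3142/ubuntu/dists/trusty/Release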

The VM build script, using the local cache:

#!/bin/sh

if [ $# -lt 2 ]; then
    echo "usage: $0 HOSTNAME IP"
    exit 1
else
    HOSTNAME=$1
    IP=$2
fi

firstboot=$(mktemp)
echo "#!/bin/bash" >>"$firstboot"
echo "rm -f /etc/resolvconf/resolv.conf.d/{original,base,head,tail}" >>"$firstboot"
echo "reboot" >>"$firstboot"
chmod +x "$firstboot"

qemu-img create -f qcow2 -o preallocation=falloc $HOME/vms/$HOSTNAME-rootdisk 4096M

vmbuilder kvm ubuntu \
  --suite trusty \
  --verbose \
  --libvirt qemu:///system \
  --destdir $HOME/vms/$HOSTNAME/ \
  --install-mirror http://192.168.0.8:3142/ubuntu \
  --mirror http://192.168.0.8:3142/ubuntu \
  --raw $HOME/vms/$HOSTNAME-rootdisk \
  --rootsize 4096 \
  --swapsize 0 \
  --mem 128 \
  --cpus 1 \
  --hostname $HOSTNAME \
  --bridge br0 \
  --ip $IP \
  --mask 255.255.255.0 \
  --gw 192.168.0.1 \
  --dns 192.168.0.8 \
  --lang en_US.UTF-8 \
  --timezone UTC \
  --user ubuntu \
  --name Ubuntu \
  --pass ubuntu \
  --ssh-user-key $HOME/.ssh/id_rsa.pub \
  --addpkg linux-image-generic \
  --addpkg openssh-server \
  --addpkg sudo \
  --firstboot $firstboot

rm $firstboot
rmdir $HOME/vms/$HOSTNAME/

virsh autostart $HOSTNAME
virsh start $HOSTNAME
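
Assuming the script was saved as build-vm.sh (the name and the address below are just examples), building and logging into a machine goes like this:

./build-vm.sh testserver 192.168.0.20
ssh ubuntu@192.168.0.20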

Edit: To inspect and edit the virtual machine disk image, use guestfish, part of the libguestfs project:

$ sudo apt-get install libguestfs-tools
$ guestfish --rw --add testserver-rootdisk

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

><fs> run
100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
><fs> list-filesystems
/dev/sda1: ext4
><fs> mount /dev/sda1 /
><fs> emacs /etc/resolv.conf
><fs> exit

Edit: Disk image pre-allocation can be done with QEMU-provided tools, which may or may not be a more kosher approach. Must investigate.

qemu-img create -f qcow2 -o preallocation=falloc $HOME/vms/$HOSTNAME-rootdisk 32768M
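
Either way, the resulting allocation can be inspected with qemu-img info:

qemu-img info $HOME/vms/$HOSTNAME-rootdisk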

Edit: Fix nameserver enforcement in build script.


Salt Notes

I decided to go for Salt when picking a solution that would help me automate server management. Here are some things that required some figuring out.

Including keys in pillar data

Using Git as an example; the deploy key is set in the GitHub repo's settings:

sites:
  example.com:
    gitsource: git+ssh://git@github.com/you/your_repo.git
    gitidentity: |
      -----BEGIN RSA PRIVATE KEY-----
      <Deploy key goes here – mind the indentation!>
      -----END RSA PRIVATE KEY-----
    

Using the above in states:

{% if 'gitsource' in args and 'gitidentity' in args %}
/etc/deploy-keys/{{ site }}:
  file.directory:
    - makedirs: True
    - require:
      - pkg: nginx
    - watch_in:
      - service: nginx

/etc/deploy-keys/{{ site }}/identity:
  file.managed:
    - mode: 600
    - contents_pillar: sites:{{ site }}:gitidentity
    - require:
      - pkg: nginx
    - watch_in:
      - service: nginx

{{ args.gitsource }}:
  git.latest:
    - identity: /etc/deploy-keys/{{ site }}/identity
    - target: /var/www/{{ site }}
    - rev: master
    - force: True
    - require:
      - pkg: nginx
    - watch_in:
      - service: nginx
{% endif %}
    

Swap

Using a swap file here because DigitalOcean instances, at least the small ones that I've tested, don't include any swap.

/swapfile:
  cmd.run:
    - name: "fallocate -l 1024M /swapfile && chmod 600 /swapfile && mkswap /swapfile"
    - unless: test -f /swapfile
  mount.swap:
    - require:
      - cmd: /swapfile
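
Assuming the state is saved as swap.sls and the minion is called web1 (both names are just examples), applying and verifying:

salt 'web1' state.sls swap
salt 'web1' cmd.run 'swapon -s'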
    

Logentries

The "agent" of the excellent Logentries log gathering service doesn't use a config file, and instead relies on the le tool that is used to set thing up. After config changes, the Logentries daemon must be restarted (that last restart part can likely be streamlined but I couldn't get a hard service restart to work otherwise).

logentries:
  pkgrepo.managed:
    - name: deb http://rep.logentries.com/ trusty main
    - dist: trusty
    - file: /etc/apt/sources.list.d/logentries.list
    - keyid: C43C79AD
    - keyserver: pgp.mit.edu
  pkg:
    - latest

logentries_registered:
  cmd.run:
    - unless: le whoami
    - name: le register --force --account-key={{ pillar['logentries']['account_key'] }} --hostname={{ grains.id }} --name={{ grains.id }}-`date +'%Y-%m-%dT%H:%M:%S'`
    - require:
      - pkg: logentries
    - require_in:
      - pkg: logentries-daemon

logentries_follow:
  cmd.run:
    - name: |
        le follow /var/log/syslog
        le follow /var/log/auth.log
        le follow /var/log/salt/minion
{% for site, args in pillar.get('sites', {}).items() %}
        le follow /var/log/nginx/{{ site }}.access.log
        le follow /var/log/nginx/{{ site }}.error.log
{% endfor %}
    - require:
      - pkg: logentries
    - require_in:
      - pkg: logentries-daemon

logentries-daemon:
  pkg:
    - latest

logentries_daemon_stop:
  service.dead:
    - name: logentries
    - require:
      - pkg: logentries-daemon
    - require_in:
      - service: logentries_daemon_start

logentries_daemon_start:
  service.running:
    - name: logentries
    
