Longtime readers have no doubt observed that I plug a lot of weird shit into my exocortex - from bookmark managers to card catalogues to just about anything that has an API. Sometimes this is fairly straightforward: if it's on the public Net I can get to it (processing that data is a separate issue, of course). But what about the stuff I have around the lab? I'm always messing with new toys that are network connected and occasionally useful. The question is, how do I get their data out of the lab and into my exocortex? Sometimes I write bots to do that for me, but that can be kind of clunky because a lot of this stuff doesn't need user interaction at all. I could always poke some holes in my firewall, lock them to a specific IP address, and set static addresses on my gadgets. However, out of necessity I've got several layers of firewalls at home, and making chains of port forwards work is a huge pain in the ass. I don't recommend it. "So, why not a VPN?" you're probably asking.
I'd been considering VPNs as a solution. For a while I considered the possibility of setting up OpenVPN on a few of my devices-that-are-actually-computers and connecting them to my exocortex as a VPN concentrator. However, I kept running into problems trying to make just a single network port available over an OpenVPN connection, and I never managed to figure it out. Then part of me stumbled across a package called Nebula, originally developed by Slack for doing just what I wanted to do: make one port inside my network available to another server in a secure way. Plus, at the same time, it networks together all of the servers it's running on. Here's how I set it up.
The two things I wanted to network were my Huginn install and Cloudbuster, the ADS-B Exchange air traffic scanner running on my desk (which just happens to make its raw data available if you hit the URI /tar1090/data/aircraft.json). The first thing I had to do was clone the Nebula git repository to Windbringer and compile it using the instructions in the "Building Nebula from source" section of the README.md file. Just to make life easier I ran make all to compile a version of the Nebula daemon for every supported platform rather than trying to pick and choose.
Nebula has the concept of a lighthouse node, which is a system whose IP address never changes, and which helps all of your Nebula nodes find each other. It doesn't have to be a dedicated system and runs quite nicely alongside Huginn and everything else in my exocortex. I just had to upload the nebula and nebula-cert utilities from the ~/nebula/build/linux-amd64 directory on Windbringer to get the bare essentials in place. Those went into the /usr/local/sbin directory. Similarly, the contents of ~/nebula/build/linux-arm-7 went into the same location on Cloudbuster (which happens to be a Raspberry Pi 3 B+ running their custom re-spin of Raspbian).
Then I had to pick a network layout. Pretty much all of my stuff uses RFC 1918 private addresses, so it was just a matter of picking something that wouldn't collide. I arbitrarily picked 172.16.0.0/24, which gives me 254 possible hosts: 172.16.0.1 is my exocortex-slash-lighthouse node, and 172.16.0.2 is Cloudbuster's Nebula IP. Then I had to write a basic config file, which I based on the sample configs that come with Nebula's source code. By convention I put all of my Nebula config stuff in a new subdirectory, /etc/nebula, which also makes it easy to back up and restore. Here's what the lighthouse's /etc/nebula/config.yml file looks like, with minimal comments and only the features I'm using:
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/exocortex.crt
  key: /etc/nebula/exocortex.key

# Pattern is "in-VPN address": ["public IP address"]
static_host_map:
  "172.16.0.1": ["184.108.40.206:4242"]

# Yup, everybody connect to me!  This is the secret sauce!
lighthouse:
  am_lighthouse: true
  interval: 60
  hosts:

# The public IP and UDP port I listen on.
listen:
  host: 220.127.116.11
  port: 4242

# This makes sure Nebula connections stay open through
# NATting firewalls.
punchy: true
punch_back: true

# Private IP address range I picked for my VPN network.
local_range: "172.16.0.0/24"

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any
Please note that anything exposed on the Nebula network is NOT exposed to the outside world as well. Think of a Nebula VPN as a little closed off world where you share your toys in private. While, in theory, it should be possible to expose Nebula services to the public Net, I haven't needed to do this yet and so haven't tried to set it up.
Then I had to bootstrap the certificates used for node identification and encryption key exchange on my Nebula network. Unlike standing up an actual CA (which is a terrible thing to go through if you don't have to), Slack made this part easy. It takes just one command:
root@exocortex:/etc/nebula(23)# nebula-cert ca -name "Virtual Adept Networks, Unlimited"
root@exocortex:/etc/nebula(23)# ls -alF
total 68
drwxr-xr-x   2 root root  4096 Mar 29 22:19 ./
drwxr-xr-x 143 root root 12288 Apr 11 18:31 ../
-rw-------   1 root root   259 Mar  7 22:49 ca.crt
-rw-------   1 root root   174 Mar  7 22:49 ca.key
-rw-r--r--   1 root root  8556 Mar  8 01:07 config.yml
root@exocortex:/etc/nebula(23)#
From this I can set up the rest of the network. I cut a new certificate pair for Cloudbuster:
root@exocortex:/etc/nebula(23)# nebula-cert sign -name "cloudbuster" -ip "172.16.0.2/24"
root@exocortex:/etc/nebula(23)# ls -alF
total 68
drwxr-xr-x   2 root root  4096 Mar 29 22:19 ./
drwxr-xr-x 143 root root 12288 Apr 11 18:31 ../
-rw-------   1 root root   259 Mar  7 22:49 ca.crt
-rw-------   1 root root   174 Mar  7 22:49 ca.key
-rw-------   1 root root   312 Mar  8 01:17 cloudbuster.crt
-rw-------   1 root root   127 Mar  8 01:17 cloudbuster.key
-rw-r--r--   1 root root  8556 Mar  8 01:07 config.yml
root@exocortex:/etc/nebula(23)#
I then copied the files ca.crt, cloudbuster.crt, and cloudbuster.key onto Cloudbuster and put them into the /etc/nebula subdirectory along with a config file. The only differences between this config file and the lighthouse-enabled one are the lack of lighthouse mode and a firewall rule exposing the web server running on Cloudbuster. For reference, it looks like this:
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/cloudbuster.crt
  key: /etc/nebula/cloudbuster.key

static_host_map:
  "172.16.0.1": ["18.104.22.168:4242"]

# I'm not a lighthouse.
lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "172.16.0.1"

# Don't listen on any ports, I'm not a lighthouse.
listen:
  port: 0

punchy: true
punch_back: true

local_range: "172.16.0.0/24"

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

    # Expose my web server to Nebula node 'exocortex'.
    - port: 80
      proto: tcp
      host: exocortex
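That last inbound stanza is the whole trick: Nebula's inbound firewall is default-deny, and a rule can match a peer by the name in its certificate. Just to sketch what sharing another toy would look like - say, a dashboard on TCP port 8080 (the port and the service here are invented for illustration) - opened up to every Nebula node instead of just one, the inbound section would read:

```
inbound:
  # ICMP from anywhere inside the overlay, same as before.
  - port: any
    proto: icmp
    host: any

  # Hypothetical: a dashboard on TCP/8080, open to every Nebula node.
  - port: 8080
    proto: tcp
    host: any
```

Rules can also match on certificate groups (set with nebula-cert sign at signing time) rather than individual host names, which scales better once you have more than a couple of nodes.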
Please note that I didn't have to set IP addresses anywhere for the nodes that aren't lighthouses; that information gets baked into each node's certificate. Then I started Nebula on both hosts from separate terminals:

sudo /usr/local/sbin/nebula -config /etc/nebula/config.yml
Now for the moment of truth. Will it blend^W^W^WDid it work?
drwho@exocortex:~()$ curl -s http://172.16.0.2/ | head -15
<!DOCTYPE html>
<!--[if lte IE 6]><html class="preIE7 preIE8 preIE9"><![endif]-->
<!--[if IE 7]><html class="preIE8 preIE9"><![endif]-->
<!--[if IE 8]><html class="preIE9"><![endif]-->
<!--[if gte IE 9]><!--><html><!--<![endif]-->
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <title>Cloudbuster</title>
  <link href="/css/bootstrap.min.css" rel="stylesheet">
  <link href="/css/jumbotron.css" rel="stylesheet">
  <script src="/js/bootstrap.min.js"></script>
  <script src="/js/jquery-3.4.1.slim.min.js"></script>
(23) Failed writing body
drwho@exocortex:~()$ echo "w00t!"
w00t!
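With the tunnel up, Huginn can treat Cloudbuster's data feed like any local resource. As a sketch of what consuming it looks like, here's the general shape of tar1090's aircraft.json and a minimal field extraction. I've faked the HTTP fetch with a canned sample so the pipeline itself is the demo; against the real node the first line would be curl -s http://172.16.0.2/tar1090/data/aircraft.json instead:

```shell
# Canned sample shaped like tar1090's aircraft.json (the hex codes,
# callsigns, and altitudes here are made up).
SAMPLE='{"now": 1650000000.0, "aircraft": [
  {"hex": "a1b2c3", "flight": "TEST123 ", "alt_baro": 35000},
  {"hex": "d4e5f6", "flight": "DEMO45  ", "alt_baro": 12000}
]}'

# Pull out the ICAO hex code, callsign, and altitude of each contact.
echo "$SAMPLE" | python3 -c '
import json, sys
for ac in json.load(sys.stdin)["aircraft"]:
    print(ac["hex"], ac["flight"].strip(), ac["alt_baro"])
'
# prints:
#   a1b2c3 TEST123 35000
#   d4e5f6 DEMO45 12000
```

The callsign field comes back space-padded from the decoder, hence the strip().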
Now to set up Nebula so it runs in the background all the time. To do that I had to create a simple /etc/systemd/system/nebula.service systemd service file on both hosts. I forget where I got it from originally but it was remarkably straightforward for systemd:
[Unit]
Description=nebula
Wants=basic.target
After=basic.target network.target

[Service]
SyslogIdentifier=nebula
StandardOutput=syslog
StandardError=syslog
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/sbin/nebula -config /etc/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target
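One caveat with that unit (my own observation, not anything Nebula requires): network.target only means the network stack is coming up, not that the host actually has an address yet. The Restart=always line papers over any race at boot, but if it ever becomes a problem, a drop-in override like this would tighten the ordering without touching the main unit file:

```
# /etc/systemd/system/nebula.service.d/override.conf
# (hypothetical drop-in; systemd merges it with nebula.service)
[Unit]
Wants=network-online.target
After=network-online.target
```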
Now to enable and start the service on both machines and move on to doing interesting things with Cloudbuster:
root@exocortex:/etc/nebula(23)# systemctl enable nebula.service
Created symlink /etc/systemd/system/multi-user.target.wants/nebula.service → /etc/systemd/system/nebula.service.
root@exocortex:/etc/nebula(23)# systemctl start nebula.service
root@exocortex:/etc/nebula(23)#