Routing Tunneled IPv6

 

As great as FiOS is (and no, this isn’t running on it), one of its major flaws is that Verizon has acted like IPv6 doesn’t exist. For such a “next-gen” fiber-optic network, the only thing we’ve gotten from VZ is talk of it “coming soon”. But after six years, soon still hasn’t come. It looks like if I want IPv6 support on my network, I’m going to have to tunnel it. So why not set something up that makes IPv6 available on my network as if it were native, even though it’s all running through a tunnel?

That’s just what I did, and in a long roundabout way; here’s how it’s done.

This project has been attempted over the years with varying amounts of success, and for various reasons. The primary reason is that as mobile/LTE networks went native IPv6, I found myself unable to maintain a VPN connection to home while moving around. Most of the time this wasn’t an issue, but when you’re streaming from your home server over that connection while on the go, being able to maintain the VPN between tower handoffs suddenly matters a lot more.

There were two basic solutions: the first was to get my PPTP server onto IPv6, the second was to switch to OpenVPN. I am not the most knowledgeable person around, and a lot of the things I do get running are the result of “on-demand” studying, googling, and reading how others did it. Switching to OpenVPN looked more difficult than getting my VPN onto IPv6, so I started out using a Hurricane Electric IPv6 tunnel on that machine. It actually worked. It took some fiddling with my router’s protocol forwarding, but I was able to connect to my server over the IPv6 LTE network using its tunnel endpoint.

I soon found myself wondering if I could use the existing tunnel on the server to provide IPv6 to the rest of my network. I was already assigned a routable /64 as part of the tunnel, but I also went ahead and took a routed /48, subnetted it down into its 65,536 /64 blocks, and just picked one. I still don’t know why I did this other than someone said: “if you can, why not?” A ton of reading and tinkering later, I was successfully routing IPv6 to my entire network through the tunnel. However, I’m not sure whether the failing hardware was the problem or whether I misconfigured something, because the setup kept needing service restarts at shorter and shorter intervals, and eventually my ability to connect to my VPN vanished.

I never bothered trying to figure out what was wrong, as I knew I needed to replace that hardware anyway. When I did replace it, I decided I wanted to get the entire ordeal working on a separate machine before installing everything on the existing server. However, I wound up negating my need for IPv6 anyway when I installed VirtualBox to use a TurnKey OpenVPN “appliance”. Setting up OpenVPN that way was almost too easy: OpenVPN only requires a UDP port and works just fine over my mobile provider’s v4 tunneling. So it would be a while before I decided to try to get this working again. Then I had to explain how to configure an HE tunnel to someone a couple of days ago, and with the brainworm having been planted…I decided to give it another go.

We’re doing all of this inside a VirtualBox VM running Ubuntu Server. When I installed and configured the replacement server it was out of necessity, so I went with Ubuntu because it was quick and easy and I knew I’d be back up and running quickly. I have played around with Debian more, including seeing how small a footprint I can get with a GUI, and I’ve liked what I’ve come up with; but since I plan on doing this on the server, I’m sticking with the same distro. This is as basic a server install as the ISO will give you: I didn’t install SSH or Apache, and the only option I picked was standard system utilities. The network interface in the virtual machine is enp0s3, simply bridged to the LAN; any reference you see to enp0s3 should be changed to reflect whatever interface you’re going to be routing v6 over. I’m doing it with a single card, but you could also use one interface for the tunnel and another for your v6 LAN.
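
For what it’s worth, the bridged setup can also be done from the command line with VBoxManage; here’s a rough sketch, where the VM name and host interface are just placeholders and the VM needs to be powered off first:

VBoxManage modifyvm "ipv6-router" --nic1 bridged --bridgeadapter1 enp3s0    # attach the VM's first NIC to the host's LAN interface

That’s all “bridged to the LAN” means here; doing it through the GUI’s Network settings gets you the same thing.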

My first step was to get the IPv6 tunnel working in the first place. I find this easier on Linux than on Windows, since it’s just a matter of editing /etc/network/interfaces and restarting the network service. Windows is a bunch of individual commands that you’d have to batch; I think you can copy and paste the whole block and it’ll work, but I’ve never tried it. HE gives you the “example code” you have to use…so it’s more like paste into /etc/network/interfaces and restart networking.

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
    address <TUNNEL CLIENT IPv6 ADDRESS>
    netmask 64
    endpoint <TUNNEL IPV4 ADDRESS>
    local <LOCAL IPV4 ADDRESS>
    ttl 255
    gateway <TUNNEL SERVER IPv6 ADDRESS>
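
With that in place, a quick sanity check (assuming ifupdown is what’s managing your interfaces, which it is on a stock install like this) is to bring the tunnel up and ping across it:

sudo ifup he-ipv6                          # create and configure the tunnel
ping6 -c 4 <TUNNEL SERVER IPv6 ADDRESS>    # the HE end of the point-to-point link
ping6 -c 4 ipv6.google.com                 # something beyond the tunnel

If both pings answer, the tunnel itself is working and everything after this point is just routing.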

The most complicated part of this process is figuring out which local IPv4 address you’re supposed to use if you’re behind a NAT. The HE example configuration uses your public IPv4 address, but also says that if you’re behind a firewall that can pass Protocol 41, you should use the IP your appliance gives you. Since my ISP router apparently has Protocol 41 forwarding in its firewall, I used the LAN IP and everything has worked fine. If in doubt, please consult something written by someone who knows what the hell they’re doing more than I do.
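
If you’re not sure whether Protocol 41 is actually reaching your machine, one way I’d check (the interface name is whatever yours is) is to watch for 6in4 packets with tcpdump while pinging something over the tunnel:

sudo tcpdump -ni enp0s3 'ip proto 41'    # 6in4 packets to/from the HE tunnel server should show up here

If traffic from HE’s tunnel server IPv4 address shows up, the LAN IP is fine as the local address; if nothing arrives, you probably need protocol forwarding or a DMZ rule on the router.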

From this point on, I want to remind you of a command that might be important:

sudo ip tunnel del he-ipv6

Since we’re going to be changing the tunnel properties and restarting networking later, you might need this. Should you get an error that the sit0 tunnel can’t be created due to buffer space, running that command will delete the “broken” tunnel interface. For example, I used ifdown he-ipv6 earlier to test something, and running ifup he-ipv6 afterwards resulted in that error; running the command above solved it. I believe doing a full stop on networking and then starting it again usually prevents it from happening, but in prior cases I usually got it when just restarting the service. YMMV and I may be screwing something up somewhere (and usually am).
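
Roughly, the recovery sequence when the tunnel gets wedged like that looks like this:

sudo ifdown he-ipv6           # take the interface down (may complain if it's already half-gone)
sudo ip tunnel del he-ipv6    # remove the stale tunnel interface
sudo ifup he-ipv6             # recreate it cleanly from /etc/network/interfaces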

The first thing we need to do is enable IPv6 forwarding in the kernel by editing /etc/sysctl.conf and uncommenting the line:

net.ipv6.conf.all.forwarding=1

Writing the change to sysctl.conf ensures that it’s persistent between reboots, so that’s why I say do it first. For now, we can manually turn it on:

sudo sysctl -w net.ipv6.conf.all.forwarding=1
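
You can read the value back to make sure it stuck:

sysctl net.ipv6.conf.all.forwarding    # should print: net.ipv6.conf.all.forwarding = 1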

Now we need to set up our route. We do this in the network configuration for the tunnel, so it’s back to /etc/network/interfaces to add two new lines and modify one:

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
    address <TUNNEL CLIENT IPv6 ADDRESS>
    netmask 64
    endpoint <TUNNEL IPV4 ADDRESS>
    local <LOCAL IPV4 ADDRESS>
    ttl 64
    gateway <TUNNEL SERVER IPv6 ADDRESS>
    up ip link set mtu 1280 dev $IFACE
    up route -6 add <YOUR IPv6 BLOCK/64> dev enp0s3 <or eth0/whatever network card you’re using>
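
After restarting networking, a quick check that the route actually took is to look at the kernel’s IPv6 routing table; you should see your /64 pointed at enp0s3 and a default route pointed at he-ipv6:

ip -6 route show | grep -E 'enp0s3|he-ipv6'    # expect <YOUR IPv6 BLOCK/64> ... dev enp0s3 and default ... dev he-ipv6

If the enp0s3 line is missing, the up route command didn’t take.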

I wish I could tell you why the TTL changes to 64 from HE’s recommended 255; I just know that when I ignored that in the past, it didn’t work. Restart networking to apply the changes. But this is only half the solution: we’ve made the route available, now we have to advertise it so our clients can autoconfigure. For that, we need to install radvd, the router advertisement daemon.

sudo apt install radvd

No config is created when it’s installed…I actually want to say I saw an error that seemed to exist for the specific purpose of telling you this. The configuration file is /etc/radvd.conf and should look something like this:

interface enp0s3 {
    AdvSendAdvert on;
    MaxRtrAdvInterval 30;
    AdvOtherConfigFlag on;
    prefix <YOURV6BLOCK>/64 {
        AdvOnLink on;
        AdvAutonomous on;
    };
    RDNSS 2620:0:ccc::2 2620:0:ccd::2 {
    };
};

The DNS servers I used were OpenDNS’s, just because they were the easiest to type in. You could use whatever DNS you wanted there.
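
radvd will refuse to start on a broken config, so it’s worth letting it parse the file first; as far as I remember the flag for that is:

sudo radvd --configtest    # parses /etc/radvd.conf and exits, printing any syntax errors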

Then just start the radvd service:

sudo service radvd start

If everything went well, every IPv6-capable device on your network should have picked up an IPv6 address. If your systems support RFC 6106 (DNS servers advertised in the RA itself), then everything is done. However, to deal with the fact that not everything supports RFC 6106, we need to install a DHCPv6 server to, at a minimum, serve up the IPv6 DNS servers.

sudo apt install wide-dhcpv6-server

This is what my /etc/wide-dhcpv6/dhcp6s.conf looks like:

option domain-name-servers 2620:0:ccc::2 2620:0:ccd::2;

And start the usual way:

sudo service wide-dhcpv6-server start

You can then call the service with status to make sure it’s running. I actually get an error, but it seems to work anyway; Windows 10 pulled the IPv6 OpenDNS servers automatically.
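
For reference, that’s just:

sudo service wide-dhcpv6-server status    # or: systemctl status wide-dhcpv6-server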

This is technically a stateless network, IIRC, and as I was informed by someone who knows a hell of a lot more than me, prefix delegation would be good since it’d give me globally addressable IPs. But since this is technically a tunnel, HE static-routes the block to my tunnel endpoint, so PD isn’t required.

dewdude@qth:~$ ping6 -c 4 google.com
PING google.com(iad30s09-in-x0e.1e100.net (2607:f8b0:4004:800::200e)) 56 data bytes
64 bytes from iad30s09-in-x0e.1e100.net (2607:f8b0:4004:800::200e): icmp_seq=1 ttl=57 time=5.02 ms
64 bytes from iad30s09-in-x0e.1e100.net (2607:f8b0:4004:800::200e): icmp_seq=2 ttl=57 time=5.72 ms
64 bytes from iad30s09-in-x0e.1e100.net (2607:f8b0:4004:800::200e): icmp_seq=3 ttl=57 time=5.33 ms
64 bytes from iad30s09-in-x0e.1e100.net (2607:f8b0:4004:800::200e): icmp_seq=4 ttl=57 time=4.56 ms

--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 4.569/5.164/5.727/0.429 ms

Invoking ping6 (the IPv6 version of ping) shows we have IPv6 connectivity…and we never had to configure anything on this machine! To every device on the LAN, IPv6 looks “native”. The only device with a tunnel configuration is the VM running on my desktop.
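
If you want to double-check where a client got its address, look for a global-scope address inside your chosen block; on a Linux client that’s something like:

ip -6 addr show scope global    # the SLAAC address should fall inside <YOURV6BLOCK>/64

(Windows shows the same thing under ipconfig /all.)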

So the next question is: how does this compare to native IPv4? Naturally, adding a tunnel is going to increase latency and can even affect your overall throughput. HE seems to have a pretty serious amount of bandwidth, so I suspect speed-wise it won’t hurt much; the biggest impact would be latency. I do have a fairly local tunnel endpoint, though:

dewdude@qth:~$ ping -c 4 216.66.22.2
PING 216.66.22.2 (216.66.22.2) 56(84) bytes of data.
64 bytes from 216.66.22.2: icmp_seq=1 ttl=59 time=4.78 ms
64 bytes from 216.66.22.2: icmp_seq=2 ttl=59 time=4.36 ms
64 bytes from 216.66.22.2: icmp_seq=3 ttl=59 time=4.17 ms
64 bytes from 216.66.22.2: icmp_seq=4 ttl=59 time=4.54 ms

--- 216.66.22.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 4.171/4.464/4.783/0.240 ms

4.5 ms is pretty good; pretty damn good, from what I’ve gathered. The only lower pings I see are usually around 3 ms to “on-network” routers, so my packets don’t have to travel very far to be tunneled…I also live near the major east-coast hub for internet traffic. So how does performance over the tunnel compare to native IPv4? Google once again provides servers almost everywhere on massive backbones, which makes for a semi-decent latency test:

dewdude@qth:~$ ping -4 -c 4 google.com
PING google.com (172.217.12.238) 56(84) bytes of data.
64 bytes from iad30s15-in-f14.1e100.net (172.217.12.238): icmp_seq=1 ttl=56 time=4.04 ms
64 bytes from iad30s15-in-f14.1e100.net (172.217.12.238): icmp_seq=2 ttl=56 time=3.95 ms
64 bytes from iad30s15-in-f14.1e100.net (172.217.12.238): icmp_seq=3 ttl=56 time=3.97 ms
64 bytes from iad30s15-in-f14.1e100.net (172.217.12.238): icmp_seq=4 ttl=56 time=4.24 ms

--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 3.956/4.054/4.246/0.131 ms

dewdude@qth:~$ ping6 -c 4 google.com
PING google.com(iad30s09-in-x0e.1e100.net (2607:f8b0:4004:800::200e)) 56 data bytes
64 bytes from iad30s09-in-x0e.1e100.net (2607:f8b0:4004:800::200e): icmp_seq=1 ttl=57 time=5.02 ms
64 bytes from iad30s09-in-x0e.1e100.net (2607:f8b0:4004:800::200e): icmp_seq=2 ttl=57 time=5.72 ms
64 bytes from iad30s09-in-x0e.1e100.net (2607:f8b0:4004:800::200e): icmp_seq=3 ttl=57 time=5.33 ms
64 bytes from iad30s09-in-x0e.1e100.net (2607:f8b0:4004:800::200e): icmp_seq=4 ttl=57 time=4.56 ms

--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 4.569/5.164/5.727/0.429 ms

So there is a measurable increase in latency…it’s just not much; it seems to add about 1 ms to the whole ordeal. Jitter tends to be more of an issue, as sometimes you’ll get pings of 20 ms or more for no apparent reason; even here there’s about 0.3 ms more deviation on the v6 side. I mean…I don’t think there’s a major issue.

I still need to get a proper firewall set up. I’m not too concerned right now, as all my Windows machines are running one and my Linux machine has one active, though I don’t know if it’s set up to do anything with v6 addresses. I can apparently do this network-wide at the tunnel; that may be the next step before I see about setting it up on the actual Linux server I use.