Routing Tunneled IPv6…Part Two

A couple of years ago I wrote about trying to route IPv6 from a Hurricane Electric tunnel onto my network; my goal was to make IPv6 act as if it were a native part of my network. Technically I was successful, but I ran into some issues and never actually kept it running for very long.

Let’s revisit this…because I’m bored and society took my job away from me.

It’s 2020 and Verizon does not offer IPv6 on FiOS. Let me repeat that.

It’s 2020. Verizon. Does not. Offer IPv6. On FiOS.
At this point I am sick of bitching about it. They’ve been promising IPv6 for almost 10 years now…teasing it in some areas but otherwise just not doing a damn thing about it. I am largely convinced they have no intention of offering IPv6 to their customers at any point in the future. Why would they? They’re sitting on enough IPv4 space to be the holdout. To repeat a phrase from two years ago: “it’s pretty pathetic when Comcast has you beat on at least something”. Seriously Verizon, give us IPv6 or stop talking about how advanced your network is. Sure…it’s fiber; big deal. Why is its IP technology as old as I am?

So this means that if I want IPv6 on my network, much like two years ago, I have to tunnel it in. Sure…the Verizon router now has options for IPv6; but they aren’t assigning a public prefix, so those options do absolutely no good. And getting a tunnel on one PC is one thing…getting it routed to your whole network is another.

When I did this a few years ago I had the unfortunate luck of not really understanding what I was doing. Some core routing concepts were lost on me, largely because I don’t run an ISP or manage an internet backbone; in fact I was woefully behind on a lot of concepts. Things aren’t much different now, other than that I did some reading on a few things and figured I might already have the pieces gathered…I just had to put them together.

Two years ago I did in fact have a tunnel routed; radvd was advertising a subnet to my network, all the hosts were grabbing IPs, and DHCPv6 was supplying DNS for the devices that didn’t get it from radvd. In fact, the first time I booted radvd up I happily watched my Windows 10 machine automatically grab an IP. Devices got IPs, they were talking to the gateway, the gateway was routing everything through the tunnel, and they were in fact publicly routable.

But there were two problems that meant this wasn’t staying on my network in that configuration. The first was that it was unstable, constantly going down and requiring me to kill and rebuild the tunnel interface. That in turn made all the stuff that preferred an IPv6 connection start failing. The other problem was that I needed to figure out a firewall solution. I’ve been spoiled by the firewall effect of NAT, and I wasn’t sure which devices on my network had IPv6 firewalls. While I’ve heard a few people claim this is not as big of a deal as it was with IPv4, since there’s some security built in to the protocol and it’s “impossible” to scan a subnet to find IPs due to sheer size, it still didn’t feel right leaving a wide-open path to a machine on my network. If I decided to run a service on an IPv6 address then the address wouldn’t be hidden; someone could just fire up a port scanner and go to town.

What I wanted was a network-wide firewall in my gateway; I just wasn’t sure how to do it and I wasn’t Googling the right stuff. Turns out there was a reason: I didn’t understand exactly how kernel routing and iptables worked. The machine I was dealing with at the time had no firewall enabled; it was behind NAT and I didn’t feel like I needed one. So I never hit the no-IPv6-connectivity problems that would have taught me that ip6tables rules don’t apply to just the local machine; they also apply to packets being routed through the machine. Of course, once I figured this out I just banged my head against the wall for not realizing it years ago, and got angry at myself for not studying networking all these years.
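In ip6tables terms the distinction looks something like this (a rules-file sketch, not my actual ruleset): packets addressed to the router itself traverse the INPUT chain, while packets the router relays for LAN hosts traverse FORWARD.

```
# ip6tables-save style sketch, not a complete ruleset
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
# routed traffic is filtered here in FORWARD, not in INPUT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
```

So a default-deny FORWARD policy firewalls the whole LAN at the router, even with INPUT left open.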

Things were a bit different this time around. In the past I was doing this in a VM running on my desktop, which may have contributed to the instability; this time things are running on a dedicated hypervisor on some surplus enterprise equipment. The Ubuntu versions were newer, and different as well. Thankfully HE provides some experimental netplan settings for creating the tunnel, which worked with a lot less hassle than the previous instructions.

network:
  version: 2
  tunnels:
    he-ipv6:
      mode: sit
      remote: <HE TUNNEL IPv4>
      local: <LOCAL ADDRESS>
      addresses:
        - <IPv6 TUNNEL CLIENT SIDE>
      gateway6: <IPv6 TUNNEL SERVER SIDE>

Other than filling in the correct info, this was all I needed to put in a .yaml file for netplan to create and bring up the tunnel. My router passes protocol 41 (6in4 encapsulation is IP protocol 41, not a TCP/UDP port), meaning I don’t have to DMZ the IPv6 router; and I use my LAN IP for local:, as opposed to the public IP like you would for most things behind NAT. The tunnel client and server IPs are the v6 endpoints of the tunnel itself, not your routed /64. In fact your routed /64 (or /48) is going to be in an entirely different subnet.
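To make that concrete, a typical HE assignment looks something like this (these addresses are made up for illustration; the point is that the tunnel /64 and the routed /64 are sibling subnets, not the same one):

```
Client IPv6 (addresses:):  2001:470:7:3c3::2/64   tunnel endpoint, my side
Server IPv6 (gateway6:):   2001:470:7:3c3::1      tunnel endpoint, HE side
Routed /64:                2001:470:8:3c3::/64    what my LAN hosts will actually use
```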

Applying the new settings to netplan instantly brings up the tunnel interfaces; and you will in fact have IPv6 on there. But now we need to get it to the rest of our network and then add some protection.

IPv6 is big. Mindbogglingly big.

So everything about configuration changed with v6; gone are the days where you need to punch in an IP or get it from DHCP, thanks to things like stateless configuration. See, the IPv6 address space is big; you just won’t believe how vastly, hugely, mindbogglingly big it is. But we need that, because the amount of junk we’re putting online far exceeds what IPv4 gives us. v4 can only address 4,294,967,296 unique hosts. This was probably fine back in the ’80s, when 640K of RAM was all you needed and no one dreamed of 4.2 billion devices being networked. There were so many IPs relative to the time that entire large chunks were reserved for non-public use. All of 127.0.0.0/8, all 16.7 million possible addresses, is host-only loopback; 10.0.0.0/8 is all private LAN use. There’s a lot of “unusable” v4 space as far as the public internet is concerned. So just how big is IPv6 in comparison?
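Those block sizes are easy to sanity-check with shell arithmetic (a /8 holds 2^24 addresses):

```shell
# 2^24 addresses in a /8 block (the "16.7 million" figure), 2^32 in all of IPv4
echo $((1 << 24))   # 16777216
echo $((1 << 32))   # 4294967296
```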

2^128 addresses. You want that in scientific notation? 3.4×10^38. Full integer? 340,282,366,920,938,463,463,374,607,431,768,211,456. In cardinal text, just call it 340 undecillion. I had to look that up; I had no clue what 10^38 would be off the top of my head. Just like IPv4, there are large sections reserved for non-public use; but unlike IPv4 the impact is like peeing in the ocean. I’m not 100% sure how it works for ISPs that actually support IPv6, but giving the customer a /64 subnet is what I hear a lot; it’s what HE gave me by default on my tunnel. I did not just get a single IP; I got an entire subnet. That’s 64 bits worth of addresses: 2^64, roughly 1.8×10^19. 18 quintillion!
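Shell arithmetic overflows at these sizes, so to double-check my numbers I leaned on python3 (assuming it’s installed):

```shell
# 2^128: the whole IPv6 space; 2^64: a single /64 subnet
python3 -c 'print(2**128)'   # 340282366920938463463374607431768211456
python3 -c 'print(2**64)'    # 18446744073709551616 (~18.4 quintillion)
```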

So I’ve got 18 quintillion IPv6 addresses routed to my IPv6 tunnel. I’ve got maybe 15 things on my network that can use IPv6…so I’m not going to have a problem there. But now I need to actually set things up so devices on my network can use address space off my /64 and actually use IPv6. How we accomplish this is apparently different from how it’s done with IPv4; I never had to route subnets outside of my LAN and usually had something with a front end. Based on the stuff I’ve read in the last few days, though, I’m pretty sure it’s a similar process.

radvd – not for movies, but for advertising your router

In an IPv4 network you’re either manually assigning machines to subnets and IPs, manually specifying all the required parameters, or you just let DHCP handle it. The network card sends a request to a special broadcast address looking for a DHCP server; the DHCP server responds with some configuration information. It’s the reason that when you plug your computer in to your network, your router hands it a private LAN IP along with the rest of the information needed to make your connection work. IPv6 does have support for this; it’s what they call “stateful” mode. However, stateless mode seems more common, and it largely makes DHCP unnecessary. The differences are something I just barely have a grasp on.
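The classic stateless trick is EUI-64: the host builds its own 64-bit interface ID from its MAC address, no server required. A rough sketch of the transformation (the MAC here is hypothetical; modern OSes often use randomized “privacy” addresses instead):

```shell
# EUI-64 sketch: MAC -> 64-bit interface ID
# Steps: flip the universal/local bit of the first octet, insert ff:fe in the middle
mac="00:11:22:33:44:55"                    # hypothetical MAC address
oldIFS=$IFS; IFS=:
set -- $mac                                # split into the six octets
IFS=$oldIFS
first=$(printf '%02x' $(( 0x$1 ^ 2 )))     # flip the universal/local bit: 00 -> 02
iid="${first}$2:$3ff:fe$4:$5$6"
echo "$iid"                                # 0211:22ff:fe33:4455
```

Slap a /64 prefix in front of that and you have a full, self-assigned 128-bit address.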

So to make our IPv6 /64 subnet available and routed to the rest of my LAN, we first need to enable IPv6 forwarding in the Linux kernel and install radvd, the router advertisement daemon.

Enable IPv6 forwarding in /etc/sysctl.conf:

net.ipv6.conf.all.forwarding=1

sysctl -p will reload the file and make changes active. Now install radvd:

sudo apt-get install radvd

radvd will be unable to start immediately after installation, as we don’t have a configuration file for it yet. This is what I put in mine, and it seems to get the job done:


interface netif {
    AdvSendAdvert on;
    MaxRtrAdvInterval 30;
    AdvOtherConfigFlag on;
    prefix <YOUR IPv6 /64> {
        AdvOnLink on;
        AdvAutonomous on;
    };
    RDNSS 2620:0:ccc::2 2620:0:ccd::2 {
    };
};
A few things should hopefully be obvious: replace ‘netif’ with the interface you’re broadcasting on, and fill in your IPv6 /64 where marked. Remember, this is not your tunnel subnet; this is the routed /64 that HE has assigned. radvd also has support for advertising DNS servers, via the RDNSS entry, here pointing at OpenDNS.

Start up radvd, check that it’s running, then enable it at boot:

sudo systemctl start radvd
sudo systemctl status radvd
sudo systemctl enable radvd

I should point out that if you want to manually configure IPv6 on your network, there’s no actual need to advertise your route. In fact, before I installed radvd I had no problem with a fully manual configuration after enabling routing in the kernel. By assigning an IPv6 address manually to my Windows machine and pointing the gateway at the link-local address of the router’s LAN-side interface, I had instant IPv6 access. There’s also nothing preventing me from still manually assigning IPv6 addresses even with radvd on my network; it just facilitates stateless auto-configuration so the hosts can get the public prefix they slap on their self-generated suffix. The self-generated portion of the address is usually a full 64 bits wide, so assigning IPs sequentially means there’s essentially no way you’ll ever cause a conflict with stateless devices. I can do things like assign my servers 64prefix::2, 64prefix::3, 64prefix::4, 64prefix::900, 64prefix::dead:beef, and not worry about the random stateless clients that have addresses like 64prefix:15cd:fed1:444c:5974.
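Those short addresses are just zero-compressed; the `::` expands to however many zero groups are needed. A quick check with python3’s ipaddress module (2001:db8::/64 is the documentation prefix, standing in for my real one):

```shell
# expand a compressed IPv6 address to its full eight groups
python3 -c 'import ipaddress; print(ipaddress.ip_address("2001:db8:1:1::dead:beef").exploded)'
# 2001:0db8:0001:0001:0000:0000:dead:beef
```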

But going back to setting things up for auto-configuration: the DNS entries in radvd only work for devices that support RFC 6106. For the devices that don’t, we still need a DHCPv6 server to supply DNS entries.

sudo apt install wide-dhcpv6-server

Our config file in /etc/wide-dhcpv6/dhcp6s.conf contains just one line:

option domain-name-servers 2620:0:ccc::2 2620:0:ccd::2;

Its service name is wide-dhcp6-server; use systemctl, if that’s what your system uses, to start and enable it. I’m not typing all that crap back out. You should at least know this much anyway.

Up to this point things have gone exactly as in my last article, with the exception that I had no issues getting the tunnel to work with netplan versus the HE-provided example for /etc/network/interfaces. Enabling IPv6 forwarding in the kernel, installing radvd and its configuration, wide-dhcpv6-server…everything was pretty much the same as last time.

Except it doesn’t crap out after twenty minutes. So now we can continue to the actual “Part 2” where the article from two years ago left off.

The wall, the wall, the wall is fire.

I have for entirely too many years taken for granted the firewall effect of NAT. I’ve had broadband for almost 20 years now, and for most of that I’ve been behind NAT. Sure, I relished the early 2000s, when my DSL provider would happily hand me numerous IPs if I plugged a switch in to the modem. Of course this left you entirely unprotected, but the risks were low. I mean, the entirety of my dial-up years was spent with what was essentially an exposed IP. Sure, I wasn’t online as often, but things weren’t as bad then as they are now.

But after being forced to adopt the NAT/residential gateway model like the rest of the world, I was usually sticking a “server” on the DMZ with the rest of my network protected behind it. Eventually I stopped using DMZs and just relied on the NAT effect as a firewall. I mean, it worked; the firewall in the router either rejected packets or forwarded them, based on port forwards or stateful firewall rules. So despite several people telling me that not firewalling IPv6 wasn’t a big deal, “since finding your IP even with your prefix is like trying to find a specific grain of sand”, why take the risk? In many ways a publicly routable /64 largely provides an avenue to bypass the firewalling you get with v4. A malicious script just has to obtain the v6 address from a computer/device and send it off to an attacker, who now knows how to access the device directly, beyond the firewall. One solution is to make sure every device has its own v6 firewall, which isn’t a problem for my Windows machines and is easily taken care of on my Linux VMs. But I would still like a network-wide firewall at the router to protect random devices that might not have any protection of their own. It also means I don’t have to install and enable a firewall on all the VMs; I just have to allow specific ports to be routed to specific IPs. I can always change my mind later, install individual firewalls on the VMs, and set the router to simply route all traffic to them.

However, while I write all that in hindsight, and had a pretty good feeling that was in fact how things would work with the kernel doing the routing, I needed to confirm it. I needed to see that doing things on the server affected the connection to my desktop. So my first stop was an IPv6 port scanner. Without iptables (or ufw) enabled on the router, I scanned both my laptop and the router, choosing its address from the routed /64. The results were as expected: the router reported hard closed on everything scanned except SSH. The laptop reported filtered on everything, as expected with Windows firewall on. Disabling Windows firewall reported port 445 open, with the rest closed. This was good; it gave me baseline no-protection results.

I enabled UFW with its basic set of rules and ran the test again, with Windows firewall turned off. Everything came back filtered on both machines. Turn ufw off, and it’s back to the baseline behavior. So apparently adding the network-wide firewall wasn’t going to be a huge deal. But now I needed to make sure I could actually poke specific holes for specific hosts. I fooled around a bit and came up with this:

sudo ufw route allow to <v6 address> port 445

I ran the scan once again, and lo and behold, the scanner reported everything filtered except port 445. This was exactly what I was looking for.

Firewalling at the IPv6 router. (That machine does not use the v6 address shown, or one based off its MAC.)

Now of course, adding a firewall to the router means there are additional tasks to take care of. While ufw did add a lot of ip6tables rules on its own for required traffic, I still had an issue or two getting things like DHCPv6 to work. So I set off in search of what others recommend for cases where your IPv6 routing basically doesn’t work when ufw is turned on…because there may be a rule or two you’re missing. These are the relevant rules I ended up with:

fe00::/7 547/udp           ALLOW       fe00::/7 546/udp
Anywhere (v6)              ALLOW       2001:470:8:3c3::/64
Anywhere (v6)              ALLOW       fe80::/1
Anywhere (v6)              ALLOW FWD   2001:470:8:3c3::/64

The main one that tripped me up, as I was working on this at 3:30 AM last night from bed on my phone, was the last one. Without telling the firewall to allow packets from the /64 to be forwarded out, my traffic wasn’t getting anywhere. The hosts configure themselves via link-local and use the link-local address as their gateway, but that doesn’t matter, as the packets are addressed as coming from the public IPs. The first rule allows DHCPv6 requests on the reserved address block used for that. The second and third rules largely just make sure my machines can talk to the server.
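For reference, rules like those are created with commands along these lines. This is my best reconstruction from the status output, not a transcript of what I typed; the /64 is my HE-routed prefix, and you should double-check the syntax against `man ufw` before relying on it:

```shell
# reconstructed ufw commands for the rules above (verify against your own 'ufw status')
sudo ufw allow proto udp from fe00::/7 port 546 to fe00::/7 port 547   # DHCPv6 client->server
sudo ufw allow from 2001:470:8:3c3::/64                                # LAN /64 to the router
sudo ufw allow from fe80::/1                                           # link-local to the router
sudo ufw route allow from 2001:470:8:3c3::/64                          # forward LAN /64 outbound
```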

There’s another rule I seem to remember adding directly with ip6tables as well:

ip6tables -A FORWARD -o he-ipv6 -j ACCEPT

I don’t remember seeing this one in the default rules ufw applies, and I don’t even know if it did any real good; but it’s likely to be that “one last thing” that will cause everything to break on reboot, as I didn’t make it a persistent rule. (I have rebooted since and nothing broke.)
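If that rule does turn out to matter, one way to persist it (an assumption on my part, since ufw loads this file itself) is to add it to /etc/ufw/before6.rules, just above the final COMMIT line:

```
# in /etc/ufw/before6.rules, before COMMIT; 'he-ipv6' is the tunnel interface name
-A ufw6-before-forward -o he-ipv6 -j ACCEPT
```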

Anyway…the end result is exactly the type of thing you wouldn’t notice: my network has IPv6 connectivity and every device on it has a publicly routable address. The v6 performance doesn’t suffer too much from the tunnel, helped by the fact that I’m very near all the stuff up in Ashburn, VA.

And one day…should Verizon ever decide to get with it and offer me native IPv6, I’ll switch to that. But in the meantime; I’ll make use of this.