I set up 6to4 in my home intranet, and I’ll share my experiences. The web is full of 6to4 how-tos, but there are a few reasons why my case is special: I have a dynamic IPv4 address, I want to share IPv6 in my network, and I want the cleanest solution possible. Quite a common scenario, I think, so here’s how it’s done.
6to4 is pretty simple: every IPv4 address has an associated 48-bit IPv6 address range (2002:xxxx:xxxx::/48), and there are relay routers which can translate traffic between the two. By convention, all those routers are reachable at the same IPv4 address (anycast, 192.88.99.1), so there is no need to look for them: you’ll automatically reach the one closest to you (though you can of course specify another address if you know of one).
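The mapping from IPv4 to the 6to4 prefix is just the four octets written in hex after 2002:. A one-liner shows it (192.0.2.1 is a made-up documentation address, not mine):

```shell
# Derive the 6to4 prefix for the IPv4 address 192.0.2.1:
# each decimal octet becomes two hex digits after the 2002: prefix
printf "2002:%02x%02x:%02x%02x::/48\n" 192 0 2 1
# → 2002:c000:0201::/48
```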
I’m running Debian testing on my server. The interface br0 is a bridge which includes my wired and wireless LANs, and eth1 faces my ISP. I compiled IPv6 and IPv6-in-IPv4 support into my Linux kernel, and my Windows and Linux clients are also IPv6 enabled. The plan is to use the 2002:xxxx:xxxx:1111::1 address on my server, and give the 2002:xxxx:xxxx:2222::/64 subnet to my client PCs. We’ll use SLAAC, which stands for stateless address autoconfiguration: the clients will automatically learn their configuration and assume an IPv6 address. I have to admit I was skeptical about this one.
Tunnel
First of all, we’re going to need a script to get the IPv4 address of an interface. I like to prefix my scripts’ names with my name, so here’s the one named joco-addr, which takes the name of an interface as a parameter:
```shell
#!/bin/sh
ifconfig $1 | grep -o "inet addr:\([^ ]*\) " | cut -d: -f2
```
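To see what the pipeline extracts, you can feed it a captured line of ifconfig output instead of a live interface (the address below is sample data; I also dropped the capture group and trailing space from the grep pattern, since grep -o prints the whole match anyway):

```shell
# Run a slightly simplified version of the joco-addr pipeline on sample data
sample="          inet addr:192.0.2.1  Bcast:192.0.2.255  Mask:255.255.255.0"
echo "$sample" | grep -o "inet addr:[^ ]*" | cut -d: -f2
# → 192.0.2.1
```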
My favorite way of configuring interfaces is with ifupdown via /etc/network/interfaces. Luckily, it can run scripts with backticks, so there’s no need to create a separate script to create or destroy the tunnel (like some other tutorials suggest). Here’s the relevant part of the config file:
```
iface tun6to4 inet6 v4tunnel
    address `printf "2002:%02x%02x:%02x%02x:1111::1" \`joco-addr eth1 | tr . " "\``
    netmask 64
    gateway ::192.88.99.1
    endpoint any
    local `joco-addr eth1`
    up ip route add `printf "2002:%02x%02x:%02x%02x:2222::/64" \`joco-addr eth1 | tr . " "\`` dev br0
    down ip route del `printf "2002:%02x%02x:%02x%02x:2222::/64" \`joco-addr eth1 | tr . " "\`` dev br0
```
The first line declares that it’s a 6to4 tunnel; tun6to4 is a conventional name. The next line defines the address of this interface: it takes the IPv4 address of eth1, my internet interface, and generates the 2002:xxxx:xxxx:1111::1 address from it. The next three lines are standard 6to4 stuff. The local line is again the IPv4 address of eth1. The last two lines create and delete a route whenever the tunnel is created or destroyed; this route specifies that 2002:xxxx:xxxx:2222::/64 is routed to br0, my intranet.
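The nested backticks are easier to follow if you expand the address expression by hand. For the IPv4 address 84.3.42.135 (the one that shows up in my outputs), it evaluates like this:

```shell
# Expand the address expression manually: split the octets with tr,
# then let printf convert each decimal octet to two hex digits
addr4="84.3.42.135"
printf "2002:%02x%02x:%02x%02x:1111::1\n" `echo $addr4 | tr . " "`
# → 2002:5403:2a87:1111::1
```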
Let’s bring it up:
```
root@wicklow:~# ifup tun6to4
root@wicklow:~# ifconfig tun6to4
tun6to4   Link encap:IPv6-in-IPv4
          inet6 addr: ::84.3.42.135/128 Scope:Compat
          inet6 addr: 2002:5403:2a87:1111::1/64 Scope:Global
          UP RUNNING NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
root@wicklow:~# ip -6 route | grep -v ^f
::192.88.99.1 dev tun6to4  metric 1024  mtu 1480 advmss 1420 hoplimit 0
::/96 via :: dev tun6to4  metric 256  mtu 1480 advmss 1420 hoplimit 0
2002:5403:2a87:1111::/64 dev tun6to4  proto kernel  metric 256  mtu 1480 advmss 1420 hoplimit 0
2002:5403:2a87:2222::/64 dev br0  metric 1024  mtu 1500 advmss 1440 hoplimit 0
default via ::192.88.99.1 dev tun6to4  metric 1024  mtu 1480 advmss 1420 hoplimit 0
```
As you can see, the address of the interface and the intranet route are looking good. So does it work?
```
root@wicklow:~# ping6 ipv6.google.com
PING ipv6.google.com(2a00:1450:8004::69) 56 data bytes
64 bytes from 2a00:1450:8004::69: icmp_seq=1 ttl=56 time=64.8 ms
64 bytes from 2a00:1450:8004::69: icmp_seq=2 ttl=56 time=55.1 ms
64 bytes from 2a00:1450:8004::69: icmp_seq=3 ttl=56 time=47.7 ms
^C
--- ipv6.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 47.735/55.902/64.818/6.994 ms
```
It works! Not too fast, though.
One more thing to do on the server: because of the dynamic IP, we need to make sure that the tunnel is created after we get an address and destroyed before we lose it. With isc-dhcp-client 4.1, we need to create a hook script named /etc/dhcp/dhclient-exit-hooks.d/joco-6to4 like this:
```shell
case $reason in
    BOUND|RENEW|REBIND|REBOOT)
        ifup tun6to4
        ;;
esac
```
The other script is /etc/dhcp/dhclient-enter-hooks.d/joco-6to4:
```shell
case $reason in
    EXPIRE|FAIL|RELEASE|STOP)
        ifdown tun6to4
        ;;
esac
```
Now if I do ifdown eth1 or ifup eth1 (and, more importantly, when my system does it), it brings the tunnel down or up, too. Perfect!
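A bad hook is annoying to debug through dhclient itself, so it’s worth dry-running the dispatch logic in a plain shell first. Here hook_action is just a throwaway test helper (not part of the actual hooks) that echoes what each $reason value would trigger:

```shell
# Dry-run the combined hook logic for a few dhclient $reason values
hook_action() {
    case $1 in
        BOUND|RENEW|REBIND|REBOOT) echo "ifup tun6to4" ;;
        EXPIRE|FAIL|RELEASE|STOP)  echo "ifdown tun6to4" ;;
        *)                         echo "nothing" ;;
    esac
}
hook_action RENEW    # → ifup tun6to4
hook_action EXPIRE   # → ifdown tun6to4
hook_action PREINIT  # → nothing
```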
Intranet
In order to support SLAAC, we need to install a daemon which will supply the clients with the information they need (address range, gateway, similar to DHCP stuff). The obvious choice currently is radvd. Here’s my /etc/radvd.conf:
```
interface br0
{
    AdvSendAdvert on;
    MaxRtrAdvInterval 30;
    prefix 0:0:0:2222::/64
    {
        Base6to4Interface eth1;
        AdvValidLifetime 300;
        AdvPreferredLifetime 120;
    };
};
```
In other tutorials, you’ll see additional values, but they are the default ones, so I omitted them. This configuration file means that SLAAC will work on my intranet (br0 interface), and the prefix is 2002:xxxx:xxxx:2222::/64 based on my public 6to4 address (eth1 interface). I also specified some low timeout values as suggested by radvd’s documentation because of the dynamic nature of the tunnel.
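As an aside on how clients form the host part of the address: classic SLAAC (Linux, XP) derives the 64-bit interface ID from the MAC address with the modified EUI-64 scheme, while Windows 7 defaults to random identifiers instead. A sketch of the EUI-64 math with a made-up MAC:

```shell
# Modified EUI-64: flip the universal/local bit of the first MAC octet
# and insert ff:fe in the middle (MAC below is a made-up example)
mac="00:11:22:33:44:55"
set -- $(echo "$mac" | tr : " ")
first=$(printf "%02x" $(( 0x$1 ^ 0x02 )))
# print the resulting 64-bit interface ID (leading zeros not suppressed)
printf "%s%s:%sff:fe%s:%s%s\n" "$first" "$2" "$3" "$4" "$5" "$6"
# → 0211:22ff:fe33:4455
```

The client then simply appends this ID to the /64 prefix that radvd advertises.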
After we start radvd, the clients get configured immediately. Here’s the ipconfig output on a Windows 7 machine:
```
Connection-specific DNS Suffix  . : joconet
IPv6 Address. . . . . . . . . . . : 2002:5403:2a87:2222:852c:d148:92d9:4b13
Temporary IPv6 Address. . . . . . : 2002:5403:2a87:2222:d5f:42e8:b688:b1d1
Link-local IPv6 Address . . . . . : fe80::852c:d148:92d9:4b13%17
IPv4 Address. . . . . . . . . . . : 192.168.0.106
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : fe80::214:c0dd:fee1:22dc%17
                                    192.168.0.1
```
It has the correct IPv6 address (two of them, actually: the temporary one comes from Windows’ privacy extensions, which are on by default) and a default gateway address (not the public one but the link-local address of br0, which is totally fine). Let’s see a ping test:
```
C:\>ping ipv6.google.com

Pinging ipv6.l.google.com [2a00:1450:8004::69] with 32 bytes of data:
Reply from 2a00:1450:8004::69: time=51ms
Reply from 2a00:1450:8004::69: time=50ms
Reply from 2a00:1450:8004::69: time=51ms
Reply from 2a00:1450:8004::69: time=50ms

Ping statistics for 2a00:1450:8004::69:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 50ms, Maximum = 51ms, Average = 50ms
```
Again, not too fast, but it worked, without any client-side configuration at all! This was a Windows 7 machine, but it works just as nicely on XP and Linux, and who knows what else.
Conclusion
It’s good to see that this whole thing works without too much trouble, but there are two drawbacks with this setup, which luckily cancel each other out.
The first problem is that the clients don’t know each other’s IPv6 addresses. On IPv4 I have dnsmasq, which hands out addresses via DHCP and also serves as a DNS server that knows about those addresses; quite clever. But there’s no such thing for IPv6 yet. I’m not saying it’s impossible: it can be done, for example with a DHCPv6 server and automatic DNS updates from hook scripts, but that would be too much trouble for my needs. So it’s not only the ISPs who are lazy about IPv6 deployment; better IPv6 tools are still yet to come, too.
The second problem is that if I lose my internet connection and my IPv4 address, then my tunnel and my entire IPv6 network are gone with it, too. If I depended on an IPv6 intranet, this would be a real problem, which could be solved with a private, internal address range, like on IPv4. But like I said above, my clients still communicate with each other over IPv4, so this isn’t a problem at all. Lucky, huh?
Also, there’s the problem of latency as you can see in the ping tests. I’m guessing it’s because there aren’t too many 6to4 relays, and my IPv6 packets travel a lot before they reach one. Other than this, I’d say this technology is perfectly reliable.
However, I’m still interested in something better, so until my ISP provides IPv6 (shame on them), I’ll keep experimenting. My first target is a tunnel broker in Hungary, where I’m also located; I’m hoping it will be a lot faster. The next goal after that is an IPv6-only intranet.