Integrating with your IPv6 Kubernetes Cluster

Why it is important to focus on IPv6

IPv6 was first introduced as a standard in 1995, yet only in the last five years has adoption accelerated, driven by the growing need to address the limitations of IPv4. In some countries, such as Japan, the pool of IPv4 addresses has long been considered exhausted, and many organizations rely on NAT architectures to buy time.

With the explosion of cloud-based applications and IoT, not even Kubernetes can escape the call for IPv6 support and all the caveats that come with it. For network engineers, IPv6 is well known; the container world, however, is arguably still very new to the protocol. Fortunately, this is changing, and with more speed now that dual-stack IPv4/IPv6 networking has reached general availability (GA) in Kubernetes v1.23 and single-stack IPv6 has alpha support in Istio v1.9.

IPv6 changes everything in Kubernetes

When deploying IPv6-only clusters, your pods and services will talk to each other on 128-bit addresses from the blocks you define (default /122). You will therefore need NAT64 and DNS64 in place when calling IPv4-only services such as DockerHub, GitHub, and other package repositories, whether internal or on the Internet. This also applies to the packages required when installing prerequisites for K8S. Your CNI, for example Calico, will also need specific variables set to enable IPv6.

https://projectcalico.docs.tigera.io/networking/ipv6
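As a sketch, enabling IPv6 in Calico typically involves setting environment variables like these on the calico-node DaemonSet; the pool CIDR below is an illustrative assumption, so consult the Calico documentation linked above for the authoritative list for your version:

```yaml
# Sketch of calico-node container env vars for IPv6 -- values are illustrative.
env:
  - name: IP6
    value: "autodetect"          # let calico-node pick the node's IPv6 address
  - name: CALICO_IPV6POOL_CIDR
    value: "aaa1:bbb2:ccc3:eee6::/64"   # assumed pod IP pool, not from this article
  - name: FELIX_IPV6SUPPORT
    value: "true"                # enable IPv6 data-plane programming in Felix
```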

Once you have your cluster set up, you can then focus on all the hard parts of IPv6 integration. For example, your BGP routers will need proper configuration for IPv6 peers, and everything in the path will need to allow Neighbor Discovery (NDP) to succeed, or you will fail even the simplest of tests — the ICMP ping. All your containerized applications will also need to support communicating on IPv6 sockets, which tends to be the biggest challenge for applications still migrating from the IPv4 world.

Fortunately, F5 has made the ingress part easier.

How can BIG-IP integrate with your IPv6-only cluster for reverse proxy and security services?

IPv6 support is available for many features of Container Ingress Services (CIS) as well as F5 IPAM Controller to meet your needs for exposing your K8S workloads in an easy and secure way.

BIG-IP admin IP address

In CIS versions prior to 2.6.0, you had to define a hostAliases block to resolve the hostname used in your BIG-IP URL (--bigip-url=bigip01) to its IPv6 address:

hostAliases:
  - ip: "aaa1:bbb2:ccc3:ddd4::100"
    hostnames:
      - bigip01
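For context, a minimal sketch of where that block sits — hostAliases lives under the pod spec of the CIS Deployment, alongside the container that receives the --bigip-url argument (surrounding fields are abbreviated and illustrative):

```yaml
# Placement sketch -- not a complete CIS Deployment manifest.
spec:
  template:
    spec:
      hostAliases:
        - ip: "aaa1:bbb2:ccc3:ddd4::100"
          hostnames:
            - bigip01
      containers:
        - name: k8s-bigip-ctlr
          args:
            - --bigip-url=bigip01   # resolved via the hostAliases entry above
```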

Now you can use either of the formats below directly:

# IPv6 URL with non-standard port
--bigip-url=https://[aaa1:bbb2:ccc3:ddd4::100]:8443

# IPv6 address only
--bigip-url=[aaa1:bbb2:ccc3:ddd4::100]

VirtualServer and TransportServer CRDs

When deploying your custom resource, you simply specify an IPv6 address in virtualServerAddress; this becomes the external IP exposing your workloads.

apiVersion: "cis.f5.com/v1"
kind: TransportServer
metadata:
  name: simple-virtual-l4
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "aaa1:bbb2:ccc3:eee5::200"
  virtualServerPort: 80
  mode: performance
  snat: auto
  pool:
    service: nginx
    servicePort: 8080
    monitor:
      type: tcp
      interval: 10
      timeout: 10
---
 
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: l7-virtual-http
  labels:
    f5cr: "true"
spec:
  # This is an insecure virtual server; use a TLSProfile to secure it.
  # Check out the TLS examples to learn more.
  host: expo.example.com
  virtualServerAddress: "aaa1:bbb2:ccc3:eee5::201"
  virtualServerName: "l7-virtual-http"
  pools:
  - path: /expo
    service: svc-2
    servicePort: 80
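For completeness, the pools above reference backing Services (nginx on port 8080 and svc-2 on port 80) that must already exist in the cluster. A minimal sketch of one of them, with an assumed selector label:

```yaml
# Illustrative backing Service for the TransportServer pool above.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx        # assumed pod label, not from this article
  ports:
    - port: 8080      # matches servicePort in the TransportServer pool
      targetPort: 8080
      protocol: TCP
```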

Service Type LoadBalancer (with F5 IPAM Controller)

You can also simulate the one-click public cloud experience when exposing your workloads by simply declaring your Service to be of type LoadBalancer. The listener IP allocation and configuration are handled automagically for you by the F5 IPAM Controller.

In the F5 IPAM Controller deployment manifest, just specify your IPv6 ranges in the container args, like below:

- '{"Dev":"2001:db8:3::7-2001:db8:3::9","Test":"2001:db8:4::7-2001:db8:4::9"}'
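For context, a sketch of how that range string typically sits among the controller's container args; the surrounding flag values are illustrative assumptions, so check the F5 IPAM Controller documentation for your version:

```yaml
# Placement sketch -- args of the F5 IPAM Controller container.
args:
  - --orchestration=kubernetes   # assumed value for a K8S environment
  - --ip-range='{"Dev":"2001:db8:3::7-2001:db8:3::9","Test":"2001:db8:4::7-2001:db8:4::9"}'
  - --log-level=DEBUG            # assumed; useful while validating allocation
```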

Then deploy your F5 IPAM Controller; if you see your addresses in the logs like below, you're good to go.

2022/03/08 15:34:31 [DEBUG] Created New IPAM Client
2022/03/08 15:34:31 [DEBUG] [STORE] Using IPAM DB file from mount path
2022/03/08 15:34:31 [DEBUG] [STORE] [ipaddress status ipam_label reference]
2022/03/08 15:34:31 [DEBUG] [STORE] 2001:db8:4::7 1 Test ZpTSxCKs0gigByk5
2022/03/08 15:34:31 [DEBUG] [STORE] 2001:db8:4::8 1 Test 650YpEeEBF2H88Z8
2022/03/08 15:34:31 [DEBUG] [STORE] 2001:db8:4::9 1 Test la9aJTZ5Ubqi/2zU

Note: For the CIS configuration, ensure that your CIS deployment manifest defines the parameters below, which are required for the service type LoadBalancer and IPv6 use case.

"--custom-resource-mode=true",
"--ipam=true",
"--enable-ipv6=true",
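For context, these three flags sit alongside the rest of your CIS container args; the BIG-IP URL and partition below are illustrative assumptions:

```yaml
# Placement sketch -- args of the CIS (k8s-bigip-ctlr) container.
args:
  - "--bigip-url=[aaa1:bbb2:ccc3:ddd4::100]"   # assumed admin address
  - "--bigip-partition=kubernetes"             # assumed partition name
  - "--custom-resource-mode=true"
  - "--ipam=true"
  - "--enable-ipv6=true"
```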

Then you can deploy a simple Service resource like below:

apiVersion: v1
kind: Service
metadata:
  annotations:
    cis.f5.com/health: '{"interval": 5, "timeout": 10}'
    cis.f5.com/ipamLabel: Test
  labels:
    app: svc-f5-demo-lb1
  name: svc-f5-demo-lb1
  namespace: default
spec:
  ports:
    - name: svc-lb1-80
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: f5-demo
  type: LoadBalancer

At this point the magic happens: your "EXTERNAL-IP" is populated with an IP address allocated by the F5 IPAM Controller.

# kubectl get svc svc-f5-demo-lb1
NAME              TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
svc-f5-demo-lb1   LoadBalancer   172.19.2.55   2001:db8:4::7   80:32410/TCP   43m

On the BIG-IP you will see the virtual server created as defined in your Service object.

Summary

Moving to a single-stack IPv6 Kubernetes cluster can be difficult and requires a thorough review of all your application components as well as everything that integrates with your cluster.

F5 has ensured that even in a pure IPv6 environment you are covered with an enterprise-grade ingress solution that not only simplifies exposing your workloads, but also provides many security options.

Updated Mar 17, 2022
Version 2.0
