Better together - F5 Container Ingress Services and NGINX Plus Ingress Controller Integration

Introduction

The F5 Container Ingress Services (CIS) can be integrated with the NGINX Plus Ingress Controller (NIC) within a Kubernetes (k8s) environment.

The benefit is getting the best of both worlds: the BIG-IP provides comprehensive L4-L7 security services, while NGINX Plus serves as the de facto standard ingress solution for microservices.

This architecture is depicted below.

The integration is made seamless by the CIS, a k8s pod that listens for events in the cluster and dynamically populates the BIG-IP pool pointing to the NICs as they scale.

A few components need to be stitched together to support this integration, each of which is discussed in detail in the following sections.

NGINX Plus Ingress Controller

Follow this guide (https://docs.nginx.com/nginx-ingress-controller/installation/building-ingress-controller-image/) to build the NIC image.

The NIC can be deployed using the manifests, either as a Deployment or as a DaemonSet. See this guide ( https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/ ).

A sample manifest deploying the NIC as a Deployment is shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      #annotations:
      #  prometheus.io/scrape: "true"
      #  prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      imagePullSecrets:
      - name: abgmbh.azurecr.io
      containers:
      - image: abgmbh.azurecr.io/nginx-plus-ingress:edge
        name: nginx-plus-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        #- name: prometheus
        #  containerPort: 9113
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-plus
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
          - -ingress-class=sock-shop
          #- -v=3 # Enables extensive logging. Useful for troubleshooting.
          #- -report-ingress-status
          #- -external-service=nginx-ingress
          #- -enable-leader-election
          #- -enable-prometheus-metrics

Notice the ‘-ingress-class=sock-shop’ argument: the NIC will only handle Ingress resources annotated with the ingress class ‘sock-shop’. If this argument is omitted, the NIC becomes the default controller for all Ingress resources created.
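The args above also reference a ConfigMap named ‘nginx-config’ and a TLS Secret named ‘default-server-secret’, which must exist before the NIC starts. A minimal sketch of that ConfigMap is below; the tuning keys shown are optional examples, and an empty data section also works.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  # Optional NGINX tuning keys; an empty ConfigMap is also valid
  proxy-connect-timeout: "10s"
  proxy-read-timeout: "10s"
```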

Below shows the counterpart Ingress with the ‘sock-shop’ annotation.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sock-shop-ingress
  annotations:
    kubernetes.io/ingress.class: "sock-shop"
spec:
  tls:
  - hosts:
    - socks.ab.gmbh
    secretName: wildcard.ab.gmbh
  rules:
  - host: socks.ab.gmbh
    http:
      paths:
      - path: /
        backend:
          serviceName: front-end
          servicePort: 80

This Ingress states that if the hostname is socks.ab.gmbh and the path is ‘/’, traffic is sent to a Service named ‘front-end’, which is part of the socks application itself.
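For completeness, the ‘front-end’ Service referenced by this Ingress might look like the following sketch. The namespace, selector, and targetPort are assumptions based on the Sock Shop demo application; adjust them to your deployment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end
  namespace: sock-shop   # assumed namespace of the socks application
spec:
  ports:
    - port: 80
      targetPort: 8079   # assumed container port of the front-end pod
  selector:
    name: front-end      # assumed pod label
```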

The above concludes Ingress configuration with the NIC.

F5 Container Ingress Services

The next step is to leverage the CIS to dynamically populate the BIG-IP pool with the NIC addresses.

Follow this ( https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html ) to deploy the CIS.
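The CIS Deployment below references a Secret named ‘bigip-login’ holding the BIG-IP credentials. It can be created with a manifest along these lines; the username and password values are placeholders to replace with your own.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: bigip-login
  namespace: kube-system
type: Opaque
stringData:
  username: admin        # replace with your BIG-IP username
  password: changeme     # replace with your BIG-IP password
```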

A sample Deployment file is shown below.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      name: k8s-bigip-ctlr
      labels:
        app: k8s-bigip-ctlr
   spec:
     # Name of the Service Account bound to a Cluster Role with the required
     # permissions
     serviceAccountName: bigip-ctlr
     containers:
       - name: k8s-bigip-ctlr
         image: "f5networks/k8s-bigip-ctlr"
         env:
           - name: BIGIP_USERNAME
             valueFrom:
               secretKeyRef:
                 # Replace with the name of the Secret containing your login
                 # credentials
                 name: bigip-login
                 key: username
           - name: BIGIP_PASSWORD
             valueFrom:
               secretKeyRef:
                 # Replace with the name of the Secret containing your login
                 # credentials
                 name: bigip-login
                 key: password
         command: ["/app/bin/k8s-bigip-ctlr"]
         args: [
           # See the k8s-bigip-ctlr documentation for information about
           # all config options
           # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
           "--bigip-username=$(BIGIP_USERNAME)",
           "--bigip-password=$(BIGIP_PASSWORD)",
           "--bigip-url=https://x.x.x.x:8443",
           "--bigip-partition=k8s",
           "--pool-member-type=cluster",
           "--agent=as3",
           "--manage-ingress=false",
           "--insecure=true",
           "--as3-validation=true",
           "--node-poll-interval=30",
           "--verify-interval=30",
           "--log-level=INFO"
           ]
     imagePullSecrets:
       # Secret that gives access to a private docker registry
       - name: f5-docker-images
       # Secret containing the BIG-IP system login credentials
       - name: bigip-login

Notice the following arguments below. They tell the CIS to consume an AS3 declaration to configure the BIG-IP. CCCL (Common Controller Core Library), previously used to orchestrate the F5 BIG-IP, is being removed for the CIS 2.0 release.

'--manage-ingress=false' means the CIS does nothing for Ingress resources defined within k8s; as far as k8s is concerned, NGINX Plus, not the CIS, is the Ingress Controller.

The CIS will create a partition named k8s_AS3 on the BIG-IP, which is used to hold the L4-L7 configuration arising from the AS3 declaration.

Best practice is also to manually create a partition named 'k8s' (in our example), where networking information (e.g., ARP, FDB entries) will be stored.

           "--bigip-url=https://x.x.x.x:8443",
           "--bigip-partition=k8s",
           "--pool-member-type=cluster",
           "--agent=as3",
           "--manage-ingress=false",
           "--insecure=true",
           "--as3-validation=true",

To apply AS3, the declaration is embedded within a ConfigMap, which the CIS picks up and pushes to the BIG-IP.

kind: ConfigMap
apiVersion: v1
metadata:
  name: as3-template
  namespace: kube-system
  labels:
    f5type: virtual-server
    as3: "true"
data:
  template: |
   {
      "class": "AS3",
      "action": "deploy",
      "persist": true,
      "declaration": {
        "class": "ADC",
        "id":"1847a369-5a25-4d1b-8cad-5740988d4423",
        "schemaVersion": "3.16.0",
        "Nginx_IC": {
                "class": "Tenant",
                "Nginx_IC_vs": {
                    "class": "Application",
                    "template": "https",
                    "serviceMain": {
                        "class": "Service_HTTPS",
                        "virtualAddresses": [
                            "10.1.0.14"
                        ],
                        "virtualPort": 443,
                        "redirect80": false,
                        "serverTLS": {
                            "bigip": "/Common/clientssl"
                        },
                        "clientTLS": {
                            "bigip": "/Common/serverssl"
                        },
                        "pool": "Nginx_IC_pool"
                    },
                    "Nginx_IC_pool": {
                        "class": "Pool",
                        "monitors": [
                            "https"
                        ],
                        "members": [
                            {
                                "servicePort": 443,
                                "shareNodes": true,
                                "serverAddresses": []
                            }
                        ]
                    }
                }
            }
        } 
    }

This declaration tells the BIG-IP to create a tenant called ‘Nginx_IC’, a virtual server named ‘Nginx_IC_vs’ and a pool named ‘Nginx_IC_pool’. The CIS will dynamically update serverAddresses with the NIC addresses.
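For illustration, once the NIC pods are running, the pool member entry submitted by the CIS would look roughly like the fragment below; the three pod addresses shown are hypothetical.

```json
"members": [
    {
        "servicePort": 443,
        "shareNodes": true,
        "serverAddresses": [
            "10.244.1.5",
            "10.244.2.7",
            "10.244.3.9"
        ]
    }
]
```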

Now, create a Service to expose the NICs.

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  labels:
    cis.f5.com/as3-tenant: Nginx_IC
    cis.f5.com/as3-app: Nginx_IC_vs
    cis.f5.com/as3-pool: Nginx_IC_pool
spec:
  type: ClusterIP
  ports:
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    app: nginx-ingress

Notice the labels: they match the AS3 declaration, which allows the CIS to populate the NICs' addresses into the correct pool. Also notice the kind of the manifest, ‘Service’: only a Service is created, not an Ingress, as far as k8s is concerned.

On the BIG-IP, the corresponding tenant, virtual server, and pool should now be created, with the pool members dynamically populated with the NIC addresses.

Please note that this article focuses solely on the control plane, that is, how to get the CIS to populate the BIG-IP with the NICs' addresses.

The specific mechanisms for delivering packets from the BIG-IP to the NICs on the data plane are not discussed, as they are decoupled from the control plane. For data-plane specifics, please take a look here ( https://clouddocs.f5.com/containers/v2/ ).

Hope this article helps to lift the veil on some integration mysteries.

Published Mar 05, 2020
Version 1.0
  •  Is there any limitation on the number of F5s specified in --bigip-url=https://x.x.x.x:8443?

     Can we update multiple F5s (20) from the same CIS by specifying different BIG-IP URLs in the same CIS pod?

  •  Just to clarify, are you looking at injecting the same configs (e.g., VIP, pool) into all 20 F5s?

  •  Not 20, but yes, multiple F5s, 8 HA pairs to be specific. Yes, the same configuration.

  • One CIS can only speak to one BIG-IP, but you can have multiple CISs, each talking to its associated BIG-IP.

  • Why does the pool show 5 NGINX controllers? Shouldn't it be 3, per "replicas: 3"?

  • Thanks, I was studying for my F5 cloud expert cert and was curious if I was missing something.