Better together - F5 Container Ingress Services and NGINX Plus Ingress Controller Integration
Introduction
The F5 Container Ingress Services (CIS) can be integrated with the NGINX Plus Ingress Controller (NIC) within a Kubernetes (k8s) environment.
The benefit is getting the best of both worlds: the BIG-IP provides comprehensive L4-L7 security services, while NGINX Plus serves as the de facto standard ingress solution for microservices.
This architecture is depicted below.
The integration is made fluid via the CIS, a k8s pod that listens to events in the cluster and dynamically populates the BIG-IP pool with the NIC addresses as they scale.
A few components need to be stitched together to support this integration, each of which is discussed in detail in the following sections.
NGINX Plus Ingress Controller
Follow this guide (https://docs.nginx.com/nginx-ingress-controller/installation/building-ingress-controller-image/) to build the NIC image.
The NIC can be deployed using the manifests either as a Deployment or as a DaemonSet. See the installation guide ( https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/ ).
A sample manifest deploying the NIC as a Deployment is shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
     #annotations:
       #prometheus.io/scrape: "true"
       #prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      imagePullSecrets:
        - name: abgmbh.azurecr.io
      containers:
        - image: abgmbh.azurecr.io/nginx-plus-ingress:edge
          name: nginx-plus-ingress
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
           #- name: prometheus
             #containerPort: 9113
          securityContext:
            allowPrivilegeEscalation: true
            runAsUser: 101 #nginx
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          args:
            - -nginx-plus
            - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
            - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
            - -ingress-class=sock-shop
           #- -v=3 # Enables extensive logging. Useful for troubleshooting.
           #- -report-ingress-status
           #- -external-service=nginx-ingress
           #- -enable-leader-election
           #- -enable-prometheus-metrics
Notice the ‘-ingress-class=sock-shop’ argument: it means the NIC will only act on an Ingress that is annotated with ‘sock-shop’. If this argument is omitted, the NIC becomes the default for all Ingress resources created.
The counterpart Ingress with the ‘sock-shop’ annotation is shown below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sock-shop-ingress
  annotations:
    kubernetes.io/ingress.class: "sock-shop"
spec:
  tls:
    - hosts:
        - socks.ab.gmbh
      secretName: wildcard.ab.gmbh
  rules:
    - host: socks.ab.gmbh
      http:
        paths:
          - path: /
            backend:
              serviceName: front-end
              servicePort: 80
This Ingress says: if the hostname is socks.ab.gmbh and the path is ‘/’, send traffic to a Service named ‘front-end’, which is part of the Sock Shop application itself.
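As a quick sanity check, the two manifests above can be applied with kubectl and verified; the filenames below are placeholders for wherever you saved them:

# Deploy the NGINX Plus Ingress Controller
kubectl apply -f nginx-plus-ingress-deployment.yaml
kubectl get pods -n nginx-ingress

# Create the sock-shop Ingress and confirm it was picked up
kubectl apply -f sock-shop-ingress.yaml
kubectl get ingress sock-shop-ingress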
The above concludes the Ingress configuration with the NIC.
F5 Container Ingress Services
The next step is to leverage the CIS to dynamically populate the BIG-IP pool with the NIC addresses.
Follow this ( https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html ) to deploy the CIS.
A sample Deployment manifest is shown below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      name: k8s-bigip-ctlr
      labels:
        app: k8s-bigip-ctlr
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      serviceAccountName: bigip-ctlr
      containers:
        - name: k8s-bigip-ctlr
          image: "f5networks/k8s-bigip-ctlr"
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-login
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-login
                  key: password
          command: ["/app/bin/k8s-bigip-ctlr"]
          args: [
            # See the k8s-bigip-ctlr documentation for information about
            # all config options
            # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
            "--bigip-username=$(BIGIP_USERNAME)",
            "--bigip-password=$(BIGIP_PASSWORD)",
            "--bigip-url=https://x.x.x.x:8443",
            "--bigip-partition=k8s",
            "--pool-member-type=cluster",
            "--agent=as3",
            "--manage-ingress=false",
            "--insecure=true",
            "--as3-validation=true",
            "--node-poll-interval=30",
            "--verify-interval=30",
            "--log-level=INFO"
            ]
      imagePullSecrets:
        # Secret that gives access to a private docker registry
        - name: f5-docker-images
        # Secret containing the BIG-IP system login credentials
        - name: bigip-login
Notice the following arguments. They tell the CIS to consume an AS3 declaration to configure the BIG-IP. (According to product management, CCCL (Common Controller Core Library), previously used to orchestrate the F5 BIG-IP, is being removed this sprint for the CIS 2.0 release.)

"--bigip-url=https://x.x.x.x:8443",
"--bigip-partition=k8s",
"--pool-member-type=cluster",
"--agent=as3",
"--manage-ingress=false",
"--insecure=true",
"--as3-validation=true",

'--manage-ingress=false' means the CIS does not act on Ingress resources defined within k8s; as far as k8s is concerned, the Ingress Controller is NGINX Plus, not the CIS.
The CIS will create a partition named k8s_AS3 on the BIG-IP; this partition holds the L4-L7 configuration derived from the AS3 declaration.
Best practice is to also manually create a partition named 'k8s' (in our example), where networking information (e.g., ARP and FDB entries) will be stored.
To apply AS3, the declaration is embedded within a ConfigMap, which the CIS picks up and sends to the BIG-IP.
kind: ConfigMap
apiVersion: v1
metadata:
  name: as3-template
  namespace: kube-system
  labels:
    f5type: virtual-server
    as3: "true"
data:
  template: |
    {
        "class": "AS3",
        "action": "deploy",
        "persist": true,
        "declaration": {
            "class": "ADC",
            "id": "1847a369-5a25-4d1b-8cad-5740988d4423",
            "schemaVersion": "3.16.0",
            "Nginx_IC": {
                "class": "Tenant",
                "Nginx_IC_vs": {
                    "class": "Application",
                    "template": "https",
                    "serviceMain": {
                        "class": "Service_HTTPS",
                        "virtualAddresses": [
                            "10.1.0.14"
                        ],
                        "virtualPort": 443,
                        "redirect80": false,
                        "serverTLS": {
                            "bigip": "/Common/clientssl"
                        },
                        "clientTLS": {
                            "bigip": "/Common/serverssl"
                        },
                        "pool": "Nginx_IC_pool"
                    },
                    "Nginx_IC_pool": {
                        "class": "Pool",
                        "monitors": [
                            "https"
                        ],
                        "members": [
                            {
                                "servicePort": 443,
                                "shareNodes": true,
                                "serverAddresses": []
                            }
                        ]
                    }
                }
            }
        }
    }
This declaration tells the BIG-IP to create a tenant named ‘Nginx_IC’, a virtual server under the application ‘Nginx_IC_vs’, and a pool named ‘Nginx_IC_pool’. The CIS will dynamically update serverAddresses with the NIC addresses.
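Assuming the declaration above is saved to a file (the filename is arbitrary), the ConfigMap is applied like any other k8s resource; the CIS discovers it via the f5type and as3 labels:

# Apply the AS3 ConfigMap and confirm it exists in kube-system
kubectl apply -f as3-template.yaml
kubectl get configmap as3-template -n kube-system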
Now, create a Service to expose the NICs:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  labels:
    cis.f5.com/as3-tenant: Nginx_IC
    cis.f5.com/as3-app: Nginx_IC_vs
    cis.f5.com/as3-pool: Nginx_IC_pool
spec:
  type: ClusterIP
  ports:
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    app: nginx-ingress
Notice the labels: they match the AS3 declaration, which allows the CIS to populate the NIC addresses into the correct pool. Also notice the kind of the manifest, ‘Service’: only a Service is created, not an Ingress, as far as k8s is concerned.
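To confirm the Service selector actually matches the NIC pods (and therefore that the CIS has addresses to push into Nginx_IC_pool), the Endpoints object and the CIS log can be inspected; the names and namespaces below follow the sample manifests:

# The endpoint IPs listed here are the NIC pod addresses the CIS sends to the BIG-IP
kubectl get endpoints nginx-ingress -n nginx-ingress
# The CIS log shows the AS3 declarations it posts to the BIG-IP
kubectl logs deploy/k8s-bigip-ctlr-deployment -n kube-system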
On the BIG-IP, the following should be created.
The end product is below.
Please note that this article focuses solely on the control plane, that is, how to get the CIS to populate the BIG-IP with the NIC addresses.
The specific mechanisms for delivering packets from the BIG-IP to the NICs on the data plane are not discussed, as they are decoupled from the control plane. For data plane specifics, please take a look here ( https://clouddocs.f5.com/containers/v2/ ).
Hope this article helps to lift the veil on some integration mysteries.
- kunalpuriii (Altocumulus)
Hello Lief Zimmerman
Thanks for posting this document.
I am working on the same topology; however, I have not gotten it working so far.
Would it be possible for you to help?
Thanks
Kunal
- I recommend posting your specific question in our questions section. That is the most likely way to get the most eyes on your problem.
You might consider adding the URL of this article as a reference in your question, and you may even @mention the author of this article in your question. Often, someone in the community (either Chris or another) will step in and offer guidance. Failing that, please reach out to your account management team.
Hope that helps.
- Chris_Zhang (Ret. Employee)
Hey Kunal,
Regarding username/password, you need to create a Secret within k8s and reference it as environment variables in the YAML file. The references are already in place, so please create the Secret per step 3 of this article ( https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html#kctlr-initial-setup-bigip ).
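For reference, a minimal sketch of that step (replace the credentials with your own BIG-IP login):

# Store the BIG-IP credentials as a Secret in kube-system, where the CIS runs
kubectl create secret generic bigip-login -n kube-system --from-literal=username=admin --from-literal=password=<your-bigip-password>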
For "--bigip-url=<ip_address-or-hostname>", if your BIG-IP has a single interface, the management by default is on port 8443. Use the address that you use to administer the appliance.
You do not need to add anything to the ConfigMap as related to your question. If you follow the referenced article, all the prerequisites should be setup and ready to go.
--insecure=true means the CIS will not validate the certificate presented by the BIG-IP. All traffic is still SSL encrypted.
Install a recent version of f5-appsvcs on the BIG-IP; otherwise it won't understand the AS3 declaration embedded within the ConfigMap.
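A quick way to check whether f5-appsvcs (AS3) is installed, and which version, is to query its info endpoint on the BIG-IP management address; the credentials and address below are placeholders:

# Returns the installed AS3 version, or an error if f5-appsvcs is not installed
curl -sku admin:<password> https://<bigip-mgmt-address>/mgmt/shared/appsvcs/info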
Once the CIS is able to communicate with the BIG-IP, the AS3 within the ConfigMap will set up everything in the BIG-IP. You do not have to configure anything manually inside the BIG-IP.
The integration is meant for the NGINX Plus Ingress Controller; the open source NGINX Ingress Controller might work as well, but I have not tested it.
Thanks,
Chris
- chongwp (Nimbostratus)
My "k8s-bigip-ctlr" is having this error in its log.
> 2020-03-20T11:40:39.934958194Z 2020/03/20 11:40:39 [ERROR] Error parsing ConfigMap kube-system_as3-template
Do I need a more recent version of f5-appsvcs on the BIG-IP?
- Chris_Zhang (Ret. Employee)
This message is likely cosmetic; I see those messages as well. I will ask internally to see what is causing them.
- kunalpuriii (Altocumulus)
I have it set up and working now; the issue is with the AS3 template. I am only able to create a VIP named "serviceMain".
If I have multiple clusters connected to the F5, how will I be able to create multiple VIPs?
I have tried changing the name of the VIP, but it is not working.
Can you suggest the correct AS3 configuration for a flexible VIP name?
- Chris_Zhang (Ret. Employee)
With AS3, the name of the virtual server is fixed to 'serviceMain', but you can put the name in the 'Description' field.
- kunalpuriii (Altocumulus)
Hello Chris, thanks for responding to the queries. In the above example, ClusterIP is used, which advertises the NGINX Ingress Controller IP to the F5. Can you please confirm what the data plane forwarding would be? Is it like below:
F5 --> k8s node --> Service --> pod where NGINX is running, or is it different?
Thanks again for your help
- Chris_Zhang (Ret. Employee)
When you use ClusterIP, the BIG-IP needs to be able to deliver traffic to that IP space. If you use Calico (BGP), that traffic is routed. If you use Flannel (VXLAN), that traffic is sent inside the VXLAN tunnel.
With BGP, the route table will have the next hop set to the k8s nodes. Traffic is routed to the k8s nodes, and those nodes further route the traffic to the NGINX IC pods.
With VXLAN, the BIG-IP establishes a tunnel with the k8s nodes at the other end. Traffic is sent inside the tunnel, arrives at the k8s nodes, is taken out of the tunnel, and is delivered to the NGINX IC pods.
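For the Flannel (VXLAN) case, the BIG-IP side roughly involves a VXLAN profile, a tunnel, and a self IP inside the cluster's overlay network. A sketch with placeholder addresses (see the F5 CIS cluster-mode documentation for the authoritative steps):

# VXLAN profile matching Flannel's defaults (port 8472, no flooding)
tmsh create net tunnels vxlan fl-vxlan port 8472 flooding-type none
# Tunnel endpoint; local-address is a BIG-IP self IP reachable from the k8s nodes
tmsh create net tunnels tunnel flannel_vxlan key 1 profile fl-vxlan local-address 10.1.1.1
# Self IP inside the flannel overlay so the BIG-IP can reach pod/Cluster IPs
tmsh create net self 10.244.20.1/16 allow-service none vlan flannel_vxlan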
- kunalpuriii (Altocumulus)
Hello
I hope you are doing well.
Just wanted to check if this solution still works. I am trying to recreate the environment, but it is not working; it is giving me the error below:
2020/06/08 20:47:18 [ERROR] [AS3] Response from BIG-IP: code: ERR_REQUEST_FAILED --- tenant:Nginx_IC --- message: declaration failed
2020/06/08 20:47:18 [ERROR] [AS3] Response from BIG-IP: code: 200 --- tenant:k8s-AS3_AS3 --- message: no change
I have tried this setup with CIS 2.0.0 and f5-appsvcs 3.20.0, and also with CIS 1.14.0 and f5-appsvcs 3.17.1. I am using the same working configuration from March, but I am still getting the error.
nginx SVC config
root@master-1:~# kubectl describe svc nginx-ingress2 -n nginx-ingress
Name: nginx-ingress2
Namespace: nginx-ingress
Labels: cis.f5.com/as3-app=Nginx_vs
cis.f5.com/as3-pool=Nginx_IC_pool
cis.f5.com/as3-tenant=Nginx_IC
Annotations: <none>
Selector: app=nginx-ingress
Type: ClusterIP
IP: 10.111.160.103
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: 10.1.2.191:443
Session Affinity: None
Events: <none>
Configmap for CIS and F5 integration
root@master-1:~# kubectl describe configmap nginx-as3 -n kube-system
Name: nginx-as3
Namespace: kube-system
Labels: as3=true
f5type=virtual-server
Annotations: <none>
Data
====
template:
----
{
"class": "AS3",
"action": "deploy",
"persist": true,
"declaration": {
"class": "ADC",
"schemaVersion": "3.13.0",
"id": "1847a369-5a25-4d1b-8cad-5740988d4423",
"label": "APP Template",
"remark": "HTTP application",
"Nginx_IC": {
"class": "Tenant",
"Nginx_IC_vs": {
"class": "Application",
"template": "generic",
"app_80_vs": {
"class": "Service_HTTP",
"remark": "app",
"virtualAddresses": [
"10.165.36.141"
],
"virtualPort": 80,
"profileTCP": {
"bigip": "/Common/f5-tcp-lan"
},
"pool": "Nginx_IC_pool"
},
"Nginx_IC_pool": {
"class": "Pool",
"members": [
{
"servicePort": 80,
"shareNodes": true,
"serverAddresses": []
}
]
}
}
}
}
}
Events: <none>
CIS:
root@master-1:~# kubectl describe pod k8s-bigip-ctlr-deployment-6759c46587-tdk79 -n kube-system
Name: k8s-bigip-ctlr-deployment-6759c46587-tdk79
Namespace: kube-system
Priority: 0
Node: worker-2/192.168.5.22
Start Time: Mon, 08 Jun 2020 20:40:16 +0000
Labels: app=k8s-bigip-ctlr
pod-template-hash=6759c46587
Annotations: <none>
Status: Running
IP: 10.1.2.192
IPs:
IP: 10.1.2.192
Controlled By: ReplicaSet/k8s-bigip-ctlr-deployment-6759c46587
Containers:
k8s-bigip-ctlr:
Container ID: docker://4f4bfd89700af786bfa3920e5287160003a4500370c4e133c159cc33c62ed984
Image: f5networks/k8s-bigip-ctlr:1.14.0
Image ID: docker-pullable://f5networks/k8s-bigip-ctlr@sha256:25bdfc947ed4cdd172a68e37c51dbaa8ca87fcbc4d894622b42a260755a2bf68
Port: <none>
Host Port: <none>
Command:
/app/bin/k8s-bigip-ctlr
Args:
--bigip-username=$(BIGIP_USERNAME)
--bigip-password=$(BIGIP_PASSWORD)
--bigip-url=https://192.168.5.210
--bigip-partition=k8s-AS3
--pool-member-type=cluster
--agent=as3
--manage-ingress=false
--insecure=true
--as3-validation=true
--node-poll-interval=30
--verify-interval=30
--log-level=INFO
State: Running
Started: Mon, 08 Jun 2020 20:40:20 +0000
Ready: True
Restart Count: 0
Environment:
BIGIP_USERNAME: <set to the key 'username' in secret 'bigip-login'> Optional: false
BIGIP_PASSWORD: <set to the key 'password' in secret 'bigip-login'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from bigip-ctlr-token-r6rvn (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
bigip-ctlr-token-r6rvn:
Type: Secret (a volume populated by a Secret)
SecretName: bigip-ctlr-token-r6rvn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned kube-system/k8s-bigip-ctlr-deployment-6759c46587-tdk79 to worker-2
Normal Pulling 17m kubelet, worker-2 Pulling image "f5networks/k8s-bigip-ctlr:1.14.0"
Normal Pulled 17m kubelet, worker-2 Successfully pulled image "f5networks/k8s-bigip-ctlr:1.14.0"
Normal Created 17m kubelet, worker-2 Created container k8s-bigip-ctlr
Normal Started 17m kubelet, worker-2 Started container k8s-bigip-ctlr
Any help is greatly appreciated.
Thanks
Kunal