BIG-IP in Tanzu Kubernetes Grid in an NSX-T network
Introduction

BIG-IP in Tanzu Kubernetes Grid provides an Ingress solution implemented with a single tier of load balancing. Typically, Ingress requires an in-cluster Ingress Controller and an external load balancer. By using BIG-IP, Ingress services are greatly simplified and performance improves by removing one hop, while all of BIG-IP's advanced load balancing functionalities and security modules are exposed.

Tanzu Kubernetes Grid is a Kubernetes distribution supported by VMware that comes with the choice of two CNIs:

- Antrea: Geneve overlay based.
- Calico: BGP based, no overlay networking.

When using Antrea in NSX-T environments, Antrea runs its own Geneve overlay on top of NSX-T's own Geneve overlay networking. This gives communications in the TKG cluster the overhead of two encapsulations, as can be seen in the next Wireshark capture. Modern NICs are able to off-load Geneve encapsulation from the CPUs, but they are not able to cope with the double encapsulation that occurs when using Antrea+NSX-T. In this blog we describe how to set up BIG-IP with TKG's Calico CNI, which doesn't have such overhead.

The article is divided into the following sections:

- Deploying a TKG cluster with built-in Calico
- Deploying BIG-IP in TKG with NSX-T
- Configuring BIG-IP and TKG to peer with Calico (BGP)
- Configuring BIG-IP to handle Kubernetes workloads
- Verifying the resulting configuration
- Alternative topologies and multi-tenancy considerations
- Closing notes

All the configuration files referenced in this blog can be found in the repository https://github.com/f5devcentral/f5-bd-tanzu-tkg-bigip

Deploying a TKG cluster with built-in Calico

Deploying a TKG cluster is as simple as running kubectl with a definition like the one sketched further below. Note that the only thing required to choose between Antrea and Calico is to specify the cni: name: field. It is perfectly fine to run clusters with different CNIs in the same environment. At the time of this writing (H1 2021), Calico v3.11 is the version included in Kubernetes v1.18. As you can see from the definition, we will be creating a small cluster for testing purposes. It can be scaled up or down later on as desired by re-applying an updated TKG cluster definition.

Deploying BIG-IP in TKG with NSX-T

When a TKG cluster is deployed in NSX-T, the Tanzu Kubernetes Grid Service automatically creates the necessary networking configuration. Amongst other things, this includes creating a T1 Gateway (also known as DLR) named vnet-domain-c8:<uuid>-<namespace>-<name>-vnet-rtr and a subnet named vnet-domain-c8:<uuid>-<namespace>-<name>-vnet-0 where the Kubernetes nodes are attached. This is shown in the next screenshot, where we can see that the subnet is using the range 10.244.1.113/28. The BIG-IPs will be part of this network as if they were another Kubernetes node. Additionally, we will create another segment named VIP in the same T1 DLR to keep all TKG resources under the same network leaf (this can be customized). As the name suggests, this VIP segment is used for exposing the Ingress services implemented in the BIG-IPs. This is shown in the next Node's view figure. The BIG-IPs will additionally have the regular management and HA segments, which are not shown for clarity. Following regular BIG-IP rules, these segments should be outside the data plane path and the HA segment doesn't need to be connected to any T1.
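For reference, the TanzuKubernetesCluster definition mentioned in the previous section could look like the following minimal sketch. The distribution version, VM classes, storage class and worker count are assumptions carried over from a similar environment; the relevant part for this article is the settings.network.cni.name field set to calico:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg2
  namespace: tkg2
spec:
  distribution:
    version: v1.18.15+vmware.1-tkg.1.600e412
  topology:
    controlPlane:
      count: 1
      class: best-effort-medium
      storageClass: vsan-default-storage-policy
    workers:
      count: 3                   # small test cluster; scale up/down by re-applying
      class: best-effort-medium
      storageClass: vsan-default-storage-policy
  settings:
    network:
      cni:
        name: calico             # antrea is the default; calico avoids the double Geneve overlay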
The IP addresses used by the BIG-IPs in the Nodes segment (10.244.1.124, .125 and .126) need to be allocated in NSX-T so that they are not reused by Kubernetes nodes when scaling the TKG cluster. This is shown in the next figure. It is done by first logging in to the NSX-T manager > Networking > IP Address Pools, where we will find the subnet allocated to our tkg2-tkg2 cluster. In this screen we can obtain the API path to operate with it. In the figure we show the use of Postman to make the IP address allocation with an API request, more precisely a PUT policy/api/v1/<API path>/ip-allocations/<name of allocation> request indicating in the body the desired IP to allocate. In this link at code.vmware.com you can find the full details of this API call. The names of the IP allocations are not relevant; in this case the names I chose are bigip-floating, bigip1 and bigip2 respectively.

Configuring BIG-IP and TKG to peer with Calico (BGP)

In the BIG-IP we configure the VLANs and Self IPs as usual. The only additional consideration is that, since we are going to use Calico, we have to allow BGP communication (TCP port 179) in the port lockdown settings of the TKG Self IPs (non-floating only), as shown next. Next we enable BGP in Networking -> Route Domains -> 0 (the existing route domain), as shown next. At this point we can configure BGP using the imish command line, which gives access to the dynamic routing protocol configuration. The figure above shows the configuration for BIG-IP1; only the router-id needs to be changed to apply it to BIG-IP2 (a comparable configuration is sketched further below). To finish the Calico configuration we have to instruct the TKG cluster to peer with the BIG-IPs. This is done with the following Kubernetes resources:

kubectl apply -f - <<EOF
kind: BGPPeer
apiVersion: crd.projectcalico.org/v1
metadata:
  name: bigip1
spec:
  peerIP: 10.244.1.125
  asNumber: 64512
---
kind: BGPPeer
apiVersion: crd.projectcalico.org/v1
metadata:
  name: bigip2
spec:
  peerIP: 10.244.1.126
  asNumber: 64512
EOF

As you might have noticed, we are not using BGP passwords because Calico v3.17+ is needed for that feature. In any case, the Nodes network is protected by default by TKG's firewall rules in the T1 Gateway. Finally, we verify that all the routes are advertised as expected. This verification has to be done in both BIG-IPs. Note that it is expected to see the next hop 0.0.0.0 for the SNAT range. In the verification we can see a /26 prefix for each node (these prefixes are created by Calico on demand as more PODs are created) and a /24 prefix for the SNAT range. These are shown next as a picture. The network 100.128.0.0/24 has been chosen semi-arbitrarily: it is a range after the POD range that we indicated at cluster creation. This range will only be seen within the iBGP mesh (the PODs). It is best to use a range not used at all within NSX-T to avoid any possible IP range clashes. It is worth remarking that the SNAT Pool range is used for VIP-to-POD traffic, whilst for health monitoring the BIG-IP uses the Self IPs in the Kubernetes Nodes segment.

Configuring BIG-IP to handle Kubernetes workloads

BIG-IP plugs into the Kubernetes API by means of Container Ingress Services (CIS). We deploy one CIS POD per BIG-IP. Each CIS watches Kubernetes events and, when a configuration is applied or a Deployment is scaled, it updates the BIG-IP's configuration.
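For reference, the imish BGP configuration shown in the figure for BIG-IP1 could look like the following minimal sketch (the figure itself is not reproduced here). The Kubernetes node neighbor addresses are hypothetical examples; in practice there is one neighbor statement per node in the Nodes segment, and on BIG-IP2 only the router-id changes:

# From the TMOS shell, after enabling BGP on route domain 0, run "imish" and then:
enable
configure terminal
router bgp 64512
 bgp router-id 10.244.1.125
 ! one neighbor per Kubernetes node (example addresses, adjust to your node IPs)
 neighbor calico-nodes peer-group
 neighbor calico-nodes remote-as 64512
 neighbor 10.244.1.114 peer-group calico-nodes
 neighbor 10.244.1.115 peer-group calico-nodes
end
write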
CIS works with BIG-IP appliances, chassis or Virtual Edition and exposes BIG-IP's advanced services in the Kubernetes API using standard Ingress resources as well as Custom Resource Definitions (CRDs). Each CIS instance works independently on each BIG-IP. This is good for redundancy purposes, but at the same time it makes the two BIG-IPs believe that they are out of sync with each other, because the CIS instances update the tkg partition of each BIG-IP independently. This behavior is only cosmetic; the BIG-IP cluster failover mechanisms (up to 8 BIG-IPs) are independent of this.

The next steps are to configure CIS in Tanzu Kubernetes Grid, which is the same as with any regular Kubernetes with Calico. We will install CIS using the Helm installer:

kubectl create secret generic bigip1-login -n kube-system --from-literal=username=admin --from-literal=password=<password>
kubectl create secret generic bigip2-login -n kube-system --from-literal=username=admin --from-literal=password=<password>

Add the CIS chart repository in Helm using the following command:

helm repo add f5-stable https://f5networks.github.io/charts/stable

Create values-bigip<unit>.yaml for each BIG-IP as follows:

bigip_login_secret: bigip1-login
rbac:
  create: true
serviceAccount:
  create: true
  name: k8s-bigip1-ctlr
# This namespace is where the Controller lives;
namespace: kube-system
args:
  # See https://clouddocs.f5.com/containers/latest/userguide/config-parameters.html
  # NOTE: helm has difficulty with values using `-`; `_` are used for naming
  # and are replaced with `-` during rendering.
  bigip_url: 192.168.200.14
  bigip_partition: tkg
  default_ingress_ip: 10.106.32.100
  # Use the following settings if you want to restrict CIS to specific namespaces
  # namespace:
  # namespace_label:
  pool_member_type: cluster
  # Trust default BIG-IP's self-signed TLS certificate
  insecure: true
  # Force using the SNAT pool
  override-as3-declaration: kube-system/bigip-snatpool
  log-as3-response: true
image:
  # Use the tag to target a specific version of the Controller
  user: f5networks
  repo: k8s-bigip-ctlr
  pullPolicy: Always
resources: {}
version: latest

Note that in the above values file the following values are per BIG-IP unit:

- Login Secret
- ServiceAccount
- Management IP

The values files above indicate that we are going to use two resources:

- A partition named "tkg" in BIG-IP.
- An AS3 override ConfigMap to inject custom BIG-IP configurations on top of a regular Ingress declaration. In this case we use this feature to force all the traffic to use a SNAT pool.
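The bigip-snatpool ConfigMap referenced above is available in the repository linked at the beginning of this article. Purely as an illustration of the override-as3-declaration format, a sketch could look like the following; the application name and virtual server name below are hypothetical placeholders that must match whatever names CIS generates in the tkg partition, and the SNAT addresses are just examples from the 100.128.0.0/24 range discussed earlier:

kind: ConfigMap
apiVersion: v1
metadata:
  name: bigip-snatpool
  namespace: kube-system
data:
  template: |
    {
      "declaration": {
        "tkg": {
          "Shared": {
            "class": "Application",
            "template": "shared",
            "SNAT_tkg": {
              "class": "SNAT_Pool",
              "snatAddresses": ["100.128.0.1", "100.128.0.2"]
            },
            "ingress_10_106_32_100_443": {
              "snat": { "use": "SNAT_tkg" }
            }
          }
        }
      }
    }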
Next, create the BIG-IP partition indicated in the Helm values file:

root@(bigip1)(cfg-sync In Sync)(Standby)(/Common)(tmos)# create auth partition tkg
root@(bigip1)(cfg-sync Changes Pending)(Standby)(/Common)(tmos)# run cm config-sync to-group

Create a ConfigMap to apply our custom SNAT pool:

kubectl apply -n kube-system -f bigip-snatpool.yaml

Do a Helm chart installation for each BIG-IP unit, following this pattern:

helm install -n kube-system -f values-bigip<unit>.yaml --name-template bigip<unit> f5-stable/f5-bigip-ctlr

In a 2 BIG-IP cluster that is:

helm install -n kube-system -f values-bigip1.yaml --name-template bigip1 f5-stable/f5-bigip-ctlr
helm install -n kube-system -f values-bigip2.yaml --name-template bigip2 f5-stable/f5-bigip-ctlr

which will result in the following for each BIG-IP:

$ helm -n kube-system ls
NAME    NAMESPACE    REVISION  UPDATED                                STATUS    CHART          APP VERSION
bigip1  kube-system  1         2021-06-09 15:19:20.651138 +0200 CEST  deployed  f5-bigip-ctlr
bigip2  kube-system  1         2021-06-10 10:01:57.902215 +0200 CEST  deployed  f5-bigip-ctlr

Configuring a Kubernetes Ingress Service

We are ready to deploy Ingress services. In this case we will deploy a single Ingress, named cafe.tkg.bd.f5.com, which will perform TLS termination and send the traffic to the applications tea and coffee depending on the URL requested. This is shown in the next figure. It is deployed with the following commands:

kubectl create ns cafe
kubectl apply -n cafe -f cafe-rbac.yaml
kubectl apply -n cafe -f cafe-svcs.yaml
kubectl apply -n cafe -f cafe-ingress.yaml

We are going to describe the Ingress definition cafe-ingress.yaml. CIS supports a wide range of annotations for customizing Ingress services. You can find detailed information in the supported annotations reference page, but when these are large the Ingress definitions can become hard to maintain. BIG-IP's solution to the Ingress resource's limited schema capabilities is the AS3 override ConfigMap mechanism. In our configuration we use this mechanism to create a SNAT Pool that we apply to CIS's default Ingress VIP. This is shown next: Given that AS3 override customizations are applied outside the Ingress definitions, by means of an overlay ConfigMap, we avoid having Ingress definitions with huge annotation sections. In this blog we have implemented the health checks with annotations, but these could have been implemented with the AS3 override ConfigMap mechanism as well. The AS3 override ConfigMap can be used to define any advanced service configuration which is possible with any module of BIG-IP, not only LTM. Please check this link for more information about AS3 automation.

Verifying the resulting configuration

At this point we should see the following Kubernetes resources: And in the BIG-IP UI we will see the objects shown next in the Network Map section: But in order to reach the VIPs we need to add an NSX-T firewall rule in TKG's T1 gateway. This is shown next: After the rule has been applied we can run a curl command to perform an end-to-end validation (an example is sketched further below).

Alternative topologies and multi-tenancy considerations

In this blog post we have shown a topology where each cluster and its VIP segment are contained within the scope of a T1 Gateway. A single BIG-IP cluster can be used for several clusters just by using more interfaces. We could also use a shared VIP network directly connected to the T0 Gateway as shown next. Note that in the above example there will be at least one CIS POD for each BIG-IP/cluster combination.
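Once the T1 firewall rule is in place, the end-to-end validation mentioned above could be done with a curl command along these lines; the VIP address is the default_ingress_ip from the Helm values file, and the CA certificate file name is an assumption for this example:

$ curl --cacert ca.crt \
       --resolve cafe.tkg.bd.f5.com:443:10.106.32.100 \
       https://cafe.tkg.bd.f5.com/coffee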
Regarding the note above that there will be at least one CIS POD for each BIG-IP/cluster combination: we mean "at least one" because it is also possible to have multiple CIS instances per TKG cluster. CIS can be configured to watch a specific set of namespaces (possibly selected using labels) and to own a specific partition in the BIG-IP. This is shown next.

Closing notes

Before deleting a TKG cluster we should delete the NSX-T resources created when integrating the BIG-IP. These are:

- The TKG NSX-T segments from the BIG-IPs (disconnecting them is not enough).
- The IP allocations.
- The T1 Gateway Firewall rules, if these have been created.

We have seen how easy it is to use TKG's Calico CNI and take advantage of its reduced overlay overhead. We've also seen how to configure a BIG-IP cluster in TKG to provide a simpler, higher-performance single-tier Ingress Controller. We have only shown the use of CIS with BIG-IP's LTM load balancing module and the standard Ingress resource type. We've also seen how to extend the Ingress resource's limited capabilities in a manageable fashion by using the AS3 override mechanism while also reducing the annotations required. It is also worth remarking on the additional CRDs that CIS provides besides the standard Ingress resource type. The possibilities are limitless: any BIG-IP configuration or module that you are used to using for VM or bare-metal workloads can be applied to containers through CIS. We hope that you enjoyed this blog. We look forward to your comments.

Deploying NGINXplus with AppProtect in Tanzu Kubernetes Grid
Introduction

Tanzu Kubernetes Grid (aka TKG) is VMware's main Kubernetes offering. Although Tanzu Kubernetes Grid is a certified conformant Kubernetes offering, the different Kubernetes offerings can be customized in different ways. In the case of TKG, a remarkable feature is the use of Pod Security Policies by default. TKG clusters are very easily spun up in either public or private clouds by means of creating a single declaration YAML file such as the following:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg1
  namespace: tkg1
spec:
  distribution:
    version: v1.18.15+vmware.1-tkg.1.600e412
  topology:
    controlPlane:
      count: 1
      class: best-effort-medium
      storageClass: vsan-default-storage-policy
    workers:
      count: 1
      class: best-effort-medium
      storageClass: vsan-default-storage-policy

As you can see from the schema, a TKG cluster is deployed just like another Kubernetes resource. How does it work? In the case of public clouds, these TanzuKubernetesCluster resources are instantiated from a bootstrap cluster named "management Kubernetes cluster", whilst when using vSphere with Tanzu, the TanzuKubernetesCluster resources are instantiated from vSphere with Tanzu's supervisor cluster.

In this blog post we will show an example from start to end:

- Creating a wildcard certificate from a custom CA with easy-rsa.
- Enabling the Harbor registry.
- Installing NGINXplus with AppProtect.
- Using NGINXplus Ingress Controller without AppProtect.
- Adding AppProtect to an Ingress resource.

AppProtect is an NGINXplus module for WAF and Bot protection based on market-leading F5 BIG-IP ASM. AppProtect provides enhanced capabilities and performance for those who require more than what ModSecurity provides.

We will finish with two relevant considerations:

- Updating the NGINXplus Ingress Controller using Helm. This is used, for example, for scaling out NGINXplus and hence improving the overall performance.
- Using NGINXplus alongside other Ingress Controllers (such as Contour).

Prerequisites

You need an NGINXplus license, which can be retrieved from https://www.nginx.com/free-trial-request-nginx-ingress-controller/. This license is in practice a cert/key pair with the file names nginx-repo.{crt,key} referenced later on. The following software needs to be present on your machine:

- Docker v18.09+
- GNU Make
- git
- Helm3
- OpenSSL
- https://github.com/OpenVPN/easy-rsa.git

Create a wildcard certificate with easy-rsa

In the next steps we will create a Certificate Authority (CA) and, from it, a wildcard certificate/key pair which will be loaded into Kubernetes as a TLS secret. This wildcard certificate will be used by all the services which will be exposed through Ingress. Retrieve easy-rsa and initialize a CA (output summarized):

$ git clone https://github.com/OpenVPN/easy-rsa.git
$ cd easyrsa3/
$ ./easyrsa init-pki
$ ./easyrsa build-ca

Generate the wildcard key/cert pair (output summarized):

$ ./easyrsa gen-req wildcard
Common Name (eg: your user, host, or server name) [wildcard]:*.tkg.bd.f5.com

Keypair and certificate request completed.
Your files are:
req: /Users/alonsocamaro/Documents/VMware-Tanzu/tanzu/easy-rsa/easyrsa3/pki/reqs/wildcard.req
key: /Users/alonsocamaro/Documents/VMware-Tanzu/tanzu/easy-rsa/easyrsa3/pki/private/wildcard.key

$ ./easyrsa sign-req server wildcard

Request subject, to be signed as a server certificate for 825 days:

subject=
commonName = *.tkg.bd.f5.com

The Subject's Distinguished Name is as follows
commonName :ASN.1 12:'*.tkg.bd.f5.com'
Certificate is to be certified until Aug 21 15:58:24 2023 GMT (825 days)

Write out database with 1 new entries
Data Base Updated

Certificate created at: /Users/alonsocamaro/Documents/VMware-Tanzu/tanzu/easy-rsa/easyrsa3/pki/issued/wildcard.crt

The certificate is stored in ./pki/issued/wildcard.crt and the key is stored encrypted in pki/private/wildcard.key. Import these into a Kubernetes secret using the next steps:

$ openssl rsa -in ./pki/private/wildcard.key -out ./pki/private/wildcard-unencrypted.key
Enter pass phrase for ./pki/private/wildcard.key:
writing RSA key
$ kubectl create ns ingress-nginx
$ kubectl create -n ingress-nginx secret tls wildcard-tls --key ./pki/private/wildcard-unencrypted.key --cert ./pki/issued/wildcard.crt
secret/wildcard-tls created
$ rm ./pki/private/wildcard-unencrypted.key

As you might have noticed, the secret is loaded in the namespace ingress-nginx, where the NGINXplus Ingress Controller will be installed.

Enable your image registry in Tanzu

You need an image registry. When using vSphere with Tanzu this comes with Harbor. In this case you have to follow the next steps:

- Enable Harbor by following https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-AE24CF79-3C74-4CCD-B7C7-757AD082D86A.html.
- Trust Harbor's self-signed certificate: in the case of vSphere with Tanzu Update 2 (U2), follow https://tanzu.vmware.com/content/blog/how-to-set-up-harbor-registry-self-signed-certificates-tanzu-kubernetes-clusters. If using a previous version, follow https://cormachogan.com/2020/06/23/integrating-embedded-vsphere-with-kubernetes-harbor-registry-with-tkg-guest-clusters/ with the caveat that you will need to re-apply the changes if you upgrade the TKG cluster. Alternatively, you could also use your own certificates and follow https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.3/vmware-tanzu-kubernetes-grid-13/GUID-cluster-lifecycle-secrets.html#trust-custom-ca-certificates-on-cluster-nodes-3.

Installing NGINXplus Ingress Controller

This blog shows step by step everything that needs to be done to create an AppProtect-secured Ingress Controller with a wildcard certificate, which will be created as well. It only assumes that the TKG cluster is up and running. If you want to perform further customizations you can check https://docs.nginx.com/nginx-ingress-controller and https://docs.nginx.com/nginx-app-protect/configuration. In this blog we used TKG 1.3 in vSphere with Tanzu with NSX-T. The steps are similar when using any other supported TKG environment.

Build the NGINXplus Docker image

Define the registry endpoint and namespace where the TKG cluster will be deployed:

$ REGISTRY=<registry IP or FQDN>
$ NS=<your namespace>

Log in to the registry:

$ docker login $REGISTRY
Username: <your user>
Password: <your password>
Login Succeeded

Retrieve NGINXplus:

$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress
$ git checkout v1.11.1

Copy the license files into the base folder of NGINXplus:

$ cp $LICDIR/nginx-repo.{crt,key} .
Build the image:

$ make debian-image-nap-plus PREFIX=$REGISTRY/$NS/nginx-plus-ingress TARGET=container
Docker version 19.03.8, build afacb8b
docker build --build-arg IC_VERSION=1.11.1-32745366 --build-arg GIT_COMMIT=32745366 --build-arg VERSION=1.11.1 --target container -f build/Dockerfile -t 10.105.210.67/tkg1/nginx-plus-ingress:1.11.1 . --build-arg BUILD_OS=debian-plus-ap --build-arg PLUS=-plus --secret id=nginx-repo.crt,src=nginx-repo.crt --secret id=nginx-repo.key,src=nginx-repo.key
[+] Building 4.9s (24/24) FINISHED

After this we can verify the image is ready in our local Docker:

$ docker images
REPOSITORY                         TAG      IMAGE ID       CREATED          SIZE
$REGISTRY/$NS/nginx-plus-ingress   1.11.1   70113ec38914   35 minutes ago   626MB

If we wanted to push it into another namespace we would perform an image tag operation as follows:

docker image tag 70113ec38914 $REGISTRY/$ANOTHERNS/nginx-plus-ingress:1.11.1

Upload the image into the repository:

make push PREFIX=$REGISTRY/$NS/nginx-plus-ingress

Configure NGINXplus installation

Switch to the helm chart folder:

cd deployments/helm-chart

Make a backup of the default nginx-plus config file:

cp values-plus.yaml values-plus.yaml.orig

We will edit the file values-plus.yaml in order to:

- Enable AppProtect.
- Allow specifying a wildcard TLS certificate that we will use for all the services.
- Expose NGINXplus using a LoadBalancer with a Cluster externalTrafficPolicy.

Exposing NGINXplus (or any other Ingress Controller, such as Contour) using the Cluster externalTrafficPolicy is required given that the NSX-T native load balancer doesn't perform any health checking when creating a Service of type LoadBalancer. We will see how to improve this in future blogs with the use of BIG-IP.

controller:
  nginxplus: true
  image:
    repository: nginx-plus-ingress
    tag: "1.11.1"
  service:
    externalTrafficPolicy: Cluster
  appprotect:
    ## Enable the App Protect module in the Ingress Controller.
    enable: true
  wildcardTLS:
    ## The base64-encoded TLS certificate for every Ingress host that has TLS enabled but no secret specified.
    ## If the parameter is not set, for such Ingress hosts NGINX will break any attempt to establish a TLS connection.
    cert: ""
    ## The base64-encoded TLS key for every Ingress host that has TLS enabled but no secret specified.
    ## If the parameter is not set, for such Ingress hosts NGINX will break any attempt to establish a TLS connection.
    key: ""
    ## The secret with a TLS certificate and key for every Ingress host that has TLS enabled but no secret specified.
    ## The value must follow the following format: `<namespace>/<name>`.
    ## Used as an alternative to specifying a certificate and key using `controller.wildcardTLS.cert` and `controller.wildcardTLS.key` parameters.
    ## Format: <namespace>/<secret_name>
    secret: ingress-nginx/wildcard-tls

This file can also be found at https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/values-plus.yaml

Apply the required PodSecurityPolicy before NGINXplus installation

The next step creates a PodSecurityPolicy, which is required by Tanzu Kubernetes Grid, and binds it to the ServiceAccount ingress-nginx used in the regular NGINXplus install.
$ kubectl apply -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/nginx-psp.yaml
podsecuritypolicy.policy/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-psp created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-psp created

Install NGINXplus Ingress Controller using Helm

From the deployments/helm-chart directory of the downloaded NGINXplus, you just need to run the following command:

$ helm -n ingress-nginx install ingress-nginx -f values-plus.yaml .
NAME: ingress-nginx
LAST DEPLOYED: Mon May 17 15:16:20 2021
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.

Checking the resulting installation

When checking the resulting resources we can see that by default a single POD is created. This can be scaled up or down as required using Helm, which will be shown later on. Note also that by default the NGINXplus Ingress Controller is automatically exposed using a Service of type LoadBalancer, which configures an external load balancer. In this case the external load balancer is NSX-T's native LB, as shown in the screenshot below. When using vSphere networking this would have been HAProxy by default. In future blogs we will show how to use F5 BIG-IP in TKG clusters instead.

$ kubectl -n ingress-nginx get all
NAME                                               READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-nginx-ingress-7d4587b44c-n9b8l   1/1     Running   0          16h

NAME                                  TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/ingress-nginx-nginx-ingress   LoadBalancer   100.69.127.66   10.105.210.68   80:30527/TCP,443:31185/TCP   16h

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-nginx-ingress   1/1     1            1           16h

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-nginx-ingress-7d4587b44c   1         1         1       16h

In the next screenshots we can see the resulting configuration in NSX-T. Both pools, for port 80 and port 443, point to the worker node addresses. This means that the traffic flow will be NSX-T LB -> ClusterIP -> Ingress Controller's POD address (in the same or in another node). This is the case for any regular Ingress Controller, including the Contour controller provided with TKG. In future blogs we will show how these layers of indirection can be bypassed using F5 BIG-IP.

Using NGINXplus Ingress Controller without AppProtect

Creating a regular Ingress resource

In this initial example we will create two services (coffee and tea) which will be exposed with an Ingress resource called cafe-ingress. This will expose the services at the URLs https://cafe.tkg.bd.f5.com/coffee and https://cafe.tkg.bd.f5.com/tea using the previously created wildcard certificate for *.tkg.bd.f5.com, as depicted in the next diagram. To create this setup run the following commands:

$ kubectl create ns test
$ kubectl apply -n test -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/cafe-rbac.yaml
$ kubectl apply -n test -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/cafe.yaml
$ kubectl apply -n test -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/cafe-ingress.yaml

This is the same example provided in the official NGINXplus documentation, but we add the cafe-rbac.yaml declaration, which creates the necessary PodSecurity policies and bindings for TKG.
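The referenced cafe-ingress.yaml follows the upstream NGINX "cafe" example adapted to this environment. A sketch of what such an Ingress could look like is shown below; the service names and port follow the upstream example and are assumptions here, and no secretName is set under tls so that the controller falls back to the wildcard TLS secret configured earlier:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - cafe.tkg.bd.f5.com
    # no secretName: the wildcard TLS secret (ingress-nginx/wildcard-tls) applies
  rules:
  - host: cafe.tkg.bd.f5.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80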
To verify the result, first we check the Ingress resource itself:

$ kubectl -n test get ingress
NAME           CLASS   HOSTS                ADDRESS         PORTS     AGE
cafe-ingress   nginx   cafe.tkg.bd.f5.com   10.105.210.68   80, 443   3m44s

where we can observe that the IP address is that of the external load balancer seen before. To verify it is all working as expected we use curl as follows:

$ curl --cacert ca-tkg.bd.f5.com.crt --resolve cafe.tkg.bd.f5.com:443:10.105.210.68 https://cafe.tkg.bd.f5.com/coffee
Server address: 100.96.1.16:8080
Server name: coffee-86954d94fd-pnvpq
Date: 18/May/2021:16:18:59 +0000
URI: /coffee
Request ID: 63964930a2d1038af5f204ef8fbe91fc

which has the following key parameters:

- Use --cacert to specify the CA crt file previously created.
- Use --resolve to let curl resolve the FQDN of the request.

Adding AppProtect to an Ingress resource

Additional configuration

Our deployed NGINXplus has AppProtect built in. It is up to the user of the Ingress resource to decide whether to enable it, on a per-Ingress basis. In our example we will apply the AppProtect security policies in the user namespace "test". We will also create a syslog store in the ingress-nginx namespace. All of this can be customized. Ultimately, the user just needs to add the following annotations in order to secure the cafe site:

annotations:
  appprotect.f5.com/app-protect-policy: "test/dataguard-alarm"
  appprotect.f5.com/app-protect-enable: "True"
  appprotect.f5.com/app-protect-security-log-enable: "True"
  appprotect.f5.com/app-protect-security-log: "test/logconf"
  appprotect.f5.com/app-protect-security-log-destination: "syslog:server=100.70.175.24:514"

The custom AppProtect policy used in this example contains DataGuard protection against Credit Card Number and US Social Security Number leaks, plus a custom signature. It also defines where to send the AppProtect logs. These are sent in SYSLOG/TCP mode, independently of the regular logs generated by NGINXplus. To make all this happen, first we create the syslog server:

kubectl apply -n ingress-nginx -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/syslog-rbac.yaml
kubectl apply -n ingress-nginx -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/syslog.yaml

Next, we create the AppProtect policies:

kubectl apply -n test -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/ap-apple-uds.yaml
kubectl apply -n test -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/ap-dataguard-alarm-policy.yaml
kubectl apply -n test -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/ap-logconf.yaml

Finally we add the above annotations to the Ingress resource. For that, we need to get the syslog POD address and replace it in the cafe-ingress-ap.yaml definition:

curl -O https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/cafe-ingress-ap.yaml
SYSLOG_IP=<IP address of syslog's POD>
sed -e "s/SYSLOG/$SYSLOG_IP/" cafe-ingress-ap.yaml > cafe-ingress-ap-syslog.yaml
kubectl apply -n test -f cafe-ingress-ap-syslog.yaml

Note: it might take a few seconds for the AppProtect configuration to become effective.
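One possible way to capture the syslog POD address used in the sed substitution above (the app=syslog label selector is an assumption; adjust it to whatever labels syslog.yaml actually defines):

SYSLOG_IP=$(kubectl -n ingress-nginx get pods -l app=syslog -o jsonpath='{.items[0].status.podIP}')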
Verifying AppProtect

Run the following command to watch the requests live as they are handled by AppProtect:

kubectl -n ingress-nginx exec -it <SYSLOG POD NAME> -- tail -f /var/log/messages

Send a request that triggers the custom signature:

curl --cacert ca-tkg.bd.f5.com.crt --resolve cafe.tkg.bd.f5.com:443:10.105.210.69 "https://cafe.tkg.bd.f5.com/coffee/" -X POST -d "apple"

You should see a log similar to the following one in the syslog logs:

May 24 13:43:23 ingress-nginx-nginx-ingress-7d4587b44c-wvrxs ASM:attack_type="Non-browser Client,Brute Force Attack",blocking_exception_reason="N/A",date_time="2021-05-24 13:43:23",dest_port="443",ip_client="10.105.210.69",is_truncated="false",method="POST",policy_name="dataguard-alarm",protocol="HTTPS",request_status="blocked",response_code="0",severity="Critical",sig_cves="N/A",sig_ids="300000000",sig_names="Apple_medium_acc [Fruits]",sig_set_names="{apple_sigs}",src_port="4096",sub_violations="N/A",support_id="15704273797572010868",threat_campaign_names="N/A",unit_hostname="ingress-nginx-nginx-ingress-7d4587b44c-wvrxs",uri="/coffee/",violation_rating="3",vs_name="24-cafe.tkg.bd.f5.com:9-/coffee",x_forwarded_for_header_value="N/A",outcome="REJECTED",outcome_reason="SECURITY_WAF_VIOLATION",violations="Attack signature detected,Bot Client Detected",violation_details="<?xml version='1.0' encoding='UTF-8'?><BAD_MSG><violation_masks><block>10000000200c00-3030430000070</block><alarm>2477f0ffcbbd0fea-8003f35cb000007c</alarm><learn>200000-20</learn><staging>0-0</staging></violation_masks><request-violations><violation><viol_index>42</viol_index><viol_name>VIOL_ATTACK_SIGNATURE</viol_name><context>request</context><sig_data><sig_id>300000000</sig_id><blocking_mask>7</blocking_mask><kw_data><buffer>YXBwbGU=</buffer><offset>0</offset><length>5</length></kw_data></sig_data></violation></request-violations></BAD_MSG>",bot_signature_name="curl",bot_category="HTTP Library",bot_anomalies="N/A",enforced_bot_anomalies="N/A",client_class="Untrusted Bot",request="POST /coffee/ HTTP/1.1\r\nHost: cafe.tkg.bd.f5.com\r\nUser-Agent: curl/7.64.1\r\nAccept: */*\r\nContent-Length: 5\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\napple"

Updating NGINXplus Ingress Controller using Helm

By default a single NGINXplus instance is created. If you want to increase its performance, scaling out is as simple as editing the values-plus.yaml file and setting the replicaCount parameter to the desired value:

controller:
  replicaCount: 4

and running helm upgrade as follows:

$ helm -n ingress-nginx upgrade ingress-nginx -f values-plus.yaml .
Release "ingress-nginx" has been upgraded. Happy Helming!
NAME: ingress-nginx
LAST DEPLOYED: Thu May 20 14:20:38 2021
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.

Using NGINXplus alongside other Ingress Controllers (such as Contour)

NGINXplus supports the Ingress/v1 resource version available in Kubernetes 1.19+ as well as the previous Ingress/v1beta1 API resource version for backwards compatibility. The Contour Ingress Controller is provided in TKG by VMware as an add-on which is not installed by default. If installed, you have to be aware that Contour, at the time of this writing (May 2021), only supports the older Ingress/v1beta1 API resource version.
This means that when defining Ingress resources you have to specify which Ingress Controller to use by adding the following annotation:

kubernetes.io/ingress.class: <ingress controller name>

where <ingress controller name> could be nginx or contour. For further details on this topic you can check https://docs.nginx.com/nginx-ingress-controller/installation/running-multiple-ingress-controllers/

Conclusion

In this blog post we have gone through all the steps required to install and use NGINXplus with AppProtect in Tanzu Kubernetes Grid, with a real-world example. Overall, the installation is the same as in any Kubernetes, but the following two items need to be taken into account:

- Before deploying, make sure that the appropriate PodSecurityPolicies are in place for either NGINXplus or the workloads. PodSecurityPolicies are not enabled by default in many Kubernetes distributions, so this represents a change from the usual practice.
- If deploying NGINXplus alongside another Ingress Controller, make sure that the Ingress resources are defined appropriately in order to select the right Ingress Controller for the corresponding Ingress resource.

In this blog we used NGINXplus in a TKG cluster deployed in an on-premises infrastructure (vSphere with Tanzu) with the Antrea CNI and NSX-T networking. The steps would have been the same if vSphere networking or the Calico CNI had been used. The only difference could come when exposing it through the external load balancer: if the external load balancer performed health checking, it would be preferable to use the Local externalTrafficPolicy, since this avoids a hop and allows keeping the client source address. In future blogs we will post how to expose NGINXplus more effectively and in a cloud-agnostic manner by using BIG-IP as the external load balancer.