Deploying NGINXplus with AppProtect in Tanzu Kubernetes Grid


Tanzu Kubernetes Grid (aka TKG) is VMware's main Kubernetes offering. Although TKG is a certified conformant Kubernetes distribution, each Kubernetes offering can be customized in different ways. In the case of TKG, a remarkable feature is the use of Pod Security Policies by default.

TKG clusters can be spun up very easily in either public or private clouds by creating a single declarative YAML file such as the following:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg1
  namespace: tkg1
spec:
  distribution:
    version: v1.18.15+vmware.1-tkg.1.600e412
  topology:
    controlPlane:
      count: 1
      class: best-effort-medium
      storageClass: vsan-default-storage-policy
    workers:
      count: 1
      class: best-effort-medium
      storageClass: vsan-default-storage-policy

As you can see from the schema, a TKG cluster is deployed just like any other Kubernetes resource. How does it work? In the case of public clouds, these TanzuKubernetesCluster resources are instantiated from a bootstrap cluster named the "management Kubernetes cluster", whilst when using vSphere with Tanzu, they are instantiated from vSphere with Tanzu's supervisor cluster.

In this blog post we will walk through an example from start to end:

  • Creating wildcard certificate from custom CA with easy-rsa.
  • Enabling Harbor registry.
  • Installing NGINXplus with AppProtect.
  • Using NGINXplus Ingress Controller without AppProtect.
  • Adding AppProtect to an Ingress resource.

AppProtect is an NGINXplus module for WAF and bot protection based on the market-leading F5 BIG-IP ASM. AppProtect provides enhanced capabilities and performance for those who require more than what ModSecurity provides.

We will finish with two relevant considerations:

  • Updating the NGINXplus Ingress Controller using Helm. This is used, for example, to scale out NGINXplus and hence improve overall performance.
  • Using NGINXplus alongside with other Ingress Controllers (such as Contour).


You need an NGINXplus license. This license is in practice a cert/key pair with the file names nginx-repo.{crt,key}, referenced later on.

The following software needs to be present on your machine:

  • Docker v18.09+
  • GNU Make
  • git
  • Helm3
  • OpenSSL

Create a wildcard certificate with easy-rsa

In the next steps we will create a Certificate Authority (CA) and, from it, a wildcard certificate/key pair which will be loaded into Kubernetes as a TLS secret. This wildcard certificate will be used by all the services exposed through Ingress.

Retrieve easy-rsa and initialize a CA (output summarized):

$ git clone
$ cd easyrsa3/
$ ./easyrsa init-pki
$ ./easyrsa build-ca

Generate the wildcard key/cert pair (output summarized):

$ ./easyrsa gen-req wildcard
Common Name (eg: your user, host, or server name) [wildcard]:*

Keypair and certificate request completed. Your files are:
req: /Users/alonsocamaro/Documents/VMware-Tanzu/tanzu/easy-rsa/easyrsa3/pki/reqs/wildcard.req
key: /Users/alonsocamaro/Documents/VMware-Tanzu/tanzu/easy-rsa/easyrsa3/pki/private/wildcard.key
$ ./easyrsa sign-req server wildcard
Request subject, to be signed as a server certificate for 825 days:

   commonName               = *

The Subject's Distinguished Name is as follows
commonName           :ASN.1 12:'*'
Certificate is to be certified until Aug 21 15:58:24 2023 GMT (825 days)

Write out database with 1 new entries
Data Base Updated

Certificate created at: /Users/alonsocamaro/Documents/VMware-Tanzu/tanzu/easy-rsa/easyrsa3/pki/issued/wildcard.crt

The certificate is stored in ./pki/issued/wildcard.crt and the key is stored encrypted in pki/private/wildcard.key. Import these into a Kubernetes secret using the next steps:

$ openssl rsa -in ./pki/private/wildcard.key -out ./pki/private/wildcard-unencrypted.key
Enter pass phrase for ./pki/private/wildcard.key:
writing RSA key
$ kubectl create ns ingress-nginx
$ kubectl create -n ingress-nginx secret tls wildcard-tls --key ./pki/private/wildcard-unencrypted.key --cert ./pki/issued/wildcard.crt
secret/wildcard-tls created
$ rm ./pki/private/wildcard-unencrypted.key

As you might have noticed, the secret is loaded in the namespace ingress-nginx, where the NGINXplus Ingress Controller will be installed.
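Before loading a pair like this, it is worth checking that the certificate and key actually match. The sketch below generates a throwaway self-signed pair so it can run anywhere; with the easy-rsa output you would point the same commands at pki/issued/wildcard.crt and the unencrypted key instead:

```shell
# Generate a throwaway self-signed pair, then compare the public-key
# moduli of the cert and the key: identical digests mean the pair matches.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=*" \
    -keyout demo.key -out demo.crt 2>/dev/null
crt_md5=$(openssl x509 -noout -modulus -in demo.crt | openssl md5)
key_md5=$(openssl rsa -noout -modulus -in demo.key | openssl md5)
[ "$crt_md5" = "$key_md5" ] && echo "cert and key match"
```

A mismatch here (for example, after picking up a stale key file) would otherwise only surface later as a TLS handshake failure at the Ingress.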

Enable your image registry in Tanzu

You need an image registry. When using vSphere with Tanzu, this comes with Harbor; in this case you just have to enable it for the cluster.

Installing NGINXplus Ingress Controller

This blog shows, step by step, everything that needs to be done to create an AppProtect-secured Ingress Controller, including the wildcard certificate that will be created as well. It only assumes that the TKG cluster is up and running. If you want to perform further customizations, check the NGINXplus Ingress Controller documentation.

This blog uses TKG 1.3 on vSphere with Tanzu with NSX-T. The steps are similar when using any other supported TKG environment.

Build the NGINXplus Docker image

Define the registry endpoint and namespace where the TKG cluster will be deployed:

$ REGISTRY=<registry IP or FQDN>
$ NS=<your namespace>

Log in to the registry:

$ docker login $REGISTRY
Username: <your user>
Password: <your password>
Login Succeeded

Retrieve NGINXplus

$ git clone
$ cd kubernetes-ingress
$ git checkout v1.11.1

Copy the license files into the base folder of NGINXplus:

$ cp $LICDIR/nginx-repo.{crt,key} .

Build the image

$ make debian-image-nap-plus PREFIX=$REGISTRY/$NS/nginx-plus-ingress TARGET=container
Docker version 19.03.8, build afacb8b
docker build --build-arg IC_VERSION=1.11.1-32745366 --build-arg GIT_COMMIT=32745366 --build-arg VERSION=1.11.1 --target container -f build/Dockerfile -t $REGISTRY/$NS/nginx-plus-ingress:1.11.1 . --build-arg BUILD_OS=debian-plus-ap --build-arg PLUS=-plus --secret id=nginx-repo.crt,src=nginx-repo.crt --secret id=nginx-repo.key,src=nginx-repo.key
[+] Building 4.9s (24/24) FINISHED                                                                                                                                                      

After this we can verify the image is ready in our local Docker:

$ docker images
REPOSITORY                             TAG                IMAGE ID           CREATED            SIZE
$REGISTRY/$NS/nginx-plus-ingress  1.11.1             70113ec38914       35 minutes ago     626MB

If we wanted to push it into another namespace we would perform an image tag operation as follows:

docker image tag 70113ec38914 $REGISTRY/$ANOTHERNS/nginx-plus-ingress:1.11.1

Upload the image into the repository:

make push PREFIX=$REGISTRY/$NS/nginx-plus-ingress

Configure NGINXplus installation

Switch to the helm chart folder

cd deployments/helm-chart

Make a backup of the default nginx-plus config file

cp values-plus.yaml values-plus.yaml.orig

We will edit the file values-plus.yaml as follows in order to:

  • Enable AppProtect
  • Allow specifying a wildcard TLS certificate that we will use for all the services.
  • Expose NGINXplus using a LoadBalancer with a Cluster externalTrafficPolicy.

Exposing NGINXplus (or any other Ingress Controller, such as Contour) with the Cluster externalTrafficPolicy is required because the NSX-T native load balancer doesn't perform any health checking when creating a Service of type LoadBalancer. We will see how to improve this in future blogs with the use of BIG-IP.

controller:
  nginxplus: true
  image:
    repository: nginx-plus-ingress
    tag: "1.11.1"
  service:
    externalTrafficPolicy: Cluster
  appprotect:
    ## Enable the App Protect module in the Ingress Controller.
    enable: true
  wildcardTLS:
    ## The base64-encoded TLS certificate for every Ingress host that has TLS enabled but no secret specified.
    ## If the parameter is not set, for such Ingress hosts NGINX will break any attempt to establish a TLS connection.
    cert: ""

    ## The base64-encoded TLS key for every Ingress host that has TLS enabled but no secret specified.
    ## If the parameter is not set, for such Ingress hosts NGINX will break any attempt to establish a TLS connection.
    key: ""

    ## The secret with a TLS certificate and key for every Ingress host that has TLS enabled but no secret specified.
    ## The value must follow the following format: `<namespace>/<name>`.
    ## Used as an alternative to specifying a certificate and key using `controller.wildcardTLS.cert` and `controller.wildcardTLS.key` parameters.
    ## Format: <namespace>/<secret_name>
    secret: ingress-nginx/wildcard-tls

This file can also be found in

Apply the required PodSecurityPolicy before NGINXplus installation

The next step creates a PodSecurityPolicy, which is required by Tanzu Kubernetes Grid, and binds it to the ingress-nginx Service Account used by the regular NGINXplus install.

$ kubectl apply -f
podsecuritypolicy.policy/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
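The applied manifest is along these lines. This is a hedged sketch: the resource names, allowed capabilities, and volume list are illustrative, and the file in the repository is the authoritative version.

```yaml
# Illustrative PodSecurityPolicy plus RBAC binding for the ingress-nginx
# Service Account (names and spec fields are assumptions, not the exact file).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: ingress-nginx
spec:
  privileged: false
  allowedCapabilities: ["NET_BIND_SERVICE"]
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["secret", "configMap"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["ingress-nginx"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
```

Without the "use" verb on the policy, TKG's admission control would reject the Ingress Controller POD at creation time.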

Install NGINXplus Ingress controller using Helm

From the deployments/helm-chart directory of the downloaded NGINXplus, just run the next command:

$ helm -n ingress-nginx install ingress-nginx -f values-plus.yaml .
NAME: ingress-nginx
LAST DEPLOYED: Mon May 17 15:16:20 2021
NAMESPACE: ingress-nginx
STATUS: deployed
The NGINX Ingress Controller has been installed.

Checking the resulting installation

When checking the resulting resources we can see that, by default, a single POD is created. We can scale this up or down as required using Helm, which will be shown later on.

Note also that, by default, the NGINXplus Ingress Controller is automatically exposed using a Service of type LoadBalancer, which configures an external load balancer. In this case the external load balancer is NSX-T's native LB, as shown in the screenshot below. When using vSphere networking, this would have been HAproxy by default. In future blogs we will show how to use F5 BIG-IP in TKG clusters instead.

$ kubectl -n ingress-nginx get all
NAME                                              READY  STATUS   RESTARTS  AGE
pod/ingress-nginx-nginx-ingress-7d4587b44c-n9b8l  1/1    Running  0         16h
NAME                                 TYPE          CLUSTER-IP     EXTERNAL-IP    PORT(S)                     AGE
service/ingress-nginx-nginx-ingress  LoadBalancer  80:30527/TCP,443:31185/TCP  16h
NAME                                         READY  UP-TO-DATE  AVAILABLE  AGE
deployment.apps/ingress-nginx-nginx-ingress  1/1    1           1          16h
NAME                                                    DESIRED  CURRENT  READY  AGE
replicaset.apps/ingress-nginx-nginx-ingress-7d4587b44c  1        1        1      16h

In the next screenshots we can see the resulting configuration in NSX-T:

Both pools, for port 80 and port 443, point to the worker nodes' addresses. This means that the traffic flow will be NSX-T LB -> ClusterIP -> Ingress Controller's POD address (in the same or in another node). This is the case for any regular Ingress Controller, including TKG's provided Contour. In future blogs we will show how these many layers of indirection can be bypassed using F5 BIG-IP.

Using NGINXplus Ingress Controller without AppProtect

Creating a regular Ingress resource

In this initial example we will create two services (coffee and tea) which will be exposed with an Ingress resource called cafe-ingress. This will expose the services under a single host, using the previously created wildcard certificate, as depicted in the next diagram.

To create this setup run the following commands:

$ kubectl create ns test
$ kubectl apply -n test -f
$ kubectl apply -n test -f
$ kubectl apply -n test -f

This is the same example provided in the official NGINXplus documentation, but we add the cafe-rbac.yaml declaration, which creates the necessary PodSecurityPolicy and bindings for TKG.
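The Ingress resource in that example is along these lines. This is a hedged sketch: the host name cafe.example.com and service names are illustrative, and the Ingress/v1beta1 API version matches the Kubernetes 1.18 era used here.

```yaml
# Illustrative cafe-ingress resource. With no secretName under tls,
# the controller falls back to the wildcard TLS secret configured in
# values-plus.yaml (controller.wildcardTLS.secret).
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - cafe.example.com
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
```

Note how the TLS section lists the host but no secret: this is precisely the case the wildcard certificate configuration was set up to cover.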

To verify the result, first we will check the Ingress resource itself:

$ kubectl -n test get ingress
NAME          CLASS  HOSTS               ADDRESS        PORTS    AGE
cafe-ingress  nginx  80, 443  3m44s

where we can observe that the IP address is the one of the external load balancer seen before.

To verify it is all working as expected we will use curl as follows:

$ curl --cacert --resolve 
Server address:
Server name: coffee-86954d94fd-pnvpq
Date: 18/May/2021:16:18:59 +0000
URI: /coffee
Request ID: 63964930a2d1038af5f204ef8fbe91fc

which has the following key parameters:

  • Use --cacert to specify the CA crt file previously created
  • Use --resolve to let curl resolve the FQDN of the request without DNS

Adding AppProtect to an Ingress resource

Additional configuration

Our deployed NGINXplus has AppProtect built in. It is up to the user of the Ingress resource to decide whether to enable it, on a per-Ingress basis. In our example we will apply the AppProtect security policies in the user namespace "test". We will also create a syslog store in the ingress-nginx namespace. All of this can be customized.

Ultimately, the user just needs to add the following annotations in order to secure the cafe site:

 annotations:
   appprotect.f5.com/app-protect-policy: "test/dataguard-alarm"
   appprotect.f5.com/app-protect-enable: "True"
   appprotect.f5.com/app-protect-security-log-enable: "True"
   appprotect.f5.com/app-protect-security-log: "test/logconf"
   appprotect.f5.com/app-protect-security-log-destination: "syslog:server="

The custom AppProtect policy used in this example contains DataGuard protection for credit card number and US Social Security number leaks, plus a custom signature. The policy also defines where to send the AppProtect logs: these are sent in SYSLOG/TCP mode, independently of the regular logs generated by NGINXplus.
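The DataGuard part of such a policy is expressed through the APPolicy custom resource. The sketch below is illustrative only: the exact field names should be checked against the App Protect declarative policy schema, and the custom signature definition is omitted.

```yaml
# Hedged sketch of an APPolicy enabling Data Guard for credit card and
# US SSN leaks (field names are assumptions based on the App Protect
# declarative policy format; the actual manifest may differ).
apiVersion: appprotect.f5.com/v1beta1
kind: APPolicy
metadata:
  name: dataguard-alarm
spec:
  policy:
    name: dataguard-alarm
    template:
      name: POLICY_TEMPLATE_NGINX_BASE
    applicationLanguage: utf-8
    enforcementMode: blocking
    data-guard:
      enabled: true
      maskData: true
      creditCardNumbers: true
      usSocialSecurityNumbers: true
```

Because the policy lives in the "test" namespace, the Ingress annotation references it as test/dataguard-alarm.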

To make all this happen, first we will create the syslog server:

kubectl apply -n ingress-nginx -f
kubectl apply -n ingress-nginx -f

Next, we will create the AppProtect policies:

kubectl apply -n test -f
kubectl apply -n test -f
kubectl apply -n test -f

Finally, we will add the above annotations to the Ingress resource. For that, we need to get the syslog POD's address and substitute it into the cafe-ingress-ap.yaml definition.

curl -O
SYSLOG_IP=<IP address of syslog's POD>
sed -e "s/SYSLOG/$SYSLOG_IP/" cafe-ingress-ap.yaml > cafe-ingress-ap-syslog.yaml
kubectl apply -n test -f cafe-ingress-ap-syslog.yaml
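The substitution step can be exercised standalone. The snippet below fabricates a one-line stand-in for cafe-ingress-ap.yaml and uses an illustrative POD address, just to show the mechanics:

```shell
# Create a stand-in file containing the SYSLOG placeholder.
cat > cafe-ingress-ap.yaml <<'EOF'
appprotect.f5.com/app-protect-security-log-destination: "syslog:server=SYSLOG:514"
EOF
SYSLOG_IP=10.244.0.5   # illustrative POD address, not a real deployment value
# Replace the placeholder with the address, as done above.
sed -e "s/SYSLOG/$SYSLOG_IP/" cafe-ingress-ap.yaml > cafe-ingress-ap-syslog.yaml
cat cafe-ingress-ap-syslog.yaml
```

In a real cluster you would obtain the address with something like kubectl get pod -o wide in the ingress-nginx namespace.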

Note: it might take a few seconds for the AppProtect configuration to become effective.

Verifying AppProtect

Run the following command to watch the requests live as handled by AppProtect:

kubectl -n ingress-nginx exec -it <SYSLOG POD NAME> -- tail -f /var/log/messages

Send a request that triggers the custom signature:

curl --cacert --resolve "" -X POST -d "apple"

You should see a log similar to the following one in the syslog logs:

May 24 13:43:23 ingress-nginx-nginx-ingress-7d4587b44c-wvrxs ASM:attack_type="Non-browser Client,Brute Force Attack",blocking_exception_reason="N/A",date_time="2021-05-24 13:43:23",dest_port="443",ip_client="",is_truncated="false",method="POST",policy_name="dataguard-alarm",protocol="HTTPS",request_status="blocked",response_code="0",severity="Critical",sig_cves="N/A",sig_ids="300000000",sig_names="Apple_medium_acc [Fruits]",sig_set_names="{apple_sigs}",src_port="4096",sub_violations="N/A",support_id="15704273797572010868",threat_campaign_names="N/A",unit_hostname="ingress-nginx-nginx-ingress-7d4587b44c-wvrxs",uri="/coffee/",violation_rating="3",vs_name="",x_forwarded_for_header_value="N/A",outcome="REJECTED",outcome_reason="SECURITY_WAF_VIOLATION",violations="Attack signature detected,Bot Client Detected",violation_details="<?xml version='1.0' encoding='UTF-8'?><BAD_MSG><violation_masks><block>10000000200c00-3030430000070</block><alarm>2477f0ffcbbd0fea-8003f35cb000007c</alarm><learn>200000-20</learn><staging>0-0</staging></violation_masks><request-violations><violation><viol_index>42</viol_index><viol_name>VIOL_ATTACK_SIGNATURE</viol_name><context>request</context><sig_data><sig_id>300000000</sig_id><blocking_mask>7</blocking_mask><kw_data><buffer>YXBwbGU=</buffer><offset>0</offset><length>5</length></kw_data></sig_data></violation></request-violations></BAD_MSG>",bot_signature_name="curl",bot_category="HTTP Library",bot_anomalies="N/A",enforced_bot_anomalies="N/A",client_class="Untrusted Bot",request="POST /coffee/ HTTP/1.1\r\nHost:\r\nUser-Agent: curl/7.64.1\r\nAccept: */*\r\nContent-Length: 5\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\napple"
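These ASM log lines are comma-separated key="value" pairs, so specific fields can be pulled out quickly. The sketch below is naive on purpose: it splits on every comma, so values that themselves contain commas (like attack_type above) get broken across lines.

```shell
# Shortened sample of an ASM log line (fields taken from the log above).
line='attack_type="Non-browser Client,Brute Force Attack",request_status="blocked",sig_ids="300000000",outcome="REJECTED"'
# Split on commas and keep only the fields of interest.
printf '%s\n' "$line" | tr ',' '\n' | grep -E '^(request_status|outcome)='
```

For production log analysis you would forward these lines to a proper log pipeline rather than grepping the syslog POD.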

Updating NGINXplus Ingress controller using Helm

By default a single NGINXplus instance is created. If you want to increase its performance, scaling out is as simple as editing the values-plus.yaml file and setting the replicaCount parameter to the desired value:

controller:
  replicaCount: 4

And running helm upgrade as follows:

$ helm -n ingress-nginx upgrade ingress-nginx -f values-plus.yaml .
Release "ingress-nginx" has been upgraded. Happy Helming!
NAME: ingress-nginx
LAST DEPLOYED: Thu May 20 14:20:38 2021
NAMESPACE: ingress-nginx
STATUS: deployed
The NGINX Ingress Controller has been installed.

Using NGINXplus alongside other Ingress Controllers (such as Contour)

NGINXplus supports the Ingress/v1 resource version available in Kubernetes 1.18+, as well as the previous Ingress/v1beta1 API version for backwards compatibility.

The Contour Ingress Controller is provided in TKG by VMware as an add-on which is not installed by default. If it is installed, you have to be aware that Contour, at the time of this writing (May 2021), only supports the older Ingress/v1beta1 API version. This means that when defining Ingress resources you have to specify which Ingress Controller to use by adding the following annotation: kubernetes.io/ingress.class: <ingress controller name>

where <ingress controller name> could be nginx or contour. For further details on this topic you can check the NGINXplus and Contour documentation.
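The annotation in context looks like this. The host and service names are illustrative; the pre-IngressClass annotation style shown here matches the Kubernetes versions discussed in this post.

```yaml
# Illustrative Ingress pinned to the NGINXplus controller; switching the
# annotation value to "contour" would hand it to Contour instead.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-via-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.example.com      # illustrative host
    http:
      paths:
      - path: /
        backend:
          serviceName: app-svc  # illustrative service
          servicePort: 80
```

Without the annotation, both controllers may try to satisfy the same Ingress resource, leading to unpredictable behavior.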


In this blog post we have gone through all the steps required to install and use NGINXplus with AppProtect in Tanzu Kubernetes Grid, with a real-world example. Overall, the installation is the same as in any other Kubernetes distribution, but the following two items need to be taken into account:

  • Before deploying, make sure that the appropriate PodSecurityPolicies are in place for either NGINXplus or the workloads. PodSecurityPolicies are not enabled by default in many Kubernetes distributions so this represents a change from the usual practice.
  • If deploying NGINXplus alongside another Ingress Controller make sure that the Ingress resources are defined appropriately in order to select the right Ingress Controller for the corresponding Ingress resource.

In this blog we used NGINXplus in a TKG cluster deployed on an on-premises infrastructure (vSphere with Tanzu) with the Antrea CNI and NSX-T networking. The steps would have been the same if vSphere networking or the Calico CNI had been used. The only difference could come when exposing it through the external load balancer: if the external load balancer performed health checking, it would be preferable to use the Local externalTrafficPolicy, since this avoids a hop and keeps the client source address.

In future blogs we will show how to expose NGINXplus more effectively and in a cloud-agnostic manner by using BIG-IP as the external load balancer.

Published Jun 02, 2021
Version 1.0
