NGINX App Protect Data Plane Add-on in Kubernetes
Most enterprise workloads in Kubernetes are exposed to external traffic by ingress controllers. DevOps/NetOps engineers struggle to migrate workloads from virtualized environments (cloud VMs, VMware ESXi, Nutanix, KVM) to container orchestrators (Kubernetes being the most popular), because the Kubernetes networking fabric now handles many functions that used to run inside a VM (security, logging, scaling, and so on).
DevOps teams use custom resource objects in YAML format to define the networking fabric in Kubernetes. The ingress controller is one example of a Kubernetes network function defined by DevOps teams.
There are two options for configuring the F5 NGINX Ingress Controller in Kubernetes: standard Kubernetes Ingress resources, or the Custom Resource Definitions (CRDs) provided by F5, such as VirtualServer.
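For illustration, a minimal VirtualServer resource (one of the F5-provided CRDs for the NGINX Ingress Controller) might look like the sketch below. The hostname, Service name, and port are placeholders, not values from this demo:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com   # placeholder hostname
  upstreams:
    - name: webapp
      service: webapp-svc    # placeholder Service name
      port: 80
  routes:
    - path: /
      action:
        pass: webapp
```

The Ingress Controller watches these objects and translates them into native nginx configuration on your behalf.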
Kubernetes adds a layer of complexity: resource definitions are translated into native nginx configuration that ultimately gets loaded into the Ingress Controller for deployment. This extra layer can pose operational challenges, because a Kubernetes resource that is schema-valid may still translate into an invalid native nginx configuration.
When deploying NGINX App Protect as a Data Plane Add-on in Kubernetes, we cut out this layer of complexity and mount the nginx config and policy files directly onto the deployment.
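As a hedged sketch of what such a mounted config might contain, the NGINX App Protect v4 directives look like this. The file paths and the backend address are placeholders; the actual files in the demo repo may differ:

```nginx
# Load the App Protect dynamic module (NAP v4)
load_module modules/ngx_http_app_protect_module.so;

events {}

http {
    server {
        listen 80;

        # Enable App Protect and point it at a mounted policy file
        app_protect_enable on;
        app_protect_policy_file "/etc/app_protect/conf/NginxDefaultPolicy.json";

        # Send security logs to a syslog endpoint
        app_protect_security_log_enable on;
        app_protect_security_log "/etc/app_protect/conf/log_default.json" syslog:server=127.0.0.1:514;

        location / {
            proxy_pass http://10.0.0.10:8080;  # placeholder backend
        }
    }
}
```

Because these files are mounted into the pod, a policy change is a file edit plus a reload, with no intermediate Kubernetes object translation.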
In this article, I will show how you can quickly get started with NGINX App Protect in Kubernetes and avoid many of these challenges.
Getting Started: NGINX App Protect Data Plane Add-on
First, clone the demo repository from F5 DevCentral on GitHub.
$ git clone https://github.com/f5devcentral/NAP-Attack-Demos.git
Next, run the deployment script, passing the nginx license files (certificate, key, and JWT) as arguments. You can download these files with an NGINX One enterprise trial license from our website; existing customers can pull them from the MyF5 portal.
$ sudo /bin/bash kubernetes/napv4_deploy <nginx-repo.crt> <nginx-repo.key> <license.jwt>
Note: You will need docker and kubectl installed on your machine to run the script.
Now I can verify that my App Protect deployment is running before exposing it with the NodePort method.
$ kubectl get pods -o wide -n nginx-plus
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
app-protect-6cfd855db8-8ztqx 1/1 Running 3 (10m ago) 10m 192.168.75.124 rawdata <none> <none>
Now I will create the NodePort Service and connect to NGINX App Protect in Kubernetes from my machine.
$ kubectl apply -f kubernetes/nodeport.yaml
$ curl -i "http://<node-ip>:<nodeport>/<script>"
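The repository supplies kubernetes/nodeport.yaml; a NodePort Service of this general shape would look like the following sketch (the name, selector label, and port numbers here are assumptions, not the repo's actual values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-protect-nodeport
  namespace: nginx-plus
spec:
  type: NodePort
  selector:
    app: app-protect        # assumed pod label
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080       # any value in the default 30000-32767 range
```

With the Service in place, the curl request above hits the chosen nodePort on any node's IP and is forwarded to the App Protect pod.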
Making Config Policy Changes
With an ingress controller, a configuration change means modifying K8s objects and applying them through the Kubernetes API. In this case, we instead change the nginx config mounted on the Kubernetes deployment. For example, I will use the apply_policy script to apply a CSRF (Cross-Site Request Forgery) policy.
$ /bin/sh apply_policy ../CSRF/CSRF.json
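The repo's CSRF.json defines an App Protect policy. As a rough, hedged sketch (not the actual file), a NAP policy that enables CSRF protection generally follows this structure; the policy name, protected URL, and method below are illustrative placeholders:

```json
{
  "policy": {
    "name": "csrf_demo_policy",
    "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
    "applicationLanguage": "utf-8",
    "enforcementMode": "blocking",
    "csrf-protection": {
      "enabled": true
    },
    "csrf-urls": [
      {
        "enforcementAction": "verify-csrf-token",
        "method": "POST",
        "url": "/account/*",
        "wildcardOrder": 1
      }
    ]
  }
}
```

Swapping policies is just a matter of pointing app_protect_policy_file at a different mounted JSON file and reloading nginx.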
Conclusion
DevOps/NetOps engineers struggle to migrate workloads from virtualized environments (cloud VMs, VMware ESXi, Nutanix, KVM) to Kubernetes due to the underlying complexity of the Kubernetes networking fabric. Configurations need to be translated into K8s objects, which can be very complex depending on the use case. A viable alternative is running NGINX App Protect as a plain Deployment rather than behind an ingress controller, bypassing the limitations of K8s resource objects.