Modern Deployment and Security Strategies for Kubernetes with NGINX Gateway Fabric
Introduction
Kubernetes has become the foundation for cloud-native applications. However, managing and routing traffic within clusters remains a challenge. The traditional Ingress resource, though helpful for exposing services, has shown its limitations: its loosely defined specification leads to controller-specific behaviors and complicated annotations, and it hinders portability across environments. These challenges become even more apparent as organizations scale their microservices architectures. Ingress was designed primarily for basic service exposure and routing. While it can be extended with annotations or custom controllers, it lacks first-class support for advanced deployment patterns such as canary or blue-green releases, forcing teams to rely on add-ons or vendor-specific features that add complexity and reduce portability.
To overcome the limitations of traditional Ingress controllers, the Kubernetes community introduced the Gateway API: a new, forward-looking, standards-based approach to service networking. Unlike the more rigid Ingress, the Gateway API provides greater flexibility, role-specific functionality, and a comprehensive feature set. It encourages collaboration among platform engineers, developers, and security teams by supporting advanced capabilities such as TLS offloading, traffic splitting, and smooth integration with service meshes.
For a deeper understanding of this transition and its benefits, my colleague Dave McAllister's recent blog offers valuable insights into implementation strategies and best practices.
Enter F5 NGINX Gateway Fabric (NGF). Built on the Gateway API standard, NGINX Gateway Fabric offers a production-ready solution. It combines the robustness and performance of NGINX with the extensibility of this new standard. It provides consistent traffic management, observability, and security across Kubernetes clusters. Additionally, its integration with NGINX One Console enables centralized control and monitoring in distributed environments.
In this article, we will explore how F5 NGINX Gateway Fabric can be used to implement modern deployment and security strategies, such as Blue-Green deployments and securing applications with Let's Encrypt and the ACME protocol. Coupled with the operational visibility of F5 NGINX One Console, these capabilities are vital for reducing risk, simplifying certificate lifecycle management, and ensuring a secure, production-ready path for cloud-native applications.
Before starting, make sure that NGINX Gateway Fabric is installed in your Kubernetes cluster. If it is not installed yet, follow the installation documentation to set it up; if it is already installed, you can proceed directly to deploying your application.
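For reference, a typical installation flow looks like the sketch below. The chart location, release name, namespace, and Gateway API CRD version shown here are assumptions based on a recent NGINX Gateway Fabric release, so confirm them against the installation documentation before running:

# Install the Gateway API CRDs (standard channel), then NGINX Gateway Fabric via Helm.
# The version ref and chart path are examples - check the docs for current values.
kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v2.0.0" | kubectl apply -f -
helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway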
Blue-Green Deployments
NGINX Gateway Fabric is a highly effective solution for seamless Blue-Green deployments, and it operates without the need for controller-specific annotations. This setup allows development teams to run two environments concurrently: the Blue environment, which serves as the current production environment, and the Green environment, which contains the new version of the application. Once the Green environment has been thoroughly tested and validated, traffic can be smoothly transitioned from Blue to Green, ensuring a zero-downtime upgrade.
Figure 1: Blue/Green Deployment with NGINX Gateway Fabric
To get started, you need to create the 'coffee-shop' application in your Kubernetes cluster. Begin by copying and pasting the following commands into your terminal. Make sure to follow each step carefully to successfully deploy your application and set up your Blue-Green deployment environment.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee-shop-blue
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee-shop-blue
  template:
    metadata:
      labels:
        app: coffee-shop-blue
    spec:
      containers:
        - name: coffee
          image: akashdan/coffee-shop:v1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-shop-blue
  namespace: default
spec:
  selector:
    app: coffee-shop-blue
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee-shop-green
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee-shop-green
  template:
    metadata:
      labels:
        app: coffee-shop-green
    spec:
      containers:
        - name: coffee
          image: akashdan/coffee-shop:v2
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-shop-green
  namespace: default
spec:
  selector:
    app: coffee-shop-green
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF
Once applied, you should see two pods and two services. Run the following command to verify that the resources were created:
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/coffee-shop-blue-587f7fc67b-6tk2w    1/1     Running   0          2d19h
pod/coffee-shop-green-5c8cddfcb8-pj8lr   1/1     Running   0          2d19h

NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/coffee-shop-blue    ClusterIP   10.0.190.62   <none>        80/TCP    2d19h
service/coffee-shop-green   ClusterIP   10.0.2.252    <none>        80/TCP    2d19h
service/kubernetes          ClusterIP   10.0.0.1      <none>        443/TCP   2d23h
The next step is to configure a gateway for your application. Please create the gateway by copying and running the commands below in your terminal.
Make sure to provide a hostname in the hostname field.
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      hostname: "<Your-Host-Name>"
EOF
Once the Gateway resource is created, NGINX Gateway Fabric will set up an NGINX Pod along with a Service to handle traffic routing. This Gateway is linked to NGINX Gateway Fabric via the gatewayClassName field, which is set to "nginx".
kubectl get gateway
NAME      CLASS   ADDRESS            PROGRAMMED   AGE
gateway   nginx   <Your-Host-Name>   True         73s

kubectl get pods,svc
NAME                                 READY   STATUS    RESTARTS   AGE
coffee-shop-blue-587f7fc67b-6tk2w    1/1     Running   0          2d18h
coffee-shop-green-5c8cddfcb8-pj8lr   1/1     Running   0          2d18h
gateway-nginx-6f8d46894c-hggfb       1/1     Running   0          41s

NAME                TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)        AGE
coffee-shop-blue    ClusterIP      10.0.190.62   <none>            80/TCP         2d18h
coffee-shop-green   ClusterIP      10.0.2.252    <none>            80/TCP         2d18h
gateway-nginx       LoadBalancer   10.0.136.94   172.168.104.157   80:30371/TCP   42s
kubernetes          ClusterIP      10.0.0.1      <none>            443/TCP        2d21h
Next, create the HTTPRoute by copying and pasting the following into your terminal:
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee-shop-route
spec:
  parentRefs:
    - name: gateway
      sectionName: http
  hostnames:
    - "<Your-Host-Name>"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: coffee-shop-blue
          port: 80
          weight: 100
        - name: coffee-shop-green
          port: 80
          weight: 0
EOF
To connect the coffee-shop’s HTTPRoute with the Gateway, include the Gateway's name in the parentRefs field; in our example, it’s “gateway”. The connection will be successful if the hostname and protocol specified in the HTTPRoute are allowed by at least one of the Gateway’s listeners.
Once the HTTPRoute resource is deployed, verify it using the following command:
kubectl get httproute
NAME                HOSTNAMES              AGE
coffee-shop-route   ["<Your-Host-Name>"]   3s
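You can also confirm that the Gateway accepted the route by inspecting the HTTPRoute's status conditions; the exact output varies by version, but you should see an Accepted condition with status True:

kubectl get httproute coffee-shop-route -o jsonpath='{.status.parents[0].conditions}'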
To confirm your deployment, verify that your domain name (<Your-Host-Name>) correctly resolves to the public IP address of the NGINX Service.
You can do this by opening a terminal or your web browser and entering your domain name. If configured correctly, you should see traffic reaching the Coffee-Shop-Blue application. Ensure that the DNS settings are properly updated and propagated. This will help confirm that your deployment is successful and accessible.
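One quick way to double-check DNS, assuming the dig utility is available (nslookup works similarly), is to compare what your domain resolves to against the external IP of the NGINX Gateway Fabric Service:

# External IP assigned to the NGF data plane Service
kubectl get svc gateway-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# Address your DNS record currently resolves to
dig +short <Your-Host-Name>

Once both addresses match, request the application: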
curl http://<Your-Host-Name>/
<!DOCTYPE html>
<html>
<head>
<title>Coffee Shop Demo</title>
</head>
<body style="background-color: #f0f8ff; text-align:center; font-family: Arial;">
<h1>☕ Coffee Shop Demo</h1>
<h2 style="color: #0077b6;">Version: v1 (Blue)</h2>
<p>Welcome to the original coffee shop experience!</p>
</body>
</html>
To gradually shift traffic between the two environments, consider implementing weighted routing. You might start by directing 80% of traffic to the Coffee-Shop-Blue application and 20% to the Coffee-Shop-Green application. Over time, adjust these weights to a 50/50 split and eventually move 100% of traffic to the new version. This controlled approach minimizes risk and ensures a smoother upgrade experience for users.
To change the traffic pattern, modify the HTTPRoute YAML accordingly.
      backendRefs:
        - name: coffee-shop-blue
          port: 80
          weight: 80
        - name: coffee-shop-green
          port: 80
          weight: 20
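If you would rather not edit and re-apply the manifest, a JSON patch against the live HTTPRoute achieves the same result. This is a sketch that assumes the backendRefs appear in the order defined above (blue first, green second):

kubectl patch httproute coffee-shop-route --type=json -p='[
  {"op": "replace", "path": "/spec/rules/0/backendRefs/0/weight", "value": 80},
  {"op": "replace", "path": "/spec/rules/0/backendRefs/1/weight", "value": 20}
]'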
After applying the updated HTTPRoute, refresh the page in your browser to see the change. Alternatively, you can run the script below to observe the traffic distribution from your terminal: copy it, save it as test.sh, and be sure to substitute your hostname.
#!/bin/bash
# Send ten requests and print the version banner returned by each.
for i in {1..10}
do
  echo "Request #$i"
  curl -s http://<Your-Host-Name>/ | grep "Version:"
  echo ""
done
Roughly 80% of the traffic will be directed to the Coffee-Shop-Blue application, with the remaining 20% going to the Coffee-Shop-Green application.
./test.sh
Request #1
<h2 style="color: #0077b6;">Version: v1 (Blue)</h2>
Request #2
<h2 style="color: #0077b6;">Version: v1 (Blue)</h2>
Request #3
<h2 style="color: #0077b6;">Version: v1 (Blue)</h2>
Request #4
<h2 style="color: #0077b6;">Version: v1 (Blue)</h2>
Request #5
<h2 style="color: #0077b6;">Version: v1 (Blue)</h2>
Request #6
<h2 style="color: #0077b6;">Version: v1 (Blue)</h2>
Request #7
<h2 style="color: #009900;">Version: v2 (Green)</h2>
Request #8
<h2 style="color: #0077b6;">Version: v1 (Blue)</h2>
Request #9
<h2 style="color: #0077b6;">Version: v1 (Blue)</h2>
Request #10
<h2 style="color: #0077b6;">Version: v1 (Blue)</h2>
Safeguard Application Traffic with Let's Encrypt and cert-manager
Manually handling SSL/TLS certificates can quickly become a tedious and error-prone task. In today’s application environments, protecting communication between users and services is non-negotiable. A fundamental step in this protection is enabling HTTPS, which uses TLS/SSL to encrypt data as it travels across the network. Encryption ensures that information exchanged between clients and servers cannot be intercepted or altered.
To make this possible, applications need valid SSL/TLS certificates issued by a trusted Certificate Authority (CA). Traditionally, obtaining and maintaining these certificates has been a complicated process: renewals, key management, and configuration updates often require significant effort. Tools such as Let's Encrypt and cert-manager simplify this by automating certificate issuance and lifecycle management. When integrated with NGINX Gateway Fabric, cert-manager can seamlessly provision and renew TLS certificates for Kubernetes workloads. This setup delivers secure traffic without manual intervention, enforces HTTPS across services, and reduces both operational effort and security risk.
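As a rough illustration, a ClusterIssuer pointing at Let's Encrypt's production ACME endpoint might look like the sketch below. This assumes cert-manager is installed with its Gateway API HTTP-01 solver (gatewayHTTPRoute) enabled; the issuer name and email are placeholders, and the documentation linked below walks through the complete set of resources:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email>  # placeholder - Let's Encrypt sends expiry notices here
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          gatewayHTTPRoute:
            parentRefs:
              - name: gateway
                namespace: default
                kind: Gateway
EOF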
Figure 2: Secure your application with Let's Encrypt in NGINX Gateway Fabric
There are detailed steps on how to deploy cert-manager, ClusterIssuer, and other resources in our NGINX documentation.
Integration with NGINX One Console
NGINX One Console offers a centralized SaaS management dashboard that simplifies the oversight, policy enforcement, and lifecycle management of NGINX workloads. When integrated with NGINX Gateway Fabric, platform teams gain comprehensive insight into traffic patterns, CVEs, latency, error rates, and TLS health, empowering them to enforce policies and respond to issues more effectively and confidently.
Figure 3: Integration with NGINX One Console
To learn more about how to integrate your NGINX Gateway Fabric data plane with NGINX One Console, follow the step-by-step instructions in the docs.
Once you have established your setup with NGINX One Console, you will gain the ability to view all your NGINX assets through a single, unified interface. Currently, this platform allows you to track the versions of NGINX and NGINX Gateway Fabric, along with any CVEs linked to these versions. Additionally, you’ll have a read-only view of the NGINX Gateway Fabric configuration. For each instance, you can examine:
- Read-only configuration settings
- Unmanaged SSL/TLS certificates associated with Control Planes
Conclusion
NGINX Gateway Fabric is a fantastic option for organizations looking to modernize their deployment strategies. While testing with the open-source version can provide valuable insights and flexibility, the real value emerges with NGINX Plus. NGINX Plus not only builds on the foundational features of the Gateway API but also adds robust metrics and dynamic upstream configuration, giving teams the tools they need for comprehensive support and enhanced capabilities. Embracing NGINX Plus allows organizations to leverage both the freedom of open source and the advanced features necessary for enterprise success.
Resources
- Announcing F5 NGINX Gateway Fabric 2.0.0 with a New Distributed Architecture
- NGINX Gateway Fabric Docs
- F5 NGINX Gateway Fabric: Revolutionizing Kubernetes Traffic Management
- NGINX Introduces Native Support for ACME Protocol
- F5 NGINX Plus R35 Release Now Available