Getting Started with the Certified F5 NGINX Gateway Fabric Operator on Red Hat OpenShift
As enterprises modernize their Kubernetes strategies, the shift from standard Ingress Controllers to the Kubernetes Gateway API is redefining how we manage traffic. For years, the F5 NGINX Ingress Controller has been a foundational component in OpenShift environments.
With the certification of F5 NGINX Gateway Fabric (NGF) 2.2 for Red Hat OpenShift, that legacy enters its next chapter.
This new certified operator brings the high-performance NGINX data plane into the standardized, role-oriented Gateway API model—with full integration into OpenShift Operator Lifecycle Manager (OLM). Whether you're a platform engineer managing cluster ingress or a developer routing traffic to microservices, NGF on OpenShift 4.19+ delivers a unified, secure, and fully supported traffic fabric.
In this guide, we walk through installing the operator, configuring the NginxGatewayFabric resource, and addressing OpenShift-specific networking patterns such as NodePort + Route.
Why NGINX Gateway Fabric on OpenShift?
While Red Hat OpenShift 4.19+ includes native support for the Gateway API (v1.2.1), integrating NGF adds critical enterprise capabilities:
✔ Certified & OpenShift-Ready
The operator is fully validated by Red Hat, ensuring UBI-compliant images and compatibility with OpenShift’s strict Security Context Constraints (SCCs).
✔ High Performance, Low Complexity
NGF delivers the core benefits long associated with NGINX—efficiency, simplicity, and predictable performance.
✔ Advanced Traffic Capabilities
Capabilities like Regular Expression path matching and support for ExternalName services allow for complex, hybrid-cloud traffic patterns.
✔ AI/ML Readiness
NGF 2.2 supports the Gateway API Inference Extension, enabling inference-aware routing for GenAI and LLM workloads on platforms like Red Hat OpenShift AI.
Prerequisites
Before we begin, ensure you have:
- Cluster Administrator access to an OpenShift cluster (version 4.19 or later is recommended for Gateway API GA support).
- Access to the OpenShift Console and the oc CLI.
- Ability to pull images from ghcr.io or your internal mirror.
Step 1: Installing the Operator from OperatorHub
We leverage the Operator Lifecycle Manager (OLM) for a "point-and-click" installation that handles lifecycle management and upgrades.
- Log into the OpenShift Web Console as an administrator.
- Navigate to Operators > OperatorHub.
- Search for NGINX Gateway Fabric in the search box.
- Select the NGINX Gateway Fabric Operator card and click Install.
- Accept the default installation mode (All namespaces) or select a specific namespace (e.g. nginx-gateway), and click Install. Wait until the status shows Succeeded.
Once installed, the operator will manage NGF lifecycle automatically.
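If you prefer a scripted or GitOps-style install, the same subscription can be created declaratively through OLM. The manifests below are a sketch: the channel and package names are assumptions — confirm the real values with `oc get packagemanifests -n openshift-marketplace | grep -i nginx` before applying.

```yaml
# Hypothetical OLM manifests for a declarative install.
# Verify the actual package name and channel first:
#   oc get packagemanifests -n openshift-marketplace | grep -i nginx
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-gateway
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: nginx-gateway-og
  namespace: nginx-gateway
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nginx-gateway-fabric
  namespace: nginx-gateway
spec:
  channel: stable              # assumed channel name
  name: nginx-gateway-fabric   # assumed package name
  source: certified-operators
  sourceNamespace: openshift-marketplace
```

Applying these with `oc apply -f` gives the same result as the console flow, with the manifests under version control.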
Step 2: Configuring the NginxGatewayFabric Resource
Unlike the Ingress Controller, which used NginxIngressController resources, NGF uses the NginxGatewayFabric Custom Resource (CR) to configure the control plane and data plane.
- In the Console, go to Installed Operators > NGINX Gateway Fabric Operator.
- Click the NginxGatewayFabric tab and select Create NginxGatewayFabric.
- Select YAML view to configure the deployment specifics.
Step 3: Exposing the NGINX Data Plane Service
NGF exposes its data plane through a Kubernetes Service. Before the data plane launches, we must tell the controller how that Service should be exposed.
Option A - LoadBalancer (ROSA, ARO, Managed OpenShift)
By default, the NGINX Gateway Fabric Operator configures the service type as LoadBalancer. On public cloud managed OpenShift services (like ROSA on AWS or ARO on Azure), this native default works out-of-the-box to provision a cloud load balancer.
No additional steps required.
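To confirm the cloud load balancer was provisioned, watch the data-plane Service for an external address (the Service name follows the `<cr-name>-nginx-gateway-fabric` pattern shown later in this guide); this is a sketch to run against your own cluster:

```shell
# Watch for an EXTERNAL-IP to appear (a hostname on ROSA/AWS, an IP on ARO/Azure):
oc get svc -n nginx-gateway -w
```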
Option B - NodePort with OpenShift Route (On-Prem/Hybrid)
For on-premises or bare-metal OpenShift clusters that lack a native LoadBalancer implementation, the common pattern is to use a NodePort service exposed via an OpenShift Route.
Update the NGF CR to use NodePort
- In the Console, go to Installed Operators > NGINX Gateway Fabric Operator.
- Click the NginxGatewayFabric tab and select your existing NginxGatewayFabric instance.
- Select YAML view to directly edit the configuration specifics.
- Change the spec.nginx.service.type to NodePort:
apiVersion: gateway.nginx.org/v1alpha1
kind: NginxGatewayFabric
metadata:
  name: default
  namespace: nginx-gateway
spec:
  nginx:
    service:
      type: NodePort
Create the OpenShift Route:
After applying the CR, create a Route to expose the NGINX Service.
oc create route edge ngf \
  --service=nginxgatewayfabric-sample-nginx-gateway-fabric \
  --port=http \
  -n nginx-gateway
Note: This creates an Edge TLS termination route. For passthrough TLS (allowing NGINX to handle certificates), use --passthrough and target the https port.
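As a sketch, an equivalent passthrough Route expressed as a manifest might look like the following; the Service name matches the sample CR used in this guide, so adjust it to your own deployment:

```yaml
# Hypothetical passthrough Route: TLS is not terminated at the router,
# so NGINX receives the encrypted stream and handles certificates itself.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: ngf-passthrough
  namespace: nginx-gateway
spec:
  to:
    kind: Service
    name: nginxgatewayfabric-sample-nginx-gateway-fabric
  port:
    targetPort: https
  tls:
    termination: passthrough
```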
Step 4: Validating the Deployment
Verify that the operator has deployed the control plane pods successfully.
oc get pod -n nginx-gateway
NAME                                                              READY   STATUS    RESTARTS   AGE
nginx-gateway-fabric-controller-manager-dd6586597-bfdl5           1/1     Running   0          23m
nginxgatewayfabric-sample-nginx-gateway-fabric-564cc6df4d-hztm8   1/1     Running   0          18m
oc get gatewayclass
NAME    CONTROLLER                                   ACCEPTED   AGE
nginx   gateway.nginx.org/nginx-gateway-controller   True       4d1h
You should also see a GatewayClass named nginx. This indicates the controller is ready to manage Gateway resources.
Step 5: Functional Check with Gateway API
To test traffic, we will use the standard Gateway API resources (Gateway and HTTPRoute).
- Deploy a Test Application (Cafe Service). Ensure you have a backend service running; a simple HTTP service is enough for validation.
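If you don't have a backend handy, a minimal stand-in like the one below works. The `coffee` name and port 80 match the HTTPRoute created in a later step; the nginx-unprivileged image is an assumption, chosen because it listens on 8080 and runs under OpenShift's restricted SCC — any HTTP server will do.

```yaml
# Hypothetical minimal backend for validation only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxinc/nginx-unprivileged:stable  # assumed image; substitute your own
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: coffee
spec:
  selector:
    app: coffee
  ports:
  - port: 80         # port referenced by the HTTPRoute backendRef
    targetPort: 8080
```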
- Create a Gateway. This resource opens the listener on the NGINX data plane.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
- Create an HTTPRoute. This binds the traffic to your backend service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
  - name: cafe
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: coffee
      port: 80
- Test Connectivity. If you used Option B (Route), send a request to your OpenShift Route hostname; if you used Option A, send it to the LoadBalancer address. Either way, the request's Host header must match the hostname in your HTTPRoute (cafe.example.com) for the route to match.
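For instance (a sketch — cafe.example.com and the addresses below are placeholders you must map to your own environment):

```shell
# Option A (LoadBalancer): point the HTTPRoute hostname at the LB address
# without touching DNS (203.0.113.10 is a documentation placeholder):
curl --resolve cafe.example.com:80:203.0.113.10 http://cafe.example.com/

# Option B (Route): the Route's host must match the HTTPRoute hostname,
# e.g. a Route created with --hostname=cafe.example.com, then:
curl -k https://cafe.example.com/
```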
OpenShift 4.19 Compatibility
It is also important to understand the "under the hood" constraints of OpenShift 4.19:
- Gateway API Version Pinning: OpenShift 4.19 ships with Gateway API CRDs pinned to v1.2.1. While NGF 2.2 supports v1.3.0 features, it has been conformance-tested against v1.2.1 to ensure stability within OpenShift's version-locked environment.
oc get crd gateways.gateway.networking.k8s.io -o yaml | grep "gateway.networking.k8s.io/"
gateway.networking.k8s.io/bundle-version: v1.2.1
gateway.networking.k8s.io/channel: standard
- Looking ahead, future NGINX Gateway Fabric releases may rely on newer Gateway API specifications that the pinned CRDs in OpenShift 4.19 do not support. If you anticipate running a newer NGF version that may not be compatible with the current OpenShift Gateway API version, please reach out to us to discuss your compatibility requirements.
- Security Context Constraints (SCC): In previous manual deployments, you might have wrestled with NET_BIND_SERVICE capabilities or creating custom SCCs. The Certified Operator handles these permissions automatically, using UBI-based images that comply with Red Hat's security standards out of the box.
Next Steps: AI Inference
With NGF running, you are ready for advanced use cases:
- AI Inference: Explore the Gateway API Inference Extension to route traffic to LLMs efficiently, optimizing GPU usage on Red Hat OpenShift AI.
The certified NGINX Gateway Fabric Operator simplifies the operational burden, letting you focus on what matters: delivering secure, high-performance applications and AI workloads.