Announcing F5 NGINX Gateway Fabric 1.3.0 with Tracing, GRPCRoute, and Client Settings

Today we are announcing the release of F5 NGINX Gateway Fabric 1.3.0, which adds our next route type, GRPCRoute. As the name suggests, it lets you ingest and manage gRPC traffic with NGINX Gateway Fabric the same way you would with an HTTPRoute. gRPC is a powerful protocol to reach for when you need better performance or streaming capabilities beyond what the traditional HTTP REST APIs common across the web provide.

This is the first release where we have extended the Gateway API with custom policies that directly expose advanced NGINX functionality and configuration not otherwise available in Gateway API resources such as Routes or Gateways. With our new ClientSettingsPolicy and ObservabilityPolicy, you can enable and configure NGINX features within the framework the Gateway API provides by attaching the policies to Routes or Gateways. The ClientSettingsPolicy lets you set configuration around client request limits and keepalive connections. The ObservabilityPolicy, together with some configuration in NginxProxy, lets you enable tracing for your applications, although for this release it can only be attached to a Route.

 

GRPCRoute 

NGINX Gateway Fabric 1.3.0 brings core support for GRPCRoute. If your applications already use the gRPC framework, or you are evaluating it, you can now route ingress traffic to them with a GRPCRoute.

Many teams are adopting gRPC over traditional HTTP REST APIs for its data efficiency and latency benefits: because gRPC transmits messages in a compact binary format, gRPC calls typically incur lower latency than HTTP REST calls. That is why we wanted to make sure it was supported in NGINX Gateway Fabric.

To enable this, our data plane, NGINX, provides the HTTP/2 transport, giving gRPC traffic optimal proxy performance and minimal overhead as it passes through NGINX Gateway Fabric. With core support for GRPCRoute, you can:

  • Match traffic to be routed based on hostname, headers, or method 
  • Modify request or response headers 
  • Split traffic by weight 

For full and up-to-date compatibility details, see the gRPC section of our compatibility page here.

Enabling gRPC traffic management with NGINX Gateway Fabric is straightforward. For example, to set up NGINX Gateway Fabric to forward gRPC traffic to a single application, we only need a Gateway and a GRPCRoute.


Gateway: 

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: listener-1
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
      kinds:
      - kind: GRPCRoute
    hostname: bar.com

 

GRPCRoute: 

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: grpc-route-1
spec:
  parentRefs:
  - name: my-gateway
    sectionName: listener-1
  rules:
  - backendRefs:
    - name: grpc-application-service
      port: 80

 

Once these two resources have been created, any gRPC traffic sent to “my-gateway” on port 80 for bar.com will be forwarded to the service “grpc-application-service”. For other features and the API reference, check out the Gateway API documentation available here.
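To illustrate the matching and traffic-splitting capabilities listed above, here is a minimal sketch of a GRPCRoute that matches a single gRPC method and splits the matching traffic between two backends by weight. The helloworld.Greeter service, the SayHello method, and the grpc-app-v1/grpc-app-v2 services are hypothetical placeholders for your own applications:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: grpc-route-split
spec:
  parentRefs:
  - name: my-gateway
    sectionName: listener-1
  rules:
  - matches:
    - method:
        service: helloworld.Greeter
        method: SayHello
    backendRefs:
    # 90% of matching requests go to v1, 10% to v2
    - name: grpc-app-v1
      port: 80
      weight: 90
    - name: grpc-app-v2
      port: 80
      weight: 10

Requests to other gRPC methods on bar.com would need to be covered by another rule or route.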

 

OpenTelemetry Tracing Support 

To enable tracing, we have not simply bolted the relevant NGINX configuration onto the product; we have built the feature into extensions of the Gateway API. By using global settings in the NginxProxy resource and attaching our new ObservabilityPolicy to a Route, you can configure NGINX Gateway Fabric to generate traces and forward them to your tracing backend.

These extensions align with our strategy of extending the Gateway API to create a cohesive configuration experience based on the roles the Gateway API defines. As we’ve seen with Ingress, custom configuration that isn’t defined in a standard way can lead to a lot of confusion, and that is something we want to avoid.

As a “cluster operator,” you would create the NginxProxy resource and attach it to the GatewayClass. This defines where traces should be sent and which span attributes to include in the traces collected. An example of an NginxProxy resource is below:

 

NginxProxy:

apiVersion: gateway.nginx.org/v1alpha1 
kind: NginxProxy 
metadata: 
  name: ngf-proxy-config
spec: 
  telemetry: 
    exporter: 
      endpoint: jaeger.tracing.svc:4317 
    spanAttributes: 
    - key: cluster 
      value: my-cluster 
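For these settings to take effect, the NginxProxy resource must be referenced from the GatewayClass through its parametersRef field. Below is a minimal sketch, assuming the GatewayClass is named nginx and uses the controller name from a standard NGINX Gateway Fabric installation; in practice you would patch the GatewayClass your install created rather than create a new one:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: nginx
spec:
  controllerName: gateway.nginx.org/nginx-gateway-controller
  parametersRef:
    # Points at the NginxProxy resource defined above
    group: gateway.nginx.org
    kind: NginxProxy
    name: ngf-proxy-config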

 

While the NginxProxy resource defines where traces go, no traces will be collected until an ObservabilityPolicy is created. Usually, this policy will be created by an application developer and attached to one of their own routes. Below is an example of an ObservabilityPolicy attached to a pre-existing route, “coffee”:

 

ObservabilityPolicy: 

apiVersion: gateway.nginx.org/v1alpha1 
kind: ObservabilityPolicy
metadata: 
  name: coffee-tracing 
spec: 
  targetRefs: 
  - group: gateway.networking.k8s.io 
    kind: HTTPRoute 
    name: coffee 
  tracing: 
    strategy: ratio 
    ratio: 50 
    spanAttributes: 
    - key: route 
      value: coffee 

 

In the above example, 50% of requests to the “coffee” route are traced; omitting the ratio gives the default 100% capture rate. As always, when applying these resources in your cluster, make sure to check the status of everything you create!

We elected not to provide a setting that enables tracing globally because, as your cluster scales, the amount of tracing data it generates grows quickly. A global option would likely produce far more data than you can use, so we recommend enabling tracing only for the endpoints that need it. But as always, let us know on GitHub if you have a strong use case for a global option!

For the full documentation on how to configure tracing for your environment, check here.

 

Client Settings Policy 

As the name implies, NGINX Gateway Fabric uses NGINX OSS or NGINX Plus as its data plane. This means we can leverage all the features NGINX provides as we develop NGINX Gateway Fabric, but we also want to expose that configuration to our users. One aspect of our strategy towards exposing NGINX functionality is to promote frequently used settings into extensions of the Gateway API. The ClientSettingsPolicy is our first step towards doing that. 

The ClientSettingsPolicy currently allows you to configure five NGINX directives at either the Gateway or Route level: client_max_body_size, client_body_timeout, keepalive_requests, keepalive_time, and keepalive_timeout.

As the names of the directives imply, you can use these to change the default NGINX behavior for client request bodies and keepalive connections. Below is an example of a ClientSettingsPolicy attached to a Gateway:

apiVersion: gateway.nginx.org/v1alpha1 
kind: ClientSettingsPolicy 
metadata: 
  name: gateway-client-settings 
  namespace: default 
spec: 
  targetRef: 
    group: gateway.networking.k8s.io 
    kind: Gateway 
    name: my-gateway 
  body: 
    maxSize: 5m 
    timeout: 30s 
  keepAlive: 
    requests: 3000 
    time: 5s 
    timeout: 
      server: 2s 
      header: 1s 

 

If you attach this policy at both the Gateway and Route levels, any fields specified at the Route level override the corresponding values defined at the Gateway level. That way, the cluster operator can set appropriate defaults that can be overridden when needed for specific applications. A short guide on inherited policy attachment can be found here.
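As a sketch of that override behavior, the hypothetical policy below targets the “coffee” HTTPRoute used earlier (assumed to live in the default namespace) and raises only the maximum body size; the remaining settings are inherited from the Gateway-level policy above:

apiVersion: gateway.nginx.org/v1alpha1
kind: ClientSettingsPolicy
metadata:
  name: coffee-client-settings
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: coffee
  body:
    # Overrides the 5m limit set at the Gateway level for this route only
    maxSize: 50m

With this in place, the coffee route accepts request bodies up to 50m, while the timeout and keepalive settings continue to come from the Gateway-level policy.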

 

For the full documentation on how to configure every aspect of the ClientSettingsPolicy, check it out here! 

What’s Next 

In the next release cycle, we will be working on smaller features such as TLS Passthrough and IPv6 support while we improve our testing suite and follow up on bugs. At the same time, we will be laying the groundwork for the bigger features of the following release, including NGINX directive customization and separating our data and control planes.

As many experienced users of NGINX know, there are a lot of features NGINX Gateway Fabric could tap into simply by providing a direct interface to NGINX directives. So instead of trying to expose absolutely everything via the Gateway API, we are going to provide an interface more akin to NGINX Ingress Controller's snippets: a way to insert your own directives, with your own values, into specific places in the configuration NGINX Gateway Fabric sends to NGINX.

While we will also provide first-class configuration via Gateway API extensions, this feature ensures you can leverage existing directives now rather than waiting for each one to be implemented as its own extension. Over time, we will continue promoting more NGINX functionality into extensions of our own.

For more information on our strategy towards NGINX customization, see our full enhancement proposal here. 

Resources 

For the complete changelog for NGINX Gateway Fabric 1.3.0, see the Release Notes. To try NGINX Gateway Fabric for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases. 

If you would like to get involved, see what is coming next, or see the source code for NGINX Gateway Fabric, check out our repository on GitHub! 

We have bi-weekly community meetings on Mondays at 9AM Pacific/5PM GMT. The meeting link, updates, agenda, and notes are on the NGINX Gateway Fabric Meeting Calendar. Links are also always available from our GitHub readme. 

Published Aug 06, 2024