kubernetes
5 Technical Sessions That Should Be Great: F5 AppWorld 2025
These F5 Academy sessions explore modern app delivery, security, and operations. The full list of sessions is on the F5 AppWorld 2025 Academy page; if you haven't yet registered, you can do so here: Register for F5 AppWorld 2025

LAB - F5 Distributed Cloud: Discovering & Securing APIs
API security has never been more critical, and this lab dives straight into the tough stuff. Learn how to find hidden endpoints, detect sensitive data and authentication states, and apply integrated API security measures to keep your environment locked tight.

TECHNICAL BRIEFING - LLM Security and Delivery with F5’s Distributed Cloud Security Ecosystem
AI is fueling the next wave of applications, but it’s also introducing new security blind spots. This briefing explores how to secure LLMs and integrate the right solutions to ensure your AI-driven workloads remain fast, cost-effective, and protected.

LAB - F5 NGINX Plus Ingress as an API Gateway for Kubernetes
Containerized environments and microservices are here to stay, and this lab helps you navigate the complexity. Configure NGINX Plus Ingress as a powerful API gateway for your Kubernetes workloads, enabling schema enforcement, authorization, and rate-limiting all in one streamlined solution.

LAB - Zero Trust at Scale With F5 NGINX
Zero trust principles become a whole lot more meaningful when you can scale them. Get hands-on with NGINX Plus and BIG-IP GTM to build a robust, scalable zero trust architecture, ensuring secure and seamless app access across enterprises and multi-cluster Kubernetes environments.

LAB - F5 Distributed Cloud: Security Automation & Zero Day Mitigation
In this lab, you’ll learn how to leverage advanced matching criteria and custom rules to quickly respond to emerging threats. Shore up your defenses with automated policies that deliver frictionless security and agile zero-day mitigation.

Session Updates Coming in January 🚨
AppWorld's Breakout Sessions officially drop in January 2025, but here is a sneak preview! Check back in January to add these to your agenda.

- Global App Delivery With a Global Network
- How Generative AI Breaks Traditional Application Security and What You Can Do About It
- The New Wave of Bots: A Deep Dive into Residential IP Proxy Networks
- From ZTNA to Universal ZTNA: Expanding Your App Security Strategy

---

See you at F5 AppWorld 2025! #AppWorld25

NGINX Unit running in Distributed Cloud vK8s
F5 Distributed Cloud provides a mechanism to easily deploy applications using virtual Kubernetes (vK8s) across a global network. This results in applications running closer to the end user. This article will demonstrate creating a NGINX Unit container to run a simple server-side WebAssembly (Wasm) application and deliver it via Distributed Cloud Regional Edge (RE) sites. Distributed Cloud provides a vK8s infrastructure for deploying modern apps as well as load balancing and security services to securely and reliably deliver applications at the edge. NGINX Unit Pod A Kubernetes pod is the smallest execution unit that can be deployed in Kubernetes. A pod is a single instance of an application and may contain single or multiple containers. For this article, a single NGINX Unit container will make up the application pod and replicas of the pod will be deployed to each of the configured Distributed Cloud Points of Presence (PoPs). Building the NGINX Unit Container The first step is to build a server-side WebAssemby (Wasm) application that Unit can execute. Instructions on creating a simple Hello World Wasm application are available here. After the Hello World Wasm component is built, a NGINX Unit Container needs to be created to execute the application. To build the NGINX Unit container, download the Unit Wasm Dockerfile that is available on the NGINX Unit github repo. The Dockerfile needs to be modified to allow the container to run as a non-root user in the Distributed Cloud virtual Kubernetes (vK8s) environment. Below is the Dockerfile that I used for this demonstration. FROM debian:bullseye-slim LABEL org.opencontainers.image.title="Unit (wasm)" LABEL org.opencontainers.image.description="Official build of Unit for Docker." LABEL org.opencontainers.image.url="https://unit.nginx.org" LABEL org.opencontainers.image.source="https://github.com/nginx/unit" LABEL org.opencontainers.image.documentation="https://unit.nginx.org/installation/#docker-images" LABEL org.opencontainers.image.vendor="NGINX Docker Maintainers <docker-maint@nginx.com>" LABEL org.opencontainers.image.version="1.32.1" RUN set -ex \ && savedAptMark="$(apt-mark showmanual)" \ && apt-get update \ && apt-get install --no-install-recommends --no-install-suggests -y ca-certificates git build-essential libssl-dev libpcre2-dev curl pkg-config \ && mkdir -p /usr/lib/unit/modules /usr/lib/unit/debug-modules \ && mkdir -p /usr/src/unit \ && mkdir -p /unit/var/lib/unit \ && mkdir -p /unit/var/run \ && mkdir -p /unit/var/log \ && mkdir -p /unit/var/tmp \ && mkdir /app \ && chmod -R 777 /unit/var/lib/unit \ && chmod -R 777 /unit/var/run \ && chmod -R 777 /unit/var/log \ && chmod -R 777 /unit/var/tmp \ && chmod 777 /app \ && cd /usr/src/unit \ && git clone --depth 1 -b 1.32.1-1 https://github.com/nginx/unit \ && cd unit \ && NCPU="$(getconf _NPROCESSORS_ONLN)" \ && DEB_HOST_MULTIARCH="$(dpkg-architecture -q DEB_HOST_MULTIARCH)" \ && CC_OPT="$(DEB_BUILD_MAINT_OPTIONS="hardening=+all,-pie" DEB_CFLAGS_MAINT_APPEND="-Wp,-D_FORTIFY_SOURCE=2 -fPIC" dpkg-buildflags --get CFLAGS)" \ && LD_OPT="$(DEB_BUILD_MAINT_OPTIONS="hardening=+all,-pie" DEB_LDFLAGS_MAINT_APPEND="-Wl,--as-needed -pie" dpkg-buildflags --get LDFLAGS)" \ && CONFIGURE_ARGS_MODULES="--prefix=/usr \ --statedir=/unit/var/lib/unit \ --debug \ --control=unix:/unit/var/run/control.unit.sock \ --runstatedir=/unit/var/run \ --pid=/unit/var/run/unit.pid \ --logdir=/unit/var/log \ --log=/unit/var/log/unit.log \ --tmpdir=/unit/var/tmp \ --openssl \ --libdir=/usr/lib/$DEB_HOST_MULTIARCH" \ && 
CONFIGURE_ARGS="$CONFIGURE_ARGS_MODULES \ --njs" \ && make -j $NCPU -C pkg/contrib .njs \ && export PKG_CONFIG_PATH=$(pwd)/pkg/contrib/njs/build \ && ./configure $CONFIGURE_ARGS --cc-opt="$CC_OPT" --ld-opt="$LD_OPT" --modulesdir=/usr/lib/unit/debug-modules --debug \ && make -j $NCPU unitd \ && install -pm755 build/sbin/unitd /usr/sbin/unitd-debug \ && make clean \ && ./configure $CONFIGURE_ARGS --cc-opt="$CC_OPT" --ld-opt="$LD_OPT" --modulesdir=/usr/lib/unit/modules \ && make -j $NCPU unitd \ && install -pm755 build/sbin/unitd /usr/sbin/unitd \ && make clean \ && apt-get install --no-install-recommends --no-install-suggests -y libclang-dev \ && export RUST_VERSION=1.76.0 \ && export RUSTUP_HOME=/usr/src/unit/rustup \ && export CARGO_HOME=/usr/src/unit/cargo \ && export PATH=/usr/src/unit/cargo/bin:$PATH \ && dpkgArch="$(dpkg --print-architecture)" \ && case "${dpkgArch##*-}" in \ amd64) rustArch="x86_64-unknown-linux-gnu"; rustupSha256="0b2f6c8f85a3d02fde2efc0ced4657869d73fccfce59defb4e8d29233116e6db" ;; \ arm64) rustArch="aarch64-unknown-linux-gnu"; rustupSha256="673e336c81c65e6b16dcdede33f4cc9ed0f08bde1dbe7a935f113605292dc800" ;; \ *) echo >&2 "unsupported architecture: ${dpkgArch}"; exit 1 ;; \ esac \ && url="https://static.rust-lang.org/rustup/archive/1.26.0/${rustArch}/rustup-init" \ && curl -L -O "$url" \ && echo "${rustupSha256} *rustup-init" | sha256sum -c - \ && chmod +x rustup-init \ && ./rustup-init -y --no-modify-path --profile minimal --default-toolchain $RUST_VERSION --default-host ${rustArch} \ && rm rustup-init \ && rustup --version \ && cargo --version \ && rustc --version \ && make -C pkg/contrib .wasmtime \ && install -pm 755 pkg/contrib/wasmtime/target/release/libwasmtime.so /usr/lib/$(dpkg-architecture -q DEB_HOST_MULTIARCH)/ \ && ./configure $CONFIGURE_ARGS_MODULES --cc-opt="$CC_OPT" --modulesdir=/usr/lib/unit/debug-modules --debug \ && ./configure wasm --include-path=`pwd`/pkg/contrib/wasmtime/crates/c-api/include --lib-path=/usr/lib/$(dpkg-architecture -q DEB_HOST_MULTIARCH)/ && ./configure wasm-wasi-component \ && make -j $NCPU wasm-install wasm-wasi-component-install \ && make clean \ && ./configure $CONFIGURE_ARGS_MODULES --cc-opt="$CC_OPT" --modulesdir=/usr/lib/unit/modules \ && ./configure wasm --include-path=`pwd`/pkg/contrib/wasmtime/crates/c-api/include --lib-path=/usr/lib/$(dpkg-architecture -q DEB_HOST_MULTIARCH)/ && ./configure wasm-wasi-component \ && make -j $NCPU wasm-install wasm-wasi-component-install \ && cd \ && rm -rf /usr/src/unit \ && for f in /usr/sbin/unitd /usr/lib/unit/modules/*.unit.so; do \ ldd $f | awk '/=>/{print $(NF-1)}' | while read n; do dpkg-query -S $n; done | sed 's/^\([^:]\+\):.*$/\1/' | sort | uniq >> /requirements.apt; \ done \ && apt-mark showmanual | xargs apt-mark auto > /dev/null \ && { [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; } \ && /bin/true \ && mkdir -p /var/lib/unit/ \ && mkdir -p /docker-entrypoint.d/ \ && apt-get update \ && apt-get --no-install-recommends --no-install-suggests -y install curl $(cat /requirements.apt) \ && apt-get purge -y --auto-remove build-essential \ && rm -rf /var/lib/apt/lists/* \ && rm -f /requirements.apt \ && ln -sf /dev/stderr /var/log/unit.log WORKDIR /app COPY --chmod=755 hello_wasi_http.wasm /app COPY ./*.json /docker-entrypoint.d/ COPY --chmod=755 docker-entrypoint.sh /usr/local/bin/ COPY welcome.* /usr/share/unit/welcome/ STOPSIGNAL SIGTERM ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"] EXPOSE 8000 CMD ["unitd", "--no-daemon", "--control", 
"unix:/unit/var/run/control.unit.sock"] The changes to the Dockerfile are due to the non-root user not being able to write to the /var directory. Rather than modify the permissions on /var and its subdirectories, this docker file creates a /unit directory that is used for storing process information and logs. The docker-entrypoint.sh script also needed to be modified to use the /unit directory structure. The updated docker-entrypoint.sh script is shown below. #!/bin/sh set -e WAITLOOPS=5 SLEEPSEC=1 curl_put() { RET=$(/usr/bin/curl -s -w '%{http_code}' -X PUT --data-binary @$1 --unix-socket /unit/var/run/control.unit.sock http://localhost/$2) RET_BODY=$(echo $RET | /bin/sed '$ s/...$//') RET_STATUS=$(echo $RET | /usr/bin/tail -c 4) if [ "$RET_STATUS" -ne "200" ]; then echo "$0: Error: HTTP response status code is '$RET_STATUS'" echo "$RET_BODY" return 1 else echo "$0: OK: HTTP response status code is '$RET_STATUS'" echo "$RET_BODY" fi return 0 } if [ "$1" = "unitd" ] || [ "$1" = "unitd-debug" ]; then if /usr/bin/find "/unit/var/lib/unit/" -mindepth 1 -print -quit 2>/dev/null | /bin/grep -q .; then echo "$0: /unit/var/lib/unit/ is not empty, skipping initial configuration..." else echo "$0: Launching Unit daemon to perform initial configuration..." /usr/sbin/$1 --control unix:/unit/var/run/control.unit.sock for i in $(/usr/bin/seq $WAITLOOPS); do if [ ! -S /unit/var/run/control.unit.sock ]; then echo "$0: Waiting for control socket to be created..." /bin/sleep $SLEEPSEC else break fi done # even when the control socket exists, it does not mean unit has finished initialisation # this curl call will get a reply once unit is fully launched /usr/bin/curl -s -X GET --unix-socket /unit/var/run/control.unit.sock http://localhost/ if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -print -quit 2>/dev/null | /bin/grep -q .; then echo "$0: /docker-entrypoint.d/ is not empty, applying initial configuration..." echo "$0: Looking for certificate bundles in /docker-entrypoint.d/..." for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.pem"); do echo "$0: Uploading certificates bundle: $f" curl_put $f "certificates/$(basename $f .pem)" done echo "$0: Looking for JavaScript modules in /docker-entrypoint.d/..." for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.js"); do echo "$0: Uploading JavaScript module: $f" curl_put $f "js_modules/$(basename $f .js)" done echo "$0: Looking for configuration snippets in /docker-entrypoint.d/..." for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.json"); do echo "$0: Applying configuration $f"; curl_put $f "config" done echo "$0: Looking for shell scripts in /docker-entrypoint.d/..." for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.sh"); do echo "$0: Launching $f"; "$f" done # warn on filetypes we don't know what to do with for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -not -name "*.sh" -not -name "*.json" -not -name "*.pem" -not -name "*.js"); do echo "$0: Ignoring $f"; done else echo "$0: /docker-entrypoint.d/ is empty, creating 'welcome' configuration..." curl_put /usr/share/unit/welcome/welcome.json "config" fi echo "$0: Stopping Unit daemon after initial configuration..." kill -TERM $(/bin/cat /unit/var/run/unit.pid) for i in $(/usr/bin/seq $WAITLOOPS); do if [ -S /unit/var/run/control.unit.sock ]; then echo "$0: Waiting for control socket to be removed..." 
/bin/sleep $SLEEPSEC else break fi done if [ -S /unit/var/run/control.unit.sock ]; then kill -KILL $(/bin/cat /unit/var/run/unit.pid) rm -f /unit/var/run/control.unit.sock fi echo echo "$0: Unit initial configuration complete; ready for start up..." echo fi fi exec "$@" The Dockerfile copies all files ending in .json to a directory named /docker-entrypoint.d. The docker-entrypoint.sh script checks the /docker-entrypoint.d directory for configuration files that are used to configure NGINX Unit. The following config.json file was created in the same directory as the Dockerfile and docker-entrypoint.sh script. { "listeners": { "*:8000": { "pass": "applications/wasm" } }, "applications": { "wasm": { "type": "wasm-wasi-component", "component": "/app/hello_wasi_http.wasm" } } } This config file instructs NGINX unit to listen on port 8000 and pass any client request to the Wasm application component located at /app/hello_wasi_http.wasm. The last step before building the container is to copy the Wasm application component into the same directory as the Dockerfile, docker-entrypoint.sh, and config.json files. With these four files in the same directory, the docker build command is used to create a container: docker build --no-cache --platform linux/amd64 -t hello-wasi-xc:1.0 . Here I use the --platform option to specify that the container will run on a linux/amd64 platform. I also use -t to give the image a name. The portion of the name after the ":" can be used for versioning. For example, if I added a new feature to my application, I could create a new image with -t hello-wasi-xc:1.1 to represent version 1.1 of the hello-wasi-xc application. Once the image is built, it can be uploaded to the container registry of your choice. I have access to an Azure Container Registry, so that is what I chose to use. Distributed Cloud supports pulling images from both public and private container registries. To push the image to my registry, I had to first tag the image with the registry location ({{registry_name}} should be replaced with the name of your registry): docker tag hello-wasi-xc:1.0 {{registry_name}}/examples/hello-wasi-xc:1.0 Next, I logged into my Azure registry: az login az acr login --name {{registry_name}} I then pushed the image to the registry: docker push {{registry_name}}/examples/hello-wasi-xc:1.0 F5 Distributed Cloud Virtual Kubernetes F5 Distributed Cloud Services support a Kubernetes compatible API for centralized orchestration of applications across a fleet of sites (customer sites or F5 Distributed Cloud Regional Edges). Distributed Cloud utilizes a distributed control plane to manage scheduling and scaling of applications across multiple sites. Kubernetes Objects Virtual Kubernetes supports the following Kubernetes objects: Deployments, StatefulSets, Jobs, CronJob, DaemonSet, Service, ConfigMap, Secrets, PersistentVolumeClaim, ServiceAccount, Role, and Role Binding. For this article I will focus on the Deployments object. A deployment is commonly used for stateless applications like web-servers, front-end web-ui, etc. A deployment works well for this use case because the NGINX Unit container is serving up a stateless web application. Workload Object A workload within Distributed Cloud is used to configure and deploy components of an application in Virtual Kubernetes. 
Workload encapsulates all the operational characteristics of Kubernetes workload, storage, and network objects (deployments, statefulsets, jobs, persistent volume claims, configmaps, secrets, and services), as well as configuration related to where the workload is deployed and how it is advertised using L7 or L4 load balancers. Within Distributed Cloud there are four types of workloads: Simple Service, Service, Stateful Service, and Job.

Namespaces

In Distributed Cloud, tenant configuration objects are grouped under namespaces. Namespaces can be thought of as administrative domains. Within each namespace, a user can create an object called vK8s and use that for application management. Each namespace can have a maximum of one vK8s object.

Distributed Cloud vK8s Configuration

vK8s is configured under the Distributed Apps tile within the Distributed Cloud console. After clicking on that tile, I created a new Virtual K8s by clicking Add Virtual K8s from the Virtual K8s menu under Applications. This brings up the Virtual K8s configuration form. I provided a name for my vK8s site and clicked Save and Exit. This initiates the vK8s build.

Deploy the Application

Once the vK8s cluster is ready, I click on the name of my vK8s to configure the deployment of my application. For this demo, I am going to deploy my application via Workloads. I click on Workloads and then Add VK8s Workload. This brings up the Workload configuration form. I provide my Workload a name, select Service for Workload type, and click Configure. Using Service allows me to have more granular control over how my application will be advertised. Next, I click Add Item under Containers to specify where Distributed Cloud can pull my container image from. In the resulting form, I supply a name for my container, supply the public registry FQDN along with the container name and version, and click Apply. Next, I configure where to advertise the application. I want to manually configure my HTTP LB, so I chose to advertise in Cluster. Manually configuring the LB allows me to configure additional options such as a Web Application Firewall (WAF). Within the Advertise in Cluster form, I specify port 8000, because that is the port my container is listening on. Click Apply and then click Save and Exit.

Create an HTTP Load Balancer to Advertise the Application

The next step is to create an HTTP Load Balancer to expose the application to the Internet. This can be done by clicking Manage, selecting Load Balancers, and clicking on HTTP Load Balancers. Next, click on Add HTTP Load Balancer to create a new load balancer. I gave the LB a Name, Domain, and Load Balancer Type, and checked the box to Automatically Manage DNS Records. Then I configured the Origin Pool by clicking Add Item in the Origin Pool section and then clicking Add Item under the Origin Pool. On the resulting form, I created a name for my Origin Pool and then clicked Add Item under Origin Servers to add an origin server. In the Origin Server form, I selected K8s Service Name of Origin Server on given Sites, Service Name, and provided my vK8s workload name along with my namespace. I also selected Virtual Site, created a Virtual Site for the REs I wanted to use to access my application, and clicked Continue. Back on the Origin Server form, I selected vK8s Networks on Site from the Select Network on the site drop-down and then clicked Apply. On the Origin Pool form, I specified port 8000 for my origin server Port and then clicked Continue.
I then clicked Apply to return to the LB configuration form. I added a WAF policy to protect my application and then clicked Save and Exit. At this point my application is now accessible on the domain name I specified in the HTTP Load Balancer config (http://aconley-wasm.amer-ent.f5demos.com/).

Conclusion

This is a basic demo of using NGINX Unit and Distributed Cloud Virtual Kubernetes to deploy a server-side Wasm application. The principles in this demo could be expanded to build more complex applications.

BIG-IP for SIP resources running in Kubernetes
Hello,

We are trying to set up a Virtual Server using BIG-IP that would serve as a load balancer for SIP traffic for resources that are deployed in a Kubernetes cluster and exposed through NodePort. Our F5 is not part of the Kubernetes cluster; it is a standalone Virtual Machine that sends its traffic to the NodePort service of our SIP resources. We are facing a few issues and hope someone can help us understand them.

UDP not working

When we try to use UDP, the problem is that the F5 (10.224.64.223) sends SIP OPTIONS to the IP address/port that we defined as the access point for the SIP elements in Kubernetes (Node IP and NodePort, 10.224.64.222, port 31131). But due to the Kubernetes deployment, responses are sent from a different IP address and port (10.224.64.220, port 30834), and this gets rejected by the F5.

10:17:23.695039 IP 10.224.64.223.51938 > 10.224.64.222.31131: UDP, length 575 out slot1/tmm1 lis=mon_mrf_sip_udp port=1.2 trunk=
10:17:23.700849 IP 10.224.64.220.30834 > 10.224.64.223.51938: UDP, length 520 in slot1/tmm0 lis= port=1.2 trunk=
10:17:23.700949 IP 10.224.64.223 > 10.224.64.220: ICMP 10.224.64.223 udp port 51938 unreachable, length 36 out slot1/tmm0 lis= port=1.2 trunk=

Even the usage of macvlan on the Kubernetes pods does not help. With macvlan we manage to preserve the IP address (10.224.64.225), but the port still changes (5060 -> 25404), and the F5 rejects it.

10:42:07.370926 IP 10.224.64.223.54412 > 10.224.64.225.5060: SIP: OPTIONS sip:10.224.64.225:5060 SIP/2.0 out slot1/tmm0 lis= port=1.2 trunk=
10:42:07.378237 IP 10.224.64.225.25404 > 10.224.64.223.54412: UDP, length 425 in slot1/tmm0 lis= port=1.2 trunk=
10:42:07.378325 IP 10.224.64.223 > 10.224.64.225: ICMP 10.224.64.223 udp port 54412 unreachable, length 36 out slot1/tmm0 lis= port=1.2 trunk=

So I guess there is no way to have it working for UDP at all with resources being deployed in a Kubernetes cluster? (host-network is not an option).

TCP (in Message Routing mode) not working

When we try to use TCP, we found out that the "Standard (SIP - legacy profile)" mode behaves differently than the "Message Routing" one. When we use the "Legacy" SIP monitor via TCP, it establishes a TCP connection with the destination server prior to sending the SIP OPTIONS message. This is OK for us. But when we try to use "Message Routing" (from what I understood this is generally advisable for SIP traffic) for TCP monitoring, the TCP connection is not established before the OPTIONS message is sent, and this is not acceptable to our SIP servers.

So I have a few questions:

1. Is it even possible to use F5 BIG-IP LTM VE as a SIP LB for SIP resources operating in a Kubernetes cluster (for both UDP and TCP), or is the ONLY option to use F5 BIG-IP Next Service Proxy for Kubernetes (SPK) for SIP traffic?
2. Is there a way to somehow force an F5 that does monitoring using Message Routing mode to open a TCP connection prior to sending SIP requests?
3. Due to the UDP problem above (which is probably solvable only if the SPK version is used), is there some way for F5 to do a UDP-to-TCP conversion of SIP traffic?

Kind Regards,
Zvonimir

How to Prepare Your Network Infrastructure to Add HPC Clusters for AI to Your Data Center
HPC AI clusters are getting deployed as highly engineered 'lego blocks' which are opaque to established data center operations and standards. By taking advantage of established Kubernetes-based networking solutions that provide high-speed intelligent networking, you can save yourself from expensive cost overruns, data center re-auditing, and delays. By using Kubernetes-based solutions which take advantage of the high-speed networking already required by HPC AI deployments, you further optimize your investment in AI.

Announcing F5 NGINX Gateway Fabric 1.4.0 with IPv6 and TLS Passthrough
We announced the next release of F5 NGINX Gateway Fabric, version 1.4.0, which includes a lot of smaller but very necessary features. This allows us to dedicate more time to advancing our non-functional testing framework and ensuring we maintain top performance across releases. Nevertheless, we have some great highlights in this release:

- IPv6 support
- TLS passthrough (via TLSRoute)
- Server zone metrics
- Ability to add custom pod annotations
- Plenty of bug fixes!

During this release cycle, we discovered a bug around our custom policies that occurred when you had the same path for more than one Route: the policy would not be applied to either Route. For this release, we’ve decided to enforce a restriction so that policies cannot be applied when two or more routes share the same path. However, we are pursuing a long-term solution to lift this restriction on this edge case, as we understand that use cases that route based on header, query parameter, or other request attributes on the same path do exist.

IPv6 Support

While most Kubernetes clusters are still utilizing IPv4, we recognized that anyone employing an IPv6 cluster would have no ability to deploy NGINX Gateway Fabric. Thus, we implemented support for dual IPv4/IPv6 networking in NGINX Gateway Fabric. This option is enabled by default, so you can simply install as normal on an IPv6 cluster.

TLS Passthrough

New with 1.4 is TLSRoute support. This Route type enables the TLS passthrough use case and is similar to setting up an HTTPRoute. It allows you to pass encrypted traffic through NGINX Gateway Fabric where it is terminated by your backend application, ensuring end-to-end encryption. As most information passes through NGINX Gateway Fabric with this route, setup is easy. You can enable TLS passthrough for any application using our guide available here.
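As a rough sketch, a TLSRoute attaches to a Gateway listener configured for TLS passthrough and forwards the still-encrypted stream to the backend; the route, gateway, hostname, service name, and port below are placeholders rather than values from this release announcement:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: secure-app
spec:
  parentRefs:
  - name: gateway          # Gateway with a listener using tls.mode: Passthrough
    sectionName: tls
  hostnames:
  - "app.example.com"      # SNI used to select this route
  rules:
  - backendRefs:
    - name: secure-app     # Service whose pods terminate TLS themselves
      port: 8443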
Non-Functional Testing

This release marks the completion of automating the non-functional testing that we execute before each release. If you are unfamiliar with these tests, our team runs NGINX Gateway Fabric through a series of scenarios, non-functional tests, to check whether our performance is regressing or improving from previous releases. As an infrastructure product that you rely on, it is our top priority to ensure that stability and performance are not compromised as new features are released. The results of all non-functional testing are available in the GitHub repository for anyone to see and should give you an idea of how well NGINX Gateway Fabric performs in general and across releases.

What’s Next

NGINX Gateway Fabric 1.5.0 will bring NGINX code snippets to the Gateway API with a first-class Upstream Settings policy to configure keepalive connections and NGINX zone size. If you are familiar with NGINX or find that you need a feature that NGINX provides that is not yet available via a Gateway API extension, you can put an NGINX code snippet within a SnippetFilter to apply NGINX configuration to a Route rule. You will even be able to use the feature to load other modules NGINX provides and leverage the vast wealth of NGINX functionality. We will still be providing many NGINX features via first-class policies and filters, such as the Upstream Settings policy, as these custom policies and filters allow us to handle much of the complexity of applying NGINX configuration across the Gateway API framework for you. The Upstream Settings policy, for example, can set upstream management directives that cannot be applied effectively via snippets. We will continue to deliver these custom policies and filters across all of our releases, in addition to new Gateway API resources and NGINX Gateway Fabric specific features. You can see a preview of the full snippet design here, though not all features may be implemented in one release cycle. For more information on our strategy towards first-class NGINX customization via Gateway API extensions, see our full enhancement proposal here.

Resources

For the complete changelog for NGINX Gateway Fabric 1.4.0, see the Release Notes. To try NGINX Gateway Fabric for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases. If you would like to get involved, see what is coming next, or see the source code for NGINX Gateway Fabric, check out our repository on GitHub! We have weekly community meetings on Tuesdays at 9:30AM Pacific/12:30PM Eastern/5:30PM GMT. The meeting link, updates, agenda, and notes are on the NGINX Gateway Fabric Meeting Calendar. Links are also always available from our GitHub readme.

Announcing F5 NGINX Gateway Fabric 1.3.0 with Tracing, GRPCRoute, and Client Settings
The release of NGINX Gateway Fabric version 1.3.0 introduces plenty of highly requested features and improvements. GRPCRoutes are now supported to manage gRPC traffic, similar to the handling of HTTPRoute. The update includes new custom policies like ClientSettingsPolicy for client request configurations and ObservabilityPolicy for enabling application tracing with OpenTelemetry support. The GRPCRoute allows for efficient routing, header modifications, traffic weighting, and error conversion from HTTP to gRPC.

We explain how to set up NGINX Gateway Fabric to manage gRPC traffic using a Gateway and a GRPCRoute, providing a detailed example of the setup. The article also outlines how to enable tracing through the NginxProxy resource and ObservabilityPolicy, emphasizing a selective approach to tracing to avoid data overload. Additionally, the ClientSettingsPolicy allows for the customization of NGINX directives at the Gateway or Route level, giving users control over certain NGINX behaviors with the possibility of overriding Gateway defaults at the Route level.

Looking ahead, the NGINX Gateway Fabric team plans to work on TLS passthrough, IPv6, and improvements to the testing suite, while preparing for larger updates like NGINX directive customization and separation of the data and control planes. Check the end of the article to see how to get involved in the development process through GitHub and participate in bi-weekly community meetings. Further resources and links are also provided within.
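To make the GRPCRoute support concrete, a minimal route matching a single gRPC method might look roughly like the following; the route, gateway, hostname, service, and port names are placeholders, and the apiVersion may be v1alpha2 on older Gateway API versions:

apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: grpc-route
spec:
  parentRefs:
  - name: gateway                    # Gateway managed by NGINX Gateway Fabric
  hostnames:
  - "grpc.example.com"
  rules:
  - matches:
    - method:
        service: helloworld.Greeter  # gRPC service
        method: SayHello             # gRPC method
    backendRefs:
    - name: grpc-backend
      port: 50051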
Securing and Scaling Hybrid Apps with F5/NGINX (Part 3)

In part 2 of our series, I demonstrated how to configure ZT (Zero Trust) use cases centering around authentication with NGINX Plus in hybrid environments. We deployed NGINX Plus as the external LB to route and authenticate users connecting to my Kubernetes applications. In this article, we explore other areas of the ZT spectrum configurable on the External LB Service, including:

- Authorization and Access
- Encryption
- mTLS
- Monitoring/Auditing

ZT Use case #1: Authorization

Many people think that authentication and authorization can be used interchangeably. However, they mean different things. Authentication is the process of verifying user identities based on the credentials presented. Even though authenticated users are verified by the system, they do not necessarily have the authority to access protected applications. That is where authorization comes into play. Authorization is the process of verifying the authority of an identity before granting access to an application.

Authorization in the context of OIDC authentication involves retrieving claims from user ID tokens and setting conditions to validate whether the user is authorized to enter the system. An authenticated user is granted an ID token from the IdP with specific user information through JWT claims. The configuration of these claims is typically set from the IdP. Revisiting the OIDC auth use case configured in the previous section, we can retrieve the ID tokens of authenticated users from the NGINX key-value store.

$ curl -i http://localhost:8010/api/9/http/keyvals/oidc_access_tokens

Then we can view the decoded value of the ID token using jwt.io. Below is an example of decoded payload data from the ID token.

{
    "exp": 1716219261,
    "iat": 1716219201,
    "admin": true,
    "name": "Micash",
    "zone_info": "America/Los_Angeles",
    "jti": "9f8ff4bd-4857-4e12-9634-e5876f786f98",
    "iss": "http://idp.f5lab.com:8080/auth/realms/master",
    "aud": "account",
    "typ": "Bearer",
    "azp": "appworld2024",
    "nonce": "gMNK3tu06j6tp5-jGa3aRhkj4F0P-Z3e04UfcFeqbes"
}

NGINX Plus has access to these claims as embedded variables. They are accessed by prefixing $jwt_claim_ to the desired field (for example, $jwt_claim_admin for the admin claim). We can easily set conditions on these claims and block unauthorized users before they even reach the back-end applications.

Going back to our frontend.conf file from the previous part of our series, we can set the $jwt_status variable to 0 or 1 based on the value of the admin JWT claim. We then use the auth_jwt_require directive to validate the ID token. ID tokens with the admin claim set to false will be rejected.

map $jwt_claim_admin $jwt_status {
    "true"   1;
    default  0;
}

server {
    include conf.d/openid_connect.server_conf; # Authorization code flow and Relying Party processing
    error_log /var/log/nginx/error.log debug;  # Reduce severity level as required

    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    server_name example.work.gd;

    ssl_certificate /etc/ssl/nginx/default.crt; # self-signed for example only
    ssl_certificate_key /etc/ssl/nginx/default.key;

    location / {
        # This site is protected with OpenID Connect
        auth_jwt "" token=$session_jwt;
        error_page 401 = @do_oidc_flow;

        auth_jwt_key_request /_jwks_uri; # Enable when using URL

        auth_jwt_require $jwt_status;

        proxy_pass https://cluster1-https; # The backend site/app
    }
}

Note: Authorization with NGINX Plus is not restricted to only JWT tokens. You can set conditions on a variety of attributes, such as:

- Session cookies
- HTTP headers
- Source/Destination IP addresses
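As an example, here is a sketch of what header- and source-address-based checks could look like on the same external LB; the X-Internal-User header and the address range are hypothetical and only illustrate the pattern:

map $http_x_internal_user $internal_user {
    "true"   1;
    default  0;
}

server {
    # ... existing listeners and TLS settings ...

    location /admin {
        # Reject requests that do not carry the (hypothetical) internal-user header
        if ($internal_user = 0) {
            return 403;
        }

        # Only allow clients from an internal range (example range)
        allow 10.0.0.0/8;
        deny all;

        proxy_pass https://cluster1-https;
    }
}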
ZT use case #2: Mutual TLS Authentication (mTLS)

When it comes to ZT, mTLS is one of the mainstream use cases falling under the Zero Trust umbrella. For example, enterprises are using Service Mesh technologies to stay compliant with ZT standards, because Service Mesh technologies aim to secure service-to-service communication using mTLS. In many ways, mTLS is similar to the OIDC use case we implemented in the previous section; only here, we are leveraging digital certificates to encrypt and authenticate traffic. This underlying framework is defined by PKI (Public Key Infrastructure).

To explain this framework in simple terms, we can refer to a simple example: the driver's license you carry in your wallet. Your driver’s license can be used to validate your identity, the same way digital certificates can be used to validate the identity of applications. Similarly, only the state can issue valid driver's licenses, the same way only Certificate Authorities (CAs) can issue valid certificates to applications. It is also important that no one other than the CA can issue valid certificates. Therefore, every CA must have a secure private key to sign and issue valid certificates.

Configuring mTLS with NGINX can be broken down into two parts:

- Ingress mTLS: securing SSL client traffic and validating client certificates against a trusted CA.
- Egress mTLS: securing SSL upstream traffic and offloading authentication of TLS material to a trusted HTTPS back-end server.

Ingress mTLS

You can configure ingress mTLS on the NLK deployment by simply referencing the trusted certificate authority with the ssl_client_certificate directive in the server context. This will configure NGINX to validate client certificates against the referenced CA.

Note: If you do not have a CA, you can create one using OpenSSL or the Cloudflare PKI and TLS toolkits.

server {
    listen 443 ssl;
    status_zone https://cafe.example.com;
    server_name cafe.example.com;

    ssl_certificate /etc/ssl/nginx/default.crt;
    ssl_certificate_key /etc/ssl/nginx/default.key;

    ssl_client_certificate /etc/ssl/ca.crt;
}

Egress mTLS

Egress mTLS is a slight alternative to ingress mTLS where NGINX verifies certificates of upstream applications rather than certificates originating from clients. This feature can be enabled by adding the proxy_ssl_trusted_certificate directive to the server context. You can reference the same trusted CA we used for verification when configuring ingress mTLS or reference a different CA. In addition to verifying server certificates, NGINX as a reverse proxy can pass certs/keys to and offload verification onto HTTPS upstream applications. This can be done by adding the proxy_ssl_certificate and proxy_ssl_certificate_key directives in the server context.

server {
    listen 443 ssl;
    status_zone https://cafe.example.com;
    server_name cafe.example.com;

    ssl_certificate /etc/ssl/nginx/default.crt;
    ssl_certificate_key /etc/ssl/nginx/default.key;

    # Ingress mTLS
    ssl_client_certificate /etc/ssl/ca.crt;

    # Egress mTLS
    proxy_ssl_certificate /etc/nginx/secrets/default-egress.crt;
    proxy_ssl_certificate_key /etc/nginx/secrets/default-egress.key;
    proxy_ssl_trusted_certificate /etc/nginx/secrets/default-egress-ca.crt;
}
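To sanity-check the ingress side, you can send a request that presents a client certificate issued by the trusted CA (note that enforcement also requires ssl_verify_client to be enabled in the server block, which the excerpt above leaves implicit). The certificate and key file names below are placeholders for whatever your CA issued:

# Request presenting a client certificate signed by the trusted CA
curl --cacert /etc/ssl/ca.crt \
     --cert client.crt --key client.key \
     https://cafe.example.com/

# Request without a client certificate (should be rejected once verification is enforced)
curl --cacert /etc/ssl/ca.crt https://cafe.example.com/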
ZT use case #3: Security Assertion Markup Language (SAML)

SAML (Security Assertion Markup Language) is an alternative SSO solution to OIDC. Many organizations may choose between SAML and OIDC depending on requirements and the IdPs they currently run in production. SAML requires an SP (Service Provider) to exchange XML messages via HTTP POST binding with a SAML IdP. Once exchanges between the SP and IdP are successful, the user will have session access to the protected back-end applications with one set of user credentials.

In this section, we will configure NGINX Plus as the SP and enable SAML with the IdP. This will be similar to how we configured NGINX Plus as the relying party in an OIDC authorization code flow (see ZT use case #1).

Setting up the IdP

The one prerequisite is setting up your IdP. In our example, we will set up Microsoft Entra ID on Azure. You can use the SAML IdP of your choosing. Once the SAML application is created in your IdP, you can access the SSO fields necessary to link your SP (NGINX Plus) to your IdP (Microsoft Entra ID). You will need to edit the basic SAML configuration by clicking on the pencil icon next to Edit in Basic SAML Configuration. Add the following values and click Save:

- Identifier (Entity ID): https://fourth.run.place
- Reply URL (Assertion Consumer Service URL): https://fourth.run.place/saml/acs
- Sign on URL: https://fourth.run.place
- Logout URL (Optional): https://fourth.run.place/saml/sls

Finally, download the Certificate (Raw) from Microsoft Entra ID and save it to your NGINX Plus instance. This certificate is used to verify signed SAML assertions received from the IdP. Once the certificate is saved on the NGINX Plus instance, extract the public key from the downloaded certificate and convert it to SPKI format. We will use this certificate later when we configure NGINX Plus in the next section.

$ openssl x509 -in demo-nginx.der -outform DER -out demo-nginx.der
$ openssl x509 -inform DER -in demo-nginx.der -pubkey -noout > demo-nginx.spki

Configuring NGINX Plus as the SAML Service Provider

After the IdP is set up, we can configure NGINX Plus as the SP to exchange and validate XML messages with the IdP. Once logged into the NGINX Plus instance, simply clone the nginx-saml GitHub repo.

$ git clone https://github.com/nginxinc/nginx-saml.git && cd nginx-saml

Copy the config files into the /etc/nginx/conf.d directory.

$ cp frontend.conf saml_sp.js saml_sp.server_conf saml_sp_configuration.conf /etc/nginx/conf.d/

Notice that by default, frontend.conf listens on port 8010 with clear-text HTTP. You can merge kube_lb.conf into frontend.conf to enable TLS termination and update the upstream context with the application endpoints you wish to protect with SAML. Finally, we will need to edit the saml_sp_configuration.conf file and update the variables in the map context based on the parameters of your SP and IdP:

- $saml_sp_entity_id: https://fourth.run.place
- $saml_sp_acs_url: https://fourth.run.place/saml/acs
- $saml_sp_sign_authn: false
- $saml_sp_want_signed_response: false
- $saml_sp_want_signed_assertion: true
- $saml_sp_want_encrypted_assertion: false
- $saml_idp_entity_id: unique identifier that identifies the IdP to the SP; this field is retrieved from your IdP
- $saml_idp_sso_url: the login URL, also retrieved from the IdP
- $saml_idp_verification_certificate: references the certificate downloaded in the previous section when setting up the IdP; it verifies signed assertions received from the IdP. Use the full path (/etc/nginx/conf.d/demo-nginx.spki)
- $saml_sp_slo_url: https://fourth.run.place/saml/sls
- $saml_idp_slo_url: the logout URL retrieved from the IdP
- $saml_sp_want_signed_slo: true
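For illustration, the reference configuration defines each of these variables with a map block keyed on $host, so the edits end up looking roughly like the following (only a few entries shown, single-host setup assumed; check the comments in saml_sp_configuration.conf for the exact layout):

map $host $saml_sp_entity_id {
    default "https://fourth.run.place";
}

map $host $saml_sp_acs_url {
    default "https://fourth.run.place/saml/acs";
}

map $host $saml_idp_verification_certificate {
    default "/etc/nginx/conf.d/demo-nginx.spki";
}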
The remaining variables defined in saml_sp_configuration.conf can be left unchanged, unless there is a specific requirement for enabling them. Once the variables are set appropriately, we can reload NGINX Plus.

$ nginx -s reload

Testing

Now we will verify the SAML flow. Open your browser and enter https://fourth.run.place in the address bar. This should redirect you to the IdP login page. Once you log in with your credentials, you should be granted access to the protected application.

ZT use case #4: Monitoring/Auditing

NGINX logs/metrics can be exported to a variety of third-party providers, including Splunk, Prometheus/Grafana, cloud providers (AWS CloudWatch and Azure Monitor Logs), Datadog, the ELK stack, and more. You can also monitor NGINX metrics and logs natively with NGINX Instance Manager or NGINX SaaS. The NGINX Plus API provides a lot of flexibility by exporting metrics to any third-party tool that accepts JSON. For example, you can export NGINX Plus API metrics to the native real-time dashboard from part 1. Whichever tool you choose, monitoring and auditing the data generated by your IT systems is key to understanding and optimizing your applications.
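As a simple illustration, the same API that feeds the dashboard can be scraped and shipped to any tool that accepts JSON; the port and API version below match the dashboard configuration used earlier in this series and may differ in your environment:

# Pull upstream metrics as JSON from the NGINX Plus API
$ curl -s http://<nginx-plus-ip>:9000/api/9/http/upstreams

# Per-zone request/response counters
$ curl -s http://<nginx-plus-ip>:9000/api/9/http/server_zones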
Conclusion

Cloud providers offer a convenient way to expose Kubernetes Services to the internet. Simply create a Kubernetes Service of type LoadBalancer, and external users connect to your services via a public entry point. However, cloud load balancers do nothing more than basic TCP/HTTP load balancing. You can configure NGINX Plus with many Zero Trust capabilities as you scale out your environment to multiple clusters in different regions, which is what we will cover in the next part of our series.

Securing and Scaling Hybrid Application with F5 NGINX (Part 1)

If you are using Kubernetes in production, then you are likely using an ingress controller. The ingress controller is the core engine managing traffic entering and exiting the Kubernetes cluster. Because the ingress controller is a deployment running inside the cluster, how do you route traffic to the ingress controller? How do you route external traffic to internal Kubernetes Services?

Cloud providers offer a simple, convenient way to expose Kubernetes Services using an external load balancer. Simply deploy a Managed Kubernetes Service (EKS, GKE, AKS) and create a Kubernetes Service of type LoadBalancer. The cloud provider will host and deploy a load balancer providing a public IP address. External users can connect to Kubernetes Services using this public entry point. However, this integration only applies to Managed Kubernetes Services hosted by cloud providers. If you are deploying Kubernetes in private cloud/on-prem environments, you will need to deploy your own load balancer and integrate it with the Kubernetes cluster. Furthermore, Kubernetes load balancing integrations in the cloud are limited to TCP load balancing and generally lack visibility into metrics, logs, and traces.

We propose:

- A solution that applies regardless of the underlying infrastructure running your workloads
- Guidance around sizing to avoid bottlenecks from high traffic volumes
- Application delivery use cases that go beyond basic TCP/HTTP load balancing

In the solution depicted below, I deploy NGINX Plus as the external LB service for Kubernetes and route traffic to the NGINX Ingress Controller. The NGINX Ingress Controller will then route the traffic to the application backends. The NLK (NGINX Load Balancer for Kubernetes) deployment is a new controller by NGINX that monitors specified Kubernetes Services and sends API calls to manage the upstream endpoints of the NGINX external load balancer. In this article, I will deploy both the components inside the Kubernetes cluster and NGINX Plus as the external load balancer.

Note: I like to deploy both the NLK and the Kubernetes cluster in the same subnet to avoid network issues. This is not a hard requirement.

Prerequisites

The blog assumes you have experience operating in Kubernetes environments. In addition, you have the following:

- Access to a Kubernetes environment: bare metal, Rancher Kubernetes Engine (RKE), VMware Tanzu Kubernetes (VTK), Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), Microsoft Azure Kubernetes Service (AKS), or Red Hat OpenShift
- NGINX Ingress Controller – deploy NGINX Ingress Controller in the Kubernetes cluster. Installation instructions can be found in the documentation.
- NGINX Plus – deploy NGINX Plus on a VM or bare metal with SSH access. This will be the external LB service for the Kubernetes cluster. Installation instructions can be found in the documentation. You must have a valid license for NGINX Plus. You can get started today by requesting a 30-day free trial.

Setting up the Kubernetes environment

I start with deploying the back-end applications. You can deploy your own applications, or you can deploy our basic café application as an example.

$ kubectl apply -f cafe.yaml

Now I will configure routes and TLS settings for the ingress controller.

$ kubectl apply -f cafe-secret.yaml
$ kubectl apply -f cafe-virtualserver.yaml

To ensure the ingress rules are successfully applied, you can examine the output of kubectl get vs. The VirtualServer definition should be in the Valid state.

NAMESPACE   NAME      STATE   HOST               IP   PORTS
default     cafe-vs   Valid   cafe.example.com
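For reference, the VirtualServer applied above follows the standard café example and looks roughly like this (abridged; the secret and upstream service names come from the example repo and may differ in your deployment):

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe-vs
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea
  - path: /coffee
    action:
      pass: coffee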
Setting up NGINX Plus as the external LB

A fresh install of NGINX Plus will provide the default.conf file in the /etc/nginx/conf.d directory. We will add two additional files into this directory. Simply copy the files below into your /etc/nginx/conf.d directory:

- dashboard.conf: enables the real-time monitoring dashboard for NGINX Plus
- kube_lb.conf: the NGINX configuration for the external load balancer for Kubernetes. You can change the configuration file to fit your requirements. In part 1 of this series, we enable basic routing and TLS for one cluster.

You will also need to generate a TLS cert/key pair and place them in the /etc/ssl/nginx folder of the NGINX Plus instance. For the sake of this example, we will generate a self-signed certificate with openssl.

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout default.key -out default.crt -subj "/CN=NLK"

Note: Using self-signed certificates is for testing purposes only. In a live production environment, we recommend using a secure vault that will hold your secrets and trusted CAs (Certificate Authorities).

Now I can validate the configuration and reload nginx for the changes to take effect.

$ nginx -t
$ nginx -s reload

I can now connect to the NGINX Plus dashboard by opening a browser and entering http://<external-ip-nginx>:9000/dashboard.html#upstreams. The HTTP upstream table should be empty, as we have not deployed the NLK Controller yet. We will do that in the next section.

Installing the NLK Controller

You can install the NLK Controller as a Kubernetes deployment that will configure upstream endpoints for the external load balancer using the NGINX Plus API. First, we will create the NLK namespace:

$ kubectl create ns nlk

And apply the RBAC settings for the NLK deployment:

$ kubectl apply -f serviceaccount.yaml
$ kubectl apply -f clusterrole.yaml
$ kubectl apply -f clusterrolebinding.yaml
$ kubectl apply -f secret.yaml

The next step is to create a ConfigMap defining the API endpoint of the NGINX Plus external load balancer. The API endpoint is used by the NLK Controller to configure the NGINX Plus upstream endpoints. We simply modify the nginx-hosts field in the manifest from our GitHub repository to the IP address of the NGINX external load balancer.

nginx-hosts: http://<nginx-plus-external-ip>:9000/api

Apply the updated ConfigMap and deploy the NLK controller:

$ kubectl apply -f nkl-configmap.yaml
$ kubectl apply -f nkl-deployment

I can verify the NLK controller deployment is running and the ConfigMap data is applied.

$ kubectl get pods -o wide -n nlk
$ kubectl describe cm nginx-config -n nlk

You should see the NLK deployment in status Running, and the URL should be defined under nginx-hosts. The URL is the NGINX Plus API endpoint of the external load balancer. Now that the NLK Controller is successfully deployed, the external load balancer is ready to route traffic to the cluster.

The final step is deploying a Kubernetes Service of type NodePort to expose the Kubernetes cluster to NGINX Plus.

$ kubectl apply -f nodeport.yaml
There are a couple of things to note about the NodePort Service manifest. Fields on lines 7 and 14 are required for the NLK deployment to configure the external load balancer appropriately:

- The nginxinc.io/nlk-cluster1-https annotation (line 7)
- The port name (line 14) matching the upstream block definition in the NGINX Plus configuration (see line 42 in kube_lb.conf) and prefixed with nlk-

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    nginxinc.io/nlk-cluster1-https: "http" # Must be added
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    name: nlk-cluster1-https
  selector:
    app: nginx-ingress

Once the Service is applied, you can note down the NodePort assigned to the Service selecting the NGINX Ingress Controller deployment. In this example, that node port is 32222.

$ kubectl get svc -o wide -n nginx-ingress
NAME            TYPE       CLUSTER-IP   PORT(S)         SELECTOR
nginx-ingress   NodePort   x.x.x.x      443:32222/TCP   app=nginx-ingress

If I reconnect to my NGINX Plus dashboard, the upstream tab should be populated with the worker node IPs of the Kubernetes cluster, matching the node port of the nginx-ingress Service (32222). You can list the node IPs of your cluster to make sure they match the IPs in the dashboard upstream tab.

$ kubectl get nodes -o wide | awk '{print $6}'
INTERNAL-IP
10.224.0.6
10.224.0.5
10.224.0.4

Now I can connect to the Kubernetes application from our local machine. The hostname we used in our example (cafe.example.com) should resolve to the IP address of the NGINX Plus load balancer.

Wrapping it up

Most enterprises deploying Kubernetes in production will install an ingress controller. It is the de facto standard for application delivery in container orchestrators like Kubernetes. DevOps/NetOps engineers are now looking for guidance on how to scale out their Kubernetes footprint in the most efficient way possible. Because enterprises are embracing the hybrid approach, they will need to implement their own integrations outside of cloud providers. The solution we propose:

- Is applicable to hybrid environments (particularly on-prem)
- Provides sizing information to avoid bottlenecks from large traffic volumes
- Delivers enterprise load balancing capabilities that stretch beyond a TCP LoadBalancer Service

In the next part of our series, I will dive into the third bullet point in much more detail and cover Zero Trust use cases with NGINX Plus, providing an extra layer of security in your hybrid model.