DevOps
5 Technical Sessions That Should Be Great: F5 AppWorld 2025
These F5 Academy sessions explore modern app delivery, security, and operations. The full list of sessions is on the F5 AppWorld 2025 Academy page - if you haven't yet registered, you can do so here: Register for F5 AppWorld 2025

LAB - F5 Distributed Cloud: Discovering & Securing APIs
API security has never been more critical, and this lab dives straight into the tough stuff. Learn how to find hidden endpoints, detect sensitive data and authentication states, and apply integrated API security measures to keep your environment locked tight.

TECHNICAL BRIEFING - LLM Security and Delivery with F5’s Distributed Cloud Security Ecosystem
AI is fueling the next wave of applications—but it’s also introducing new security blind spots. This briefing explores how to secure LLMs and integrate the right solutions to ensure your AI-driven workloads remain fast, cost-effective, and protected.

LAB - F5 NGINX Plus Ingress as an API Gateway for Kubernetes
Containerized environments and microservices are here to stay, and this lab helps you navigate the complexity. Configure NGINX Plus Ingress as a powerful API gateway for your Kubernetes workloads, enabling schema enforcement, authorization, and rate-limiting all in one streamlined solution.

LAB - Zero Trust at Scale With F5 NGINX
Zero trust principles become a whole lot more meaningful when you can scale them. Get hands-on with NGINX Plus and BIG-IP GTM to build a robust, scalable zero trust architecture, ensuring secure and seamless app access across enterprises and multi-cluster Kubernetes environments.

LAB - F5 Distributed Cloud: Security Automation & Zero Day Mitigation
In this lab, you’ll learn how to leverage advanced matching criteria and custom rules to quickly respond to emerging threats. Shore up your defenses with automated policies that deliver frictionless security and agile zero-day mitigation.

Session Updates Coming in January 🚨
AppWorld's Breakout Sessions officially drop in January 2025, but here is a sneak preview! Check back in January to add these to your agenda.

Global App Delivery With a Global Network
How Generative AI Breaks Traditional Application Security and What You Can Do About It
The New Wave of Bots: A Deep Dive into Residential IP Proxy Networks
From ZTNA to Universal ZTNA: Expanding Your App Security Strategy

---

See you at F5 AppWorld 2025! #AppWorld25
Updating SSL Certificates on BIG-IP using REST API

This guide walks through simple cURL REST API commands to seamlessly update SSL certificates on a BIG-IP system. This method is ideal for those who prefer automation and want to integrate the process into their workflows. By following this guide, you will be able to:

Upload a certificate and private key.
Install them on the BIG-IP system.
Update an SSL profile with the new certificate and key.
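At a high level, the workflow maps to three iControl REST calls. The sketch below is only illustrative: it assumes basic-auth access to the management interface, and the host, credentials, file names, and profile name are placeholders. Exact endpoints and payload fields can vary by TMOS version, so verify them against the full guide before use.

#!/usr/bin/env bash
# Hypothetical host, credentials, and object names -- substitute your own.
BIGIP="https://bigip.example.com"
AUTH="admin:secret"
CERT="site.crt"
KEY="site.key"

# 1. Upload the certificate and key; uploads land in /var/config/rest/downloads/
for f in "$CERT" "$KEY"; do
  SIZE=$(wc -c < "$f")
  curl -sk -u "$AUTH" -X POST "$BIGIP/mgmt/shared/file-transfer/uploads/$f" \
    -H "Content-Type: application/octet-stream" \
    -H "Content-Range: 0-$((SIZE-1))/$SIZE" \
    --data-binary "@$f"
done

# 2. Install the uploaded files as crypto objects
curl -sk -u "$AUTH" -X POST "$BIGIP/mgmt/tm/sys/crypto/cert" \
  -H "Content-Type: application/json" \
  -d "{\"command\":\"install\",\"name\":\"$CERT\",\"from-local-file\":\"/var/config/rest/downloads/$CERT\"}"
curl -sk -u "$AUTH" -X POST "$BIGIP/mgmt/tm/sys/crypto/key" \
  -H "Content-Type: application/json" \
  -d "{\"command\":\"install\",\"name\":\"$KEY\",\"from-local-file\":\"/var/config/rest/downloads/$KEY\"}"

# 3. Point the client-ssl profile at the new pair
#    (newer TMOS versions may expect the certKeyChain structure instead)
curl -sk -u "$AUTH" -X PATCH "$BIGIP/mgmt/tm/ltm/profile/client-ssl/site_clientssl" \
  -H "Content-Type: application/json" \
  -d "{\"cert\":\"/Common/$CERT\",\"key\":\"/Common/$KEY\"}"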
Introducing the New Docker Compose Installation Option for F5 NGINX Instance Manager

F5 NGINX Instance Manager (NIM) is a centralised management solution designed to simplify the administration and monitoring of F5 NGINX instances across various environments, including on-premises, cloud, and hybrid infrastructures. It provides a single interface to efficiently oversee multiple NGINX instances, making it particularly useful for organizations using NGINX at scale.

We’re excited to introduce a new Docker Compose installation option for NGINX Instance Manager, designed to help you get up and running faster than ever before, in just a couple of steps.

Key Features:

Quick and Easy Installation: With just a couple of steps, you can pull and deploy NGINX Instance Manager on any Docker host, without having to manually configure multiple components. The image is available in our container registry, so once you have a valid license to access it, getting up and running is as simple as pulling the container.

Fault-Tolerant and Resilient: This installation option is designed with fault tolerance in mind. Persistent storage ensures your data is safe even in the event of container restarts or crashes. Additionally, with a separate database container, your product’s data is isolated, adding an extra layer of resilience and making it easier to manage backups and restores.

Seamless Upgrades: Upgrades are a breeze. You can update to the latest version of NGINX Instance Manager by simply updating the image tag in your Docker Compose file. This makes it easy to stay up to date with the latest features and improvements without worrying about downtime or complex upgrade processes.

Backup and Restore Options: To ensure your data is protected, this installation option comes with built-in backup and restore capabilities. Easily back up your data to a safe location and restore it in case of any issues.

Environment Configuration Flexibility: The Docker Compose setup allows you to define custom environment variables, giving you full control over configuration settings such as log levels, timeout values, and more.

Production-Ready: Designed for scalability and reliability, this installation method is ready for production environments. With proper resource allocation and tuning, you can deploy NGINX Instance Manager to handle heavy workloads while maintaining performance.

The following steps walk you through how to deploy and manage NGINX Instance Manager using Docker Compose.

What you need

A working version of Docker
Your NGINX subscription’s JSON Web Token from MyF5
This pre-configured docker-compose.yaml file: Download docker-compose.yaml file.

Step 1 - Set up Docker for the NGINX container registry

Log in to the Docker registry using the contents of the JSON Web Token file you downloaded from MyF5:

docker login private-registry.nginx.com --username=<JWT_CONTENTS> --password=none

Step 2 - Run “docker login” and then “docker compose up” in the directory where you downloaded docker-compose.yaml

Note: You can optionally set the Administrator password for NGINX Instance Manager prior to running Docker Compose.

~$ docker login private-registry.nginx.com --username=<JWT_CONTENTS> --password=none
~$ echo "admin" > admin_password.txt
~$ docker compose up -d
[+] Running 6/6
 ✔ Network nim_clickhouse        Created   0.1s
 ✔ Network nim_external_network  Created   0.2s
 ✔ Network nim_default           Created   0.2s
 ✔ Container nim-precheck-1      Started   0.8s
 ✔ Container nim-clickhouse-1    Healthy   6.7s
 ✔ Container nim-nim-1           Started   7.4s
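Once Compose reports the containers as started, a quick status check (and, later, the image-tag bump described under "Seamless Upgrades") looks roughly like this. The service name is inferred from the container names in the output above and may differ in your docker-compose.yaml:

# Confirm the containers are up and healthy
docker compose ps
docker compose logs -f nim    # "nim" service name inferred from the nim-nim-1 container above

# Later, to upgrade: edit the image tag in docker-compose.yaml, then
docker compose pull
docker compose up -d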
Step 3 – Access NGINX Instance Manager

Go to the NGINX Instance Manager UI on https://<<DOCKER_HOST>>:443 and license the product using the same JSON Web Token you downloaded from MyF5 earlier.

Conclusion

With this new setup, you can install and run NGINX Instance Manager on any Docker host in just three steps, dramatically reducing setup time and simplifying deployment. Whether you are working in a development environment or deploying to production, the Docker Compose-based solution ensures a seamless and reliable experience.

For more information on using the Docker Compose option with NGINX Instance Manager, such as running a backup and restore, using secrets, and more, please see the instructions here.
Introducing the Single Installation Script for F5 NGINX Instance Manager

F5 NGINX Instance Manager (NIM) is a centralised management tool designed to simplify the administration and monitoring of F5 NGINX instances across various environments, including on-premises, cloud, and hybrid infrastructures. It provides a single interface to efficiently oversee multiple NGINX instances, making it particularly useful for organisations using NGINX at scale.

In the past, installing NGINX Instance Manager and its dependencies could often be a tedious and error-prone process. Whether you’re dealing with databases, NGINX proxies, or multiple components that require manual configuration, the time and effort involved can quickly add up. This is where our new single-installation script comes into play. Designed to streamline and simplify the installation of NGINX Instance Manager, this script allows you to set up everything you need on a supported operating system with minimal effort. Gone are the days of hunting for specific installation commands, configuring dependencies manually, or worrying about compatibility issues between different systems. With this script, you can install NGINX Instance Manager, the Clickhouse database, and the NGINX proxy components in one seamless operation.

Key Features:

One-command installation: The script automates the installation of NGINX Instance Manager, the Clickhouse database, and the NGINX proxy. Just run the script, and it takes care of everything.

Cross-platform compatibility: The script supports all the operating systems specified in our technical specification document, making it versatile for different environments.

Offline installation support: For users working in environments without internet access, the script can pull all necessary binaries from our software repository, enabling installation on a disconnected host. Simply gather the required binaries beforehand, and you’re good to go!

License key integration: The only thing you’ll need to run the script in a fresh operating system environment are your license keys. This makes the process even easier, as there’s no need for additional setup steps.

What you need

A fresh machine with one of the supported operating systems installed.
The JSON Web Token, SSL certificate, and private key for NGINX Instance Manager, downloaded from MyF5 (including trials). You can use the same files as F5 NGINX Plus in your MyF5 portal.
The script, downloaded here.

Step 1 - Download and run the installation script

By default, the script:

Assumes you’re connected to the internet for installations and upgrades
Reads SSL files from the /etc/ssl/nginx directory
Installs the latest version of NGINX Open Source
Installs Clickhouse
Installs the latest version of NGINX Instance Manager

If you want to use the script with non-default options (e.g. to use NGINX Plus instead of Open Source), use these switches:

To point to a repository key stored in a directory other than /etc/ssl/nginx: -k /path/to/your/<nginx-repo.key> file
To point to a repository certificate stored in a directory other than /etc/ssl/nginx: -c /path/to/your/<nginx-repo.crt> file
To install NGINX Plus (instead of NGINX OSS): -p <nginx_plus_version> -j /path/to/licence.jwt

You also need to specify the current operating system.
To get the latest list supported by the script, run the following command:

grep '\-d distribution' install-nim-bundle.sh

To see other options in the script, run:

sudo bash install-nim-bundle.sh -h

For example, to use the script to install NGINX Instance Manager on Ubuntu 24.04, with repository keys in the default /etc/ssl/nginx directory, and with the latest version of NGINX Plus, run the following command:

sudo bash install-nim-bundle.sh -n latest -d ubuntu24.04 -j /path/to/license.jwt

In most cases, the script completes the installation of NGINX Instance Manager and associated packages. At the end of the process, you’ll see an auto-generated password. Save that password. You’ll need it when you sign in to NGINX Instance Manager.

Step 2 – Access NGINX Instance Manager

To access the NGINX Instance Manager web interface, open a web browser and go to https://<NIM_FQDN>, replacing <NIM_FQDN> with the fully qualified domain name of your NGINX Instance Manager host. The default administrator username is admin, and the generated password is saved, in encrypted format, to the /etc/nms/nginx/.htpasswd file. The password was displayed in the terminal during installation. If you’d like to change this password, refer to the “Set or Change User Passwords” section in the Basic Authentication topic (a quick htpasswd sketch also appears at the end of this article).

This single-installation solution significantly reduces the complexity of deploying Instance Manager, whether you’re operating in a connected or disconnected environment. Now, all it takes is a simple command to get everything up and running.

For more information and alternative options when using the script for NGINX Instance Manager, please see the instructions here.
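If you would rather set your own admin password than keep the generated one, one common approach (an assumption on my part; the Basic Authentication topic referenced above documents the supported procedure) is to rewrite the entry in the file mentioned earlier with the standard htpasswd utility:

# htpasswd ships with apache2-utils (Debian/Ubuntu) or httpd-tools (RHEL-family)
sudo htpasswd -b /etc/nms/nginx/.htpasswd admin 'YourNewStrongPassword'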
How I did it - "Securing NVIDIA’s Morpheus AI Framework with NGINX Plus Ingress Controller"

In this installment of "How I Did It," we continue our journey into AI security. I have documented how I deployed an NVIDIA Morpheus AI infrastructure along with F5's NGINX Plus Ingress Controller to provide secure and scalable external access.
GitOps: Declarative Infrastructure and Application Delivery with NGINX App Protect

First things first: what is GitOps? In a nutshell, GitOps is a practice (Git operations) that allows you to use Git and a code repository as your configuration source of truth (declarative Infrastructure as Code and Application Delivery as Code), coupled with various supporting tools. The state of your Git repository syncs with your infrastructure and application states. As the operations team runs daily operations (CRUD - "Create, Read, Update and Delete") leveraging the goodness and philosophy of DevOps, they are no longer required to store configuration manifests in various configuration systems. The Git repo is the source of truth. Typically, the target systems or infrastructures run on a Kubernetes-based platform. For further details and a better explanation of GitOps, please refer to the links below or search online.

https://codefresh.io/learn/gitops/
https://www.atlassian.com/git/tutorials/gitops

Why practice GitOps?

I have been managing my lab environment for many years. I use my lab for research of technologies, customer demos/proofs-of-concept, application testing, and code development. Due to the nature of constant changes to my environment (agile and dynamic), especially with my multiple versions of Kubernetes platforms, I have been spending too much time updating, changing, building, deploying and testing various cloud native apps. Commands like docker build, kubectl, istioctl and git have been constantly and repetitively used to operate environments. Hence, I practice GitOps for my Kubernetes infrastructure. Of course, tasks/operations can be automated and orchestrated with tools such as Ansible, Terraform, Chef and Puppet. You may not necessarily need GitOps to achieve a similar outcome. I manage with GitOps practice partly to learn the new "language" and to experience first-hand the full benefits of GitOps.

Here are some of my learnings and operational experience that I have been using to manage F5's NGINX App Protect and the many demo apps protected by it, which may benefit you and give you some insight into how you can run your own GitOps. You may leverage your own GitOps workflow from here. For details and a description of F5's NGINX App Protect, please refer to https://www.nginx.com/products/nginx-app-protect/

Key architecture decisions of my GitOps workflow:

Modular architecture - allows me to swap technologies in and out without rework ("Lego blocks").
Must reduce my operational work - saves time, no repetitive tasks, write once and deploy many.
Centralize all my configuration manifests - a single source of truth. Currently, my configuration exists everywhere - jump hosts, local laptop, cloud storage, etc. I had lost track of which configuration was the latest.
Must be simple, modern, easy to understand and as native as possible.

Use case and desired outcome:

Build and keep up to date NGINX Plus Ingress Controller with NGINX App Protect in my Kubernetes environment.
Build and keep up to date NGINX App Protect's attack signatures and Threat Campaign signatures.
Zero downtime/impact to apps protected by NGINX App Protect across frequent release, update and patch cycles for NGINX App Protect.

My problem statement

I need to ensure that my infrastructure (Kubernetes Ingress controller) and web application firewall (NGINX App Protect) are kept up to date with ease.
For example, when there are new NGINX-ingress and NGINX App Protect updates (e.g., a new version, attack signatures or threat campaign signatures), I would like to seamlessly push changes out to NGINX-ingress and NGINX App Protect (as it protects my backend apps) without impacting the applications protected by NGINX App Protect.

GitOps workflow

Start small, start with a clear workflow. Below is a depiction of the overall GitOps workflow. NGINX App Protect is the target application. Hence, before describing the full GitOps workflow, let us understand the deployment options for NGINX App Protect.

NGINX App Protect deployment model

There are four deployment models for NGINX App Protect. Common deployment models are:

Edge - external load balancer and proxies (global enforcement)
Dynamic module inside the Ingress Controller (per service/URI/ingress resource enforcement)
Per-service proxy model - Kubernetes service tier (per service enforcement)
Per-pod proxy model - proxy embedded in the pod (per endpoint enforcement)

The pipeline demonstrated in this article will work with NGINX App Protect deployed either as an Ingress Controller (2) or in the per-service proxy model (3). For the purposes of this article, NGINX App Protect is deployed at the Ingress controller at the entry point to Kubernetes (Kubernetes edge proxy).

For those who prefer video:

Part 1/3 – GitOps with NGINX Plus Ingress and NGINX App Protect - Overview
Part 2/3 – GitOps with NGINX Plus Ingress and NGINX App Protect – Demo in Action
Part 3/3 - GitOps with NGINX Plus Ingress and NGINX App Protect - WAF Security Policy Management

Description of the GitOps workflow

1. Operations (myself) updates the nginx-ingress + NAP image repo (e.g. .gitlab-ci.yml in nginx-plus-ingress) via VS Code, performs a code merge and commits the changes to the repo stored in GitLab.
2. The GitLab CI/CD pipeline is triggered. Build, test and deploy jobs start.
3. Git clone the kubernetes-ingress repo from https://github.com/nginxinc/kubernetes-ingress/. Check out the latest kubernetes-ingress version and build a new image with the DockerfilewithAppProtectForPlus dockerfile. [Ensure you have an appropriate NGINX App Protect license in place.]
4. As part of the build process, the pipeline triggers Trivy container security scanning for vulnerabilities (the open source version of AquaSec). Upon completion of static binary scanning, the pipeline uploads the scan report back to the repo for continuous security improvement.
5. The pipeline pushes the image to a private repository. The private image repo has been configured to perform a nightly container security scan (Clair Scanner). Leverage multiple scanning tools - checks and balances.
6. The pipeline clones the nginx-ingress deployment repo (my-kubernetes-apps) and updates the nginx-ingress deployment manifest with the latest build image tag (refer to the excerpt of the manifest below).
7. GitLab triggers a webhook to ArgoCD to refresh/sync the desired application state on ArgoCD with the deployment state in Kubernetes. By default, ArgoCD syncs with Kubernetes every 3 minutes; the webhook triggers an instant sync.
8. ArgoCD (deployed on an independent K3S cluster) fetches the new code from the repo and detects the code changes.
9. ArgoCD automates the deployment of the desired application states in the specified target environment. It tracks updates to git branches, tags, or a manifest pinned to a specific version at a git commit.
10. Kubernetes triggers an image pull from the private repo, performs a rolling update, and ensures zero interruption to existing traffic. New pods (nginx-ingress) spin up and traffic moves to the new pods before the old pods are terminated.
Depending on environment and organisational maturity, a successful build, test and deployment onto the DEV environment can then be pushed to the production environment.

Note: DAST scanning (ZAP Scanner) is not shown in this demo. Currently, I run offline, non-automated scanning.

ArgoCD and GitLab are integrated with Slack notifications. Events are reported into a Slack channel via webhook. NGINX App Protect events are sent to an ELK stack for visibility and analytics.

Snippet of where GitLab CI updates the nginx-plus-ingress deployment manifest (Flow #6). Each new image build is tagged with <branch>-<hash>-<version> (a rough shell sketch of this tag bump appears at the end of this article):

...
spec:
  imagePullSecrets:
    - name: regcred
  serviceAccountName: nginx-ingress
  containers:
    - image: reg.foobz.com.au/apps/nginx-plus-ingress:master-f660306d-1.9.1
      imagePullPolicy: IfNotPresent
      name: nginx-plus-ingress
...

Gitlab CI/CD Pipeline

Successful run of the CI/CD pipeline to build, test, scan and push the container image to the private repository and execute a code commit onto the nginx-plus-ingress repo.

Note: The Trivy scanning report will be uploaded or committed back to the same repo. To prevent GitLab CI from triggering another build process (a "pipeline loop"), the code commit is tagged with [skip ci].

ArgoCD continuous deployment

ArgoCD constantly (by default every 3 minutes) syncs the desired application state with my Kubernetes cluster. It ensures the configuration manifests stored in the Git repository are always synchronised with the target environment.

My train-dev apps are protected by nginx-ingress + NGINX App Protect. A specific (per service/URI enforcement) NGINX App Protect policy is applied to this service. nginx-ingress + NGINX App Protect is deployed as an Ingress Controller in Kubernetes. pod-template-hash=xxxx is labeled and tracked by ArgoCD.

$ kubectl -n nginx-ingress get pod --show-labels
NAME                             READY   STATUS    RESTARTS   AGE   LABELS
nginx-ingress-776b64dc89-pdtv7   1/1     Running   0          8h    app=nginx-ingress,pod-template-hash=776b64dc89
nginx-ingress-776b64dc89-rwk8w   1/1     Running   0          8h    app=nginx-ingress,pod-template-hash=776b64dc89

Please refer to the attached video links above for the full demo in action.

References

Tools involved:

NGINX Plus Ingress Controller - https://www.nginx.com/products/nginx-ingress-controller/
NGINX App Protect - https://www.nginx.com/products/nginx-app-protect/
Gitlab - https://gitlab.com
Trivy Scanner - https://github.com/aquasecurity/trivy
ArgoCD - https://argoproj.github.io/argo-cd/
Harbor Private Repository - https://goharbor.io/
Clair Scanner - https://github.com/quay/clair
Slack - https://slack.com
DAST Scanner - https://www.zaproxy.org/
Elasticsearch, Logstash and Kibana (ELK) - https://www.elastic.co/what-is/elk-stack, https://github.com/464d41/f5-waf-elk-dashboards
K3S - https://k3s.io/

Source repos used for this demonstration:

Repo for building the nginx-ingress + NGINX App Protect image: https://github.com/fbchan/nginx-plus-ingress.git
Repo used for the deployment manifest of the nginx-ingress controller with the NGINX App Protect policy: https://github.com/fbchan/my-kubernetes-apps.git

Summary

GitOps is perhaps a new buzzword. It may or may not make sense in your environment. It definitely makes sense for me. It integrates well with NGINX App Protect and allows me to constantly update and push new code changes into my environment with ease. A few months down the road, when I need to update nginx-ingress and NGINX App Protect, I just need to trigger a CI job, and then everything works like magic. Your mileage may vary.
Experience leads me to think along the lines of: start small, start simple by "GitOps-ing" one of your apps that requires frequent changes. Learn, revise and continuously improve from there. The outcome that GitOps provides will ease your operational burden and help you "do more with less". Easy integration of nginx-ingress and NGINX App Protect into your declarative infrastructure and application delivery with GitOps, together with F5's industry-leading web application firewall protection, will definitely reduce your organisation's risk exposure to external and internal application threats.
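For illustration, the Flow #6 tag bump referenced above can be reduced to a short shell job run from the pipeline. This is only a sketch with hypothetical file paths and repo URLs (the real job lives in the linked nginx-plus-ingress repo's .gitlab-ci.yml); CI_COMMIT_SHORT_SHA and CI_JOB_TOKEN are standard GitLab CI variables:

# Hypothetical paths and repo URL -- the actual pipeline in the linked repo may differ.
NEW_TAG="master-${CI_COMMIT_SHORT_SHA}-1.9.1"
git clone "https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/apps/my-kubernetes-apps.git"
cd my-kubernetes-apps
# Update the image tag in the deployment manifest shown earlier
sed -i "s|\(nginx-plus-ingress:\).*|\1${NEW_TAG}|" deploy/nginx-plus-ingress.yaml
# Commit with [skip ci] so this commit does not trigger another pipeline run
git commit -am "Bump nginx-plus-ingress image to ${NEW_TAG} [skip ci]"
git push origin HEAD:master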
Service Discovery in F5 Distributed Cloud - Architecture Options

The F5 Distributed Cloud (XC) Virtual Edition (VE) Customer Edge (CE) platform can be deployed within your data center or cloud environment. It can perform service discovery for services in your Kubernetes (K8s) clusters.

Why do service discovery?

Service discovery is important in systems that change and move around, like microservices architectures. It helps find and connect services automatically. Instead of hard-coding network locations, service discovery makes sure that services can easily find and communicate with each other, even when they scale or change locations. This improves scalability and resilience, and simplifies managing services in complex environments like Kubernetes or cloud infrastructures. By reducing manual intervention, service discovery enhances the overall efficiency and reliability of application deployments.

The F5 Distributed Cloud (XC) CE can use the native kube-apiserver, or Consul, to query for services as they come online, enabling admins to reference these discovered services. These services become XC origin pool definitions and can then be published locally through a proxy (HTTP load balancer) on the CE itself or via our Global Application Delivery Network (ADN) (Regional Edge deployment). The F5 XC load balancer does more than just balance packets. It offers a set of SaaS security services that are easy to use. Customers can have a globally redundant layer of security while serving content from private K8s clusters.

This write-up covers two distinct service discovery architecture options available with XC:

Secure K8s Gateway (VE CE)
Kubernetes Sitetype Customer Edge (K8s sitetype CE)

Depending on your service discovery use case, you may end up with one of these two options or both, as they are not mutually exclusive. The first option, using the CE as a Secure K8s Gateway, may be the easier option for folks not particularly versed in the nuances of Kubernetes.

Architecture 1: Virtual Edition Customer Edge (VE CE)

If a picture is worth a thousand words, then a working lab environment is worth a million. This repo walks through an entire Secure K8s GW setup and will leave you with a config that could easily be expanded upon. You can quickly build a PoC and start getting familiar with these modern app capabilities by using these tools. The readme includes details on how to use everything and what functions the various tools provide. It's all shell script and YAML, so it's very easy to read through and understand what's going on.

https://github.com/dober-man/ve-ce-secure-k8s-gw

This repo is designed to automate the deployment and configuration of a secure Kubernetes Gateway in the F5 Distributed Cloud (F5 XC) environment. It provides scripts and YAML configurations to set up secure communication, networking policies, and infrastructure components required to establish a secure gateway in a Kubernetes cluster. The readme file also documents the pre-reqs, but you will at a minimum need an XC tenant, an XC Virtual Edition CE and an Ubuntu 22.04 server. If you do not have an XC tenant or VE CE, reach out to your local F5 account team. Please use the issues feature of GitHub to report any discrepancies with the builds or documentation.

Architecture 2: Kubernetes Sitetype Customer Edge (K8s sitetype CE)

In this architecture, the entire CE runs as a service within the cluster. This model requires a bit more fundamental knowledge of Kubernetes and integrates more tightly with the cluster.
You can quickly build a PoC and start getting familiar with these modern app capabilities by using this repo:

https://github.com/dober-man/k8s-sitetype-ce

This repo is focused on automating the deployment of the k8s-sitetype CE in a Kubernetes cluster. It provides scripts to simplify the process of setting up a secure site gateway for handling network traffic between cloud environments, on-premises infrastructure, and edge locations. The readme file documents the pre-reqs, but you will at a minimum need an XC tenant and an Ubuntu 22.04 server. If you do not have an XC tenant, reach out to your local F5 account team. Please use the issues feature of GitHub to report any discrepancies with the builds or documentation.

Summary

F5 Distributed Cloud offers a number of Kubernetes integration options for service discovery, but also offers several other capabilities, including Virtual K8s (Namespace as a Service) and Managed K8s, which will be covered in future articles. Please feel free to drop a like or leave a comment below.
Getting Started with iRules: Directing Traffic

The intent of this getting started series was to be a journey through the basics of both iRules and programming concepts alike, bringing everyone up to speed on the necessary topics to tackle iRules in all their glory. Whether you’re a complete newbie to scripting or a seasoned veteran, we want everyone to be able to enjoy iRules equally, or at least as near as we can manage. As we wrap this series, it is our hope that it has been helpful to that end, and that you'll be eager to move on to intermediate topics in the next series.

In this final entry in the series, we’re taking a look at what I think is likely the thing that many people believe is the primary, if not one of the sole, uses of iRules before they are indoctrinated into the iRuling ways: routing. That is, directing traffic to a particular location or in a given fashion based on … well, just about anything in the client request. Of course, this is only scratching the surface of what iRules is capable of. Whether it's content manipulation, security, or authentication, there’s a huge amount that iRules can do beyond routing. That being said, routing, and the various forms that it can take with a bit of lenience as to the traditional definition of the term, is a powerful function which iRules can provide.

First of all, keep in mind that the BIG-IP platform is a full, bi-directional proxy, which means we can inspect and act upon any data in the transaction bound in either direction, ingress or egress. This means that we can technically affect traffic routing either from the client to the server or vice versa, depending on your needs. For simplicity’s sake, and because it’s the most common use case, let’s keep the focus of this article to only dealing with client-side routing, e.g. routing that takes place when a client request occurs, to determine the destination server to deliver the traffic to. Even looking at just this particular portion of the picture there are many options for routing from within iRules. Each of the following commands can change the flow of traffic through a virtual, and as such is something I’ll attempt to elucidate:

pool
node
virtual
HTTP::redirect
reject
drop
discard

As you can see there are some very different commands here, not all of them what you would consider traditional “routing” style commands, but each has a say in the outcome of where the traffic ends up. Let’s take a look at them in a bit more detail.

pool

The pool command is probably the most straight-forward, “bread and butter” routing command available from within iRules. The idea is very simple: you want to direct traffic, for whatever reason, to a given pool of servers. This simple command gets exactly that job done. Just supply the name of the pool you’d like to direct the traffic to and you’re done. Whether you’re gating traffic to a given pool based on some criteria, or routing traffic to one of several different pools based on client info or just about anything else, an iRule with a simple pool command will get you there. The pool command is the preferred method of simple traffic routing such as this because by using the pool construct you’re getting a lot of bang for your buck. The underlying infrastructure of monitors, reporting and such is a powerful thing, and shouldn’t be overlooked. There is, however, one other permutation of this command:

pool <pool_name> [member <addr> [<port>]]

Perhaps you don’t want to route to one of several pools, but instead want to route to a single pool but with more granularity.
What if you don’t want to just route to the pool but to actually select which member inside the pool the traffic will be sent to? Easy enough: specify the “member” flag along with the IP and port, and you’re set to go. This looks like:

when CLIENT_ACCEPTED {
  if { [IP::addr [IP::client_addr] equals 10.10.10.10] } {
    pool my_pool
  }
}

when HTTP_REQUEST {
  if { [HTTP::uri] ends_with ".gif" } {
    if { [LB::status pool my_Pool member 10.1.2.200 80] eq "down" } {
      log "Server $ip $port down!"
      pool fallback_Pool
    } else {
      pool my_Pool member 10.1.2.200 80
    }
  }
}

node

So when directing traffic to a pool or a member in a pool, the pool command is the obvious choice. What if, however, you want to direct traffic to a server that may not be part of your configuration? What if you want to route to a server not contained in a particular pool, whether that’s bouncing the request back out to some external server or to an off-the-grid type back-end server that isn’t a pool member yet? Enter the node command.

The node command provides precisely this functionality. All you have to do is supply an IP address (and a port, if the desired port is different than the client-side destination port), and traffic is on its way. No pool member or configuration objects required. Keep in mind, of course, that because you aren’t routing traffic to an object within the BIG-IP, statistics, connection information and status won’t be available for this connection.

when HTTP_REQUEST {
  if { [HTTP::uri] ends_with ".gif" } {
    node 10.1.2.200 80
  }
}

virtual

The pool command routes traffic to a given pool, the node command to a particular node… it stands to reason that the virtual command would route traffic to the virtual of your choice, right? That is, in fact, precisely what the command does – it allows you to route traffic from one virtual server to another within the same BIG-IP. That’s not all it does, though. This command allows for a massively complex set of scenarios, which I won’t even try to cover in any form of completeness. A couple of examples could be layered authentication, selective profile enabling, or perhaps late-stage content re-writing post LB decision.

Depending on your needs, there are two basic functions that this command provides. One is to add another level of flexibility to allow users to span multiple virtuals for a single connection. This command makes that easy, taking away the tricks the old timers may remember trying to perform to achieve something similar. Simply supply the name of whichever virtual you want to be the next stop for the inbound traffic, and you’re set. The other function is to return the name of the virtual server itself. If a virtual name is not supplied, the command simply returns the name of the current VIP executing the iRule, which is actually quite useful in several different scenarios. Whether you’re looking to do virtual-based rate limiting or use some wizardry to side-step SSL issues, the CodeShare has some interesting takes on how to make use of this functionality.

when HTTP_REQUEST {
  # Send request to a new virtual server
  virtual my_post_processing_server
}

when HTTP_REQUEST {
  log local0. "Current virtual server name: [virtual name]"
}

HTTP::redirect

While not something I would consider a traditional routing command, the redirection functionality within iRules has become a massively utilized feature and I’d be remiss in leaving it out, as it can obviously affect the outcome of where the traffic lands.
While the above commands all affect the destination of the traffic invisibly to the user, the redirect command is more of a client-side function. It responds to the client browser indicating that the desired content is located elsewhere, by issuing an HTTP 302 temporary redirect. This can be hugely useful for many things from custom URIs to domain consolidation to … well, this command has been put through its paces in the years since iRules v9. Simple and efficient, the only required argument is a full URL to which the traffic will be routed, though not directly. Keep in mind that if you redirect a user to an outside URL you are removing the BIG-IP, and as such your iRule(s), from the new request initiated by the client.

when HTTP_REQUEST {
  if { [HTTP::uri] eq "/app1?user=admin" } {
    HTTP::redirect "http://www.example.com/admin"
  }
}

reject

The reject command does exactly what you’d expect: it rejects connections. If there is some reason you’re looking to actively terminate a connection in a not-so-graceful but very immediate fashion, this is the command for you. It severs the given flow completely and alerts the user that their session has been terminated. This can be useful in preventing unwanted traffic from a particular virtual or pool, for weeding out unwanted clients altogether, etc.

when CLIENT_ACCEPTED {
  if { [TCP::local_port] != 443 } {
    reject
  }
}

drop & discard

These two commands have identical functionality. They do effectively the same thing as the reject command, that is, prevent traffic from reaching its intended destination, but they do so in a very different manner. Rather than overtly refusing content, terminating the connection and as such alerting the client that the connection has been closed, the discard or drop commands are subtler. They simply do away with the affected traffic and leave the client without any status as to the delivery. This small difference can be very important in some security scenarios where it is far more beneficial to not notify an attacker that their attempts are being thwarted.

when SERVER_CONNECTED {
  if { [IP::addr [IP::client_addr] equals 10.1.1.80] } {
    discard
    log local0. "connection discarded from [IP::client_addr]"
  }
}

Routing traffic is by no means the most advanced or glamorous thing that iRules can do, but it is valuable and quite powerful nonetheless. Combining advanced, full packet inspection with the ability to manipulate the flow of traffic leaves you with near endless options for how to deliver traffic in your infrastructure, and allows for greater flexibility with your deployments. This kind of fine-grained control is part of what leads to a tailored user experience, which is something that iRules can offer in a very unique and powerful way.

Internal Virtual Server

This isn't directly an iRules routing technology, though there are plenty of iRules entry points into this unique routing scenario on BIG-IP, so I thought I'd share. The internal virtual server is reachable only by configuration of an adapt profile on a standard virtual server. Once the configuration routing is in place, events and commands related to content adaptation (ICAP, though not required) are available to make decisions on traffic manipulation and further routing. Check out the overview on AskF5.