DevOps

F5 iRule Reverse Proxy, rewrite, redirect
Hello everyone,

We currently have a scenario where a URL is no longer available and its traffic needs to be forwarded elsewhere. When https://company.com/tool is accessed, the request should be passed on to https://x.x.x.x/tool. Unfortunately, the target website doesn't have an FQDN, so it has to be reached by IP address. Of course, https://company.com/tool should remain visible in the browser, so a plain redirect won't do. Is this possible with a reverse proxy approach? Could someone provide me with an example iRule? Thanks!
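
A minimal sketch of the reverse-proxy approach being asked about: the client stays on https://company.com/tool and the iRule simply selects a pool pointing at the internal IP. The pool name here is an assumption; it would be created with x.x.x.x:443 as its only member.

when HTTP_REQUEST {
    # Hypothetical pool containing x.x.x.x:443 as its member
    if { [HTTP::uri] starts_with "/tool" } {
        pool tool_internal_pool
    }
}

Because the backend is HTTPS, the virtual server would also need a server-side SSL profile, and if the backend validates the Host header, a rewrite such as HTTP::header replace "Host" "x.x.x.x" may be needed as well.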

Is it possible to select ASM BoT profile from irule?

Hi,

Is it possible to select a Bot profile from an iRule? The concept is that we have a set of IPs that need to be allowed "some" bot types. That is why we can't whitelist the IPs in the Bot profile, because that would allow all bot types. So we want to use an iRule to check the client IP: if it is IP A, use a Bot profile that has some exceptions, but for all other IPs, use the normal Bot profile.

when HTTP_REQUEST {
    # Check IP and select BoT profile from that
    if { [IP::client_addr] eq "A" } {
        ASM::enable allow_some_bot_profile
    } else {
        ASM::enable normally_bot_profile
    }
}

PS: I didn't see any documentation about how to select a Bot profile, so I'm not sure if ASM::enable can do that.

F5 DNS/GTM External Monitor (EAV) with SNI support and response code check

I have used this monitor for XC Distributed Cloud, as the HTTP LBs by default share the same tenant IP address and SNI support is needed. You can order dedicated public IP addresses for each HTTP LB and enable the "Default Load Balancer" option ( https://my.f5.com/manage/s/article/K000152902 ), but it will cost you extra 😉

The script is a modified version of "External https health monitor for SNI-enabled pool", changed to handle response codes and to set the SNI globally for the entire pool and its members. If you are uploading it from a Windows machine, see "External monitor fails to run", as you could hit that bug. This monitor can be needed on F5 DNS/GTM versions below 16.1, which do not support SNI in HTTPS monitors.

The only mandatory variable is "SNI", which should be set in the external monitor configuration that references this uploaded bash script. The "URI" variable defaults to "/", the port argument "$2" defaults to 443 when empty, and the expected response code defaults to 200.

#!/bin/sh
# External monitoring script for checking HTTP status code
# $1 = IP (::ffff:nnn.nnn.nnn.nnn notation or hostname)
# $2 = port (optional; defaults to 443 if not provided)

# Default SNI to IP if not explicitly provided
node_ip=$(echo "$1" | sed 's/::ffff://')  # Remove IPv6 compatibility prefix
SNI=${SNI:-"$node_ip"}                    # Assign sanitized IP to SNI

# Default variables
MON_NAME=${MON_NAME:-"MyExtMon$$"}
pidfile="/var/run/$MON_NAME.$1..$2.pid"   # PID file path
DEBUG=${DEBUG:-0}                         # Enable debugging if set to 1
EXPECTED_STATUS=${EXPECTED_STATUS:-200}   # Default HTTP status code to 200
URI=${URI:-"/"}                           # Default URI
DEFAULT_PORT=443                          # Default port (used if $2 is unset)

# Set port to default if $2 is not provided
if [ -z "${2}" ]; then
    PORT=${DEFAULT_PORT}
else
    PORT=${2}
fi

# Kill old process if pidfile exists
if [ -f "$pidfile" ]; then
    kill -9 -$(cat "$pidfile") > /dev/null 2>&1
fi
echo "$$" > "$pidfile"

# Perform the HTTP(S) request via single curl (fetch status code only)
status_code=$(curl -s -o /dev/null -w '%{http_code}' --connect-timeout 5 --resolve "${SNI}:${PORT}:${node_ip}" "https://${SNI}:${PORT}${URI}")

# Cleanup
rm -f "$pidfile" > /dev/null 2>&1

# Output server status based on HTTP status code match
if [ "$status_code" -eq "$EXPECTED_STATUS" ]; then
    echo "up"
else
    echo "down"
fi

# Debugging
if [ "$DEBUG" -eq 1 ]; then
    echo "Debugging on..."
    echo "SNI=${SNI}"
    echo "URI=${URI}"
    echo "IP=${node_ip}"
    echo "PORT=${PORT}"
    echo "MON_NAME=${MON_NAME}"
    echo "STATUS_CODE=${status_code}"
    echo "EXPECTED_STATUS=${EXPECTED_STATUS}"
    echo "curl -s -o /dev/null -w '%{http_code}' --connect-timeout 5 --resolve ${SNI}:${PORT}:${node_ip} https://${SNI}:${PORT}${URI}"
fi
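
To sanity-check the script before wiring it into a monitor, it can be run by hand with the variables set inline. A sketch: the script path and hostname below are placeholders, and the ::ffff: prefix mimics how BIG-IP passes the node address.

SNI=app.example.com URI=/healthz EXPECTED_STATUS=200 DEBUG=1 \
    /config/monitors/ext_https_sni.sh ::ffff:203.0.113.10 443

With DEBUG=1 the script prints the exact curl command it ran alongside the "up"/"down" verdict, which makes it easy to replay a failing check manually.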

BIG-IP Next for Kubernetes Nvidia DPU deployment walkthrough

Introduction

Modern AI factories—hyperscale environments powering everything from generative AI to autonomous systems—are pushing the limits of traditional infrastructure. As these facilities process exabytes of data and demand near-real-time communication between thousands of GPUs, legacy CPUs struggle to balance application logic with infrastructure tasks like networking, encryption, and storage management. Enter Data Processing Units (DPUs): purpose-built accelerators that offload these housekeeping tasks, freeing CPUs and GPUs to focus on what they do best. DPUs are specialized system-on-chip (SoC) devices designed to handle data-centric operations such as network virtualization, storage processing, and security enforcement. By decoupling infrastructure management from computational workloads, DPUs reduce latency, lower operational costs, and enable AI factories to scale horizontally.

BIG-IP Next for Kubernetes and Nvidia DPU

Given F5's goal of delivering and securing every app, deployment needs to happen at multiple levels, a crucial one being the edge and the DPU. Installing F5 BIG-IP Next for Kubernetes on an Nvidia DPU requires Nvidia's DOCA framework.

What's DOCA? NVIDIA DOCA is a software development kit for NVIDIA BlueField DPUs. BlueField provides data center infrastructure-on-a-chip, optimized for high-performance enterprise and cloud computing. DOCA is the key to unlocking the potential of the NVIDIA BlueField data processing unit (DPU) to offload, accelerate, and isolate data center workloads. With DOCA, developers can program the data center infrastructure of tomorrow by creating software-defined, cloud-native, GPU-accelerated services with zero-trust protection.

Now, let's explore the BIG-IP Next for Kubernetes components. The BIG-IP Next for Kubernetes solution has two main parts: the Data Plane - Traffic Management Micro-kernel (TMM) and the Control Plane. The Control Plane watches over the Kubernetes cluster and updates the TMM's configurations. The BIG-IP Next for Kubernetes Data Plane (TMM) manages network traffic both entering and leaving the Kubernetes cluster. It also proxies the traffic to applications running in the Kubernetes cluster. The Data Plane (TMM) runs on the BlueField-3 Data Processing Unit (DPU) node. It uses all the DPU resources to handle the traffic and frees up the host (CPU) for applications. The Control Plane can run on the CPU or on other nodes in the Kubernetes cluster. This makes sure that the DPU is still used for processing traffic (see the scheduling sketch after the use-case list below).

Use-case examples: F5's team recently released some great use cases based on conversations and work from the field. Let's explore those items:

Protecting MCP servers with F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
LLM routing with dynamic load balancing with F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
F5 optimizes GPUs for distributed AI inferencing with NVIDIA Dynamo and KV cache integration
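
To make the data-plane/control-plane split concrete, here is a hypothetical illustration of how a data-plane workload could be pinned to the DPU node with an ordinary Kubernetes nodeSelector. The label, namespace, and image below are assumptions for illustration only, not F5's actual manifests.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tmm-dataplane            # illustrative name
  namespace: f5-demo             # assumed namespace
spec:
  selector:
    matchLabels:
      app: tmm-dataplane
  template:
    metadata:
      labels:
        app: tmm-dataplane
    spec:
      nodeSelector:
        hardware: bluefield-3-dpu        # hypothetical label on the DPU node
      containers:
      - name: tmm
        image: registry.example.com/tmm:latest   # placeholder image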

Deployment walk-through

In our demo, we go through the configuration of BIG-IP Next for Kubernetes:

Main BIG-IP Next for Kubernetes features
L4 ingress flow
HTTP/HTTPS ingress flow
Egress flow
BGP integration
Logging and troubleshooting (Qkview, iHealth)

You can find a quick walk-through via "BIG-IP Next for Kubernetes - walk-through".

Related Content

BIG-IP Next for Kubernetes - walk-through
BIG-IP Next for Kubernetes
BIG-IP Next for Kubernetes and Nvidia DPU-3 walkthrough
BIG-IP Next for Kubernetes
F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs

F5 Velos/rSeries/F5OS code for automating config backup with the new RESTCONF API and Ansible

On the new F5OS devices a RESTCONF-based API is used that allows everything to be done via the API. You can even send an API command to make F5 export the configuration file over an outbound HTTPS/SCP connection, which is extra security for me. F5 has released Ansible collections for Velos, but some things are still not possible with the collection. With Ansible, the URI module can be used to do the things I am doing with Postman, as even the HTTP headers can be set in the URI module. Some may use Python, but personally I like Ansible more (see the end of this article for the Ansible example) 🙂

https://clouddocs.f5.com/products/orchestration/ansible/devel/velos/velos.html
https://clouddocs.f5.com/products/orchestration/ansible/devel/f5os/f5os.html
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/uri_module.html

This code allows the automation of configuration backups for F5 Velos/rSeries using the new API. To get started with the F5OS API, I recommend going through the DevCentral article https://community.f5.com/t5/technical-articles/exploring-f5os-automation-features-on-velos/ta-p/295318 . The Velos Postman collections are at https://clouddocs.f5.com/api/velos-api/velos-api-workflows.html , and the Velos API documentation can be found in the "F5OS/F5OS-C OpenAPI Documentation". The F5OS API supports Basic and Bearer token authentication, but it is much better to use Basic auth just to retrieve the token, as shown in the examples below.

1. Generate a Bearer token in Postman. This is from the F5 Postman collection.

Endpoint: https://{{Chassis1_System_Controller_IP}}:8888/restconf/data/openconfig-system:system/aaa

'No body'

2. Create a config backup (it is no longer called a UCS but a database configuration backup in F5OS).

Endpoint: :8888/restconf/data/openconfig-system:system/f5-database:database/f5-database:config-backup

Body:

{
    "f5-database:name": "api-backup",
    "f5-database:overwrite": "true"
}

Note! For rSeries, "f5-database:overwrite": "true" may need to be removed, as 1.3.1 does not support choosing whether to overwrite an existing backup.

3a. Download the config backup as 'root' with SCP from '/var/confd/configs/'; see, for example, "Back up and restore the F5OS-C configuration on a VELOS system".

3b. Make F5 send the backup over HTTPS to the backup server with the new file transfer utility, which can be triggered with API commands for F5 to start the file transfer.

Endpoint: :8888/restconf/data/f5-utils-file-transfer:file/export

Body:

{
    "f5-utils-file-transfer:username": "test",
    "f5-utils-file-transfer:password": "test",
    "f5-utils-file-transfer:local-file": "configs/api-backup",
    "f5-utils-file-transfer:remote-url": "https://1.1.1.1/file"
}

In some versions the variable "insecure" : "true" can't be set, so the web server may need a valid, not self-signed, SSL certificate.

3c. Export the backup with SCP/SFTP initiated from the F5 device with an API command. This is something that will be possible in the future, as it seems it is still not possible now. I tried to follow the API documentation, but sometimes I get errors about a missing element "known-hosts". This file should be created with the API call below; maybe the workaround is to go to Linux with a root account and create the file, but I still have not found where to create it. Another error is an unknown element "remote-host", but this should exist, so either it is a bug or the documentation has some mistakes; as this is a new feature, it will work eventually.
As a note, you need to add the fingerprints of the Velos or rSeries before it will start the SCP connection, as an extra security step, and this is really nice 😀

Endpoint: /restconf/data/f5-utils-file-transfer:file/known-hosts

Body:

{
    "f5-utils-file-transfer:known-host": [
        {
            "remote-host": "string",
            "config": {
                "remote-host": "string",
                "key-type": "rsa",
                "fingerprint": "string"
            },
            "state": {
                "remote-host": "string",
                "key-type": "rsa",
                "fingerprint": "string"
            }
        }
    ]
}

Now with F5OS, when accessing the GUI you can use Fiddler or F12 (the devtools) to see the RESTCONF commands that are used, and then use them in Postman/Ansible/Python etc.

EDIT:

4. Using the Ansible URI module with F5OS for Basic auth, token generation and config backup

Here is an example that does the same tasks but uses the Ansible URI module. The URI module lets us make our own API requests when there is no built-in module, and it even supports basic and form-based authentication. After that, the token can be saved and used as a variable in the next requests that generate the backup; the backup can then be transferred with SCP triggered by a cron job, or another URI module task can be written that uses the file transfer utility.

Ansible playbook using a Jinja2 template as the JSON body:

root@niki1:/home/niki/ansible# cat f5os_backup.yml

---
- name: F5OS_BACKUP
  hosts: lb
  connection: local
  gather_facts: false

  vars:
    Chassis_IP: X.X.X.X
    backup_name: api3_backup

  tasks:
    - name: Create a Basic request
      ansible.builtin.uri:
        url: https://{{ Chassis_IP }}:8888/restconf/data/openconfig-system:system/aaa
        user: xxx
        password: xxx
        method: GET
        force_basic_auth: yes
        status_code: 200
        body_format: json
        validate_certs: false
        headers:
          Content-Type: application/yang-data+json
          X-Auth-Token: rctoken
        return_content: yes
      register: result

    - name: Save the token to a fact variable
      set_fact:
        metatoken: "{{ result.x_auth_token }}"

    - name: Create Backup
      ansible.builtin.uri:
        url: https://{{ Chassis_IP }}:8888/restconf/data/openconfig-system:system/f5-database:database/f5-database:config-backup
        method: POST
        status_code: 200
        body_format: json
        validate_certs: false
        body: "{{ lookup('ansible.builtin.template','f5os.json') }}"
        headers:
          Content-Type: application/yang-data+json
          X-Auth-Token: "{{ metatoken }}"

f5os.json template:

{
    "f5-database:name": "{{ backup_name }}",
    "f5-database:overwrite": "true"
}

Edit: Now there is an F5 Ansible collection for this 🙂

https://clouddocs.f5.com/products/orchestration/ansible/devel/f5os/modules_3_0/f5os_config_backup_module.html

As of F5OS 1.8, ":8888/restconf" can be replaced with ":443/api".
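
For quick tests without Postman or Ansible, the same token-then-backup flow can be sketched with curl. The credentials and chassis IP below are placeholders; the token is read from the X-Auth-Token response header, the same header the playbook's result.x_auth_token relies on.

CHASSIS_IP=X.X.X.X

# Basic-auth GET; the token comes back in the X-Auth-Token response header
TOKEN=$(curl -sk -u admin:password -D - -o /dev/null \
    "https://${CHASSIS_IP}:8888/restconf/data/openconfig-system:system/aaa" \
    | awk 'tolower($1)=="x-auth-token:" {print $2}' | tr -d "\r")

# Create the backup using the token
curl -sk -X POST \
    -H "Content-Type: application/yang-data+json" \
    -H "X-Auth-Token: ${TOKEN}" \
    -d '{"f5-database:name": "api-backup", "f5-database:overwrite": "true"}' \
    "https://${CHASSIS_IP}:8888/restconf/data/openconfig-system:system/f5-database:database/f5-database:config-backup"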

Using F5 NGINX Plus as the Ingress Controller within Nutanix Kubernetes Platform (NKP)

Managing incoming traffic is a critical component of running applications efficiently within Kubernetes clusters. As organizations continue to deploy a growing number of microservices, the need for robust, flexible, and intelligent traffic management solutions becomes more apparent. In this article, we provide an overview of how F5 NGINX Plus, when used as the ingress controller in the Nutanix Kubernetes Platform (NKP), offers a comprehensive approach to traffic optimization, application reliability, and security.
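
As a taste of what this looks like in practice, a minimal Ingress resource routed through an NGINX-based ingress class might look like the following. This is a generic Kubernetes sketch; the class, host, and service names are assumptions for illustration, not NKP-specific values.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app          # illustrative name
spec:
  ingressClassName: nginx # assumed ingress class name
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc   # assumed backend service
            port:
              number: 80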

Knowledge sharing: Velos and rSeries (F5OS) basic troubleshooting, logs and commands

This is another part of my knowledge-sharing articles, where I will take a deeper look into investigating Velos and rSeries issues, logs and commands.

1. Velos HA controller and blade issues.

As the Velos system is the one with two controllers in active/standby mode, checking for controller HA issues is only needed on Velos. Since the controllers' HA order can be different for the system and the different partitions, to check the HA for the system use the /var/log_controller/cc-confd file, and for a partition HA issue look at the partition velos log at /var/F5/partition<ID>/log/velos.log . You can also enable HA debug for the controllers with "system dbvars config debug confd ha-state-machine true".

Overview of HA: https://support.f5.com/csp/article/K19204400
Controller HA: https://support.f5.com/csp/article/K21130014
Partition HA: https://support.f5.com/csp/article/K58515297

List of Velos/rSeries services:

Overview of F5 VELOS chassis controller services
Overview of F5 VELOS partition services
Overview of F5 rSeries system services

2. Entering into F5OS objects.

The rSeries and Velos tenants are like vCMP guests on VIPRION, and sometimes, if there are access issues with them, you may need to open their console. For this the "virtctl" command can be used, for example "/usr/share/omd/kubevirt/virtctl console <tenant_name>-<tenant_instance_ID>". Also, as Velos uses blades and partitions, it could be needed to SSH to a blade with "ssh slot<number>" or to enter a partition with "docker exec -it partition<ID>_cli su admin", as sometimes, for example to see the GUI logs, entering the GUI container for the partition could be needed. F5 support will ask for this in most cases, and maybe this will be the way to enter the BIG-IP Next CLI.

Overview of VELOS system architecture: https://support.f5.com/csp/article/K73364432
Overview of rSeries system architecture: https://support.f5.com/csp/article/K49918625
rSeries tenant access: https://support.f5.com/csp/article/K33373310
Velos blade and tenant access: https://support.f5.com/csp/article/K65442484
Velos partition access: https://support.f5.com/csp/article/K11206563

3. Useful commands and logs.

For Velos/rSeries, as this is a clustered system, the "show cluster" command is useful to spot any issues (look for "cluster is NOT ready."). The velos.log for the controller and partitions is also a great place to start, and debug level can be enabled for it under "SYSTEM SETTINGS > Log Settings", which is also where rSeries logging is set to debug. The /var/log/openshift.log is good to check on Velos if there are cluster issues, or the k3s.log on rSeries. The confd logs are like the mcpd logs, so they are really useful for Velos or rSeries. Other nice commands are "docker ps", "oc get pod --all-namespaces -o wide" and "kubectl get pod --all-namespaces -o wide", but support will ask for them in most cases.

Velos cluster status: https://support.f5.com/csp/article/K27427444
Velos debug: https://support.f5.com/csp/article/K51486849
Velos openshift example issue: https://support.f5.com/csp/article/K01030619
Monitoring Velos: https://clouddocs.f5.com/training/community/velos-training/html/monitoring_velos.html
Monitoring rSeries: https://clouddocs.f5.com/training/community/rseries-training/html/monitoring_rseries.html

4. Velos and rSeries tcpdump packet captures, file utility and qkview files.

For Velos, qkviews can be created for a controller or a partition, as they are separate qkviews.
Tcpdumps for client traffic are done with the tcpdump utility from F5OS (su - admin), while a tcpdump in the Linux kernel captures only the management IP addresses of the appliance, controller (floating or local), partition or tenant. The file utility allows file transfers to remote servers, and even downloading any log from the Velos/rSeries to your computer, which was not possible before with iSeries or VIPRION. The file utility also starts outbound sessions to the remote servers, which is extra security, as no inbound sessions need to be allowed on the firewall/web proxy; it can even be triggered by an API call, and I may write a codeshare article for this.

Velos tcpdump utility: https://support.f5.com/csp/article/K12313135
rSeries tcpdump utility: https://support.f5.com/csp/article/K80685750
Qkview Velos: https://support.f5.com/csp/article/K02521182
Qkview Velos CLI location: https://support.f5.com/csp/article/K79603072
Qkview rSeries: https://support.f5.com/csp/article/K04756153
SCP: https://support.f5.com/csp/article/K34776373

For rSeries 2000/4000, tcpdump is different, as SR-IOV, not FPGA (rSeries Networking (f5.com)), is used to attach interfaces directly to the tenant VM: Article Detail (f5.com)

5. A final fast check could be to use "kubectl get pods -o wide --all-namespaces" (with Velos, "oc get pods -o wide --all-namespaces" should also work) to see that all pods are OK and running. Also, "docker ps" or "docker ps --format 'table {{.Names}}\t{{.RunningFor}}\t{{.Status}}'" are useful to see a container that could be going down and up, and this can be correlated with issues seen with the "show cluster" command.

6. The new F5OS has much better hardware diagnostics than the old devices, so there is no more need to run EUD tests, as all system hardware components and their health can be viewed from the GUI or CLI, and this is also shown in F5 iHealth!

https://techdocs.f5.com/en-us/velos-1-5-0/velos-systems-administration-configuration/title-system-settings.html

7. For Velos and rSeries, always keep the software up to date. For example, with Velos 1.5.1 the cluster rebuild (needed because the OpenShift SSL certificate is valid for one year) is much simpler; other examples are the F5 rSeries issues with Cisco Nexus and the corrupt qkview generation when the GUI, not the CLI, is used. (The Velos cluster rebuild with "touch /var/omd/CLUSTER_REINSTALL" can solve many issues, but it will cause some downtime.)

http://cdn.f5.com/product/bugtracker/ID1135853.html
https://my.f5.com/manage/s/article/K000092905
https://support.f5.com/csp/article/K79603072

In the future, "docker" commands may no longer be available; then just use "crictl", as Kubernetes has moved from the Docker runtime to CRI-based runtimes.

8. F5OS 1.8 has added several cool features that I will mention:

system rollback initiate proceed - F5OS config option to roll back to the previous configuration.
system diagnostics os-utils docker restart node platform service xxx - config option to restart a docker service from F5OS without using the root bash. It can also be scheduled through the API!
f5sh - to enter F5OS from the root bash, like the tmsh command.
system diagnostics core-files list - to see the core files and which process made them, so you know where to focus.
system diagnostics net-utils xxx - to run ping, dig or traceroute from F5OS without bash access.

Automating F5 Application Delivery and Security Platform Deployments

The F5 ADSP Architecture Automation Project

The F5 ADSP reduces the complexity of modern applications by integrating operations, traffic management, performance optimization, and security controls into a single platform with multiple deployment options. This series outlines practical steps anyone can take to put these ideas into practice using the F5 ADSP Architectures GitHub repo. Each article highlights different deployment examples, which can be run locally or integrated into CI/CD pipelines following DevSecOps practices. The repository is community-supported and provides reference code that can be used for demos, workshops, or as a stepping stone for your own F5 ADSP deployments. If you find any bugs or have any enhancement requests, open an issue, or better yet, contribute.

The F5 Application Delivery and Security Platform (F5 ADSP)

The F5 ADSP addresses four core areas: how you operate day to day, how you deploy at scale, how you secure against evolving threats, and how you deliver reliably across environments. Each comes with its own challenges, but together they define the foundation for keeping systems fast, stable, and safe. Each architecture deployment example is designed to cover at least two of the four core areas: xOps, Deployment, Delivery, and Security. This ensures the examples demonstrate how multiple components of the platform work together in practice.

DevSecOps: Integrating security into the software delivery lifecycle is a necessary part of building and maintaining secure applications. This project incorporates DevSecOps practices by using supported APIs and tooling, with each use case including a GitHub repository containing IaC code, CI/CD integration examples, and telemetry options.

Resources: F5 Application Delivery and Security Platform GitHub Repo and Guide

ADSP Architecture Article Series:

Automating F5 Application Delivery and Security Platform Deployments (Intro)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)
Minimizing Security Complexity: Managing Distributed WAF Policies
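
For readers who want to try the CI/CD angle, the repo's Terraform examples could be driven from a pipeline along these lines. This is a hedged sketch, not the project's actual workflow; the workflow name and job layout are assumptions.

# Hypothetical GitHub Actions workflow for running one of the Terraform examples
name: adsp-example
on: workflow_dispatch

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Init
        run: terraform init
      - name: Plan
        run: terraform plan -input=false
      - name: Apply
        run: terraform apply -auto-approve -input=false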

Deploying F5 Distributed Cloud Customer Edge on AWS in a scalable way with full automation

Scaling infrastructure efficiently while maintaining operational simplicity is a critical challenge for modern enterprises. This comprehensive guide presents the foundation for a fully automated Terraform solution for deploying F5 Distributed Cloud (F5XC) Customer Edge (CE) nodes on AWS that scales seamlessly from single-node proof-of-concepts to multi-node production deployments.
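
As a rough feel for what such automation looks like, a module call for a multi-node CE site might take the following shape. The module path, variable names, and values are purely illustrative assumptions, not the repo's actual interface.

# Hypothetical Terraform module invocation; names and variables are illustrative only
module "f5xc_ce_aws" {
  source = "./modules/f5xc-ce-aws"   # assumed local module path

  site_name     = "aws-ce-prod"      # example site name
  aws_region    = "us-east-1"
  node_count    = 3                  # scale from 1 (PoC) to N (production)
  instance_type = "m5.4xlarge"       # example size for a CE node
  vpc_id        = "vpc-0123456789abcdef0"
  subnet_ids    = ["subnet-aaa", "subnet-bbb", "subnet-ccc"]
}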