Automate Let's Encrypt Certificates on BIG-IP
To quote the evil emperor Zurg: "We meet again, for the last time!" It's hard to believe it's been six years since my first rodeo with Let's Encrypt and BIG-IP, but (uncompromised) timestamps don't lie. And maybe this won't be my last look at Let's Encrypt, but it will likely be the last time I do so as a standalone effort, which I'll come back to at the end of this article. The first project was a compilation of shell scripts and python scripts and config files and well, this is no different. But it's all updated to meet the acme protocol version requirements for Let's Encrypt. Here's a quick table to connect all the dots:

Description             What's Out                 What's In
acme client             letsencrypt.sh             dehydrated
python library          f5-common-python           bigrest
BIG-IP functionality    creating the SSL profile   utilizing an iRule for the HTTP challenge

The f5-common-python library has not been maintained or enhanced for at least a year now, and I have an affinity for the good work Leo did with bigrest and I enjoy using it. I opted not to carry the SSL profile configuration forward because that functionality is more app-specific than the certificates themselves. And finally, whereas my initial project used the DNS challenge with the name.com API, in this proof of concept I chose to use an iRule on the BIG-IP to serve the challenge for Let's Encrypt to perform validation against. Whereas my solution is new, the way Let's Encrypt works has not changed, so I've carried forward the process from my previous article that I've now archived. I'll defer to their how it works page for details, but basically the steps are:

1. Define a list of domains you want to secure
2. Your client reaches out to the Let's Encrypt servers to initiate a challenge for those domains.
3. The servers will issue an http or dns challenge based on your request
4. You need to place a file on your web server or a txt record in the dns zone file with that challenge information
5. The servers will validate your challenge information and notify you
6. You will clean up your challenge files or txt records
7. The servers will issue the certificate and certificate chain to you
8. You now have the key, cert, and chain, and can deploy to your web servers or in our case, to the BIG-IP

Before kicking off a validation and generation event, the client registers your account based on your settings in the config file. The files in this project are as follows:

/etc/dehydrated/config           # Dehydrated configuration file
/etc/dehydrated/domains.txt      # Domains to sign and generate certs for
/etc/dehydrated/dehydrated       # acme client
/etc/dehydrated/challenge.irule  # iRule configured and deployed to BIG-IP by the hook script
/etc/dehydrated/hook_script.py   # Python script called by dehydrated for special steps in the cert generation process

# Environment Variables
export F5_HOST=x.x.x.x
export F5_USER=admin
export F5_PASS=admin

You add your domains to the domains.txt file (more work likely if signing a lot of domains; I tested the one I have access to). The dehydrated client, of course, is required, and then the hook script that dehydrated interacts with to deploy challenges and certificates. I aptly named that hook_script.py. For my hook, I'm deploying a challenge iRule to be applied only during the challenge; it is modified each time specific to the challenge supplied from the Let's Encrypt service and is cleaned up after the challenge is tested. And finally, there are a few environment variables I set so the information is not in text files. You could also move these into a credential vault. So to recap, you first register your client, then you can kick off a challenge to generate and deploy certificates.
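Since dehydrated drives everything through the hook script, it helps to see its basic shape. The sketch below is a minimal, hypothetical skeleton of a hook like hook_script.py — dehydrated passes the operation name as the first argument — with the real BIG-IP/bigrest logic replaced by placeholder prints. The handler bodies are assumptions for illustration, not the project's actual code.

```python
#!/usr/bin/env python3
"""Minimal sketch of a dehydrated hook script's dispatch structure.

dehydrated invokes the hook as: hook_script.py <operation> [args...].
The handler bodies here are placeholders; the real project uses bigrest
here to deploy the challenge iRule and the cert/key to the BIG-IP.
"""
import sys


def deploy_challenge(domain, token_filename, token_value):
    # Real hook: render challenge.irule with token_value and attach it to the virtual
    print(f"(hook) Deploying Challenge for {domain}")


def clean_challenge(domain, token_filename, token_value):
    # Real hook: detach and delete the challenge iRule
    print(f"(hook) Cleaning Challenge for {domain}")


def deploy_cert(domain, keyfile, certfile, fullchainfile, chainfile, timestamp):
    # Real hook: upload key/cert to the BIG-IP in a single transaction
    print(f"(hook) Deploying Certs for {domain}")


HANDLERS = {
    "deploy_challenge": deploy_challenge,
    "clean_challenge": clean_challenge,
    "deploy_cert": deploy_cert,
}


def main(argv):
    if len(argv) < 2:
        return
    handler = HANDLERS.get(argv[1])
    if handler:  # silently ignore hook operations we don't handle (startup_hook, etc.)
        handler(*argv[2:])


if __name__ == "__main__":
    main(sys.argv)
```

The dispatch-on-argv[1] shape is what dehydrated expects of any hook, whether written in bash or Python; everything BIG-IP-specific lives inside the handlers.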
On the client side, it looks like this:

./dehydrated --register --accept-terms
./dehydrated -c

Now, for testing, make sure you use the Let's Encrypt staging service instead of production. And since I want to force action every request while testing, I run the second command a little differently:

./dehydrated -c --force --force-validation

Depicted graphically, here are the moving parts for the http challenge issued by Let's Encrypt at the request of the dehydrated client, deployed to the F5 BIG-IP, and validated by the Let's Encrypt servers. The Let's Encrypt servers then generate and return certs to the dehydrated client, which then, via the hook script, deploys the certs and keys to the F5 BIG-IP to complete the process. And here's the output of the dehydrated client and hook script in action from the CLI:

# ./dehydrated -c --force --force-validation
# INFO: Using main config file /etc/dehydrated/config
Processing example.com
 + Checking expire date of existing cert...
 + Valid till Jun 20 02:03:26 2022 GMT (Longer than 30 days). Ignoring because renew was forced!
 + Signing domains...
 + Generating private key...
 + Generating signing request...
 + Requesting new certificate order from CA...
 + Received 1 authorizations URLs from the CA
 + Handling authorization for example.com
 + A valid authorization has been found but will be ignored
 + 1 pending challenge(s)
 + Deploying challenge tokens...
 + (hook) Deploying Challenge
 + (hook) Challenge rule added to virtual.
 + Responding to challenge for example.com authorization...
 + Challenge is valid!
 + Cleaning challenge tokens...
 + (hook) Cleaning Challenge
 + (hook) Challenge rule removed from virtual.
 + Requesting certificate...
 + Checking certificate...
 + Done!
 + Creating fullchain.pem...
 + (hook) Deploying Certs
 + (hook) Existing Cert/Key updated in transaction.
 + Done!

This results in a deployed certificate/key pair on the F5 BIG-IP, and it is modified in a transaction for future updates.
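For clarity on what the challenge iRule is actually doing during validation: Let's Encrypt fetches /.well-known/acme-challenge/&lt;token&gt; and expects the matching key authorization string back. Here's that logic as a small, hypothetical Python model — the token and key-authorization values are illustrative, not taken from the real iRule:

```python
# Model of the HTTP-01 responder logic the challenge iRule implements.
ACME_PATH = "/.well-known/acme-challenge/"


def challenge_response(path, challenges):
    """Map a request path to (status, body), like the iRule's HTTP_REQUEST logic."""
    if not path.startswith(ACME_PATH):
        return (404, "not found")              # not a challenge request
    key_auth = challenges.get(path[len(ACME_PATH):])
    if key_auth is None:
        return (404, "unknown token")          # a token we didn't deploy
    return (200, key_auth)                     # what Let's Encrypt validates


# One pending challenge, as the hook script would deploy it
pending = {"token123": "token123.example-key-authorization"}
print(challenge_response(ACME_PATH + "token123", pending))
# -> (200, 'token123.example-key-authorization')
```

Because the hook deploys the rule only for the duration of the challenge and removes it afterward, the virtual server spends almost no time answering for that path.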
This proof of concept is on GitHub in the f5devcentral org if you'd like to take a look. Before closing, however, I'd like to mention a couple of things: This is an update to an existing solution from years ago. It works, but it probably isn't the best way to automate today if you're just getting started and have already begun pursuing a more modern approach to automation. A better path would be something like Ansible. On that note, there are several solutions you can take a look at, posted below in resources.

Resources

https://github.com/EquateTechnologies/dehydrated-bigip-ansible
https://github.com/f5devcentral/ansible-bigip-letsencrypt-http01
https://github.com/s-archer/acme-ansible-f5
https://github.com/s-archer/terraform-modular/tree/master/lets_encrypt_module (Terraform instead of Ansible)
https://community.f5.com/t5/technical-forum/let-s-encrypt-with-cloudflare-dns-and-f5-rest-api/m-p/292943 (Similar solution to mine, only slightly more robust with OCSP stapling, the DNS challenge instead of HTTP, and bash instead of python)

Dynamic FQDN Node DNS Resolution based on URI with Route Domains and Caching iRule
Problem this snippet solves:

Following on from the code share Ephemeral Node FQDN Resolution with Route Domains - DNS Caching iRule that I posted, I have made a few modifications based on further development and comments/questions in the original share. On an incoming HTTP request, this iRule will dynamically query a DNS server (out of the appropriate route domain) defined in a pool to determine the IP address of an FQDN ephemeral node depending on the requesting URI. The IP address will be cached in a subtable to prevent querying on every HTTP request. It will also append the route domain to the node and replace the HTTP Host header.

Features of this iRule:

* Dynamically resolves FQDN node from requesting URI
* Iterates through other DNS servers in a pool if response is invalid or no response
* Caches response in the session table using custom TTL
* Uses cached result if present (prevents DNS lookup on each HTTP_REQUEST event)
* Appends route domain to the resolved node
* Replaces host header for outgoing request

How to use this snippet:

Add a DNS pool with appropriate health monitors; in this example the pool is called dns_pool. Add the lookup datagroup, dns_lookup_dg. This defines the parameters of the DNS query. The values in the datagroup will build up an array using the key in capitals to define the array object, e.g. $myArray(FQDN). Modify the values as required:

* FQDN: the FQDN of the node to load balance to
* DNS-RD: the outbound route domain to reach the DNS servers
* NODE-RD: the outbound route domain to reach the node
* TTL: TTL value for the DNS cache in seconds
* PORT: the service port of the node

ltm data-group internal dns_lookup_dg {
    records {
        /app1 {
            data "FQDN app1.my-domain.com|DNS-RD %10|NODE-RD %20|TTL 300|PORT 8443"
        }
        /app2 {
            data "FQDN app2.my-other-domain.com|DNS-RD %10|NODE-RD %20|TTL 300|PORT 8080"
        }
        default {
            data "FQDN default.domain.com|DNS-RD %10|NODE-RD %20|TTL 300|PORT 443"
        }
    }
    type string
}

Code :

ltm data-group internal dns_lookup_dg {
    records {
        /app1 {
            data "FQDN app1.my-domain.com|DNS-RD %10|NODE-RD %20|TTL 300|PORT 8443"
        }
        /app2 {
            data "FQDN app2.my-other-domain.com|DNS-RD %10|NODE-RD %20|TTL 300|PORT 8080"
        }
        default {
            data "FQDN default.domain.com|DNS-RD %10|NODE-RD %20|TTL 300|PORT 443"
        }
    }
    type string
}

when CLIENT_ACCEPTED {
    set dnsPool "dns_pool"
    set dnsCache "dns_cache"
    set dnsDG "dns_lookup_dg"
}

when HTTP_REQUEST {
    # set datagroup values in an array
    if {[class match -value [HTTP::uri] "starts_with" $dnsDG]} {
        set dnsValues [split [string trim [class match -value [HTTP::uri] "starts_with" $dnsDG]] "| "]
    } else {
        # set to default if URI is not defined in DG
        set dnsValues [split [string trim [class match -value "default" "equals" $dnsDG]] "| "]
    }
    if {([info exists dnsValues]) && ($dnsValues ne "")} {
        if {[catch {
            array set dnsArray $dnsValues
            set fqdn $dnsArray(FQDN)
            set dnsRd $dnsArray(DNS-RD)
            set nodeRd $dnsArray(NODE-RD)
            set ttl $dnsArray(TTL)
            set port $dnsArray(PORT)
        } catchErr ]} {
            log local0. "failed to set DNS variables - error: $catchErr"
            event disable
            return
        }
    }
    # check if fqdn has been previously resolved and cached in the session table
    if {[table lookup -notouch -subtable $dnsCache $fqdn] eq ""} {
        log local0. "IP is not cached in the subtable - attempting to resolve IP using DNS"
        # initialise loop vars
        set flgResolvError 0
        if {[active_members $dnsPool] > 0} {
            foreach dnsServer [active_members -list $dnsPool] {
                set dnsResolvIp [lindex [RESOLV::lookup @[lindex $dnsServer 0]$dnsRd -a $fqdn] 0]
                # verify result of dns lookup is a valid IP address
                if {[scan $dnsResolvIp {%d.%d.%d.%d} a b c d] == 4} {
                    table set -subtable $dnsCache $fqdn $dnsResolvIp $ttl
                    # clear any error flag and break out of loop
                    set flgResolvError 0
                    set nodeIpRd $dnsResolvIp$nodeRd
                    break
                } else {
                    log local0. "$dnsResolvIp is not a valid IP address, attempting to query other DNS server"
                    set flgResolvError 1
                }
            }
        } else {
            log local0. "ERROR no DNS servers are available"
            event disable
            return
        }
        # log error if query answers are invalid
        if {$flgResolvError} {
            log local0. "ERROR unable to resolve $fqdn"
            unset -nocomplain dnsServer dnsResolvIp
            event disable
            return
        }
    } else {
        # retrieve cached IP from subtable (verification of resolved IP completed before being committed to table)
        set dnsResolvIp [table lookup -notouch -subtable $dnsCache $fqdn]
        set nodeIpRd $dnsResolvIp$nodeRd
        log local0. "IP found in subtable: $dnsResolvIp with TTL: [table timeout -subtable $dnsCache -remaining $fqdn]"
    }
    # rewrite outbound host header and set node IP with RD and port
    HTTP::header replace Host $fqdn
    node $nodeIpRd $port
    log local0. "setting node to $nodeIpRd $port"
}

Tested this on version: 12.1

Accelerating AI Data Delivery with F5 BIG-IP
Introduction

AI continues to rely heavily on efficient data delivery infrastructures to innovate across industries. S3 is the protocol that AI/ML engineers rely on for data delivery. As AI workloads grow in complexity, ensuring seamless and resilient data ingestion and delivery becomes critical to supporting massive datasets, robust training workflows, and production-grade outputs. S3 is HTTP-based, so F5 is commonly used to provide advanced capabilities for managing S3-compatible storage pipelines, enforcing policies, and preventing delivery failures. This enables businesses to maintain operational excellence in AI environments.

This article explores three key functions of F5 BIG-IP within AI data delivery through embedded demo videos. From optimizing S3 data pipelines and enforcing granular policies to monitoring traffic health in real time, F5 presents core functions for developers and organizations striving for agility in their AI operations.

The diagram shows a scalable, resilient, and secure AI architecture facilitated by F5 BIG-IP. End-user traffic is directed to the front-end application through F5, ensuring secure and load-balanced access via the "Web and API front door." This traffic interacts with the AI Factory, comprising components like AI agents, inference, and model training, also secured and scaled through F5. Data is ingested into enterprise events and data stores, which are securely delivered back to the AI Factory's model training through F5 to support optimized resource utilization. Additionally, the architecture includes Retrieval-Augmented Generation (RAG), securely backed by AI object storage and connected through F5 for AI APIs. Whether from the front-end applications or the AI Factory, traffic to downstream services like AI agents, databases, websites, or queues is routed via F5 to ensure consistency, security, and high availability across the ecosystem.
This comprehensive deployment highlights F5's critical role in enabling secure, efficient AI-powered operations.

1. Ensure Resilient AI Data and S3 Delivery Pipelines with F5 BIG-IP

Modern AI workflows often rely on S3-compatible storage for high-throughput data delivery. However, a common problem is inefficient resource utilization in clusters due to uneven traffic distribution across storage nodes, causing bottlenecks, delays, and reliability concerns. If you manage your own storage environment, or have spoken to a storage administrator, you'll know that "hot spots" are something to avoid when dealing with disk arrays.

In this demo, F5 BIG-IP demonstrates how a loose-coupling architecture solves these issues. By intelligently distributing traffic across all cluster nodes via a virtual server, BIG-IP ensures balanced load distribution, eliminates bottlenecks, and provides high-performance bandwidth for AI workloads. The demo uses Warp, an S3 benchmarking tool, to highlight how F5 BIG-IP can take incoming S3 traffic and route it efficiently to storage clusters. We use the least-connection load balancing algorithm to minimize latency across the nodes while maximizing resource utilization. We also add new nodes to the load balancing pool, ensuring smooth, scalable, and resilient storage pipelines.

2. Enforce Policy-Driven AI Data Delivery with F5 BIG-IP

AI workloads are susceptible to traffic spikes that can destabilize storage clusters and impact concurrent data workflows. The video demonstrates using iRules to cap connections and stabilize clusters under high request-per-second spikes. Additionally, we use local traffic policies to redirect specific buckets while preserving other ongoing requests. For operational clarity, the study tool visualizes real-time cluster metrics, offering deep insights into how policies influence traffic.
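The connection-cap idea from section 2 is simple enough to model outside of an iRule. The sketch below is an illustrative Python analogue of the admission logic such an iRule tracks per virtual server — it is not the demo's actual iRule, and the limit value is an arbitrary assumption:

```python
class ConnectionCap:
    """Admission control analogous to a per-virtual connection-cap iRule."""

    def __init__(self, limit):
        self.limit = limit      # maximum concurrent connections to admit
        self.active = 0

    def accept(self):
        """Admit a new connection if under the cap (an iRule would reject/drop otherwise)."""
        if self.active >= self.limit:
            return False
        self.active += 1
        return True

    def release(self):
        """Connection closed; free a slot."""
        self.active = max(0, self.active - 1)


cap = ConnectionCap(limit=2)
print([cap.accept() for _ in range(3)])  # [True, True, False]
cap.release()
print(cap.accept())                      # True again once a slot frees up
```

On the BIG-IP the equivalent counter lives in the iRule's connection events, so a spike above the cap is shed at the edge instead of destabilizing the storage cluster behind it.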
3. Prevent AI Data Delivery Failures with F5 BIG-IP

AI operations depend on high efficiency and reliable data delivery to maintain optimal training and model fine-tuning workflows. The video demonstrates how F5 BIG-IP uses real-time health monitors to ensure storage clusters remain operational during failure scenarios. By dynamically detecting node health and write quorum thresholds, BIG-IP intelligently routes traffic to backup pools or read quorum clusters without disrupting endpoints. The health monitors also detect partial node failures, which is important to avoid the risk of partial writes when working with S3 storage.

Conclusion

Once again, with AI so reliant on HTTP-based S3 storage, F5 administrators find themselves as a critical part of the latest technologies. By enabling loose coupling, enforcing granular policies, and monitoring traffic health in real time, F5 optimizes data delivery for improved AI model accuracy, faster innovation, and future-proof architectures. Whether facing unpredictable traffic surges or handling partial failures in clusters, BIG-IP ensures your applications remain resilient and ready to meet business demands with ease.

Related Resources

AI Data Delivery Use Case
AI Reference Architecture
Enterprise AI delivery and security
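Circling back to section 3, the write-quorum routing decision can be sketched in a few lines. The pool names and the simple-majority quorum formula below are illustrative assumptions, not the demo's actual monitor configuration or thresholds:

```python
def pick_pool(healthy, total, primary="s3_pool", backup="s3_backup_pool"):
    """Route S3 writes to the primary pool only while write quorum holds."""
    write_quorum = total // 2 + 1   # simple majority (an assumption)
    return primary if healthy >= write_quorum else backup


print(pick_pool(healthy=4, total=4))  # s3_pool: all members healthy
print(pick_pool(healthy=2, total=4))  # s3_backup_pool: quorum of 3 lost
```

On the BIG-IP the "healthy" count comes from the health monitors, so the failover happens automatically as members are marked down, rather than after a client sees a partial write.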
F5 BIG-IP Multi-Site Dashboard
Code is community submitted, community supported, and recognized as 'Use At Your Own Risk'.

A comprehensive real-time monitoring dashboard for F5 BIG-IP Application Delivery Controllers featuring multi-site support, DNS hostname resolution, member state tracking, and advanced filtering capabilities. A 170KB modular JavaScript application runs entirely in your browser, served directly from the F5's high-speed operational dataplane. One or more sites operate as Dashboard Front-Ends serving the dashboard interface (HTML, JavaScript, CSS) via iFiles, while other sites operate as API Hosts providing pool data through optimized JSON-based dashboard API calls. This provides unified visibility across multiple sites from a single interface without requiring even a read-only account on any of the BIG-IPs, allowing you to switch between locations and see consistent pool, member, and health status data with almost no latency and very little overhead.

Think of it as an extension of the F5 GUI: near real-time state tracking, DNS hostname resolution (if configured), advanced search/filtering, and the ability to see exactly what changed and when. It gives application teams and operations teams direct visibility into application pool state without needing to wait for answers from F5 engineers, eliminating the organizational bottleneck that slows down troubleshooting when every minute counts.

https://github.com/hauptem/F5-Multisite-Dashboard

Remove alerts showing r-series LCD/Dashboard.
Code is community submitted, community supported, and recognized as 'Use At Your Own Risk'.

Especially on R5+ appliances with 1.8.0+, you may see unexpected "interface down" alerts. You could use this one-liner in rSeries bash. Hope it helps :p

[root@appliance-1 ~]# date;docker exec system_manager /confd/scripts/f5_confd_run_cmd 'show system alarms alarm state | csv'
Mon Sep 1 14:32:24 JST 2025
# show system alarms alarm state | csv
ID,RESOURCE,SEVERITY,TEXT,TIME CREATED
263169,interface-1.0,WARNING,Interface down,2025-06-11 02:24:43.512626724 UTC
263169,interface-10.0,WARNING,Interface down,2025-06-11 02:24:43.514048068 UTC
263169,interface-2.0,WARNING,Interface down,2025-06-11 02:24:43.517935094 UTC
263169,interface-5.0,WARNING,Interface down,2025-06-11 02:24:43.526179871 UTC
263169,interface-6.0,WARNING,Interface down,2025-06-11 02:24:43.528668180 UTC
263169,interface-7.0,WARNING,Interface down,2025-06-11 02:24:43.530864483 UTC
263169,interface-8.0,WARNING,Interface down,2025-06-11 02:24:43.533197062 UTC
263169,interface-9.0,WARNING,Interface down,2025-06-11 02:24:43.535438297 UTC

[root@appliance-1 ~]# date;docker exec system_manager /confd/scripts/f5_confd_run_cmd 'show system alarms alarm state | csv'|grep 263169|cut -f 2 -d ,|xargs -I{} docker exec alert-service /confd/test/sendAlert -i 263169 -r clear-all -s {}
Mon Sep 1 14:33:27 JST 2025
Alert Sent
Alert Sent
Alert Sent
Alert Sent
Alert Sent
Alert Sent
Alert Sent
Alert Sent

[root@appliance-1 ~]# date;docker exec system_manager /confd/scripts/f5_confd_run_cmd 'show system alarms alarm state | csv'
Mon Sep 1 14:33:54 JST 2025
# show system alarms alarm state | csv
% No entries found. <---------- REMOVED.
[root@appliance-1 ~]#

Ideas came from:
https://cdn.f5.com/product/bugtracker/ID1644293.html
https://my.f5.com/manage/s/article/K000150155

Can't change sync type or failover after tenant upgrade.
I made a mistake that I didn't think in the end would matter, but here's what I did. I had previously upgraded this tenant pair to 17.1.3. Everything was fine, and I intended to install on another pair, but I installed on the other boot location of one that I had already installed. I didn't think this was an issue as I would just not activate that boot location. However, I couldn't force the Active member to Standby; it was greyed out. I thought that maybe I should boot to that new location because maybe there was something that needed to complete to allow me to fail over between the members. That made it worse because I couldn't change the sync type back to Automatic with Incremental Sync. So naturally, I booted to the previous partition because it seemed to be at least better, but now I seem to be digging a hole I can't get out of.

Where it stands now:

* The pair is set to sync type "Manual with Incremental Sync"
* Member1 is standby and says "Not All Devices Synced"
* Member2 is active and says "Changes Pending"
* On the Standby Member1, I can change the sync type, but I haven't.
* On the Active Member2, I can't change the sync type or force it to standby.

I have a ticket open, but as this is a live system, I'm pursuing all avenues.

How to log HTTP/2 reset_stream
Hello,

We are currently in a meeting to prepare for HTTP/2 DDoS attacks. What we would like to do is log the client's IP address (either local or remote) whenever an HTTP/2 RESET_STREAM is received. Is there any way to achieve this? Would it be possible to implement using an iRule?

Thank you.

VIPTest: Rapid Application Testing for F5 Environments
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.

10 Settings to Lock Down your BIG-IP
EDITORS NOTE, Oct 16, 2025: This article was originally written in 2012 and may contain guidance that is out of date. Please see the new Security Best Practices for F5 Products article, updated in Oct 2025.

Earlier this year, F5 notified its customers about a severe vulnerability in F5 products. This vulnerability had to do with SSH keys and you may have heard it called "the SSH key issue" and documented as CVE-2012-1493. The severity of this vulnerability cannot be overstated. F5 has gone above and beyond its normal process for customer notification but there is evidence that there are still BIG-IP devices with the exposed SSH keys accessible from the internet. There are several options available to reduce your organization's exposure to this issue. Here are 10 mitigation techniques that you can implement today to secure your F5 infrastructure.

1. Install the hotfix. Do it. Do it now. The hotfix is non-invasive and requires little testing since it has no impact on the F5 data processing functionality. It simply edits the authorized key file to remove access for the offending key.

Control Network Access to the F5

2. Audit your BIG-IP management ports and Self-IPs. Of course you should pay special attention to public addresses (non-RFC-1918), but don't forget that even private addresses can be vulnerable to internal threats such as malware, malicious employees, and rogue wireless access points. By default, Self-IPs have many ports open – lock these down to just the ones that you know you need.

3. If you absolutely need to have routable addresses on your Self-IPs, at least lock down access to the networks that need it. To lock down SSH and the GUI for a Self-IP from a specific network:

(tmos)# modify /sys sshd allow replace-all-with { 192.168.2.* }
(tmos)# modify /sys httpd allow replace-all-with { 192.168.2.* }
(tmos)# save /sys config

4. By definition, machines within the network DMZ are at higher risk.
If a DMZ machine is compromised, a hacker can use it as a jumping point to penetrate deeper into the network. Use access controls to restrict access to and from the DMZ. See Solution 13309 for more information about restricting access to the management interface.

Lock down User Access with Appliance Mode

F5's iHealth system reports consistently that many systems have default passwords for the root and admin accounts and weak passwords for the other users. After controlling access to the management interfaces (see above), this is the most critical part of securing your F5 infrastructure. Here are three easy steps to lock down user access on the BIG-IP.

5. The Appliance Mode license option is simple. When enabled, Appliance Mode locks down the root user and removes the Unix bash shell as a command-line option. Permitting root login is a historical artifact that many F5 power users cherish. But when root logs in, you don't know who that user really is, do you? This can be an audit issue if there's a penetration or other funny business. If you are okay with locking down root but find that you cannot live without bash, then you can split the difference by just setting this db variable to true:

(tmos)# modify /sys db systemauth.disablerootlogin value true
(tmos)# save /sys config

6. Next, if you haven't done this already, configure the BIG-IP for remote authentication against, say, the enterprise Active Directory repository. Make this happen from the System > Users > Authentication screen and ensure that the default role is Application Editor or less. You can use the /auth remote-role command to provide somewhat granular authorization to each user group.

(tmos)# help /auth remote-role

7. Ensure that the oft-forgotten 'admin' user has no terminal access.

(tmos)# modify /sys auth user admin shell none
(tmos)# save /sys config

With steps 5-7, you have significantly hardened the BIG-IP device.
Neither of the special accounts, root and admin, will be able to login to the shell, and that should eliminate both the SSH key issue and the automated brute force risk.

Keep Up to Date on Security News, Hotfixes and Patches

8. If you haven't done so already, subscribe to the F5 security alert mailing list at f5.com/about-us/preferences. This will ensure that you receive timely security notices.

9. Check your configuration against F5's heuristic system iHealth. When you upload your diagnostics to iHealth, it will inform you of any missing or suggested security upgrades. Using iHealth is easy. Generating the support file is as simple as pressing a couple buttons on the GUI. Then point your browser at ihealth.f5.com, login and upload the support file. iHealth will tell you what else to look at to help you lock down your system.

There you have it, nine steps to lock down a BIG-IP and keep on top of infrastructure security… Wait, what, I promised you 10?

10. Follow me (@dholmesf5) and @f5security on Twitter. There, that was easy.

If you take anything away from this blog post (and congratulations for getting this far), it is be sure you install the SSH key hotfix and protect your management interfaces. And then, fun aside; remember that securing the infrastructure really is serious business.