devops
Automating F5 Application Delivery and Security Platform Deployments
The F5 ADSP Architecture Automation Project

The F5 ADSP reduces the complexity of modern applications by integrating operations, traffic management, performance optimization, and security controls into a single platform with multiple deployment options. This series outlines practical steps anyone can take to put these ideas into practice using the F5 ADSP Architectures GitHub repo. Each article highlights different deployment examples, which can be run locally or integrated into CI/CD pipelines following DevSecOps practices. The repository is community-supported and provides reference code that can be used for demos, workshops, or as a stepping stone for your own F5 ADSP deployments. If you find any bugs or have any enhancement requests, open an issue, or better yet, contribute.

The F5 Application Delivery and Security Platform (F5 ADSP)

The F5 ADSP addresses four core areas: how you operate day to day, how you deploy at scale, how you secure against evolving threats, and how you deliver reliably across environments. Each comes with its own challenges, but together they define the foundation for keeping systems fast, stable, and safe. Each architecture deployment example is designed to cover at least two of the four core areas: xOps, Deployment, Delivery, and Security. This ensures the examples demonstrate how multiple components of the platform work together in practice.

DevSecOps: Integrating security into the software delivery lifecycle is a necessary part of building and maintaining secure applications. This project incorporates DevSecOps practices by using supported APIs and tooling, with each use case including a GitHub repository containing IaC code, CI/CD integration examples, and telemetry options.

Resources:
- F5 Application Delivery and Security Platform GitHub Repo and Guide

ADSP Architecture Article Series:
- Automating F5 Application Delivery and Security Platform Deployments (Intro)
- F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
- F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
- F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
- F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
- F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)
- Minimizing Security Complexity: Managing Distributed WAF Policies
Deploying F5 Distributed Cloud Customer Edge on AWS in a scalable way with full automation

Scaling infrastructure efficiently while maintaining operational simplicity is a critical challenge for modern enterprises. This comprehensive guide presents the foundation for a fully automated Terraform solution for deploying F5 Distributed Cloud (F5XC) Customer Edge (CE) nodes on AWS that scales seamlessly from single-node proof-of-concepts to multi-node production deployments.
Multiple Certs, One VIP: TLS Server Name Indication via iRules

An age-old question that we’ve seen time and time again in the iRules forums here on DevCentral is “How can I use iRules to manage multiple SSL certs on one VIP?”. The answer has always historically been “I’m sorry, you can’t.” The reasoning is sound. One VIP, one cert, that’s how it’s always been. You can’t do anything with the connection until the handshake is established and decryption is done on the LTM. We’d like to help, but we just really can’t. That is…until now.

The TLS protocol has somewhat recently provided the ability to pass a “desired servername” as a value in the originating SSL handshake. Finally we have what we’ve been looking for: a way to add contextual server info during the handshake, thereby allowing us to say “cert x is for domain x” and “cert y is for domain y”. Known to us mortals as "Server Name Indication" or SNI (hence the title), this functionality is paramount for a device like the LTM that can regularly benefit from hosting multiple certs on a single IP. We should be able to pull out this information and choose an appropriate SSL profile now, with a cert that corresponds to the servername value that was sent. Now all we need is some logic to make this happen.

Lucky for us, one of the many bright minds in the DevCentral community has whipped up an iRule to show how you can finally tackle this challenge head on. Because Joel Moses, the shrewd mind and DevCentral MVP behind this example, has already done a solid write-up, I’ll quote liberally from his fine work and add some additional context where fitting. Now on to the geekery:

First things first, you’ll need to create a mapping of which servernames correlate to which certs (client SSL profiles in LTM’s case). This could be done in any manner, really, but the most efficient both from a resource and management perspective is to use a class. Classes, also known as DataGroups, are name->value pairs that will allow you to easily retrieve the data later in the iRule. Quoting Joel:

Create a string-type datagroup to be called "tls_servername". Each hostname that needs to be supported on the VIP must be input along with its matching clientssl profile. For example, for the site "testsite.site.com" with a ClientSSL profile named "clientssl_testsite", you should add the following values to the datagroup.
String: testsite.site.com
Value: clientssl_testsite

Once you’ve finished inputting the different server->profile pairs, you’re ready to move on to pools. It’s very likely that since you’re now managing multiple domains on this VIP you'll also want to be able to handle multiple pools to match those domains. To do that you'll need a second mapping that ties each servername to the desired pool. This could again be done in any format you like, but since it's the most efficient option and we're already using it, classes make the most sense here. Quoting from Joel:

If you wish to switch pool context at the time the servername is detected in TLS, then you need to create a string-type datagroup called "tls_servername_pool". You will input each hostname to be supported by the VIP and the pool to direct the traffic towards. For the site "testsite.site.com" to be directed to the pool "testsite_pool_80", add the following to the datagroup:
String: testsite.site.com
Value: testsite_pool_80

If you don't, that's fine, but realize all traffic from each of these hosts will be routed to the default pool, which is very likely not what you want.
Now then, we have two classes set up to manage the mappings of servername->SSL profile and servername->pool; all we need is some app logic in line to do the management and provide each inbound request with the appropriate profile & cert. This is done, of course, via iRules. Joel has written up one heck of an iRule, which is available in the CodeShare (here) in its entirety along with his solid write-up, but I'll also include it here in-line, as is my habit. Effectively what's happening is the iRule is parsing through the data sent throughout the SSL handshake process and searching for the specific TLS servername extension, which are the bits that will allow us to do the profile switching magic. He's written it up to fall back to the default client SSL profile and pool, so it's very important that both of these things exist on your VIP, or you'll likely find yourself with unhappy users.

One last caveat before the code: not all browsers support Server Name Indication, so be careful not to implement this unless you are very confident that most, if not all, users connecting to this VIP will support SNI. For more info on testing for SNI compatibility and a list of browsers that do and don't support it, click through to Joel's awesome CodeShare entry, I've already plagiarized enough.

So finally, the code. Again, my hat is off to Joel Moses for this outstanding example of the power of iRules. Keep at it Joel, and thanks for sharing!

when CLIENT_ACCEPTED {
    if { [PROFILE::exists clientssl] } {
        # We have a clientssl profile attached to this VIP but we need
        # to find an SNI record in the client handshake. To do so, we'll
        # disable SSL processing and collect the initial TCP payload.
        set default_tls_pool [LB::server pool]
        set detect_handshake 1
        SSL::disable
        TCP::collect
    } else {
        # No clientssl profile means we're not going to work.
        log local0. "This iRule is applied to a VS that has no clientssl profile."
        set detect_handshake 0
    }
}

when CLIENT_DATA {
    if { ($detect_handshake) } {
        # If we're in a handshake detection, look for an SSL/TLS header.
        binary scan [TCP::payload] cSS tls_xacttype tls_version tls_recordlen
        # TLS is the only thing we want to process because it's the only
        # version that allows the servername extension to be present. When we
        # find a supported TLS version, we'll check to make sure we're getting
        # only a Client Hello transaction -- those are the only ones we can pull
        # the servername from prior to connection establishment.
        switch $tls_version {
            "769" -
            "770" -
            "771" {
                if { ($tls_xacttype == 22) } {
                    binary scan [TCP::payload] @5c tls_action
                    if { not (($tls_action == 1) && ([TCP::payload length] > $tls_recordlen)) } {
                        set detect_handshake 0
                    }
                }
            }
            default {
                set detect_handshake 0
            }
        }
        if { ($detect_handshake) } {
            # If we made it this far, we're still processing a TLS client hello.
            #
            # Skip the TLS header (43 bytes in) and process the record body. For TLS/1.0 we
            # expect this to contain only the session ID, cipher list, and compression
            # list. All but the cipher list will be null since we're handling a new transaction
            # (client hello) here. We have to determine how far out to parse the initial record
            # so we can find the TLS extensions if they exist.
            set record_offset 43
            binary scan [TCP::payload] @${record_offset}c tls_sessidlen
            set record_offset [expr {$record_offset + 1 + $tls_sessidlen}]
            binary scan [TCP::payload] @${record_offset}S tls_ciphlen
            set record_offset [expr {$record_offset + 2 + $tls_ciphlen}]
            binary scan [TCP::payload] @${record_offset}c tls_complen
            set record_offset [expr {$record_offset + 1 + $tls_complen}]
            # If we're in TLS and we've not parsed all the payload in the record
            # at this point, then we have TLS extensions to process. We will detect
            # the TLS extension package and parse each record individually.
            if { ([TCP::payload length] >= $record_offset) } {
                binary scan [TCP::payload] @${record_offset}S tls_extenlen
                set record_offset [expr {$record_offset + 2}]
                binary scan [TCP::payload] @${record_offset}a* tls_extensions
                # Loop through the TLS extension data looking for a type 00 extension
                # record. This is the IANA code for server_name in the TLS transaction.
                for { set x 0 } { $x < $tls_extenlen } { incr x 4 } {
                    set start [expr {$x}]
                    binary scan $tls_extensions @${start}SS etype elen
                    if { ($etype == "00") } {
                        # A servername record is present. Pull this value out of the packet data
                        # and save it for later use. We start 9 bytes into the record to bypass
                        # type, length, and SNI encoding header (which is itself 5 bytes long), and
                        # capture the servername text (minus the header).
                        set grabstart [expr {$start + 9}]
                        set grabend [expr {$elen - 5}]
                        binary scan $tls_extensions @${grabstart}A${grabend} tls_servername
                        set start [expr {$start + $elen}]
                    } else {
                        # Bypass all other TLS extensions.
                        set start [expr {$start + $elen}]
                    }
                    set x $start
                }
                # Check to see whether we got a servername indication from TLS. If so,
                # make the appropriate changes.
                if { ([info exists tls_servername] ) } {
                    # Look for a matching servername in the Data Group and pool.
                    set ssl_profile [class match -value [string tolower $tls_servername] equals tls_servername]
                    set tls_pool [class match -value [string tolower $tls_servername] equals tls_servername_pool]
                    if { $ssl_profile == "" } {
                        # No match, so we allow this to fall through to the "default"
                        # clientssl profile.
                        SSL::enable
                    } else {
                        # A match was found in the Data Group, so we will change the SSL
                        # profile to the one we found. Hide this activity from the iRules
                        # parser.
                        set ssl_profile_enable "SSL::profile $ssl_profile"
                        catch { eval $ssl_profile_enable }
                        if { not ($tls_pool == "") } {
                            pool $tls_pool
                        } else {
                            pool $default_tls_pool
                        }
                        SSL::enable
                    }
                } else {
                    # No match because no SNI field was present. Fall through to the
                    # "default" SSL profile.
                    SSL::enable
                }
            } else {
                # We're not in a handshake. Keep on using the currently set SSL profile
                # for this transaction.
                SSL::enable
            }
            # Hold down any further processing and release the TCP session further
            # down the event loop.
            set detect_handshake 0
            TCP::release
        } else {
            # We've not been able to match an SNI field to an SSL profile. We will
            # fall back to the "default" SSL profile selected (this might lead to
            # certificate validation errors on non SNI-capable browsers).
            set detect_handshake 0
            SSL::enable
            TCP::release
        }
    }
}
Post of the Week: iControl REST Subcollections & ZoneRunner Options

In this episode of Post of the Week, Jason addresses a couple of iControl REST issues that come up in Q&A often: confusion over how to handle objects that are not sub-collections, and options for working around the lack of an F5 DNS CLI for ZoneRunner. The procedures for safely updating the BIG-IP named files are covered in knowledge base article K7032 on AskF5.
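As quick context for the sub-collection discussion, pool members are a classic example: they are not embedded in the pool object itself but live in a members sub-collection underneath it. Below is a minimal sketch of that access pattern over iControl REST, assuming basic authentication; the host address, credentials, and pool name are placeholders, and the exact fields returned can vary by TMOS version.

# Minimal sketch: read the members sub-collection of an LTM pool via iControl REST.
# Host, credentials, and pool name are placeholders for illustration.
import requests

BIGIP = "https://192.0.2.10"          # BIG-IP management address (placeholder)
AUTH = ("admin", "admin-password")    # basic auth credentials (placeholder)
POOL = "~Common~app_pool"             # full paths use ~ instead of / in the URL

# The pool object does not embed its members; they are reached by appending
# /members to the pool's path, which returns a collection with an "items" list.
resp = requests.get(
    f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}/members",
    auth=AUTH,
    verify=False,  # lab only; use proper certificate validation in production
    timeout=30,
)
resp.raise_for_status()

for member in resp.json().get("items", []):
    print(member["name"], member.get("session"), member.get("state"))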
Streamlining Certificate Management in F5 Distributed Cloud: From Console Clicks to CLI Efficiency

Introduction

Managing TLS certificates at scale in F5 Distributed Cloud (F5 XC) can become a complex task, especially when dealing with multiple namespaces, domains, load balancers, and frequent certificate renewals. While the F5 Distributed Cloud Console provides a comprehensive GUI for certificate management, the number of clicks and navigation steps required for routine operations can impact operational efficiency. In this article, we'll explore how to manage custom certificates in F5 Distributed Cloud. We'll compare the console-based approach with a streamlined CLI solution, and demonstrate why using automation tools can significantly improve your certificate management workflow.

The Challenge: Certificate Management at Scale

Modern enterprises often manage dozens or even hundreds of TLS certificates across their infrastructure. Each certificate requires:
- Regular renewal (typically every 90 days for Let's Encrypt certificates)
- Association with the correct load balancers

When multiplied across numerous applications and environments, what seems like a simple task becomes a significant operational burden.

Understanding F5 Distributed Cloud Certificate Management

F5 Distributed Cloud provides robust support for custom TLS certificates (Bring Your Own Certificate - BYOC). The platform allows you to:
- Create and manage TLS certificate objects with support for both PEM and PKCS12 formats
- Associate multiple certificates with a single HTTPS load balancer
- Share certificates across multiple load balancers

The Console Approach: Step-by-Step Process

Let's walk through the typical process of adding a new certificate via the F5 XC Console:

1. Navigate to Certificate Management (3 clicks/actions)
- Select Multi-Cloud App Connect service
- Select Certificate Management from the left menu
- Click on TLS Certificates

2. Create a New Certificate (8 clicks/actions)
- Click "Add TLS Certificate"
- Enter certificate name
- Set labels and description (optional)
- Click "Import from File" in the Certificate field
- Click "Upload File" to upload the certificate
- Enter password (for PKCS12)
- Select key type
- Click "Save and Exit"

3. Attach Certificate to Load Balancer (7 clicks/actions)
- Navigate to Load Balancers
- Select or create HTTP Load Balancer
- Select "HTTPS with Custom Certificate"
- Configure TLS parameters
- Select certificates from dropdown
- Apply configuration
- Save and Exit

Total: 18 clicks/actions minimum for a single certificate deployment.

Now imagine doing this for 50 certificates across 20 load balancers – that's potentially a lot of clicks!

Enter the CLI: CLI TLS Certificate Manager

The CLI TLS Certificate Manager (available at https://github.com/veysph/F5XC-Tools/) transforms this multi-step process into simple, scriptable commands. This tool leverages the F5 XC API to provide direct, programmatic access to certificate management functions.

Key Benefits of the CLI Approach

1. Dramatic Time Savings
What takes 18 clicks in the console becomes a single command:
python f5xc_tls_cert_manager.py --config config.json --create

2. Batch Operations / Automation-Ready
Process multiple certificates easily. The tool can be integrated into or adapted for CI/CD pipelines (a batch sketch follows this list).

3. Consistent and Repeatable
Eliminate human error with standardized commands and configuration files.
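As a small illustration of the batch angle, the sketch below wraps the CLI in a loop over per-environment config files using only Python's standard library. The config file names mirror the multi-environment use case later in this article, and nothing beyond the --config and --create flags shown above is assumed about the tool itself.

# Minimal batch wrapper around the certificate manager CLI (a sketch, not part of the tool).
# Config file names are placeholders; only the --config and --create flags shown in this
# article are assumed to exist.
import subprocess
import sys

CONFIGS = ["dev.json", "staging.json", "production.json"]  # one config per namespace/environment

failures = []
for config in CONFIGS:
    print(f"--- creating certificate from {config} ---")
    result = subprocess.run(
        [sys.executable, "f5xc_tls_cert_manager.py", "--config", config, "--create"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        failures.append(config)

# Fail the pipeline run if any environment did not deploy cleanly.
if failures:
    sys.exit(f"certificate deployment failed for: {', '.join(failures)}")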
Practical Use Cases

Use Case 1: Multi-Environment Deployment
Scenario: Deploying certificates across dev, staging, and production namespaces

Console Approach:
- Navigate to each namespace
- Repeat certificate upload process
- Risk: High (manual process prone to errors)
- Effort: a lot of clicks

CLI Approach:
python f5xc_tls_cert_manager.py --config dev.json --create
python f5xc_tls_cert_manager.py --config staging.json --create
python f5xc_tls_cert_manager.py --config production.json --create
- Time: ~5 minutes
- Risk: Very low (automated validation)
- Effort: 3 commands

Use Case 2: Emergency Certificate Replacement
Scenario: Expired (or compromised) certificate needs immediate replacement

Console Approach:
- Stress of navigating multiple screens under pressure
- Risk of misconfiguration during urgent changes

CLI Approach:
python f5xc_tls_cert_manager.py --config config.json --replace

Conclusion

While the F5 Distributed Cloud Console provides a comprehensive and user-friendly interface for certificate management, the CLI approach offers undeniable advantages for organizations managing certificates at scale. The Certificate Manager CLI tool bridges the gap between the powerful capabilities of F5 Distributed Cloud and the operational efficiency demands of modern infrastructure-as-code practices.

Additional Resources
- F5 Distributed Cloud Certificate Management Documentation
- F5XC TLS Certificate Manager CLI Tool
- F5 Distributed Cloud API Documentation
How To Build An Agent With n8n And Docker - AI Step-By-Step Lab

OK, community... It’s time for another addition to our AI Step-By-Step Labs, from yours truly! I did a mini lab for installing n8n on a Mac. It has been popular, but we thought it would be better as a Docker installation to fit into our lab. So I set out to do this on Docker, and it was super easy. I'm not going to spoil it too much, as I'd like for you to visit the GitHub repository for our labs, but this is the flow you create:

This lab walks you through installing n8n in Docker and creating an AI agent that proxies requests to the LLM instance you created with our first lab on installing Ollama. Please check out the repository on GitHub to get started using this powerful agentic framework in your lab, production, or personal use today. Feel free to watch the video on our YouTube channel here:
VIPTest: Rapid Application Testing for F5 Environments

VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.
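Since the summary highlights concurrent processing and TLS-version reporting, here is a rough sketch of what that general technique looks like with Python's standard library. It is an illustration of the approach, not VIPTest's actual code; the URLs are placeholders, and the real tool's handling and reporting are far more complete.

# A sketch of concurrent URL checking in the spirit of VIPTest (not the tool's actual code).
# It records the HTTP status, negotiated TLS version, and any connection error per URL.
import concurrent.futures
import socket
import ssl
import urllib.request
from urllib.parse import urlparse

URLS = [
    "https://www.example.com/",
    "http://www.example.org/",
]  # placeholder targets; in practice these would come from a file or CLI argument

def check(url: str) -> dict:
    """Return status code, negotiated TLS version, and any error for one URL."""
    result = {"url": url, "status": None, "tls_version": None, "error": None}
    parsed = urlparse(url)
    try:
        if parsed.scheme == "https":
            # Open a short-lived TLS connection first just to record the negotiated version.
            ctx = ssl.create_default_context()
            with socket.create_connection((parsed.hostname, parsed.port or 443), timeout=5) as raw:
                with ctx.wrap_socket(raw, server_hostname=parsed.hostname) as tls:
                    result["tls_version"] = tls.version()
        # Then make the actual HTTP(S) request for the response code.
        with urllib.request.urlopen(url, timeout=5) as resp:
            result["status"] = resp.status
    except Exception as exc:  # keep going; one bad URL should not stop the batch
        result["error"] = str(exc)
    return result

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        for outcome in pool.map(check, URLS):
            print(outcome)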
Modernizing F5 Platforms with Ansible

I’ve been meaning to publish this article for some time now. Over the past few months, I’ve been building Ansible automation that I believe will help customers modernize their F5 infrastructure. This is especially true for those looking to migrate from legacy BIG-IP hardware to next-generation platforms like VELOS and rSeries.

As I explored tools like F5 Journeys and traditional CLI-based migration methods, I noticed a significant amount of manual pre-work was still required. This includes:
- Ensuring the Master Key used to encrypt the UCS archive is preserved and securely handled
- Storing the UCS, Master Key, and information assets on a backup host
- Pre-configuring all VLANs and properly tagging them on the VELOS partition before deploying a Tenant OS

To streamline this, I created an Ansible Playbook with supporting roles tailored for Red Hat Ansible Automation Platform. It’s built to perform a lift-and-shift migration of an F5 BIG-IP configuration from one device to another—with optional OS upgrades included. In the demo video below, you’ll see an automated migration of an F5 i10800 running 15.1.10 to a VELOS BX110 Tenant OS running 17.5.0—demonstrating a smooth, hands-free modernization process.

Currently Working

VELOS
- VELOS Controller/Partition running F5OS-C 1.8.1, which allows the Tenant Management IP to be in a different VLAN
- Migrates a standalone F5 BIG-IP i10800 to a VELOS BX110 Tenant OS
- VLAN'ed source tenant required (doesn't support non-VLAN tenants)

rSeries
- Shares the same management IP subnet as the Chassis Partition
- Migrates a standalone F5 BIG-IP i10800 to an R5000 Tenant OS
- VLAN'ed source tenant required (doesn't support non-VLAN tenants)

Handles:
- Configuration and crypto backup
- UCS creation, transfer, and validation (see the standalone sketch at the end of this post)
- F5OS system VLAN creation and association to the tenant (does not manage interface-to-VLAN mapping)
- F5OS tenant provisioning and deployment
- Inline OS upgrades during the migration

Roadmap / What's Next
- Expanding testing to include Viprion/iSeries (vCMP) tenants
- Supporting hardware-to-virtual platform migrations
- Adding functionality for HA (High Availability) environments

Watch the Demo Video

View the Source Code on GitHub
https://github.com/f5devcentral/f5-bd-ansible-platform-modernization

This project is built for the community—so feel free to take it, fork it, and expand it. Let’s make F5 platform modernization as seamless and automated as possible.
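To make the backup step a little more concrete, below is a minimal sketch of the kind of UCS save call the playbook's backup role effectively automates, issued directly against iControl REST. This is an illustration under assumptions, not the playbook's code: the management address, credentials, and archive name are placeholders, the response handling is deliberately defensive, and the transfer and validation steps the playbook also covers are omitted.

# Sketch only: trigger a UCS archive save on a BIG-IP over iControl REST, roughly the
# backup step the playbook automates. Host, credentials, and archive name are placeholders,
# and the response shape should be verified against your TMOS version.
import requests

BIGIP = "https://192.0.2.20"            # BIG-IP management address (placeholder)
AUTH = ("admin", "admin-password")      # basic auth credentials (placeholder)
UCS_NAME = "pre_migration_backup.ucs"

# Roughly equivalent to "tmsh save /sys ucs pre_migration_backup.ucs".
resp = requests.post(
    f"{BIGIP}/mgmt/tm/sys/ucs",
    auth=AUTH,
    json={"command": "save", "name": UCS_NAME},
    verify=False,   # lab only; validate certificates in production
    timeout=600,    # UCS generation can take several minutes on large configs
)
resp.raise_for_status()
print(f"UCS save request for {UCS_NAME} returned HTTP {resp.status_code}")

# Listing the archives afterwards is a simple validation step before transferring the file.
listing = requests.get(f"{BIGIP}/mgmt/tm/sys/ucs", auth=AUTH, verify=False, timeout=60)
listing.raise_for_status()
for item in listing.json().get("items", []):
    print(item.get("apiRawValues", {}).get("filename", "<unknown>"))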