Extending F5 ADSP: Multi-Tailnet Egress
Tailscale tailnets make private networking simple, secure, and efficient. They're quick to establish, easy to operate, and provide strong identity and network-level protection through zero-trust WireGuard mesh networking. However, while tailnets are secure, applications inside these environments still need enterprise-grade application security, especially when exposed beyond the mesh. This is where F5 Distributed Cloud (XC) App Stack comes in. As F5 XC's Kubernetes-native platform, App Stack integrates directly with Tailscale to extend F5 ADSP into tailnets. The result is that applications inside tailnets gain the same enterprise-grade security, performance, and operational consistency as in traditional environments, while also taking full advantage of Tailscale networking.
Deploying F5 Distributed Cloud Customer Edge on AWS in a scalable way with full automation

Scaling infrastructure efficiently while maintaining operational simplicity is a critical challenge for modern enterprises. This comprehensive guide presents the foundation for a fully automated Terraform solution for deploying F5 Distributed Cloud (F5XC) Customer Edge (CE) nodes on AWS that scales seamlessly from single-node proof-of-concepts to multi-node production deployments.
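The guide itself walks through the full module; at its core, the day-to-day driver remains the standard Terraform workflow. A minimal sketch, assuming a hypothetical ce_node_count input variable to illustrate the single-node to multi-node scaling (the real module's variable names may differ):

terraform init
terraform plan -var="ce_node_count=1"    # assumed variable name, for illustration: single-node proof-of-concept
terraform apply -var="ce_node_count=3"   # the same code scaled to a three-node deployment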
Streamlining Certificate Management in F5 Distributed Cloud: From Console Clicks to CLI Efficiency

Introduction

Managing TLS certificates at scale in F5 Distributed Cloud (F5 XC) can become a complex task, especially when dealing with multiple namespaces, domains, load balancers, and frequent certificate renewals. While the F5 Distributed Cloud Console provides a comprehensive GUI for certificate management, the number of clicks and navigation steps required for routine operations can impact operational efficiency.

In this article, we'll explore how to manage custom certificates in F5 Distributed Cloud. We'll compare the console-based approach with a streamlined CLI solution, and demonstrate why using automation tools can significantly improve your certificate management workflow.

The Challenge: Certificate Management at Scale

Modern enterprises often manage dozens or even hundreds of TLS certificates across their infrastructure. Each certificate requires:

- Regular renewal (typically every 90 days for Let's Encrypt certificates)
- Association with the correct load balancers

When multiplied across numerous applications and environments, what seems like a simple task becomes a significant operational burden.

Understanding F5 Distributed Cloud Certificate Management

F5 Distributed Cloud provides robust support for custom TLS certificates (Bring Your Own Certificate - BYOC). The platform allows you to:

- Create and manage TLS certificate objects with support for both PEM and PKCS12 formats
- Associate multiple certificates with a single HTTPS load balancer
- Share certificates across multiple load balancers

The Console Approach: Step-by-Step Process

Let's walk through the typical process of adding a new certificate via the F5 XC Console:

1. Navigate to Certificate Management (3 clicks/actions)
   - Select Multi-Cloud App Connect service
   - Select Certificate Management from the left menu
   - Click on TLS Certificates
2. Create a New Certificate (8 clicks/actions)
   - Click "Add TLS Certificate"
   - Enter certificate name
   - Set labels and description (optional)
   - Click "Import from File" in the Certificate field
   - Click "Upload File" to upload the certificate
   - Enter password (for PKCS12)
   - Select key type
   - Click "Save and Exit"
3. Attach Certificate to Load Balancer (7 clicks/actions)
   - Navigate to Load Balancers
   - Select or create HTTP Load Balancer
   - Select "HTTPS with Custom Certificate"
   - Configure TLS parameters
   - Select certificates from dropdown
   - Apply configuration
   - Save and Exit

Total: 18 clicks/actions minimum for a single certificate deployment

Now imagine doing this for 50 certificates across 20 load balancers – that's potentially a lot of clicks!

Enter the CLI: CLI TLS Certificate Manager

The CLI TLS Certificate Manager (available at https://github.com/veysph/F5XC-Tools/) transforms this multi-step process into simple, scriptable commands. This tool leverages the F5 XC API to provide direct, programmatic access to certificate management functions.

Key Benefits of the CLI Approach

1. Dramatic Time Savings

What takes 18 clicks in the console becomes a single command:

python f5xc_tls_cert_manager.py --config config.json --create

2. Batch Operations / Automation-Ready

Process multiple certificates easily (see the sketch after this list). The tool can be integrated/adapted for CI/CD pipelines.

3. Consistent and Repeatable

Eliminate human error with standardized commands and configuration files.
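As an illustration of the batch pattern, here is a minimal sketch that loops over per-environment config files (the same dev/staging/production files used in the use cases below); the config schema itself is documented in the tool's repository:

# Batch run: one config file per target environment.
for env in dev staging production; do
  python f5xc_tls_cert_manager.py --config "${env}.json" --create
done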
Practical Use Cases

Use Case 1: Multi-Environment Deployment

Scenario: Deploying certificates across dev, staging, and production namespaces

Console Approach:
- Navigate to each namespace
- Repeat the certificate upload process
- Risk: High (manual process prone to errors)
- Effort: many clicks

CLI Approach:

python f5xc_tls_cert_manager.py --config dev.json --create
python f5xc_tls_cert_manager.py --config staging.json --create
python f5xc_tls_cert_manager.py --config production.json --create

- Time: ~5 minutes
- Risk: Very low (automated validation)
- Effort: 3 commands

Use Case 2: Emergency Certificate Replacement

Scenario: An expired (or compromised) certificate needs immediate replacement

Console Approach:
- Stress of navigating multiple screens under pressure
- Risk of misconfiguration during urgent changes

CLI Approach:

python f5xc_tls_cert_manager.py --config config.json --replace

Conclusion

While the F5 Distributed Cloud Console provides a comprehensive and user-friendly interface for certificate management, the CLI approach offers undeniable advantages for organizations managing certificates at scale. The Certificate Manager CLI tool bridges the gap between the powerful capabilities of F5 Distributed Cloud and the operational efficiency demands of modern infrastructure-as-code practices.

Additional Resources

- F5 Distributed Cloud Certificate Management Documentation
- F5XC TLS Certificate Manager CLI Tool
- F5 Distributed Cloud API Documentation
How to deploy an F5XC SMSv2 for KVM with the help of automation

Typical KVM architecture for F5XC CE

The purpose of this article isn't to build a complete KVM environment, but some basic concepts should be explained.

To be deployed, a CE must have an interface (which is and will always be its default interface) that has Internet access. This access is necessary to perform the installation steps and to provide the "control plane" part to the CE. This interface is referred to as "Site Local Outside" (SLO).

It is highly recommended (not to say required) to add at least a second interface during the site's first deployment, because depending on the infrastructure (for example, GCP) it is not possible to add network interfaces after the creation of the VM. Even on platforms where adding a network interface is possible, a reboot of the F5XC CE is needed. An F5XC SMSv2 CE can have up to eight interfaces overall. Additional interfaces are (most of the time) used as "Site Local Inside" (SLI) interfaces or "Segment interfaces" (that specific part will be covered in another article).

To match the requirements, one typical KVM deployment is described below. It is one way of doing things, not the only way. The architecture will most likely be composed of:

- a Linux host
- user-space software to run KVM: qemu, libvirt, virt-manager
- Linux network bridge interfaces, with KVM networks mapped to those interfaces
- CE interfaces attached to those KVM networks

This is what the diagram below is picturing.

KVM Storage and Networking

We will use both components in the Terraform automation.

KVM Storage

It is necessary to define a storage pool for the F5XC CE virtual machine. If you already have a storage pool, you can use it. Otherwise, here is how to create one:

sudo virsh pool-define-as --name f5xc-vmdisks --type dir --target /f5xc-vmdisks

To create the target directory (/f5xc-vmdisks):

sudo virsh pool-build f5xc-vmdisks

Start the storage pool:

sudo virsh pool-start f5xc-vmdisks

Ensure the storage pool starts automatically on system boot:

sudo virsh pool-autostart f5xc-vmdisks

KVM Networking

This assumes you already have bridge interfaces configured on the Linux host, but no KVM networks yet.

KVM SLO networking

Create an XML file (kvm-net-ext.xml) with the following:

<network>
  <name>kvm-net-ext</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

Then run:

virsh net-define kvm-net-ext.xml
virsh net-start kvm-net-ext
virsh net-autostart kvm-net-ext

KVM SLI networking

Create an XML file (kvm-net-int.xml) with the following:

<network>
  <name>kvm-net-int</name>
  <forward mode="bridge"/>
  <bridge name="br1"/>
</network>

Then run:

virsh net-define kvm-net-int.xml
virsh net-start kvm-net-int
virsh net-autostart kvm-net-int
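Before moving on to Terraform, it is worth confirming that the storage pool and both networks are active and set to autostart. A quick check with standard virsh commands, using the names defined above:

virsh pool-list --all   # f5xc-vmdisks should be listed as active, autostart yes
virsh net-list --all    # kvm-net-ext and kvm-net-int should be listed as active, autostart yes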
Terraform

Getting the F5XC CE QCOW2 image

Please use this link to retrieve the QCOW2 image and store it on your KVM Linux host.

Terraform variables

vCPU and Memory

Please refer to this page for the CPU and memory requirements. Then adjust:

variable "f5xc-ce-memory" {
  description = "Memory allocated to KVM CE"
  default     = "32000"
}

variable "f5xc-ce-vcpu" {
  description = "Number of vCPUs allocated to KVM CE"
  default     = "8"
}

Networking

Based on your KVM networking setup, please adjust:

variable "f5xc-ce-network-slo" {
  description = "KVM Networking for SLO interface"
  default     = "kvm-net-ext"
}

variable "f5xc-ce-network-sli" {
  description = "KVM Networking for SLI interface"
  default     = "kvm-net-int"
}

Storage

Based on your KVM storage pool, please adjust:

variable "f5xc-ce-storage-pool" {
  description = "KVM CE storage pool name"
  default     = "f5xc-vmdisks"
}

F5XC CE image location

variable "f5xc-ce-qcow2" {
  description = "KVM CE QCOW2 image source"
  default     = "<path to the F5XC CE QCOW2 image>"
}

Cloud-init modification

It's possible to configure a static IP and gateway for the SLO interface. This is done in the cloud-init part of the Terraform code: specify slo_ip and slo_gateway as shown below.

data "cloudinit_config" "config" {
  gzip          = false
  base64_encode = false

  part {
    content_type = "text/cloud-config"
    content = yamlencode({
      write_files = [
        {
          path        = "/etc/vpm/user_data"
          permissions = "0644"
          owner       = "root"
          content     = <<-EOT
            token: ${replace(volterra_token.smsv2-token.id, "id=", "")}
            slo_ip: 10.154.1.100/24
            slo_gateway: 10.154.1.254
          EOT
        }
      ]
    })
  }
}

If you don't need a static IP, comment out or remove those two lines.

Sample Terraform code

A sample is available here.

Deployment

Make all the necessary changes in the Terraform variables and in the cloud-init. Then run:

terraform init
terraform plan
terraform apply

Should everything be correct at each step, you should get a CE object in the F5XC Console, under Multi-Cloud Network Connect --> Manage --> Site Management --> Secure Mesh Sites v2.
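Before looking for the site object in the console, you can also confirm locally that the CE virtual machine came up. A quick sketch with standard virsh commands; the domain name depends on what your Terraform code assigned and is shown here as a placeholder:

virsh list --all                  # the CE domain should be listed as "running"
virsh console <ce-domain-name>    # optional: attach to the serial console to watch boot and registration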