F5 Distributed Cloud Telemetry (Metrics) - Prometheus
Scope

This article walks through collecting metrics from F5 Distributed Cloud's (XC) Service Graph API and exposing them in a format that Prometheus can scrape, so that they can then be visualized in Grafana.

Introduction

Metrics are essential for gaining real-time insight into service performance and behaviour. F5 Distributed Cloud (XC) provides a Service Graph API that captures service-to-service communication data across your infrastructure. Prometheus, a leading open-source monitoring system, can scrape and store time-series metrics, and when paired with Grafana, it offers powerful visualization capabilities. This article shows how to integrate a custom Python-based exporter that transforms Service Graph API data into Prometheus-compatible metrics. These metrics are then scraped by Prometheus and visualized in Grafana, all running in Docker for easy deployment.

Prerequisites

Access to an F5 Distributed Cloud (XC) SaaS tenant
A VM with Python 3 installed
A running Prometheus instance (if you do not have one, see the "Configuring Prometheus" section below)
A running Grafana instance (if you do not have one, see the "Configuring Grafana" section below)

Note: In this demo, an AWS VM is used with Python installed and running the exporter (port 8888), while Prometheus (host port 9090) and Grafana (port 3000) run as Docker instances, all on the same VM.

Architecture Overview

F5 XC API → Python Exporter → Prometheus → Grafana

Building the Python Exporter

To collect metrics from the F5 Distributed Cloud (XC) Service Graph API and expose them in a format Prometheus understands, we created a lightweight Python exporter using Flask. This exporter acts as a transformation layer: it fetches service graph data, parses it, and exposes it through a /metrics endpoint that Prometheus can scrape.
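The full exporter is linked below and is not reproduced here. As an illustration of the transformation layer it implements, the following standard-library-only sketch renders already-parsed metric values into the Prometheus text exposition format. The metric name matches the one queried later in Grafana, but the input structure and label values are hypothetical:

```python
def to_prometheus(metrics):
    """Render parsed metrics into the Prometheus text exposition format.

    `metrics` maps a metric name to (help_text, metric_type, samples),
    where `samples` is a list of (labels_dict, value) pairs.
    """
    lines = []
    for name, (help_text, mtype, samples) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        for labels, value in samples:
            if labels:
                # Labels are sorted so the output is deterministic.
                label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
                lines.append(f"{name}{{{label_str}}} {value}")
            else:
                lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"


# One gauge built from a hypothetical, already-parsed Service Graph response.
page = to_prometheus({
    "f5xc_downstream_http_request_rate": (
        "HTTP request rate reported by the XC Service Graph API",
        "gauge",
        [({"vhost": "app-lb", "direction": "downstream"}, 12.5)],
    ),
})
print(page)
```

In the real exporter, a Flask route serves a string like this at /metrics; the sketch above only shows the formatting step, not the API call or the web server.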
Code Link -> exporter.py

Key Functions of the Exporter

Uses the XC-provided .p12 file for authentication: To authenticate API requests to F5 Distributed Cloud (XC), the exporter uses a client certificate packaged in a .p12 file. This file must be downloaded manually from the F5 XC console (steps) and stored on the VM where the Python script runs. The script expects the full path to the .p12 file and its associated password to be specified in the configuration section.

Fetches Service Graph metrics: The script pulls service-level metrics such as request rates, error rates, throughput, and latency from the XC API. It supports both aggregated and individual load balancer views.

Processes and structures the data: The exporter parses the raw API response to extract the latest metric values and converts them into the Prometheus exposition format. Each metric is labelled (e.g., by vhost and direction) for flexibility in Grafana queries.

Exposes a /metrics endpoint: A Flask web server runs on port 8888, serving the /metrics endpoint. Prometheus periodically scrapes this endpoint to ingest the latest metrics.

Handles multiple metric types: Traffic metrics and health scores are handled and formatted individually. Each metric includes a descriptive name, a type declaration, and optional labels for fine-grained monitoring and visualization.

Running the Exporter

python3 exporter.py > python.log 2>&1 &

This command runs exporter.py with Python 3 in the background and redirects all standard output and error messages to python.log for easier debugging.

Configuring Prometheus

docker run -d --name=prometheus --network=host -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus:latest

Prometheus runs as a Docker instance in host network mode (port 9090), configured via prometheus.yml to scrape the /metrics endpoint exposed by the Python Flask exporter on port 8888 every 60 seconds.
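The scrape settings described above (60-second interval, exporter on port 8888) translate into a minimal prometheus.yml along these lines. The target address is an assumption: use localhost in the all-in-one demo, or the exporter VM's IP otherwise.

```yaml
global:
  scrape_interval: 60s

scrape_configs:
  - job_name: "f5xc-exporter"
    # /metrics endpoint served by the Python Flask exporter (port 8888)
    static_configs:
      - targets: ["localhost:8888"]
```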
Configuring Grafana

docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest

The private IP of the Prometheus Docker instance, along with its port (9090), is used as the data source in the Grafana configuration. Once Prometheus is configured under Grafana Data sources, follow these steps:

Navigate to the Explore menu
Select "Prometheus" in the data source picker
Choose the appropriate metric, in this case "f5xc_downstream_http_request_rate"
Select the desired time range and click "Run query"
Observe the metrics graph that is displayed

Note: Some requests need to be generated for metrics to be visible in Grafana. A broader, high-level view of all metrics can be accessed by navigating to "Drilldown" and selecting "Metrics", providing a comprehensive snapshot across services.

Conclusion

F5 Distributed Cloud's (F5 XC) Service Graph API provides deep visibility into service-to-service communication, and when paired with Prometheus and Grafana, it enables powerful, real-time monitoring without vendor lock-in. This integration highlights F5 XC's alignment with open-source ecosystems, allowing users to build flexible and scalable observability pipelines. The custom Python exporter bridges the gap between the XC API and Prometheus, offering a lightweight and adaptable solution for transforming and exposing metrics. With Grafana dashboards on top, teams can gain instant insight into service health and performance. This open approach empowers operations teams to respond faster, optimize more effectively, and evolve their observability practices with confidence and control.

Distributed Cloud for App Delivery & Security for Hybrid Environments
As enterprises modernize and expand their digital services, they increasingly deploy multiple instances of the same applications across diverse infrastructure environments—such as VMware, OpenShift, and Nutanix—to support distributed teams, regional data sovereignty, redundancy, or environment-specific compliance needs. These application instances often integrate into service chains that span across clouds and data centers, introducing both scale and operational complexity. F5 Distributed Cloud provides a unified solution for secure, consistent application delivery and security across hybrid and multi-cloud environments. It enables organizations to add workloads seamlessly—whether for scaling, redundancy, or localization—without sacrificing visibility, security, or performance.

Using Aliases to launch F5 AMI Images in AWS Marketplace
F5 lists 82 product offerings in the AWS Marketplace as Amazon Machine Images (AMI). Each version of each product in each AWS Region has a different AMI. That’s around 22,000 images! Each AMI is identified by an AMI ID. You use the AMI ID to indicate which AMI you want to use when launching an F5 product. You can find AMI IDs using the AWS Web Console, but the AWS CLI is the best tool for the job.

Searching for AMIs using the AWS CLI

Here’s how you find the AMI IDs for version 17.5.1.2 of BIG-IP Virtual Edition in the us-east-1 AWS region:

aws ec2 describe-images --owners aws-marketplace --filters 'Name=name,Values=F5 BIGIP-17.5.1.2*' --query "sort_by(Images,&Name)[:].{Description: Description, Id: ImageId}" --region us-east-1 --output table

--------------------------------------------------------------------------------------------------
|                                         DescribeImages                                         |
+------------------------------------------------------------------------+-----------------------+
|                              Description                               |          Id           |
+------------------------------------------------------------------------+-----------------------+
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-All Modules 1Boot Loc-250916013758        | ami-0948eabdf29ef2a8f |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-All Modules 2Boot Loc-250916015535        | ami-0cb3aaa67967ad029 |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-LTM 1Boot Loc-250916013616                | ami-05d70b82c9031ff39 |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-LTM 2Boot Loc-250916014744                | ami-0b6021cc939308f3e |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-encrypted-threat-protection-250916015535  | ami-01f4fde300d3763be |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-AWF Plus 16vCPU-250916015534              | ami-015474056159387ac |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 200Mbps-250916015522         | ami-06ce5b03dce2a059d |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 25Mbps-250916015520          | ami-0826808708df97480 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 3Gbps-250916015523           | ami-08c63c8f7ca71cf37 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 10Gbps-250916015532             | ami-0e806ef17838760e4 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 1Gbps-250916015530              | ami-05e31c2a0ac9ec050 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 200Mbps-250916015528            | ami-02dc0995af98d0710 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 25Mbps-250916015527             | ami-08b8f2daefde800e9 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 5Gbps-250916015531              | ami-0d16154bb1102f3e9 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 10Gbps-250916015512                | ami-05c9527fff191feba |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 1Gbps-250916015510                 | ami-05ce2932601070d5c |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 200Mbps-250916015508               | ami-0f6044db3900ba46f |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 25Mbps-250916014542                | ami-0de57aba160170358 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 5Gbps-250916015511                 | ami-04271103ab2d1369d |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 10Gbps-250916014739                  | ami-0d06d2a097d7bb47a |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 1Gbps-250916014737                   | ami-01707e969ebcc6138 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 200Mbps-250916014735                 | ami-06f9a44562d94f992 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 25Mbps-250916013626                  | ami-0aa2bca574c66af13 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 5Gbps-250916014738                   | ami-01951e02c52deef85 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-PVE Adv WAF Plus 200Mbps-0916015525       | ami-03df50dfc04f19df5 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-PVE Adv WAF Plus 25Mbps-50916015524       | ami-0777c069eaae20ea1 |
+------------------------------------------------------------------------+-----------------------+

This command shows all 17.5.1* releases of the "PAYG-Good 1Gbps" flavor of BIG-IP in the us-west-1 region, sorted by newest release first:

aws ec2 describe-images --owners aws-marketplace --filters 'Name=name,Values=F5 BIGIP-17.5.1*PAYG-Good 1Gbps*' --query "reverse(sort_by(Images,&CreationDate))[:].{Description: Name, Id: ImageId, date: CreationDate}" --region us-west-1 --output table

+--------------------------------------------------------------------------------------------+------------------------+---------------------------+
|                                        Description                                         |           Id           |           date            |
+--------------------------------------------------------------------------------------------+------------------------+---------------------------+
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 1Gbps-250916014737-7fb2f9db-2a12-4915-9abb-045b6388cccd  | ami-0de8ca1229be5f7fe  | 2025-09-16T23:12:28.000Z  |
| F5 BIGIP-17.5.1-0.80.7 PAYG-Good 1Gbps-250811055424-7fb2f9db-2a12-4915-9abb-045b6388cccd   | ami-09afcec6f36494382  | 2025-08-15T19:03:23.000Z  |
| F5 BIGIP-17.5.1-0.0.7 PAYG-Good 1Gbps-250618090310-7fb2f9db-2a12-4915-9abb-045b6388cccd    | ami-03e389e112872fd53  | 2025-07-01T06:00:44.000Z  |
+--------------------------------------------------------------------------------------------+------------------------+---------------------------+

Notice that the same BIG-IP VE release has a different AMI ID in each AWS region. Attempting to launch a product in one region using an AMI ID from a different region will fail. This causes a problem when a shell script or automation tool that launches new EC2 instances has AMI IDs hardcoded for one region and you attempt to use it in another. Wouldn’t it be nice to have a single AMI identifier that works in all AWS regions?

Introducing AMI Aliases

The AMI alias is an identifier similar to the AMI ID, but it’s easier to use in automation. An AMI alias has the form /aws/service/marketplace/prod-<identifier>/<version>; for example, for "PAYG-Good 1Gbps":

/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5

You can use this AMI alias in any Region, and AWS automatically maps it to the correct Regional AMI ID.
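Because the alias is a fixed path built from a product identifier and a version, it is easy to generate in automation. A small illustrative helper (the function names are ours; the product identifier is the "GOOD (PAYG, 1Gbps)" one shown above):

```python
def ami_alias(product_id: str, version: str = "latest") -> str:
    """Build the region-independent AMI alias for a Marketplace product."""
    return f"/aws/service/marketplace/{product_id}/{version}"


def run_instances_image_id(product_id: str, version: str = "latest") -> str:
    """Value usable as --image-id, letting the AWS CLI resolve the alias via SSM."""
    return "resolve:ssm:" + ami_alias(product_id, version)


alias = ami_alias("prod-s6e6miuci4yts", "17.5.1.2-0.0.5")
image_id = run_instances_image_id("prod-s6e6miuci4yts", "17.5.1.2-0.0.5")
print(alias)     # /aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5
print(image_id)  # resolve:ssm:/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5
```

The second value can be passed directly to "aws ec2 run-instances --image-id", as shown in the examples later in this article.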
BIG-IP AMI Alias Identifiers

F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 16vCPU)    prod-qqgc2ltsirpio
F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 200Mbps)   prod-yajbds56coa24
F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 25Mbps)    prod-qiufc36l6sepa
F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 3Gbps)     prod-fp5qrfirjnnty
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 10Gbps)           prod-w2p3rtkjrjmw6
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 1Gbps)            prod-g3tye45sqm5d4
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 200Mbps)          prod-dnpovgowtyz3o
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 25Mbps)           prod-wjoyowh6kba46
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 5Gbps)            prod-hlx7g47cksafk
F5 BIG-IP VE - ALL (BYOL, 1 Boot Location)                            prod-zvs3u7ov36lig
F5 BIG-IP VE - ALL (BYOL, 2 Boot Locations)                           prod-ubfqxbuqpsiei
F5 BIG-IP VE - LTM/DNS (BYOL, 1 Boot Location)                        prod-uqhc6th7ni37m
F5 BIG-IP VE - LTM/DNS (BYOL, 2 Boot Locations)                       prod-o7jz5ohvldaxg
F5 BIG-IP Virtual Edition - BETTER (PAYG, 10Gbps)                     prod-emsxkvkzwvs3o
F5 BIG-IP Virtual Edition - BETTER (PAYG, 1Gbps)                      prod-4idzu4qtdmzjg
F5 BIG-IP Virtual Edition - BETTER (PAYG, 200Mbps)                    prod-firaggo6h7bt6
F5 BIG-IP Virtual Edition - BETTER (PAYG, 25Mbps)                     prod-wijbh7ib34hyy
F5 BIG-IP Virtual Edition - BETTER (PAYG, 5Gbps)                      prod-rfglxslpwq64g
F5 BIG-IP Virtual Edition - GOOD (PAYG, 10Gbps)                       prod-54qdbqglgkiue
F5 BIG-IP Virtual Edition - GOOD (PAYG, 1Gbps)                        prod-s6e6miuci4yts
F5 BIG-IP Virtual Edition - GOOD (PAYG, 200Mbps)                      prod-ynybgkyvilzrs
F5 BIG-IP Virtual Edition - GOOD (PAYG, 25Mbps)                       prod-6zmxdpj4u4l5g
F5 BIG-IP Virtual Edition - GOOD (PAYG, 5Gbps)                        prod-3ze6zaohqssua
F5 BIG-IQ Virtual Edition - (BYOL)                                    prod-igv63dkxhub54
F5 Encrypted Threat Protection                                        prod-bbtl6iceizxoi
F5 Per-App-VE Advanced WAF with LTM, IPI, TC (PAYG, 200Mbps)          prod-gkzfxpnvn53v2
F5 Per-App-VE Advanced WAF with LTM, IPI, TC (PAYG, 25Mbps)           prod-qu34r4gipys4s

NGINX Plus Alias Identifiers

NGINX Plus Basic - Amazon Linux 2 (LTS) AMI                              prod-jhxdrfyy2jtva
NGINX Plus Developer - Amazon Linux 2 (LTS)                              prod-kbeepohgkgkxi
NGINX Plus Developer - Amazon Linux 2 (LTS) ARM Graviton                 prod-vulv7pmlqjweq
NGINX Plus Developer - Amazon Linux 2023                                 prod-2zvigd3ltowyy
NGINX Plus Developer - Amazon Linux 2023 ARM Graviton                    prod-icspnobisidru
NGINX Plus Developer - RHEL 8                                            prod-tquzaepylai4i
NGINX Plus Developer - RHEL 9                                            prod-hwl4zfgzccjye
NGINX Plus Developer - Ubuntu 22.04                                      prod-23ixzkz3wt5oq
NGINX Plus Developer - Ubuntu 24.04                                      prod-tqr7jcokfd7cw
NGINX Plus FIPS Premium - RHEL 9                                         prod-v6fhyzzkby6c2
NGINX Plus Premium - Amazon Linux 2 (LTS) AMI                            prod-4dput2e45kkfq
NGINX Plus Premium - Amazon Linux 2 (LTS) ARM Graviton                   prod-56qba3nacijjk
NGINX Plus Premium - Amazon Linux 2023                                   prod-w6xf4fmhpc6ju
NGINX Plus Premium - Amazon Linux 2023 ARM Graviton                      prod-e2iwqrpted4kk
NGINX Plus Premium - RHEL 8 AMI                                          prod-m2v4zstxasp6s
NGINX Plus Premium - RHEL 9                                              prod-rytmqzlxdneig
NGINX Plus Premium - Ubuntu 22.04                                        prod-dtm5ujpv7kkro
NGINX Plus Premium - Ubuntu 24.04                                        prod-opg2qh33mi4pk
NGINX Plus Standard - Amazon Linux 2 (LTS) AMI                           prod-mdgdnfftmj7se
NGINX Plus Standard - Amazon Linux 2 (LTS) ARM Graviton                  prod-2kagbnj7ij6zi
NGINX Plus Standard - Amazon Linux 2023                                  prod-i25cyug3btfvk
NGINX Plus Standard - Amazon Linux 2023 ARM Graviton                     prod-6s5rvlqlgrt74
NGINX Plus Standard - RHEL 8                                             prod-ebhpntvlfwluc
NGINX Plus Standard - RHEL 9                                             prod-3e7rk2ombbpfa
NGINX Plus Standard - Ubuntu 22.04                                       prod-7rhflwjy5357e
NGINX Plus Standard - Ubuntu 24.04                                       prod-b4rly35ct3dlc
NGINX Plus with NGINX App Protect Developer - Amazon Linux 2             prod-pjmfzy5htmaks
NGINX Plus with NGINX App Protect Developer - Debian 11                  prod-ixsytlu2eluqa
NGINX Plus with NGINX App Protect Developer - RHEL 8                     prod-6v57ggy3dqb6c
NGINX Plus with NGINX App Protect Developer - Ubuntu 20.04               prod-4a4g7h7mpepas
NGINX Plus with NGINX App Protect DoS Developer - Amazon Linux 2023      prod-fmqayhbsryoz2
NGINX Plus with NGINX App Protect DoS Developer - Debian 11              prod-4e5fwakhrn36y
NGINX Plus with NGINX App Protect DoS Developer - RHEL 8                 prod-ubid75ixhf34a
NGINX Plus with NGINX App Protect DoS Developer - RHEL 9                 prod-gg7mi5njfuqcw
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 20.04           prod-qiwzff7orqrmy
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 22.04           prod-h564ffpizhvic
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 24.04           prod-wckvpxkzj7fvk
NGINX Plus with NGINX App Protect DoS Premium - Amazon Linux 2023        prod-lza5c4nhqafpk
NGINX Plus with NGINX App Protect DoS Premium - Debian 11                prod-ych3dq3r44gl2
NGINX Plus with NGINX App Protect DoS Premium - RHEL 8                   prod-266ker45aot7g
NGINX Plus with NGINX App Protect DoS Premium - RHEL 9                   prod-6qrqjtainjlaa
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 20.04             prod-hagmbnluc5zmw
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 22.04             prod-y5iwq6gk4x4yq
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 24.04             prod-k3cb7avaushvq
NGINX Plus with NGINX App Protect Premium - Amazon Linux 2               prod-tlghtvo66zs5u
NGINX Plus with NGINX App Protect Premium - Debian 11                    prod-6kfdotc3mw67o
NGINX Plus with NGINX App Protect Premium - RHEL 8                       prod-okwnxdlnkmqhu
NGINX Plus with NGINX App Protect Premium - Ubuntu 20.04                 prod-5wn6ltuzpws4m
NGINX Plus with NGINX App Protect WAF + DoS Premium - Amazon Linux 2023  prod-mualblirvfcqi
NGINX Plus with NGINX App Protect WAF + DoS Premium - Debian 11          prod-k2rimvjqipvm2
NGINX Plus with NGINX App Protect WAF + DoS Premium - RHEL 8             prod-6nlubep3hg4go
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 18.04       prod-f2diywsozd22m
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 20.04       prod-ajcsh5wsfuen2
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 22.04       prod-6adjgf6yl7hek
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 24.04       prod-autki7guiiqio

Using AMI Aliases for BIG-IP

The following example shows using an AMI alias to launch a new "F5 BIG-IP Virtual Edition - GOOD (PAYG,
1Gbps)" instance, version 17.5.1.2-0.0.5, using the AWS CLI:

aws ec2 run-instances --image-id resolve:ssm:/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5 --instance-type m5.xlarge --key-name MyKeyPair

The next example shows a CloudFormation template that accepts the AMI alias as an input parameter to create an instance:

AWSTemplateFormatVersion: 2010-09-09
Parameters:
  AmiAlias:
    Description: AMI alias
    Type: 'String'
Resources:
  MyEC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Sub "resolve:ssm:${AmiAlias}"
      InstanceType: "g4dn.xlarge"
      Tags:
        - Key: "Created from"
          Value: !Ref AmiAlias

Using AMI Aliases for NGINX Plus

NGINX Plus images in the AWS Marketplace are not version specific, so just use "latest" as the version to launch. For example, this will launch NGINX Plus Premium on Ubuntu 24.04:

aws ec2 run-instances --image-id resolve:ssm:/aws/service/marketplace/prod-opg2qh33mi4pk/latest --instance-type c5.large --key-name MyKeyPair

Finding AMI Aliases in AWS Marketplace

AMI aliases are new to the AWS Marketplace, so not all products have them. To locate the alias for an AMI you use often, you need to resort to the AWS Marketplace web console. Here are the step-by-step instructions provided by Amazon:

1. Navigate to AWS Marketplace
   Go to AWS Marketplace
   Sign in to your AWS account
2. Find and Subscribe to the Product
   Search for or browse to find your desired product
   Click on the product listing
   Click "Continue to Subscribe"
   Accept the terms and subscribe to the product
3. Configure the Product
   After subscribing, click "Continue to Configuration"
   Select your desired Delivery Method (if multiple options are available), Software Version, and Region
4. Locate the AMI Alias
   At the bottom of the configuration page, you'll see:
   AMI ID: ami-1234567890EXAMPLE
   AMI Alias: /aws/service/marketplace/prod-<identifier>/<version>

New Tools for Your AMI Hunt

In this article, we focused on using AMI aliases to select the right F5 product to launch in AWS EC2.
But there’s one more takeaway. Scroll back up to the top of this page and take a closer look at the "aws ec2 describe-images" commands. These commands use JMESPath to filter, sort, and format the output. Find out more about filtering the output of AWS CLI commands here.

OpenShift Service Mesh 2.x/3.x with F5 BIG-IP
Overview

OpenShift Service Mesh (OSSM) is Red Hat's packaged version of the Istio service mesh. Istio has the Ingress Gateway component to handle incoming traffic from outside of the cluster. Like other ingress controllers, it requires an external load balancer to get the traffic into the ingress PODs. This follows the canonical Kubernetes two-tier arrangement for getting traffic inside the cluster. This is depicted in the next figure:

This article covers how to configure OpenShift Service Mesh 2.x/3.x, expose it to the BIG-IP, and properly monitor its health, either using BIG-IP's Container Ingress Services (CIS) or without it.

Exposing OSSM in BIG-IP - VIP configuration

It is a customer choice how to publish OSSM in the BIG-IP:

A Layer 4 (L4) Virtual Server is simpler, and certificate management is done in OpenShift. The advantages of this mode are the potential for higher performance and scalability, including connection mirroring, although mirroring is not usually used for HTTP traffic because of the typical retry mechanism of HTTP applications. Connection persistence is limited to the source IP. When using CIS, this is done with a TransportServer CR, which creates a fastL4-type virtual server in the BIG-IP.

A Layer 7 (L7) Virtual Server requires additional configuration because TLS termination is required. In this mode, OpenShift can take advantage of BIG-IP's TLS off-loading capabilities and Hardware/Network/SaaS/Cloud HSM integrations, which store private keys securely, including FIPS-level support. Working at L7 also allows per-application traffic management, including header and payload rewrites, cookie persistence, etc. It also allows per-application multi-cluster. The above features are provided by the LTM (load balancing) module in BIG-IP. The possibilities are further expanded when using modules such as ASM (Advanced WAF) and Access (authentication).
When using CIS, this is done with a VirtualServer CR, which creates a standard-type virtual server in the BIG-IP.

Exposing OSSM to BIG-IP - pool configuration

There are two options to expose Istio Ingress Gateways to BIG-IP:

Using ClusterIP addresses: these are POD IPs, which are dynamic. This requires the use of CIS for discovering the IP addresses of the Ingress Gateway PODs.

Using NodePort addresses: these are reachable from the outside network. When using these, it is not strictly necessary to use CIS, but it is recommended.

Exposing OpenShift Service Mesh using ClusterIP

This requires the use of CIS with the following parameters:

--orchestration-cni=ovn
--static-routing-mode=true

These make CIS create IP routes in the BIG-IP for reaching the POD IPs inside the OpenShift cluster. Please note that this only works if all the OpenShift nodes are directly connected in the same subnet as the BIG-IP. Additionally, the following parameter is required; it is the one that actually makes CIS populate pool members with Cluster (POD) IPs:

--pool-member-type=cluster

It is not necessary to change any configuration in OSSM, because ClusterIP mode is the default mode in Istio Ingress Gateways.

Exposing OpenShift Service Mesh using NodePort

Using NodePort allows having known IP addresses for the Ingress Gateways, reachable from outside the cluster. Note that when using NodePort, only one Ingress Gateway replica will run per node. The behavior of NodePort varies with the externalTrafficPolicy field:

Using the Cluster value, any OpenShift node will accept traffic and will redirect the traffic to any node that has an Ingress Gateway POD, in a load-balancing fashion. This is the easiest to set up, but because each request might go to a different node, health checking is not reliable (it is not known which POD goes down).

Using the Local value, only the OpenShift nodes that have Ingress Gateway PODs will accept traffic.
The traffic will be delivered to the local Ingress Gateway PODs, without further indirection. This is the recommended way when using NodePort because of its deterministic behaviour and therefore reliable health checking. Next, it is described how to set up a NodePort using the Local externalTrafficPolicy.

There are two options for configuring OSSM:

Using the ServiceMeshControlPlane CR method: this is the default method in OSSM 2.x for backwards compatibility, but it doesn't allow fine-tuning the configuration of the proxy. See this OSSM 2.x link for further details. This method is deprecated and not available in OSSM 3.x.

Using the Gateway injection method: this is the only method possible in OSSM 3.x and the current recommendation from Red Hat for OSSM 2.x. This method allows you to tune the proxy settings. In this article, it will be shown why this tuning is of special interest: at present, the Ingress Gateway doesn't have good default values for reliable health checking. These will be discussed in the Health Checking section.

When using the ServiceMeshControlPlane CR method, the above is configured as follows:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
[...]
spec:
  gateways:
    ingress:
      enabled: false
      runtime:
        deployment:
          replicas: 2
      service:
        externalTrafficPolicy: Local
        ports:
        - name: status-port
          nodePort: 30021
          port: 15021
          targetPort: 15021
        - name: http2
          nodePort: 30080
          port: 80
          targetPort: 8080
        - name: https
          nodePort: 30443
          port: 443
          targetPort: 8443
        type: NodePort

When using the Gateway injection method (recommended), the Service definition is manually created analogously to the ServiceMeshControlPlane CR:

apiVersion: v1
kind: Service
[...]
spec:
  externalTrafficPolicy: Local
  type: NodePort
  ports:
  - name: status-port
    nodePort: 30021
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30443
    port: 443
    protocol: TCP
    targetPort: 8443

Here, the ports section is optional but recommended in order to have deterministic ports, and it is required when not using CIS (because manual configuration requires static ports). The nodePort values can be customised. When not using CIS, the pool members must be configured manually in the BIG-IP.

It is typical in OpenShift to have the ingress components (OpenShift Router or Istio) on dedicated infra nodes. See this Red Hat solution for details. When using the ServiceMeshControlPlane method, the configuration is as follows:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
[...]
spec:
  runtime:
    defaults:
      pod:
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/infra
          value: reserved
        - effect: NoExecute
          key: node-role.kubernetes.io/infra
          value: reserved

When using the Gateway injection method, the configuration is added to the Deployment file directly:

apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved

The configuration above is also a good practice when using CIS. Additionally, CIS by default adds all node IPs to the Service pool, regardless of whether externalTrafficPolicy is set to the Cluster or Local value. The health check will discard nodes where there are no Ingress Gateways.
It can be limited to the scope of the nodes discovered by CIS with the following parameter:

--node-label-selector

Health Checking and retries for the Ingress Gateway

Ingress Gateway Readiness

The Ingress Gateway has the following readinessProbe for Kubernetes' own health checking:

readinessProbe:
  failureThreshold: 30
  httpGet:
    path: /healthz/ready
    port: 15021
    scheme: HTTP
  initialDelaySeconds: 1
  periodSeconds: 2
  successThreshold: 1
  timeoutSeconds: 3

The failureThreshold value of 30 is considered way too large: it only marks the Ingress Gateway as not Ready after 90 seconds (tested to be failureThreshold * timeoutSeconds). In this article, it is recommended to mark down an Ingress Gateway no later than 16 seconds after it fails.

When using CIS, Kubernetes informs CIS whenever a POD is not Ready, and CIS automatically removes its associated pool member from the pool. In order to achieve the desired behaviour of marking down the Ingress Gateway within 16 seconds, it is required to change the default failureThreshold value in the Deployment file by adding the following snippet:

apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
    spec:
      containers:
      - name: istio-proxy
        image: auto
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz/ready
            port: 15021
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 3

This keeps all other values equal and sets failureThreshold to 5, therefore marking down the Ingress Gateway after 15 seconds. When not using CIS, an HTTP health check has to be configured manually in the BIG-IP. An example health check monitor is shown next:

Connection draining

When an Ingress Gateway POD is deleted (because of an upgrade, a scale-down event, etc.), it immediately returns HTTP 503 on the /healthz/ready endpoint and keeps serving connections until it is effectively deleted. This is called the drain period, and by default it is extremely short (3 seconds) for any external load balancer.
This value has to be increased so that Ingress Gateway PODs being deleted continue serving connections until the POD is removed from the external load balancer (the BIG-IP) and the outstanding connections are finalised. This setting can only be tuned using the Gateway injection method, and it is applied by adding the following snippet to the Deployment file:

apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          terminationDrainDuration: 45s

The example above uses the default drain period of the OpenShift Router (45 seconds). The value can be customised, keeping in mind that:

When using CIS, it should allow CIS enough time to update the configuration in the BIG-IP and drain the connections.
When not using CIS, it should allow the health check enough time to detect the condition of the POD and drain the connections.

Additional recommendations

The next recommendations apply to any ingress controller or API manager, and have been previously suggested for the OpenShift Router.

Handle non-graceful errors with the pool's reselect tries

To deal better with non-graceful shutdowns or transient errors, this mechanism reselects a new Ingress Gateway POD when a request fails. The recommendation is to set the number of tries to the number of Ingress Gateway PODs minus 1. When using CIS, this can be set in the VirtualServer or TransportServer CRs with the reselectTries parameter.

Set an additional TCP monitor for the Ingress Gateway's application traffic sockets

This complementary TCP monitor (for both HTTP and HTTPS listeners) validates that Ready instances can actually receive traffic on the application's traffic sockets. Although such failures are handled by the reselect-tries mechanism, this monitor provides visibility that these types of errors are happening.

Conclusion and closing remarks

We hope this article highlights the most important aspects of integrating OpenShift Service Mesh with BIG-IP.
A key aspect of a reliable Ingress Gateway integration is modifying OpenShift Service Mesh's terminationDrainDuration and readinessProbe.failureThreshold defaults. F5 has submitted RFE 04270713 to Red Hat to improve these defaults, and this article will be updated accordingly. Whether the CIS integration is used or not, BIG-IP allows you to expose OpenShift Service Mesh reliably with extensive L4-L7 security and traffic management capabilities. It also allows fine-grained access control and scalable SNAT, or keeping the original source IP, among others. Overall, BIG-IP is able to fulfill any requirement. We look forward to hearing your experience and feedback on this article.

Using AWS CloudHSM with F5 BIG-IP
With the release of TMOS version 17.5.1, BIG-IP now supports the latest AWS CloudHSM hardware security module (HSM) type, hsm2m.medium, and the latest AWS CloudHSM Client SDK, version 5. This article explains how to install and configure AWS CloudHSM Client SDK 5 on BIG-IP 17.5.1.
F5 Distributed Cloud (XC) Global Applications Load Balancing in Cisco ACI
Introduction F5 Distributed Cloud (XC) simplifies cloud-based DNS management with global server load balancing (GSLB) and disaster recovery (DR). F5 XC efficiently directs application traffic across environments globally, performs health checks, and automates responses to activities and events to maintain high application performance with high availability and robustness. In this article, we will discuss how we can achieve this by using XC to load-balance global applications across public clouds and Cisco Application Centric Infrastructure (ACI) sites that are geographically apart. We will look at two different XC in ACI use cases. Each of them uses a different approach for global application delivery and leverages a different XC feature to load balance the applications globally and for disaster recovery. XC DNS Load Balancer Our first XC in ACI use case is very commonly seen: we use a traditional network-centric approach for global application delivery and disaster recovery. We use our existing network infrastructure to provide global application connectivity, and we deploy GSLB to load balance the applications across sites globally and for disaster recovery. In our example, we will show you how to use XC DNS Load Balancer to load-balance a global application across ACI sites that are geographically dispersed. One of the many advantages of using XC DNS Load Balancer is that we no longer need to manage GSLB appliances. Also, we can expect high DNS performance thanks to the XC global infrastructure. In addition, we have a single pane of glass, the XC console, to manage all of our services, such as multi-cloud networking, application delivery, DNS services, WAAP, etc.
Example Topology Here in our example, we use Distributed Cloud (XC) DNS Load Balancer to load balance our global application hello.bd.f5.com, which is deployed in a hybrid multi-cloud environment across two ACI sites located in San Jose and New York. Here are some highlights at each ACI site from our example: New York location XC CE is deployed in ACI using layer three attached with BGP XC advertises custom VIP 10.10.215.215 to ACI via BGP XC custom VIP 10.10.215.215 has an origin server 10.131.111.88 on AWS BIG-IP is integrated into ACI BIG-IP has a public VIP 12.202.13.149 that has two pool members: on-premise origin server 10.131.111.161 XC custom VIP 10.10.215.215 San Jose location XC CE is deployed in ACI using layer three attached with BGP XC advertises custom VIP 10.10.135.135 to ACI via BGP XC custom VIP 10.10.135.135 has an origin server 10.131.111.88 on Azure BIG-IP is integrated into Cisco ACI BIG-IP has a public VIP 12.202.13.147 that has two pool members: on-premise origin server 10.131.111.55 XC custom VIP 10.10.135.135 *Note: Click here to review how to deploy XC CE in ACI using layer three attached with BGP. DNS Load Balancing Rules A DNS Load Balancer is an ingress controller for the DNS queries made to your DNS servers. The DNS Load Balancer receives the requests and answers with an IP address from a pool of members based on the configured load balancing rules. On the XC console, go to "DNS Management" -> "DNS Load Balancer Management" to create a DNS Load Balancer and then define the load balancing rules.
Here in our example, we created a DNS Load Balancer and defined the load balancing rules for our global application hello.bd.f5.com (note: as a prerequisite, F5 XC must be providing primary DNS for the domain): Rule #1: If the DNS request to hello.bd.f5.com comes from the United States or the United Kingdom, respond with BIG-IP VIP 12.203.13.149 in the DNS response so that the application traffic will be directed to the New York ACI site and forwarded to an origin server that is located in AWS or on-premise: Rule #2: If the DNS request to hello.bd.f5.com comes from the United States or the United Kingdom and the New York ACI site becomes unavailable, respond with BIG-IP VIP 12.203.13.147 in the DNS response so that the application traffic will be directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure: Rule #3: If the DNS request to hello.bd.f5.com comes from somewhere outside of the United States or the United Kingdom, respond with BIG-IP VIP 12.203.13.147 in the DNS response so that the application traffic will be directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure: Validation Now, let's see what happens.
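Before looking at the live tests, the decision logic of the three rules can be sketched in Python (a hypothetical illustration only; the real rule evaluation happens inside the F5 XC DNS Load Balancer, not in user code, and the VIPs mirror the example above):

```python
# Sketch of Rules #1-#3. Country codes are ISO 3166 two-letter codes;
# site health is simulated with a boolean.
NY_VIP = "12.203.13.149"   # New York site (origins in AWS / on-premise)
SJ_VIP = "12.203.13.147"   # San Jose site (origins in Azure / on-premise)
US_UK = {"US", "GB"}       # United States, United Kingdom

def resolve_hello(client_country: str, ny_site_up: bool = True) -> str:
    """Return the VIP the DNS LB would answer with for hello.bd.f5.com."""
    if client_country in US_UK:
        if ny_site_up:
            return NY_VIP   # Rule #1: US/UK clients go to the New York site
        return SJ_VIP       # Rule #2: US/UK clients fail over to San Jose
    return SJ_VIP           # Rule #3: all other clients go to San Jose
```
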
When a machine located in the United States tries to reach hello.bd.f5.com and both ACI sites are up, the traffic is directed to the New York ACI site and forwarded to an origin server that is located on-premise or in AWS, as expected: When a machine located in the United States tries to reach hello.bd.f5.com and the New York ACI site is down or becomes unavailable, the traffic is re-directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure, as expected: When a machine tries to access hello.bd.f5.com from outside of the United States or the United Kingdom, it is directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure, as expected: On the XC console, go to "DNS Management" and select the appropriate DNS Zone to view the Dashboard for information such as DNS traffic distribution across the globe and query types, and the Requests view for details on individual DNS requests: XC HTTP Load Balancer Our second XC in ACI use case uses a different approach for global application delivery and disaster recovery. Instead of using the existing network infrastructure for global application connectivity and utilizing XC DNS Load Balancer for global application load balancing, we simplify network layer management by securely deploying XC to connect our applications globally and leveraging XC HTTP Load Balancer to load balance our global applications and for disaster recovery. Example Topology Here in our example, we use XC HTTP Load Balancer to load balance our global application global.f5-demo.com that is deployed across a hybrid multi-cloud environment.
Here are some highlights: XC CE is deployed in each ACI site using layer three attached with BGP New York location: ACI advertises on-premise origin server 10.131.111.161 to XC CE via BGP San Jose location: ACI advertises on-premise origin server 10.131.111.55 to XC CE via BGP An origin server 10.131.111.88 is located in AWS An origin server 10.131.111.88 is located in Azure *Note: Click here to review how to deploy XC CE in ACI using layer three attached with BGP. XC HTTP Load Balancer On the XC console, go to “Multi-Cloud App Connect” -> “Manage” -> “Load Balancers” -> “HTTP Load Balancers” to “Add HTTP Load Balancer”. In our example, we created an HTTPS load balancer named global with the domain name global.f5-demo.com. Instead of bringing our own certificate, we took advantage of the automatic TLS certificate generation and renewal supported by XC: Go to the “Origins” section to specify the origin servers for the global application. In our example, we included all origin servers across the public clouds and ACI sites for our global application global.f5-demo.com: Next, go to “Other Settings” -> “VIP Advertisement”. Here, select either “Internet” or “Internet (Specified VIP)” to advertise the HTTP Load Balancer to the Internet. In our example, we selected “Internet” to advertise global.f5-demo.com globally because we decided not to manage nor acquire a public IP: In our first use case, we defined a set of DNS load balancing rules on the XC DNS Load Balancer to direct the application traffic based on our requirements: If the request to global.f5-demo.com comes from the United States or the United Kingdom, application traffic should be directed to an origin server that is located on-premise in the New York ACI site or in AWS.
If the request to global.f5-demo.com comes from the United States or the United Kingdom and the origin servers in the New York ACI site and AWS become unavailable, application traffic should be re-directed to an origin server that is located on-premise in the San Jose ACI site or in Azure. If the request to global.f5-demo.com comes from somewhere outside of the United States or the United Kingdom, application traffic should be directed to an origin server that is located on-premise in the San Jose ACI site or in Azure. We can accomplish the same with XC HTTP Load Balancer by configuring Origin Server Subset Rules. XC HTTP Load Balancer Origin Server Subset Rules allow users to create match conditions on incoming source traffic to the XC HTTP Load Balancer and direct the matched traffic to the desired origin server(s). The match condition can be based on country, ASN, regional edge (RE), IP address, or client label selector. As a prerequisite, we create and assign a label (key-value pair) to an origin server so that we can reference the label in Origin Server Subset Rules to specify where the matched traffic should be directed. Go to “Shared Configuration” -> “Manage” -> “Labels” -> “Known Keys” and “Add Known Key” to create labels. In our example, we created a key named jy-key with two labels: us-uk and other. Now, go to "Origin pool" under “Multi-Cloud App Connect” and apply the labels to the origin servers: In our example, origin servers in the New York ACI site and AWS are labeled us-uk while origin servers in the San Jose ACI site and Azure are labeled other. Then, go to “Other Settings” to enable subset load balancing.
In our example, jy-key is our origin server subsets class, and we configured the default subset to use the origin pool labeled other as our fallback policy, per our requirement that if the origin servers in the New York ACI site and AWS become unavailable, traffic should be directed to an origin server in the San Jose ACI site or Azure: Next, on the HTTP Load Balancer, configure the Origin Server Subset Rules by enabling “Show Advanced Fields” in the "Origins" section: In our example, we created the following Origin Server Subset Rules based on our requirements: us-uk-rule: If the request to global.f5-demo.com comes from the United States or the United Kingdom, direct the application traffic to an origin server labeled us-uk that is either in the New York ACI site or AWS. other-rule: If the request to global.f5-demo.com does not come from the United States or the United Kingdom, direct the application traffic to an origin server labeled other that is either in the San Jose ACI site or Azure. Validation As a reminder, we use the XC automatic TLS certificate generation and renewal feature for our HTTPS load balancer in our example. First, let's confirm the certificate status: We can see the certificate is valid with an auto-renew date. Now, let’s run some tests and see what happens. First, let’s try to access global.f5-demo.com from the United Kingdom: We can see the traffic is directed to an origin server located in the New York ACI site or AWS, as expected. Next, let's see what happens if the origin servers at both of these sites become unavailable: The traffic is re-directed to an origin server located in the San Jose ACI site or Azure, as expected. Last, let’s try to access global.f5-demo.com from somewhere outside of the United States or the United Kingdom: The traffic is directed to an origin server located in the San Jose ACI site or Azure, as expected. To check the requests on the XC Console, go to "Multi-Cloud App Connect" -> “Performance” -> "Requests" from the selected HTTP Load Balancer.
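Stepping back, the subset selection performed by the two rules plus the fallback policy can be sketched as a small function (hypothetical; in XC this is declarative configuration evaluated by the HTTP Load Balancer, and the label values mirror the jy-key example):

```python
# Sketch of us-uk-rule, other-rule, and the "other" fallback subset.
US_UK = {"US", "GB"}  # United States, United Kingdom

def origin_subset(client_country: str, us_uk_origins_up: bool = True) -> str:
    """Return the jy-key label value of the subset that would serve the request."""
    if client_country in US_UK:
        if us_uk_origins_up:
            return "us-uk"   # us-uk-rule: New York ACI site or AWS
        return "other"       # fallback policy: San Jose ACI site or Azure
    return "other"           # other-rule: San Jose ACI site or Azure
```
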
Below is a screenshot from our example, where we can see that a request to global.f5-demo.com from Australia was directed to the origin server 10.131.111.55 located in the San Jose ACI site based on the configured Origin Server Subset Rule other-rule: Here is another example, where a request from the United States was sent to the origin server 10.131.111.88 located in AWS based on the configured Origin Server Subset Rule us-uk-rule: Summary F5 XC simplifies cloud-based DNS management with global server load balancing (GSLB) and disaster recovery (DR). By deploying F5 XC in Cisco ACI, we can securely deploy and load balance our global applications across ACI sites (and public clouds) efficiently while maintaining high application performance with high availability and robustness among global applications at all times. Related Resources *On-Demand Webinar* Deploying F5 Distributed Cloud Services in Cisco ACI Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Two Attached Deployment
F5 Distributed Cloud for Global Layer 3 Virtual Network Implementation
Introduction As organizations expand their infrastructure across multiple cloud providers and on-site locations, the need for seamless network connectivity becomes paramount. F5 Distributed Cloud provides a powerful solution for connecting distributed sites while maintaining network isolation and control. This article walks through implementing a global Layer 3 Virtual Network using segments with Secure Mesh Sites v2 (SMSv2) Customer Edges. It demonstrates connectivity between private data centers and AWS VPCs. We'll explore the configuration steps and BGP peering setup. The Challenge Organizations need to connect multiple isolated environments (private data centers and cloud VPCs) while maintaining: Network segmentation and isolation Dynamic routing capabilities Consistent connectivity across heterogeneous environments Simple management through a unified control plane Solution Architecture Our implementation consists of three distinct sites: Private Site: Running a Customer Edge (CE) in KVM with BGP peering to the local router for subnet exposure AWS VPC Site 1: Hosting a CE within the VPC AWS VPC Site 2: Another CE deployment with complete isolation (no VPC peering with Site 1) All sites utilize SMSv2 Customer Edges with dual-NIC configurations, connected through F5 Distributed Cloud's global network fabric. Figure 1: Global implementation diagram showing all IP subnets across the three sites with CE deployments and network segments Technical Deep Dive Before diving into the configuration, it's crucial to understand what segments are and how they function within F5 Distributed Cloud: Segments In F5 Distributed Cloud, segments can be considered the equivalent of Layer 3 VRFs in traditional networking.
Just as VRFs create separate routing table instances in conventional routers, segments provide: Routing isolation: Each segment maintains its own routing table, ensuring traffic separation Multi-tenancy support: Different segments can overlap IP address spaces without conflict Security boundaries: Traffic between segments requires explicit policy configuration Simplified network management: Logical separation of different network domains or applications Key Segment Characteristics Interface Binding Requirements: Segments must be explicitly attached to CE interfaces Each interface can be part of only one segment This one-to-one mapping ensures clear traffic demarcation and prevents routing ambiguity Route Advertisement and Limitations: Supported Route Types: Connected Routes: Routes for subnets directly configured on the segment interface are automatically advertised BGP Learned Routes: Routes received via BGP peering on the segment interface are propagated to other sites in the same segment Current Limitations: No Static Route Support: Static routes cannot currently be advertised through segment interfaces This is an important consideration when planning your routing architecture Workaround: Use BGP to advertise routes that would traditionally be static Traffic Flow: Traffic entering a CE interface flows within the assigned segment Inter-segment communication requires a special configuration Routes learned on one segment remain isolated from other segments unless explicitly shared Only connected and BGP-learned routes are exchanged between sites within a segment Use Cases: Production/Development Separation: Different segments for prod and dev environments Multi-tenant Deployments: Isolated segments per customer or business unit Compliance Requirements: Segmented networks for PCI, HIPAA, or other regulated traffic This architectural approach provides the flexibility of traditional VRF implementations while leveraging F5 Distributed Cloud's global network 
capabilities. Customer Edge Interface Architecture Understanding CE Interface Requirements F5 Distributed Cloud Customer Edges require careful interface planning to function correctly, especially in SMSv2 deployments with segments. Understanding the interface architecture is crucial for successful implementations. Interface Capacity and Requirements Minimum Requirement: Each CE must be deployed with at least two physical interfaces Maximum Capacity: CEs support up to eight physical interfaces VLAN Support: Sub-interfaces can be created on top of physical interfaces Interface Types and Roles Customer Edge interfaces serve distinct purposes within the F5 Distributed Cloud architecture: 1. Site Local Outside (SLO) Interface The SLO interface is the "management" and control plane interface: Primary Functions: Zero-touch provisioning of the Customer Edge Establishing VPN tunnels for control plane communication with F5 XC Global Controller Management traffic and orchestration commands Health monitoring and telemetry data transmission Requirements: Must have Internet access to reach F5's global infrastructure Should be considered as the "management interface" of the CE Configured on the first interface (eth0/ens3) Cannot be used for segment assignment 2. 
Site Local Inside (SLI) and Segment Interfaces The remaining interfaces can be configured for data plane traffic: Site Local Inside (SLI): Used for local network connectivity without segment assignment Segment Interfaces: Dedicated to specific network segments (VRF-like isolation) Each interface can belong to only one segment Supports BGP peering within the segment context Used for segmented connectivity Interface Planning Considerations When designing your CE deployment: Two-Interface Minimum Deployment: Interface 1: SLO for management and control plane Interface 2: Segment or SLI for data plane traffic Multi-Segment Deployments: Require additional interfaces (one per segment plus SLO) Example: 4 segments need 5 interfaces (1 SLO + 4 segment interfaces) Cloud Deployments: Ensure cloud instance types support the required number of network interfaces Remember to disable source/destination checks on all interfaces Consider network interface limits when planning for scale Routing Considerations for Segments: Plan for BGP peering if you need to advertise routes beyond connected subnets Static routes cannot be advertised through segment interfaces yet Each segment interface will only advertise: Its directly connected subnet Routes learned via BGP on that interface Design your IP addressing scheme accordingly This interface architecture ensures proper separation between management/control plane traffic and data plane traffic, while providing the flexibility needed for complex network topologies.
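The interface math above can be captured in a small helper (a sketch only: the 8-interface ceiling and the "1 SLO + 1 per segment" rule come from the text, while the function itself is hypothetical):

```python
MAX_PHYSICAL_INTERFACES = 8  # CEs support up to eight physical interfaces

def interfaces_needed(segment_count: int) -> int:
    """One SLO (management/control plane) plus one data plane interface per segment."""
    if segment_count < 1:
        raise ValueError("a CE needs at least one data plane interface")
    total = 1 + segment_count
    if total > MAX_PHYSICAL_INTERFACES:
        raise ValueError(
            f"{segment_count} segments need {total} interfaces, more than the "
            f"{MAX_PHYSICAL_INTERFACES} physical interfaces a CE supports; "
            "consider VLAN sub-interfaces"
        )
    return total
```

For the example in the text, `interfaces_needed(4)` yields 5 (1 SLO + 4 segment interfaces).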
Prerequisites Before beginning the implementation, ensure you have: F5 Distributed Cloud account with appropriate permissions Three deployed Customer Edge nodes (SMSv2 sites) Basic understanding of BGP configuration (if implementing BGP peering) Step-by-Step Configuration Step 1: Create the Network Segment Navigate to Multi-Cloud Network Connect → Manage → Networking → Segments Click "Add Segment" Configure your segment with appropriate naming and network policies Define the segment scope based on your requirements Save the configuration Figure 2: Segment creation The segment acts as a logical network overlay that spans across all participating sites, similar to extending a VRF across multiple locations in traditional MPLS networks. Step 2: Assign Segments to CE Interfaces Navigate to Multi-Cloud Network Connect → Manage → Site Management → Secure Mesh Sites v2 For each Customer Edge: Select the CE and edit its configuration Navigate to the node interface configuration Modify the interface settings: Select the appropriate interface (typically the second NIC, not the SLO interface) Assign the created segment to this interface Configure the interface mode as required Ensure the SLO interface remains dedicated to management/control plane Apply the changes Figure 3: Node interface configuration showing segment assignment to the appropriate interface Important: Remember that: The SLO interface (typically eth0/ens3) should not be used for segment assignment Each data plane interface can belong to only one segment Plan your interface allocation carefully based on your traffic segmentation requirements Repeat this process for all participating CEs. Once complete, all sites will be connected through the assigned segment. 
Figure 4: Overview of configured interfaces with segment assignments across all CE nodes Step 3: Configure BGP Peering (Optional) For sites requiring dynamic routing with local infrastructure: Navigate to the CE's BGP configuration Select the correct interface tied to the segment (e.g., "ens4") Configure BGP parameters: Local AS number Peer AS number Peer IP address Network advertisements Apply the configuration Figure 5: BGP peering configuration showing interface selection tied to the segment BGP peering enables automatic route exchange between your CE and local network infrastructure, with routes learned via BGP being contained within the assigned segment's routing domain. Important Note on Route Advertisement: Segment interfaces only advertise connected routes (interface subnets) and BGP-learned routes Static routes are not currently supported for advertisement through segments If you need to advertise additional routes beyond the connected subnet, BGP peering is the only available method This makes BGP configuration essential for most production deployments where multiple subnets need to be accessible Verifying Route Tables To confirm proper route propagation: Navigate to Multi-Cloud Network Connect → Overview → Infrastructure Select your site name Click on CE Routes Apply filters as needed Figure 6: CE Routes selection interface for viewing routing information You should observe: Routes from remote sites appearing in the routing table Correct next-hop information pointing to remote CE IPs BGP-learned routes (if BGP is configured and Site Survivability is enabled) Routes properly isolated within their respective segments Only connected and BGP routes present (no static routes) Figure 7: Route table showing routes received from other sites with next-hop information Conclusion F5 Distributed Cloud's Global Layer 3 Virtual Network with segments provides a robust solution for connecting distributed infrastructure across multiple environments. 
By leveraging segments as VRF-like constructs, organizations can achieve network isolation, multi-tenancy, and simplified management across their global infrastructure. Key takeaways: Always use dual-NIC configurations for SMSv2 sites (minimum one SLO + one data plane interface) Understand the critical role of the SLO interface for management and control plane Plan interface allocation carefully - CEs support up to 8 physical interfaces plus VLAN sub-interfaces Understand segments as Layer 3 VRF equivalents for proper design Remember the one-to-one mapping between interfaces and segments Be aware that segments only advertise connected and BGP-learned routes (no static route support currently) Use BGP peering to advertise additional subnets beyond connected routes Disable source/destination checks for cloud-based CEs As F5 Distributed Cloud continues to evolve, some of these considerations may change. Always refer to the latest documentation and test thoroughly in your environment.
Using F5 Distributed Cloud DNS Load Balancer health checks and DNS observability
Introduction This article is a continuation of my previous article that covers how to configure F5 Distributed Cloud (XC) DNS Load Balancer to provide geo-proximity and disaster recovery, in addition to other failover scenarios. This article builds on the previous configuration to add health checks and shows how to observe how the Distributed Cloud DNS service is performing. DNS Load Balancer Configuring DNS LB Health Checks F5 XC can perform health checks on all IP members in a DNS Load Balancer Pool. To configure health checks for a pool, go to DNS Management > DNS Load Balancer Management > DNS Load Balancer Health Checks, then click "Add DNS Load Balancer Health Check". Name the rule, for example, "europe-healthcheck", and choose an appropriate health check type. The following health check types are supported for DNS LB: HTTP HTTPS TCP TCP (Hex payload) UDP ICMP Each health check type, except ICMP, supports sending a custom string payload and looks for a matching response. For example, with the HTTPS health check, F5 XC will first confirm whether it received a valid SSL certificate from the member. Once the certificate check passes, it sends the configured "Send String" (an HTTP request). By default, the string is "HEAD / HTTP/1.0\r\n\r\n", although more complex strings are supported. The "Receive String", in regex (re2) format, validates the application layer response. The default receive string for HTTP(S) requests is "HTTP/1." A custom TCP or UDP port can also be configured to support services running on non-standard ports. Configuring the port with "0" uses the default port for the intended protocol. To apply the health check to a DNS LB Load Balancing rule, navigate to DNS Load Balancer Management > DNS Load Balancer Pools. Locate the pool to apply the health check to, and use the Manage Configuration action.
Within the pool configuration, click Edit Configuration, scroll down to DNS Load Balancer Health Check, enable it, and then choose the health check created above. Save and Exit the Pool. Status information about the health of the DNS LB pools and pool members can be found on the DNS Load Balancers Overview page. In the following example, one of the members in the "eu-pool" is unhealthy. Details about each specific pool member can be found by clicking on the pool. Distributed Cloud DNS Observability The F5 XC DNS Performance Overview dashboards provide usage details for up to a 24-hour interval. Navigate to DNS Management > Overview > Performance for a high-level view showing how many requests a domain has received. To see where DNS requests are coming from, the most requested services, and specific response details, click on each DNS zone. The DNS performance dashboards provide the following views for each DNS zone: Traffic Distribution Top Requests Total Queries Query Type Response Type (by RCODE) DNS Query Rate (by Query Type) The DNS dashboards also show the type and frequency of each DNS request. Query logging is available and located in the Requests tab. This view lists each DNS query over an interval of up to 24 hours. The dashboard can be filtered to show requests from a particular geographic location, by resource record type, by which record or records are being requested, and by client IP and return code. The following image illustrates a filtered list. Records in the table below can be downloaded in a CSV-formatted file. Details about an individual request can be viewed by clicking on the ">" symbol, and the detailed record can be shown in either JSON or YAML format. The DNS Zones overview shows zone-specific details: a comparison of zone types, deployment status, and the number of records and requests per 24 hours on a per-zone basis. Clicking on one of the zones shows the records that belong to it and a breakdown of the record types.
For Secondary DNS, the overview includes when the zone was last transferred, making it easier to proactively detect and troubleshoot stale DNS issues. Drilling down into one of the secondary DNS zones shows the resource records and each time-to-live (TTL), another important metric when troubleshooting potentially stale records. Logging & Analytics The Global Log Receiver provides the logging of DNS Requests in addition to other security services in Distributed Cloud, including WAF and Bot Defense. This article explains how to configure Global Log Receiver and send logs using HTTPS to an ELK Stack (Elasticsearch, Logstash, and Kibana). It also shows how to configure Logstash and Kibana to process GLR-formatted logs from Distributed Cloud. To configure the Global Log Receiver for DNS logging, go to Shared Configuration > Global Log Receiver, and create a new entry. In the Log Type field, choose DNS Request Logs. In the following example, I've configured Global Log Receiver to send via HTTPS, but many other logging platforms are also supported. See this product documentation page for up-to-date information on all the features available with Global Log Receiver. With DNS Request Logs configured, we can now see every DNS request to our Distributed Cloud tenant, processed by Logstash, in the Kibana dashboard. The following output in Kibana shows DNS requests for all zones configured in the Distributed Cloud tenant. Additional Resources Previous article in series: Using Distributed Cloud DNS Load Balancer with Geo-Proximity and failover scenarios Technical article: How I did it - "Remote Logging with the F5 XC Global Log Receiver and Elastic" Product Documentation: DNS LB Product Documentation DNS Zone Management Global Log Receiver More information about Distributed Cloud DNS Load Balancer and DNS service: https://www.f5.com/cloud/products/dns-load-balancer https://www.f5.com/cloud/products/dns
Use F5 Distributed Cloud to control Primary and Secondary DNS
Overview Domain Name Service (DNS): it's how humans and machines discover where to connect. DNS on the Internet is the universal directory of addresses to names. If you need to get support for the product Acme, you go to support.acme.com. Looking for the latest headlines in News, try www.aonn.com or www.npr.org. DNS is the underlying feature that nearly every service on the Internet depends on. Having a robust and reliable DNS provider is critical to keeping your organization online and working, especially during a DDoS attack. "Nature is a mutable cloud, which is always and never the same." - Ralph Waldo Emerson We might not wax that philosophically around here, but our heads are in the cloud nonetheless! Join the F5 Distributed Cloud user group today and learn more with your peers and other F5 experts. F5 Distributed Cloud DNS (F5 XC DNS) can function as either a Primary or a Secondary nameserver, and it natively includes DDoS protection. Using F5 XC DNS, it’s possible to provision and configure primary or secondary DNS securely in minutes. Additionally, the service uses a global anycast network and is built to scale automatically to respond to large query volumes. Dynamic security is included and adds automatic failover, DDoS protection, TSIG authentication support, and when used as a secondary DNS—DNSSEC support. F5 Distributed Cloud allows you to manage all of your sites as a single “logical cloud” providing: - A portable platform that spans multiple sites/clouds - A private backbone that connects all sites - Connectivity to sites through its nodes (F5 Distributed Cloud Mesh and F5 Distributed Cloud App Stack) - Node flexibility, allowing nodes to run as virtual machines, on hardware within data centers and sites, or in cloud instances (e.g.
EC2) - Nodes provide vK8s (virtual K8s), network and security services - Services managed through F5 Distributed Cloud’s SaaS base console Scenario 1 – F5 Distributed Cloud DNS: Primary Nameserver Consider the following: you're looking to improve the response time of your app with a geo-distributed solution, including DNS and app distribution. With F5 XC DNS configured as the primary nameserver, you’ll automatically get DNS DDoS protection and will see an improvement in the time it takes to resolve DNS, just by using Anycast with the regional points of presence of F5’s global network. To configure F5 XC DNS to be the Primary nameserver for your domain, access the F5 XC Console, go to DNS Management, and then Add Zone. Alternatively, if you're migrating from another DNS server or DNS service to F5 XC DNS, you can import this zone directly from your DNS server. Scenario 1.2 below illustrates how to import and migrate your existing DNS zones to F5 XC DNS. Here, you’ll write in the domain name (your DNS zone), and then View Configuration for the Primary DNS. On the next screen, you may change any of the default SOA parameters for the zone, and any type of resource record (RR) or record sets which the DNS server will use to respond to queries. For example, you may want to return more than one A record (IP address) for the frontend to your app when it has multiple points of presence. To do this, enter as many IP addresses of record type A as needed to send traffic to all the points of ingress to your app. Additional Resource Record Sets allow the DNS server to return more than a single type of RR. For example, the following configuration returns two A (IPv4 address) records and one TXT record to a query of type ANY for “al.demo.internal”. Optionally, if your root DNS zone has been configured for DNSSEC, then enabling it for the zone is just a matter of toggling the default setting in the F5 XC Console.
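The behavior of that record-set configuration can be sketched with a tiny in-memory zone (a hypothetical illustration only; the IP and TXT values are invented documentation placeholders, since the article's screenshots are not reproduced here):

```python
# Minimal record store answering an ANY query for al.demo.internal with
# two A records and one TXT record, mirroring the example above.
zone = {
    ("al.demo.internal", "A"): ["203.0.113.10", "203.0.113.20"],  # hypothetical IPs
    ("al.demo.internal", "TXT"): ["v=demo"],                      # hypothetical TXT
}

def answer(name: str, rtype: str) -> list[str]:
    """Return every record value matching the query (ANY collects all types)."""
    if rtype == "ANY":
        return [v for (n, t), values in sorted(zone.items()) if n == name for v in values]
    return zone.get((name, rtype), [])
```
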
Scenario 1.2 - Import an Existing Primary Zone to Distributed Cloud using Zone Transfer (AXFR)

F5 XC DNS can use AXFR DNS zone transfer to import an existing DNS zone. Navigate to DNS Management > DNS Zone Management, then click Import DNS Zone. Enter the zone name and the externally accessible IP of the primary DNS server.

➡️ Note: You'll need to configure your DNS server and any firewall policies to allow zone transfers from F5. A current list of public IPs that F5 uses can be found in the following F5 tech doc.

Optionally, configure a transaction signature (TSIG) to secure the DNS zone transfer. When you save and exit, F5 XC DNS performs a secondary-nameserver zone transfer (AXFR) and then transitions itself to be the zone's primary DNS server. To finish the process, change the NS records for the zone at your domain name registrar to the following F5 XC DNS servers:

ns1.f5clouddns.com
ns2.f5clouddns.com

Scenario 1.3 - Import Existing (BIND format) Primary Zones directly to Distributed Cloud

F5 XC DNS can directly import BIND-formatted DNS zone files in the Console, for example, db.2-0-192.in-addr.arpa and db.foo.com. Enterprises often use BIND as their on-prem DNS service, and importing these files to Distributed Cloud makes it easier to migrate existing DNS records.

To import existing BIND db files, navigate to DNS Management > DNS Zone Management, click Import DNS Zone, then "BIND Import". Now click "Import from File" and upload a .zip with one or more BIND db zone files. The import wizard accepts all primary DNS zones and ignores other zones and files. After uploading a .zip file, the next screen reports any warnings and errors. At this point you can "Save and Exit" to import the new DNS zones, or cancel to make changes.
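Under the hood, a TSIG-protected transfer works by attaching an HMAC, computed over the message with a pre-shared key, to each zone-transfer message (RFC 8945). A minimal conceptual sketch with the Python standard library — simplified, since real TSIG covers the full DNS wire format plus a timestamp, and the key material here is a made-up example:

```python
import hashlib
import hmac

# Hypothetical pre-shared key, as exchanged between primary and secondary.
SECRET = b"example-tsig-key-material"

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 over a message, as TSIG does for each
    zone-transfer message (simplified illustration only)."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, digest: str) -> bool:
    """Constant-time check that the message matches its TSIG-style MAC."""
    return hmac.compare_digest(sign(message), digest)

mac = sign(b"example AXFR message")
print(verify(b"example AXFR message", mac))  # True
print(verify(b"tampered message", mac))      # False
```

The receiving side recomputes the HMAC with the same shared key, so any party without the key can neither forge nor tamper with the transfer — which is why TSIG authenticates the transfer rather than hiding its contents.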
For more complex zone configurations, including support for the $INCLUDE and $ORIGIN directives in BIND files, the following open source tool will convert BIND db files to JSON, which can then be copied directly into the F5 XC Console when configuring records for new and existing primary DNS zones: BIND to XC-DNS Converter

Scenario 2 - F5 Distributed Cloud DNS: Primary with Delegated Subdomains

An enhanced capability when using Distributed Cloud (F5 XC) as the primary DNS server for your domains or subdomains is to have F5 XC dynamically manage the DNS records for its own managed services. Note that prior to July 2023, the delegated DNS feature in F5 XC required the exclusive use of subdomains for dynamically managed DNS records. As of July 2023, organizations can have both F5 XC-managed and self-managed DNS resource records in the same domain or subdomain. When "Allow HTTP Load Balancer Managed Records" is checked, DNS records automatically added by F5 XC appear in a new, read-only RR set group called x-ves-io-managed. In the following example, I've created an HTTP Load Balancer with the domain "www.example.f5-cloud-demo.com", and F5 XC automatically created the A resource record (RR) in the group x-ves-io-managed.

Scenario 3 – F5 Distributed Cloud DNS: Secondary Nameserver

In this scenario, say you already have a primary DNS server in your on-prem datacenter, but due to security needs you don't want it directly accessible from the Internet. F5 XC DNS can be configured as a secondary DNS server, supporting zone transfers (AXFR, IXFR) and receiving (NOTIFY) updates from your primary DNS server. All that's needed to complete this change is to update the nameserver records with your DNS registrar, adding the F5 XC nameservers and removing the real primary. Having F5 XC DNS as the public interface includes complimentary security services, such as DDoS protection and automatic scaling to absorb large query volumes.
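On a BIND-based on-prem primary, allowing the F5 XC secondaries to transfer the zone looks roughly like the following named.conf fragment. This is an illustrative sketch: the addresses are documentation placeholders (substitute F5's published source IPs from the tech doc above), and the key name and zone are hypothetical examples.

```
// named.conf on the on-prem primary (illustrative sketch only).
key "xfer-key." {
    algorithm hmac-sha256;
    secret "...base64 key material...";   // same PSK entered in the F5 XC Console
};

zone "example.f5-cloud-demo.com" {
    type primary;
    file "db.example.f5-cloud-demo.com";
    allow-transfer { key "xfer-key."; };      // permit TSIG-authenticated AXFR/IXFR
    also-notify { 192.0.2.10; 192.0.2.11; };  // placeholder F5 XC transfer targets
};
```

The also-notify list is what makes NOTIFY-driven updates work: when the zone serial changes, the primary pushes a NOTIFY so the F5 XC secondaries re-transfer promptly instead of waiting for the SOA refresh timer.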
Fronting DNS with F5 XC improves the uptime of your services and reduces latency by allowing all of F5's nameservers worldwide to handle domain name resolution. If the primary nameserver is configured for DNSSEC and delivers RRSIG and zone DNSKEY records, the F5 XC nameservers will also include these records in the responses delivered to clients, ensuring a consistent level of security for records management end to end.

To configure F5 XC DNS as a secondary DNS server, go to Add Zone, then choose Secondary DNS Configuration. Next, View Configuration for it, and add your primary DNS server IPs. To enhance the security of zone transfers and updates, F5 XC DNS supports TSIG-protected transfers from the primary DNS server. To use TSIG, ensure your primary DNS server supports it, and enable it by entering the pre-shared key (PSK) name and its value. The PSK itself can be blindfold-encrypted in the F5 XC Console to prevent other admins from seeing it. If protection for zone transfers is desired, simply enter the remaining details for your TSIG PSK and click Apply.

Once you've saved a new secondary DNS configuration, F5 XC DNS pulls the zone details and begins resolving queries on the F5 XC Global Network with its pool of Anycast-reachable DNS servers. You can see the status of individual zones and when they were last transferred by navigating to the DNS Management > DNS Zones overview.

As applications mature and your audience broadens, ensuring low latency for DNS requires additional services. Adding F5 XC DNS to complement an existing BIG-IP GTM or other existing primary nameserver deployment, including with DNSSEC records and TSIG-protected zone transfer support, is straightforward.

Conclusion

You've just seen how to configure F5 XC DNS both as a primary DNS and as a secondary DNS service. Ensure the reachability of your company with a robust, secure, and optimized DNS service by F5.
A service that delivers the lowest resolution latency with its global Anycast network of nameservers, and one that automatically includes DDoS protection, DNSSEC, and TSIG support for secondary DNS. Watch the following demo video to see how to configure F5 XC DNS for scenarios #1 and #3 above.

Additional Resources

On-Demand webinar: Boost resilience and performance with F5 Distributed Cloud DNS
Information about using F5 Distributed Cloud DNS
Technical documentation
DNS Demo Guide and step-by-step walkthrough
BIND to XC-DNS Converter (open source tool)