Distributed Cloud for App Delivery & Security for Hybrid Environments
As enterprises modernize and expand their digital services, they increasingly deploy multiple instances of the same applications across diverse infrastructure environments—such as VMware, OpenShift, and Nutanix—to support distributed teams, regional data sovereignty, redundancy, or environment-specific compliance needs. These application instances often integrate into service chains that span across clouds and data centers, introducing both scale and operational complexity. F5 Distributed Cloud provides a unified solution for secure, consistent application delivery and security across hybrid and multi-cloud environments. It enables organizations to add workloads seamlessly—whether for scaling, redundancy, or localization—without sacrificing visibility, security, or performance.
F5 CNF/BNK issue with DNS Express tmm scaling and zone notifications
I ran into an interesting issue with DNS Express on BIG-IP Next for Kubernetes while playing in a test environment. The DNS zone mirroring is done by the zxfrd pod, and when you have 2 TMM pods in the same namespace you need to create a listener "F5BigDnsApp", as shown in https://clouddocs.f5.com/cnfs/robin/latest/cnf-dnsexpress.html#create-a-dns-zone-to-answer-dns-queries, for the optional NOTIFY messages that feed the zone to the TMM and then to the zxfrd pod. The issue happens when you have 2 or more TMMs: the "F5BigDnsApp" behaves like a virtual server/listener, and on the internal VLANs there is an ARP conflict because the two TMMs on two different Kubernetes/OpenShift nodes advertise the same IP address at Layer 2. This shows up in "kubectl logs" ("oc logs" on OpenShift) on the TMM pods, which mention that a duplicate ARP was detected (a quick way to check is sketched at the end of this post). Interestingly, the same does not happen for the normal listener on the external VLAN (the one that receives and answers the client DNS queries); I think ARP is suppressed by default for the external listener, which can live on 2 or more TMMs because ECMP BGP is used by design to distribute the traffic to the TMMs.
I see 4 possible solutions. One is to be able to control ARP for the "F5BigDnsApp" CRD on internal or external VLANs (with BGP ECMP then also used on the server side). The second is to be able to deploy the "F5BigDnsApp" on just 1 TMM even if there are more. The third would be to configure the listener with an IP address that is not part of the internal self-IP range, but as I see with "kubectl logs" on the ingress controller (f5ing-tmm-pod-manager), the config is then not pushed to the TMM; also, "configview" from the debug sidecar container on the TMM pods shows no listener at all. The manager logs suggest the config is rejected because the listener IP address is not within the self-IP range of the internal VLAN. This may be a system limitation, and perhaps no one thought about this use case, since on BIG-IP it is supported to have a VIP outside the self-IP range, which is not advertised with ARP for exactly this reason. The last solution, which can work at the moment, is to have many TMMs in different namespaces on different Kubernetes nodes, with affinity rules that place each TMM on a different node even though the TMMs are in different namespaces, by matching a configured label (see the example below). Maybe this is the currently intended design, one zxfrd pod with one TMM pod per namespace, but then auto-scaling may not work, since auto-scale would create a new TMM pod in the same namespace if needed.
Example:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: tmm
      # Match Pods in any namespaces that have this label
      namespaceSelector: {}   # empty selector = all namespaces
      topologyKey: "kubernetes.io/hostname"

It should also be considered whether the zxfrd pod can push the DNS zone into the RAM of more than one TMM pod; maybe it can't, and currently only one-to-one is supported. Maybe it was never tested what happens when you have a Security Context IP address on the internal network and multiple TMM pods. Interesting stuff that I just wanted to share from testing things out 😄
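A quick way to confirm the symptom is to check the TMM pod logs for the duplicate-ARP messages. This is only a sketch: the app: tmm label is taken from the affinity example above, and the namespace, pod name, and exact log wording may differ in your deployment:

# List the TMM pods across namespaces (label from the affinity example above)
kubectl get pods -A -l app=tmm -o wide

# Search a TMM pod's logs for the duplicate ARP messages ("oc logs" on OpenShift)
kubectl logs -n <namespace> <tmm-pod-name> --all-containers | grep -i duplicate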
F5 CNF/SPK/BNK and etc. support for Custom URL classifications/apps/IPS signatures?
While playing with CNF/SPK/BNK and so on, I didn't see anything about this in the docs at https://clouddocs.f5.com/cnfs/robin/latest/. I think it is an important feature: if a URL is wrongly classified by the Brightcloud DB, you should be able to add the URL to a custom URL category, for example to allow it. As shown in https://clouddocs.f5.com/cnfs/aon/latest/cnf-pe-url-categorization.html, I think this is hidden somewhere, as there is an option called "customdb", so maybe the downloader pod can be configured to pull the custom URL classification. Since iRules for CNF do not support the "HTTP_REQUEST" and "HTTP_RESPONSE" events, as mentioned in https://clouddocs.f5.com/cnfs/openshift/latest/cnf-irule-crd.html, this seems important.
Outside of that, custom IPS signatures, like for the normal AFM, would be nice. As there is an IPS pod, I think that, like IP Intelligence, it could connect to an external feed list that contains the custom signatures (and the same for the URL category): https://clouddocs.f5.com/cnfs/robin/latest/cnf-ipi-feedlist-crd.html
As for the custom apps that PEM uses with iRules (https://techdocs.f5.com/en-us/bigip-14-1-0/big-ip-policy-enforcement-manager-implementations-14-1-0/creating-custom-classifications.html), I am just mentioning this, but I see fewer use cases for it than for custom URL categories and custom IPS signatures.
I did write to cnfdocs@f5.com as mentioned in the web documents. Hope they see it, as mentioned: "To provide feedback and help improve this document, please email us at cnfdocs@f5.com." 🙂
Using Aliases to launch F5 AMI Images in AWS Marketplace
F5 lists 82 product offerings in the AWS Marketplace as Amazon Machine Images (AMI). Each version of each product in each AWS Region has a different AMI. That’s around 22,000 images! Each AMI is identified by an AMI ID. You use the AMI ID to indicate which AMI you want to use when launching an F5 product. You can find AMI IDs using the AWS Web Console, but the AWS CLI is the best tool for the job. Searching for AMIs using the AWS CLI Here’s how you find the AMI IDs for version 17.5.1.2 of BIG-IP Virtual Edition in the us-east-1 AWS region: aws ec2 describe-images --owners aws-marketplace --filters 'Name=name,Values=F5 BIGIP-17.5.1.2*' --query "sort_by(Images,&Name)[:]. {Description: Description, Id:ImageId }" --region us-east-1 --output table ---------------------------------------------------------------------------------------------------- | DescribeImages | +------------------------------------------------------------------------+-------------------------+ | Description | Id | +------------------------------------------------------------------------+-------------------------+ | F5 BIGIP-17.5.1.2-0.0.5 BYOL-All Modules 1Boot Loc-250916013758 | ami-0948eabdf29ef2a8f | | F5 BIGIP-17.5.1.2-0.0.5 BYOL-All Modules 2Boot Loc-250916015535 | ami-0cb3aaa67967ad029 | | F5 BIGIP-17.5.1.2-0.0.5 BYOL-LTM 1Boot Loc-250916013616 | ami-05d70b82c9031ff39 | | F5 BIGIP-17.5.1.2-0.0.5 BYOL-LTM 2Boot Loc-250916014744 | ami-0b6021cc939308f3e | | F5 BIGIP-17.5.1.2-0.0.5 BYOL-encrypted-threat-protection-250916015535 | ami-01f4fde300d3763be | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-AWF Plus 16vCPU-250916015534 | ami-015474056159387ac | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 200Mbps-250916015522 | ami-06ce5b03dce2a059d | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 25Mbps-250916015520 | ami-0826808708df97480 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 3Gbps-250916015523 | ami-08c63c8f7ca71cf37 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 10Gbps-250916015532 | ami-0e806ef17838760e4 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 1Gbps-250916015530 | ami-05e31c2a0ac9ec050 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 200Mbps-250916015528 | ami-02dc0995af98d0710 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 25Mbps-250916015527 | ami-08b8f2daefde800e9 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 5Gbps-250916015531 | ami-0d16154bb1102f3e9 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 10Gbps-250916015512 | ami-05c9527fff191feba | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 1Gbps-250916015510 | ami-05ce2932601070d5c | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 200Mbps-250916015508 | ami-0f6044db3900ba46f | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 25Mbps-250916014542 | ami-0de57aba160170358 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 5Gbps-250916015511 | ami-04271103ab2d1369d | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 10Gbps-250916014739 | ami-0d06d2a097d7bb47a | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 1Gbps-250916014737 | ami-01707e969ebcc6138 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 200Mbps-250916014735 | ami-06f9a44562d94f992 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 25Mbps-250916013626 | ami-0aa2bca574c66af13 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 5Gbps-250916014738 | ami-01951e02c52deef85 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-PVE Adv WAF Plus 200Mbps-0916015525 | ami-03df50dfc04f19df5 | | F5 BIGIP-17.5.1.2-0.0.5 PAYG-PVE Adv WAF Plus 25Mbps-50916015524 | ami-0777c069eaae20ea1 | +------------------------------------------------------------------------+-------------------------+ This command shows all 17.5.1* releases of the "PayGo Good 1Gbps" flavor of BIG-IP in the us-west-1 
region sorted by newest release first: aws ec2 describe-images --owners aws-marketplace --filters 'Name=name,Values=F5 BIGIP-17.5.1*PAYG-Good 1Gbps*' --query "reverse(sort_by(Images,&CreationDate))[:]. {Description: Name, Id:ImageId , date:CreationDate}" --region us-west-1 --output table ---------------------------------------------------------------------------------------------------------------------------------------------------- | DescribeImages | +--------------------------------------------------------------------------------------------+------------------------+----------------------------+ | Description | Id | date | +--------------------------------------------------------------------------------------------+------------------------+----------------------------+ | F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 1Gbps-250916014737-7fb2f9db-2a12-4915-9abb-045b6388cccd | ami-0de8ca1229be5f7fe | 2025-09-16T23:12:28.000Z | | F5 BIGIP-17.5.1-0.80.7 PAYG-Good 1Gbps-250811055424-7fb2f9db-2a12-4915-9abb-045b6388cccd | ami-09afcec6f36494382 | 2025-08-15T19:03:23.000Z | | F5 BIGIP-17.5.1-0.0.7 PAYG-Good 1Gbps-250618090310-7fb2f9db-2a12-4915-9abb-045b6388cccd | ami-03e389e112872fd53 | 2025-07-01T06:00:44.000Z | +--------------------------------------------------------------------------------------------+------------------------+----------------------------+ Notice that the same BIG-IP VE release has a different AMI ID in each AWS region. Attempting to launch a product in one region using an AMI ID from a different region will fail. This causes a problem when a shell script or automation tool is used to launch new EC2 instances and the AMI IDs have been hardcoded for one region and you attempt to use it in another. Wouldn’t it be nice to have a single AMI identifier that works in all AWS regions? Introducing AMI Aliases The Ami Alias is a similar ID to the AMI ID, but it’s easier to use in automation. An AMI alias has the form /aws/service/marketplace/prod-<identifier>/<version> , for example, "PayGo Good 1Gbps" /aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5 You can use this Ami Alias ID in any Region, and AWS automatically maps it to the correct Regional AMI ID. 
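The alias is resolved through AWS Systems Manager, so you should be able to check which regional AMI ID an alias maps to before launching anything. A minimal sketch using the "PayGo Good 1Gbps" alias shown above (assuming your credentials can read the public marketplace parameters):

# Resolve the AMI alias to the regional AMI ID (here: us-east-1); change --region
# to see the mapping for another region.
aws ssm get-parameters \
  --names /aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5 \
  --region us-east-1 \
  --query "Parameters[0].Value" \
  --output text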
BIG-IP AMI Alias Identifiers F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 16vCPU) prod-qqgc2ltsirpio F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 200Mbps) prod-yajbds56coa24 F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 25Mbps) prod-qiufc36l6sepa F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 3Gbps) prod-fp5qrfirjnnty F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 10Gbps) prod-w2p3rtkjrjmw6 F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 1Gbps) prod-g3tye45sqm5d4 F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 200Mbps) prod-dnpovgowtyz3o F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 25Mbps) prod-wjoyowh6kba46 F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 5Gbps) prod-hlx7g47cksafk F5 BIG-IP VE - ALL (BYOL, 1 Boot Location) prod-zvs3u7ov36lig F5 BIG-IP VE - ALL (BYOL, 2 Boot Locations) prod-ubfqxbuqpsiei F5 BIG-IP VE - LTM/DNS (BYOL, 1 Boot Location) prod-uqhc6th7ni37m F5 BIG-IP VE - LTM/DNS (BYOL, 2 Boot Locations) prod-o7jz5ohvldaxg F5 BIG-IP Virtual Edition - BETTER (PAYG, 10Gbps) prod-emsxkvkzwvs3o F5 BIG-IP Virtual Edition - BETTER (PAYG, 1Gbps) prod-4idzu4qtdmzjg F5 BIG-IP Virtual Edition - BETTER (PAYG, 200Mbps) prod-firaggo6h7bt6 F5 BIG-IP Virtual Edition - BETTER (PAYG, 25Mbps) prod-wijbh7ib34hyy F5 BIG-IP Virtual Edition - BETTER (PAYG, 5Gbps) prod-rfglxslpwq64g F5 BIG-IP Virtual Edition - GOOD (PAYG, 10Gbps) prod-54qdbqglgkiue F5 BIG-IP Virtual Edition - GOOD (PAYG, 1Gbps) prod-s6e6miuci4yts F5 BIG-IP Virtual Edition - GOOD (PAYG, 200Mbps) prod-ynybgkyvilzrs F5 BIG-IP Virtual Edition - GOOD (PAYG, 25Mbps) prod-6zmxdpj4u4l5g F5 BIG-IP Virtual Edition - GOOD (PAYG, 5Gbps) prod-3ze6zaohqssua F5 BIG-IQ Virtual Edition - (BYOL) prod-igv63dkxhub54 F5 Encrypted Threat Protection prod-bbtl6iceizxoi F5 Per-App-VE Advanced WAF with LTM, IPI, TC (PAYG, 200Mbps) prod-gkzfxpnvn53v2 F5 Per-App-VE Advanced WAF with LTM, IPI, TC (PAYG, 25Mbps) prod-qu34r4gipys4s NGINX Plus Alias Identifiers NGINX Plus Basic - Amazon Linux 2 (LTS) AMI prod-jhxdrfyy2jtva NGINX Plus Developer - Amazon Linux 2 (LTS) prod-kbeepohgkgkxi NGINX Plus Developer - Amazon Linux 2 (LTS) ARM Graviton prod-vulv7pmlqjweq NGINX Plus Developer - Amazon Linux 2023 prod-2zvigd3ltowyy NGINX Plus Developer - Amazon Linux 2023 ARM Graviton prod-icspnobisidru NGINX Plus Developer - RHEL 8 prod-tquzaepylai4i NGINX Plus Developer - RHEL 9 prod-hwl4zfgzccjye NGINX Plus Developer - Ubuntu 22.04 prod-23ixzkz3wt5oq NGINX Plus Developer - Ubuntu 24.04 prod-tqr7jcokfd7cw NGINX Plus FIPS Premium - RHEL 9 prod-v6fhyzzkby6c2 NGINX Plus Premium - Amazon Linux 2 (LTS) AMI prod-4dput2e45kkfq NGINX Plus Premium - Amazon Linux 2 (LTS) ARM Graviton prod-56qba3nacijjk NGINX Plus Premium - Amazon Linux 2023 prod-w6xf4fmhpc6ju NGINX Plus Premium - Amazon Linux 2023 ARM Graviton prod-e2iwqrpted4kk NGINX Plus Premium - RHEL 8 AMI prod-m2v4zstxasp6s NGINX Plus Premium - RHEL 9 prod-rytmqzlxdneig NGINX Plus Premium - Ubuntu 22.04 prod-dtm5ujpv7kkro NGINX Plus Premium - Ubuntu 24.04 prod-opg2qh33mi4pk NGINX Plus Standard - Amazon Linux 2 (LTS) AMI prod-mdgdnfftmj7se NGINX Plus Standard - Amazon Linux 2 (LTS) ARM Graviton prod-2kagbnj7ij6zi NGINX Plus Standard - Amazon Linux 2023 prod-i25cyug3btfvk NGINX Plus Standard - Amazon Linux 2023 ARM Graviton prod-6s5rvlqlgrt74 NGINX Plus Standard - RHEL 8 prod-ebhpntvlfwluc NGINX Plus Standard - RHEL 9 prod-3e7rk2ombbpfa NGINX Plus Standard - Ubuntu 22.04 prod-7rhflwjy5357e NGINX Plus Standard - Ubuntu 24.04 prod-b4rly35ct3dlc NGINX Plus with 
NGINX App Protect Developer - Amazon Linux 2 prod-pjmfzy5htmaks
NGINX Plus with NGINX App Protect Developer - Debian 11 prod-ixsytlu2eluqa
NGINX Plus with NGINX App Protect Developer - RHEL 8 prod-6v57ggy3dqb6c
NGINX Plus with NGINX App Protect Developer - Ubuntu 20.04 prod-4a4g7h7mpepas
NGINX Plus with NGINX App Protect DoS Developer - Amazon Linux 2023 prod-fmqayhbsryoz2
NGINX Plus with NGINX App Protect DoS Developer - Debian 11 prod-4e5fwakhrn36y
NGINX Plus with NGINX App Protect DoS Developer - RHEL 8 prod-ubid75ixhf34a
NGINX Plus with NGINX App Protect DoS Developer - RHEL 9 prod-gg7mi5njfuqcw
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 20.04 prod-qiwzff7orqrmy
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 22.04 prod-h564ffpizhvic
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 24.04 prod-wckvpxkzj7fvk
NGINX Plus with NGINX App Protect DoS Premium - Amazon Linux 2023 prod-lza5c4nhqafpk
NGINX Plus with NGINX App Protect DoS Premium - Debian 11 prod-ych3dq3r44gl2
NGINX Plus with NGINX App Protect DoS Premium - RHEL 8 prod-266ker45aot7g
NGINX Plus with NGINX App Protect DoS Premium - RHEL 9 prod-6qrqjtainjlaa
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 20.04 prod-hagmbnluc5zmw
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 22.04 prod-y5iwq6gk4x4yq
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 24.04 prod-k3cb7avaushvq
NGINX Plus with NGINX App Protect Premium - Amazon Linux 2 prod-tlghtvo66zs5u
NGINX Plus with NGINX App Protect Premium - Debian 11 prod-6kfdotc3mw67o
NGINX Plus with NGINX App Protect Premium - RHEL 8 prod-okwnxdlnkmqhu
NGINX Plus with NGINX App Protect Premium - Ubuntu 20.04 prod-5wn6ltuzpws4m
NGINX Plus with NGINX App Protect WAF + DoS Premium - Amazon Linux 2023 prod-mualblirvfcqi
NGINX Plus with NGINX App Protect WAF + DoS Premium - Debian 11 prod-k2rimvjqipvm2
NGINX Plus with NGINX App Protect WAF + DoS Premium - RHEL 8 prod-6nlubep3hg4go
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 18.04 prod-f2diywsozd22m
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 20.04 prod-ajcsh5wsfuen2
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 22.04 prod-6adjgf6yl7hek
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 24.04 prod-autki7guiiqio

Using AMI Aliases for BIG-IP
The following example uses an AMI alias to launch a new "F5 BIG-IP Virtual Edition - GOOD (PAYG, 1Gbps)" instance, version 17.5.1.2-0.0.5, with the AWS CLI:

aws ec2 run-instances --image-id resolve:ssm:/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5 --instance-type m5.xlarge --key-name MyKeyPair

The next example shows a CloudFormation template that accepts the AMI alias as an input parameter to create an instance.

AWSTemplateFormatVersion: 2010-09-09
Parameters:
  AmiAlias:
    Description: AMI alias
    Type: 'String'
Resources:
  MyEC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Sub "resolve:ssm:${AmiAlias}"
      InstanceType: "g4dn.xlarge"
      Tags:
        - Key: "Created from"
          Value: !Ref AmiAlias

Using AMI Aliases for NGINX Plus
NGINX Plus images in the AWS Marketplace are not version specific, so just use "latest" as the version to launch. For example, this will launch NGINX Plus Premium on Ubuntu 24.04:

aws ec2 run-instances --image-id resolve:ssm:/aws/service/marketplace/prod-opg2qh33mi4pk/latest --instance-type c5.large --key-name MyKeyPair
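If you prefer to drive the CloudFormation template shown above from the CLI, a minimal sketch looks like this; the stack name and template file name are hypothetical:

# Deploy the template above, passing the AMI alias as the AmiAlias parameter
aws cloudformation deploy \
  --stack-name f5-ami-alias-demo \
  --template-file bigip-instance.yaml \
  --parameter-overrides AmiAlias=/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5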
Finding AMI Aliases in AWS Marketplace
AMI aliases are new to the AWS Marketplace, so not all products have them. To locate the alias for an AMI you use often, you need to resort to the AWS Marketplace web console. Here are the step-by-step instructions provided by Amazon:
1. Navigate to AWS Marketplace
   - Go to AWS Marketplace
   - Sign in to your AWS account
2. Find and Subscribe to the Product
   - Search for or browse to find your desired product
   - Click on the product listing
   - Click "Continue to Subscribe"
   - Accept the terms and subscribe to the product
3. Configure the Product
   - After subscribing, click "Continue to Configuration"
   - Select your desired Delivery Method (if multiple options are available), Software Version, and Region
4. Locate the AMI Alias
   - At the bottom of the configuration page, you'll see:
     AMI ID: ami-1234567890EXAMPLE
     AMI Alias: /aws/service/marketplace/prod-<identifier>/<version>

New Tools for Your AMI Hunt
In this article, we focused on using AMI Aliases to select the right F5 product to launch in AWS EC2. But there's one more takeaway. Scroll back up to the top of this page and take a closer look at the "aws ec2 describe-images" commands. These commands use JMESPath to filter, sort, and format the output. Find out more about filtering the output of AWS CLI commands here.
XC Distributed Cloud and how to keep the Source IP from changing with customer edges (CE)!
The best option will always be for the application to stop tracking users based on something as primitive as an IP address. Sometimes the issue is in the Load Balancer or ADC that sits behind the XC RE: if its persistence is based on the source IP address, change it; on BIG-IP, for example, to Cookie or Universal persistence, or to SSL-session persistence if the load balancer does no decryption and works purely at the TCP/UDP layer (a minimal tmsh sketch is included near the end of this article). Because an XC Regional Edge (RE) has many IP addresses from which it can connect to the origin servers, adding a CE for the legacy apps is a good option to keep the source IP from changing across the same client's HTTP requests during a session/transaction. Before going through this article, I recommend reading the links below:
F5 Distributed Cloud – CE High Availability Options: A Comparative Exploration | DevCentral
F5 Distributed Cloud - Customer Edge | F5 Distributed Cloud Technical Knowledge
Create Two Node HA Infrastructure for Load Balancing Using Virtual Sites with Customer Edges | F5 Distributed Cloud Technical Knowledge

RE to CE cluster of 3 nodes
The new SNAT prefix option under the origin pool ensures that no matter which CE connects to the origin pool, the origin sees the same source IP address. Be careful: if you configure more than a single /32 IP, the client may again get a different IP address each time. A single SNAT IP may also cause "inet port exhaustion" (that is what it is called on F5 BIG-IP) if there are too many connections to the origin server, so be careful, as the SNAT option was added primarily for that use case. There was an older option called "LB source IP persistence", but it is better not to use it, as it was not as optimized and clean as this one.

RE to 2 CE nodes in a virtual site
The same SNAT pool option is not allowed for a virtual site made of 2 standalone CEs. For this case we can use the ring hash algorithm. Why does this work? As Kayvan explained to me, the hashing of the origin takes the CE name into account, so the same origin under 2 different CEs gets the same ring hash, and the same client source IP address is always sent to the same CE to access the Origin Server. This will not work for a single 3-node CE cluster, as all 3 nodes have the same name. I have seen 503 errors when ring hash is enabled under the HTTP LB, so enable it only under the XC route object and the origin pool attached to it!

CE hosted HTTP LB with Advertise policy
In XC with CE you can do HA with a 3-node CE cluster using Layer 2 HA, based on VRRP and ARP, or using Layer 3 HA based on BGP, which works with a 3-node CE cluster or with 2 CEs in a virtual site and offers control options like weight, AS prepend, or local preference at the router level. For Layer 2, I will just mention that you need to allow the VRRP multicast address 224.0.0.18 if you are migrating from BIG-IP HA, and that XC selects 1 CE to hold the active IP (this is seen in the XC logs); at the moment the selection, for some reason, can't be controlled. If a CE can't reach the origin servers in the origin pool, it should stop advertising the HTTP LB IP address through BGP. For those options, Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment is a great example: it shows ECMP BGP, but with the BGP attributes you can easily select one CE to be active and processing connections, so that just one IP address is seen by the origin server. When a CE gets traffic, by default it prefers to send it to the origin itself, as "Local Preferred" is enabled by default under the origin pool.
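Coming back to the point at the start of this article about moving BIG-IP persistence away from the source IP, a minimal tmsh sketch of switching a virtual server to cookie persistence is shown below (the virtual server name is a placeholder):

# Replace source-address persistence with cookie insert persistence
tmsh modify ltm virtual vs_legacy_app persist replace-all-with { cookie { default yes } }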
In clouds like AWS/Azure, a cloud-native LB is simply added in front of the 3-node CE cluster, and the solution there is easy: just configure persistence on that LB. Public clouds do not support ARP, so forget about Layer 2 and work with the native LB that load balances between the CEs 😉 Hope you enjoyed this article!
OpenShift Service Mesh 2.x/3.x with F5 BIG-IP
Overview
OpenShift Service Mesh (OSSM) is Red Hat's packaged version of the Istio Service Mesh. Istio has the Ingress Gateway component to handle incoming traffic from outside of the cluster. Like other ingress controllers, it requires an external load balancer to get the traffic into the ingress PODs. This follows the canonical Kubernetes 2-tier arrangement for getting the traffic inside the cluster, as depicted in the next figure.
This article covers how to configure OpenShift Service Mesh 2.x/3.x, expose it to the BIG-IP, and properly monitor its health, either using BIG-IP's Container Ingress Services (CIS) or without it.

Exposing OSSM in BIG-IP - VIP configuration
It is a customer choice how to publish OSSM in the BIG-IP:
- A Layer 4 (L4) Virtual Server is simpler, and certificate management is done in OpenShift. The advantages of this mode are potentially higher performance and scalability, including connection mirroring, although mirroring is not usually used for HTTP traffic due to the typical retry mechanism of HTTP applications. Connection persistence is limited to the source IP. When using CIS, this is done with a TransportServer CR, which creates a fastL4-type virtual server in the BIG-IP.
- A Layer 7 (L7) Virtual Server requires additional configuration because TLS termination is required. In this mode, OpenShift can take advantage of BIG-IP's TLS off-loading capabilities and Hardware/Network/SaaS/Cloud HSM integrations, which store private keys securely, including FIPS-level support. Working at L7 also allows per-application traffic management, including header and payload rewrites, cookie persistence, etc. It also allows per-application multi-cluster. The above features are provided by the LTM (load balancing) module in BIG-IP. The possibilities are further expanded when using modules such as ASM (Advanced WAF) and Access (authentication). When using CIS, this is done with a VirtualServer CR, which creates a standard-type virtual server in the BIG-IP.

Exposing OSSM to BIG-IP - pool configuration
There are two options to expose Istio Ingress Gateways to BIG-IP:
- Using ClusterIP addresses; these are POD IPs, which are dynamic. This requires the use of CIS for discovering the IP addresses of the Ingress Gateway PODs.
- Using NodePort addresses; these are reachable from the outside network. When using these, it is not strictly necessary to use CIS, but it is recommended.

Exposing OpenShift Service Mesh using ClusterIP
This requires the use of CIS with the following parameters:

--orchestration-cni=ovn
--static-routing-mode=true

These make CIS create IP routes in the BIG-IP for reaching the POD IPs inside the OpenShift cluster. Please note that this only works if all the OpenShift nodes are directly connected in the same subnet as the BIG-IP. Additionally, the following parameter is required; it is the one that actually makes CIS populate pool members with Cluster (POD) IPs:

--pool-member-type=cluster

There is no need to change any configuration in OSSM because ClusterIP mode is the default mode of the Istio Ingress Gateways.
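For reference, here is a minimal sketch of how these flags could appear in the CIS (k8s-bigip-ctlr) container arguments. The BIG-IP address and partition values are placeholders, and the remaining required CIS arguments (credentials, etc.) are omitted:

# Excerpt of the CIS container spec, not a complete Deployment
args:
  - --bigip-url=<bigip-mgmt-address>      # placeholder
  - --bigip-partition=<partition>         # placeholder
  - --orchestration-cni=ovn
  - --static-routing-mode=true
  - --pool-member-type=cluster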
Exposing OpenShift Service Mesh using NodePort
Using NodePort allows you to have known IP addresses for the Ingress Gateways, reachable from outside the cluster. Note that when using NodePort, only one Ingress Gateway replica will run per node. The behavior of NodePort varies depending on the externalTrafficPolicy field:
- Using the Cluster value, any OpenShift node will accept traffic and will redirect the traffic to any node that has an Ingress Gateway POD, in a load-balancing fashion. This is the easiest to set up, but because each request might go to a different node, health checking is not reliable (it is not known which POD goes down).
- Using the Local value, only the OpenShift nodes that have Ingress Gateway PODs will accept traffic. The traffic will be delivered to the local Ingress Gateway PODs, without further indirection. This is the recommended way when using NodePort because of its deterministic behaviour and therefore reliable health checking.

Next, it is described how to set up a NodePort using the Local externalTrafficPolicy. There are two options for configuring OSSM:
- Using the ServiceMeshControlPlane CR method: this is the default method in OSSM 2.x for backwards compatibility, but it doesn't allow you to fine-tune the configuration of the proxy. See this OSSM 2.x link for further details. This is deprecated and not available in OSSM 3.x.
- Using the Gateway injection method: this is the only method possible in OSSM 3.x and the current recommendation from Red Hat for OSSM 2.x. Using this method allows you to tune the proxy settings. In this article, it will be shown how this tuning is of special interest because, at present, the Ingress Gateway doesn't have good default values for allowing reliable health checking. These will be discussed in the Health Checking section.

When using the ServiceMeshControlPlane CR method, the above is configured as follows:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
[...]
spec:
  gateways:
    ingress:
      enabled: false
      runtime:
        deployment:
          replicas: 2
      service:
        externalTrafficPolicy: Local
        ports:
        - name: status-port
          nodePort: 30021
          port: 15021
          targetPort: 15021
        - name: http2
          nodePort: 30080
          port: 80
          targetPort: 8080
        - name: https
          nodePort: 30443
          port: 443
          targetPort: 8443
        type: NodePort

When using the Gateway injection method (recommended), the Service definition is manually created analogously to the ServiceMeshControlPlane CR:

apiVersion: v1
kind: Service
[...]
spec:
  externalTrafficPolicy: Local
  type: NodePort
  ports:
  - name: status-port
    nodePort: 30021
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30443
    port: 443
    protocol: TCP
    targetPort: 8443

The ports section is optional but recommended in order to have deterministic ports, and it is required when not using CIS (because static ports are needed in that case). The nodePort values can be customised. When not using CIS, the pool members have to be configured manually in the BIG-IP.

It is typical in OpenShift to have the Ingress components (OpenShift Router or Istio) in dedicated infra nodes. See this Red Hat solution for details. When using the ServiceMeshControlPlane method, the configuration is as follows:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
[...]
spec:
  runtime:
    defaults:
      pod:
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/infra
          value: reserved
        - effect: NoExecute
          key: node-role.kubernetes.io/infra
          value: reserved

When using the Gateway injection method, the configuration is added to the Deployment file directly:

apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
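With externalTrafficPolicy set to Local, only the nodes that actually run an Ingress Gateway POD answer on the NodePort. A quick sanity check from the CLI is sketched below; the namespace, Service name, and label are common Istio defaults and may differ in your deployment:

# PODs (and the nodes they run on) backing the gateway Service
oc -n istio-system get pods -l app=istio-ingressgateway -o wide

# Endpoints of the NodePort Service; only Ready gateway PODs should be listed
oc -n istio-system get endpoints istio-ingressgateway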
Pinning the Ingress Gateways to infra nodes as shown above is also a good practice when using CIS. Additionally, CIS by default adds all node IPs to the Service pool, regardless of whether externalTrafficPolicy is set to the Cluster or Local value. The health check will discard nodes where there are no Ingress Gateways. The scope of the nodes discovered by CIS can be limited with the following parameter:

--node-label-selector

Health Checking and retries for the Ingress Gateway
Ingress Gateway Readiness
The Ingress Gateway has the following readinessProbe for Kubernetes' own health checking:

readinessProbe:
  failureThreshold: 30
  httpGet:
    path: /healthz/ready
    port: 15021
    scheme: HTTP
  initialDelaySeconds: 1
  periodSeconds: 2
  successThreshold: 1
  timeoutSeconds: 3

The failureThreshold value of 30 is considered way too large: it only marks the Ingress Gateway as not Ready after 90 seconds (tested to be failureThreshold * timeoutSeconds). In this article, it is recommended to mark down an Ingress Gateway no later than 16 seconds.

When using CIS, Kubernetes informs CIS whenever a POD is not Ready, and CIS automatically removes its associated pool member from the pool. In order to achieve the desired behaviour of marking down the Ingress Gateway before 16 seconds, it is required to change the default failureThreshold value in the Deployment file by adding the following snippet:

apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    spec:
      containers:
      - name: istio-proxy
        image: auto
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz/ready
            port: 15021
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 3

This keeps all other values equal and sets failureThreshold to 5, therefore marking down the Ingress Gateway after 15 seconds. When not using CIS, an HTTP health check has to be configured manually in the BIG-IP; an example health check monitor is sketched at the end of this section.

Connection draining
When an Ingress Gateway POD is deleted (because of an upgrade, a scale-down event, etc.), it immediately returns HTTP 503 on the /healthz/ready endpoint and keeps serving connections until it is effectively deleted. This is called the drain period, and by default it is extremely short (3 seconds) for any external load balancer. This value has to be increased so that the Ingress Gateway PODs being deleted continue serving connections until the POD is removed from the external load balancer (the BIG-IP) and the outstanding connections are finalised. This setting can only be tuned using the Gateway injection method, and it is applied by adding the following snippet in the Deployment file:

apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          terminationDrainDuration: 45s

In the example above, the default drain period of the OpenShift Router (45 seconds) has been used. The value can be customised, keeping in mind that:
- When using CIS, it should allow CIS to update the configuration in the BIG-IP and drain the connections.
- When not using CIS, it should allow the health check to detect the condition of the POD and drain the connections.
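For the manual BIG-IP health monitor mentioned above, a minimal tmsh sketch could look like the following. The monitor name is arbitrary, 30021 is the status-port NodePort from the Service example, and the interval/timeout values are chosen so that a failed gateway is marked down within the recommended 16 seconds:

tmsh create ltm monitor http osm_ingress_ready \
    destination *:30021 \
    send "GET /healthz/ready HTTP/1.1\r\nHost: ingressgateway\r\nConnection: close\r\n\r\n" \
    recv "200 OK" \
    interval 5 timeout 16

The monitor is then attached to the pool whose members are the OpenShift node addresses, so that only nodes with a Ready Ingress Gateway receive traffic.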
Additional recommendations
The next recommendations apply to any ingress controller or API manager and have been previously suggested when using the OpenShift Router.

Handle non-graceful errors with the pool's reselect tries
To deal better with non-graceful shutdowns or transient errors, this mechanism reselects a new Ingress Gateway POD when a request fails. The recommendation is to set the number of tries to the number of Ingress Gateway PODs minus 1. When using CIS, this can be set in the VirtualServer or TransportServer CRs with the reselectTries parameter.

Set an additional TCP monitor for the Ingress Gateway's application traffic sockets
This complementary TCP monitor (for both the HTTP and HTTPS listeners) validates that Ready instances can actually receive traffic on the application's traffic sockets. Although this condition is handled by the reselect tries mechanism, this monitor provides visibility that such errors are happening.

Conclusion and closing remarks
We hope this article highlights the most important aspects of integrating OpenShift Service Mesh with BIG-IP. A key aspect of a reliable Ingress Gateway integration is to modify OpenShift Service Mesh's terminationDrainDuration and readinessProbe.failureThreshold defaults. F5 has submitted RFE 04270713 to Red Hat to improve these defaults. This article will be updated accordingly. Whether CIS integration is used or not, BIG-IP allows you to expose OpenShift Service Mesh reliably with extensive L4-L7 security and traffic management capabilities. It also allows fine-grained access control, scalable SNAT or keeping the original source IP, among others. Overall, BIG-IP is able to fulfill any requirement. We look forward to hearing your experience and feedback on this article.
Using AWS CloudHSM with F5 BIG-IP
With the release of TMOS version 17.5.1, BIG-IP now supports the latest AWS CloudHSM hardware security module (HSM) type, hsm2m.medium, and the latest AWS CloudHSM Client SDK, version 5. This article explains how to install and configure AWS CloudHSM Client SDK 5 on BIG-IP 17.5.1.
F5 Distributed Cloud (XC) Global Applications Load Balancing in Cisco ACI
Introduction
F5 Distributed Cloud (XC) simplifies cloud-based DNS management with global server load balancing (GSLB) and disaster recovery (DR). F5 XC efficiently directs application traffic across environments globally, performs health checks, and automates responses to activities and events to maintain high application performance with high availability and robustness. In this article, we will discuss how we can ensure high application performance with high availability and robustness by using XC to load-balance global applications across public clouds and Cisco Application Centric Infrastructure (ACI) sites that are geographically apart. We will look at two different XC in ACI use cases. Each of them uses a different approach for global application delivery and leverages a different XC feature to load balance the applications globally and for disaster recovery.

XC DNS Load Balancer
Our first XC in ACI use case is a very common one, where we use a traditional network-centric approach for global application delivery and disaster recovery: we use our existing network infrastructure to provide global application connectivity, and we deploy GSLB to load balance the applications across sites globally and for disaster recovery. In our example, we will show you how to use the XC DNS Load Balancer to load-balance a global application across ACI sites that are geographically dispersed. One of the many advantages of using the XC DNS Load Balancer is that we no longer need to manage GSLB appliances. Also, we can expect high DNS performance thanks to the XC global infrastructure. In addition, we have a single pane of glass, the XC console, to manage all of our services such as multi-cloud networking, application delivery, DNS services, WAAP, etc.

Example Topology
Here in our example, we use the Distributed Cloud (XC) DNS Load Balancer to load balance our global application hello.bd.f5.com, which is deployed in a hybrid multi-cloud environment across two ACI sites located in San Jose and New York. Here are some highlights at each ACI site from our example:

New York location
- XC CE is deployed in ACI using layer three attached with BGP
- XC advertises custom VIP 10.10.215.215 to ACI via BGP
- XC custom VIP 10.10.215.215 has an origin server 10.131.111.88 on AWS
- BIG-IP is integrated into ACI
- BIG-IP has a public VIP 12.202.13.149 that has two pool members: the on-premise origin server 10.131.111.161 and the XC custom VIP 10.10.215.215

San Jose location
- XC CE is deployed in ACI using layer three attached with BGP
- XC advertises custom VIP 10.10.135.135 to ACI via BGP
- XC custom VIP 10.10.135.135 has an origin server 10.131.111.88 on Azure
- BIG-IP is integrated into Cisco ACI
- BIG-IP has a public VIP 12.202.13.147 that has two pool members: the on-premise origin server 10.131.111.55 and the XC custom VIP 10.10.135.135

*Note: Click here to review how to deploy XC CE in ACI using layer three attached with BGP.

DNS Load Balancing Rules
A DNS Load Balancer is an ingress controller for the DNS queries made to your DNS servers. The DNS Load Balancer receives the requests and answers with an IP address from a pool of members based on the configured load balancing rules. On the XC console, go to "DNS Management" -> "DNS Load Balancer Management" to create a DNS Load Balancer and then define the load balancing rules.
Here in our example, we created a DNS Load Balancer and defined the load balancing rules for our global application hello.bd.f5.com (note: as a prerequisite, F5 XC must be providing primary DNS for the domain):
- Rule #1: If the DNS request to hello.bd.f5.com comes from the United States or United Kingdom, respond with BIG-IP VIP 12.203.13.149 in the DNS response so that the application traffic will be directed to the New York ACI site and forwarded to an origin server that is located in AWS or on-premise.
- Rule #2: If the DNS request to hello.bd.f5.com comes from the United States or United Kingdom and the New York ACI site becomes unavailable, respond with BIG-IP VIP 12.203.13.147 in the DNS response so that the application traffic will be directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure.
- Rule #3: If the DNS request to hello.bd.f5.com comes from somewhere outside of the United States or United Kingdom, respond with BIG-IP VIP 12.203.13.147 in the DNS response so that the application traffic will be directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure.

Validation
Now, let's see what happens. When a machine located in the United States tries to reach hello.bd.f5.com and both ACI sites are up, the traffic is directed to the New York ACI site and forwarded to an origin server that is located on-premise or in AWS, as expected. When a machine located in the United States tries to reach hello.bd.f5.com and the New York ACI site is down or becomes unavailable, the traffic is re-directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure, as expected. When a machine tries to access hello.bd.f5.com from outside of the United States or United Kingdom, it is directed to the San Jose ACI site and forwarded to an origin server that is located on-premise or in Azure, as expected.

On the XC console, go to "DNS Management" and select the appropriate DNS Zone to view the Dashboard for information such as the DNS traffic distribution across the globe, the query types, etc., and the Requests view for DNS request details.
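Outside of the XC dashboards, a quick way to verify the geo-based answers is to query the name directly from test clients in different locations (domain taken from the example above; the answer should be one of the BIG-IP public VIPs, depending on the client's location and site health):

# Run from clients in different regions and compare the returned A record
dig +short hello.bd.f5.com A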
XC HTTP Load Balancer
Our second XC in ACI use case uses a different approach for global application delivery and disaster recovery. Instead of using the existing network infrastructure for global application connectivity and utilizing the XC DNS Load Balancer for global application load balancing, we simplify the network layer management by securely deploying XC to connect our applications globally and leveraging the XC HTTP Load Balancer to load balance our global applications and for disaster recovery.

Example Topology
Here in our example, we use the XC HTTP Load Balancer to load balance our global application global.f5-demo.com, which is deployed across a hybrid multi-cloud environment. Here are some highlights:
- XC CE is deployed in each ACI site using layer three attached with BGP
- New York location: ACI advertises on-premise origin server 10.131.111.161 to XC CE via BGP
- San Jose location: ACI advertises on-premise origin server 10.131.111.55 to XC CE via BGP
- An origin server 10.131.111.88 is located in AWS
- An origin server 10.131.111.88 is located in Azure

*Note: Click here to review how to deploy XC CE in ACI using layer three attached with BGP.

XC HTTP Load Balancer
On the XC console, go to "Multi-Cloud App Connect" -> "Manage" -> "Load Balancers" -> "HTTP Load Balancers" to "Add HTTP Load Balancer". In our example, we created an HTTPS load balancer named global with domain name global.f5-demo.com. Instead of bringing our own certificate, we took advantage of the automatic TLS certificate generation and renewal supported by XC.

Go to the "Origins" section to specify the origin servers for the global application. In our example, we included all origin servers across the public clouds and ACI sites for our global application global.f5-demo.com. Next, go to "Other Settings" -> "VIP Advertisement". Here, select either "Internet" or "Internet (Specified VIP)" to advertise the HTTP Load Balancer to the Internet. In our example, we selected "Internet" to advertise global.f5-demo.com globally because we decided not to manage or acquire a public IP.

In our first use case, we defined a set of DNS load balancing rules on the XC DNS Load Balancer to direct the application traffic based on our requirements:
- If the request to global.f5-demo.com comes from the United States or United Kingdom, application traffic should be directed to an origin server that is located on-premise in the New York ACI site or in AWS.
- If the request to global.f5-demo.com comes from the United States or United Kingdom and the origin servers in the New York ACI site and AWS become unavailable, application traffic should be re-directed to an origin server that is located on-premise in the San Jose ACI site or in Azure.
- If the request to global.f5-demo.com comes from somewhere outside of the United States or United Kingdom, application traffic should be directed to an origin server that is located on-premise in the San Jose ACI site or in Azure.

We can accomplish the same with the XC HTTP Load Balancer by configuring Origin Server Subset Rules. XC HTTP Load Balancer Origin Server Subset Rules allow users to create match conditions on incoming source traffic to the XC HTTP Load Balancer and direct the matched traffic to the desired origin server(s). The match condition can be based on country, ASN, regional edge (RE), IP address, or client label selector. As a prerequisite, we create and assign a label (key-value pair) to each origin server so that we can reference the label in the Origin Server Subset Rules when specifying where to direct the matched traffic.

Go to "Shared Configuration" -> "Manage" -> "Labels" -> "Known Keys" and "Add Known Key" to create labels. In our example, we created a key named jy-key with two labels: us-uk and other. Now, go to "Origin pool" under "Multi-Cloud App Connect" and apply the labels to the origin servers. In our example, origin servers in the New York ACI site and AWS are labeled us-uk, while origin servers in the San Jose ACI site and Azure are labeled other.

Then, go to "Other Settings" to enable subset load balancing. In our example, jy-key is our origin server subsets class, and we configured the default subset with the origin pool labeled other as our fallback policy choice, based on our requirement that if the origin servers in the New York ACI site and AWS become unavailable, traffic should be directed to an origin server in the San Jose ACI site or Azure.

Next, on the HTTP Load Balancer, configure the Origin Server Subset Rules by enabling "Show Advanced Fields" in the "Origins" section. In our example, we created the following Origin Server Subset Rules based on our requirements:
- us-uk-rule: If the request to global.f5-demo.com comes from the United States or United Kingdom, direct the application traffic to an origin server labeled us-uk that is either in the New York ACI site or AWS.
- other-rule: If the request to global.f5-demo.com does not come from the United States or United Kingdom, direct the application traffic to an origin server labeled other that is either in the San Jose ACI site or Azure.

Validation
As a reminder, we use the XC automatic TLS certificate generation and renewal feature for our HTTPS load balancer in our example. First, let's confirm the certificate status: we can see the certificate is valid with an auto-renew date. Now, let's run some tests and see what happens. First, let's try to access global.f5-demo.com from the United Kingdom: we can see the traffic is directed to an origin server located in the New York ACI site or AWS, as expected. Next, let's see what happens if the origin servers from both of these sites become unavailable: the traffic is re-directed to an origin server located in the San Jose ACI site or Azure, as expected. Last, let's try to access global.f5-demo.com from somewhere outside of the United States or United Kingdom: the traffic is directed to an origin server located in the San Jose ACI site or Azure, as expected.

To check the requests on the XC Console, go to "Multi-Cloud App Connect" -> "Performance" -> "Requests" from the selected HTTP Load Balancer. In our example, we can see that a request to global.f5-demo.com coming from Australia was directed to the origin server 10.131.111.55 located in the San Jose ACI site, based on the configured Origin Server Subset Rule other-rule. In another example, a request coming from the United States was sent to the origin server 10.131.111.88 located in AWS, based on the configured Origin Server Subset Rule us-uk-rule.

Summary
F5 XC simplifies cloud-based DNS management with global server load balancing (GSLB) and disaster recovery (DR). By deploying F5 XC in Cisco ACI, we can securely deploy and load balance our global applications across ACI sites (and public clouds) efficiently while maintaining high application performance with high availability and robustness among global applications at all times.

Related Resources
*On-Demand Webinar* Deploying F5 Distributed Cloud Services in Cisco ACI
Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment
Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Two Attached Deployment