Less than 60 seconds lab setup
Today I'll share with you my less-than-60-seconds lab setup, which I use for testing basic stuff. It's an AS3 declaration that sets up two virtuals: the first virtual accepts any HTTP traffic on port 80 and forwards it to a second virtual, which responds 200 OK to any HTTP request. The lab can easily be extended to add an HTTPS virtual.

Purpose of this setup

I use this configuration for many scenarios. With this setup I can test different profiles, TLS configurations (requires small adjustments), AWAF rules and iRules attached to the first virtual server, without having to set up any backend application. Deploying this AS3 declaration takes less than 20 seconds and I have a basic lab environment ready.

Prerequisites

In order to use this config, you must have AS3 installed on your BIG-IP. If you have not worked with AS3 yet and you are new to automation, I recommend you start with Visual Studio Code and install The F5 Extension. From The F5 Extension you can connect to your BIG-IP, install the AS3 extension and deploy the declaration. Furthermore: if you have not worked with AS3 yet - you're damn late to the party!

My AS3 declaration

The full declaration is available on GitHub, so let's just look at the iRules. The iRules are the important part of this lab config. Don't get confused if you don't see the iRule code in the AS3 declaration: it's there, but it's base64 encoded.

Forwarding iRule

The iRule attached to the first virtual simply forwards traffic to the second virtual. Don't get confused by the path /simple_testing/responder_service/: AS3 works with partitions, so-called tenants, therefore I must reference the second virtual with the name of its partition and application.

```tcl
when HTTP_REQUEST {
    virtual /simple_testing/responder_service/service_http_200
}
```

HTTP Responder iRule

The second iRule is attached to the second virtual server. It simply returns an HTML page that says 200 OK to any request.

```tcl
when HTTP_REQUEST {
    HTTP::respond 200 content {
        <html>
            <head>
                <title>BIG-IP</title>
            </head>
            <body>
                200 OK
            </body>
        </html>
    }
}
```

Deployment

As said above, to start with this you don't need anything but a BIG-IP and Visual Studio Code. After installing the F5 Extension you can connect (using the + symbol) to your BIG-IP from VS Code. After connecting, you can install the AS3 extension on your BIG-IP, and then you are ready to deploy the AS3 declaration linked above. The deployment will take less than 60 seconds. Once the deployment is done, you will have a partition called simple_testing on your BIG-IP, and there you will find the two virtual servers. The website is nothing special...

What's next?

In the next couple of days, I will share with you a simple website I made with the help of ChatGPT. It can run on any webserver: NGINX, Apache, IIS... The website has 4 flavors (red, blue, green and yellow) and I use it for testing LTM use cases like persistence, priority groups, HTTP profiles, SNAT, etc. This will be my less-than-600-seconds lab.
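As a reference for anyone who wants to see the shape of the declaration without opening GitHub right away, here is a minimal, hand-written sketch of such an AS3 declaration. It is not the exact file from the repository: the virtual addresses and the name of the first application are made up, and the iRules are shown as plain text instead of base64 for readability. The tenant, application and service names of the responder virtual (simple_testing/responder_service/service_http_200) are taken from the forwarding iRule above.

```json
{
    "class": "ADC",
    "schemaVersion": "3.36.0",
    "id": "simple_testing_lab",
    "simple_testing": {
        "class": "Tenant",
        "forwarder_service": {
            "class": "Application",
            "service_http_forwarder": {
                "class": "Service_HTTP",
                "virtualAddresses": ["10.1.10.80"],
                "virtualPort": 80,
                "iRules": ["irule_forward_to_responder"]
            },
            "irule_forward_to_responder": {
                "class": "iRule",
                "iRule": "when HTTP_REQUEST {\n    virtual /simple_testing/responder_service/service_http_200\n}"
            }
        },
        "responder_service": {
            "class": "Application",
            "service_http_200": {
                "class": "Service_HTTP",
                "virtualAddresses": ["10.1.10.200"],
                "virtualPort": 80,
                "iRules": ["irule_http_200"]
            },
            "irule_http_200": {
                "class": "iRule",
                "iRule": "when HTTP_REQUEST {\n    HTTP::respond 200 content {<html><body>200 OK</body></html>}\n}"
            }
        }
    }
}
```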
F5 CNF/BNK issue with DNS Express tmm scaling and zone notifications

I ran into an interesting issue with DNS Express on BIG-IP Next for Kubernetes while playing in a test environment. Zone mirroring is done by the zxfrd pod, and for the optional NOTIFY handling that feeds zone updates to the TMM (and from there to zxfrd) you need to create an "F5BigDnsApp" listener, as shown in https://clouddocs.f5.com/cnfs/robin/latest/cnf-dnsexpress.html#create-a-dns-zone-to-answer-dns-queries.

The issue appears when you have two or more TMM pods in the same namespace. The "F5BigDnsApp" acts like a virtual server/listener, and on the internal VLANs the TMMs, running on different Kubernetes/OpenShift nodes, advertise the same IP address at layer 2, which causes an ARP conflict. You can see this with "kubectl logs" ("oc logs" on OpenShift) on the TMM pods, which report that a duplicate ARP was detected. Interestingly, the same does not happen for the normal listener on the external VLAN (the one that accepts and answers client DNS queries); I think ARP is suppressed by default for the external listener, since ECMP BGP is used by design to distribute traffic across the TMMs.

I see four possible solutions:

1. Allow ARP to be controlled on the "F5BigDnsApp" CR for internal as well as external VLANs (with BGP ECMP then also used on the server side).
2. Allow the "F5BigDnsApp" to be deployed on just one TMM even when there are more.
3. Allow the listener to use an IP address that is not part of the internal self-IP range. Currently, as I see with "kubectl logs" on the ingress controller (f5ing-tmm-pod-manager), the configuration is not pushed to the TMM in that case; "configview" from the debug sidecar container on the TMM pods shows no listener at all. The manager logs suggest this is because the listener IP address is not within the self-IP range of the internal VLAN. This may be a system limitation, and perhaps nobody thought about this use case, since on classic BIG-IP a VIP outside the self-IP range is supported and is not advertised with ARP for exactly this reason.
4. The workaround that works today: run the TMMs in different namespaces on different Kubernetes nodes, using affinity rules that place each TMM on a different node even across namespaces by matching a configured label (see the example below). Maybe this is the current working design, one zxfrd pod with one TMM pod per namespace, but then auto-scaling may not work, since scaling up would create a new TMM pod in the same namespace.

Example:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: tmm
      # Match Pods in any namespace that have this label
      namespaceSelector: {}   # empty selector = all namespaces
      topologyKey: "kubernetes.io/hostname"
```

It should also be considered whether the zxfrd pod can push the DNS zone into the RAM of more than one TMM pod; maybe only a one-to-one mapping is currently supported. Maybe it was simply never tested what happens when you have a Security Context IP address on the internal network and multiple TMM pods. Interesting stuff that I just wanted to share, as this was just me testing things out.

F5 Kubernetes CNF/BNK GSLB functionality?
Hello everyone, is there GSLB functionality in F5 CNF/BNK? I see the containers gslb-engine (probably the main GTM/DNS module) and gslb-probe-agent (probably big3d in a container/pod), but no CR/CRD definitions for them. Also, can this data be shared between F5 TMMs in different clusters (something like DNS sync groups), or can it probe normal F5 BIG-IP devices (not in Kubernetes)?

https://clouddocs.f5.com/cnfs/robin/latest/cnf-software-install.html
https://clouddocs.f5.com/cnfs/robin/latest/intro.html

F5 CNF/SPK/BNK support for custom URL classifications/apps/IPS signatures?
While playing with CNF/SPK/BNK, I didn't see anything in the docs (https://clouddocs.f5.com/cnfs/robin/latest/) about custom URL categories. I think this is an important feature: if a URL is wrongly classified by the Brightcloud DB, you should be able to add the URL to a custom URL category, for example to allow it, as shown in https://clouddocs.f5.com/cnfs/aon/latest/cnf-pe-url-categorization.html. I think this is hidden somewhere, as there is an option called "customdb", so maybe the downloader pod can be configured to pull a custom URL classification. Since the iRules for CNF do not support the "HTTP_REQUEST" and "HTTP_RESPONSE" events, as mentioned in https://clouddocs.f5.com/cnfs/openshift/latest/cnf-irule-crd.html, this seems important.

Beyond that, custom IPS signatures like on normal AFM would be nice. Since there is an IPS pod, it could, like IP Intelligence, connect to an external feed list that carries the custom signatures (and the same could work for the custom URL category): https://clouddocs.f5.com/cnfs/robin/latest/cnf-ipi-feedlist-crd.html

As for the custom apps that PEM defines with iRules (https://techdocs.f5.com/en-us/bigip-14-1-0/big-ip-policy-enforcement-manager-implementations-14-1-0/creating-custom-classifications.html), I am just mentioning them, but I see fewer use cases there than with custom URL categories and custom IPS signatures.

I did write to cnfdocs@f5.com as mentioned in the web documents. Hope they see it, as mentioned there: "To provide feedback and help improve this document, please email us at cnfdocs@f5.com."

Modernizing F5 Platforms with Ansible
I've been meaning to publish this article for some time now. Over the past few months, I've been building Ansible automation that I believe will help customers modernize their F5 infrastructure. This is especially true for those looking to migrate from legacy BIG-IP hardware to next-generation platforms like VELOS and rSeries.

As I explored tools like F5 Journeys and traditional CLI-based migration methods, I noticed a significant amount of manual pre-work was still required. This includes:

- Ensuring the Master Key used to encrypt the UCS archive is preserved and securely handled
- Storing the UCS, Master Key and information assets on a backup host
- Pre-configuring all VLANs and properly tagging them on the VELOS partition before deploying a tenant OS

To streamline this, I created an Ansible playbook with supporting roles tailored for Red Hat Ansible Automation Platform. It's built to perform a lift-and-shift migration of an F5 BIG-IP configuration from one device to another, with optional OS upgrades included. In the demo video below, you'll see an automated migration of an F5 i10800 running 15.1.10 to a VELOS BX110 tenant OS running 17.5.0, demonstrating a smooth, hands-free modernization process.

Currently Working

VELOS
- VELOS controller/partition running F5OS-C 1.8.1, which allows the tenant management IP to be in a different VLAN
- Migrates a standalone F5 BIG-IP i10800 to a VELOS BX110 tenant OS
- VLAN'ed source tenant required (does not support non-VLAN tenants)

rSeries
- Shares the management IP subnet with the chassis partition
- Migrates a standalone F5 BIG-IP i10800 to an R5000 tenant OS
- VLAN'ed source tenant required (does not support non-VLAN tenants)

Handles:
- Configuration and crypto backup
- UCS creation, transfer, and validation
- F5OS system VLAN creation and association to the tenant (does not manage interface-to-VLAN mapping)
- F5OS tenant provisioning and deployment
- Inline OS upgrades during the migration

Roadmap / What's Next
- Expanding testing to include VIPRION/iSeries (vCMP) tenant testing
- Supporting hardware-to-virtual platform migrations
- Adding functionality for HA (High Availability) environments

Watch the Demo Video

View the Source Code on GitHub
https://github.com/f5devcentral/f5-bd-ansible-platform-modernization

This project is built for the community, so feel free to take it, fork it, and expand it. Let's make F5 platform modernization as seamless and automated as possible.
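To give a feel for what a migration run needs as input, here is a rough sketch of the kind of variables a lift-and-shift playbook like this consumes. Every variable name below is illustrative only, not the interface defined in the repository above; check the repository's README and role defaults for the real variable names before using it.

```yaml
# Illustrative variables for a BIG-IP -> VELOS/rSeries tenant migration run.
# All names below are hypothetical; the real playbook defines its own interface.
source_bigip:
  host: 10.1.1.10                  # legacy i10800 to back up
  user: admin
  password: "{{ vault_source_password }}"

backup_host:
  path: /var/backups/f5            # where the UCS archive and master key are stored

f5os_partition:
  host: 10.1.2.10                  # VELOS partition or rSeries appliance
  user: admin
  password: "{{ vault_f5os_password }}"
  vlans: [101, 102, 103]           # VLANs to create and associate with the tenant

tenant:
  name: migrated-tenant
  image: BIGIP-17.5.0-0.0.15.ALL-F5OS.qcow2.zip.bundle
  mgmt_ip: 10.1.2.50
  cpu_cores: 8
```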
Using Aliases to launch F5 AMI Images in AWS Marketplace

F5 lists 82 product offerings in the AWS Marketplace as Amazon Machine Images (AMI). Each version of each product in each AWS Region has a different AMI. That's around 22,000 images! Each AMI is identified by an AMI ID. You use the AMI ID to indicate which AMI you want to use when launching an F5 product. You can find AMI IDs using the AWS Web Console, but the AWS CLI is the best tool for the job.

Searching for AMIs using the AWS CLI

Here's how you find the AMI IDs for version 17.5.1.2 of BIG-IP Virtual Edition in the us-east-1 AWS region:

```sh
aws ec2 describe-images \
  --owners aws-marketplace \
  --filters 'Name=name,Values=F5 BIGIP-17.5.1.2*' \
  --query "sort_by(Images,&Name)[:].{Description: Description, Id:ImageId}" \
  --region us-east-1 \
  --output table
```

```
DescribeImages
| Description                                                             | Id                    |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-All Modules 1Boot Loc-250916013758        | ami-0948eabdf29ef2a8f |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-All Modules 2Boot Loc-250916015535        | ami-0cb3aaa67967ad029 |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-LTM 1Boot Loc-250916013616                | ami-05d70b82c9031ff39 |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-LTM 2Boot Loc-250916014744                | ami-0b6021cc939308f3e |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-encrypted-threat-protection-250916015535  | ami-01f4fde300d3763be |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-AWF Plus 16vCPU-250916015534              | ami-015474056159387ac |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 200Mbps-250916015522         | ami-06ce5b03dce2a059d |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 25Mbps-250916015520          | ami-0826808708df97480 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 3Gbps-250916015523           | ami-08c63c8f7ca71cf37 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 10Gbps-250916015532             | ami-0e806ef17838760e4 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 1Gbps-250916015530              | ami-05e31c2a0ac9ec050 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 200Mbps-250916015528            | ami-02dc0995af98d0710 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 25Mbps-250916015527             | ami-08b8f2daefde800e9 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 5Gbps-250916015531              | ami-0d16154bb1102f3e9 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 10Gbps-250916015512                | ami-05c9527fff191feba |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 1Gbps-250916015510                 | ami-05ce2932601070d5c |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 200Mbps-250916015508               | ami-0f6044db3900ba46f |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 25Mbps-250916014542                | ami-0de57aba160170358 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 5Gbps-250916015511                 | ami-04271103ab2d1369d |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 10Gbps-250916014739                  | ami-0d06d2a097d7bb47a |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 1Gbps-250916014737                   | ami-01707e969ebcc6138 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 200Mbps-250916014735                 | ami-06f9a44562d94f992 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 25Mbps-250916013626                  | ami-0aa2bca574c66af13 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 5Gbps-250916014738                   | ami-01951e02c52deef85 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-PVE Adv WAF Plus 200Mbps-0916015525       | ami-03df50dfc04f19df5 |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-PVE Adv WAF Plus 25Mbps-50916015524       | ami-0777c069eaae20ea1 |
```

This command shows all 17.5.1* releases of the "PayGo Good 1Gbps" flavor of BIG-IP in the us-west-1
region sorted by newest release first:

```sh
aws ec2 describe-images \
  --owners aws-marketplace \
  --filters 'Name=name,Values=F5 BIGIP-17.5.1*PAYG-Good 1Gbps*' \
  --query "reverse(sort_by(Images,&CreationDate))[:].{Description: Name, Id:ImageId, date:CreationDate}" \
  --region us-west-1 \
  --output table
```

```
DescribeImages
| Description                                                                                | Id                    | date                     |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 1Gbps-250916014737-7fb2f9db-2a12-4915-9abb-045b6388cccd | ami-0de8ca1229be5f7fe | 2025-09-16T23:12:28.000Z |
| F5 BIGIP-17.5.1-0.80.7 PAYG-Good 1Gbps-250811055424-7fb2f9db-2a12-4915-9abb-045b6388cccd  | ami-09afcec6f36494382 | 2025-08-15T19:03:23.000Z |
| F5 BIGIP-17.5.1-0.0.7 PAYG-Good 1Gbps-250618090310-7fb2f9db-2a12-4915-9abb-045b6388cccd   | ami-03e389e112872fd53 | 2025-07-01T06:00:44.000Z |
```

Notice that the same BIG-IP VE release has a different AMI ID in each AWS region. Attempting to launch a product in one region using an AMI ID from a different region will fail. This causes a problem when a shell script or automation tool is used to launch new EC2 instances, the AMI IDs have been hardcoded for one region, and you attempt to use it in another. Wouldn't it be nice to have a single AMI identifier that works in all AWS regions?

Introducing AMI Aliases

An AMI alias is an identifier similar to an AMI ID, but easier to use in automation. An AMI alias has the form /aws/service/marketplace/prod-<identifier>/<version>. For example, the alias for "PayGo Good 1Gbps" version 17.5.1.2-0.0.5 is:

/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5

You can use this AMI alias in any region, and AWS automatically maps it to the correct regional AMI ID.
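The alias is a public SSM parameter, which is why the resolve:ssm syntax shown later works. If you want to check what a given alias resolves to in a particular region, you can query the parameter directly; shown here for the "PayGo Good 1Gbps" alias above (the returned AMI ID will differ per region):

```sh
aws ssm get-parameter \
  --name /aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5 \
  --query "Parameter.Value" \
  --output text \
  --region us-west-1
```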
BIG-IP AMI Alias Identifiers

- F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 16vCPU): prod-qqgc2ltsirpio
- F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 200Mbps): prod-yajbds56coa24
- F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 25Mbps): prod-qiufc36l6sepa
- F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 3Gbps): prod-fp5qrfirjnnty
- F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 10Gbps): prod-w2p3rtkjrjmw6
- F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 1Gbps): prod-g3tye45sqm5d4
- F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 200Mbps): prod-dnpovgowtyz3o
- F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 25Mbps): prod-wjoyowh6kba46
- F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 5Gbps): prod-hlx7g47cksafk
- F5 BIG-IP VE - ALL (BYOL, 1 Boot Location): prod-zvs3u7ov36lig
- F5 BIG-IP VE - ALL (BYOL, 2 Boot Locations): prod-ubfqxbuqpsiei
- F5 BIG-IP VE - LTM/DNS (BYOL, 1 Boot Location): prod-uqhc6th7ni37m
- F5 BIG-IP VE - LTM/DNS (BYOL, 2 Boot Locations): prod-o7jz5ohvldaxg
- F5 BIG-IP Virtual Edition - BETTER (PAYG, 10Gbps): prod-emsxkvkzwvs3o
- F5 BIG-IP Virtual Edition - BETTER (PAYG, 1Gbps): prod-4idzu4qtdmzjg
- F5 BIG-IP Virtual Edition - BETTER (PAYG, 200Mbps): prod-firaggo6h7bt6
- F5 BIG-IP Virtual Edition - BETTER (PAYG, 25Mbps): prod-wijbh7ib34hyy
- F5 BIG-IP Virtual Edition - BETTER (PAYG, 5Gbps): prod-rfglxslpwq64g
- F5 BIG-IP Virtual Edition - GOOD (PAYG, 10Gbps): prod-54qdbqglgkiue
- F5 BIG-IP Virtual Edition - GOOD (PAYG, 1Gbps): prod-s6e6miuci4yts
- F5 BIG-IP Virtual Edition - GOOD (PAYG, 200Mbps): prod-ynybgkyvilzrs
- F5 BIG-IP Virtual Edition - GOOD (PAYG, 25Mbps): prod-6zmxdpj4u4l5g
- F5 BIG-IP Virtual Edition - GOOD (PAYG, 5Gbps): prod-3ze6zaohqssua
- F5 BIG-IQ Virtual Edition - (BYOL): prod-igv63dkxhub54
- F5 Encrypted Threat Protection: prod-bbtl6iceizxoi
- F5 Per-App-VE Advanced WAF with LTM, IPI, TC (PAYG, 200Mbps): prod-gkzfxpnvn53v2
- F5 Per-App-VE Advanced WAF with LTM, IPI, TC (PAYG, 25Mbps): prod-qu34r4gipys4s

NGINX Plus Alias Identifiers

- NGINX Plus Basic - Amazon Linux 2 (LTS) AMI: prod-jhxdrfyy2jtva
- NGINX Plus Developer - Amazon Linux 2 (LTS): prod-kbeepohgkgkxi
- NGINX Plus Developer - Amazon Linux 2 (LTS) ARM Graviton: prod-vulv7pmlqjweq
- NGINX Plus Developer - Amazon Linux 2023: prod-2zvigd3ltowyy
- NGINX Plus Developer - Amazon Linux 2023 ARM Graviton: prod-icspnobisidru
- NGINX Plus Developer - RHEL 8: prod-tquzaepylai4i
- NGINX Plus Developer - RHEL 9: prod-hwl4zfgzccjye
- NGINX Plus Developer - Ubuntu 22.04: prod-23ixzkz3wt5oq
- NGINX Plus Developer - Ubuntu 24.04: prod-tqr7jcokfd7cw
- NGINX Plus FIPS Premium - RHEL 9: prod-v6fhyzzkby6c2
- NGINX Plus Premium - Amazon Linux 2 (LTS) AMI: prod-4dput2e45kkfq
- NGINX Plus Premium - Amazon Linux 2 (LTS) ARM Graviton: prod-56qba3nacijjk
- NGINX Plus Premium - Amazon Linux 2023: prod-w6xf4fmhpc6ju
- NGINX Plus Premium - Amazon Linux 2023 ARM Graviton: prod-e2iwqrpted4kk
- NGINX Plus Premium - RHEL 8 AMI: prod-m2v4zstxasp6s
- NGINX Plus Premium - RHEL 9: prod-rytmqzlxdneig
- NGINX Plus Premium - Ubuntu 22.04: prod-dtm5ujpv7kkro
- NGINX Plus Premium - Ubuntu 24.04: prod-opg2qh33mi4pk
- NGINX Plus Standard - Amazon Linux 2 (LTS) AMI: prod-mdgdnfftmj7se
- NGINX Plus Standard - Amazon Linux 2 (LTS) ARM Graviton: prod-2kagbnj7ij6zi
- NGINX Plus Standard - Amazon Linux 2023: prod-i25cyug3btfvk
- NGINX Plus Standard - Amazon Linux 2023 ARM Graviton: prod-6s5rvlqlgrt74
- NGINX Plus Standard - RHEL 8: prod-ebhpntvlfwluc
- NGINX Plus Standard - RHEL 9: prod-3e7rk2ombbpfa
- NGINX Plus Standard - Ubuntu 22.04: prod-7rhflwjy5357e
- NGINX Plus Standard - Ubuntu 24.04: prod-b4rly35ct3dlc
- NGINX Plus with NGINX App Protect Developer - Amazon Linux 2: prod-pjmfzy5htmaks
- NGINX Plus with NGINX App Protect Developer - Debian 11: prod-ixsytlu2eluqa
- NGINX Plus with NGINX App Protect Developer - RHEL 8: prod-6v57ggy3dqb6c
- NGINX Plus with NGINX App Protect Developer - Ubuntu 20.04: prod-4a4g7h7mpepas
- NGINX Plus with NGINX App Protect DoS Developer - Amazon Linux 2023: prod-fmqayhbsryoz2
- NGINX Plus with NGINX App Protect DoS Developer - Debian 11: prod-4e5fwakhrn36y
- NGINX Plus with NGINX App Protect DoS Developer - RHEL 8: prod-ubid75ixhf34a
- NGINX Plus with NGINX App Protect DoS Developer - RHEL 9: prod-gg7mi5njfuqcw
- NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 20.04: prod-qiwzff7orqrmy
- NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 22.04: prod-h564ffpizhvic
- NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 24.04: prod-wckvpxkzj7fvk
- NGINX Plus with NGINX App Protect DoS Premium - Amazon Linux 2023: prod-lza5c4nhqafpk
- NGINX Plus with NGINX App Protect DoS Premium - Debian 11: prod-ych3dq3r44gl2
- NGINX Plus with NGINX App Protect DoS Premium - RHEL 8: prod-266ker45aot7g
- NGINX Plus with NGINX App Protect DoS Premium - RHEL 9: prod-6qrqjtainjlaa
- NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 20.04: prod-hagmbnluc5zmw
- NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 22.04: prod-y5iwq6gk4x4yq
- NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 24.04: prod-k3cb7avaushvq
- NGINX Plus with NGINX App Protect Premium - Amazon Linux 2: prod-tlghtvo66zs5u
- NGINX Plus with NGINX App Protect Premium - Debian 11: prod-6kfdotc3mw67o
- NGINX Plus with NGINX App Protect Premium - RHEL 8: prod-okwnxdlnkmqhu
- NGINX Plus with NGINX App Protect Premium - Ubuntu 20.04: prod-5wn6ltuzpws4m
- NGINX Plus with NGINX App Protect WAF + DoS Premium - Amazon Linux 2023: prod-mualblirvfcqi
- NGINX Plus with NGINX App Protect WAF + DoS Premium - Debian 11: prod-k2rimvjqipvm2
- NGINX Plus with NGINX App Protect WAF + DoS Premium - RHEL 8: prod-6nlubep3hg4go
- NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 18.04: prod-f2diywsozd22m
- NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 20.04: prod-ajcsh5wsfuen2
- NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 22.04: prod-6adjgf6yl7hek
- NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 24.04: prod-autki7guiiqio

Using AMI Aliases for BIG-IP

The following example shows how to use an AMI alias to launch a new "F5 BIG-IP Virtual Edition - GOOD (PAYG, 1Gbps)" instance, version 17.5.1.2-0.0.5, with the AWS CLI:

```sh
aws ec2 run-instances \
  --image-id resolve:ssm:/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5 \
  --instance-type m5.xlarge \
  --key-name MyKeyPair
```

The next example shows a CloudFormation template that accepts the AMI alias as an input parameter to create an instance:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  AmiAlias:
    Description: AMI alias
    Type: 'String'
Resources:
  MyEC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Sub "resolve:ssm:${AmiAlias}"
      InstanceType: "g4dn.xlarge"
      Tags:
        - Key: "Created from"
          Value: !Ref AmiAlias
```

Using AMI Aliases for NGINX Plus

NGINX Plus images in the AWS Marketplace are not version specific, so just use "latest" as the version to launch. For example, this launches NGINX Plus Premium on Ubuntu 24.04:

```sh
aws ec2 run-instances \
  --image-id resolve:ssm:/aws/service/marketplace/prod-opg2qh33mi4pk/latest \
  --instance-type c5.large \
  --key-name MyKeyPair
```

Finding AMI Aliases in AWS Marketplace

AMI aliases are new to the AWS Marketplace, so not all products have them.
To locate the alias for an AMI you use often, you need to resort to the AWS Marketplace web console. Here are the step-by-step instructions provided by Amazon:

1. Navigate to AWS Marketplace
   - Go to AWS Marketplace
   - Sign in to your AWS account
2. Find and subscribe to the product
   - Search for or browse to find your desired product
   - Click on the product listing
   - Click "Continue to Subscribe"
   - Accept the terms and subscribe to the product
3. Configure the product
   - After subscribing, click "Continue to Configuration"
   - Select your desired Delivery Method (if multiple options are available), Software Version, and Region
4. Locate the AMI alias
   - At the bottom of the configuration page, you'll see:
     AMI ID: ami-1234567890EXAMPLE
     AMI Alias: /aws/service/marketplace/prod-<identifier>/<version>

New Tools for Your AMI Hunt

In this article, we focused on using AMI aliases to select the right F5 product to launch in AWS EC2. But there's one more takeaway. Scroll back up to the top of this page and take a closer look at the "aws ec2 describe-images" commands. These commands use JMESPath to filter, sort, and format the output. Find out more about filtering the output of AWS CLI commands here.

F5 MCP (Model Context Protocol) Server
This project is an MCP (Model Context Protocol) server designed to interact with F5 devices using the iControl REST API. It provides a set of tools to manage F5 objects such as virtual servers (VIPs), pools, iRules, and profiles. The server is implemented using the FastMCP framework and exposes functionality for creating, updating, listing, and deleting F5 objects.
OpenShift Service Mesh 2.x/3.x with F5 BIG-IP

Overview

OpenShift Service Mesh (OSSM) is Red Hat's packaged version of the Istio service mesh. Istio has the Ingress Gateway component to handle incoming traffic from outside of the cluster. Like other ingress controllers, it requires an external load balancer to get the traffic into the ingress PODs, following the canonical Kubernetes 2-tier arrangement for getting traffic inside the cluster.

This article covers how to configure OpenShift Service Mesh 2.x/3.x, expose it to the BIG-IP, and properly monitor its health, either using BIG-IP's Container Ingress Services (CIS) or without it.

Exposing OSSM in BIG-IP - VIP configuration

It is a customer choice how to publish OSSM in the BIG-IP:

- A Layer 4 (L4) virtual server is simpler, and certificate management is done in OpenShift. The advantages of this mode are potentially higher performance and scalability, including connection mirroring, although mirroring is not usually used for HTTP traffic due to the typical retry mechanism of HTTP applications. Connection persistence is limited to the source IP. When using CIS, this is done with a TransportServer CR, which creates a fastL4-type virtual server in the BIG-IP.
- A Layer 7 (L7) virtual server requires additional configuration because TLS termination is required. In this mode, OpenShift can take advantage of BIG-IP's TLS off-loading capabilities and Hardware/Network/SaaS/Cloud HSM integrations, which store private keys securely, including FIPS-level support. Working at L7 also allows per-application traffic management, including header and payload rewrites, cookie persistence, etc. It also allows per-application multi-cluster. These features are provided by the LTM (load balancing) module in BIG-IP, and the possibilities are further expanded by modules such as ASM (Advanced WAF) and Access (authentication). When using CIS, this is done with a VirtualServer CR, which creates a standard-type virtual server in the BIG-IP.

Exposing OSSM to BIG-IP - pool configuration

There are two options to expose Istio Ingress Gateways to BIG-IP:

- Using ClusterIP addresses. These are POD IPs, which are dynamic, so this requires CIS to discover the IP addresses of the Ingress Gateway PODs.
- Using NodePort addresses. These are reachable from the outside network. With NodePort it is not strictly necessary to use CIS, but it is recommended.

Exposing OpenShift Service Mesh using ClusterIP

This requires the use of CIS with the following parameters:

```
--orchestration-cni=ovn
--static-routing-mode=true
```

These make CIS create IP routes in the BIG-IP for reaching the POD IPs inside the OpenShift cluster. Please note that this only works if all the OpenShift nodes are directly connected in the same subnet as the BIG-IP.

Additionally, the following parameter is required. It is the one that actually makes CIS populate pool members with Cluster (POD) IPs:

```
--pool-member-type=cluster
```

It is not necessary to change any configuration in OSSM, because ClusterIP mode is the default mode for Istio Ingress Gateways.

Exposing OpenShift Service Mesh using NodePort

Using NodePort allows you to have known IP addresses for the Ingress Gateways, reachable from outside the cluster. Note that when using NodePort, only one Ingress Gateway replica will run per node.
The behavior of NodePort varies with the externalTrafficPolicy field:

- Using the Cluster value, any OpenShift node accepts traffic and redirects it to a node that has an Ingress Gateway POD, in a load-balancing fashion. This is the easiest to set up, but because each request might go to a different node, health checking is not reliable (it is not known which POD goes down).
- Using the Local value, only the OpenShift nodes that have Ingress Gateway PODs accept traffic, and the traffic is delivered to the local Ingress Gateway PODs without further indirection. This is the recommended way when using NodePort because of its deterministic behaviour and therefore reliable health checking.

Next, it is described how to set up a NodePort with the Local externalTrafficPolicy. There are two options for configuring OSSM:

- Using the ServiceMeshControlPlane CR method: this is the default method in OSSM 2.x for backwards compatibility, but it does not allow fine-tuning the configuration of the proxy. See this OSSM 2.x link for further details. This method is deprecated and not available in OSSM 3.x.
- Using the Gateway injection method: this is the only method possible in OSSM 3.x and the current recommendation from Red Hat for OSSM 2.x. This method allows you to tune the proxy settings. This tuning is of special interest because, at present, the Ingress Gateway does not have good default values for reliable health checking; these are discussed in the Health Checking section.

When using the ServiceMeshControlPlane CR method, the above is configured as follows:

```yaml
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
[...]
spec:
  gateways:
    ingress:
      enabled: false
      runtime:
        deployment:
          replicas: 2
      service:
        externalTrafficPolicy: Local
        ports:
        - name: status-port
          nodePort: 30021
          port: 15021
          targetPort: 15021
        - name: http2
          nodePort: 30080
          port: 80
          targetPort: 8080
        - name: https
          nodePort: 30443
          port: 443
          targetPort: 8443
        type: NodePort
```

When using the Gateway injection method (recommended), the Service definition is created manually, analogously to the ServiceMeshControlPlane CR:

```yaml
apiVersion: v1
kind: Service
[...]
spec:
  externalTrafficPolicy: Local
  type: NodePort
  ports:
  - name: status-port
    nodePort: 30021
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30443
    port: 443
    protocol: TCP
    targetPort: 8443
```

The ports section is optional but recommended in order to have deterministic ports, and it is required when not using CIS (because static ports are needed in that case). The nodePort values can be customised. When not using CIS, the pool members must be configured manually in the BIG-IP.

It is typical in OpenShift to have the ingress components (OpenShift Router or Istio) on dedicated infra nodes. See this Red Hat solution for details. When using the ServiceMeshControlPlane method, the configuration is as follows:

```yaml
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
[...]
spec:
  runtime:
    defaults:
      pod:
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/infra
          value: reserved
        - effect: NoExecute
          key: node-role.kubernetes.io/infra
          value: reserved
```

When using the Gateway injection method, the configuration is added to the Deployment file directly:

```yaml
apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
```

The configuration above is also a good practice when using CIS. Additionally, CIS by default adds all node IPs to the Service pool, regardless of whether externalTrafficPolicy is set to Cluster or Local; the health check will discard nodes where there is no Ingress Gateway. The scope can be limited to the nodes discovered by CIS with the following parameter:

```
--node-label-selector
```

Health Checking and retries for the Ingress Gateway

Ingress Gateway Readiness

The Ingress Gateway has the following readinessProbe for Kubernetes' own health checking:

```yaml
readinessProbe:
  failureThreshold: 30
  httpGet:
    path: /healthz/ready
    port: 15021
    scheme: HTTP
  initialDelaySeconds: 1
  periodSeconds: 2
  successThreshold: 1
  timeoutSeconds: 3
```

The failureThreshold value of 30 is considered way too large: it only marks the Ingress Gateway as not Ready after 90 seconds (tested to be failureThreshold * timeoutSeconds). In this article, it is recommended to mark down an Ingress Gateway no later than 16 seconds.

When using CIS, Kubernetes informs CIS whenever a POD is not Ready, and CIS automatically removes the associated pool member from the pool. In order to achieve the desired behaviour of marking down the Ingress Gateway before 16 seconds, the default failureThreshold value has to be changed in the Deployment file by adding the following snippet:

```yaml
apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
    spec:
      containers:
      - name: istio-proxy
        image: auto
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz/ready
            port: 15021
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 3
```

This keeps all other values equal and sets failureThreshold to 5, therefore marking down the Ingress Gateway after 15 seconds.

When not using CIS, an HTTP health check has to be configured manually in the BIG-IP, for example an HTTP monitor that sends GET /healthz/ready to port 15021 and expects an HTTP 200 response.

Connection draining

When an Ingress Gateway POD is deleted (because of an upgrade, a scale-down event, etc.), it immediately returns HTTP 503 on the /healthz/ready endpoint and keeps serving connections until it is effectively deleted. This is called the drain period, and by default it is extremely short (3 seconds) for any external load balancer. This value has to be increased so the Ingress Gateway PODs being deleted continue serving connections until the POD is removed from the external load balancer (the BIG-IP) and the outstanding connections are finalised. This setting can only be tuned using the Gateway injection method, and it is applied by adding the following snippet to the Deployment file:

```yaml
apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          terminationDrainDuration: 45s
```

In the example above, the default drain period of the OpenShift Router (45 seconds) has been used. The value can be customised, keeping in mind that:

- When using CIS, it should allow CIS to update the configuration in the BIG-IP and drain the connections.
- When not using CIS, it should allow the health check to detect the condition of the POD and drain the connections.

Additional recommendations

The next recommendations apply to any ingress controller or API manager and have been previously suggested when using the OpenShift Router.
Handle non-graceful errors with the pool's reselect tries

To deal better with non-graceful shutdowns or transient errors, this mechanism reselects a new Ingress Gateway POD when a request fails. The recommendation is to set the number of tries to the number of Ingress Gateway PODs minus 1. When using CIS, this can be set in the VirtualServer or TransportServer CRs with the reselectTries parameter (see the sketch at the end of this article).

Set an additional TCP monitor for the Ingress Gateway's application traffic sockets

This complementary TCP monitor (for both the HTTP and HTTPS listeners) validates that Ready instances can actually receive traffic on the application traffic sockets. Although this failure mode is handled by the reselect-tries mechanism, the monitor provides visibility that such errors are happening.

Conclusion and closing remarks

We hope this article highlights the most important aspects of integrating OpenShift Service Mesh with BIG-IP. A key aspect of a reliable Ingress Gateway integration is to modify OpenShift Service Mesh's terminationDrainDuration and readinessProbe.failureThreshold defaults. F5 has submitted RFE 04270713 to Red Hat to improve these; this article will be updated accordingly. Whether the CIS integration is used or not, BIG-IP allows you to expose OpenShift Service Mesh reliably with extensive L4-L7 security and traffic management capabilities. It also allows fine-grained access control, scalable SNAT or keeping the original source IP, among others. Overall, BIG-IP is able to fulfill any requirement. We look forward to hearing your experience and feedback on this article.
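As referenced in the reselect-tries recommendation above, here is a rough sketch of how that setting and a TCP monitor might be expressed in a CIS TransportServer CR (the L4 option described earlier). Treat it as illustrative only: the VIP address, namespace, and Service name are assumptions, and the exact field names and supported options should be checked against the CIS CRD reference for the CIS version in use.

```yaml
apiVersion: cis.f5.com/v1
kind: TransportServer
metadata:
  name: istio-ingress-https            # illustrative name
  namespace: istio-system              # assumes the Ingress Gateway namespace
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.0.2.10"   # placeholder VIP
  virtualServerPort: 443
  mode: standard
  pool:
    service: istio-ingressgateway      # assumed Ingress Gateway Service name
    servicePort: 443
    reselectTries: 1                   # number of Ingress Gateway PODs minus 1 (2 replicas assumed)
    monitor:
      type: tcp
      interval: 5
      timeout: 16
```

For the L7 VirtualServer CR, the pool section is analogous.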
Ansible - Upload Certificates requires Administrator Role?

Hi, I'm trying to give people the opportunity to manage their SSL certificates themselves, so I built something that triggers an Ansible playbook to upload and update certificates on an LTM. The user has the role "Certificate Manager". When logged into the GUI with that user (for testing purposes), one can upload, update and delete certificates and keys, no problem.

When trying to run the Ansible playbook with the credentials of that "Certificate Manager" role user, the playbook fails with the following message:

```
{ "msg": "Failed to upload the file." }
```

For uploading/updating certificates and keys I use the F5 Ansible modules:

- f5networks.f5_modules.bigip_ssl_certificate
- f5networks.f5_modules.bigip_ssl_key

When I change the user-role mapping from "Certificate Manager" to "Administrator", the playbook works as expected. I also tried the following role mappings, none of which had the permission to upload certificates and keys:

- Resource Administrator
- Operator
- Application Editor
- Manager

Do I really have to use a user with the Administrator role? That would be a huge security issue in my opinion.

Supplement: I noticed that "Terminal Access" was disabled for the specific user. I set it to "tmsh" and tried again. This time I was at least able to run the playbook successfully when the certificate was the same one that was already installed, so the result of the Ansible change was false. But uploading new certificates is still not possible.
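For context, here is a minimal sketch of the kind of playbook tasks being described, using the two modules named above. The host, file paths and credential variables are placeholders, not the original playbook.

```yaml
# Illustrative only: upload a key and certificate with the F5 Ansible modules.
# Connection details, file paths and variable names are placeholders.
- name: Manage SSL certificate and key on BIG-IP
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    provider:
      server: bigip.example.com
      user: cert-manager-user            # the "Certificate Manager" role user
      password: "{{ vault_bigip_password }}"
      validate_certs: true
  tasks:
    - name: Upload/update the SSL key
      f5networks.f5_modules.bigip_ssl_key:
        name: app_example_com
        content: "{{ lookup('file', 'files/app.example.com.key') }}"
        state: present
        provider: "{{ provider }}"

    - name: Upload/update the SSL certificate
      f5networks.f5_modules.bigip_ssl_certificate:
        name: app_example_com
        content: "{{ lookup('file', 'files/app.example.com.crt') }}"
        state: present
        provider: "{{ provider }}"
```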