VMware VKS integration with F5 BIG-IP and CIS
Introduction
vSphere Kubernetes Service (VKS) is the Kubernetes runtime built directly into VMware Cloud Foundation (VCF). With CNCF-certified Kubernetes, VKS enables platform engineers to deploy and manage Kubernetes clusters while leveraging a comprehensive set of cloud services in VCF. Cloud admins benefit from support for N-2 Kubernetes versions, enterprise-grade security, and simplified lifecycle management for modern app adoption.
As with other Kubernetes platforms, the integration with BIG-IP is done through the Container Ingress Services (CIS) component, which is hosted in the Kubernetes platform and configures the BIG-IP using the Kubernetes API. Under the hood, it uses the F5 AS3 declarative API. Note from the picture that BIG-IP integration with VKS is not limited to BIG-IP's load balancing capabilities; most BIG-IP features can be configured through this integration. These features include:
Advanced TLS encryption, including safe key storage with Hardware Security Module (HSM) or Network & Cloud HSM support.
Advanced WAF, L7 bot and API protection.
L3-L4 high-performance firewall with IPS for protocol conformance.
Behavioral DDoS protection with cloud scrubbing support.
Visibility into TLS traffic for inspection with third-party solutions.
Identity-aware ingress with Federated SSO and integration with leading MFAs.
AI inference and agentic support thanks to JSON and MCP protocol support.
Planning the deployment of CIS for VMware VKS
The installation of CIS in VMware VKS is performed through the standard Helm charts facility. The platform owner needs to determine the following beforehand:
Whether the deployment is hosted on a vSphere (VDS) network or an NSX network. Note that on an NSX network, VKS doesn't currently allow placing the load balancers in the same segment as the VKS cluster. No special considerations apply when hosting BIG-IP in a vSphere (VDS) network.
Whether this is a single-cluster or a multi-cluster deployment. When using the multi-cluster option and clusterIP mode (only possible with Calico in VKS), keep in mind that the Pod networks of the clusters cannot have overlapping prefixes.
Which Kubernetes networking (CNI) is to be used. CIS supports both VKS-supported CNIs: Antrea (default) and Calico. From the CIS point of view, the CNI is only relevant when sending traffic directly to the Pods. See below.
What integration with the CNI is desired between the BIG-IP and VKS:
NodePort mode
This is done by making applications discoverable using Services of type NodePort. From the BIG-IP, the traffic is sent to the Nodes' IPs, where it is redistributed to the Pods depending on the TrafficPolicies of the Service. This is CNI agnostic; any CNI can be used.
Direct-to-Pod mode
This is done by making applications discoverable using Services of type ClusterIP. Note that the CIS integration with Antrea uses Antrea's nodePortLocal mechanism, which requires an additional annotation in the Service declaration. See the CIS VKS page in F5 CloudDocs for details. The Antrea nodePortLocal mechanism allows traffic to be sent directly to the Pod without actually using the Pod IP address. This is especially relevant for NSX because it allows the Pods to be reached without redistributing the Pod IPs across the NSX network, which is not allowed. When using vSphere (VDS) networking, either Antrea's nodePortLocal or clusterIP with Calico can be used.
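To make the two discovery modes concrete, below is a minimal sketch of the two Service styles. The first is a plain NodePort Service (CNI agnostic); the second is a ClusterIP Service carrying Antrea's standard NodePortLocal annotation. The names, namespace and the exact annotation key are illustrative assumptions — check the CIS VKS page in F5 CloudDocs for the annotations CIS actually expects in your version.

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport          # hypothetical name
  namespace: my-namespace
spec:
  type: NodePort                 # NodePort mode: BIG-IP sends traffic to Node IPs
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-npl               # hypothetical name
  namespace: my-namespace
  annotations:
    nodeportlocal.antrea.io/enabled: "true"   # Antrea NodePortLocal (assumption: verify against CIS VKS docs)
spec:
  type: ClusterIP                # Direct-to-Pod mode via Antrea nodePortLocal
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080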
Another, less common option is the use of hostNetwork Pod networking, since it requires privileges for the application Pods or ingress controllers. Network-wise, this behaves similarly to nodePortLocal, but without the automatic allocation of ports.
Whether the deployment is a single-tier or a two-tier deployment. A single-tier deployment is one where the BIG-IP sends the traffic directly to the application Pods. This has a simpler traffic flow and easier persistence and end-to-end monitoring. A two-tier deployment sends the traffic to an ingress controller Pod instead of the application Pods. This ingress controller could be Contour, NGINX Gateway Fabric, Istio or an API gateway. This type of deployment offers the ultimate scalability and provides additional segregation between the BIG-IPs (typically owned by NetOps) and the Kubernetes cluster (typically owned by DevOps).
Once CIS is deployed, applications can be published using either the standard Kubernetes Ingress resource or F5's Custom Resources. The latter is the recommended way because it exposes most of the BIG-IP's capabilities. Details on the Ingress resource and F5 custom annotations can be found here. Details on the F5 CRDs can be found here. Please note that at the time of this writing, Antrea nodePortLocal doesn't support the TransportServer CRD. Please consult your F5 representative for its availability.
Detailed instructions on how to deploy CIS for VKS can be found on this CIS VKS page in F5 CloudDocs.
Application-aware MultiCluster support
MultiCluster allows applications that are hosted in multiple VKS clusters to be exposed and published on a single VIP. BIG-IP and CIS are in charge of:
Discovering where the Pods of the applications are hosted. Note that a given application doesn't need to be available in all clusters.
Upon receiving a request for a given application, deciding to which cluster and Node/Pod the request is sent. This decision is based on the weight of each cluster, the application availability and the load balancing algorithm being applied.
Single-tier and two-tier architectures are possible. NodePort and ClusterIP modes are possible as well. Note that at the time of this writing, Antrea in ClusterIP mode (nodePortLocal) is not supported. Please consult your F5 representative for availability of this feature.
Considerations for NSX
Load Balancers cannot be placed in the same VPC segment where the VMware VKS cluster is. They can be placed in a separate VPC segment of the same VPC gateway, as shown in the next diagram. In this arrangement the BIG-IP can be configured either in 1NIC mode or as a regular deployment, in which case the MGMT interface is typically configured through an infrastructure VLAN instead of an NSX segment.
The data segment is only required to have enough prefixes to host the self-IPs of the BIG-IP units. The prefixes of the VIPs need not belong to the Data Segment's subnet. These additional prefixes have to be configured as static routes in the VPC Gateway, and Route Redistribution for them must be enabled. Given that the Load Balancers are not in line with the traffic flow towards the VKS Cluster, SNAT is required. When using SNAT pools, their prefixes can optionally be configured as additional prefixes of the Data Segment, like the VIPs.
Specifically for Calico, clusterIP mode cannot be used in NSX because it would require the BIG-IP to be in the same VPC segment as VMware VKS. Note also that BGP multi-hop is not feasible, because it would require the Pod cluster network prefixes to be redistributed through NSX, which is not possible either.
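As mentioned above, F5 Custom Resources are the recommended way to publish applications through CIS. A minimal sketch of a VirtualServer CR is shown below so the two publishing options are easier to compare; the hostname, namespace, service name and VIP address are placeholder assumptions (a fuller example appears in the Cilium CIS article later in this collection).

apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: my-app-vs                  # hypothetical name
  namespace: my-namespace
  labels:
    f5cr: "true"                   # tells CIS to process this custom resource
spec:
  host: app.example.com
  virtualServerAddress: 192.0.2.10 # VIP published on the BIG-IP (placeholder)
  pools:
    - path: /
      service: my-app-service      # ClusterIP or NodePort Service backing the app
      servicePort: 80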
Conclusion and final remarks
F5 BIG-IP provides unmatched deployment options and features for VMware VKS; these include:
Support for all VKS CNIs, which allows sending the traffic directly to the Pods instead of using hostNetwork (which implies a security risk) or the common NodePort, which can incur an additional kube-proxy indirection.
Both 1-tier and 2-tier arrangements (or both types simultaneously) are possible.
F5's Container Ingress Services provides the ability to handle multiple VMware VKS clusters with application-aware VIPs. This is a unique feature in the industry.
Securing applications with the wide range of L3 to L7 security features provided by BIG-IP, including Advanced WAF and Application Access.
To complete the circle, this integration also provides IP address management (IPAM), which gives great flexibility to DevOps teams.
All of this is available regardless of the form factor of the BIG-IP: Virtual Edition, appliance or chassis, allowing great scalability and multi-tenancy options. In NSX deployments, the recommended form factor is Virtual Edition in order to connect to the NSX segments.
We look forward to hearing your experience and feedback on this article.
High Availability for F5 NGINX Instance Manager in AWS
Introduction
F5 NGINX Instance Manager gives you a centralized way to manage NGINX Open Source and NGINX Plus instances across your environment. It’s ideal for disconnected or air-gapped deployments, with no need for internet access or external cloud services. The NGINX Instance Manager feature set keeps growing: it now includes extensive configuration management capabilities, like NGINX config versioning and templating, F5 WAF for NGINX policy and signature management, monitoring of NGINX metrics and security events, and a rich API to support external automation. As the role of NGINX Instance Manager becomes increasingly important in the management of disconnected NGINX fleets, the need for high availability increases. This article explores how we can use Linux clustering to provide high availability for NGINX Instance Manager across two availability zones in AWS.
Core Technologies
Core technologies used in this HA architecture design include:
Amazon Elastic Compute Cloud (EC2) instances - virtual machines rented inside AWS that can be used to host applications, like NGINX Instance Manager.
Pacemaker - an open-source high availability resource manager used in Linux clusters since 2004. Pacemaker is generally deployed with the Corosync Cluster Engine, which provides the cluster node communication, membership tracking and cluster quorum.
Amazon Elastic File System (EFS) - a serverless, fully managed, elastic Network File System (NFS) that allows servers to share file data simultaneously between systems.
Amazon Network Load Balancer (NLB) - a layer 4 TCP/UDP load balancer that forwards traffic to targets like EC2 instances, containers or IP addresses. NLB can send periodic health checks to registered targets to ensure that traffic is only forwarded to healthy targets.
Architecture Overview
In this highly available architecture, we will install NGINX Instance Manager (NIM) on two EC2 instances in different AWS Availability Zones (AZ). Four EFS file systems will be created to share key stateful information between the two NIM instances, and Pacemaker/Corosync will be used to orchestrate the cluster - only one NIM instance is active at any time, and Pacemaker will facilitate this by starting/stopping the NIM systemd services. Finally, an Amazon NLB will be used to provide network failover between the two NIM instances, using an HTTP health check to determine the active cluster node.
Deployment Steps
1. Create AWS EFS file systems
First, we are going to create four EFS volumes to hold important NIM configuration and state information that will be shared between nodes. These file systems will be mounted onto /etc/nms, /var/lib/clickhouse, /var/lib/nms and /usr/share/nms inside the NIM node. Take note of the File System IDs of the newly created file systems. Edit the properties of each EFS file system and create a mount target in each AZ you intend to deploy a NIM node in, then restrict network access to only the NIM nodes by setting up an AWS Security Group. You may also consider more advanced authentication methods, but these aren't covered in this article. A sketch of the equivalent AWS CLI calls is shown below.
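If you prefer the AWS CLI over the console, the following sketch creates one of the four file systems and adds a mount target in each AZ. The names, subnet IDs and security group ID are placeholder assumptions; repeat the create-file-system call for each of the four mount paths listed above.

# Create an encrypted EFS file system for one of the NIM paths (e.g. /etc/nms)
aws efs create-file-system \
  --creation-token nim-etc-nms \
  --encrypted \
  --tags Key=Name,Value=nim-etc-nms

# Add a mount target in each AZ/subnet that will host a NIM node,
# restricted to the security group used by the NIM instances
aws efs create-mount-target \
  --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-aaaa1111 \
  --security-groups sg-0123456789abcdef0

aws efs create-mount-target \
  --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0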
2. Deploy two EC2 instances for NGINX Instance Manager
Deploy two EC2 instances with suitable specifications to support the number of data plane instances that you plan to manage (you can find the sizing specifications here) and connect one to each of the AZs/subnets that you configured EFS mount targets in above. In this example, I will deploy two t2.medium instances running Ubuntu 24.04, connect one to us-east-1a and the other to us-east-1c, and create a security group allowing only traffic from its locally assigned subnet.
3. Mount the EFS file systems on NGINX Instance Manager Node 1
Now that we have the EC2 instances deployed, we can log on to Node 1 and mount the EFS volumes onto this node by executing the following steps:
1. SSH onto Node 1
2. Install the efs-utils package if it is not already installed
3. Edit /etc/fstab and create an entry for each EFS File System ID and its associated mount directory (see the example entries below)
4. Execute mount -a to mount the file systems
5. Execute df to ensure that the paths are mounted correctly
4. Install NGINX Instance Manager on Node 1
With the EFS file systems now mounted, it's time to run through the NGINX Instance Manager installation on Node 1.
1. Navigate to the Install the latest NGINX Instance Manager with a script page in the NGINX documentation and download install-nim-bundle.sh
2. Install your NGINX licenses (nginx-repo.crt and nginx-repo.key) into /etc/ssl/nginx/
3. Run bash install-nim-bundle.sh -d ubuntu22.04
4. Wait for the installation to complete, take note of the password that was generated during the installation, then stop and disable autostart of NIM services on this node:
systemctl stop nms; systemctl disable nms
systemctl stop nginx; systemctl disable nginx
systemctl stop clickhouse-server; systemctl disable clickhouse-server
5. Install NGINX Instance Manager on Node 2
This time we are going to install NGINX Instance Manager on Node 2, but without attaching the EFS file systems. On Node 2:
1. Navigate to the Install the latest NGINX Instance Manager with a script page in the NGINX documentation and download install-nim-bundle.sh
2. Install your NGINX licenses (nginx-repo.crt and nginx-repo.key) into /etc/ssl/nginx/
3. Run bash install-nim-bundle.sh -d ubuntu22.04
4. Wait for the installation to complete, take note of the password that was generated during the installation, then stop and disable autostart of NIM services on this node:
systemctl stop nms; systemctl disable nms
systemctl stop nginx; systemctl disable nginx
systemctl stop clickhouse-server; systemctl disable clickhouse-server
6. Mount EFS file systems on NGINX Instance Manager Node 2
Now that we have the NGINX Instance Manager binaries installed on each node, let's mount the EFS file systems on Node 2:
1. SSH onto Node 2
2. Install the efs-utils package if it is not already installed
3. Edit /etc/fstab and create an entry for each EFS File System ID and its associated mount directory
4. Execute mount -a to mount the file systems
5. Execute df to ensure that the paths are mounted correctly
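As referenced in steps 3 and 6 above, each of the four EFS file systems gets its own /etc/fstab entry. The sketch below uses the efs mount helper from the efs-utils package; the file system IDs are placeholders that you would replace with the IDs recorded when the file systems were created.

# /etc/fstab entries for the shared NIM state (placeholder file system IDs)
fs-0aaa111122223333a:/  /etc/nms             efs  _netdev,tls  0 0
fs-0bbb111122223333b:/  /var/lib/clickhouse  efs  _netdev,tls  0 0
fs-0ccc111122223333c:/  /var/lib/nms         efs  _netdev,tls  0 0
fs-0ddd111122223333d:/  /usr/share/nms       efs  _netdev,tls  0 0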
7. Install and configure Pacemaker/Corosync
With NGINX Instance Manager now installed on both nodes, it's time to get Pacemaker and Corosync installed:
1. Install Pacemaker, Corosync and other important agents
sudo apt update
sudo apt install pacemaker pcs corosync fence-agents-aws resource-agents-base
2. To allow Pacemaker to communicate between nodes, we need to add TCP communication between nodes to the Security Group for the NIM nodes.
3. Once we have the connectivity in place, we have to set a common password for the hacluster user on both nodes - we can do this by running the following command on both nodes:
sudo passwd hacluster
password: IloveF5 (don't use this!)
4. Now we start the Pacemaker services by running the following commands on both nodes:
systemctl start pcsd.service
systemctl enable pcsd.service
systemctl status pcsd.service
systemctl start pacemaker
systemctl enable pacemaker
5. And finally, we authenticate the nodes with each other (using the hacluster username, password and node hostname) and check the cluster status:
pcs host auth ip-172-17-1-89 ip-172-17-2-160
pcs cluster setup nimcluster --force ip-172-17-1-89
pcs status
8. Configure Cluster Fencing
Fencing is the ability to make a node unable to run resources, even when that node is unresponsive to cluster commands - you can think of fencing as cutting the power to the node. Fencing protects against corruption of data due to concurrent access to shared resources, commonly known as a "split brain" scenario. In this architecture, we use the fence_aws agent, which uses the boto3 library to connect to AWS and stop the EC2 instances of failing nodes. Let's install and configure the fence_aws agent:
1. Create an AWS Access Key and Secret Access Key for fence_aws to use
2. Install the AWS CLI on both NIM nodes
3. Take note of the Instance IDs for the NIM instances
4. Configure the fence_aws agent as a Pacemaker STONITH device. Run the pcs stonith command, inserting your access key, secret key, region, and mappings of Instance ID to Linux hostname.
pcs stonith create hacluster-stonith fence_aws access_key=(your access key) secret_key=(your secret key) region=us-east-1 pcmk_host_map="ip-172-31-34-95:i-0a46181368524dab6;ip-172-31-27-134:i-032d0b400b5689f68" power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4
5. Run pcs status and make sure that the stonith device is started
9. Configure Pacemaker resources, colocations and constraints
Ok - we are almost there! It's time to configure the Pacemaker resources, colocations and constraints. We want to make sure that the clickhouse-server, nms and nginx systemd services all come up on the same node together, and in that order. We can do that using Pacemaker colocations and constraints.
1. Configure a Pacemaker resource for each systemd service
pcs resource create clickhouse systemd:clickhouse-server
pcs resource create nms systemd:nms.service
pcs resource create nginx systemd:nginx.service
🔥HOT TIP🔥 check out the pcs resource command options (op monitor interval etc.) to optimize failover time - a sketch follows below.
2. Create two colocations to make sure they all start on the same node
pcs constraint colocation add clickhouse with nms
pcs constraint colocation add nms with nginx
3. Create order constraints to define the startup order: ClickHouse -> NMS -> NGINX
pcs constraint order start clickhouse then nms
pcs constraint order start nms then nginx
4. Enable and start the pcs cluster
pcs cluster enable --all
pcs cluster start --all
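Expanding on the hot tip above, the monitor operation interval and timeout control how quickly Pacemaker notices a failed service, which directly affects failover time. The values below are illustrative assumptions, not recommendations - test what your environment tolerates before shortening them.

# Example: create the nms resource with an explicit, more aggressive monitor operation
pcs resource create nms systemd:nms.service op monitor interval=10s timeout=30s

# Or adjust an existing resource after the fact
pcs resource update clickhouse op monitor interval=10s timeout=30s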
10. Provision AWS NLB Load Balancer
Finally, we are going to set up the AWS Network Load Balancer (NLB) to facilitate the failover.
1. Create a Security Group entry to allow HTTPS traffic to enter the EC2 instance from the local subnet
2. Create a Load Balancer target group, targeting instances, with Protocol TCP on port 443
⚠️NOTE ⚠️ if you are using load balancing with the TCP protocol and terminating the TLS connection on the NIM node (EC2 instance), you must create a security group entry to allow TCP 443 from the connecting clients directly to the EC2 instance IP address. If you have trusted SSL/TLS server certificates, you may want to investigate a load balancer using the TLS protocol.
3. Ensure that an HTTPS health check is in place to facilitate the failover
🔥HOT TIP🔥 you can speed up failure detection and failover using the Advanced health check settings.
4. Include our two NIM instances as pending and save the target group
5. Now let's create the network load balancer (NLB) listening on TCP port 443 and forwarding to the target group created above.
6. Once the load balancer is created, check the target group and you will find that one of the targets is healthy - that's the active node in the Pacemaker cluster!
7. With the load balancing now in place, you can access the NIM console using the FQDN for your load balancer and log in with the password set during the install of Node 1.
8. Once you have logged in, we need to install a license before we proceed any further: click on Settings, click on Licenses, click Get Started, click Browse, upload your license, and click Add.
9. With the license now installed, we have access to the full console
11. Test failover
The easiest way to test failover is to just shut down the active node in the cluster. Pacemaker will detect the node is no longer available and start the services on the remaining node.
1. Stop the active node/instance of the NIM
2. Monitor the Target Group and watch it fail over - depending on the settings you have set up, this may take a few minutes
12. How to upgrade NGINX Instance Manager on the cluster
To upgrade NGINX Instance Manager in a Pacemaker cluster, perform the following tasks:
1. Stop the Pacemaker Cluster services on Node 2 - forcing Node 1 to take over.
pcs cluster stop ip-172-17-2-160
2. Disconnect the NFS mounts on Node 2
umount /usr/share/nms
umount /etc/nms
umount /var/lib/nms
umount /var/lib/clickhouse
3. Upgrade NGINX Instance Manager on Node 1
Download the update from the MyF5 Customer Portal
sudo apt-get -y install -f /home/user/nms-instance-manager_<version>_amd64.deb
sudo systemctl restart nms
sudo systemctl restart nginx
4. Upgrade NGINX Instance Manager on Node 2 (with the NFS mounts disconnected)
Download the update from the MyF5 Customer Portal
sudo apt-get -y install -f /home/user/nms-instance-manager_<version>_amd64.deb
sudo systemctl restart nms
sudo systemctl restart nginx
5. Re-mount all the NFS mounts on Node 2
mount -a
6. Start the Pacemaker Cluster services on Node 2 - adding it back into the cluster
pcs cluster start ip-172-17-2-160
13. Reference Documents
Some good references on Pacemaker/Corosync clustering can be found here:
Configuring a Red Hat High Availability cluster on AWS
Implement a High-Availability Cluster with Pacemaker and Corosync
ClusterLabs Pacemaker website
Corosync Cluster Engine website
Level up your F5 Distributed Cloud WAAP Ops
Learn how to stream F5 Distributed Cloud WAAP logs to Splunk and unlock insights beyond the built-in F5 Distributed Cloud Console - from tenant-wide attack visibility to traffic source analysis and long-term threat pattern detection. Get started with ready-to-use Splunk queries that help you build dashboards tailored to your organization's security and operational needs.
Moving HTTP Load Balancers Between F5 Distributed Cloud Namespaces — Why It's Harder Than You Think
The Problem If you have been working with F5 Distributed Cloud (XC) for a while, you have probably run into this: your namespace structure no longer reflects how your teams or applications are organized. Maybe the initial layout was a quick decision during onboarding. Maybe teams have merged, projects have grown, or your naming convention has evolved. Either way, you now want to move a handful of HTTP load balancers from one namespace to another. Simple enough, right? Just change the namespace field and save... Except you can't. There is no "move" operation on F5 XC - not in the UI, not in the API. Changing the namespace of a load balancer means deleting it in the source and re-creating it in the target. And that is where things get complicated. Why a Simple Delete-and-Recreate Is Not Enough On the surface, the API is straightforward: "GET" the config, "DELETE" the object, "POST" it into the new namespace. But a production HTTP load balancer on XC is rarely a standalone object. It sits at the top of a dependency tree that can include origin pools, health checks, TLS certificates, service policies, app firewalls, rate limiters, and more. Every one of those dependencies needs to be handled correctly - or the migration breaks. Here are the main challenges we might run into. Referential Integrity F5 XC enforces strict referential integrity. You cannot delete an origin pool that is still referenced by a load balancer. You cannot create a load balancer that references an origin pool that does not exist yet. This means the order of operations matters: delete top-down (LBs first, then dependencies), create bottom-up (dependencies first, then LBs). It also means that if two load balancers share an origin pool, you cannot move them independently. Delete the first LB, try to delete the shared pool, and the API returns a 409 Conflict because the second LB still references it. Both LBs - and all of their shared dependencies - have to be moved together as a single atomic unit. New CNAMEs After Every Move When you delete and re-create an HTTP load balancer, F5 XC assigns a new "host_name" (the CNAME target that your DNS records point to). If the LB uses Let's Encrypt auto-certificates, the ACME challenge CNAME changes too. That means after every move, someone needs to update external DNS records - and until that happens, the application is unreachable or the TLS certificate renewal fails. For tenants using XC-managed DNS zones with "Allow Application Loadbalancer Managed Records" enabled, this is handled automatically. But many customers manage their own DNS, and they need the old and new CNAME values for every moved LB. Certificates with Non-Portable Private Keys This one is subtle. When a load balancer uses a manually imported TLS certificate, the private key is stored in one of several formats: blindfolded (encrypted with the Volterra blindfold key) or clear secret. In both of these cases, the XC API never returns the private key material in its GET response. You get the certificate and metadata, but not the key. That means you cannot extract-and-recreate the certificate in a new namespace via the API. Cross-namespace certificate references (outside of "shared" namespace) are also not supported. So if an LB in namespace A uses a manually imported certificate stored in namespace A, and you want to move that LB to namespace B, you need to first manually upload the same certificate into namespace B (or into the "shared" namespace) before the migration can proceed. 
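The naive single-object flow described above - GET the config, DELETE it from the source namespace, POST it into the target - can be sketched in a few lines of Python. This is only an illustration of the ordering problem, not the tool itself: the endpoint paths, auth header and metadata handling shown are assumptions based on the public XC configuration API, so verify them against the API reference for your tenant before using anything like this.

import requests

TENANT = "https://example.console.ves.volterra.io"   # placeholder tenant URL
HEADERS = {"Authorization": "APIToken <redacted>"}    # XC API token (assumption)

def move_http_lb(name: str, src_ns: str, dst_ns: str) -> None:
    """Move a single, dependency-free HTTP LB between namespaces (illustrative only)."""
    base = f"{TENANT}/api/config/namespaces"

    # 1. Fetch the existing object from the source namespace
    obj = requests.get(f"{base}/{src_ns}/http_loadbalancers/{name}", headers=HEADERS).json()

    # 2. Strip server-generated metadata and retarget the namespace
    payload = {"metadata": {"name": name, "namespace": dst_ns}, "spec": obj.get("spec", {})}

    # 3. Delete in the source, then re-create in the target (the delete/re-create nature of
    #    the operation). The LB is unreachable in between, and it comes back with a new
    #    CNAME that external DNS must be updated to point at.
    requests.delete(f"{base}/{src_ns}/http_loadbalancers/{name}", headers=HEADERS).raise_for_status()
    requests.post(f"{base}/{dst_ns}/http_loadbalancers", json=payload, headers=HEADERS).raise_for_status()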
API Metadata The XC API returns a "referring_objects" field on every config GET response. In theory, this tells you what other objects reference a given resource - exactly what you need to know before deleting something. In practice, this field can be empty even when active references exist. The only reliable way to detect all external references is to actively scan: fetch the config of every load balancer in the namespace and check their specs for references to the objects you are about to move. Cross-Namespace References Are Not Allowed On F5 XC, an HTTP load balancer can only reference objects in its own namespace, in "system" or "shared" namespace. If your origin pool lives in namespace A and you move the LB to namespace B, the origin pool must either come along to namespace B or already exist there. There is no way to have the LB in namespace B point to a pool in namespace A. This means you need to discover the complete transitive dependency tree of every LB, determine which dependencies need to move, detect which are shared between multiple LBs, and batch everything accordingly. The Tool: XC Namespace migration To deal with all of this, (A)I built **xc-ns-mover** — a Python CLI tool that automates the entire process. It has two components: Scanner - scans all namespaces on your tenant, lists every HTTP load balancer, and writes a CSV report. This gives you the inventory to decide what to move. Mover - takes a CSV of load balancers, discovers all dependencies, groups LBs that share dependencies into atomic batches, runs a series of pre-flight checks, and then executes the migration - or generates a dry-run report so you can review everything first, or do the job manually (JSON Code blocks available in the report) What the Mover Does Before Touching Anything The mover runs six pre-flight phases before making any changes: Discovery and batching - fetches every LB config, walks the dependency tree, and uses a union-find algorithm to cluster LBs with shared dependencies into batches. External reference scan - for every dependency being moved, checks whether any LB outside the move list references it. If so, that dependency cannot be moved without breaking the external LB, and the batch is blocked. Conflict detection - lists all existing objects in the target namespace. If a name already exists, the user can skip the object or rename it with a configurable prefix (e.g., "migrated-my-pool"). All internal JSON references are updated automatically. Certificate pre-flight - identifies certificates with non-portable private keys, then searches the target and "shared" namespaces for a matching certificate by domain/SAN comparison (including wildcard matching per RFC 6125). If a match is found, the LB's certificate reference is automatically rewritten. If not, the batch is blocked until the certificate is manually created. DNS zone pre-flight - queries the tenant's DNS zones to detect which ones have managed LB records enabled. LBs under managed zones are flagged as "auto-managed" in the report — no manual DNS update needed. After all checks pass, the actual migration follows a strict order per batch: backup everything, delete top-down, create bottom-up, verify new CNAMEs. If anything fails, automatic rollback kicks in — objects created in the target are deleted, objects deleted from the source are restored from backups. The Reports Every run produces an HTML report. 
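The batching step at the heart of this - grouping load balancers that share any dependency into one atomic move unit - is a textbook union-find problem. The sketch below is a simplified, self-contained illustration of that idea (object names are made up); the real tool additionally walks the transitive dependency tree via the API before grouping.

def batch_lbs(lb_deps: dict[str, set[str]]) -> list[set[str]]:
    """Group LBs so that any two LBs sharing a dependency end up in the same batch."""
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    dep_owner: dict[str, str] = {}
    for lb, deps in lb_deps.items():
        find(lb)
        for dep in deps:
            if dep in dep_owner:
                union(lb, dep_owner[dep])   # shared dependency -> same batch
            else:
                dep_owner[dep] = lb

    batches: dict[str, set[str]] = {}
    for lb in lb_deps:
        batches.setdefault(find(lb), set()).add(lb)
    return list(batches.values())

# Example: lb-a and lb-b share "pool-shared", so they must move together
print(batch_lbs({
    "lb-a": {"pool-shared", "waf-a"},
    "lb-b": {"pool-shared"},
    "lb-c": {"pool-c"},
}))
# batches: {lb-a, lb-b} together, {lb-c} on its own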
The dry-run report shows planned configurations, the full dependency graph, certificate issues, DNS changes required, and any blocking issues — all before a single API call mutates anything. The post-migration report includes old and new CNAME values, a DNS changes table with exactly which records need updating, and full configuration backups of everything that was touched.
Things to Keep in Mind
A few caveats that are worth highlighting:
Brief interruption is unavoidable - The migration deletes and re-creates load balancers. During that window (typically seconds to a few minutes per batch), traffic to affected domains will be impacted. Plan a change window.
Only HTTP load balancers are supported - TCP load balancers and other object types are not handled by this tool.
DNS updates are your responsibility - The report gives you all the values - old CNAME, new CNAME, ACME challenge CNAME - but you need to update your DNS provider.
Always run the dry-run first - The tool enforces this by default: it stores a fingerprint after a dry-run and verifies it before executing. If the config changes, a new dry-run is required.
The project is open source and available on GitHub. This is privately maintained and not "officially supported": https://github.com/de1chk1nd/resources-and-tools/blob/main/tools/xc-ns-mover/README.md
If you find bugs or have feature requests, please open a GitHub issue.
F5 Container Ingress Services (CIS) deployment using Cilium CNI and static routes
F5 Container Ingress Services (CIS) supports static route configuration to enable direct routing from F5 BIG-IP to Kubernetes/OpenShift Pods as an alternative to VXLAN tunnels. Static routes are enabled in the F5 CIS CLI/Helm yaml manifest using the argument --static-routing-mode=true. In this article, we will use Cilium as the Container Network Interface (CNI) and configure static routes for an NGINX deployment For initial configuration of the BIG-IP, including AS3 installation, please see https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/userguide/installation.html and https://clouddocs.f5.com/containers/latest/userguide/kubernetes/#cis-installation The first step is to install Cilium CNI using the steps below on Linux host: CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt) CLI_ARCH=amd64 if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum} sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum} cilium install --version 1.18.5 cilium status cilium status --wait root@ciliumk8s-ubuntu-server:~# cilium status --wait /¯¯\ /¯¯\__/¯¯\ Cilium: OK \__/¯¯\__/ Operator: OK /¯¯\__/¯¯\ Envoy DaemonSet: OK \__/¯¯\__/ Hubble Relay: disabled \__/ ClusterMesh: disabled DaemonSet cilium Desired: 1, Ready: 1/1, Available: 1/1 DaemonSet cilium-envoy Desired: 1, Ready: 1/1, Available: 1/1 Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1 Containers: cilium Running: 1 cilium-envoy Running: 1 cilium-operator Running: 1 clustermesh-apiserver hubble-relay Cluster Pods: 6/6 managed by Cilium Helm chart version: 1.18.3 Image versions cilium quay.io/cilium/cilium:v1.18.3@sha256:5649db451c88d928ea585514746d50d91e6210801b300c897283ea319d68de15: 1 cilium-envoy quay.io/cilium/cilium-envoy:v1.34.10-1761014632-c360e8557eb41011dfb5210f8fb53fed6c0b3222@sha256:ca76eb4e9812d114c7f43215a742c00b8bf41200992af0d21b5561d46156fd15: 1 cilium-operator quay.io/cilium/operator-generic:v1.18.3@sha256:b5a0138e1a38e4437c5215257ff4e35373619501f4877dbaf92c89ecfad81797: 1 cilium connectivity test root@ciliumk8s-ubuntu-server:~# cilium connectivity test ℹ️ Monitor aggregation detected, will skip some flow validation steps ✨ [default] Creating namespace cilium-test-1 for connectivity check... ✨ [default] Deploying echo-same-node service... ✨ [default] Deploying DNS test server configmap... ✨ [default] Deploying same-node deployment... ✨ [default] Deploying client deployment... ✨ [default] Deploying client2 deployment... ✨ [default] Deploying ccnp deployment... ⌛ [default] Waiting for deployment cilium-test-1/client to become ready... ⌛ [default] Waiting for deployment cilium-test-1/client2 to become ready... ⌛ [default] Waiting for deployment cilium-test-1/echo-same-node to become ready... ⌛ [default] Waiting for deployment cilium-test-ccnp1/client-ccnp to become ready... ⌛ [default] Waiting for deployment cilium-test-ccnp2/client-ccnp to become ready... ⌛ [default] Waiting for pod cilium-test-1/client-645b68dcf7-s5mdb to reach DNS server on cilium-test-1/echo-same-node-f5b8d454c-qkgq9 pod... ⌛ [default] Waiting for pod cilium-test-1/client2-66475877c6-cw7f5 to reach DNS server on cilium-test-1/echo-same-node-f5b8d454c-qkgq9 pod... 
⌛ [default] Waiting for pod cilium-test-1/client-645b68dcf7-s5mdb to reach default/kubernetes service... ⌛ [default] Waiting for pod cilium-test-1/client2-66475877c6-cw7f5 to reach default/kubernetes service... ⌛ [default] Waiting for Service cilium-test-1/echo-same-node to become ready... ⌛ [default] Waiting for Service cilium-test-1/echo-same-node to be synchronized by Cilium pod kube-system/cilium-lxjxf ⌛ [default] Waiting for NodePort 10.69.12.2:32046 (cilium-test-1/echo-same-node) to become ready... 🔭 Enabling Hubble telescope... ⚠️ Unable to contact Hubble Relay, disabling Hubble telescope and flow validation: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:4245: connect: connection refused" ℹ️ Expose Relay locally with: cilium hubble enable cilium hubble port-forward& ℹ️ Cilium version: 1.18.3 🏃[cilium-test-1] Running 126 tests ... [=] [cilium-test-1] Test [no-policies] [1/126] .................... [=] [cilium-test-1] Skipping test [no-policies-from-outside] [2/126] (skipped by condition) [=] [cilium-test-1] Test [no-policies-extra] [3/126] <- snip -> For this article, we will install k3s with Cilium CNI root@ciliumk8s-ubuntu-server:~# curl -sfL https://get.k3s.io | sh -s - --flannel-backend=none --disable-kube-proxy --disable servicelb --disable-network-policy --disable traefik --cluster-init --node-ip=10.69.12.2 --cluster-cidr=10.42.0.0/16 root@ciliumk8s-ubuntu-server:~# mkdir -p $HOME/.kube root@ciliumk8s-ubuntu-server:~# sudo cp -i /etc/rancher/k3s/k3s.yaml $HOME/.kube/config root@ciliumk8s-ubuntu-server:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config root@ciliumk8s-ubuntu-server:~# echo "export KUBECONFIG=$HOME/.kube/config" >> $HOME/.bashrc root@ciliumk8s-ubuntu-server:~# source $HOME/.bashrc API_SERVER_IP=10.69.12.2 API_SERVER_PORT=6443 CLUSTER_ID=1 CLUSTER_NAME=`hostname` POD_CIDR="10.42.0.0/16" root@ciliumk8s-ubuntu-server:~# cilium install --set cluster.id=${CLUSTER_ID} --set cluster.name=${CLUSTER_NAME} --set k8sServiceHost=${API_SERVER_IP} --set k8sServicePort=${API_SERVER_PORT} --set ipam.operator.clusterPoolIPv4PodCIDRList=$POD_CIDR --set kubeProxyReplacement=true --helm-set=operator.replicas=1 root@ciliumk8s-ubuntu-server:~# cilium config view | grep cluster bpf-lb-external-clusterip false cluster-id 1 cluster-name ciliumk8s-ubuntu-server cluster-pool-ipv4-cidr 10.42.0.0/16 cluster-pool-ipv4-mask-size 24 clustermesh-enable-endpoint-sync false clustermesh-enable-mcs-api false ipam cluster-pool max-connected-clusters 255 policy-default-local-cluster false root@ciliumk8s-ubuntu-server:~# cilium status --wait The F5 CIS yaml manifest for deployment using Helm Note that these arguments are required for CIS to leverage static routes static-routing-mode: true orchestration-cni: cilium-k8s We will also be installing custom resources, so this argument is also required 3. custom-resource-mode: true Values yaml manifest for Helm deployment bigip_login_secret: f5-bigip-ctlr-login bigip_secret: create: false username: password: rbac: create: true serviceAccount: # Specifies whether a service account should be created create: true # The name of the service account to use. 
# If not set and create is true, a name is generated using the fullname template name: k8s-bigip-ctlr # This namespace is where the Controller lives; namespace: kube-system ingressClass: create: true ingressClassName: f5 isDefaultIngressController: true args: # See https://clouddocs.f5.com/containers/latest/userguide/config-parameters.html # NOTE: helm has difficulty with values using `-`; `_` are used for naming # and are replaced with `-` during rendering. # REQUIRED Params bigip_url: X.X.X.S bigip_partition: <BIG-IP_PARTITION> # OPTIONAL PARAMS -- uncomment and provide values for those you wish to use. static-routing-mode: true orchestration-cni: cilium-k8s # verify_interval: # node-poll_interval: # log_level: DEBUG # python_basedir: ~ # VXLAN # openshift_sdn_name: # flannel_name: cilium-vxlan # KUBERNETES # default_ingress_ip: # kubeconfig: # namespaces: ["foo", "bar"] # namespace_label: # node_label_selector: pool_member_type: cluster # resolve_ingress_names: # running_in_cluster: # use_node_internal: # use_secrets: insecure: true custom-resource-mode: true log-as3-response: true as3-validation: true # gtm-bigip-password # gtm-bigip-url # gtm-bigip-username # ipam : true image: # Use the tag to target a specific version of the Controller user: f5networks repo: k8s-bigip-ctlr pullPolicy: Always version: latest # affinity: # nodeAffinity: # requiredDuringSchedulingIgnoredDuringExecution: # nodeSelectorTerms: # - matchExpressions: # - key: kubernetes.io/arch # operator: Exists # securityContext: # runAsUser: 1000 # runAsGroup: 3000 # fsGroup: 2000 # If you want to specify resources, uncomment the following # limits_cpu: 100m # limits_memory: 512Mi # requests_cpu: 100m # requests_memory: 512Mi # Set podSecurityContext for Pod Security Admission and Pod Security Standards # podSecurityContext: # runAsUser: 1000 # runAsGroup: 1000 # privileged: true Installation steps for deploying F5 CIS using helm can be found in this link https://clouddocs.f5.com/containers/latest/userguide/kubernetes/ Once F5 CIS is validated to be up and running, we can now deploy the following application example root@ciliumk8s-ubuntu-server:~# cat application.yaml apiVersion: cis.f5.com/v1 kind: VirtualServer metadata: labels: f5cr: "true" name: goblin-virtual-server namespace: nsgoblin spec: host: goblin.com pools: - path: /green service: svc-nodeport servicePort: 80 - path: /harry service: svc-nodeport servicePort: 80 virtualServerAddress: X.X.X.X --- apiVersion: apps/v1 kind: Deployment metadata: name: goblin-backend namespace: nsgoblin spec: replicas: 2 selector: matchLabels: app: goblin-backend template: metadata: labels: app: goblin-backend spec: containers: - name: goblin-backend image: nginx:latest ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: svc-nodeport namespace: nsgoblin spec: selector: app: goblin-backend ports: - port: 80 targetPort: 80 type: ClusterIP k apply -f application.yaml We can now verify the k8s pods are created. 
Then we will create a sample html page to test access to the backend NGINX pod root@ciliumk8s-ubuntu-server:~# k -n nsgoblin get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES goblin-backend-7485b6dcdf-d5t48 1/1 Running 0 6d2h 10.42.0.70 ciliumk8s-ubuntu-server <none> <none> goblin-backend-7485b6dcdf-pt7hx 1/1 Running 0 6d2h 10.42.0.97 ciliumk8s-ubuntu-server <none> <none> root@ciliumk8s-ubuntu-server:~# k -n nsgoblin exec -it po/goblin-backend-7485b6dcdf-pt7hx -- /bin/sh # cat > green <<'EOF' <!DOCTYPE html> > > <html> > <head> <title>Green Goblin</title> <style> body { background-color: #4CAF50; color: white; text-align: center; padding: 50px; } h1 { font-size: 3em; } > > > > > </style> </head> <body> <h1>I am the green goblin!</h1> <p>Access me at /green</p> </body> </html> > > > > > > > EOF root@ciliumk8s-ubuntu-server:~# k -n nsgoblin exec -it goblin-backend-7485b6dcdf-d5t48 -- /bin/sh # cat > green <<'EOF' > <!DOCTYPE html> <html> <head> <title>Green Goblin</title> <style> body { background-color: #4CAF50; color: white; text-align: center; padding: 50px; } h1 { font-size: 3em; } </style> > </head> <body> <h1>I am the green goblin!</h1> <p>Access me at /green</p> </body> </html> EOF> > > > > > > > > > > > > We can now validate the pools are created on the F5 BIG-IP root@(ciliumk8s-bigip)(cfg-sync Standalone)(Active)(/kubernetes/Shared)(tmos)# list ltm pool all ltm pool svc_nodeport_80_nsgoblin_goblin_com_green { description "crd_10_69_12_40_80 loadbalances this pool" members { /kubernetes/10.42.0.70:http { address 10.42.0.70 } /kubernetes/10.42.0.97:http { address 10.42.0.97 } } min-active-members 1 partition kubernetes } ltm pool svc_nodeport_80_nsgoblin_goblin_com_harry { description "crd_10_69_12_40_80 loadbalances this pool" members { /kubernetes/10.42.0.70:http { address 10.42.0.70 } /kubernetes/10.42.0.97:http { address 10.42.0.97 } } min-active-members 1 partition kubernetes } root@(ciliumk8s-bigip)(cfg-sync Standalone)(Active)(/kubernetes/Shared)(tmos)# list ltm virtual crd_10_69_12_40_80 ltm virtual crd_10_69_12_40_80 { creation-time 2025-12-22:10:10:37 description Shared destination /kubernetes/10.69.12.40:http ip-protocol tcp last-modified-time 2025-12-22:10:10:37 mask 255.255.255.255 partition kubernetes persist { /Common/cookie { default yes } } policies { crd_10_69_12_40_80_goblin_com_policy { } } profiles { /Common/f5-tcp-progressive { } /Common/http { } } serverssl-use-sni disabled source 0.0.0.0/0 source-address-translation { type automap } translate-address enabled translate-port enabled vs-index 2 } CIS log output 2025/12/22 18:10:25 [INFO] [Request: 1] cluster local requested CREATE in VIRTUALSERVER nsgoblin/goblin-virtual-server 2025/12/22 18:10:25 [INFO] [Request: 1][AS3] creating a new AS3 manifest 2025/12/22 18:10:25 [INFO] [Request: 1][AS3][BigIP] posting request to https://10.69.12.1 for tenants 2025/12/22 18:10:26 [INFO] [Request: 2] cluster local requested UPDATE in ENDPOINTS nsgoblin/svc-nodeport 2025/12/22 18:10:26 [INFO] [Request: 3] cluster local requested UPDATE in ENDPOINTS nsgoblin/svc-nodeport 2025/12/22 18:10:43 [INFO] [Request: 1][AS3][BigIP] post resulted in SUCCESS 2025/12/22 18:10:43 [INFO] [AS3][POST] SUCCESS: code: 200 --- tenant:kubernetes --- message: success 2025/12/22 18:10:43 [INFO] [Request: 3][AS3] Processing request 2025/12/22 18:10:43 [INFO] [Request: 3][AS3] creating a new AS3 manifest 2025/12/22 18:10:43 [INFO] [Request: 3][AS3][BigIP] posting request to https://10.69.12.1 for tenants 2025/12/22 
18:10:43 [INFO] Successfully updated status of VirtualServer:nsgoblin/goblin-virtual-server in Cluster W1222 18:10:49.238444 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice 2025/12/22 18:10:52 [INFO] [Request: 3][AS3][BigIP] post resulted in SUCCESS 2025/12/22 18:10:52 [INFO] [AS3][POST] SUCCESS: code: 200 --- tenant:kubernetes --- message: success 2025/12/22 18:10:52 [INFO] Successfully updated status of VirtualServer:nsgoblin/goblin-virtual-server in Cluster Troubleshooting: 1. If static routes are not added, the first step is to inspect CIS logs for entries similar to these: Cilium annotation warning logs 2025/12/22 17:44:45 [WARNING] Cilium node podCIDR annotation not found on node ciliumk8s-ubuntu-server, node has spec.podCIDR ? 2025/12/22 17:46:41 [WARNING] Cilium node podCIDR annotation not found on node ciliumk8s-ubuntu-server, node has spec.podCIDR ? 2025/12/22 17:46:42 [WARNING] Cilium node podCIDR annotation not found on node ciliumk8s-ubuntu-server, node has spec.podCIDR ? 2025/12/22 17:46:43 [WARNING] Cilium node podCIDR annotation not found on node ciliumk8s-ubuntu-server, node has spec.podCIDR ? 2. These are resolved by adding annotations to the node using the reference: https://clouddocs.f5.com/containers/latest/userguide/static-route-support.html Cilium annotation for node root@ciliumk8s-ubuntu-server:~# k annotate node ciliumk8s-ubuntu-server io.cilium.network.ipv4-pod-cidr=10.42.0.0/16 root@ciliumk8s-ubuntu-server:~# k describe node | grep -E "Annotations:|PodCIDR:|^\s+.*pod-cidr" Annotations: alpha.kubernetes.io/provided-node-ip: 10.69.12.2 io.cilium.network.ipv4-pod-cidr: 10.42.0.0/16 PodCIDR: 10.42.0.0/24 3. Verify a static route has been created and test connectivity to k8s pods root@(ciliumk8s-bigip)(cfg-sync Standalone)(Active)(/kubernetes)(tmos)# list net route net route k8s-ciliumk8s-ubuntu-server-10.69.12.2 { description 10.69.12.1 gw 10.69.12.2 network 10.42.0.0/16 partition kubernetes } Using pup (command line HTML parser) -> https://commandmasters.com/commands/pup-common/ root@ciliumk8s-ubuntu-server:~# curl -s http://goblin.com/green | pup 'body text{}' I am the green goblin! Access me at /green 1 0.000000 10.69.12.34 ? 10.69.12.40 TCP 78 34294 ? 80 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM TSval=2984295232 TSecr=0 WS=128 2 0.000045 10.69.12.40 ? 10.69.12.34 TCP 78 80 ? 34294 [SYN, ACK] Seq=0 Ack=1 Win=23360 Len=0 MSS=1460 WS=512 SACK_PERM TSval=1809316303 TSecr=2984295232 3 0.001134 10.69.12.34 ? 10.69.12.40 TCP 70 34294 ? 80 [ACK] Seq=1 Ack=1 Win=64256 Len=0 TSval=2984295234 TSecr=1809316303 4 0.001151 10.69.12.34 ? 10.69.12.40 HTTP 149 GET /green HTTP/1.1 5 0.001343 10.69.12.40 ? 10.69.12.34 TCP 70 80 ? 34294 [ACK] Seq=1 Ack=80 Win=23040 Len=0 TSval=1809316304 TSecr=2984295234 6 0.002497 10.69.12.1 ? 10.42.0.97 TCP 78 33707 ? 80 [SYN] Seq=0 Win=23360 Len=0 MSS=1460 WS=512 SACK_PERM TSval=1809316304 TSecr=0 7 0.003614 10.42.0.97 ? 10.69.12.1 TCP 78 80 ? 33707 [SYN, ACK] Seq=0 Ack=1 Win=64308 Len=0 MSS=1410 SACK_PERM TSval=1012609408 TSecr=1809316304 WS=128 8 0.003636 10.69.12.1 ? 10.42.0.97 TCP 70 33707 ? 80 [ACK] Seq=1 Ack=1 Win=23040 Len=0 TSval=1809316307 TSecr=1012609408 9 0.003680 10.69.12.1 ? 10.42.0.97 HTTP 149 GET /green HTTP/1.1 10 0.004774 10.42.0.97 ? 10.69.12.1 TCP 70 80 ? 33707 [ACK] Seq=1 Ack=80 Win=64256 Len=0 TSval=1012609409 TSecr=1809316307 11 0.004790 10.42.0.97 ? 10.69.12.1 TCP 323 HTTP/1.1 200 OK [TCP segment of a reassembled PDU] 12 0.004796 10.42.0.97 ? 
10.69.12.1 HTTP 384 HTTP/1.1 200 OK
13 0.004820 10.69.12.40 ? 10.69.12.34 TCP 448 HTTP/1.1 200 OK [TCP segment of a reassembled PDU]
14 0.004838 10.69.12.1 ? 10.42.0.97 TCP 70 33707 ? 80 [ACK] Seq=80 Ack=254 Win=23552 Len=0 TSval=1809316308 TSecr=1012609410
15 0.004854 10.69.12.40 ? 10.69.12.34 HTTP 384 HTTP/1.1 200 OK
Summary:
There we have it: we have successfully deployed an NGINX application on a Kubernetes cluster managed by F5 CIS, using static routes to forward traffic to the Kubernetes Pods.
How to get a F5 BIG-IP VE Developer Lab License
(applies to BIG-IP TMOS Edition)
To assist operational teams in improving their development for the BIG-IP platform, F5 offers a low-cost developer lab license. This license can be purchased from your authorized F5 vendor. If you do not have an F5 vendor, and you are in either Canada or the US, you can purchase a lab license online:
CDW BIG-IP Virtual Edition Lab License
CDW Canada BIG-IP Virtual Edition Lab License
Once completed, the order is sent to F5 for fulfillment and your license will be delivered shortly after via e-mail. F5 is investigating ways to improve this process.
To download the BIG-IP Virtual Edition, log into my.f5.com (separate login from DevCentral) and navigate down to the Downloads card under the Support Resources section of the page. Select BIG-IP from the product group family and then the current version of BIG-IP. You will be presented with a list of options; at the bottom, select the Virtual-Edition option that has the following descriptions:
For VMware Fusion or Workstation or ESX/i: Image fileset for VMware ESX/i Server
For Microsoft Hyper-V: Image fileset for Microsoft Hyper-V
KVM RHEL/CentOS: Image file set for KVM Red Hat Enterprise Linux/CentOS
Note: There are also 1 Slot versions of the above images where a 2nd boot partition is not needed for in-place upgrades. These images include _1SLOT- in the image name instead of ALL.
The below guides will help get you started with F5 BIG-IP Virtual Edition to develop for VMware Fusion, AWS, Azure, VMware, or Microsoft Hyper-V. These guides follow standard practices for installing in production environments, and performance recommendations change based on the lower-use/non-critical needs of development or lab environments. Similar to driving a tank, use your best judgement.
Deploying F5 BIG-IP Virtual Edition on VMware Fusion
Deploying F5 BIG-IP in Microsoft Azure for Developers
Deploying F5 BIG-IP in AWS for Developers
Deploying F5 BIG-IP in Windows Server Hyper-V for Developers
Deploying F5 BIG-IP in VMware vCloud Director and ESX for Developers
Note: F5 Support maintains authoritative Azure, AWS, Hyper-V, and ESX/vCloud installation documentation. VMware Fusion is not an official F5-supported hypervisor, so DevCentral publishes the Fusion guide with the help of our Field Systems Engineering teams.
F5 rSeries: Next-Generation Fully Automatable Hardware
What is rSeries? F5 rSeries is a rearchitected, next-generation hardware platform that scales application delivery performance and automates application services to address many of today’s most critical business challenges. F5 rSeries is a key component of the F5 Application Delivery and Security Platform (ADSP). rSeries relies on a Kubernetes-based platform layer (F5OS) that is tightly integrated with F5 TMOS software. Going to a microservice-based platform layer allows rSeries to provide additional functionality that was not possible in previous generations of F5 BIG-IP platforms. Customers do not need to learn Kubernetes but still get the benefits of it. Management of the hardware will still be done via a familiar F5 CLI, webUI or API. The additional benefit of automation capabilities can greatly simplify the process of deploying F5 products. A significant amount of time and resources are saved due to automation, which translates to more time to perform critical tasks. F5OS rSeries UI Demo Video Why is this important? Get more done in less time by using a highly automatable hardware platform that can deploy software solutions in seconds, not minutes or hours. Increased performance improves ROI: The rSeries platform is a high performance and highly scalable appliance with improved processing power. Running multiple versions on the same platform allows for more flexibility than previously possible. Pay-as-you-Grow licensing options that unlock more CPU resources. Key rSeries Use-Cases NetOps Automation Shorten time to market by automating network operations and offering cloud like orchestration with full stack programmability Drive app development and delivery with self-service and faster response time Business Continuity Drive consistent policies across on-prem and public cloud and across hardware and software based ADCs Build resiliency with rSeries’ superior performance and failover capabilities Future proof investments by running multiple versions of apps side-by-side; migrate applications at your own pace Cloud Migration On-Ramp Accelerate cloud strategy by adopting cloud operating models and on-demand scalability with rSeries and use that as on ramp to cloud Dramatically reduce TCO with rSeries systems; extend commercial models to migrate from hardware to software or as applications move to cloud Automation Capabilities Declarative APIs and integration with automation frameworks (Terraform, Ansible) greatly simplifies operations and reduces overhead: AS3 (Application Services 3 Extension): A declarative API that simplifies the configuration of application services. With AS3, customers can deploy and manage configurations consistently across environments. Ansible Automation: Prebuilt Ansible modules for rSeries enable automated provisioning, configuration, and updates, reducing manual effort and minimizing errors. Terraform: Organizations leveraging Infrastructure as Code (IaC) can use Terraform to define and automate the deployment of rSeries appliances and associated configurations. Example json file: Example of running the Automation Playbook: Example of the results: More information on Automation: Automating F5OS on rSeries GitHub Automation Repository Specialized Hardware Performance rSeries offers more hardware-accelerated performance capabilities with more FPGA chipsets that are more tightly integrated with TMOS. It also includes the latest Intel processing capabilities. 
This enhances the following: SSL and compression offload L4 offload for higher performance and reduced load on software Hardware-accelerated SYN flood protection Hardware-based protection from more than 100 types of denial-of-service (DoS) attacks Support for F5 Intelligence Services Migration Options (BIG-IP Journeys) Use BIG-IP Jouneys to easily migrate your existing configuration to rSeries. This covers the following: Entire L4-L7 configuration can be migrated Individual Applications can be migrated BIG-IP Tenant configuration can be migrated Automatically identify and resolve migration issues Convert UCS files into AS3 declarations if needed Post-deployment diagnostics and health The Journeys Tool, available on DevCentral’s GitHub, facilitates the migration of legacy BIG-IP configurations to rSeries-compatible formats. Customers can convert UCS files, validate configurations, and highlight unsupported features during the migration process. Multi-tenancy capabilities in rSeries simplify the process of isolating workloads during and after migration. GitHub repository for F5 Journeys Conclusion The F5 rSeries platform addresses the modern enterprise’s need for high-performance, scalable, and efficient application delivery and security solutions. By combining cutting-edge hardware capabilities with robust automation tools and flexible migration options, rSeries empowers organizations to seamlessly transition from legacy platforms while unlocking new levels of performance and operational agility. Whether driven by the need for increased throughput, advanced multi-tenancy, the rSeries platform stands as a future-ready solution for securing and optimizing application delivery in an increasingly complex IT landscape. Related Content Cloud Docs rSeries Guide F5 rSeries Appliance Datasheet F5 VELOS: A Next-Generation Fully Automatable Platform DEMO: The Next Generation of F5 Hardware is Ready for you
What is VELOS? The F5 VELOS platform is the next generation of F5’s chassis-based systems. VELOS can bridge traditional and modern application architectures by supporting a mix of traditional F5 BIG-IP tenants as well as next-generation BIG-IP Next tenants in the future. F5 VELOS is a key component of the F5 Application Delivery and Security Platform (ADSP). VELOS relies on a Kubernetes-based platform layer (F5OS) that is tightly integrated with F5 TMOS software. Going to a microservice-based platform layer allows VELOS to provide additional functionality that was not possible in previous generations of F5 BIG-IP platforms. Customers do not need to learn Kubernetes but still get the benefits of it. Management of the chassis will still be done via a familiar F5 CLI, webUI, or API. The additional benefit of automation capabilities can greatly simplify the process of deploying F5 products. A significant amount of time and resources are saved due to automation, which translates to more time to perform critical tasks. F5OS VELOS UI Demo Video Why is VELOS important? Get more done in less time by using a highly automatable hardware platform that can deploy software solutions in seconds, not minutes or hours. Increased performance improves ROI: The VELOS platform is a high-performance and highly scalable chassis with improved processing power. Running multiple versions on the same platform allows for more flexibility than previously possible. Significantly reduce the TCO of previous-generation hardware by consolidating multiple platforms into one. Key VELOS Use-Cases NetOps Automation Shorten time to market by automating network operations and offering cloud-like orchestration with full-stack programmability Drive app development and delivery with self-service and faster response time Business Continuity Drive consistent policies across on-prem and public cloud and across hardware and software-based ADCs Build resiliency with VELOS’ superior platform redundancy and failover capabilities Future-proof investments by running multiple versions of apps side-by-side; migrate applications at your own pace Cloud Migration On-Ramp Accelerate cloud strategy by adopting cloud operating models and on-demand scalability with VELOS and use that as on-ramp to cloud Dramatically reduce TCO with VELOS systems; extend commercial models to migrate from hardware to software or as applications move to cloud Automation Capabilities Declarative APIs and integration with automation frameworks (Terraform, Ansible) greatly simplifies operations and reduces overhead: AS3 (Application Services 3 Extension): A declarative API that simplifies the configuration of application services. With AS3, customers can deploy and manage configurations consistently across environments. Ansible Automation: Prebuilt Ansible modules for VELOS enable automated provisioning, configuration, and updates, reducing manual effort and minimizing errors. Terraform: Organizations leveraging Infrastructure as Code (IaC) can use Terraform to define and automate the deployment of VELOS appliances and associated configurations. Example json file: Example of running the Automation Playbook: Example of the results: More information on Automation: Automating F5OS on VELOS GitHub Automation Repository Specialized Hardware Performance VELOS offers more hardware-accelerated performance capabilities with more FPGA chipsets that are more tightly integrated with TMOS. It also includes the latest Intel processing capabilities. 
Specialized Hardware Performance
VELOS offers more hardware-accelerated performance capabilities, with more FPGA chipsets that are more tightly integrated with TMOS, and it includes the latest Intel processing capabilities. This enhances the following:
SSL and compression offload
L4 offload for higher performance and reduced load on software
Hardware-accelerated SYN flood protection
Hardware-based protection from more than 100 types of denial-of-service (DoS) attacks
Support for F5 Intelligence Services
VELOS CX1610 chassis
VELOS BX520 blade
Migration Options (BIG-IP Journeys)
Use BIG-IP Journeys to easily migrate your existing configuration to VELOS. This covers the following:
The entire L4-L7 configuration can be migrated
Individual applications can be migrated
BIG-IP tenant configurations can be migrated
Migration issues are automatically identified and resolved
UCS files can be converted into AS3 declarations if needed
Post-deployment diagnostics and health checks
The Journeys tool, available on DevCentral's GitHub, facilitates the migration of legacy BIG-IP configurations to VELOS-compatible formats. Customers can convert UCS files, validate configurations, and highlight unsupported features during the migration process. Multi-tenancy capabilities in VELOS simplify the process of isolating workloads during and after migration.
GitHub repository for F5 Journeys
Conclusion
The F5 VELOS platform addresses the modern enterprise's need for high-performance, scalable, and efficient application delivery and security solutions. By combining cutting-edge hardware capabilities with robust automation tools and flexible migration options, VELOS empowers organizations to seamlessly transition from legacy platforms while unlocking new levels of performance and operational agility. Whether driven by the need for increased throughput or advanced multi-tenancy, the VELOS platform stands as a future-ready solution for securing and optimizing application delivery in an increasingly complex IT landscape.
Related Content
Cloud Docs VELOS Guide
F5 VELOS Chassis System Datasheet
DEMO: The Next Generation of F5 Hardware is Ready for You
Working with JSON data in iRules - Part 1
When TMOS version 21 dropped a few months ago, I released a three-part article series focused on managing MCP in iRules. MCP is JSON-RPC 2.0 based, so this was a great use case for the new JSON commands. But it's not the only use case. JSON has been the default data format for web transport for well over a decade. And until v21, doing anything with JSON in iRules was not for the faint of heart, as the Tcl version iRules uses has no native parsing capability. In this article, I'll do a JSON overview, introduce the test scripts to pass simple JSON payloads back and forth, and get the BIG-IP configured to manage this traffic. In part two, we'll dig into the iRules.
JSON Structure & Terminology
Let's start with some example JSON, then we'll break it down.
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [1, 2, 3],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": ["a", "b", "c"]
  }
}
JSON is pretty simple. The example shown there is a JSON object.
Object delimiters are the curly brackets you see on lines 1 and 11, but also in the nested object on lines 7 and 10.
Every key in JSON must be a string enclosed in double quotes. The keys are on the left side of the colon on lines 2-9.
The colon is the separator between a key and its value.
The comma is the separator between key/value pairs.
There are six data types in JSON:
String - should be enclosed with double quotes, like keys
Number - can be integer, floating point, or exponential format
Boolean - can only be true or false, without quotes, no capitals
Null - this is an intentional omission of a value
Array - this is called a list in Python and Tcl
Object - this is called a dictionary in Python and Tcl
Objects can be nested. (If you've ever pulled stats from iControl REST, you know this to be true!)
Creating a JSON test harness
Since iControl REST is JSON based, I could easily pass payloads from my desktop through a virtual server and onward to an internal host for the iControl REST endpoints, but I wanted something simpler with a pre-defined client and server payload. So I vibe coded a Python script to do just that if you want to use it. I have an Ubuntu desktop connected to both the client and server networks of the v21 BIG-IP in my lab. First I tested on localhost, then got my BIG-IP set up to handle the traffic as well.
Local test
Clientside
jrahm@udesktop:~/scripts$ ./cspayload.py client --host 10.0.3.95 --port 8088
[Client] Connecting to http://10.0.3.95:8088/
[Client] Sending JSON payload (POST):
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [
    1,
    2,
    3
  ],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": [
      "a",
      "b",
      "c"
    ]
  }
}
[Client] Received response (Status: 200):
{
  "message": "Hello from server",
  "type": "response",
  "status": "success",
  "data": {
    "processed": true,
    "timestamp": "2026-01-29"
  }
}
Serverside
jrahm@udesktop:~/scripts$ ./cspayload.py server --host 0.0.0.0 --port 8088
[Server] Starting HTTP server on 0.0.0.0:8088
[Server] Press Ctrl+C to stop
[Server] Received JSON payload:
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [
    1,
    2,
    3
  ],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": [
      "a",
      "b",
      "c"
    ]
  }
}
[Server] Sent JSON response:
{
  "message": "Hello from server",
  "type": "response",
  "status": "success",
  "data": {
    "processed": true,
    "timestamp": "2026-01-29"
  }
}
Great, my JSON payload is properly flowing from client to server on localhost.
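The cspayload.py script itself isn't listed in this article, so as a minimal sketch (an assumption, not the actual script), the server half can be built with nothing but the Python standard library: accept a POSTed JSON payload, print it, and return the same canned response shown in the output above.

#!/usr/bin/env python3
"""Minimal sketch of a JSON test server (an assumed stand-in, not the original cspayload.py)."""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class JSONHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the incoming JSON payload
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        print("[Server] Received JSON payload:")
        print(json.dumps(payload, indent=2))

        # Return a canned JSON response like the one in the test output
        response = {
            "message": "Hello from server",
            "type": "response",
            "status": "success",
            "data": {"processed": True, "timestamp": "2026-01-29"},
        }
        body = json.dumps(response, indent=2).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 8088), JSONHandler)
    print("[Server] Starting HTTP server on 0.0.0.0:8088")
    server.serve_forever()

A matching client needs little more than urllib.request to POST the sample object and print whatever comes back.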
Now let's get the BIG-IP set up to manage this traffic.
BIG-IP config
This is a pretty basic setup; we just need a JSON profile on top of the standard HTTP virtual server setup. My server is listening on 10.0.3.95:8088, so I'll add that as a pool member and then create the virtual in my clientside network at 10.0.2.50:80. The config is below.
ltm virtual virtual.jsontest {
    creation-time 2026-01-29:15:10:10
    destination 10.0.2.50:http
    ip-protocol tcp
    last-modified-time 2026-01-29:16:21:58
    mask 255.255.255.255
    pool pool.jsontest
    profiles {
        http { }
        profile.jsontest { }
        tcp { }
    }
    serverssl-use-sni disabled
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
    vlans {
        ext
    }
    vlans-enabled
    vs-index 2
}
ltm pool pool.jsontest {
    members {
        10.0.3.95:radan-http {
            address 10.0.3.95
            session monitor-enabled
            state up
        }
    }
    monitor http
}
ltm profile json profile.jsontest {
    app-service none
    maximum-bytes 3000
    maximum-entries 1000
    maximum-non-json-bytes 2000
}
BIG-IP test, just traffic, no iRules yet
Ok, let's repeat the same client/server test to make sure traffic is flowing properly through the BIG-IP. I'll just show the clientside this time, as the serverside would be the same as before. Note that the updated IP and port in the client request should match the virtual server you created.
jrahm@udesktop:~/scripts$ ./cspayload.py client --host 10.0.2.50 --port 80
[Client] Connecting to http://10.0.2.50:80/
[Client] Sending JSON payload (POST):
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [
    1,
    2,
    3
  ],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": [
      "a",
      "b",
      "c"
    ]
  }
}
[Client] Received response (Status: 200):
{
  "message": "Hello from server",
  "type": "response",
  "status": "success",
  "data": {
    "processed": true,
    "timestamp": "2026-01-29"
  }
}
Ok. Now we're cooking and BIG-IP is managing the traffic. Part two will drop as soon as I can share some crazy good news about a little thing happening at AppWorld you don't want to miss!
Even More Hands-On Quantum-Safe PKI: Building Enterprise PQC Certificate Authorities with EJBCA Community Edition
Your PQC CAs just graduated from the command line to the corner office.
Back in December and again in January we published Hands-On Quantum-Safe PKI, a step-by-step lab for building quantum-resistant certificate authorities from scratch using OpenSSL. You learned ML-DSA algorithms, built a Root CA, chained an Intermediate CA, issued end-entity certificates, and stood up revocation infrastructure, all by hand, one command at a time. Billions of you went through it. Six of you even enjoyed it.
But here's the thing about building a CA with OpenSSL: it works beautifully for learning and it works beautifully for testing. It does not work beautifully at 2 AM when someone asks you to revoke a certificate and your "management interface" is vim index.txt. Calgon, take me away!
The lab has expanded. The Post-Quantum Cryptography Step-by-Step Lab now includes a complete EJBCA Community Edition deployment track: nine modules that take you from bare metal to enterprise-managed, quantum-resistant Certificate Authorities running inside a real PKI management platform. Same SassyCorp identity. Same ML-DSA-87 Root CA and ML-DSA-65 Intermediate CA. Now with a database, an application server, audit logs, and a web UI that doesn't require you to memorize openssl ca flags. But if you want to, we won't stop you, and we love you for that.
🔥🔥🔥 Access the Complete Lab on GitHub 🔥🔥🔥
The new lab walks through deploying Keyfactor's EJBCA Community Edition v9.3 on Ubuntu with WildFly 35, MariaDB, and OpenJDK 21. You configure a 3-port TLS architecture: HTTP on 8080, public HTTPS on 8442, and mutual TLS admin access on 8443, where your browser has to prove who it is before EJBCA lets you touch anything. It's PKI with actual access control, which is a refreshing change from chmod 600 being your entire security model.
The Evolution: Why This Matters
Think of it as three stages of PQC readiness. The first lab (CNSA 2.0 with OpenSSL) taught you algorithm mechanics for federal use cases. The second lab (FIPS 203/204/205) broadened that to commercial compliance. This third expansion puts those same algorithms inside infrastructure that can actually manage certificates at scale: issuance, renewal, revocation, OCSP, CRL distribution, role-based access, and audit logging that doesn't live in a flat file.
That progression is intentional. You can't meaningfully operate an enterprise PKI platform if you don't understand what's happening underneath it. And you can't stop at OpenSSL if your organization needs to manage more than a handful of certificates. The compliance clock is running: NIST is deprecating classical asymmetric algorithms by 2030, the NSA wants full CNSA 2.0 enforcement by 2033, and Australia is trying to eliminate classical public-key crypto entirely by 2030 (bless their ambitious hearts). Having people who can actually stand up and operate PQC certificate authorities isn't optional anymore. It's PKI Thunderdome!
What You Can Do Next
After completing the lab, you'll have a fully operational EJBCA instance with three CAs: the RSA ManagementCA for internal admin plumbing, plus your ML-DSA-87 Root and ML-DSA-65 Intermediate for quantum-resistant certificate operations. From here you can issue end-entity certificates through EJBCA's enrollment interface, configure CRL distribution points, set up OCSP responders, explore the REST API, and experiment with hybrid certificates that combine PQC and classical algorithms. You've got an enterprise PKI playground that happens to be quantum-resistant.
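As a first "what you can do next" exercise, it's worth confirming that the CA certificates you export from EJBCA really carry the ML-DSA signature algorithms. Below is a small sketch of that check; it assumes OpenSSL 3.5 or later is on the PATH (as in the earlier OpenSSL-based labs) and uses placeholder file names for the exported PEM files.

#!/usr/bin/env python3
"""Sketch: confirm the CA chain really uses ML-DSA (assumes OpenSSL 3.5+ on the PATH)."""
import subprocess

# Placeholder paths; adjust to wherever you saved the CA certificates exported from EJBCA
CERTS = {
    "Root CA (expect ML-DSA-87)": "sassycorp-root-ca.pem",
    "Intermediate CA (expect ML-DSA-65)": "sassycorp-intermediate-ca.pem",
}

for label, path in CERTS.items():
    # 'openssl x509 -noout -text' prints the full certificate, including the
    # Signature Algorithm lines; collect those lines from the output.
    text = subprocess.run(
        ["openssl", "x509", "-in", path, "-noout", "-text"],
        capture_output=True, text=True, check=True,
    ).stdout
    algos = {line.strip() for line in text.splitlines() if "Signature Algorithm" in line}
    print(f"{label}: {', '.join(sorted(algos)) or 'no signature algorithm found'}")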
The whole thing runs on a single VM if you want; that's what we did. No Docker, no scripts, no "just run this compose file and trust me." Every command is manual, every configuration file is edited by hand, and every step explains why. We remain faithful disciples of the "Learn Python the Hard Way" school of pedagogy, mostly because it works and partly because suffering builds character.
The lab is open source, community-driven, and waiting for your pull requests. Go break something, then fix it. That's how you learn.
Access the Complete Lab on GitHub →
References
EJBCA Community Edition (Keyfactor): https://github.com/Keyfactor/ejbca-ce
Keyfactor PQC Hybrid CA Tutorial: https://docs.keyfactor.com/ejbca/latest/tutorial-create-pqc-hybrid-ca-chain
Keyfactor EJBCA Installation Docs: https://docs.keyfactor.com/ejbca-software/latest/installation
WildFly 35 Documentation: https://docs.wildfly.org/35/
PQC Coalition International Requirements: https://pqcc.org/international-pqc-requirements/