deployment
Need step-by-step guidance for migrating BIG-IP i2800 WAF to rSeries (UCS restore vs clean build)

Hello DevCentral Community,

We are planning a hardware refresh migration from a legacy BIG-IP i2800 running WAF/ASM to a new rSeries platform and would like to follow F5 recommended best practices. Could you please advise on the step-by-step process for this migration, specifically around:

- Whether a UCS restore is recommended versus building the configuration fresh
- BIG-IP version compatibility considerations during the migration
- Interface/VLAN mapping differences between iSeries and rSeries hardware
- The best approach to migrating WAF/ASM policies and tuning after migration
- Common issues or lessons learned during real-world cutovers

Current environment:

- BIG-IP model: i2800
- BIG-IP version: 17.1.3
- WAF module: ASM / Advanced WAF
- Deployment: Active/Active

Thank you.
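
For the UCS-versus-rebuild question, one commonly referenced path is to save a UCS on the i2800 and load it on the rSeries BIG-IP tenant with the platform-migrate option, which is intended to account for hardware differences such as interface and trunk definitions. The sketch below uses hypothetical file names, and the exact procedure (including whether platform-migrate is appropriate for your source and target versions) should be confirmed against current F5 guidance before a production cutover.

# Minimal sketch (hypothetical file names) of a UCS-based migration path.
# On the existing i2800: archive the running configuration.
tmsh save sys ucs /var/local/ucs/i2800-pre-migration.ucs

# Copy the archive to the rSeries BIG-IP tenant (e.g., with scp), then on the tenant:
# 'platform-migrate' loads the archive while adjusting for platform-specific settings,
# and 'no-license' keeps the tenant's own license in place.
tmsh load sys ucs /var/local/ucs/i2800-pre-migration.ucs platform-migrate no-license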

"01020066:3: The requested Node (/Common/fqdn1) already exists in partition Common."

Hello, when trying to POST a new application in a tenant that uses an FQDN already shared across tenants in the same cluster, the AS3 POST response is: "01020066:3: The requested Node (/Common/fqdn1) already exists in partition Common." It is saying that fqdn1 already exists in /Common, which is true, as it can be seen as a member in other pools. Now, if I try a different FQDN (fqdn2), it works fine with no issues. Any suggestions on how to find the root cause and fix this without deleting fqdn1 from everywhere and redeploying it? Thank you. AS3 version: 3.56.0
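
One thing worth checking is whether the declarations that reference fqdn1 set the AS3 pool-member property shareNodes. When shareNodes is true, AS3 creates the FQDN node in /Common so several tenants can reference it; if one declaration created the node in /Common and another now tries to create a tenant-local node for the same FQDN (or vice versa), this exact conflict can appear. The fragment below uses hypothetical names and only shows where the property lives; whether it resolves this particular case depends on how fqdn1 was originally created.

# Illustrative AS3 pool fragment (hypothetical pool name and FQDN):
# shareNodes=true asks AS3 to create and reuse the FQDN node in /Common.
cat <<'EOF' > pool-fragment.json
{
  "web_pool": {
    "class": "Pool",
    "monitors": ["https"],
    "members": [{
      "servicePort": 443,
      "addressDiscovery": "fqdn",
      "hostname": "fqdn1.example.com",
      "shareNodes": true
    }]
  }
}
EOF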

BIG IP LTM BEST PRACTICES

I want to do an F5 deployment to balance traffic to multiple web servers for an application that will be accessed by 500k users, and I have several questions.

As an architecture, I have a single-site VXLAN fabric where the F5 pair (HA active/passive) and the firewall pair (HA active/passive) are attached to the border/service leafs (eBGP peering between the firewall and the border leaf, static routing between the F5 and the border). The interface to the ISP is connected to the firewall (I think it would have been recommended to attach it to the border leafs), where the first VIP is configured, translating the public IP to an IP in the first-arm VLAN (client-side transit to the border), specifically where I created the VIP on the F5.

1) I want to know if the design up to this point is correct. I would also like to know whether the subnet where the VIPs reside on the F5 can be different from the subnet used for the client-side transit, and whether it is recommended for it to be different.

2) I also want to know if it is recommended for the second-arm VLAN (server side) to be the same as the web server VLAN, or if it is better for the web servers to sit in a different subnet (another VLAN) with routing between the two networks.

3) I would also like to know whether it is recommended for the source NAT pool to be in the same subnet as the second-arm VLAN (server side) or whether it should be different. In any of the approaches I would still need to perform source NAT, and I also need to implement SSL offloading and WAF (Web Application Firewall).

I am very familiar with the routing aspects for any deployment model. What I would like to know is what the best architectural approach would be, or how you would design such a deployment. Thank you very much; any advice would be greatly appreciated.
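
To make the moving parts concrete, here is a minimal tmsh sketch of the pieces the question describes: a two-arm VIP with SSL offload, a dedicated SNAT pool, and a web server pool. All names, VLANs, and addresses are hypothetical, and the default clientssl profile stands in for a real certificate/key pair.

# Hypothetical addressing: VIP on the client-side transit VLAN, servers on a separate server VLAN.
tmsh create ltm pool web_pool members add { 10.20.30.11:80 10.20.30.12:80 } monitor http
tmsh create ltm snatpool web_snatpool members add { 10.20.40.5 10.20.40.6 }
tmsh create ltm virtual app_vs \
    destination 10.20.10.100:443 ip-protocol tcp \
    profiles add { http clientssl } \
    source-address-translation { type snat pool web_snatpool } \
    pool web_pool
# SSL terminates on the VIP (clientssl), traffic to the servers leaves on port 80,
# and the SNAT pool keeps return traffic symmetric through the F5.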

Accelerate Application Deployment on Google Cloud with F5 NGINXaaS

Introduction

In the push for cloud-native agility, infrastructure teams often face a crossroads: settle for basic, "good enough" load balancing, or take on the heavy lifting of manually managing complex, high-performance proxies. For those building on Google Cloud (GCP), this compromise is no longer necessary. F5 NGINXaaS for Google Cloud represents a shift in how we approach application delivery. It isn't just NGINX running in the cloud; it is a co-engineered, fully managed, on-demand service that lives natively within the GCP ecosystem. This integration allows you to combine the advanced traffic control and programmability NGINX is known for with the effortless scaling and consumption model of a SaaS offering in a platform-first way. By offloading the "toil" of lifecycle management (patching, tuning, and infrastructure provisioning) to F5, teams can redirect their energy toward modernizing application logic and accelerating release cycles. In this article, we'll dive into how this synergy between F5 and Google Cloud simplifies your architecture, from securing traffic with integrated secret management to gaining deep operational insights through native monitoring tools.

Getting Started with NGINXaaS for Google Cloud

The transition to a managed service begins with a seamless onboarding experience through the Google Cloud Marketplace. By leveraging this integrated path, teams can bypass the manual "toil" of traditional infrastructure setup, such as patching and individual instance maintenance. The deployment process involves:

- Marketplace Subscription: Subscribe to the service directly to ensure unified billing and support.
- Network Connectivity: Set up the VPC and Network Attachments that allow NGINXaaS to communicate securely with your backend resources.
- Provisioning: Launch a dedicated deployment that provides enterprise-grade reliability while maintaining a cloud-native feel.

Secure and Manage SSL/TLS in F5 NGINXaaS for Google Cloud

Security is a foundational pillar of this co-engineered service, particularly regarding traffic encryption. NGINXaaS simplifies the lifecycle of SSL/TLS certificates by providing a centralized way to manage credentials. Key security features include:

- Integrated Secrets Management: Works natively with Google Cloud services to handle sensitive data like private keys and certificates securely.
- Proxy Configuration: Shows how to set up a Google Cloud proxy Network Load Balancer to handle incoming client traffic.
- Credential Deployment: Upload and manage certificates directly within the NGINX console so that all application endpoints are protected by robust encryption.

Enhancing Visibility in Google Cloud with F5 NGINXaaS

Visibility is no longer an afterthought but a native component of the deployment, providing high-fidelity telemetry without separate agents.

- Native Telemetry Export: By linking your Google Cloud Project ID and configuring Workload Identity Federation (WIF), metrics and logs are pushed directly to Google Cloud Monitoring.
- Real-Time Dashboards: The observability demo walks through using the Metrics Explorer to visualize critical performance data, such as active HTTP connection counts and response rates.
- Actionable Logging: Integrated Log Analytics lets you use the Logs Explorer to isolate events and troubleshoot application issues within a single toolset, streamlining your operational workflow.
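
As a small illustration of that last point, once telemetry export is enabled you can pull the same NGINXaaS log entries from the command line instead of the Logs Explorer. The filter below is only a placeholder; take the actual resource type and label names for your deployment from Logs Explorer once entries start arriving.

# Hypothetical filter: adjust the resource labels to match what Logs Explorer
# shows for your NGINXaaS deployment, then read the most recent entries.
gcloud logging read \
  'resource.labels.deployment_name="my-nginxaas-deployment" AND severity>=WARNING' \
  --project=my-gcp-project \
  --freshness=1h \
  --limit=20 \
  --format=json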

Whether you are just beginning your transition to the cloud or fine-tuning a sophisticated microservices architecture, F5 NGINXaaS provides the advanced availability, scalability, security, and visibility capabilities necessary for success in the Google Cloud environment.

Conclusion

The integration of F5 NGINXaaS for Google Cloud represents a significant advantage for organizations looking to modernize their application delivery without the traditional overhead of infrastructure management. By shifting to this co-engineered, managed service, teams can bring together advanced NGINX performance and the native agility of the Google Cloud ecosystem. Through the demonstrations provided in this article, we've highlighted how you can:

- Accelerate Onboarding: Move from Marketplace subscription to a live deployment in minutes using Network Attachments.
- Fortify Security: Centralize SSL/TLS management within the NGINX console while leveraging Google Cloud's robust networking layer.
- Maximize Operational Intelligence: Harness deep, real-time observability by piping telemetry directly into Google Cloud Monitoring and Logging.

Resources

- Accelerating app transformation with F5 NGINXaaS for Google Cloud
- F5 NGINXaaS for Google Cloud: Delivering resilient, scalable applications

Deploying the F5 AI Security Certified OpenShift Operator: A Validated Playbook

Introduction

As enterprises race to deploy Large Language Models (LLMs) in production, securing AI workloads has become as critical as securing traditional applications. The F5 AI Security Operator installs two products on your cluster — F5 AI Guardrails and F5 AI Red Team — both powered by CalypsoAI. Together they provide inline prompt/response scanning, policy enforcement, and adversarial red-team testing, all running natively on your own OpenShift cluster.

This article is a validated deployment runbook for F5 AI Security on OpenShift (version 4.20.14) with NVIDIA GPU nodes. It is based on the official Red Hat Operator installation baseline, validated in a real lab deployment on a 3×A40 GPU cluster. If you follow these steps in order, you will end up with a fully functional AI Security stack, avoiding the most common pitfalls along the way.

What Gets Deployed

F5 AI Security consists of four main components, each running in its own OpenShift namespace:

- Moderator + PostgreSQL (namespace cai-moderator): web UI, API gateway, policy management, and backing database
- Prefect Server + Worker (namespace prefect): workflow orchestration for scans and red-team runs
- AI Guardrails Scanner (namespace cai-scanner): inline scanning against your OpenAI-compatible LLM endpoint
- AI Red Team Worker (namespace cai-redteam): GPU-backed adversarial testing; reports results to the Moderator via Prefect

The Moderator is CPU-only. The Scanner and Red Team Worker can leverage GPUs depending on the policies and models you configure.

Infrastructure Requirements

Before you begin, verify your cluster meets these minimums:

- CPU / Control Node: 16 vCPUs, 32 GiB RAM, x86_64, 100 GiB persistent storage
- Worker Nodes (per GPU-enabled component): 4 vCPUs, 16 GiB RAM (32 GiB recommended for Red Team), 100 GiB storage
- GPU Nodes:
  - AI Guardrails: CUDA-compatible GPU, minimum 24 GB VRAM, 100 GiB storage
  - AI Red Team: CUDA-compatible GPU, minimum 48 GB VRAM, 200 GiB storage
  - The GPU must NOT be shared with other workloads

Verify your cluster:

# Check nodes
oc get nodes -o wide

# Check GPU allocatable resources
oc get node -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.nvidia\.com/gpu}{"\n"}{end}'

# Check available storage classes
oc get storageclass
NAME                 PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
lvms-vg1 (default)   topolvm.io    Delete          WaitForFirstConsumer   true                   15d

Step 1 — Install Prerequisites

1.1 Node Feature Discovery (NFD) Operator

NFD labels your nodes with hardware capabilities, which the NVIDIA GPU Operator relies on to target the right nodes.

OpenShift Console → Ecosystem → Software Catalog → Search "Node Feature Discovery Operator" → Install

After installation: Installed Operators → Node Feature Discovery → Create NodeFeatureDiscovery → Accept defaults

Verify:

oc get pods -n openshift-nfd
oc get node --show-labels | grep feature.node.kubernetes.io || true

1.2 NVIDIA GPU Operator

OpenShift Console → Ecosystem → Software Catalog → Search "GPU Operator" → Install

After installation: Installed Operators → NVIDIA GPU Operator → Create ClusterPolicy → Accept defaults

Verify:

oc get pods -n nvidia-gpu-operator
oc describe node <gpu-node> | grep -i nvidia
nvidia-smi

Step 2 — Install F5 AI Security Operator

Prerequisites: You will need registry credentials and a valid license from the F5 AI Security team before proceeding.

Contact F5 Sales: https://www.f5.com/products/get-f5

2.1 Create the Namespace and Pull Secret

export DOCKER_USERNAME='<registry-username>'
export DOCKER_PASSWORD='<registry-password>'
export DOCKER_EMAIL='<your-email>'

oc new-project f5-ai-sec
oc create secret docker-registry regcred \
  -n f5-ai-sec \
  --docker-username=$DOCKER_USERNAME \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL

2.2 Install from OperatorHub

OpenShift Console → Ecosystem → Software Catalog → Search "F5 AI Security Operator" → Install into namespace f5-ai-sec

Verify your F5 AI Security Operator:

# Verify the controller-manager pod is Running
oc -n f5-ai-sec get pods
# NAME                                  READY   STATUS    RESTARTS   AGE
# controller-manager-6f784bd96d-z6sbh   1/1     Running   1          43s

# Verify the CSV reached Succeeded phase
oc -n f5-ai-sec get csv
# NAME                             DISPLAY                   VERSION   PHASE
# f5-ai-security-operator.v0.4.3   F5 Ai Security Operator   0.4.3     Succeeded

# Verify the CRD is registered
oc -n f5-ai-sec get crd | grep ai.security.f5.com
# securityoperators.ai.security.f5.com

2.3 Deploy the SecurityOperator Custom Resource

After installation: Installed Operators → F5 AI Security Operator → Create SecurityOperator

Choose the YAML view and paste the Custom Resource template below, changing select values to match your installation.

apiVersion: ai.security.f5.com/v1alpha1
kind: SecurityOperator
metadata:
  name: security-operator-demo
  namespace: f5-ai-sec
spec:
  registryAuth:
    existingSecret: "regcred"
  # Internal PostgreSQL — convenient for labs, not recommended for production
  postgresql:
    enabled: true
    values:
      postgresql:
        auth:
          password: "pass"
  jobManager:
    enabled: true
  moderator:
    enabled: true
    values:
      env:
        CAI_MODERATOR_BASE_URL: https://<your-hostname>
      secrets:
        CAI_MODERATOR_DB_ADMIN_PASSWORD: "pass"
        CAI_MODERATOR_DEFAULT_LICENSE: "<valid_license_from_f5>"
  scanner:
    enabled: true
  redTeam:
    enabled: true

Key values to customize:

- CAI_MODERATOR_BASE_URL: your cluster's public hostname for the UI (e.g., https://aisec.apps.mycluster.example.com)
- CAI_MODERATOR_DEFAULT_LICENSE: the license string provided by F5
- CAI_MODERATOR_DB_ADMIN_PASSWORD: the DB password; it must match the value set in the PostgreSQL block

For external PostgreSQL (recommended for production), replace the postgresql block with:

moderator:
  values:
    env:
      CAI_MODERATOR_DB_HOST: <my-external-db-hostname>
    secrets:
      CAI_MODERATOR_DB_ADMIN_PASSWORD: <my-external-db-password>

Verify your SecurityOperator resource:

oc -n f5-ai-sec get securityoperator
oc -n f5-ai-sec get securityoperator security-operator-demo -o yaml | sed -n '/status:/,$p'

Step 3 — Required OpenShift Configuration

This is where most deployments hit problems. OpenShift's default restricted Security Context Constraint (SCC) blocks these containers from running. You must explicitly grant anyuid to each service account.
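
If you want to confirm the problem before (or after) granting the SCC, two quick checks help: namespace events surface admission failures that mention security context constraints, and the openshift.io/scc annotation on a running pod shows which SCC it was admitted under. The namespace below is just one example; the same checks apply to the other component namespaces.

# Look for SCC admission failures in a component namespace (cai-moderator shown as an example)
oc -n cai-moderator get events --sort-by=.lastTimestamp | grep -i 'security context constraint'

# Show which SCC each running pod was admitted under
oc -n cai-moderator get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'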

3.1 Apply SCC Policies

oc adm policy add-scc-to-user anyuid -z cai-moderator-sa -n cai-moderator
oc adm policy add-scc-to-user anyuid -z default -n cai-moderator
oc adm policy add-scc-to-user anyuid -z default -n prefect
oc adm policy add-scc-to-user anyuid -z prefect-server -n prefect
oc adm policy add-scc-to-user anyuid -z prefect-worker -n prefect
oc adm policy add-scc-to-user anyuid -z cai-scanner -n cai-scanner
oc adm policy add-scc-to-user anyuid -z cai-redteam-worker -n cai-redteam

3.2 Force PostgreSQL to Restart (if Stuck at 0/1)

If PostgreSQL was stuck before the SCC was applied, bounce it manually:

oc -n cai-moderator scale sts/cai-moderator-postgres-cai-postgresql --replicas=0
oc -n cai-moderator scale sts/cai-moderator-postgres-cai-postgresql --replicas=1

3.3 Restart All Components

oc -n cai-moderator rollout restart deploy
oc -n prefect rollout restart deploy
oc -n cai-scanner rollout restart deploy
oc -n cai-redteam rollout restart deploy

3.4 Verify

oc -n cai-moderator get statefulset
NAME                                    READY   AGE
cai-moderator-postgres-cai-postgresql   1/1     3d4h

oc -n cai-moderator get pods | grep postgres
cai-moderator-postgres-cai-postgresql-0   1/1   Running   0   3d4h

oc -n cai-moderator get pods | grep cai-moderator
cai-moderator-75c47fc9db-sl8t2            1/1   Running   0   3d4h
cai-moderator-postgres-cai-postgresql-0   1/1   Running   0   3d4h

oc -n cai-moderator get svc
NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
cai-moderator                       ClusterIP   172.30.123.197   <none>        5500/TCP,8080/TCP   3d4h
cai-moderator-headless              ClusterIP   None             <none>        8080/TCP            3d4h
cai-moderator-postgres-postgresql   ClusterIP   None             <none>        5432/TCP            3d4h

oc -n cai-moderator get endpoints
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                                ENDPOINTS                             AGE
cai-moderator                       10.130.0.139:8080,10.130.0.139:5500   3d4h
cai-moderator-headless              10.130.0.139:8080                     3d4h
cai-moderator-postgres-postgresql   10.128.0.177:5432                     3d4h

Step 4 — Create OpenShift Routes (Required for UI Access)

The Moderator exposes two ports that must be routed separately: port 5500 for the UI and port 8080 for the /auth path. Skipping the auth route is the most common cause of the blank/black page issue.

# UI route
oc -n cai-moderator create route edge cai-moderator-ui \
  --service=cai-moderator \
  --port=5500 \
  --hostname=<your-hostname> \
  --path=/

# Auth route — required, or the UI will render blank
oc -n cai-moderator create route edge cai-moderator-auth \
  --service=cai-moderator \
  --port=8080 \
  --hostname=<your-hostname> \
  --path=/auth

Verify all pods are running:

oc get pods -n cai-moderator
oc get pods -n cai-scanner
oc get pods -n cai-redteam
oc get pods -n prefect

Access the UI

Open https://<your-hostname> in a browser and log in with the default credentials: admin / pass. Update the admin email address immediately after logging in. You should be able to log in successfully and see the Guardrails dashboard.

Step 5 — Grant Prefect Worker Cluster-scope RBAC

The Prefect worker watches Kubernetes Pods and Jobs at cluster scope to monitor scan and red-team workflow execution. Without this RBAC, prefect-worker fills its logs with 403 Forbidden errors. The Guardrails UI still loads, but scheduled workflows and Red Team runs will fail silently.
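
Before applying the fix, you can confirm the missing permissions directly with impersonation; both checks below should return "no" until the ClusterRoleBinding from this step is in place.

# Pre-check (illustrative): verify the prefect-worker service account lacks cluster-scope access
oc auth can-i list jobs.batch --all-namespaces \
  --as=system:serviceaccount:prefect:prefect-worker
oc auth can-i watch pods --all-namespaces \
  --as=system:serviceaccount:prefect:prefect-worker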

# ClusterRole: allow prefect-worker to list/watch pods, jobs, and events cluster-wide
oc apply -f - <<'YAML'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prefect-worker-watch-cluster
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["pods","pods/log","events"]
  verbs: ["get","list","watch"]
YAML

# ClusterRoleBinding: bind to the prefect-worker ServiceAccount
oc apply -f - <<'YAML'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prefect-worker-watch-cluster
subjects:
- kind: ServiceAccount
  name: prefect-worker
  namespace: prefect
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prefect-worker-watch-cluster
YAML

# Restart to pick up the new permissions
oc -n prefect rollout restart deploy/prefect-worker

Verify RBAC errors are gone:

oc -n prefect logs deploy/prefect-worker --tail=200 \
  | egrep -i 'forbidden|rbac|permission|denied' \
  || echo "OK: no RBAC errors detected"

oc get clusterrolebinding prefect-worker-watch-cluster

LlamaStack Integration

F5 AI Security works alongside any OpenAI-compatible LLM inference endpoint. In our lab we pair it with LlamaStack running a quantized Llama 3.2 model on the same OpenShift cluster — F5 AI Guardrails then scans every prompt and response inline before it reaches your application. A dedicated follow-up post will walk through the full LlamaStack deployment and end-to-end integration in detail. Stay tuned.

Summary

Deploying F5 AI Security on OpenShift is straightforward once you know the OpenShift-specific friction points: SCC policies, the dual-route requirement, and the Prefect cluster-scope RBAC. Following this runbook in sequence — prerequisites, operator install, SCC grants, routes, Prefect RBAC — gets you to a fully operational AI guardrailing stack in a single pass. If you run into anything not covered here, drop a comment below.

Tested on: OpenShift 4.20.14 · F5 AI Security Operator v0.4.3 · NVIDIA A40 GPUs · LlamaStack with Llama-3.2-1B-Instruct-quantized.w8a8

Additional Resources

- F5 AI Security Operator — Red Hat Catalog

which virtual server will be hit?

Hi, we created the following forwarding virtual server for internet traffic on the LTM:

virtual server: internet-vs
source IP: 192.12.0.1 (downstream firewall external interface IP)
destination: 0.0.0.0/0

For the return traffic of this VS, do we need to create another virtual server? If we create a new forwarding virtual server like the one below, will the return traffic of VS "internet-vs" hit the VS "Test-VS"?

virtual server: Test-VS
source: 0.0.0.0/0
destination: 192.12.0.1

Can someone please advise? Thanks in advance!
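
For reference, a forwarding (IP) virtual server like the one described might look roughly like this in tmsh; the name and the fastL4 profile choice are illustrative and not taken from the original post.

# Illustrative only: an IP-forwarding virtual server matching traffic sourced
# from the firewall's external interface toward any destination.
tmsh create ltm virtual internet-vs \
    destination 0.0.0.0:any \
    mask any \
    source 192.12.0.1/32 \
    ip-forward \
    profiles add { fastL4 } \
    translate-address disabled translate-port disabled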

AI Inference for VLLM models with F5 BIG-IP & Red Hat OpenShift

This article shows how to perform Intelligent Load Balancing for AI workloads using the new features of BIG-IP v21 and Red Hat OpenShift. Intelligent Load Balancing is driven by business-logic rules, defined without iRule programming, and by state metrics of the vLLM inference servers gathered from OpenShift's Prometheus.
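
As a rough illustration of the kind of state metrics involved, vLLM exposes Prometheus metrics such as request queue depth and KV-cache usage that a load-balancing decision could key off. The endpoint, token handling, and metric name below are placeholders; metric names can vary by vLLM version, and the actual Prometheus route and auth depend on your cluster's monitoring setup.

# Illustrative query against the Prometheus HTTP API for per-server vLLM queue depth.
# vllm:num_requests_waiting is one of vLLM's published metrics; verify the name for your version.
curl -skG "https://prometheus.example.com/api/v1/query" \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode 'query=vllm:num_requests_waiting'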

Questions on R-Series 5800s

Hello. Trying to bring up a new 5800.

1. What is the recommendation for port-channels? Can we create a port-channel between two different pipelines? For example, ports 3.0, 4.0, 5.0, and 6.0 are in pipeline 1 and ports 7.0, 8.0, 9.0, and 10.0 are in pipeline 2. Can we create a LAG with 3.0 and 7.0? We plan on using a 10G link for HA and were planning to convert 6.0 to 10G and connect it between the two rSeries units.

2. When deploying a tenant, should we just leave Provisioning as "Recommended" instead of "Advanced" and give it 18 vCPUs if we plan on having just one tenant? Not sure how much virtual disk size is recommended. Any recommendation for virtual disk size?

3. If we want to have an additional tenant, is it best to leave the tenant at 14 vCPU, or can we change it later, and what is the impact? Just a restart of the existing tenant?

App Migration and Portability with Equinix Fabric and F5 Distributed Cloud CE

Enterprises face growing pressure to modernize legacy applications, adopt hybrid multi-cloud strategies, and meet rising compliance and performance demands. Migration and portability are now essential to enable agility, optimize costs, and accelerate innovation. Organizations need a secure, high-performance way to move and connect applications across environments without re-architecting. This solution brings together Equinix and F5 to deliver a unified, cloud-adjacent application delivery and security platform.