F5 VELOS: A Next-Generation Fully Automatable Platform
What is VELOS?

The F5 VELOS platform is the next generation of F5's chassis-based systems. VELOS can bridge traditional and modern application architectures by supporting a mix of traditional F5 BIG-IP tenants as well as next-generation BIG-IP Next tenants in the future. F5 VELOS is a key component of the F5 Application Delivery and Security Platform (ADSP).

VELOS relies on a Kubernetes-based platform layer (F5OS) that is tightly integrated with F5 TMOS software. Moving to a microservice-based platform layer allows VELOS to provide functionality that was not possible in previous generations of F5 BIG-IP platforms. Customers do not need to learn Kubernetes but still get its benefits: management of the chassis is still done via a familiar F5 CLI, webUI, or API. The added automation capabilities can greatly simplify the process of deploying F5 products. Automation saves a significant amount of time and resources, which translates to more time to perform critical tasks.

[Figure: F5OS VELOS UI]

Why is VELOS important?

- Get more done in less time by using a highly automatable hardware platform that can deploy software solutions in seconds, not minutes or hours.
- Increased performance improves ROI: the VELOS platform is a high-performance and highly scalable chassis with improved processing power.
- Running multiple versions on the same platform allows for more flexibility than previously possible.
- Significantly reduce the TCO of previous-generation hardware by consolidating multiple platforms into one.

Key VELOS Use-Cases

NetOps Automation
- Shorten time to market by automating network operations and offering cloud-like orchestration with full-stack programmability
- Drive app development and delivery with self-service and faster response time

Business Continuity
- Drive consistent policies across on-prem and public cloud and across hardware- and software-based ADCs
- Build resiliency with VELOS' superior platform redundancy and failover capabilities
- Future-proof investments by running multiple versions of apps side-by-side; migrate applications at your own pace

Cloud Migration On-Ramp
- Accelerate cloud strategy by adopting cloud operating models and on-demand scalability with VELOS, and use that as an on-ramp to cloud
- Dramatically reduce TCO with VELOS systems; extend commercial models to migrate from hardware to software or as applications move to cloud

Automation Capabilities

Declarative APIs and integration with automation frameworks (Terraform, Ansible) greatly simplify operations and reduce overhead:

- AS3 (Application Services 3 Extension): a declarative API that simplifies the configuration of application services. With AS3, customers can deploy and manage configurations consistently across environments.
- Ansible Automation: prebuilt Ansible modules for VELOS enable automated provisioning, configuration, and updates, reducing manual effort and minimizing errors.
- Terraform: organizations leveraging Infrastructure as Code (IaC) can use Terraform to define and automate the deployment of VELOS systems and associated configurations.

[Figure: Example JSON file]
[Figure: Example of running the Automation Playbook]
[Figure: Example of the results]
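The screenshots referenced above are not reproduced here. As a rough illustration of what an F5OS automation playbook can look like, below is a minimal, hedged sketch that creates a VLAN on a VELOS system using the f5networks.f5os Ansible collection. The connection variables (including the httpapi port) and the VLAN values are assumptions and placeholders; verify them against the collection version you have installed.

---
- name: Configure a VLAN on an F5OS (VELOS) system
  hosts: velos
  connection: httpapi
  gather_facts: false

  vars:
    ansible_network_os: f5networks.f5os.f5os
    ansible_httpapi_use_ssl: true
    ansible_httpapi_validate_certs: false   # lab only; validate certs in production
    ansible_httpapi_port: 8888              # assumed F5OS API port

  tasks:
    - name: Create an external VLAN
      f5networks.f5os.f5os_vlan:
        name: external-vlan   # placeholder name
        vlan_id: 100          # placeholder VLAN ID
        state: present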
More information on automation:
- Automating F5OS on VELOS
- GitHub Automation Repository

Specialized Hardware Performance

VELOS offers more hardware-accelerated performance capabilities, with additional FPGA chipsets that are more tightly integrated with TMOS. It also includes the latest Intel processing capabilities. This enhances the following:

- SSL and compression offload
- L4 offload for higher performance and reduced load on software
- Hardware-accelerated SYN flood protection
- Hardware-based protection from more than 100 types of denial-of-service (DoS) attacks
- Support for F5 Intelligence Services

[Figure: VELOS CX1610 chassis]
[Figure: VELOS BX520 blade]

Migration Options (BIG-IP Journeys)

Use BIG-IP Journeys to easily migrate your existing configuration to VELOS. This covers the following:

- The entire L4-L7 configuration can be migrated
- Individual applications can be migrated
- BIG-IP tenant configuration can be migrated
- Automatically identify and resolve migration issues
- Convert UCS files into AS3 declarations if needed
- Post-deployment diagnostics and health

The Journeys Tool, available on DevCentral's GitHub, facilitates the migration of legacy BIG-IP configurations to VELOS-compatible formats. Customers can convert UCS files, validate configurations, and highlight unsupported features during the migration process. Multi-tenancy capabilities in VELOS simplify the process of isolating workloads during and after migration.

GitHub repository for F5 Journeys

Conclusion

The F5 VELOS platform addresses the modern enterprise's need for high-performance, scalable, and efficient application delivery and security solutions. By combining cutting-edge hardware capabilities with robust automation tools and flexible migration options, VELOS empowers organizations to seamlessly transition from legacy platforms while unlocking new levels of performance and operational agility. Whether driven by the need for increased throughput or advanced multi-tenancy, the VELOS platform stands as a future-ready solution for securing and optimizing application delivery in an increasingly complex IT landscape.

Related Content
- Cloud Docs VELOS Guide
- F5 VELOS Chassis System Datasheet
- F5 rSeries: Next-Generation Fully Automatable Hardware
- Demo Video
F5 rSeries: Next-Generation Fully Automatable Hardware
What is rSeries?

F5 rSeries is a rearchitected, next-generation hardware platform that scales application delivery performance and automates application services to address many of today's most critical business challenges. F5 rSeries is a key component of the F5 Application Delivery and Security Platform (ADSP).

rSeries relies on a Kubernetes-based platform layer (F5OS) that is tightly integrated with F5 TMOS software. Moving to a microservice-based platform layer allows rSeries to provide functionality that was not possible in previous generations of F5 BIG-IP platforms. Customers do not need to learn Kubernetes but still get its benefits: management of the hardware is still done via a familiar F5 CLI, webUI, or API. The added automation capabilities can greatly simplify the process of deploying F5 products. Automation saves a significant amount of time and resources, which translates to more time to perform critical tasks.

[Figure: F5OS rSeries UI]

Why is this important?

- Get more done in less time by using a highly automatable hardware platform that can deploy software solutions in seconds, not minutes or hours.
- Increased performance improves ROI: the rSeries platform is a high-performance and highly scalable appliance with improved processing power.
- Running multiple versions on the same platform allows for more flexibility than previously possible.
- Pay-as-you-Grow licensing options that unlock more CPU resources.

Key rSeries Use-Cases

NetOps Automation
- Shorten time to market by automating network operations and offering cloud-like orchestration with full-stack programmability
- Drive app development and delivery with self-service and faster response time

Business Continuity
- Drive consistent policies across on-prem and public cloud and across hardware- and software-based ADCs
- Build resiliency with rSeries' superior performance and failover capabilities
- Future-proof investments by running multiple versions of apps side-by-side; migrate applications at your own pace

Cloud Migration On-Ramp
- Accelerate cloud strategy by adopting cloud operating models and on-demand scalability with rSeries, and use that as an on-ramp to cloud
- Dramatically reduce TCO with rSeries systems; extend commercial models to migrate from hardware to software or as applications move to cloud

Automation Capabilities

Declarative APIs and integration with automation frameworks (Terraform, Ansible) greatly simplify operations and reduce overhead:

- AS3 (Application Services 3 Extension): a declarative API that simplifies the configuration of application services. With AS3, customers can deploy and manage configurations consistently across environments.
- Ansible Automation: prebuilt Ansible modules for rSeries enable automated provisioning, configuration, and updates, reducing manual effort and minimizing errors.
- Terraform: organizations leveraging Infrastructure as Code (IaC) can use Terraform to define and automate the deployment of rSeries appliances and associated configurations.

[Figure: Example JSON file]
[Figure: Example of running the Automation Playbook]
[Figure: Example of the results]
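As with the VELOS article above, the example screenshots are not reproduced here. For variety, below is a minimal, hedged sketch of deploying a BIG-IP tenant on an rSeries appliance with the f5networks.f5os Ansible collection. The f5os_tenant parameter names are assumptions based on that collection's documentation, and every value (image name, IPs, VLAN ID, sizing) is a placeholder; verify against your collection and F5OS version before use.

---
- name: Deploy a BIG-IP tenant on an F5OS (rSeries) appliance
  hosts: rseries
  connection: httpapi
  gather_facts: false

  vars:
    ansible_network_os: f5networks.f5os.f5os
    ansible_httpapi_use_ssl: true
    ansible_httpapi_validate_certs: false   # lab only; validate certs in production
    ansible_httpapi_port: 8888              # assumed F5OS API port

  tasks:
    - name: Create and start a BIG-IP tenant
      f5networks.f5os.f5os_tenant:
        name: bigip-tenant-1
        image_name: BIGIP-17.1.x.ALL-F5OS.qcow2.zip.bundle   # placeholder tenant image
        nodes: [1]                  # assumed: rSeries appliances use node 1
        mgmt_ip: 10.10.10.10        # placeholder management IP
        mgmt_prefix: 24
        mgmt_gateway: 10.10.10.1
        vlans: [100]
        cpu_cores: 4                # placeholder sizing
        memory: 14848               # placeholder sizing (MB)
        running_state: deployed
        state: present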
More information on automation:
- Automating F5OS on rSeries
- GitHub Automation Repository

Specialized Hardware Performance

rSeries offers more hardware-accelerated performance capabilities, with additional FPGA chipsets that are more tightly integrated with TMOS. It also includes the latest Intel processing capabilities. This enhances the following:

- SSL and compression offload
- L4 offload for higher performance and reduced load on software
- Hardware-accelerated SYN flood protection
- Hardware-based protection from more than 100 types of denial-of-service (DoS) attacks
- Support for F5 Intelligence Services

Migration Options (BIG-IP Journeys)

Use BIG-IP Journeys to easily migrate your existing configuration to rSeries. This covers the following:

- The entire L4-L7 configuration can be migrated
- Individual applications can be migrated
- BIG-IP tenant configuration can be migrated
- Automatically identify and resolve migration issues
- Convert UCS files into AS3 declarations if needed
- Post-deployment diagnostics and health

The Journeys Tool, available on DevCentral's GitHub, facilitates the migration of legacy BIG-IP configurations to rSeries-compatible formats. Customers can convert UCS files, validate configurations, and highlight unsupported features during the migration process. Multi-tenancy capabilities in rSeries simplify the process of isolating workloads during and after migration.

GitHub repository for F5 Journeys

Conclusion

The F5 rSeries platform addresses the modern enterprise's need for high-performance, scalable, and efficient application delivery and security solutions. By combining cutting-edge hardware capabilities with robust automation tools and flexible migration options, rSeries empowers organizations to seamlessly transition from legacy platforms while unlocking new levels of performance and operational agility. Whether driven by the need for increased throughput or advanced multi-tenancy, the rSeries platform stands as a future-ready solution for securing and optimizing application delivery in an increasingly complex IT landscape.

Related Content
- Cloud Docs rSeries Guide
- F5 rSeries Appliance Datasheet
- F5 VELOS: A Next-Generation Fully Automatable Platform
- Demo Video
Getting Started with the Certified F5 NGINX Gateway Fabric Operator on Red Hat OpenShift
As enterprises modernize their Kubernetes strategies, the shift from standard Ingress Controllers to the Kubernetes Gateway API is redefining how we manage traffic. For years, the F5 NGINX Ingress Controller has been a foundational component in OpenShift environments. With the certification of F5 NGINX Gateway Fabric (NGF) 2.2 for Red Hat OpenShift, that legacy enters its next chapter.

This new certified operator brings the high-performance NGINX data plane into the standardized, role-oriented Gateway API model, with full integration into the OpenShift Operator Lifecycle Manager (OLM). Whether you're a platform engineer managing cluster ingress or a developer routing traffic to microservices, NGF on OpenShift 4.19+ delivers a unified, secure, and fully supported traffic fabric. In this guide, we walk through installing the operator, configuring the NginxGatewayFabric resource, and addressing OpenShift-specific networking patterns such as NodePort + Route.

Why NGINX Gateway Fabric on OpenShift?

While Red Hat OpenShift 4.19+ includes native support for the Gateway API (v1.2.1), integrating NGF adds critical enterprise capabilities:

✔ Certified & OpenShift-Ready: the operator is fully validated by Red Hat, ensuring UBI-compliant images and compatibility with OpenShift's strict Security Context Constraints (SCCs).
✔ High Performance, Low Complexity: NGF delivers the core benefits long associated with NGINX: efficiency, simplicity, and predictable performance.
✔ Advanced Traffic Capabilities: capabilities like regular-expression path matching and support for ExternalName services allow for complex, hybrid-cloud traffic patterns.
✔ AI/ML Readiness: NGF 2.2 supports the Gateway API Inference Extension, enabling inference-aware routing for GenAI and LLM workloads on platforms like Red Hat OpenShift AI.

Prerequisites

Before we begin, ensure you have:

- Cluster Administrator access to an OpenShift cluster (version 4.19 or later is recommended for Gateway API GA support).
- Access to the OpenShift Console and the oc CLI.
- Ability to pull images from ghcr.io or your internal mirror.

Step 1: Installing the Operator from OperatorHub

We leverage the Operator Lifecycle Manager (OLM) for a "point-and-click" installation that handles lifecycle management and upgrades.

1. Log into the OpenShift Web Console as an administrator.
2. Navigate to Operators > OperatorHub.
3. Search for NGINX Gateway Fabric in the search box.
4. Select the NGINX Gateway Fabric Operator card and click Install.
5. Accept the default installation mode (All namespaces) or select a specific namespace (e.g. nginx-gateway), and click Install.
6. Wait until the status shows Succeeded.

Once installed, the operator will manage the NGF lifecycle automatically.

Step 2: Configuring the NginxGatewayFabric Resource

Unlike the Ingress Controller, which used NginxIngressController resources, NGF uses the NginxGatewayFabric Custom Resource (CR) to configure the control plane and data plane.

1. In the Console, go to Installed Operators > NGINX Gateway Fabric Operator.
2. Click the NginxGatewayFabric tab and select Create NginxGatewayFabric.
3. Select YAML view to configure the deployment specifics, as sketched below.
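To make Step 2 concrete, here is a minimal NginxGatewayFabric CR. It mirrors the CR shown under Option B later in this article, with the service type left at its LoadBalancer default; treat the name and namespace as placeholders for your environment.

apiVersion: gateway.nginx.org/v1alpha1
kind: NginxGatewayFabric
metadata:
  name: default
  namespace: nginx-gateway
spec:
  nginx:
    service:
      # LoadBalancer is the operator default; see Step 3 for the NodePort pattern
      type: LoadBalancer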
Step 3: Exposing the Data Plane Service

NGF uses a Kubernetes Service to expose its data plane. Before the data plane launches, we must tell the controller how to expose it.

Option A - LoadBalancer (ROSA, ARO, Managed OpenShift)

By default, the NGINX Gateway Fabric Operator configures the service type as LoadBalancer. On public cloud managed OpenShift services (like ROSA on AWS or ARO on Azure), this native default works out of the box to provision a cloud load balancer. No additional steps are required.

Option B - NodePort with OpenShift Route (On-Prem/Hybrid)

For on-premise or bare-metal OpenShift clusters lacking a native LoadBalancer implementation, the common pattern is to use a NodePort service exposed via an OpenShift Route.

Update the NGF CR to use NodePort:

1. In the Console, go to Installed Operators > NGINX Gateway Fabric Operator.
2. Click the NginxGatewayFabric tab and select NginxGatewayFabric.
3. Select YAML view to directly edit the configuration specifics.
4. Change the spec.nginx.service.type to NodePort:

apiVersion: gateway.nginx.org/v1alpha1
kind: NginxGatewayFabric
metadata:
  name: default
  namespace: nginx-gateway
spec:
  nginx:
    service:
      type: NodePort

Create the OpenShift Route. After applying the CR, create a Route to expose the NGINX Service:

oc create route edge ngf \
  --service=nginxgatewayfabric-sample-nginx-gateway-fabric \
  --port=http \
  -n nginx-gateway

Note: This creates an edge TLS termination route. For passthrough TLS (allowing NGINX to handle certificates), use --passthrough and target the https port.

Step 4: Validating the Deployment

Verify that the operator has deployed the control plane pods successfully.

oc get pod -n nginx-gateway
NAME                                                              READY   STATUS    RESTARTS   AGE
nginx-gateway-fabric-controller-manager-dd6586597-bfdl5          1/1     Running   0          23m
nginxgatewayfabric-sample-nginx-gateway-fabric-564cc6df4d-hztm8  1/1     Running   0          18m

oc get gatewayclass
NAME    CONTROLLER                                   ACCEPTED   AGE
nginx   gateway.nginx.org/nginx-gateway-controller   True       4d1h

You should also see a GatewayClass named nginx. This indicates the controller is ready to manage Gateway resources.

Step 5: Functional Check with Gateway API

To test traffic, we will use the standard Gateway API resources (Gateway and HTTPRoute).

Deploy a Test Application (Cafe Service)

Ensure you have a backend service running. You can use a simple service for validation, such as the sketch that follows.
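The article does not include the test application manifest, so here is a minimal, hedged sketch of a "coffee" backend that matches the Service name and port referenced by the HTTPRoute below. The container image (nginxdemos/nginx-hello, which serves HTTP on port 8080) is an assumption; any small HTTP echo service will do.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
        - name: coffee
          image: nginxdemos/nginx-hello   # assumed demo image listening on 8080
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: coffee
spec:
  selector:
    app: coffee
  ports:
    - port: 80          # port referenced by the HTTPRoute below
      targetPort: 8080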
Create a Gateway

This resource opens the listener on the NGINX data plane.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      port: 80
      protocol: HTTP

Create an HTTPRoute

This binds the traffic to your backend service.

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
    - name: cafe
  hostnames:
    - "cafe.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: coffee
          port: 80

Test Connectivity

If you used Option B (Route), send a request to your OpenShift Route hostname. If you used Option A, send it to the LoadBalancer IP.

OpenShift 4.19 Compatibility

It is vital to understand the "under the hood" constraints of OpenShift 4.19.

Gateway API Version Pinning: OpenShift 4.19 ships with Gateway API CRDs pinned to v1.2.1. While NGF 2.2 supports v1.3.0 features, it has been conformance-tested against v1.2.1 to ensure stability within OpenShift's version-locked environment.

oc get crd gateways.gateway.networking.k8s.io -o yaml | grep "gateway.networking.k8s.io/"
    gateway.networking.k8s.io/bundle-version: v1.2.1
    gateway.networking.k8s.io/channel: standard

However, looking ahead, future NGINX Gateway Fabric releases may rely on newer Gateway API specifications that are not natively supported by the pinned CRDs in OpenShift 4.19. If you anticipate running a newer NGF version that may not be compatible with the current OpenShift Gateway API version, please reach out to us to discuss your compatibility requirements.

Security Context Constraints (SCC): In previous manual deployments, you might have wrestled with NET_BIND_SERVICE capabilities or creating custom SCCs. The Certified Operator handles these permissions automatically, using UBI-based images that comply with Red Hat's security standards out of the box.

Next Steps: AI Inference

With NGF running, you are ready for advanced use cases. Explore the Gateway API Inference Extension to route traffic to LLMs efficiently, optimizing GPU usage on Red Hat OpenShift AI. The certified NGINX Gateway Fabric Operator simplifies the operational burden, letting you focus on what matters: delivering secure, high-performance applications and AI workloads.

References:
- NGINX Gateway Fabric Operator on Red Hat Catalog
- F5 NGINX Gateway Fabric Certified for Red Hat OpenShift
- NGINX Gateway Fabric Installation Docs

I Tried to Beat OpenAI with Ollama in n8n—Here's Why It Failed (and the Bug I'm Filing)
Hey, community. I wanted to share a story about how I built the n8n Labs workflow. It watches a YouTube channel, summarizes the latest videos with AI agents, and sends a clean HTML newsletter via Gmail. In the video, I show it working flawlessly with OpenAI. But before I got there, I spent a lot of time trying to replicate the same flow using open source models through Ollama with the n8n Ollama node. My results were all over the map.

I really wanted this to be a great "open source first" build. I tried many local models via Ollama, tuned prompts, adjusted parameters, and re-ran tests. The outputs were always unpredictable: sometimes I'd get partial JSON, sometimes extra text around the JSON. Sometimes fields would be missing. Sometimes it would just refuse to stick to the structure I asked for. After enough iterations, I started to doubt whether my understanding of the agent setup was off.

So, I built a quick proof inside the n8n Code node. If the AI Agent step is supposed to take the XML→JSON feed and reshape it into a structured list—title, description, content URL, thumbnail URL—then I should be able to do that deterministically in JavaScript and compare. I wrote a tiny snippet that reads the entries array, grabs the media fields, and formats a minimal output. And guess what? Voila. It worked on the first try and my HTML generator lit up exactly the way I wanted. That told me two things: one, my upstream data (HTTP Request + XML→JSON) was solid; and two, my desired output structure was clear and achievable without any trickery.

With that proof in hand, I turned to OpenAI. I wired the same agent prompt, the same structured output parser, and the same workflow wiring, but swapped the Ollama node for an OpenAI chat model. It worked immediately. Fast, cheap, predictable. The agent returned perfectly clean JSON with the fields I requested. My code node transformed it into HTML. The preview looked right, and Gmail sent the newsletter just like in the demo. So at that point, I felt confident the approach was sound and the result you saw in the video was repeatable, at least with OpenAI in the loop.

Where does that leave Ollama and open source models? I'm not throwing shade: I love open source, and I want this path to be great. My current belief is the failure is somewhere inside the n8n Ollama node code path. I don't think it's the models themselves in isolation; I think the node may be mishandling one or more of these details: how messages are composed (system vs. user); whether "JSON mode" or a grammar/format hint is being passed; token/length defaults that cause truncation; stop settings that let extra text leak into the output; or the way the structured output parser constraints are communicated. If you've worked with local models, you know they can follow structure very well when you give them a strict format or grammar. If the node isn't exposing that (or is dropping it on the floor), you get variability.

To make sure this gets eyes from the right folks, my intent is to file a bug with n8n for the Ollama node. I'll include a minimal, reproducible workflow: the same RSS fetch, the same XML→JSON conversion, the same agent prompt and required output shape, and a comparison run where OpenAI succeeds and Ollama does not. I'll share versions, logs, model names, and settings so the team can trace exactly where the behavior diverges. If there's a missing parameter (like format: json) or a message-role mix-up, great, let's fix it.
If it needs a small enhancement to pass a grammar or schema to the model, even better. The net-net is simple: for AI agents inside n8n to feel predictable with Ollama, we need the node to enforce reliably structured outputs the same way the OpenAI path does. That unlocks a ton of practical automation for folks who prefer local models.

In the meantime, if you're following the lab and want a rock-solid fallback, you can use the Code node to do the exact transformation the agent would do. Here's the JavaScript I wrote and tested in the workflow:

const entries = $input.first().json.feed?.entry ?? [];

function truncate(str, max) {
  if (!str) return '';
  const s = String(str).trim();
  return s.length > max ? s.slice(0, max) + '…' : s;
  // If you want total length (including …) to be max, use:
  // return s.length > max ? s.slice(0, Math.max(0, max - 1)) + '…' : s;
}

const output = entries.map(entry => {
  const g = entry['media:group'] ?? {};
  return {
    title: g['media:title'] ?? '',
    description: truncate(g['media:description'], 60),
    contentUrl: g['media:content']?.url ?? '',
    thumbnailUrl: g['media:thumbnail']?.url ?? ''
  };
});

return [{ json: { output } }];

That snippet proves the data is there and your HTML builder is fine. If OpenAI reproduces the same structured JSON as the code, and Ollama doesn't, the issue is likely in the node's request/response handling rather than your workflow logic.

I'll keep pushing on the bug report so we can make agents with Ollama as predictable as they need to be. Until then, if you want speed and consistency to get the job done, OpenAI works great. If you're experimenting with open source, try enforcing stricter formats and shorter outputs, and keep an eye on what the node actually sends to the model. As always, I'll share updates, because I love sharing knowledge, and I want the open-source path to shine right alongside the rest of our AI, agents, n8n, Gmail, and OpenAI workflows. As always, community, if you have a resolution and can pull it off, please share!
Leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge, Part Two
Introduction

This is the second part of our series on leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge deployments. In Part One, we explored the high-level concepts, architecture decisions, and design principles that make BGP and ECMP such a powerful combination for Customer Edge high availability and maintenance operations.

This article provides step-by-step implementation guidance, including:

- High-level and low-level architecture diagrams
- Complete BGP peering and routing policy configuration in F5 Distributed Cloud Console
- Practical configuration examples for Fortinet FortiGate and Palo Alto Networks firewalls

By the end of this article, you'll have everything you need to implement BGP-based high availability for your Customer Edge deployment.

Architecture Overview

Before diving into configuration, let's establish a clear picture of the architecture we're implementing. We'll examine this from two perspectives: a high-level logical view and a detailed low-level view showing specific IP addressing and AS numbers.

High-Level Architecture

The high-level architecture illustrates the fundamental traffic flow and BGP relationships in our deployment.

Key components:
- Internet: external connectivity to the network
- Next-Generation Firewall: acts as the BGP peer and performs ECMP distribution to Customer Edge nodes
- Customer Edge Virtual Site: two or more CE nodes advertising identical VIP prefixes via BGP

The architecture follows a straightforward principle: the upstream firewall establishes BGP peering with each CE node. Each CE advertises its VIP addresses as /32 routes. The firewall, seeing multiple equal-cost paths to the same destination, distributes incoming traffic across all available CE nodes using ECMP.

Low-Level Architecture with IP Addressing

The low-level diagram provides the specific details needed for implementation, including IP addresses and AS numbers.

Network details:
- Firewall (inside): 10.154.4.119/24 - BGP peer, ECMP router
- CE1 (outside): 10.154.4.160/24 - Customer Edge node 1
- CE2 (outside): 10.154.4.33/24 - Customer Edge node 2
- Global VIP: 192.168.100.10/32 - load balancer VIP

BGP configuration:
- AS number: 65001 (firewall), 65002 (Customer Edge)
- Router ID: 10.154.4.119 (firewall); auto-assigned based on interface IP (Customer Edge)
- Advertised prefix: none (firewall); 192.168.100.0/24 le 32 (Customer Edge)

This configuration uses eBGP (External BGP) between the firewall and CE nodes, with different AS numbers for each. The CE nodes share the same AS number (65002), which is the standard approach for multi-node CE deployments advertising the same VIP prefixes.

Configuring BGP in F5 Distributed Cloud Console

The F5 Distributed Cloud Console provides a centralized interface for configuring BGP peering and routing policies on your Customer Edge nodes. This section walks you through the complete configuration process.

Step 1: Configure the BGP peering

Go to: Multi-Cloud Network Connect --> Manage --> Networking --> External Connectivity --> BGP Peers & Policies

Click on Add BGP Peer, then add the following information:

- Object name
- Site where to apply this BGP configuration
- ASN
- Router ID

Here is an example of the required parameters.

Then click on Peers --> Add Item and fill in the relevant fields as shown below, adapting the parameters to your requirements.
Step 2: Configure the BGP routing policies

Go to: Multi-Cloud Network Connect --> Manage --> Networking --> External Connectivity --> BGP Peers & Policies --> BGP Routing Policies

Click on Add BGP Routing Policy. Add a name for your BGP routing policy object and click on Configure to add the rules, then click on Add Item to add a rule. Here we are going to allow the /32 prefixes from our VIP subnet (192.168.100.0/24). Save the BGP Routing Policy.

Repeat the action to create another BGP routing policy with the exact same parameters except the Action Type, which should be of type Deny. Now we have two BGP routing policies:

- One to allow the VIP prefixes (for normal operations)
- One to deny the VIP prefixes (for maintenance mode)

We still need to add a third and final BGP routing policy, in order to deny any prefixes on the CE. For that, create a third BGP routing policy with this match.

Step 3: Apply the BGP routing policies

To apply the BGP routing policies in your BGP peer object, edit the Peer and:

- Enable the BGP routing policy
- Apply the BGP routing policy objects created before for Inbound and Outbound

Fortinet FortiGate Configuration

FortiGate firewalls are widely deployed as network security appliances and support robust BGP capabilities. This section provides the minimum configuration for establishing BGP peering with Customer Edge nodes and enabling ECMP load distribution.

Step 1: Configure the Router ID and AS Number

Configure the basic BGP settings:

config router bgp
    set as 65001
    set router-id 10.154.4.119
    set ebgp-multipath enable

Step 2: Configure BGP Neighbors

Add each CE node as a BGP neighbor:

    config neighbor
        edit "10.154.4.160"
            set remote-as 65002
            set route-map-in "ACCEPT-CE-VIPS"
            set route-map-out "DENY-ALL"
            set soft-reconfiguration enable
        next
        edit "10.154.4.33"
            set remote-as 65002
            set route-map-in "ACCEPT-CE-VIPS"
            set route-map-out "DENY-ALL"
            set soft-reconfiguration enable
        next
    end
end

Step 3: Create Prefix List for VIP Range

Define the prefix list that matches the CE VIP range:

config router prefix-list
    edit "CE-VIP-PREFIXES"
        config rule
            edit 1
                set prefix 192.168.100.0 255.255.255.0
                set ge 32
                set le 32
            next
        end
    next
end

Important: The ge 32 and le 32 parameters ensure we only match /32 prefixes within the 192.168.100.0/24 range, which is exactly what CE nodes advertise for their VIPs.
Step 4: Create Route Maps

Configure route maps to implement the filtering policies.

Inbound route map (accept VIP prefixes):

config router route-map
    edit "ACCEPT-CE-VIPS"
        config rule
            edit 1
                set match-ip-address "CE-VIP-PREFIXES"
            next
        end
    next
end

Outbound route map (deny all advertisements):

config router route-map
    edit "DENY-ALL"
        config rule
            edit 1
                set action deny
            next
        end
    next
end

Step 5: Verify BGP Configuration

After applying the configuration, verify the BGP sessions and routes.

Check BGP neighbor status:

get router info bgp summary

VRF 0 BGP router identifier 10.154.4.119, local AS number 65001
BGP table version is 4
1 BGP AS-PATH entries
0 BGP community entries

Neighbor      V  AS     MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down   State/PfxRcd
10.154.4.33   4  65002  2092     2365     0       0    0     00:05:33  1
10.154.4.160  4  65002  2074     2346     0       0    0     00:14:14  1

Total number of neighbors 2

Verify ECMP routes:

get router info routing-table bgp

Routing table for VRF=0
B   192.168.100.10/32 [20/255] via 10.154.4.160 (recursive is directly connected, port2), 00:00:11, [1/0]
                      [20/255] via 10.154.4.33 (recursive is directly connected, port2), 00:00:11, [1/0]

Palo Alto Networks Configuration

Palo Alto Networks firewalls provide enterprise-grade security with comprehensive routing capabilities. This section covers the minimum BGP configuration for peering with Customer Edge nodes.

Note: This part assumes that the Palo Alto firewall is configured in the new "Advanced Routing Engine" mode, and we will use the logical router named "default".

Step 1: Configure ECMP parameters

set network logical-router default vrf default ecmp enable yes
set network logical-router default vrf default ecmp max-path 4
set network logical-router default vrf default ecmp algorithm ip-hash

Step 2: Configure address objects and firewall rules for BGP peering

set address CE1 ip-netmask 10.154.4.160/32
set address CE2 ip-netmask 10.154.4.33/32
set address-group BGP_PEERS static [ CE1 CE2 ]
set address LOCAL_BGP_IP ip-netmask 10.154.4.119/32
set rulebase security rules ALLOW_BGP from service
set rulebase security rules ALLOW_BGP to service
set rulebase security rules ALLOW_BGP source LOCAL_BGP_IP
set rulebase security rules ALLOW_BGP destination BGP_PEERS
set rulebase security rules ALLOW_BGP application bgp
set rulebase security rules ALLOW_BGP service application-default
set rulebase security rules ALLOW_BGP action allow

Step 3: Palo Alto Configuration Summary (CLI Format)

set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry network 192.168.100.0/24
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry greater-than-or-equal 32
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry less-than-or-equal 32
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 action permit
set network routing-profile filters prefix-list ALLOWED_PREFIXES description "Allow only /32 inside 192.168.100.0/24"
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry network 0.0.0.0/0
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry greater-than-or-equal 0
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry less-than-or-equal 32
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 action deny
set network routing-profile filters prefix-list DENY_ALL description "Deny all prefixes"
prefixes" set network routing-profile bgp filtering-profile FILTER_INBOUND ipv4 unicast inbound-network-filters prefix-list ALLOWED_PREFIXES set network routing-profile bgp filtering-profile FILTER_OUTBOUND ipv4 unicast inbound-network-filters prefix-list DENY_ALL set network logical-router default vrf default bgp router-id 10.154.4.119 set network logical-router default vrf default bgp local-as 65001 set network logical-router default vrf default bgp install-route yes set network logical-router default vrf default bgp enable yes set network logical-router default vrf default bgp peer-group BGP_PEERS type ebgp set network logical-router default vrf default bgp peer-group BGP_PEERS address-family ipv4 ipv4-unicast-default set network logical-router default vrf default bgp peer-group BGP_PEERS filtering-profile ipv4 FILTER_INBOUND set network logical-router default vrf default bgp peer-group BGP_PEERS filtering-profile ipv4 FILTER_OUTBOUND set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 peer-as 65002 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 local-address interface ethernet1/2 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 local-address ip svc-intf-ip set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 peer-address ip 10.154.4.160 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 peer-as 65002 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 local-address interface ethernet1/2 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 local-address ip svc-intf-ip set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 peer-address ip 10.154.4.33 Step 4: Verify BGP Configuration After committing the configuration, verify the BGP sessions and routes: Check BGP neighbor status: run show advanced-routing bgp peer status logical-router default Logical Router: default ============== Peer Name: CE2 BGP State: Established, up for 00:01:55 Peer Name: CE1 BGP State: Established, up for 00:00:44 Verify ECMP routes: run show advanced-routing route logical-router default Logical Router: default ========================== flags: A:active, E:ecmp, R:recursive, Oi:ospf intra-area, Oo:ospf inter-area, O1:ospf ext 1, O2:ospf ext 2 destination protocol nexthop distance metric flag tag age interface 0.0.0.0/0 static 10.154.1.1 10 10 A 01:47:33 ethernet1/1 10.154.1.0/24 connected 0 0 A 01:47:37 ethernet1/1 10.154.1.99/32 local 0 0 A 01:47:37 ethernet1/1 10.154.4.0/24 connected 0 0 A 01:47:37 ethernet1/2 10.154.4.119/32 local 0 0 A 01:47:37 ethernet1/2 192.168.100.10/32 bgp 10.154.4.33 20 255 A E 00:01:03 ethernet1/2 192.168.100.10/32 bgp 10.154.4.160 20 255 A E 00:01:03 ethernet1/2 total route shown: 7 Implementing CE Isolation for Maintenance As discussed in Part One, one of the key advantages of BGP-based deployments is the ability to gracefully isolate CE nodes for maintenance. Here’s how to implement this in practice. Isolation via F5 Distributed Cloud Console To isolate a CE node from receiving traffic, in your BGP peer object, edit the Peer and: Change the Outbound BGP routing policy from the one that is allowing the VIP prefixes to the one that is denying the VIP prefixes The CE will stop advertising its VIP routes, and within seconds (based on BGP timers), the upstream firewall will remove this CE from its ECMP paths. 
Verification During Maintenance

On your firewall, verify the route withdrawal (in this case we are using a FortiGate firewall):

get router info bgp summary

VRF 0 BGP router identifier 10.154.4.119, local AS number 65001
BGP table version is 4
1 BGP AS-PATH entries
0 BGP community entries

Neighbor      V  AS     MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down   State/PfxRcd
10.154.4.33   4  65002  2070     2345     0       0    0     00:04:05  0
10.154.4.160  4  65002  2057     2326     0       0    0     00:12:46  1

Total number of neighbors 2

We are no longer receiving any prefixes from the 10.154.4.33 peer.

get router info routing-table bgp

Routing table for VRF=0
B   192.168.100.10/32 [20/255] via 10.154.4.160 (recursive is directly connected, port2), 00:06:34, [1/0]

And we now have only one path.

Restoring the CE in the Data Path

After maintenance is complete:

1. Return to the BGP Peer configuration in the F5 XC Console
2. Restore the original export policy (permit VIP prefixes)
3. Save the configuration
4. On the upstream firewall, confirm that CE prefixes are received again and that ECMP paths are restored

Conclusion

This article has provided the complete implementation details for deploying BGP and ECMP with F5 Distributed Cloud Customer Edge nodes. You now have:

- A clear understanding of the architecture at both high and low levels
- Step-by-step instructions for configuring BGP in F5 Distributed Cloud Console
- Ready-to-use configurations for both Fortinet FortiGate and Palo Alto Networks firewalls
- Practical guidance for implementing graceful CE isolation for maintenance

By combining the concepts from the first article with the practical configurations in this article, you can build a robust, highly available application delivery infrastructure that maximizes resource utilization, provides automatic failover, and enables zero-downtime maintenance operations. The BGP-based approach transforms your Customer Edge deployment from a traditional Active/Standby model into a fully active topology where every node contributes to handling traffic, and any node can be gracefully removed for maintenance without impacting your users.

Leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge, Part One
Introduction

Achieving high availability for application delivery while maintaining operational flexibility is a fundamental challenge for modern enterprises. When deploying F5 Distributed Cloud Customer Edge (CE) nodes in private data centers, on-premises environments, or in some cases public cloud environments, the choice of how traffic reaches these nodes significantly impacts both service resilience and operational agility.

This article explores how Border Gateway Protocol (BGP) combined with Equal-Cost Multi-Path (ECMP) routing provides an elegant solution for two critical operational requirements:

- High availability of traffic for load balancers running on Customer Edge nodes
- Easier maintenance and upgrades of CE nodes without service disruption

By leveraging dynamic routing protocols instead of static configurations, you gain the ability to gracefully remove individual CE nodes from the traffic path, perform maintenance or upgrades, and seamlessly reintroduce them, all without impacting your application delivery services.

Understanding BGP and ECMP Benefits for Customer Edge Deployments

Why BGP and ECMP?

Traditional approaches to high availability often rely on protocols like VRRP, which create Active/Standby topologies. While functional, this model leaves standby nodes idle and creates potential bottlenecks on the active node. BGP with ECMP fundamentally changes this paradigm.

The Power of ECMP

Equal-Cost Multi-Path routing allows your network infrastructure to distribute traffic across multiple CE nodes simultaneously. When each CE node advertises the same VIP prefix via BGP, your upstream router learns multiple equal-cost paths and distributes traffic across all available nodes. This creates a true Active/Active topology where:

- All CE nodes actively process traffic
- Load is distributed across the entire set of CEs
- Failure of any single node automatically redistributes traffic to remaining nodes
- No manual intervention is required for failover

Key benefits:
- Active/Active/Active: all nodes handle traffic simultaneously, maximizing resource utilization
- Automatic failover: when a CE stops advertising its VIP, traffic automatically shifts to the remaining nodes
- Graceful maintenance: withdraw BGP advertisements to drain traffic before maintenance
- Horizontal scaling: add new CE nodes and they automatically join the traffic distribution

Understanding Customer Edge VIP Architecture

F5 Distributed Cloud Customer Edge nodes support a flexible VIP architecture that integrates seamlessly with BGP. Understanding how VIPs work is essential for proper BGP configuration.

The Global VIP

Each Customer Edge site can be configured with a Global VIP: a single IP address that serves as the default listener for all load balancers instantiated on that CE. Key characteristics:

- Configured at the CE site level in the F5 Distributed Cloud Console
- Acts as the default VIP for any load balancer that doesn't have a dedicated VIP configured
- Advertised as a /32 prefix in the routing table
- To know: the Global VIP is NOT generated in the CE's routing table until at least one load balancer is configured on that CE

This last point is particularly important: if you configure a Global VIP but haven't deployed any load balancers, the VIP won't be advertised via BGP. This prevents advertising unreachable services. For this article, we are going to use 192.168.100.0/24 as the VIP subnet for all the examples.
Load Balancer Dedicated VIPs

Individual load balancers can be configured with their own Dedicated VIP, separate from the Global VIP. When a dedicated VIP is configured:

- The load balancer responds only to its dedicated VIP
- The load balancer does not respond to the Global VIP
- The dedicated VIP is also advertised as a /32 prefix
- Multiple load balancers can have different dedicated VIPs on the same CE

This flexibility allows you to:

- Separate different applications on different VIPs
- Implement different routing policies per application
- Maintain granular control over traffic distribution

VIP summary:
- Global VIP: scoped per CE, /32 prefix, advertised when at least one LB is configured on the CE
- Dedicated VIP: scoped per load balancer, /32 prefix, advertised when the specific LB is configured

BGP Filtering Best Practices

Proper BGP filtering is essential for security and operational stability. This section covers the recommended filtering policies for both the upstream network device (firewall/router) and the Customer Edge nodes.

Design Principles

The filtering strategy follows the principle of explicit allow, implicit deny:

- Only advertise what is necessary
- Only accept what is expected
- Use prefix lists with appropriate matching for /32 routes

Upstream Device Configuration (Firewall/Router)

The device peering with your CE nodes should implement strict filtering.

Inbound policy on the firewall/router: the firewall/router should accept only the CE VIP prefixes. In our example, all VIPs fall within 192.168.100.0/24.

Why "or longer" (le 32)? Since VIPs are advertised as /32 prefixes, you need to match prefixes more specific than /24. The le 32 (less than or equal to 32) or "or longer" modifier ensures your filter matches the actual /32 routes while still using a manageable prefix range.

Outbound policy on the firewall/router: by default, the firewall/router should not advertise any prefixes to the CE nodes.

Customer Edge Configuration

The CE nodes should implement complementary filtering.

Outbound policy: CEs should advertise only their VIP prefixes. Since all VIPs on Customer Edge nodes are /32 addresses, your prefix filters must also follow the "or longer" approach.

Inbound policy: CEs should not accept any prefixes from the upstream firewall/router.

Filtering summary:
- Firewall/router, inbound (from CE): accept VIP range only (192.168.100.0/24 le 32)
- Firewall/router, outbound (to CE): deny all
- CE, outbound (to router): advertise VIPs only (192.168.100.0/24 le 32)
- CE, inbound (from router): deny all

Graceful CE Isolation for Maintenance

One of the most powerful benefits of using BGP is the ability to gracefully remove a CE node from the traffic path for maintenance, upgrades, or troubleshooting. This section explains how to isolate a CE by manipulating its BGP route advertisements.

The Maintenance Challenge

When you need to perform maintenance on a CE node (OS upgrade, software update, reboot, troubleshooting), you want to:

1. Stop new traffic from reaching the node
2. Allow existing connections to complete gracefully
3. Perform your maintenance tasks
4. Reintroduce the node to the traffic pool

With VRRP, this can require manual failover procedures. With BGP, you simply stop advertising VIP routes.

Isolation Process Overview

Step 1: Configure BGP Route Filtering on the CE

To isolate a CE, you need to apply a BGP policy that prevents the VIP prefixes from being advertised or received.

Where to Apply the Policy?
There are two possible approaches to stop a CE from receiving traffic:

1. On the BGP peer (firewall/router): configure an inbound filter on the upstream device to reject routes from the specific CE you want to isolate.
2. On the Customer Edge itself: configure an outbound export policy on the CE to stop advertising its VIP prefixes.

We recommend the F5 Distributed Cloud approach (option 2) for several reasons:

- Automation: the firewall/router approach requires separate automation for network devices, while the CE-side change can be performed in the F5 XC Console or fully automated through the API or Terraform in an infrastructure-as-code approach.
- Team ownership: the firewall/router approach requires coordination with the network team, while the CE team has full autonomy over its own objects.
- Consistency: configuration syntax varies by network vendor, whereas the F5 XC Console offers a single, consistent interface.
- Audit trail: changes on network devices are spread across multiple systems, while CE-side changes are centralized in the F5 XC Console.

In many organizations, the team responsible for managing the Customer Edge nodes is different from the team managing the network infrastructure (firewalls, routers). By implementing isolation policies on the CE side, you eliminate cross-team dependencies and enable self-service maintenance operations.

Applying the Filter

The filter is configured through the F5 Distributed Cloud Console on the specific CE site. The filter configuration will:

- Match the VIP prefix range (192.168.100.0/24 or longer)
- Set the action to Deny
- Apply to the outbound direction (export policy)

Once applied, the CE stops advertising its VIP /32 routes to its BGP peers.

Step 2: Perform Maintenance

With the CE isolated from the traffic path, you can safely:

- Reboot the CE node
- Perform OS upgrades
- Apply software updates

Existing long-lived connections to the isolated CE will eventually time out, while new connections are automatically directed to the remaining CEs.

Step 3: Reintroduce the CE in the Data Path

After maintenance is complete:

1. Remove or modify the BGP export filter to allow VIP advertisement
2. The CE will begin advertising its VIP /32 routes again
3. The upstream firewall/router will add the CE back to its ECMP paths
4. Traffic will automatically start flowing to the restored CE

Isolation benefits summary:
- Zero-touch failover: traffic automatically shifts to the remaining CEs
- Controlled maintenance windows: isolate at your convenience
- No application impact: users experience no disruption
- Reversible: simply re-enable route advertisement to restore
- Per-node granularity: isolate individual nodes without affecting others

Rolling Upgrade Strategy

Using this isolation technique, you can implement rolling upgrades across your CEs.

Rolling upgrade sequence:
- Step 1: Isolate CE1 → Upgrade CE1 → Put CE1 back in the data path
- Step 2: Isolate CE2 → Upgrade CE2 → Put CE2 back in the data path

Throughout this process:
- At least one CE is always handling traffic
- No service interruption occurs
- Each CE is validated before moving to the next

Conclusion

BGP with ECMP provides a robust, flexible foundation for high-availability F5 Distributed Cloud Customer Edge deployments.
By leveraging dynamic routing protocols:

- Traffic is distributed across all active CE nodes, maximizing resource utilization
- Failover is automatic when a CE becomes unavailable
- Maintenance is graceful through controlled route withdrawal
- Scaling is seamless, as new CEs automatically join the traffic distribution once they are BGP-peered

The combination of proper BGP filtering (accepting only VIP prefixes, advertising only what's necessary) and the ability to isolate individual CEs through route manipulation gives you complete operational control over your application delivery infrastructure. Whether you're performing routine maintenance, emergency troubleshooting, or rolling out upgrades, BGP-based CE deployments ensure your applications remain available and your operations remain smooth.

F5 Container Ingress Services (CIS) and using k8s traffic policies to send traffic directly to pods
This article will take a look at how you can use health monitors on the BIG-IP to solve the issue of constant AS3 REST-API pool member changes, or to handle the case where there is a sidecar service mesh like Istio (F5 has a version of the Istio mesh called Aspen Mesh) or Linkerd. I have also described some possible enhancements for CIS/AS3, the Nginx Ingress Controller, or Gateway Fabric that would be nice to have in the future.

1. Intro
2. Install Nginx Ingress Open source and CIS
3. F5 CIS without Ingress/Gateway
4. F5 CIS with Ingress
5. F5 CIS with Gateway fabric
6. Summary

1. Intro

F5 CIS allows integration between F5 and Kubernetes or OpenShift clusters. F5 CIS has two modes, NodePort and ClusterIP, and this is well documented at https://clouddocs.f5.com/containers/latest/userguide/config-options.html. There is also a mode called auto that I prefer, as based on the k8s service type (NodePort or ClusterIP) it knows how to configure the pool members.

CIS in ClusterIP mode is generally much better, as you bypass the kube-proxy and send traffic directly to pods, but there can be issues if k8s pods are constantly being scaled up or down, as CIS uses the AS3 REST-API to talk to and configure the F5 BIG-IP. I have also seen issues where a bug or a config error that is not well validated can bring the entire CIS-to-BIG-IP control channel down, and you then see 422 errors in the F5 logs and in the CIS logs.

By using NodePort with "externalTrafficPolicy: Local" (and, if there is an ingress, also "internalTrafficPolicy: Local"), you can also bypass the Kubernetes proxy and send traffic directly to the pods, and BIG-IP health monitoring will mark the nodes that don't have pods as down, since the traffic policies prevent nodes that do not have the web application pods from sending the traffic to other nodes.

2. Install Nginx Ingress Open source and CIS

As I already have the k8s version of nginx and F5 CIS, I need three different ingress classes. The k8s nginx is end of life (https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/), so my example also shows how you can run the two nginx versions, the k8s nginx and the F5 nginx, in parallel. There is a new option to use the Operator Lifecycle Manager (OLM), which when installed will install the components; this is an even better way than helm (you can install OLM with helm, and this is an even newer way to manage nginx ingress!), but I found it still at an early stage for k8s, while for OpenShift it is much more advanced. I have installed Nginx as a daemonset, not a deployment (I will mention why later on), and I have added a listener config for the F5 TransportServer, even though it is seen later why it is not usable at the moment.
helm install -f values.yaml nginx-ingress oci://ghcr.io/nginx/charts/nginx-ingress \
  --version 2.4.1 \
  --namespace f5-nginx \
  --set controller.kind=daemonset \
  --set controller.image.tag=5.3.1 \
  --set controller.ingressClass.name=nginx-nginxinc \
  --set controller.ingressClass.create=true \
  --set controller.ingressClass.setAsDefaultIngress=false

cat values.yaml

controller:
  enableCustomResources: true
  globalConfiguration:
    create: true
    spec:
      listeners:
        - name: nginx-tcp
          port: 88
          protocol: TCP

kubectl get ingressclasses
NAME             CONTROLLER                     PARAMETERS   AGE
f5               f5.com/cntr-ingress-svcs       <none>       8d
nginx            k8s.io/ingress-nginx           <none>       40d
nginx-nginxinc   nginx.org/ingress-controller   <none>       32s

kubectl get pods -o wide -n f5-nginx
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
nginx-ingress-controller-2zbdr   1/1     Running   0          62s   10.10.133.234   worker-2   <none>           <none>
nginx-ingress-controller-rrrc9   1/1     Running   0          62s   10.10.226.87    worker-1   <none>           <none>

The CIS config is shown below. I have used "pool_member_type" auto, as this allows ClusterIP or NodePort services to be used at the same time.

helm install -f values.yaml f5-cis f5-stable/f5-bigip-ctlr

cat values.yaml

bigip_login_secret: f5-bigip-ctlr-login
rbac:
  create: true
serviceAccount:
  create: true
  name:
namespace: f5-cis
args:
  bigip_url: X.X.X.X
  bigip_partition: kubernetes
  log_level: DEBUG
  pool_member_type: auto
  insecure: true
  as3_validation: true
  custom_resource_mode: true
  log-as3-response: true
  load-balancer-class: f5
  manage-load-balancer-class-only: true
  namespaces: [default, test, linkerd-viz, ingress-nginx, f5-nginx]
  # verify-interval: 35
image:
  user: f5networks
  repo: k8s-bigip-ctlr
  pullPolicy: Always
nodeSelector: {}
tolerations: []
livenessProbe: {}
readinessProbe: {}
resources: {}
version: latest
I suggest seeing my article at "the Medium" for more information see https://medium.com/@nikoolayy1/connecting-kubernetes-k8s-cluster-to-external-router-using-bgp-with-calico-cni-and-nginx-ingress-2c45ebe493a1 Keep in mind that for the new options with Ambient mesh (sidecarless) the CIS without Ingress will not work as F5 does not speak HBONE (or HTTP-Based Overlay Network Environment) protocol that is send in the HTTP Connect tunnel to inform the zTunnel (layer 3/4 proxy that starts or terminates the mtls) about the real source identity (SPIFFE and SPIRE) that may not be the same as the one in CN/SAN client SSL cert. Maybe in the future there could be an option based on a CRD to provide the IP address of an external device like F5 and the zTunnel proxy to terminate the TLS/SSL (the waypoint layer 7 proxy usually Envoy is not needed in this case as F5 will do the HTTP processing) and send traffic to the pod but for now I see no way to make F5 work directly with Ambient mesh. If the ztunnel takes the identity from the client cert CN/SAN F5 will not have to even speak HBONE. 4. F5 CIS with Ingress Why we may need an ingress just as a gateway into the k8s you may ask? Nowadays many times a service mesh like linkerd or istio or F5 aspen mesh is used and the pods talk to each other with mTLS handled by the sidecars and an Ingress as shown in https://linkerd.io/2-edge/tasks/using-ingress/ is an easy way for the client-side to be https while the server side to be the service mesh mtls, Even ambient mesh works with Ingresses as it captures traffic after them. It is possible from my tests F5 to talk to a linkerd injected pods for example but it is hard! I have described this in more detail at https://medium.com/@nikoolayy1/connecting-kubernetes-k8s-cluster-to-external-router-using-bgp-with-calico-cni-and-nginx-ingress-2c45ebe493a1 Unfortunately when there is an ingress things as much more complex! F5 has Integration called "IngressLink" but as I recently found out it is when BIG-IP is only for Layer 3/4 Load Balancing and the Nginx Ingress Controller will actually do the decryption and AppProtect WAF will be on the Nginx as well F5 CIS IngressLink attaching WAF policy on the big-ip through the CRD ? | DevCentral Wish F5 to make an integration like "IngressLink" but the reverse where each node will have nginx ingress as this can be done with demon set and not deployment on k8s and Nginx Ingress will be the layer 3/4, as the Nginx VirtualServer CRD support this and to just allow F5 in the k8s cluster. Below is how currently this can be done. I have created a Transportserver but is not used as it does not at the momemt support the option "use-cluster-ip" set to true so that Nginx does not bypass the service and to go directly to the endpoints as this will cause nodes that have nginx ingress pod but no application pod to send the traffic to other nodes and we do not want that as add one more layer of load balancing latency and performance impact. The gateway is shared as you can have a different gateway per namespace or shared like the Ingress. 
apiVersion: v1
kind: Service
metadata:
  name: hello-world-app-new-cluster
  labels:
    app: hello-world-app-new-cluster
spec:
  internalTrafficPolicy: Local
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: hello-world-app-new
  type: ClusterIP
---
apiVersion: k8s.nginx.org/v1
kind: TransportServer
metadata:
  name: nginx-tcp
  annotations:
    nginx.org/use-cluster-ip: "true"
spec:
  listener:
    name: nginx-tcp
    protocol: TCP
  upstreams:
  - name: nginx-tcp
    service: hello-world-app-new-cluster
    port: 8080
  action:
    pass: nginx-tcp
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: nginx-http
spec:
  host: "app.example.com"
  upstreams:
  - name: webapp
    service: hello-world-app-new-cluster
    port: 8080
    use-cluster-ip: true
  routes:
  - path: /
    action:
      pass: webapp

The second part of the configuration is to expose the Ingress to BIG-IP using CIS.

---
apiVersion: v1
kind: Service
metadata:
  name: f5-nginx-ingress-controller
  namespace: f5-nginx
  labels:
    app.kubernetes.io/name: nginx-ingress
spec:
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app.kubernetes.io/name: nginx-ingress
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
---
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: vs-hello-ingress
  namespace: f5-nginx
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.81"
  virtualServerHTTPPort: 80
  snat: auto
  pools:
  - monitor:
      interval: 10
      recv: "200"
      send: "GET / HTTP/1.1\r\nHost:app.example.com\r\nConnection: close\r\n\r\n"
      timeout: 31
      type: http
    path: /
    service: f5-nginx-ingress-controller
    servicePort: 80

Only the nodes that have a pod will answer the health monitor. Hopefully F5 can create an integration and CRD that makes this configuration simpler, like "IngressLink", and add the "use-cluster-ip" option to the TransportServer, as NGINX does not need to see the HTTP traffic at all. This is on my wish list for this year 😁 Also, if AS3 could reference an existing group of nodes, just with different ports, that could help: CIS would need to push the AS3 declaration of the nodes only once, and the different VirtualServers could then reference it with different ports, making the AS3 REST API traffic much smaller.

5. F5 CIS with Gateway fabric

This does not work at the moment, as Gateway Fabric unfortunately does not support the "use-cluster-ip" option. The idea is to deploy the Gateway Fabric as a DaemonSet and inject it with a sidecar (or even without one, as this will work with ambient meshes). As the Kubernetes world is moving away from the Ingress, this will be a good option. Gateway Fabric natively supports TCP and UDP traffic, and even TLS traffic that is not HTTPS; by exposing the Gateway Fabric with a ClusterIP or NodePort service, the Gateway Fabric will use the different hostnames to select the correct route to send the traffic to!
helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway -f values-gateway.yaml

cat values-gateway.yaml
nginx:
  # Run the data plane per-node
  kind: daemonSet
  # How the data plane gets exposed when you create a Gateway
  service:
    type: NodePort  # or LoadBalancer
# (optional) if you're using Gateway API experimental channel features:
nginxGateway:
  gwAPIExperimentalFeatures:
    enable: true

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gw
  namespace: nginx-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: wildcard-tls
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: app
spec:
  parentRefs:
  - name: shared-gw
    namespace: nginx-gateway
  hostnames:
  - app.example.com
  rules:
  - backendRefs:
    - name: app-svc
      port: 8080

F5 NGINX Gateway Fabric is evolving really fast from what I see, so hopefully we will see the features I mentioned soon, and you can always open a GitHub issue. The documentation is at https://docs.nginx.com/nginx-gateway-fabric and, as this uses Kubernetes CRDs, the full options can be seen at TLS - Kubernetes Gateway API

6. Summary

With the release of TMOS 21, F5 now supports many more health monitors and pool members, so this way of deploying CIS with NodePort services may offer benefits with TMOS 21.1, which will be the stable version, as shown in https://techdocs.f5.com/en-us/bigip-21-0-0/big-ip-release-notes/big-ip-new-features.html With auto mode, some services can still be directly exposed to BIG-IP, as CIS config changes usually remove a pool member pod faster than BIG-IP health monitors mark a node as down. The new version of CIS, which will be CIS Advanced, may take away the concerns of hitting a bug or a poorly validated configuration that could bring the control channel down, and TMOS 21.1 may also handle AS3 config changes better with fewer CPU/memory issues, so in the future there may be no need to use traffic policies, NodePort mode, and Kubernetes services of this type. For ambient mesh, my examples with Ingress and Gateway seem to be the only option for direct communication at the moment. We will see what the future holds!

Using the Model Context Protocol with Open WebUI
This year we started building out a series of hands-on labs you can do on your own in our AI Step-by-Step repo on GitHub. In my latest lab, I walk you through setting up a Model Context Protocol (MCP) server and the mcpo proxy to allow you to use MCP tools in a locally-hosted Open WebUI + Ollama environment. The steps are well-covered there, but I wanted to highlight what you learn in the lab.

What is MCP and why does it matter?

MCP is a JSON-RPC-based open standard from Anthropic that (shockingly!) is only about 13 months old now. It allows AI assistants to securely connect to external data sources and tools through a unified interface. The key benefit that led to its rapid adoption is that it solves the fragmentation problem in AI integrations—instead of every AI system needing custom code to connect to each tool or database, MCP provides a single protocol that works across different AI models and data sources.

MCP in the local lab

My first exposure to MCP was using Claude and Docker tools to replicate a video Sebastian_Maniak released showing how to configure a BIG-IP application service. I wanted to see how F5-agnostic I could be in my prompt and still get a successful result, and it turned out that the only domain-specific language I needed, after it came up with a solution and deployed it, was to specify the load balancing algorithm. Everything else was correct. Kinda blew my mind. I spoke about this experience throughout the year at F5 Academy events and at a solutions days event in Toronto, but more so, I wanted to see how far I could take this in a local setting, away from the pay-to-play tooling offered at that time. This was the genesis for this lab.

Tools

In this lab, you'll use the following tools:
- Ollama
- Open WebUI
- mcpo
- a custom MCP server

Ollama and Open WebUI are assumed to already be installed; those labs are also in the AI Step-by-Step repo:
- Installing Ollama
- Installing Open WebUI

Once those are in place, you can clone the repo and deploy with Docker or Podman; just make sure the Open WebUI containers are on the same network as the containers you deploy from the repo (a rough sketch of the overall pattern follows at the end of this post).

Results

Success in getting your Open WebUI inference through the mcpo proxy and the MCP servers (mine is very basic, just for test purposes; there are more that you can test or build yourself) depends greatly on your prompting skills and the abilities of the local models you choose. I had varying success with llama3.2:3b. But the goal here isn't production-ready tooling; it's to build, discover, and get comfortable in this new world of AI assistants, leveraging them where it makes sense to augment our toolbox.

Drop a comment below if you build this lab and share your successes and failures. Community is the best learning environment.
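As promised, here is a minimal sketch of the mcpo pattern the lab builds on: wrap an MCP server's stdio command with mcpo so it is exposed as an OpenAPI tool server that Open WebUI can call. The network and container names are assumptions for illustration; the mcp-server-time example follows mcpo's own README.

# Hypothetical names -- match them to your own Open WebUI deployment
docker network ls | grep open-webui                # find the network Open WebUI runs on

# Expose an example MCP server (time tools) as an OpenAPI endpoint on port 8000
uvx mcpo --port 8000 -- uvx mcp-server-time --local-timezone=America/New_York

# Then register http://<host>:8000 as a tool server in Open WebUI's settings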
BIG-IP Next Edge Firewall CNF for Edge workloads
Introduction

The CNF architecture aligns with cloud-native principles by enabling horizontal scaling, ensuring that applications can expand seamlessly without compromising performance. It preserves the deterministic reliability essential for telecom environments, balancing scalability with the stringent demands of real-time processing. More background on the value CNF brings to the environment: https://community.f5.com/kb/technicalarticles/from-virtual-to-cloud-native-infrastructure-evolution/342364

Telecom service providers make use of CNFs for performance optimization, to:
- Enable efficient and secure processing of N6-LAN traffic at the edge to meet the stringent requirements of 5G networks.
- Optimize AI-RAN deployments with dynamic scaling and enhanced security, ensuring that AI workloads are processed efficiently and securely at the edge, improving overall network performance.
- Deploy advanced AI applications at the edge with the confidence of carrier-grade security and traffic management, ensuring real-time processing and analytics for a variety of edge use cases.

CNF Firewall Implementation Overview

Let's start by understanding how different CRs are enabled within a CNF implementation; this allows CNF to achieve more optimized performance, CapEx, and OpEx. The traditional way of inserting services into Kubernetes is shown below. Moving to a consolidated data plane approach saved 60% of the Kubernetes environment's performance.

The F5BigFwPolicy Custom Resource (CR) applies industry-standard firewall rules to the Traffic Management Microkernel (TMM), ensuring that only connections initiated by trusted clients will be accepted. When a new F5BigFwPolicy CR configuration is applied, the firewall rules are first sent to the Application Firewall Management (AFM) Pod, where they are compiled into Binary Large Objects (BLOBs) to enhance processing performance. Once the firewall BLOB is compiled, it is sent to the TMM Proxy Pod, which begins inspecting and filtering network packets based on the defined rules.

Enabling AFM within BIG-IP Controller

Let's explore how we can enable and configure the CNF Firewall. Below is an overview of the steps needed to set up the environment, up to the installation of the CNF CRs.

[Enabling the AFM] Enabling the AFM CR within the BIG-IP Controller definition:

global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
cert-orchestrator:
  enabled: true
afm:
  pccd:
    enabled: true
  image:
    repository: "local.registry.com"

[Configuration] Example firewall policy settings:

apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: "cnf-fw-policy"
  namespace: "cnf-gateway"
spec:
  rule:
  - name: allow-10-20-http
    action: "accept"
    logging: true
    servicePolicy: "service-policy1"
    ipProtocol: tcp
    source:
      addresses:
      - "2002::10:20:0:0/96"
      zones:
      - "zone1"
      - "zone2"
    destination:
      ports:
      - "80"
      zones:
      - "zone3"
      - "zone4"
  - name: allow-10-30-ftp
    action: "accept"
    logging: true
    ipProtocol: tcp
    source:
      addresses:
      - "2002::10:30:0:0/96"
      zones:
      - "zone1"
      - "zone2"
    destination:
      ports:
      - "20"
      - "21"
      zones:
      - "zone3"
      - "zone4"
  - name: allow-us-traffic
    action: "accept"
    logging: true
    source:
      geos:
      - "US:California"
    destination:
      geos:
      - "MX:Baja California"
      - "MX:Chihuahua"
  - name: drop-all
    action: "drop"
    logging: true
    ipProtocol: any
    source:
      addresses:
      - "::0/0"
      - "0.0.0.0/0"

[Logging & Monitoring] CNF firewall settings allow not only local logging but also HSL logging to external logging destinations.
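The log profile below references an HSL publisher named "cnf-hsl-pub", which is defined with its own CR. As a loose sketch only — the spec layout below is an assumption from memory, so verify the field names against the F5BigLogHslpub reference for your CNF release:

apiVersion: "k8s.f5net.com/v1"
kind: F5BigLogHslpub
metadata:
  name: "cnf-hsl-pub"          # the name the log profile's "publisher" field points at
  namespace: "cnf-gateway"
spec:
  name: "cnf-hsl-pub"
  syslog:
  - name: "remote-syslog"      # hypothetical destination entry
    format: "rfc5424"
    pool:
      endpoints:
      - "10.20.30.40:514"      # example remote syslog collector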
apiVersion: "k8s.f5net.com/v1" kind: F5BigLogProfile metadata: name: "cnf-log-profile" namespace: "cnf-gateway" spec: name: "cnf-logs" firewall: enabled: true network: publisher: "cnf-hsl-pub" events: aclMatchAccept: true aclMatchDrop: true tcpEvents: true translationFields: true Verifying the CNF firewall settings can be done through the sidecar container kubectl exec -it deploy/f5-tmm -c debug -n cnf-gateway – bash tmctl -d blade fw_rule_stat context_type context_name ------------ ------------------------------------------ virtual cnf-gateway-cnf-fw-policy-SecureContext_vs rule_name micro_rules counter last_hit_time action ------------------------------------ ----------- ------- ------------- ------ allow-10-20-http-firewallpolicyrule 1 2 1638572860 2 allow-10-30-ftp-firewallpolicyrule 1 5 1638573270 2 Conclusion To conclude our article, we showed how CNFs with consolidated data planes help with optimizing CNF deployments. In this article we went through the overview of BIG-IP Next Edge Firewall CNF implementation, sample configuration and monitoring capabilities. More use cases to cover different use cases to be following. Related content F5BigFwPolicy BIG-IP Next Cloud-Native Network Functions (CNFs) CNF Home121Views2likes2CommentsCisco TACACS+ Config on ISE LTM Pair
I'm trying to add TACACS+ configuration to my ISE LTMs (v17.1.3). We use Active Directory for authentication. The problem is that when I try to create the profile, the "type" dropdown does not show "TACACS+". APM is not provisioned either; not sure if that is needed. I provisioned it on our lab, but no help.
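One hedged avenue, assuming the goal is management-plane (system user) authentication rather than an APM access profile: remote TACACS+ auth on BIG-IP is configured as an authentication source (System ▸ Users ▸ Authentication), not as a profile, which would explain the missing dropdown entry. A tmsh sketch with placeholder values:

# Placeholder server IP and secret -- substitute your own TACACS+ details
tmsh create auth tacacs system-auth { servers add { 10.1.1.10 } secret mysecret service ppp protocol ip }
tmsh modify auth source { type tacacs }
tmsh save sys config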