F5 VELOS: A Next-Generation Fully Automatable Platform
What is VELOS?
The F5 VELOS platform is the next generation of F5's chassis-based systems. VELOS can bridge traditional and modern application architectures by supporting a mix of traditional F5 BIG-IP tenants as well as next-generation BIG-IP Next tenants in the future. F5 VELOS is a key component of the F5 Application Delivery and Security Platform (ADSP).

VELOS relies on a Kubernetes-based platform layer (F5OS) that is tightly integrated with F5 TMOS software. Moving to a microservice-based platform layer allows VELOS to provide additional functionality that was not possible in previous generations of F5 BIG-IP platforms. Customers do not need to learn Kubernetes but still get the benefits of it. Management of the chassis will still be done via a familiar F5 CLI, webUI, or API. The additional benefit of automation capabilities can greatly simplify the process of deploying F5 products. A significant amount of time and resources is saved due to automation, which translates to more time to perform critical tasks.

F5OS VELOS UI

Why is VELOS important?
- Get more done in less time by using a highly automatable hardware platform that can deploy software solutions in seconds, not minutes or hours.
- Increased performance improves ROI: the VELOS platform is a high-performance and highly scalable chassis with improved processing power.
- Running multiple versions on the same platform allows for more flexibility than previously possible.
- Significantly reduce the TCO of previous-generation hardware by consolidating multiple platforms into one.

Key VELOS Use-Cases
NetOps Automation
- Shorten time to market by automating network operations and offering cloud-like orchestration with full-stack programmability
- Drive app development and delivery with self-service and faster response time
Business Continuity
- Drive consistent policies across on-prem and public cloud and across hardware- and software-based ADCs
- Build resiliency with VELOS' superior platform redundancy and failover capabilities
- Future-proof investments by running multiple versions of apps side-by-side; migrate applications at your own pace
Cloud Migration On-Ramp
- Accelerate cloud strategy by adopting cloud operating models and on-demand scalability with VELOS and use that as an on-ramp to cloud
- Dramatically reduce TCO with VELOS systems; extend commercial models to migrate from hardware to software or as applications move to the cloud

Automation Capabilities
Declarative APIs and integration with automation frameworks (Terraform, Ansible) greatly simplify operations and reduce overhead:
- AS3 (Application Services 3 Extension): A declarative API that simplifies the configuration of application services. With AS3, customers can deploy and manage configurations consistently across environments.
- Ansible Automation: Prebuilt Ansible modules for VELOS enable automated provisioning, configuration, and updates, reducing manual effort and minimizing errors.
- Terraform: Organizations leveraging Infrastructure as Code (IaC) can use Terraform to define and automate the deployment of VELOS systems and associated configurations.

Example JSON file:
Example of running the Automation Playbook:
Example of the results:

More information on Automation:
Automating F5OS on VELOS
GitHub Automation Repository
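To make the AS3 option above a little more concrete, the sketch below shows one way a declaration could be pushed to a BIG-IP tenant running on VELOS from Python. It is a minimal illustration rather than the playbook from the repository linked above: the management address, credentials, and pool members are placeholders, and it assumes the AS3 extension is already installed on the tenant.

import json
import requests

# Placeholder values -- replace with your tenant's management address and credentials.
BIGIP = "https://198.51.100.10"
AUTH = ("admin", "admin-password")

# A minimal AS3 declaration: one HTTP virtual server load balancing two pool members.
declaration = {
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "id": "velos-example",
        "Example_Tenant": {
            "class": "Tenant",
            "Example_App": {
                "class": "Application",
                "template": "http",
                "serviceMain": {
                    "class": "Service_HTTP",
                    "virtualAddresses": ["192.0.2.10"],
                    "pool": "web_pool"
                },
                "web_pool": {
                    "class": "Pool",
                    "members": [
                        {"servicePort": 80, "serverAddresses": ["10.0.0.11", "10.0.0.12"]}
                    ]
                }
            }
        }
    }
}

# POST the declaration to the AS3 endpoint on the BIG-IP tenant.
resp = requests.post(
    f"{BIGIP}/mgmt/shared/appsvcs/declare",
    json=declaration,
    auth=AUTH,
    verify=False,  # lab only; use proper certificates in production
)
print(resp.status_code, json.dumps(resp.json(), indent=2))

Because AS3 is declarative and idempotent, re-posting the same declaration converges the tenant to the declared state rather than creating duplicate objects, which is what makes it a good fit for pipeline-driven deployments.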
Specialized Hardware Performance
VELOS offers more hardware-accelerated performance capabilities, with more FPGA chipsets that are more tightly integrated with TMOS. It also includes the latest Intel processing capabilities. This enhances the following:
- SSL and compression offload
- L4 offload for higher performance and reduced load on software
- Hardware-accelerated SYN flood protection
- Hardware-based protection from more than 100 types of denial-of-service (DoS) attacks
- Support for F5 Intelligence Services

VELOS CX1610 chassis
VELOS BX520 blade

Migration Options (BIG-IP Journeys)
Use BIG-IP Journeys to easily migrate your existing configuration to VELOS. This covers the following:
- The entire L4-L7 configuration can be migrated
- Individual applications can be migrated
- BIG-IP tenant configuration can be migrated
- Automatically identify and resolve migration issues
- Convert UCS files into AS3 declarations if needed
- Post-deployment diagnostics and health

The Journeys Tool, available on DevCentral's GitHub, facilitates the migration of legacy BIG-IP configurations to VELOS-compatible formats. Customers can convert UCS files, validate configurations, and highlight unsupported features during the migration process. Multi-tenancy capabilities in VELOS simplify the process of isolating workloads during and after migration.

GitHub repository for F5 Journeys

Conclusion
The F5 VELOS platform addresses the modern enterprise's need for high-performance, scalable, and efficient application delivery and security solutions. By combining cutting-edge hardware capabilities with robust automation tools and flexible migration options, VELOS empowers organizations to seamlessly transition from legacy platforms while unlocking new levels of performance and operational agility. Whether driven by the need for increased throughput or advanced multi-tenancy, the VELOS platform stands as a future-ready solution for securing and optimizing application delivery in an increasingly complex IT landscape.

Related Content
- Cloud Docs VELOS Guide
- F5 VELOS Chassis System Datasheet
- F5 rSeries: Next-Generation Fully Automatable Hardware
- Demo Video
F5 rSeries: Next-Generation Fully Automatable Hardware
What is rSeries?
F5 rSeries is a rearchitected, next-generation hardware platform that scales application delivery performance and automates application services to address many of today's most critical business challenges. F5 rSeries is a key component of the F5 Application Delivery and Security Platform (ADSP).

rSeries relies on a Kubernetes-based platform layer (F5OS) that is tightly integrated with F5 TMOS software. Moving to a microservice-based platform layer allows rSeries to provide additional functionality that was not possible in previous generations of F5 BIG-IP platforms. Customers do not need to learn Kubernetes but still get the benefits of it. Management of the hardware will still be done via a familiar F5 CLI, webUI, or API. The additional benefit of automation capabilities can greatly simplify the process of deploying F5 products. A significant amount of time and resources is saved due to automation, which translates to more time to perform critical tasks.

F5OS rSeries UI

Why is this important?
- Get more done in less time by using a highly automatable hardware platform that can deploy software solutions in seconds, not minutes or hours.
- Increased performance improves ROI: the rSeries platform is a high-performance and highly scalable appliance with improved processing power.
- Running multiple versions on the same platform allows for more flexibility than previously possible.
- Pay-as-you-Grow licensing options that unlock more CPU resources.

Key rSeries Use-Cases
NetOps Automation
- Shorten time to market by automating network operations and offering cloud-like orchestration with full-stack programmability
- Drive app development and delivery with self-service and faster response time
Business Continuity
- Drive consistent policies across on-prem and public cloud and across hardware- and software-based ADCs
- Build resiliency with rSeries' superior performance and failover capabilities
- Future-proof investments by running multiple versions of apps side-by-side; migrate applications at your own pace
Cloud Migration On-Ramp
- Accelerate cloud strategy by adopting cloud operating models and on-demand scalability with rSeries and use that as an on-ramp to cloud
- Dramatically reduce TCO with rSeries systems; extend commercial models to migrate from hardware to software or as applications move to the cloud

Automation Capabilities
Declarative APIs and integration with automation frameworks (Terraform, Ansible) greatly simplify operations and reduce overhead:
- AS3 (Application Services 3 Extension): A declarative API that simplifies the configuration of application services. With AS3, customers can deploy and manage configurations consistently across environments.
- Ansible Automation: Prebuilt Ansible modules for rSeries enable automated provisioning, configuration, and updates, reducing manual effort and minimizing errors.
- Terraform: Organizations leveraging Infrastructure as Code (IaC) can use Terraform to define and automate the deployment of rSeries appliances and associated configurations.

Example JSON file:
Example of running the Automation Playbook:
Example of the results:

More information on Automation:
Automating F5OS on rSeries
GitHub Automation Repository
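If you prefer to drive the prebuilt Ansible content from Python rather than from the ansible-playbook CLI, the ansible-runner library can wrap the same playbooks. This is only a sketch under assumptions: the playbook name, inventory file, and extra variables are placeholders standing in for whatever you actually use from the automation repository linked above.

import ansible_runner

# Run an F5OS automation playbook against an rSeries inventory.
# "provision_tenant.yml", "inventory.ini", and the extravars below are placeholder
# names -- substitute the playbook, inventory, and variables you actually use.
result = ansible_runner.run(
    private_data_dir=".",              # directory containing the playbook and inventory
    playbook="provision_tenant.yml",
    inventory="inventory.ini",
    extravars={"tenant_name": "demo-tenant"},
)

print("Status:", result.status)        # "successful" or "failed"
print("Return code:", result.rc)
for event in result.events:            # per-task event stream, useful for CI logging
    if event.get("event") == "runner_on_ok":
        print("OK:", event["event_data"].get("task"))

Wrapping the playbook this way makes it easy to embed rSeries provisioning in a larger Python-based pipeline while still reusing the community Ansible modules unchanged.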
Specialized Hardware Performance
rSeries offers more hardware-accelerated performance capabilities, with more FPGA chipsets that are more tightly integrated with TMOS. It also includes the latest Intel processing capabilities. This enhances the following:
- SSL and compression offload
- L4 offload for higher performance and reduced load on software
- Hardware-accelerated SYN flood protection
- Hardware-based protection from more than 100 types of denial-of-service (DoS) attacks
- Support for F5 Intelligence Services

Migration Options (BIG-IP Journeys)
Use BIG-IP Journeys to easily migrate your existing configuration to rSeries. This covers the following:
- The entire L4-L7 configuration can be migrated
- Individual applications can be migrated
- BIG-IP tenant configuration can be migrated
- Automatically identify and resolve migration issues
- Convert UCS files into AS3 declarations if needed
- Post-deployment diagnostics and health

The Journeys Tool, available on DevCentral's GitHub, facilitates the migration of legacy BIG-IP configurations to rSeries-compatible formats. Customers can convert UCS files, validate configurations, and highlight unsupported features during the migration process. Multi-tenancy capabilities in rSeries simplify the process of isolating workloads during and after migration.

GitHub repository for F5 Journeys

Conclusion
The F5 rSeries platform addresses the modern enterprise's need for high-performance, scalable, and efficient application delivery and security solutions. By combining cutting-edge hardware capabilities with robust automation tools and flexible migration options, rSeries empowers organizations to seamlessly transition from legacy platforms while unlocking new levels of performance and operational agility. Whether driven by the need for increased throughput or advanced multi-tenancy, the rSeries platform stands as a future-ready solution for securing and optimizing application delivery in an increasingly complex IT landscape.

Related Content
- Cloud Docs rSeries Guide
- F5 rSeries Appliance Datasheet
- F5 VELOS: A Next-Generation Fully Automatable Platform
- Demo Video
Getting Started with the Certified F5 NGINX Gateway Fabric Operator on Red Hat OpenShift
As enterprises modernize their Kubernetes strategies, the shift from standard Ingress Controllers to the Kubernetes Gateway API is redefining how we manage traffic. For years, the F5 NGINX Ingress Controller has been a foundational component in OpenShift environments. With the certification of F5 NGINX Gateway Fabric (NGF) 2.2 for Red Hat OpenShift, that legacy enters its next chapter.

This new certified operator brings the high-performance NGINX data plane into the standardized, role-oriented Gateway API model—with full integration into OpenShift Operator Lifecycle Manager (OLM). Whether you're a platform engineer managing cluster ingress or a developer routing traffic to microservices, NGF on OpenShift 4.19+ delivers a unified, secure, and fully supported traffic fabric. In this guide, we walk through installing the operator, configuring the NginxGatewayFabric resource, and addressing OpenShift-specific networking patterns such as NodePort + Route.

Why NGINX Gateway Fabric on OpenShift?
While Red Hat OpenShift 4.19+ includes native support for the Gateway API (v1.2.1), integrating NGF adds critical enterprise capabilities:
✔ Certified & OpenShift-Ready: The operator is fully validated by Red Hat, ensuring UBI-compliant images and compatibility with OpenShift's strict Security Context Constraints (SCCs).
✔ High Performance, Low Complexity: NGF delivers the core benefits long associated with NGINX—efficiency, simplicity, and predictable performance.
✔ Advanced Traffic Capabilities: Capabilities like regular-expression path matching and support for ExternalName services allow for complex, hybrid-cloud traffic patterns.
✔ AI/ML Readiness: NGF 2.2 supports the Gateway API Inference Extension, enabling inference-aware routing for GenAI and LLM workloads on platforms like Red Hat OpenShift AI.

Prerequisites
Before we begin, ensure you have:
- Cluster Administrator access to an OpenShift cluster (version 4.19 or later is recommended for Gateway API GA support).
- Access to the OpenShift Console and the oc CLI.
- Ability to pull images from ghcr.io or your internal mirror.

Step 1: Installing the Operator from OperatorHub
We leverage the Operator Lifecycle Manager (OLM) for a "point-and-click" installation that handles lifecycle management and upgrades.
1. Log into the OpenShift Web Console as an administrator.
2. Navigate to Operators > OperatorHub.
3. Search for NGINX Gateway Fabric in the search box.
4. Select the NGINX Gateway Fabric Operator card and click Install.
5. Accept the default installation mode (All namespaces) or select a specific namespace (e.g. nginx-gateway), and click Install.
6. Wait until the status shows Succeeded.
Once installed, the operator will manage the NGF lifecycle automatically.

Step 2: Configuring the NginxGatewayFabric Resource
Unlike the Ingress Controller, which used NginxIngressController resources, NGF uses the NginxGatewayFabric Custom Resource (CR) to configure the control plane and data plane.
1. In the Console, go to Installed Operators > NGINX Gateway Fabric Operator.
2. Click the NginxGatewayFabric tab and select Create NginxGatewayFabric.
3. Select YAML view to configure the deployment specifics.

Step 3: Exposing the NGINX Data Plane Service
NGF uses a Kubernetes Service to expose its data plane. Before the data plane launches, we must tell the controller how to expose it.

Option A - LoadBalancer (ROSA, ARO, Managed OpenShift)
By default, the NGINX Gateway Fabric Operator configures the service type as LoadBalancer.
On public cloud managed OpenShift services (like ROSA on AWS or ARO on Azure), this native default works out of the box to provision a cloud load balancer. No additional steps are required.

Option B - NodePort with OpenShift Route (On-Prem/Hybrid)
For on-premises or bare-metal OpenShift clusters lacking a native LoadBalancer implementation, the common pattern is to use a NodePort service exposed via an OpenShift Route.

Update the NGF CR to use NodePort:
1. In the Console, go to Installed Operators > NGINX Gateway Fabric Operator.
2. Click the NginxGatewayFabric tab and select your NginxGatewayFabric instance.
3. Select YAML view to directly edit the configuration specifics.
4. Change spec.nginx.service.type to NodePort:

apiVersion: gateway.nginx.org/v1alpha1
kind: NginxGatewayFabric
metadata:
  name: default
  namespace: nginx-gateway
spec:
  nginx:
    service:
      type: NodePort

Create the OpenShift Route:
After applying the CR, create a Route to expose the NGINX Service.

oc create route edge ngf \
  --service=nginxgatewayfabric-sample-nginx-gateway-fabric \
  --port=http \
  -n nginx-gateway

Note: This creates an edge TLS termination route. For passthrough TLS (allowing NGINX to handle certificates), use --passthrough and target the https port.

Step 4: Validating the Deployment
Verify that the operator has deployed the control plane pods successfully.

oc get pod -n nginx-gateway
NAME                                                              READY   STATUS    RESTARTS   AGE
nginx-gateway-fabric-controller-manager-dd6586597-bfdl5          1/1     Running   0          23m
nginxgatewayfabric-sample-nginx-gateway-fabric-564cc6df4d-hztm8   1/1     Running   0          18m

oc get gatewayclass
NAME    CONTROLLER                                   ACCEPTED   AGE
nginx   gateway.nginx.org/nginx-gateway-controller   True       4d1h

You should also see a GatewayClass named nginx. This indicates the controller is ready to manage Gateway resources.

Step 5: Functional Check with Gateway API
To test traffic, we will use the standard Gateway API resources (Gateway and HTTPRoute).

Deploy a Test Application (Cafe Service)
Ensure you have a backend service running. You can use a simple service for validation.

Create a Gateway
This resource opens the listener on the NGINX data plane.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP

Create an HTTPRoute
This binds the traffic to your backend service.

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
  - name: cafe
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: coffee
      port: 80

Test Connectivity
If you used Option B (Route), send a request to your OpenShift Route hostname. If you used Option A, send it to the LoadBalancer IP.
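For Option A, a quick client-side check against the LoadBalancer address confirms the listener and route are wired correctly. This is a minimal sketch, assuming the data-plane Service received the external address 203.0.113.20 (a placeholder) and that the coffee backend is deployed; for Option B, point the request at your Route hostname instead and align the HTTPRoute hostname with it.

import requests

# Placeholder: the external address of the LoadBalancer Service for the NGF data plane.
GATEWAY_ADDR = "203.0.113.20"

# The HTTPRoute above matches the cafe.example.com hostname, so present it in the
# Host header while connecting directly to the gateway's external address.
resp = requests.get(
    f"http://{GATEWAY_ADDR}/",
    headers={"Host": "cafe.example.com"},
    timeout=5,
)
print(resp.status_code)
print(resp.text[:200])

A 200 response served by the coffee backend tells you the GatewayClass, Gateway listener, and HTTPRoute are all bound correctly before you start layering on TLS or more complex routing.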
OpenShift 4.19 Compatibility
It is also vital to understand the "under the hood" constraints of OpenShift 4.19:

Gateway API Version Pinning: OpenShift 4.19 ships with Gateway API CRDs pinned to v1.2.1. While NGF 2.2 supports v1.3.0 features, it has been conformance-tested against v1.2.1 to ensure stability within OpenShift's version-locked environment.

oc get crd gateways.gateway.networking.k8s.io -o yaml | grep "gateway.networking.k8s.io/"
    gateway.networking.k8s.io/bundle-version: v1.2.1
    gateway.networking.k8s.io/channel: standard

Looking ahead, future NGINX Gateway Fabric releases may rely on newer Gateway API specifications that are not natively supported by the pinned CRDs in OpenShift 4.19. If you anticipate running a newer NGF version that may not be compatible with the current OpenShift Gateway API version, please reach out to us to discuss your compatibility requirements.

Security Context Constraints (SCC): In previous manual deployments, you might have wrestled with NET_BIND_SERVICE capabilities or creating custom SCCs. The certified operator handles these permissions automatically, using UBI-based images that comply with Red Hat's security standards out of the box.

Next Steps: AI Inference
With NGF running, you are ready for advanced use cases:
AI Inference: Explore the Gateway API Inference Extension to route traffic to LLMs efficiently, optimizing GPU usage on Red Hat OpenShift AI.

The certified NGINX Gateway Fabric Operator simplifies the operational burden, letting you focus on what matters: delivering secure, high-performance applications and AI workloads.

References:
- NGINX Gateway Fabric Operator on Red Hat Catalog
- F5 NGINX Gateway Fabric Certified for Red Hat OpenShift
- NGINX Gateway Fabric Installation Docs

I Tried to Beat OpenAI with Ollama in n8n—Here’s Why It Failed (and the Bug I’m Filing)
Hey, community. I wanted to share a story about how I built the n8n Labs workflow. It watches a YouTube channel, summarizes the latest videos with AI agents, and sends a clean HTML newsletter via Gmail. In the video, I show it working flawlessly with OpenAI. But before I got there, I spent a lot of time trying to copy the same flow using open source models through Ollama with the n8n Ollama node. My results were all over the map.

I really wanted this to be a great "open source first" build. I tried many local models via Ollama, tuned prompts, adjusted parameters, and re‑ran tests. The outputs were always unpredictable: sometimes I'd get partial JSON, sometimes extra text around the JSON. Sometimes fields would be missing. Sometimes it would just refuse to stick to the structure I asked for. After enough iterations, I started to doubt whether my understanding of the agent setup was off.

So, I built a quick proof inside the n8n Code node. If the AI Agent step is supposed to take the XML→JSON feed and reshape it into a structured list—title, description, content URL, thumbnail URL—then I should be able to do that deterministically in JavaScript and compare. I wrote a tiny snippet that reads the entries array, grabs the media fields, and formats a minimal output. And guess what? Voila. It worked on the first try and my HTML generator lit up exactly the way I wanted. That told me two things: one, my upstream data (HTTP Request + XML→JSON) was solid; and two, my desired output structure was clear and achievable without any trickery.

With that proof in hand, I turned to OpenAI. I wired the same agent prompt, the same structured output parser, and the same workflow wiring—but swapped the Ollama node for an OpenAI chat model. It worked immediately. Fast, cheap, predictable. The agent returned a perfectly clean JSON with the fields I requested. My code node transformed it into HTML. The preview looked right, and Gmail sent the newsletter just like in the demo. So at that point, I felt confident the approach was sound and the transcript you saw in the video was repeatable—at least with OpenAI in the loop.

So where does that leave Ollama and open source models? I'm not throwing shade—I love open source, and I want this path to be great. My current belief is that the failure is somewhere inside the n8n Ollama node code path. I don't think it's the models themselves in isolation; I think the node may be mishandling one or more of these details: how messages are composed (system vs. user), whether "JSON mode" or a grammar/format hint is being passed, token/length defaults that cause truncation, stop settings that let extra text leak into the output, or the way the structured output parser constraints are communicated. If you've worked with local models, you know they can follow structure very well when you give them a strict format or grammar. If the node isn't exposing that (or is dropping it on the floor), you get variability.

To make sure this gets eyes from the right folks, my intent is to file a bug with n8n for the Ollama node. I'll include a minimal, reproducible workflow: the same RSS fetch, the same XML→JSON conversion, the same agent prompt and required output shape, and a comparison run where OpenAI succeeds and Ollama does not. I'll share versions, logs, model names, and settings so the team can trace exactly where the behavior diverges. If there's a missing parameter (like format: json) or a message-role mix‑up, great—let's fix it. If it needs a small enhancement to pass a grammar or schema to the model, even better.
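If you want to test the format-enforcement hypothesis outside of n8n, you can hit a local Ollama instance directly and ask it to emit JSON only. This is a rough sketch, assuming Ollama is listening on its default port and you have a model such as llama3.2 pulled locally; the prompt and field names simply mirror the newsletter use case.

import json
import requests

# Ollama's local REST API; /api/chat accepts a "format": "json" hint that
# constrains the model to emit valid JSON only.
OLLAMA_URL = "http://localhost:11434/api/chat"

payload = {
    "model": "llama3.2",
    "stream": False,
    "format": "json",
    "messages": [
        {
            "role": "system",
            "content": "Return only JSON with keys: title, description, contentUrl, thumbnailUrl.",
        },
        {
            "role": "user",
            "content": "Video: 'Intro to n8n Labs', url https://example.com/v/1, thumb https://example.com/t/1.jpg. Summarize in one sentence.",
        },
    ],
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()

# With format=json the content field should parse cleanly; without it, this is
# exactly where local models tend to wrap the JSON in extra prose.
content = resp.json()["message"]["content"]
print(json.dumps(json.loads(content), indent=2))

If the same model behaves well here but not through the n8n node, that points the finger at what the node sends rather than at the model itself.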
The net‑net is simple: for AI agents inside n8n to feel predictable with Ollama, we need the node to enforce reliably structured outputs the same way the OpenAI path does. That unlocks a ton of practical automation for folks who prefer local models.

In the meantime, if you're following the lab and want a rock‑solid fallback, you can use the Code node to do the exact transformation the agent would do. Here's the JavaScript I wrote and tested in the workflow:

const entries = $input.first().json.feed?.entry ?? [];

function truncate(str, max) {
  if (!str) return '';
  const s = String(str).trim();
  // If you want total length (including …) to be max, use:
  // return s.length > max ? s.slice(0, Math.max(0, max - 1)) + '…' : s;
  return s.length > max ? s.slice(0, max) + '…' : s;
}

const output = entries.map(entry => {
  const g = entry['media:group'] ?? {};
  return {
    title: g['media:title'] ?? '',
    description: truncate(g['media:description'], 60),
    contentUrl: g['media:content']?.url ?? '',
    thumbnailUrl: g['media:thumbnail']?.url ?? ''
  };
});

return [{ json: { output } }];

That snippet proves the data is there and your HTML builder is fine. If OpenAI reproduces the same structured JSON as the code, and Ollama doesn't, the issue is likely in the node's request/response handling rather than your workflow logic.

I'll keep pushing on the bug report so we can make agents with Ollama as predictable as they need to be. Until then, if you want speed and consistency to get the job done, OpenAI works great. If you're experimenting with open source, try enforcing stricter formats and shorter outputs—and keep an eye on what the node actually sends to the model. As always, I'll share updates, because I love sharing knowledge—and I want the open-source path to shine right alongside the rest of our AI, agents, n8n, Gmail, and OpenAI workflows. As always, community, if you have a resolution and can pull it off, please share!
Leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge, Part Two
Introduction
This is the second part of our series on leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge deployments. In Part One, we explored the high-level concepts, architecture decisions, and design principles that make BGP and ECMP such a powerful combination for Customer Edge high availability and maintenance operations.

This article provides step-by-step implementation guidance, including:
- High-level and low-level architecture diagrams
- Complete BGP peering and routing policy configuration in F5 Distributed Cloud Console
- Practical configuration examples for Fortinet FortiGate and Palo Alto Networks firewalls

By the end of this article, you'll have everything you need to implement BGP-based high availability for your Customer Edge deployment.

Architecture Overview
Before diving into configuration, let's establish a clear picture of the architecture we're implementing. We'll examine this from two perspectives: a high-level logical view and a detailed low-level view showing specific IP addressing and AS numbers.

High-Level Architecture
The high-level architecture illustrates the fundamental traffic flow and BGP relationships in our deployment.

Key Components:
- Internet: external connectivity to the network
- Next-Generation Firewall: acts as the BGP peer and performs ECMP distribution to Customer Edge nodes
- Customer Edge Virtual Site: two or more CE nodes advertising identical VIP prefixes via BGP

The architecture follows a straightforward principle: the upstream firewall establishes BGP peering with each CE node. Each CE advertises its VIP addresses as /32 routes. The firewall, seeing multiple equal-cost paths to the same destination, distributes incoming traffic across all available CE nodes using ECMP.

Low-Level Architecture with IP Addressing
The low-level diagram provides the specific details needed for implementation, including IP addresses and AS numbers.

Network Details:
- Firewall (Inside): 10.154.4.119/24 - BGP peer, ECMP router
- CE1 (Outside): 10.154.4.160/24 - Customer Edge node 1
- CE2 (Outside): 10.154.4.33/24 - Customer Edge node 2
- Global VIP: 192.168.100.10/32 - Load balancer VIP

BGP Configuration:
- AS number: firewall 65001, Customer Edge 65002
- Router ID: firewall 10.154.4.119, Customer Edge auto-assigned based on interface IP
- Advertised prefix: firewall none, Customer Edge 192.168.100.0/24 le 32

This configuration uses eBGP (External BGP) between the firewall and CE nodes, with different AS numbers for each. The CE nodes share the same AS number (65002), which is the standard approach for multi-node CE deployments advertising the same VIP prefixes.

Configuring BGP in F5 Distributed Cloud Console
The F5 Distributed Cloud Console provides a centralized interface for configuring BGP peering and routing policies on your Customer Edge nodes. This section walks you through the complete configuration process.

Step 1: Configure the BGP peering
Go to: Multi-Cloud Network Connect --> Manage --> Networking --> External Connectivity --> BGP Peers & Policies
Click on Add BGP Peer, then add the following information:
- Object name
- Site where to apply this BGP configuration
- ASN
- Router ID

Here is an example of the required parameters.
Then click on Peers --> Add Item and fill in the relevant fields as shown below, adapting the parameters to your requirements.
Step 2: Configure the BGP routing policies
Go to: Multi-Cloud Network Connect --> Manage --> Networking --> External Connectivity --> BGP Peers & Policies --> BGP Routing Policies
Click on Add BGP Routing Policy.
Add a name for your BGP routing policy object and click on Configure to add the rules.
Click on Add Item to add a rule. Here we are going to allow the /32 prefixes from our VIP subnet (192.168.100.0/24).
Save the BGP Routing Policy.
Repeat the action to create another BGP routing policy with exactly the same parameters except the Action Type, which should be of type Deny.

Now we have two BGP routing policies:
- One to allow the VIP prefixes (for normal operations)
- One to deny the VIP prefixes (for maintenance mode)

We still need to add a third and final BGP routing policy, in order to deny any prefixes on the CE. For that, create a third BGP routing policy with this match.

Step 3: Apply the BGP routing policies
To apply the BGP routing policies in your BGP peer object, edit the Peer and:
- Enable the BGP routing policy
- Apply the BGP routing policy objects created before for Inbound and Outbound

Fortinet FortiGate Configuration
FortiGate firewalls are widely deployed as network security appliances and support robust BGP capabilities. This section provides the minimum configuration for establishing BGP peering with Customer Edge nodes and enabling ECMP load distribution.

Step 1: Configure the Router ID and AS Number
Configure the basic BGP settings:

config router bgp
    set as 65001
    set router-id 10.154.4.119
    set ebgp-multipath enable

Step 2: Configure BGP Neighbors
Add each CE node as a BGP neighbor:

    config neighbor
        edit "10.154.4.160"
            set remote-as 65002
            set route-map-in "ACCEPT-CE-VIPS"
            set route-map-out "DENY-ALL"
            set soft-reconfiguration enable
        next
        edit "10.154.4.33"
            set remote-as 65002
            set route-map-in "ACCEPT-CE-VIPS"
            set route-map-out "DENY-ALL"
            set soft-reconfiguration enable
        next
    end
end

Step 3: Create Prefix List for VIP Range
Define the prefix list that matches the CE VIP range:

config router prefix-list
    edit "CE-VIP-PREFIXES"
        config rule
            edit 1
                set prefix 192.168.100.0 255.255.255.0
                set ge 32
                set le 32
            next
        end
    next
end

Important: The ge 32 and le 32 parameters ensure we only match /32 prefixes within the 192.168.100.0/24 range, which is exactly what CE nodes advertise for their VIPs.
Step 4: Create Route Maps
Configure route maps to implement the filtering policies.

Inbound route map (accept VIP prefixes):

config router route-map
    edit "ACCEPT-CE-VIPS"
        config rule
            edit 1
                set match-ip-address "CE-VIP-PREFIXES"
            next
        end
    next
end

Outbound route map (deny all advertisements):

config router route-map
    edit "DENY-ALL"
        config rule
            edit 1
                set action deny
            next
        end
    next
end

Step 5: Verify BGP Configuration
After applying the configuration, verify the BGP sessions and routes.

Check BGP neighbor status:

get router info bgp summary
VRF 0 BGP router identifier 10.154.4.119, local AS number 65001
BGP table version is 4
1 BGP AS-PATH entries
0 BGP community entries

Neighbor      V   AS     MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down   State/PfxRcd
10.154.4.33   4   65002     2092     2365       0    0     0  00:05:33             1
10.154.4.160  4   65002     2074     2346       0    0     0  00:14:14             1

Total number of neighbors 2

Verify ECMP routes:

get router info routing-table bgp
Routing table for VRF=0
B    192.168.100.10/32 [20/255] via 10.154.4.160 (recursive is directly connected, port2), 00:00:11, [1/0]
                       [20/255] via 10.154.4.33 (recursive is directly connected, port2), 00:00:11, [1/0]

Palo Alto Networks Configuration
Palo Alto Networks firewalls provide enterprise-grade security with comprehensive routing capabilities. This section covers the minimum BGP configuration for peering with Customer Edge nodes.
Note: This part assumes the Palo Alto firewall is configured in the new "Advanced Routing Engine" mode, and we will use the logical router named "default".

Step 1: Configure ECMP parameters

set network logical-router default vrf default ecmp enable yes
set network logical-router default vrf default ecmp max-path 4
set network logical-router default vrf default ecmp algorithm ip-hash

Step 2: Configure address objects and firewall rules for BGP peering

set address CE1 ip-netmask 10.154.4.160/32
set address CE2 ip-netmask 10.154.4.33/32
set address-group BGP_PEERS static [ CE1 CE2 ]
set address LOCAL_BGP_IP ip-netmask 10.154.4.119/32
set rulebase security rules ALLOW_BGP from service
set rulebase security rules ALLOW_BGP to service
set rulebase security rules ALLOW_BGP source LOCAL_BGP_IP
set rulebase security rules ALLOW_BGP destination BGP_PEERS
set rulebase security rules ALLOW_BGP application bgp
set rulebase security rules ALLOW_BGP service application-default
set rulebase security rules ALLOW_BGP action allow

Step 3: Palo Alto Configuration Summary (CLI Format)

set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry network 192.168.100.0/24
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry greater-than-or-equal 32
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry less-than-or-equal 32
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 action permit
set network routing-profile filters prefix-list ALLOWED_PREFIXES description "Allow only /32 inside 192.168.100.0/24"
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry network 0.0.0.0/0
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry greater-than-or-equal 0
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry less-than-or-equal 32
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 action deny
set network routing-profile filters prefix-list DENY_ALL description "Deny all prefixes"
prefixes" set network routing-profile bgp filtering-profile FILTER_INBOUND ipv4 unicast inbound-network-filters prefix-list ALLOWED_PREFIXES set network routing-profile bgp filtering-profile FILTER_OUTBOUND ipv4 unicast inbound-network-filters prefix-list DENY_ALL set network logical-router default vrf default bgp router-id 10.154.4.119 set network logical-router default vrf default bgp local-as 65001 set network logical-router default vrf default bgp install-route yes set network logical-router default vrf default bgp enable yes set network logical-router default vrf default bgp peer-group BGP_PEERS type ebgp set network logical-router default vrf default bgp peer-group BGP_PEERS address-family ipv4 ipv4-unicast-default set network logical-router default vrf default bgp peer-group BGP_PEERS filtering-profile ipv4 FILTER_INBOUND set network logical-router default vrf default bgp peer-group BGP_PEERS filtering-profile ipv4 FILTER_OUTBOUND set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 peer-as 65002 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 local-address interface ethernet1/2 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 local-address ip svc-intf-ip set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 peer-address ip 10.154.4.160 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 peer-as 65002 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 local-address interface ethernet1/2 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 local-address ip svc-intf-ip set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 peer-address ip 10.154.4.33 Step 4: Verify BGP Configuration After committing the configuration, verify the BGP sessions and routes: Check BGP neighbor status: run show advanced-routing bgp peer status logical-router default Logical Router: default ============== Peer Name: CE2 BGP State: Established, up for 00:01:55 Peer Name: CE1 BGP State: Established, up for 00:00:44 Verify ECMP routes: run show advanced-routing route logical-router default Logical Router: default ========================== flags: A:active, E:ecmp, R:recursive, Oi:ospf intra-area, Oo:ospf inter-area, O1:ospf ext 1, O2:ospf ext 2 destination protocol nexthop distance metric flag tag age interface 0.0.0.0/0 static 10.154.1.1 10 10 A 01:47:33 ethernet1/1 10.154.1.0/24 connected 0 0 A 01:47:37 ethernet1/1 10.154.1.99/32 local 0 0 A 01:47:37 ethernet1/1 10.154.4.0/24 connected 0 0 A 01:47:37 ethernet1/2 10.154.4.119/32 local 0 0 A 01:47:37 ethernet1/2 192.168.100.10/32 bgp 10.154.4.33 20 255 A E 00:01:03 ethernet1/2 192.168.100.10/32 bgp 10.154.4.160 20 255 A E 00:01:03 ethernet1/2 total route shown: 7 Implementing CE Isolation for Maintenance As discussed in Part One, one of the key advantages of BGP-based deployments is the ability to gracefully isolate CE nodes for maintenance. Here’s how to implement this in practice. Isolation via F5 Distributed Cloud Console To isolate a CE node from receiving traffic, in your BGP peer object, edit the Peer and: Change the Outbound BGP routing policy from the one that is allowing the VIP prefixes to the one that is denying the VIP prefixes The CE will stop advertising its VIP routes, and within seconds (based on BGP timers), the upstream firewall will remove this CE from its ECMP paths. 
Verification During Maintenance
On your firewall, verify the route withdrawal (in this case we are using a FortiGate firewall):

get router info bgp summary
VRF 0 BGP router identifier 10.154.4.119, local AS number 65001
BGP table version is 4
1 BGP AS-PATH entries
0 BGP community entries

Neighbor      V   AS     MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down   State/PfxRcd
10.154.4.33   4   65002     2070     2345       0    0     0  00:04:05             0
10.154.4.160  4   65002     2057     2326       0    0     0  00:12:46             1

Total number of neighbors 2

We are no longer receiving any prefixes from the 10.154.4.33 peer.

get router info routing-table bgp
Routing table for VRF=0
B    192.168.100.10/32 [20/255] via 10.154.4.160 (recursive is directly connected, port2), 00:06:34, [1/0]

And we now have only one path.

Restoring the CE in the data path
After maintenance is complete:
- Return to the BGP Peer configuration in the F5 XC Console
- Restore the original export policy (permit VIP prefixes)
- Save the configuration
- On the upstream firewall, confirm that CE prefixes are received again and that ECMP paths are restored

Conclusion
This article has provided the complete implementation details for deploying BGP and ECMP with F5 Distributed Cloud Customer Edge nodes. You now have:
- A clear understanding of the architecture at both high and low levels
- Step-by-step instructions for configuring BGP in F5 Distributed Cloud Console
- Ready-to-use configurations for both Fortinet FortiGate and Palo Alto Networks firewalls
- Practical guidance for implementing graceful CE isolation for maintenance

By combining the concepts from the first article with the practical configurations in this article, you can build a robust, highly available application delivery infrastructure that maximizes resource utilization, provides automatic failover, and enables zero-downtime maintenance operations. The BGP-based approach transforms your Customer Edge deployment from a traditional Active/Standby model into a full active topology where every node contributes to handling traffic, and any node can be gracefully removed for maintenance without impacting your users.

Leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge, Part One
Introduction
Achieving high availability for application delivery while maintaining operational flexibility is a fundamental challenge for modern enterprises. When deploying F5 Distributed Cloud Customer Edge (CE) nodes in private data centers, on-premises environments, or in some cases public cloud environments, the choice of how traffic reaches these nodes significantly impacts both service resilience and operational agility.

This article explores how Border Gateway Protocol (BGP) combined with Equal-Cost Multi-Path (ECMP) routing provides an elegant solution for two critical operational requirements:
- High availability of traffic for load balancers running on Customer Edge nodes
- Easier maintenance and upgrades of CE nodes without service disruption

By leveraging dynamic routing protocols instead of static configurations, you gain the ability to gracefully remove individual CE nodes from the traffic path, perform maintenance or upgrades, and seamlessly reintroduce them—all without impacting your application delivery services.

Understanding BGP and ECMP Benefits for Customer Edge Deployments
Why BGP and ECMP?
Traditional approaches to high availability often rely on protocols like VRRP, which create Active/Standby topologies. While functional, this model leaves standby nodes idle and creates potential bottlenecks on the active node. BGP with ECMP fundamentally changes this paradigm.

The Power of ECMP
Equal-Cost Multi-Path routing allows your network infrastructure to distribute traffic across multiple CE nodes simultaneously. When each CE node advertises the same VIP prefix via BGP, your upstream router learns multiple equal-cost paths and distributes traffic across all available nodes. This creates a true Active/Active topology where:
- All CE nodes actively process traffic
- Load is distributed across the entire set of CEs
- Failure of any single node automatically redistributes traffic to the remaining nodes
- No manual intervention is required for failover

Key Benefits
- Active/Active/Active: all nodes handle traffic simultaneously, maximizing resource utilization
- Automatic failover: when a CE stops advertising its VIP, traffic automatically shifts to the remaining nodes
- Graceful maintenance: withdraw BGP advertisements to drain traffic before maintenance
- Horizontal scaling: add new CE nodes and they automatically join the traffic distribution

Understanding Customer Edge VIP Architecture
F5 Distributed Cloud Customer Edge nodes support a flexible VIP architecture that integrates seamlessly with BGP. Understanding how VIPs work is essential for proper BGP configuration.

The Global VIP
Each Customer Edge site can be configured with a Global VIP—a single IP address that serves as the default listener for all load balancers instantiated on that CE. Key characteristics:
- Configured at the CE site level in the F5 Distributed Cloud Console
- Acts as the default VIP for any load balancer that doesn't have a dedicated VIP configured
- Advertised as a /32 prefix in the routing table
- Note: the Global VIP is NOT generated in the CE's routing table until at least one load balancer is configured on that CE

This last point is particularly important: if you configure a Global VIP but haven't deployed any load balancers, the VIP won't be advertised via BGP. This prevents advertising unreachable services. For this article, we are going to use 192.168.100.0/24 as the VIP subnet for all the examples.
Load Balancer Dedicated VIPs
Individual load balancers can be configured with their own Dedicated VIP, separate from the Global VIP. When a dedicated VIP is configured:
- The load balancer responds only to its dedicated VIP
- The load balancer does not respond to the Global VIP
- The dedicated VIP is also advertised as a /32 prefix
- Multiple load balancers can have different dedicated VIPs on the same CE

This flexibility allows you to:
- Separate different applications on different VIPs
- Implement different routing policies per application
- Maintain granular control over traffic distribution

VIP Summary
- Global VIP: scope per CE, /32 prefix, advertised when at least one LB is configured on the CE
- Dedicated VIP: scope per load balancer, /32 prefix, advertised when the specific LB is configured

BGP Filtering Best Practices
Proper BGP filtering is essential for security and operational stability. This section covers the recommended filtering policies for both the upstream network device (firewall/router) and the Customer Edge nodes.

Design Principles
The filtering strategy follows the principle of explicit allow, implicit deny:
- Only advertise what is necessary
- Only accept what is expected
- Use prefix lists with appropriate matching for /32 routes

Upstream Device Configuration (Firewall/Router)
The device peering with your CE nodes should implement strict filtering.

Inbound policy on the firewall/router: the firewall/router should accept only the CE VIP prefixes. In our example, all VIPs fall within 192.168.100.0/24.
Why "or longer" (le 32)? Since VIPs are advertised as /32 prefixes, you need to match prefixes more specific than /24. The le 32 (less than or equal to 32) or "or longer" modifier ensures your filter matches the actual /32 routes while still using a manageable prefix range.

Outbound policy on the firewall/router: by default, the firewall/router should not advertise any prefixes to the CE nodes.

Customer Edge Configuration
The CE nodes should implement complementary filtering.
Outbound policy: CEs should advertise only their VIP prefixes. Since all VIPs on Customer Edge nodes are /32 addresses, your prefix filters must also follow the "or longer" approach.
Inbound policy: CEs should not accept any prefixes from the upstream firewall/router.

Filtering Summary
- Firewall/Router, inbound (from CE): accept VIP range, prefix match 192.168.100.0/24 le 32
- Firewall/Router, outbound (to CE): deny all
- CE, outbound (to router): advertise VIPs only, prefix match 192.168.100.0/24 le 32
- CE, inbound (from router): deny all
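If you ever need to sanity-check what a "192.168.100.0/24 or longer" filter will and will not match, the logic is easy to reproduce with Python's standard ipaddress module. This is just an illustration of the matching rule, not something you need to run on the devices themselves.

import ipaddress

VIP_RANGE = ipaddress.ip_network("192.168.100.0/24")

def matches_le_32(prefix: str) -> bool:
    """Return True if prefix falls inside 192.168.100.0/24 with length 24..32."""
    net = ipaddress.ip_network(prefix)
    return net.subnet_of(VIP_RANGE) and 24 <= net.prefixlen <= 32

# A handful of candidate advertisements:
for candidate in ["192.168.100.10/32", "192.168.100.0/24", "192.168.100.128/25",
                  "192.168.101.10/32", "10.0.0.0/8"]:
    verdict = "accept" if matches_le_32(candidate) else "deny"
    print(candidate, "->", verdict)

# Expected result: the first three candidates are accepted, the last two denied.
# Tightening the rule to ge 32 (as in the FortiGate example in Part Two) would
# additionally reject anything shorter than a /32.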
Graceful CE Isolation for Maintenance
One of the most powerful benefits of using BGP is the ability to gracefully remove a CE node from the traffic path for maintenance, upgrades, or troubleshooting. This section explains how to isolate a CE by manipulating its BGP route advertisements.

The Maintenance Challenge
When you need to perform maintenance on a CE node (OS upgrade, software update, reboot, troubleshooting), you want to:
- Stop new traffic from reaching the node
- Allow existing connections to complete gracefully
- Perform your maintenance tasks
- Reintroduce the node to the traffic pool

With VRRP, this can require manual failover procedures. With BGP, you simply stop advertising VIP routes.

Isolation Process Overview
Step 1: Configure BGP Route Filtering on the CE
To isolate a CE, you need to apply a BGP policy that prevents the VIP prefixes from being advertised or received.

Where to Apply the Policy?
There are two possible approaches to stop a CE from receiving traffic:
1. On the BGP peer (firewall/router): configure an inbound filter on the upstream device to reject routes from the specific CE you want to isolate.
2. On the Customer Edge itself: configure an outbound export policy on the CE to stop advertising its VIP prefixes.

We recommend the F5 Distributed Cloud approach (option 2) for several reasons:
- Automation: the firewall/router approach requires separate automation for network devices, whereas the F5 Distributed Cloud approach can be performed in the F5 XC Console or fully automated through the API or Terraform in an infrastructure-as-code approach.
- Team ownership: the firewall/router approach requires coordination with the network team, whereas the CE team has full autonomy with the F5 Distributed Cloud approach.
- Consistency: configuration syntax varies by network vendor, whereas F5 Distributed Cloud offers a single, consistent interface.
- Audit trail: changes would otherwise be spread across multiple systems, whereas they are centralized in the F5 XC Console.

In many organizations, the team responsible for managing the Customer Edge nodes is different from the team managing the network infrastructure (firewalls, routers). By implementing isolation policies on the CE side, you eliminate cross-team dependencies and enable self-service maintenance operations.

Applying the Filter
The filter is configured through the F5 Distributed Cloud Console on the specific CE site. The filter configuration will:
- Match the VIP prefix range (192.168.100.0/24 or longer)
- Set the action to Deny
- Apply to the outbound direction (export policy)

Once applied, the CE stops advertising its VIP /32 routes to its BGP peers.

Step 2: Perform Maintenance
With the CE isolated from the traffic path, you can safely:
- Reboot the CE node
- Perform OS upgrades
- Apply software updates

Existing long-lived connections to the isolated CE will eventually time out, while new connections are automatically directed to the remaining CEs.

Step 3: Reintroduce the CE in the data path
After maintenance is complete:
- Remove or modify the BGP export filter to allow VIP advertisement
- The CE will begin advertising its VIP /32 routes again
- The upstream firewall/router will add the CE back to its ECMP paths
- Traffic will automatically start flowing to the restored CE

Isolation Benefits Summary
- Zero-touch failover: traffic automatically shifts to the remaining CEs
- Controlled maintenance windows: isolate at your convenience
- No application impact: users experience no disruption
- Reversible: simply re-enable route advertisement to restore
- Per-node granularity: isolate individual nodes without affecting others

Rolling Upgrade Strategy
Using this isolation technique, you can implement rolling upgrades across your CEs (a simple orchestration sketch follows after this section).

Rolling Upgrade Sequence:
- Step 1: Isolate CE1 → Upgrade CE1 → Put CE1 back in the data path
- Step 2: Isolate CE2 → Upgrade CE2 → Put CE2 back in the data path

Throughout this process:
- At least one CE is always handling traffic
- No service interruption occurs
- Each CE is validated before moving to the next
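The rolling-upgrade loop above is easy to express as a small orchestration script. The sketch below is deliberately a dry run: the isolate, upgrade, restore, and validate steps are placeholder functions that only print what they would do, because the real calls depend on how you drive the F5 Distributed Cloud API or Console and on your own upgrade and validation tooling.

import time

CE_NODES = ["ce1", "ce2"]          # upgrade one node at a time

# Placeholder actions -- in a real workflow these would swap the BGP export policy
# via the F5 Distributed Cloud API and invoke your upgrade/validation tooling.
def isolate(ce):
    print(f"[{ce}] apply 'deny VIP prefixes' export policy (stop advertising)")

def upgrade(ce):
    print(f"[{ce}] run OS/software upgrade")

def restore(ce):
    print(f"[{ce}] re-apply 'permit VIP prefixes' export policy")

def validate(ce):
    print(f"[{ce}] check BGP session and ECMP paths on the upstream firewall")
    return True

for ce in CE_NODES:
    isolate(ce)
    time.sleep(5)          # allow BGP withdrawal to propagate and connections to drain
    upgrade(ce)
    restore(ce)
    time.sleep(5)          # allow the route to be re-learned before touching the next node
    if not validate(ce):
        raise SystemExit(f"{ce} failed validation -- stopping the rolling upgrade")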
Conclusion
BGP with ECMP provides a robust, flexible foundation for high-availability F5 Distributed Cloud Customer Edge deployments. By leveraging dynamic routing protocols:
- Traffic is distributed across all active CE nodes, maximizing resource utilization
- Failover is automatic when a CE becomes unavailable
- Maintenance is graceful through controlled route withdrawal
- Scaling is seamless, as new CEs automatically join the traffic distribution once they are BGP-peered

The combination of proper BGP filtering (accepting only VIP prefixes, advertising only what's necessary) and the ability to isolate individual CEs through route manipulation gives you complete operational control over your application delivery infrastructure. Whether you're performing routine maintenance, emergency troubleshooting, or rolling out upgrades, BGP-based CE deployments ensure your applications remain available and your operations remain smooth.

Using the Model Context Protocol with Open WebUI
This year we started building out a series of hands-on labs you can do on your own in our AI Step-by-Step repo on GitHub. In my latest lab, I walk you through setting up a Model Context Protocol (MCP) server and the mcpo proxy to allow you to use MCP tools in a locally-hosted Open WebUI + Ollama environment. The steps are well-covered there, but I wanted to highlight what you learn in the lab.

What is MCP and why does it matter?
MCP is a JSON-based open standard from Anthropic that (shockingly!) is only about 13 months old now. It allows AI assistants to securely connect to external data sources and tools through a unified interface. The key delivery that led to its rapid adoption is that it solves the fragmentation problem in AI integrations—instead of every AI system needing custom code to connect to each tool or database, MCP provides a single protocol that works across different AI models and data sources.

MCP in the local lab
My first exposure to MCP was using Claude and Docker tools to replicate a video Sebastian_Maniak released showing how to configure a BIG-IP application service. I wanted to see how F5-agnostic I could be in my prompt and still get a successful result, and it turned out that the only domain-specific language I needed, after it came up with a solution and deployed it, was to specify the load balancing algorithm. Everything else was correct. Kinda blew my mind. I spoke about this experience throughout the year at F5 Academy events and at a solutions days event in Toronto, but more-so, I wanted to see how far I could take this in a local setting away from the pay-to-play tooling offered at that time. This was the genesis for this lab.

Tools
In this lab, you'll use the following tools:
- Ollama
- Open WebUI
- mcpo
- custom MCP server

Ollama and Open WebUI are assumed to already be installed; those labs are also in the AI Step-by-Step repo:
- Installing Ollama
- Installing Open WebUI

Once those are in place, you can clone the repo and deploy in Docker or Podman; just make sure the Open WebUI containers are in the same network as the containers you deploy from this repo.

Results
The success of getting your Open WebUI inference through the mcpo proxy and the MCP servers (mine is very basic, just for test purposes; there are more that you can test or build yourself) depends greatly on your prompting skills and the abilities of the local models you choose. I had varying success with llama3.2:3b. But the goal here isn't production-ready tooling, it's to build and discover and get comfortable in this new world of AI assistants and leveraging them where it makes sense to augment our toolbox.

Drop a comment below if you build this lab and share your successes and failures. Community is the best learning environment.
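If you want to go one step further than the lab's bundled test server, writing your own MCP tool is only a few lines with the official Python SDK. This is a hedged sketch rather than the lab's actual server: it assumes the mcp package (which provides FastMCP) is installed, and the tool itself is deliberately a toy.

from mcp.server.fastmcp import FastMCP

# A toy MCP server exposing a single tool; mcpo can launch this and present it
# to Open WebUI as an OpenAPI endpoint.
mcp = FastMCP("lab-demo")

@mcp.tool()
def reverse_labels(hostname: str) -> str:
    """Return the labels of a hostname in reverse order (a trivial demo tool)."""
    return ".".join(reversed(hostname.split(".")))

if __name__ == "__main__":
    # stdio transport is what mcpo expects by default when it launches the server.
    mcp.run()

Because the tool's docstring and type hints become the description and schema the model sees, writing clear docstrings here is the main lever you have over how reliably a local model uses the tool.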
BIG-IP Next Edge Firewall CNF for Edge workloads
Introduction
The CNF architecture aligns with cloud-native principles by enabling horizontal scaling, ensuring that applications can expand seamlessly without compromising performance. It preserves the deterministic reliability essential for telecom environments, balancing scalability with the stringent demands of real-time processing. More background information about the value CNFs bring to the environment: https://community.f5.com/kb/technicalarticles/from-virtual-to-cloud-native-infrastructure-evolution/342364

Telecom service providers make use of CNFs for performance optimization, to:
- Enable efficient and secure processing of N6-LAN traffic at the edge to meet the stringent requirements of 5G networks.
- Optimize AI-RAN deployments with dynamic scaling and enhanced security, ensuring that AI workloads are processed efficiently and securely at the edge, improving overall network performance.
- Deploy advanced AI applications at the edge with the confidence of carrier-grade security and traffic management, ensuring real-time processing and analytics for a variety of edge use cases.

CNF Firewall Implementation Overview
Let's start with understanding how different CRs are enabled within a CNF implementation; this allows CNF to achieve more optimized performance, CapEx, and OpEx. The traditional way of inserting services into Kubernetes is shown below. Moving to a consolidated data plane approach saved 60% of the Kubernetes environment's performance.

The F5BigFwPolicy Custom Resource (CR) applies industry-standard firewall rules to the Traffic Management Microkernel (TMM), ensuring that only connections initiated by trusted clients will be accepted. When a new F5BigFwPolicy CR configuration is applied, the firewall rules are first sent to the Application Firewall Management (AFM) Pod, where they are compiled into Binary Large Objects (BLOBs) to enhance processing performance. Once the firewall BLOB is compiled, it is sent to the TMM Proxy Pod, which begins inspecting and filtering network packets based on the defined rules.

Enabling AFM within BIG-IP Controller
Let's explore how we can enable and configure the CNF Firewall. Below is an overview of the steps needed to set up the environment, up to the installation of the CNF CRs.

[Enabling the AFM]
Enabling the AFM CR within the BIG-IP Controller definition:

global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
cert-orchestrator:
  enabled: true
afm:
  pccd:
    enabled: true
  image:
    repository: "local.registry.com"

[Configuration]
Example of the firewall policy settings:

apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: "cnf-fw-policy"
  namespace: "cnf-gateway"
spec:
  rule:
    - name: allow-10-20-http
      action: "accept"
      logging: true
      servicePolicy: "service-policy1"
      ipProtocol: tcp
      source:
        addresses:
          - "2002::10:20:0:0/96"
        zones:
          - "zone1"
          - "zone2"
      destination:
        ports:
          - "80"
        zones:
          - "zone3"
          - "zone4"
    - name: allow-10-30-ftp
      action: "accept"
      logging: true
      ipProtocol: tcp
      source:
        addresses:
          - "2002::10:30:0:0/96"
        zones:
          - "zone1"
          - "zone2"
      destination:
        ports:
          - "20"
          - "21"
        zones:
          - "zone3"
          - "zone4"
    - name: allow-us-traffic
      action: "accept"
      logging: true
      source:
        geos:
          - "US:California"
      destination:
        geos:
          - "MX:Baja California"
          - "MX:Chihuahua"
    - name: drop-all
      action: "drop"
      logging: true
      ipProtocol: any
      source:
        addresses:
          - "::0/0"
          - "0.0.0.0/0"
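Because F5BigFwPolicy is just a Kubernetes custom resource, the same policy can also be applied programmatically instead of with kubectl. Below is a rough sketch using the official Kubernetes Python client; the CRD plural name ("f5bigfwpolicies") is an assumption on my part, so confirm it with kubectl api-resources before relying on it.

import yaml
from kubernetes import client, config

# Load credentials from the local kubeconfig (or use config.load_incluster_config()).
config.load_kube_config()

with open("cnf-fw-policy.yaml") as f:
    policy = yaml.safe_load(f)   # the F5BigFwPolicy manifest shown above

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="k8s.f5net.com",       # from the CR's apiVersion
    version="v1",
    namespace="cnf-gateway",
    plural="f5bigfwpolicies",    # assumption -- verify with `kubectl api-resources`
    body=policy,
)
print("F5BigFwPolicy created")

Driving the CR this way makes it straightforward to generate and roll out firewall rule sets from an external source of truth rather than hand-editing YAML.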
apiVersion: "k8s.f5net.com/v1" kind: F5BigLogProfile metadata: name: "cnf-log-profile" namespace: "cnf-gateway" spec: name: "cnf-logs" firewall: enabled: true network: publisher: "cnf-hsl-pub" events: aclMatchAccept: true aclMatchDrop: true tcpEvents: true translationFields: true Verifying the CNF firewall settings can be done through the sidecar container kubectl exec -it deploy/f5-tmm -c debug -n cnf-gateway – bash tmctl -d blade fw_rule_stat context_type context_name ------------ ------------------------------------------ virtual cnf-gateway-cnf-fw-policy-SecureContext_vs rule_name micro_rules counter last_hit_time action ------------------------------------ ----------- ------- ------------- ------ allow-10-20-http-firewallpolicyrule 1 2 1638572860 2 allow-10-30-ftp-firewallpolicyrule 1 5 1638573270 2 Conclusion To conclude our article, we showed how CNFs with consolidated data planes help with optimizing CNF deployments. In this article we went through the overview of BIG-IP Next Edge Firewall CNF implementation, sample configuration and monitoring capabilities. More use cases to cover different use cases to be following. Related content F5BigFwPolicy BIG-IP Next Cloud-Native Network Functions (CNFs) CNF Home126Views2likes2CommentsAgentic AI with F5 BIG-IP v21 using Model Context Protocoland OpenShift
Introduction to Agentic AI

Agentic AI extends Large Language Models (LLMs) with tools, allowing the LLM to interoperate with functionality external to the model, for example searching for a flight or pushing code to GitHub. Agentic AI operates proactively, minimising human intervention: it makes decisions and adapts in order to perform complex tasks by using tools, data, and the Internet. In essence, the LLM is given knowledge of the APIs of GitHub or the flight agency, and the reasoning of the LLM then makes use of these APIs. The external (to the LLM) functionality can run on the local computer or on network MCP servers. This article focuses on network MCP servers, which fit into the F5 AI Reference Architecture components at the insertion point indicated in green below:

Introduction to Model Context Protocol

Model Context Protocol (MCP) is a universal connector between LLMs and tools. Without MCP, the LLM has to be programmed to support the different APIs of the different tools. This model does not scale, because it requires a lot of effort to add every tool to a given LLM and for a tool to support several LLMs. With MCP, the LLM (or AI application) and the tool only need to support MCP. Without further coding, the LLM can automatically use any tool that exposes its functionality through MCP. This is shown in the following figure:

MCP example workflow

The next diagram shows the basic MCP workflow, using the LibreChat AI application as an example. The flow is as follows:

The AI application queries the agents (MCP servers) for the tools they provide.
The agents return a list of the tools, with a description and the parameters required.
When the AI application makes a request to the AI model, it includes in the request information about the available tools.
When the AI model finds that it does not have built in what is required to fulfil the request, it makes use of the tools. The tools are accessed through the AI application.
The AI model composes a result from its local knowledge and the results from the tools.

Out of the workflow above, the most interesting part is step 1, which retrieves the information the AI model requires in order to use the tools. Using the mcpLogger iRule provided later in this article, we can see the MCP messages exchanged.

Step 1a:

{
  "method": "tools/list",
  "jsonrpc": "2.0",
  "id": 2
}

Step 1b:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "airport_search",
        "description": "Search for airport codes by name or city.\n\nArgs:\n query: The search term (city name, airport name, or partial code)\n\nReturns:\n List of matching airports with their codes",
        "inputSchema": {
          "properties": {
            "query": {
              "type": "string"
            }
          },
          "required": [
            "query"
          ],
          "type": "object"
        },
        "outputSchema": {
          "properties": {
            "result": {
              "type": "string"
            }
          },
          "required": [
            "result"
          ],
          "type": "object",
          "x-fastmcp-wrap-result": 1
        },
        "_meta": {
          "_fastmcp": {
            "tags": []
          }
        }
      }
    ]
  }
}

Note from the above that the AI model only requires a description of the tool in human language and a formal declaration of the input and output parameters. That's all! The reasoning of the AI model is what makes good use of the API described through MCP.
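To illustrate what the actual tool invocation looks like on the wire, here is a minimal, hypothetical JSON-RPC request that follows the MCP specification; the airport_search tool name comes from the listing above, while the query value and request id are arbitrary examples:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "airport_search",
    "arguments": {
      "query": "Madrid"
    }
  }
}

The MCP server answers with a result object carrying the tool output, which the AI application passes back to the AI model so the model can compose the final answer.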
The AI models will even interpret error messages. For example, if the AI model misinterprets the input parameters (typically because of a poor tool descriptor), it might correct itself, provided the error message is descriptive enough, and call the tool again with the right parameters. Of course, the MCP protocol is more than this, but the above is enough to understand the basis of how tools are used by LLMs and how the magic works.

F5 BIG-IP and MCP

BIG-IP v21 introduces support for MCP, which is based on JSON-RPC. The MCP protocol has gone through several iterations. For IP-based communication, the transport of the JSON-RPC messages initially used HTTP+SSE transport (now considered legacy), but this has been completely replaced by Streamable HTTP transport. The latter still uses SSE when streaming multiple server messages. Regardless of the MCP version, on the F5 BIG-IP it is only necessary to enable the JSON and SSE profiles on the Virtual Server handling MCP. This is shown next:

By enabling these profiles we automatically get basic protocol validation and, more relevantly, the ability to handle MCP messages with JSON- and SSE-oriented events and functions. These allow parsing and manipulation of MCP messages as well as traffic management (load balancing, rate limiting, and so on). Next we can see the parameters available for these profiles, which allow limiting the size of the various parts of the messages. The defaults are fine for most cases:

Check the following links for information on the iRules events and commands available for the JSON and SSE protocols.

MCP and persistence

Session persistence is optional in MCP, but when the server indicates an Mcp-Session-Id it is mandatory for the client to use it. MCP servers require persistence when they keep a context (state) for the MCP dialog. This means that the F5 BIG-IP must handle this Mcp-Session-Id as well, and it does so by using UIE (Universal) persistence on this header. A sample iRule, mcpPersistence, is provided in the GitHub repository.

Demo and GitHub repository

The video below demonstrates three functionalities built on the BIG-IP MCP capabilities:

Using MCP persistence.
Getting visibility of MCP traffic by remotely logging the JSON-RPC payloads of the request and response messages using High Speed Logging.
Controlling which tools are allowed or blocked, and logging the allow/block actions with High Speed Logging.

These functionalities are implemented with iRules available in this GitHub repository and deployed in Red Hat OpenShift using the Container Ingress Services (CIS) controller, which automates the deployment of the configuration using Kubernetes resources. The overall setup is shown next:

In the next embedded video we can see how this is deployed and used.

Conclusion and next steps

F5 BIG-IP v21 introduces support for the MCP protocol and, thanks to F5 CIS, these setups can be automated in your OpenShift cluster using the Kubernetes API. The possibilities of Agentic AI are infinite: thanks to MCP it is easy to extend LLM models to use any tool, whether to query information or to execute actions. I suggest taking a look at these repositories of MCP servers to realize the endless possibilities of Agentic AI:

https://mcpservers.org/
https://www.pulsemcp.com/servers
https://mcpmarket.com/server
https://mcp.so/
https://github.com/punkpeye/awesome-mcp-servers
Delivering Secure Application Services Anywhere with Nutanix Flow and F5 Distributed Cloud
Introduction

F5 Application Delivery and Security Platform (ADSP) is the premier solution for converging high-performance delivery and security for every app and API across any environment. It provides a unified platform offering granular visibility, streamlined operations, and AI-driven insights, deployable anywhere and in any form factor. The F5 ADSP Partner Ecosystem brings together a broad range of partners to deliver customer value across the entire lifecycle. This includes cohesive solutions, cloud synergies, and access to expert services that help customers maximize outcomes while simplifying operations. In this article, we'll explore the upcoming integration between Nutanix Flow and F5 Distributed Cloud, showcasing how F5 and Nutanix collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments.

Integration Overview

At the heart of this integration is the capability to deploy an F5 Distributed Cloud Customer Edge (CE) inside a Nutanix Flow VPC, establish BGP peering with the Nutanix Flow BGP Gateway, and inject CE-advertised BGP routes into the VPC routing table. This architecture gives us complete control over application delivery and security within the VPC. We can selectively advertise HTTP load balancers (LBs) or VIPs to designated VPCs, ensuring secure and efficient connectivity. Additionally, the integration securely simplifies network segmentation across hybrid and multi-cloud environments. By leveraging F5 Distributed Cloud to segment and extend the network to remote locations, combined with Nutanix Flow Security for microsegmentation within VPCs, we deliver comprehensive end-to-end network security. This approach enforces a consistent security posture while simplifying segmentation across environments. In this article, we'll focus on application delivery and security, and explore segmentation in the next article.

Demo Walkthrough

Let's walk through a demo to see how this integration works. The goal of this demo is to enable secure application delivery for nutanix5.f5-demo.com within the Nutanix Flow Virtual Private Cloud (VPC) named dev3. Our demo environment, dev3, is a Nutanix Flow VPC with an F5 Distributed Cloud Customer Edge (CE) named jy-nutanix-overlay-dev3 deployed inside:

*Note: The CE is named jy-nutanix-overlay-dev3 in the F5 Distributed Cloud Console and xc-ce-dev3 in Nutanix Prism Central.

eBGP peering is ESTABLISHED between the CE and the Nutanix Flow BGP Gateway:

On the F5 Distributed Cloud Console, we created an HTTP Load Balancer named jy-nutanix-internal-5 serving the FQDN nutanix5.f5-demo.com. This load balancer distributes workloads across hybrid multicloud environments and is protected by a WAF policy named nutanix-demo:

We advertised this HTTP Load Balancer with a Virtual IP (VIP) 10.10.111.175 to the CE jy-nutanix-overlay-dev3 deployed inside the Nutanix Flow VPC dev3:

The CE then advertised the VIP route via BGP to its peer, the Nutanix Flow BGP Gateway:

The Nutanix Flow BGP Gateway received the VIP route and installed it in the VPC routing table:

Finally, the VMs in dev3 can securely access nutanix5.f5-demo.com while continuing to use the VPC logical router as their default gateway.
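As a quick sanity check from one of the VMs in dev3, we can send a test request to the advertised FQDN. This is a minimal sketch that assumes nutanix5.f5-demo.com resolves to the advertised VIP 10.10.111.175 inside the VPC; if DNS is not in place yet, curl's --resolve option can pin the name to the VIP explicitly:

# Send a test request to the HTTP Load Balancer advertised inside the dev3 VPC
curl -vk https://nutanix5.f5-demo.com/

# If DNS is not configured yet, map the FQDN to the advertised VIP for this request only
curl -vk --resolve nutanix5.f5-demo.com:443:10.10.111.175 https://nutanix5.f5-demo.com/

A successful response confirms that the VIP route injected into the VPC routing table is reachable from the workload VMs.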
F5 Distributed Cloud Console observability provides deep visibility into applications and security events. For example, it offers comprehensive dashboards and metrics to monitor the performance and health of applications served through HTTP load balancers. These include detailed insights into traffic patterns, latency, HTTP error rates, and the status of backend services:

Furthermore, the built-in AI assistant provides real-time visibility and actionable guidance on security incidents, improving situational awareness and supporting informed decision-making. This capability enables rapid threat detection and response, helping maintain a strong and resilient security posture:

Conclusion

This integration demonstrates how F5 Distributed Cloud and Nutanix Flow collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments. Together, F5 and Nutanix enable organizations to scale with confidence, optimize application performance, and maintain robust security, empowering businesses to achieve greater agility and resilience across any environment. This integration is coming soon in CY2026. If you're interested in early access, please contact your F5 representative.

Reference URLs
https://www.f5.com/products/distributed-cloud-services
https://www.nutanix.com/products/flow/networking