application delivery
I Tried to Beat OpenAI with Ollama in n8n—Here’s Why It Failed (and the Bug I’m Filing)
Hey, community. I wanted to share a story about how I built the n8n Labs workflow. It watches a YouTube channel, summarizes the latest videos with AI agents, and sends a clean HTML newsletter via Gmail. In the video, I show it working flawlessly with OpenAI. But before I got there, I spent a lot of time trying to build the same flow using open source models through Ollama with the n8n Ollama node. My results were all over the map.

I really wanted this to be a great "open source first" build. I tried many local models via Ollama, tuned prompts, adjusted parameters, and re-ran tests. The outputs were always unpredictable: sometimes I'd get partial JSON, sometimes extra text around the JSON. Sometimes fields would be missing. Sometimes the model would just refuse to stick to the structure I asked for. After enough iterations, I started to doubt whether my understanding of the agent setup was off.

So, I built a quick proof inside the n8n Code node. If the AI Agent step is supposed to take the XML→JSON feed and reshape it into a structured list—title, description, content URL, thumbnail URL—then I should be able to do that deterministically in JavaScript and compare. I wrote a tiny snippet that reads the entries array, grabs the media fields, and formats a minimal output. And guess what? Voila. It worked on the first try, and my HTML generator lit up exactly the way I wanted. That told me two things: one, my upstream data (HTTP Request + XML→JSON) was solid; and two, my desired output structure was clear and achievable without any trickery.

With that proof in hand, I turned to OpenAI. I wired the same agent prompt, the same structured output parser, and the same workflow wiring—but swapped the Ollama node for an OpenAI chat model. It worked immediately. Fast, cheap, predictable. The agent returned perfectly clean JSON with the fields I requested. My code node transformed it into HTML. The preview looked right, and Gmail sent the newsletter just like in the demo. So at that point, I felt confident the approach was sound and the result you saw in the video was repeatable—at least with OpenAI in the loop.

Where does that leave Ollama and open source models? I'm not throwing shade—I love open source, and I want this path to be great. My current belief is that the failure is somewhere inside the n8n Ollama node code path. I don't think it's the models themselves in isolation; I think the node may be mishandling one or more of these details: how messages are composed (system vs. user); whether "JSON mode" or a grammar/format hint is being passed; token/length defaults that cause truncation; stop settings that let extra text leak into the output; or the way the structured output parser constraints are communicated. If you've worked with local models, you know they can follow structure very well when you give them a strict format or grammar. If the node isn't exposing that (or is dropping it on the floor), you get variability.

To make sure this gets eyes from the right folks, my intent is to file a bug with n8n for the Ollama node. I'll include a minimal, reproducible workflow: the same RSS fetch, the same XML→JSON conversion, the same agent prompt and required output shape, and a comparison run where OpenAI succeeds and Ollama does not. I'll share versions, logs, model names, and settings so the team can trace exactly where the behavior diverges. If there's a missing parameter (like format: json) or a message-role mix-up, great—let's fix it.
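To make the report concrete, here's roughly what I mean by a format hint. This is a minimal sketch in plain Node.js (18+) against Ollama's REST API, not the n8n node's internals; the model name, local endpoint, prompt wording, and feed payload are all placeholders I'm using for illustration:

const feedEntries = []; // placeholder: the XML→JSON entries would go here

async function summarizeFeed() {
  const res = await fetch('http://localhost:11434/api/chat', {   // assumes a local Ollama on its default port
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.1',                               // placeholder model name
      stream: false,                                   // one complete response, no chunking
      format: 'json',                                  // ask Ollama to emit valid JSON only
      options: { temperature: 0, num_predict: 1024 },  // deterministic and length-bounded
      messages: [
        { role: 'system', content: 'Return ONLY a JSON object with an "output" array; each item has title, description, contentUrl, and thumbnailUrl.' },
        { role: 'user', content: JSON.stringify(feedEntries) }
      ]
    })
  });
  const data = await res.json();
  // JSON.parse throws if anything other than pure JSON came back
  return JSON.parse(data.message.content);
}

summarizeFeed().then(result => console.log(result));

If the node isn't already sending something equivalent to that format hint (or a stricter grammar), the variability I saw would be easy to explain.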
If it needs a small enhancement to pass a grammar or schema to the model, even better. The net-net is simple: for AI agents inside n8n to feel predictable with Ollama, we need the node to enforce reliably structured outputs the same way the OpenAI path does. That unlocks a ton of practical automation for folks who prefer local models.

In the meantime, if you're following the lab and want a rock-solid fallback, you can use the Code node to do the exact transformation the agent would do. Here's the JavaScript I wrote and tested in the workflow:

const entries = $input.first().json.feed?.entry ?? [];

function truncate(str, max) {
  if (!str) return '';
  const s = String(str).trim();
  return s.length > max ? s.slice(0, max) + '…' : s;
  // If you want total length (including …) to be max, use:
  // return s.length > max ? s.slice(0, Math.max(0, max - 1)) + '…' : s;
}

const output = entries.map(entry => {
  const g = entry['media:group'] ?? {};
  return {
    title: g['media:title'] ?? '',
    description: truncate(g['media:description'], 60),
    contentUrl: g['media:content']?.url ?? '',
    thumbnailUrl: g['media:thumbnail']?.url ?? ''
  };
});

return [{ json: { output } }];

That snippet proves the data is there and your HTML builder is fine (there's also a bare-bones HTML-builder sketch at the end of this post if you want a starting point). If OpenAI reproduces the same structured JSON as the code, and Ollama doesn't, the issue is likely in the node's request/response handling rather than your workflow logic.

I'll keep pushing on the bug report so we can make agents with Ollama as predictable as they need to be. Until then, if you want speed and consistency to get the job done, OpenAI works great. If you're experimenting with open source, try enforcing stricter formats and shorter outputs—and keep an eye on what the node actually sends to the model. As always, I'll share updates, because I love sharing knowledge—and I want the open-source path to shine right alongside the rest of our AI, agents, n8n, Gmail, and OpenAI workflows. And community, if you have a resolution and can pull it off, please share!
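P.S. For anyone who wants a head start on the newsletter step itself, here's a bare-bones sketch of the kind of Code node that turns that output array into HTML. My real HTML generator has more layout and styling, so treat this as a minimal illustration rather than the exact code from the workflow; the table markup and inline styles are just placeholders:

const items = $input.first().json.output ?? [];   // assumes the previous node returned { output: [...] }

// Build one simple "card" row per video
const cards = items.map(v => `
  <tr>
    <td style="padding:12px 0;">
      <a href="${v.contentUrl}"><img src="${v.thumbnailUrl}" width="320" alt=""></a>
      <h3 style="margin:8px 0 4px;">${v.title}</h3>
      <p style="margin:0;color:#555;">${v.description}</p>
    </td>
  </tr>`).join('');

const html = `<table role="presentation" width="100%" cellpadding="0" cellspacing="0">${cards}</table>`;

return [{ json: { html } }];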
What's new in BIG-IP v21.0?
Introduction
In November 2025, F5 released the latest version of BIG-IP software, v21.0. This release is packed with fixes and new features that enhance the F5 Application Delivery and Security Platform (ADSP). These changes complement the Delivery, Security, and Deployment aspects of the ADSP.

New SSL Orchestrator Features

SNI Preservation
SNI (Server Name Indication) Preservation is now supported for Inbound Gateway Mode. This preserves the client's original SNI information as traffic passes through the reverse proxy, allowing backend TLS servers to access and use this information. This enables accurate application routing and supports security workflows like threat detection and compliance enforcement. Previous software versions required custom iRules to enable this functionality. Note: SNI preservation is enabled by default. However, if you have existing Inbound Gateway Topologies, you must redeploy them for the change to take effect.

iRule Control for Service Entry and Return
Previously, iRules were only available on the entry (ingress) side, limiting customization to traffic entering the Inspection Service. iRule control is now extended to the return-side traffic of Inspection Services. You can now apply iRules on both sides of an Inspection Service (L2, L3, HTTP). This enhancement provides full control over traffic entering and leaving the Inspection Service, enabling more flexible, powerful, and fine-grained traffic handling. The Services page now includes configuration for iRules on service entry and iRules on service return.

A typical use case for this feature is what we call header enrichment. In this case, iRules are used to add headers to the payload before sending it to the Inspection Service. The headers could contain the authenticated username or group membership of the person who initiated the connection. This information can be useful to Inspection Services for logging, policy enforcement, or both. The benefit of this feature is that the authenticated username/group membership header can be removed from the payload on egress, preventing it from being leaked to origin servers.

New Access Policy Manager (APM) Features

Expanded Exclusion Support for Locked Client Mode
Previously, APM locked client mode allowed a maximum of 10 exclusions, preventing administrators from adding more than 10 destinations. This limitation has now been removed, and the exclusion list can contain more than 10 entries.

OAuth Authorization Server Max Claims Data Support
The maximum claim data size is set to 8 KB by default, but a large claim size can lead to excessive memory consumption. The right amount of memory must be allocated dynamically, as required by the claims configuration.

New Features in BIG-IP v21.0.0

Control Plane Performance and Scalability Improvements
The BIG-IP 21.0.0 release introduces significant improvements to the BIG-IP control plane, including better scalability and support for large-scale configurations (up to 1 million objects). This includes MCPD efficiency enhancements and eXtremeDB scale improvements.

AI Data Delivery
Optimize performance and simplify configuration with new S3 data storage integrations. Use cases include secure ingestion for fine-tuning and batch inference, high-throughput retrieval for RAG and embeddings generation, policy-driven model artifact distribution with observability, and controlled egress with consistent security and compliance. F5 BIG-IP optimizes and secures S3 data ingress and egress for AI workloads.
Model Context Protocol (MCP) support for AI traffic
Accelerate and scale AI workloads with support for MCP, which enables seamless communication between AI models, applications, and data sources. This enhances performance, secures connections, and streamlines deployment for AI workloads.

Migrating BIG-IP from Entrust to Alternative Certificate Authorities
Entrust is being delisted as a certificate authority by many major browsers. Following a variety of compliance failures against industry standards in recent years, browsers such as Google Chrome and Mozilla Firefox publicly announced their distrust of Entrust certificates last year. As such, Entrust certificates issued on or after November 12, 2024, are deemed insecure by most browsers.

Conclusion
Upgrade your BIG-IP to version 21.0 today to take advantage of these fixes and new features that enhance the F5 Application Delivery and Security Platform (ADSP). These changes complement the Delivery, Security, and Deployment aspects of the ADSP.

Related Content
- SSL Orchestrator Release Notes
- BIG-IP Release Notes
- Blog: F5 BIG-IP v21.0: Control plane, AI data delivery and security enhancements
- Press Release: F5 launches BIG-IP v21.0
- Introduction to BIG-IP SSL Orchestrator

Modernizing F5 Platforms with Ansible
I've been meaning to publish this article for some time now. Over the past few months, I've been building Ansible automation that I believe will help customers modernize their F5 infrastructure. This is especially true for those looking to migrate from legacy BIG-IP hardware to next-generation platforms like VELOS and rSeries.

As I explored tools like F5 Journeys and traditional CLI-based migration methods, I noticed a significant amount of manual pre-work was still required. This includes:
- Ensuring the Master Key used to encrypt the UCS archive is preserved and securely handled
- Storing the UCS, Master Key, and information assets on a backup host
- Pre-configuring all VLANs and properly tagging them on the VELOS partition before deploying a Tenant OS

To streamline this, I created an Ansible Playbook with supporting roles tailored for Red Hat Ansible Automation Platform. It's built to perform a lift-and-shift migration of an F5 BIG-IP configuration from one device to another—with optional OS upgrades included. In the demo video below, you'll see an automated migration of an F5 i10800 running 15.1.10 to a VELOS BX110 Tenant OS running 17.5.0—demonstrating a smooth, hands-free modernization process.

Currently Working

VELOS
- VELOS Controller/Partition running F5OS-C 1.8.1, which allows the Tenant management IP to be in a different VLAN
- Migrates a standalone F5 BIG-IP i10800 to a VELOS BX110 Tenant OS
- VLAN'ed source tenant required (doesn't support non-VLAN tenants)

rSeries
- Shares the management IP subnet with the chassis partition
- Migrates a standalone F5 BIG-IP i10800 to an R5000 Tenant OS
- VLAN'ed source tenant required (doesn't support non-VLAN tenants)

Handles:
- Configuration and crypto backup
- UCS creation, transfer, and validation
- F5OS system VLAN creation and association to the tenant (does not manage interface-to-VLAN mapping)
- F5OS tenant provisioning and deployment
- Inline OS upgrades during the migration

Roadmap / What's Next
- Expanding testing to include VIPRION/iSeries (vCMP) tenants
- Supporting hardware-to-virtual platform migrations
- Adding functionality for HA (High Availability) environments

Watch the Demo Video

View the Source Code on GitHub
https://github.com/f5devcentral/f5-bd-ansible-platform-modernization

This project is built for the community—so feel free to take it, fork it, and expand it. Let's make F5 platform modernization as seamless and automated as possible.
Distributed Cloud for App Delivery & Security for Hybrid Environments
As enterprises modernize and expand their digital services, they increasingly deploy multiple instances of the same applications across diverse infrastructure environments—such as VMware, OpenShift, and Nutanix—to support distributed teams, regional data sovereignty, redundancy, or environment-specific compliance needs. These application instances often integrate into service chains that span clouds and data centers, introducing both scale and operational complexity. F5 Distributed Cloud provides a unified solution for secure, consistent application delivery and security across hybrid and multi-cloud environments. It enables organizations to add workloads seamlessly—whether for scaling, redundancy, or localization—without sacrificing visibility, security, or performance.

Building a Secure Application DMZ with F5 Distributed Cloud and Equinix Network Edge
Why: Establishing a Secure Application DMZ
Enterprises increasingly need to deliver their own applications directly to customers across geographies. Relying solely on external providers for Points of Presence (PoPs) can limit control, visibility, and flexibility. A secure Application Demilitarized Zone (DMZ) empowers organizations to:
- Establish their own PoPs for internet-facing applications
- Maintain control over security, compliance, and performance
- Deliver applications consistently across regions
- Reduce dependency on third-party infrastructure
This approach enables enterprises to build a globally distributed application delivery footprint tailored to their business needs.

What: A Unified Solution to Secure Global Application Delivery
The joint solution integrates F5 Distributed Cloud (F5XC) Customer Edge (CE), deployed via the Equinix Network Edge Marketplace, with Equinix Fabric to create a strategic point of control for secure, scalable application delivery.

Key Capabilities
- Secure Ingress/Egress: CE devices serve as secure gateways for public-facing applications, integrating WAF, API protection, and DDoS mitigation.
- Global Reach: Equinix's infrastructure enables CE deployment in strategic locations worldwide.
- Multicloud Networking: Seamless connectivity across public clouds, private data centers, and edge locations.
- Centralized Management: The F5XC Console provides unified visibility, policy enforcement, and automation.
Together, these components form a cohesive solution that supports enterprise-grade application delivery with security, performance, and control.

How: Architectural Overview

Core Components
- F5XC Customer Edge (CE): Deployed as a virtual network function at Equinix PoPs, CE serves as the secure entry point for applications.
- F5 Distributed Cloud Console: Centralized control plane for managing CE devices, policies, and analytics.
- Equinix Network Edge Marketplace: Enables rapid provisioning of CE devices as virtual appliances.
- Equinix Fabric: High-performance interconnectivity between CE devices, clouds, and data centers.

Key Tenets of the Solution
- Strategic Point of Control: CE becomes the enterprise's own PoP, enabling secure and scalable delivery of applications.
- Unified Security Posture: Integrated WAF, API security, and DDoS protection across all CE locations.
- Consistent Policy Enforcement: Centralized control plane ensures uniform security and compliance policies.
- Multicloud and Edge Flexibility: Seamless connectivity across AWS, Azure, GCP, private clouds, and data centers.
- Rapid Deployment: CE provisioning via the Equinix Marketplace reduces time-to-market and operational overhead.
- Partner and Customer Connectivity: Supports business partner exchanges and direct customer access without traditional networking complexity.

Additional Links
- Multicloud chaos ends at the Equinix Edge with F5 Distributed Cloud CE
- F5 and Equinix Partnership
- Equinix Fabric Overview
- Secure Extranet with Equinix Fabric and F5 Distributed Cloud
- Additional Equinix and F5 partner information

BIG-IP Next Edge Firewall CNF for Edge workloads
Introduction
The CNF architecture aligns with cloud-native principles by enabling horizontal scaling, ensuring that applications can expand seamlessly without compromising performance. It preserves the deterministic reliability essential for telecom environments, balancing scalability with the stringent demands of real-time processing. More background on the value CNFs bring to the environment is available here: https://community.f5.com/kb/technicalarticles/from-virtual-to-cloud-native-infrastructure-evolution/342364

Telecom service providers make use of CNFs for performance optimization to:
- Enable efficient and secure processing of N6-LAN traffic at the edge to meet the stringent requirements of 5G networks.
- Optimize AI-RAN deployments with dynamic scaling and enhanced security, ensuring that AI workloads are processed efficiently and securely at the edge, improving overall network performance.
- Deploy advanced AI applications at the edge with the confidence of carrier-grade security and traffic management, ensuring real-time processing and analytics for a variety of edge use cases.

CNF Firewall Implementation Overview
Let's start by understanding how different CRs are enabled within a CNF implementation; this allows CNF to achieve more optimized performance, CapEx, and OpEx. The traditional way of inserting services into Kubernetes is shown below. Moving to a consolidated data plane approach saved 60% of the Kubernetes environment's performance overhead.

The F5BigFwPolicy Custom Resource (CR) applies industry-standard firewall rules to the Traffic Management Microkernel (TMM), ensuring that only connections initiated by trusted clients are accepted. When a new F5BigFwPolicy CR configuration is applied, the firewall rules are first sent to the Application Firewall Management (AFM) Pod, where they are compiled into Binary Large Objects (BLOBs) to enhance processing performance. Once the firewall BLOB is compiled, it is sent to the TMM Proxy Pod, which begins inspecting and filtering network packets based on the defined rules.

Enabling AFM within BIG-IP Controller
Let's explore how we can enable and configure the CNF Firewall. Below is an overview of the steps needed to set up the environment, up to the installation of the CNF CRs.

[Enabling the AFM]
Enabling the AFM CR within the BIG-IP Controller definition:

global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
cert-orchestrator:
  enabled: true
afm:
  pccd:
    enabled: true
    image:
      repository: "local.registry.com"

[Configuration]
Example firewall policy settings:

apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: "cnf-fw-policy"
  namespace: "cnf-gateway"
spec:
  rule:
    - name: allow-10-20-http
      action: "accept"
      logging: true
      servicePolicy: "service-policy1"
      ipProtocol: tcp
      source:
        addresses:
          - "2002::10:20:0:0/96"
        zones:
          - "zone1"
          - "zone2"
      destination:
        ports:
          - "80"
        zones:
          - "zone3"
          - "zone4"
    - name: allow-10-30-ftp
      action: "accept"
      logging: true
      ipProtocol: tcp
      source:
        addresses:
          - "2002::10:30:0:0/96"
        zones:
          - "zone1"
          - "zone2"
      destination:
        ports:
          - "20"
          - "21"
        zones:
          - "zone3"
          - "zone4"
    - name: allow-us-traffic
      action: "accept"
      logging: true
      source:
        geos:
          - "US:California"
      destination:
        geos:
          - "MX:Baja California"
          - "MX:Chihuahua"
    - name: drop-all
      action: "drop"
      logging: true
      ipProtocol: any
      source:
        addresses:
          - "::0/0"
          - "0.0.0.0/0"

[Logging & Monitoring]
CNF firewall settings allow not only local logging but also HSL (high-speed logging) to external logging destinations.
apiVersion: "k8s.f5net.com/v1"
kind: F5BigLogProfile
metadata:
  name: "cnf-log-profile"
  namespace: "cnf-gateway"
spec:
  name: "cnf-logs"
  firewall:
    enabled: true
    network:
      publisher: "cnf-hsl-pub"
      events:
        aclMatchAccept: true
        aclMatchDrop: true
        tcpEvents: true
        translationFields: true

Verifying the CNF firewall settings can be done through the sidecar container:

kubectl exec -it deploy/f5-tmm -c debug -n cnf-gateway -- bash

tmctl -d blade fw_rule_stat

context_type context_name
------------ ------------------------------------------
virtual      cnf-gateway-cnf-fw-policy-SecureContext_vs

rule_name                            micro_rules counter last_hit_time action
------------------------------------ ----------- ------- ------------- ------
allow-10-20-http-firewallpolicyrule  1           2       1638572860    2
allow-10-30-ftp-firewallpolicyrule   1           5       1638573270    2

Conclusion
To conclude, we showed how CNFs with consolidated data planes help optimize CNF deployments. In this article we went through an overview of the BIG-IP Next Edge Firewall CNF implementation, a sample configuration, and its monitoring capabilities. More articles covering additional use cases will follow.

Related content
- F5BigFwPolicy
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- CNF Home

Leverage F5 BIG-IP APM and Azure AD Conditional Access Easy button
Integrating F5 BIG-IP APM's Identity Aware Proxy (IAP) with Microsoft Entra ID (previously called Azure AD) Conditional Access enables fine-grained, adaptable, zero trust access to any application, regardless of location and authentication method, with continuous monitoring and verification.
Zero Trust building blocks - Leverage Microsoft Intune endpoint Compliance with F5 BIG-IP APM Access
Use case summary
Let's walk through a real-life scenario: we have company A, which is building its Zero Trust strategy, and of course it would be great to make use of existing solutions to reach that target. Microsoft Intune provides a great source of intelligence and compliance enforcement for endpoints. Combined with F5 BIG-IP Access Policy Manager (APM) integrated with Microsoft Entra ID (previously called Azure AD), this extends enforcement to the endpoints accessing company A's resources, whether SaaS or locally hosted. Below is the flow of some use cases showing how F5 BIG-IP APM and Microsoft Intune pave the way toward a Zero Trust strategy.

We have an endpoint managed by Microsoft Intune. Intune contains a device compliance policy that determines the conditions under which the machine is considered compliant, and a configuration profile that defines the settings for specific applications (in our case, F5 Access VPN). We have the following use cases:

Use case 1: a user tries to access a web application through F5 BIG-IP APM.
- BIG-IP is already integrated with Microsoft Intune and Microsoft Entra ID (previously called Azure AD).
- F5 BIG-IP APM acts as the SP and directs the user request to Microsoft Entra ID for authentication and a compliance check.
- If the user authenticates successfully and passes the compliance policy, the user is redirected back to the application with a SAML assertion response; otherwise, access is denied.
A demo was created by our awesome Access guru Matt_Dierick.

Use case 2: a user tries to use SSL VPN to access corporate resources.
- The user clicks the F5 Access VPN connection pushed to the endpoint via a configuration profile in Microsoft Intune.
- The user selects the proper authentication method (username and password, smart card, or certificate-based authentication).
- Once the user successfully authenticates and passes the compliance check, a temporary certificate is pushed to the machine.
- The temporary certificate is used to authenticate with F5 BIG-IP APM, and the user is then granted access to the SSL VPN connection.
A demo was created for this use case as well by our awesome Access guru Matt_Dierick. Note that the Microsoft Intune portal has been updated: endpoint management tasks can now be performed through the endpoint.microsoft.com portal instead of portal.azure.com, so make sure to follow Microsoft's documentation for any new updates.

Conclusion
From the highlighted use cases, we can see that we can make use of existing solutions and extend their capabilities, thanks to easy integration, to achieve our organization's Zero Trust strategy. F5 BIG-IP in general allows the organization to decouple the client-side connection from the server side, which simplifies further service integration to boost the organization's security posture. F5 BIG-IP APM allows us to integrate with different parties and use their insights, whether endpoint compliance, risk factors, or IDaaS, to secure application or network access. In addition to corporate secure access, if you have customers accessing applications and need integration with Google or another OpenID Connect (OIDC) provider, you can use F5 BIG-IP APM's OIDC integration with that third party for customer access.

Additional resources
- Configuring Access Policy Manager for MDM applications
- BIG-IP Access Policy Manager: Third-Party Integration
- OAuth and OpenID Connect - Made easy with Access Guided Configurations templates

Powering Progressive Deployment in Kubernetes with NGINX and Argo Rollouts
This article demonstrates how you can use NGINX Gateway Fabric combined with Argo Rollouts to perform progressive delivery. The canary pattern is introduced and employed as we explore and execute rollout scenarios.

How I did it - "F5 BIG-IP Observability with Dynatrace and F5 Telemetry Streaming"
Welcome back to another edition of "How I Did It." It's been a while since we looked at observability… Oh wait, I just said that. Anyway, in this post I'll walk through how I integrated F5 Telemetry Streaming with Dynatrace. To show the results, I've included sample dashboards that highlight how the ingested telemetry data can be visualized effectively. Let's dive in before I repeat myself again.