# Application Delivery
## Distributed Cloud for App Delivery & Security for Hybrid Environments
As enterprises modernize and expand their digital services, they increasingly deploy multiple instances of the same applications across diverse infrastructure environments (such as VMware, OpenShift, and Nutanix) to support distributed teams, regional data sovereignty, redundancy, or environment-specific compliance needs. These application instances often integrate into service chains that span clouds and data centers, introducing both scale and operational complexity. F5 Distributed Cloud provides a unified solution for secure, consistent application delivery and security across hybrid and multi-cloud environments. It enables organizations to add workloads seamlessly, whether for scaling, redundancy, or localization, without sacrificing visibility, security, or performance.
## Building a Secure Application DMZ with F5 Distributed Cloud and Equinix Network Edge

### Why: Establishing a Secure Application DMZ

Enterprises increasingly need to deliver their own applications directly to customers across geographies. Relying solely on external providers for Points of Presence (PoPs) can limit control, visibility, and flexibility. A secure application demilitarized zone (DMZ) empowers organizations to:

- Establish their own PoPs for internet-facing applications.
- Maintain control over security, compliance, and performance.
- Deliver applications consistently across regions.
- Reduce dependency on third-party infrastructure.

This approach enables enterprises to build a globally distributed application delivery footprint tailored to their business needs.

### What: A Unified Solution for Secure Global Application Delivery

The joint solution integrates an F5 Distributed Cloud (F5 XC) Customer Edge (CE), deployed via the Equinix Network Edge Marketplace, with Equinix Fabric to create a strategic point of control for secure, scalable application delivery.

Key capabilities:

- Secure ingress/egress: CE devices serve as secure gateways for public-facing applications, integrating WAF, API protection, and DDoS mitigation.
- Global reach: Equinix's infrastructure enables CE deployment in strategic locations worldwide.
- Multicloud networking: Seamless connectivity across public clouds, private data centers, and edge locations.
- Centralized management: The F5 XC Console provides unified visibility, policy enforcement, and automation.

Together, these components form a cohesive solution that supports enterprise-grade application delivery with security, performance, and control.

### How: Architectural Overview

Core components:

- F5 XC Customer Edge (CE): Deployed as a virtual network function at Equinix PoPs, the CE serves as the secure entry point for applications.
- F5 Distributed Cloud Console: Centralized control plane for managing CE devices, policies, and analytics.
- Equinix Network Edge Marketplace: Enables rapid provisioning of CE devices as virtual appliances.
- Equinix Fabric: High-performance interconnectivity between CE devices, clouds, and data centers.

Key tenets of the solution:

- Strategic point of control: The CE becomes the enterprise's own PoP, enabling secure and scalable application delivery.
- Unified security posture: Integrated WAF, API security, and DDoS protection across all CE locations.
- Consistent policy enforcement: A centralized control plane ensures uniform security and compliance policies.
- Multicloud and edge flexibility: Seamless connectivity across AWS, Azure, GCP, private clouds, and data centers.
- Rapid deployment: CE provisioning via the Equinix Marketplace reduces time-to-market and operational overhead.
- Partner and customer connectivity: Supports business partner exchanges and direct customer access without traditional networking complexity.

A hypothetical configuration sketch of the delivery object at the heart of this design follows below, with further reading links after it.
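For a concrete flavor, here is a rough, hypothetical sketch of the XC HTTP Load Balancer object that might publish an application through such a CE. The article itself includes no configuration; field names only approximate the XC API schema and every value below is invented, so consult the F5 Distributed Cloud API reference before use:

```yaml
# Hypothetical F5 XC HTTP Load Balancer object for a CE-based DMZ.
# Field names approximate the XC API; all names and values are illustrative.
metadata:
  name: dmz-web-app
  namespace: my-app-namespace
spec:
  domains:
    - app.example.com
  https_auto_cert:
    http_redirect: true
  default_route_pools:
    - pool:
        name: app-origin-pool        # origin pool reaching the backend
        namespace: my-app-namespace
      weight: 1
  app_firewall:                      # attach a WAF policy at the CE ingress
    name: default-waf
    namespace: shared
  advertise_custom:                  # advertise only on the Equinix CE site
    advertise_where:
      - site:
          network: SITE_NETWORK_OUTSIDE
          site:
            name: equinix-ce-site
            namespace: system
```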
Additional links:

- Multicloud chaos ends at the Equinix Edge with F5 Distributed Cloud CE
- F5 and Equinix Partnership
- Equinix Fabric Overview
- Secure Extranet with Equinix Fabric and F5 Distributed Cloud
- Additional Equinix and F5 partner information

## BIG-IP Next Edge Firewall CNF for Edge Workloads

### Introduction

The CNF architecture aligns with cloud-native principles by enabling horizontal scaling, ensuring that applications can expand seamlessly without compromising performance. It preserves the deterministic reliability essential for telecom environments, balancing scalability with the stringent demands of real-time processing. For more background on the value CNFs bring to these environments, see: https://community.f5.com/kb/technicalarticles/from-virtual-to-cloud-native-infrastructure-evolution/342364

Telecom service providers make use of CNFs for performance optimization, and to:

- Enable efficient and secure processing of N6-LAN traffic at the edge to meet the stringent requirements of 5G networks.
- Optimize AI-RAN deployments with dynamic scaling and enhanced security, ensuring that AI workloads are processed efficiently and securely at the edge, improving overall network performance.
- Deploy advanced AI applications at the edge with the confidence of carrier-grade security and traffic management, ensuring real-time processing and analytics for a variety of edge use cases.

### CNF Firewall Implementation Overview

Let's start with understanding how the different CRs are enabled within a CNF implementation; this allows the CNF to achieve more optimized performance, CapEx, and OpEx. (The original article illustrates, side by side, the traditional way of inserting services into Kubernetes and the consolidated approach.) Moving to a consolidated data plane approach saved 60% of the Kubernetes environment's performance.

The F5BigFwPolicy Custom Resource (CR) applies industry-standard firewall rules to the Traffic Management Microkernel (TMM), ensuring that only connections initiated by trusted clients are accepted. When a new F5BigFwPolicy CR configuration is applied, the firewall rules are first sent to the Advanced Firewall Manager (AFM) Pod, where they are compiled into binary large objects (BLOBs) to enhance processing performance. Once the firewall BLOB is compiled, it is sent to the TMM Proxy Pod, which begins inspecting and filtering network packets based on the defined rules.

### Enabling AFM within the BIG-IP Controller

Let's explore how to enable and configure the CNF firewall. Below is an overview of the steps needed to set up the environment, up to installing the CNF CRs.

Enabling the AFM CR within the BIG-IP Controller definition:

```yaml
global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
cert-orchestrator:
  enabled: true
afm:
  pccd:
    enabled: true
    image:
      repository: "local.registry.com"
```

### Configuration

Example firewall policy settings:

```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: "cnf-fw-policy"
  namespace: "cnf-gateway"
spec:
  rule:
  - name: allow-10-20-http
    action: "accept"
    logging: true
    servicePolicy: "service-policy1"
    ipProtocol: tcp
    source:
      addresses:
      - "2002::10:20:0:0/96"
      zones:
      - "zone1"
      - "zone2"
    destination:
      ports:
      - "80"
      zones:
      - "zone3"
      - "zone4"
  - name: allow-10-30-ftp
    action: "accept"
    logging: true
    ipProtocol: tcp
    source:
      addresses:
      - "2002::10:30:0:0/96"
      zones:
      - "zone1"
      - "zone2"
    destination:
      ports:
      - "20"
      - "21"
      zones:
      - "zone3"
      - "zone4"
  - name: allow-us-traffic
    action: "accept"
    logging: true
    source:
      geos:
      - "US:California"
    destination:
      geos:
      - "MX:Baja California"
      - "MX:Chihuahua"
  - name: drop-all
    action: "drop"
    logging: true
    ipProtocol: any
    source:
      addresses:
      - "::0/0"
      - "0.0.0.0/0"
```

A quick sketch of applying and checking this policy follows below.
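As a usage sketch (not shown in the article), applying the CR saved as `cnf-fw-policy.yaml` looks like this. The lowercase resource name passed to `kubectl get` is an assumption based on the CR kind; verify it against your cluster:

```bash
# Apply the policy CR (the manifest already carries namespace cnf-gateway)
kubectl apply -f cnf-fw-policy.yaml

# Confirm the CR was admitted; the resource name is an assumption,
# check the registered name with: kubectl api-resources | grep -i fwpolicy
kubectl get f5bigfwpolicy -n cnf-gateway
```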
apiVersion: "k8s.f5net.com/v1" kind: F5BigLogProfile metadata: name: "cnf-log-profile" namespace: "cnf-gateway" spec: name: "cnf-logs" firewall: enabled: true network: publisher: "cnf-hsl-pub" events: aclMatchAccept: true aclMatchDrop: true tcpEvents: true translationFields: true Verifying the CNF firewall settings can be done through the sidecar container kubectl exec -it deploy/f5-tmm -c debug -n cnf-gateway – bash tmctl -d blade fw_rule_stat context_type context_name ------------ ------------------------------------------ virtual cnf-gateway-cnf-fw-policy-SecureContext_vs rule_name micro_rules counter last_hit_time action ------------------------------------ ----------- ------- ------------- ------ allow-10-20-http-firewallpolicyrule 1 2 1638572860 2 allow-10-30-ftp-firewallpolicyrule 1 5 1638573270 2 Conclusion To conclude our article, we showed how CNFs with consolidated data planes help with optimizing CNF deployments. In this article we went through the overview of BIG-IP Next Edge Firewall CNF implementation, sample configuration and monitoring capabilities. More use cases to cover different use cases to be following. Related content F5BigFwPolicy BIG-IP Next Cloud-Native Network Functions (CNFs) CNF Home47Views2likes1CommentLeverage F5 BIG-IP APM and Azure AD Conditional Access Easy button
## Leverage F5 BIG-IP APM and Azure AD Conditional Access Easy Button

Integrating F5 BIG-IP APM's Identity Aware Proxy (IAP) with Microsoft Entra ID (previously called Azure AD) Conditional Access enables fine-grained, adaptive, zero trust access to any application, regardless of location and authentication method, with continuous monitoring and verification.
## Zero Trust Building Blocks: Leverage Microsoft Intune Endpoint Compliance with F5 BIG-IP APM Access
### Use Case Summary

Let's walk through a real-life scenario: Company A is building its Zero Trust strategy and, naturally, wants to make use of existing solutions to reach that target. Microsoft Intune introduces a great source of intelligence and compliance enforcement for endpoints. Combined with F5 BIG-IP Access Policy Manager (APM) integrated with Microsoft Entra ID (previously called Azure AD), this extends enforcement to the endpoints accessing Company A's resources, whether SaaS-based or locally hosted.

Below are the flows of some use cases that show how F5 BIG-IP APM and Microsoft Intune pave the way to a Zero Trust strategy. We have an endpoint managed by Microsoft Intune. Intune contains a device compliance policy that determines the conditions under which the machine is considered compliant, and a configuration profile that defines the settings for specific applications, in our case the F5 Access VPN client.

Use case 1: Web application access

1. The user tries to access a web application through F5 BIG-IP APM, which is already integrated with Microsoft Intune and Microsoft Entra ID.
2. F5 BIG-IP APM acts as the SP and redirects the user's request to Microsoft Entra ID for authentication and compliance checking.
3. If the user successfully authenticates and passes the compliance policy, the user is redirected back to the application with a SAML assertion response; otherwise, access is denied.

A demo was created by our awesome Access guru Matt_Dierick.

Use case 2: SSL VPN access

1. The user clicks the F5 Access VPN connection pushed to the endpoint via a configuration profile in Microsoft Intune.
2. The user selects the proper authentication method (username and password, smart card, or certificate-based authentication).
3. Once the user successfully authenticates and passes the compliance check, a temporary certificate is pushed to the machine.
4. The temporary certificate is used to authenticate with F5 BIG-IP APM, and the user is then granted access to the SSL VPN connection.

A demo was created for this use case as well by Matt_Dierick. Note that since the Microsoft Intune portal was updated, endpoint management tasks are now performed through the endpoint.microsoft.com portal instead of portal.azure.com; make sure to follow Microsoft's documentation for any new updates.

### Conclusion

The highlighted use cases show that we can build on existing solutions and extend their capabilities, with straightforward integration, to achieve an organization's Zero Trust strategy. F5 BIG-IP in general allows an organization to decouple the client-side connection from the server side, which simplifies integrating further services to boost the organization's security posture. F5 BIG-IP APM can integrate with different parties (endpoint compliance, risk scoring, or IDaaS) and use those insights to secure application or network access. Beyond corporate access, if customers access applications and you need integration with Google or another OpenID Connect (OIDC) provider, F5 BIG-IP APM's OIDC integration can work with that third party for customer access.

### Additional Resources

- Configuring Access Policy Manager for MDM applications
- BIG-IP Access Policy Manager: Third-Party Integration
- OAuth and OpenID Connect - Made easy with Access Guided Configurations templates
## Powering Progressive Deployment in Kubernetes with NGINX and Argo Rollouts

This article demonstrates how you can use NGINX Gateway Fabric combined with Argo Rollouts to perform progressive delivery. The canary pattern is introduced and employed as we explore and execute rollout scenarios.
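To make the pattern concrete, here is a minimal sketch of an Argo Rollouts canary Rollout wired to a Gateway API HTTPRoute of the kind NGINX Gateway Fabric serves. All names are illustrative assumptions rather than values from the article, and the traffic routing shown assumes the argoproj-labs Gateway API traffic-router plugin is installed in your Argo Rollouts deployment:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: registry.example.com/demo-app:v2   # hypothetical image
  strategy:
    canary:
      canaryService: demo-app-canary   # Service selecting canary pods
      stableService: demo-app-stable   # Service selecting stable pods
      trafficRouting:
        plugins:
          argoproj-labs/gatewayAPI:
            httpRoute: demo-app-route  # HTTPRoute attached to the NGINX Gateway
            namespace: default
      steps:
      - setWeight: 20                  # shift 20% of traffic to the canary
      - pause: {}                      # wait for manual promotion
      - setWeight: 50
      - pause: {duration: 10m}
```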
## How I Did It: "F5 BIG-IP Observability with Dynatrace and F5 Telemetry Streaming"

Welcome back to another edition of "How I Did It." It's been a while since we looked at observability... oh wait, I just said that. Anyway, in this post I'll walk through how I integrated F5 Telemetry Streaming with Dynatrace. To show the results, I've included sample dashboards that highlight how the ingested telemetry data can be visualized effectively. Let's dive in before I repeat myself again.
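The post's full declaration isn't reproduced here, but as a rough sketch, a Telemetry Streaming declaration that pushes BIG-IP stats toward Dynatrace via the Generic_HTTP consumer might look like the following. The Dynatrace hostname, ingest path, and token are illustrative assumptions (placeholders), so check Dynatrace's metrics-ingest documentation and the article for the exact consumer setup:

```json
{
    "class": "Telemetry",
    "My_System": {
        "class": "Telemetry_System",
        "systemPoller": {
            "interval": 60
        }
    },
    "Dynatrace_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Generic_HTTP",
        "host": "abc12345.live.dynatrace.com",
        "protocol": "https",
        "port": 443,
        "path": "/api/v2/metrics/ingest",
        "method": "POST",
        "headers": [
            {
                "name": "Authorization",
                "value": "Api-Token dt0c01.EXAMPLE-TOKEN"
            }
        ]
    }
}
```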
## Thinking Outside the Box: Rewriting Web Pages with F5 Distributed Cloud (XC)

This article demonstrates how to dynamically rewrite web page content, such as updating links or replacing text, by using native features in F5 Distributed Cloud (XC). It provides a creative workaround that leverages JavaScript injection to modify pages on the fly, avoiding the need for a separate proxy like NGINX or BIG-IP.
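The article's exact snippet isn't shown here, but as a minimal sketch of the idea, a script injected into responses could rewrite links and text client-side. The hostnames, selector, and strings below are invented for illustration and are not from the article:

```javascript
// Minimal illustration of the kind of script one might inject to rewrite a
// page client-side. All hostnames, selectors, and strings are invented.
document.addEventListener('DOMContentLoaded', function () {
  // Rewrite anchors that still point at a legacy domain
  document.querySelectorAll('a[href*="legacy.example.com"]').forEach(function (a) {
    a.href = a.href.replace('legacy.example.com', 'www.example.com');
  });

  // Replace a product name wherever it appears in plain text nodes
  var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  var node;
  while ((node = walker.nextNode()) !== null) {
    node.nodeValue = node.nodeValue.replace(/Old Product/g, 'New Product');
  }
});
```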
## F5 NGINX Plus R36 Release Now Available

We're thrilled to announce the availability of F5 NGINX Plus Release 36 (R36). NGINX Plus R36 adds advanced capabilities such as an HTTP CONNECT forward proxy, richer OpenID Connect (OIDC) support, expanded ACME options, and new container images packaged with popular modules. In addition, NGINX Plus inherits all the latest capabilities from NGINX Open Source, the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

Here's a summary of the most important updates in R36:

- HTTP CONNECT forward proxy: NGINX Plus can now tunnel HTTP traffic via the HTTP CONNECT method, making it easy to centralize egress policies on a trusted NGINX Plus server. Advanced features limit proxied traffic to specific origin clients, ports, or destination hosts and networks.
- Native OIDC support for PKCE, front-channel logout, and POST client authentication: Further enhancements to the popular OpenID Connect module add PKCE enforcement, the ability to log out of all applications a client was signed in to, and support for authenticating the OIDC client using the client_secret_post method.
- ACME enhancements for certificate automation: Certificate automation is more powerful than ever as we continue to improve the ACME (Automatic Certificate Management Environment) module. New features include support for the TLS-ALPN-01 challenge method, the ability to select a specific certificate chain, IP-based certificates, ACME profiles, and external account bindings.
- TLS certificate compression: High-volume or high-latency connections can now benefit from a new capability that optimizes TLS handshakes by compressing certificates and associated CA chains.
- Container images with popular modules: Container images now include our most popular modules, making it even easier to deploy NGINX Plus in container environments. Alongside the previously included njs module, images now ship with the ACME, OpenTelemetry Tracing, and Prometheus Exporter modules.

Key features inherited from NGINX Open Source include:

- Support for 0-RTT in QUIC
- Inheritance control for headers and trailers
- Support for OpenSSL 3.5

### New Features in Detail

#### HTTP CONNECT Forward Proxy

NGINX Plus R36 adds native HTTP CONNECT support via the new ngx_http_tunnel_module, which enables NGINX Plus to operate as a forward proxy for outbound HTTP/HTTPS traffic. You can restrict clients, ports, and hosts, as well as which networks are reachable, via centralized egress policies. This new capability allows you to monitor outbound connections through one or more NGINX Plus instances instead of relying on separate proxy products or DIY open source projects.

Why it matters:

- Enforces consistent egress policies without introducing another proxy to your stack.
- Centralizes proxy rules and threat monitoring for outbound connections.
- Reduces the need for custom modules or sidecar proxies to tunnel outbound TLS.

Who it helps:

- Platform and network teams that must route all outbound traffic through a controlled proxy.
- Security teams that want a single place to enforce and audit egress rules.
- Operators consolidating L7 functions (inbound and outbound) on NGINX Plus.

The following sample forward proxy configuration filters ports, destination hosts, and networks. Note the requirement to specify a DNS resolver. For simplicity, we've configured a popular, publicly available DNS.
However, doing so is not recommended for certain production environments, due to access limitations and unpredictable availability.

```nginx
num_map $request_port $allowed_ports {
    443        1;
    8440-8445  1;
}

geo $upstream_last_addr $allowed_nets {
    52.58.199.0/24    1;
    3.125.197.172/24  1;
}

map $host $allowed_hosts {
    hostnames;
    nginx.org    1;
    *.nginx.org  1;
}

server {
    listen 3128;
    resolver 1.1.1.1;  # unsafe

    tunnel_allow_upstream $allowed_nets $allowed_ports $allowed_hosts;
    tunnel_pass;
}
```

#### Native OIDC Support for PKCE, Front-Channel Logout, and POST Client Authentication

R36 extends the native OIDC features introduced in R34 with PKCE enforcement, front-channel logout, and support for authenticating clients via the client_secret_post method.

PKCE for the authorization code flow can be auto-enabled for a configured identity provider when S256 is advertised in "code_challenge_methods_supported" in its federated metadata. Alternatively, it can be toggled explicitly with a simple directive:

```nginx
pkce on;   # force PKCE even if the OP does not advertise it
pkce off;  # disable PKCE even if the OP advertises S256
```

A new front-channel logout endpoint lets an identity provider trigger a browser-based sign-out from all applications that a client is signed in to:

```nginx
oidc_provider okta_app {
    issuer                  https://dev.okta.com/oauth2/default;
    client_id               <client_id>;
    client_secret           <client_secret>;
    logout_uri              /logout;
    logout_token_hint       on;
    frontchannel_logout_uri /front_logout;
    userinfo                on;
}
```

Support for client_secret_post improves compatibility with identity providers that require POST-based client authentication per OpenID Connect Core 1.0.

Why it matters:

- Brings NGINX Plus in line with modern identity provider expectations that treat PKCE as a best practice for all authorization code flows.
- Enables reliable logout across multiple applications from a single identity provider trigger.
- Makes NGINX Plus easier to integrate with providers that insist on POST-based client authentication.

Who it helps:

- Identity access management and security teams standardizing on OIDC and PKCE.
- Application and API owners who need consistent login and logout behavior across many services.
- Architects integrating with cloud identity providers (Entra ID, Okta, etc.) that require specific OIDC patterns.

#### ACME Enhancements for Certificate Automation

Building on the ACME support shipped in our previous release, NGINX Plus R36 incorporates new features from the nginx acme project, including:

- TLS-ALPN-01 challenge support
- Selection of a specific certificate chain
- IP-based certificates
- ACME profiles
- External account bindings

These capabilities align NGINX Plus with what is available in NGINX Open Source via the ngx_http_acme_module.

Why it matters:

- Lets you keep more of the certificate lifecycle inside NGINX instead of relying on external tooling.
- Makes it easier to comply with CA policies and organizational PKI standards.
- Supports more complex environments such as IP-only endpoints and specific chain requirements.

Who it helps:

- SRE and platform teams running large fleets that rely on automated certificate renewal.
- Security and PKI teams that need tighter control over certificate chains, profiles, and CA bindings.
- Operators who want to standardize on a single ACME implementation across NGINX Plus and NGINX Open Source.

For orientation, a minimal configuration sketch follows.
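This sketch follows the directive names of the nginx ACME module; the issuer name, contact, state path, and domain are illustrative assumptions, so check the module documentation for your release before adopting it:

```nginx
resolver 127.0.0.1:53;   # required so the module can resolve the CA

acme_issuer letsencrypt {
    uri         https://acme-v02.api.letsencrypt.org/directory;
    contact     admin@example.com;          # hypothetical contact address
    state_path  /var/cache/nginx/acme-le;   # persists account and cert state
    accept_terms_of_service;
}

server {
    listen      443 ssl;
    server_name example.com;                # hypothetical domain

    acme_certificate letsencrypt;           # order and renew for server_name

    ssl_certificate     $acme_certificate;
    ssl_certificate_key $acme_certificate_key;
}
```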
#### TLS Certificate Compression

TLS certificate compression, introduced in NGINX Open Source 1.29.1, is now available in NGINX Plus R36. The ssl_certificate_compression directive controls certificate compression during TLS handshakes; it remains off by default and must be explicitly enabled (ssl_certificate_compression on;).

Why it matters:

- Reduces the size of TLS handshakes when certificate chains are large.
- Can optimize performance in high-connection-volume or high-latency environments.
- Offers incremental performance optimization without changing application behavior.

Who it helps:

- Performance-focused operators running services with many short-lived TLS connections.
- Teams supporting mobile or high-latency clients where every byte in the handshake matters.
- Operators who want to experiment with handshake optimizations while retaining control via explicit configuration.

#### Container Images with Popular Modules Included

Finally, NGINX Plus R36 introduces container images that bundle NGINX Plus with commonly requested first-party modules such as OpenTelemetry (OTel) Tracing, ACME, Prometheus Exporter, and NGINX JavaScript (njs). Options include images with or without F5 NGINX Agent, and images running NGINX in privileged or unprivileged mode. Additionally, a slim NGINX Plus-only image will be made available for teams who prefer to build bespoke custom containers.

### Additional Enhancements Available in NGINX Plus R36

- Upstream-specific request headers
- Dynamically assigned proxy_bind pools

### Changes Inherited from NGINX Open Source

NGINX Plus R36 is based on the NGINX 1.29.3 mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R35 was released (which was based on the 1.29.0 mainline release). For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

### Changes to Platform Support

Added platforms:

- Debian 13
- Rocky Linux 10
- SUSE Linux Enterprise Server 16

Removed platforms:

- Alpine Linux 3.19 (reached end of support in November 2025)

Deprecated platforms (support will be removed in a future release):

- Alpine Linux 3.20

Important warning: NGINX Plus is built on the latest minor release of each supported operating system platform. In many cases, the latest revisions of these operating systems are adapting their platforms to support OpenSSL 3.5 (e.g., RHEL 9.7 and 10.1). In these situations, NGINX Plus requires that OpenSSL 3.5.0 or later is installed for proper operation.

### F5 NGINX in F5's Application Delivery & Security Platform

NGINX One is part of F5's Application Delivery & Security Platform, which helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX One is the all-in-one, subscription-based package that unifies all of NGINX's capabilities, bringing together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions in a single, easy-to-consume package. NGINX Plus, a key component of NGINX One, adds features to NGINX Open Source that are designed for enterprise-grade performance, scalability, and security.

Follow this guide for more information on installing and deploying NGINX Plus R36 or NGINX Open Source.
## Using n8n To Orchestrate Multiple Agents

I've been heads-down building a series of AI step-by-step labs, and this one might be my favorite so far: a practical, cost-savvy "mixture of experts" architectural pattern you can run with n8n and self-hosted models on Ollama.

The idea is simple. Not every prompt needs a heavyweight reasoning model; in fact, most don't. So we put a small, fast model in front to classify the user's request (coding, reasoning, or something else) and then hand the prompt to the right expert. That way, you keep your spend and latency down and only bring out the big guns when you really need them.

Architecture at a glance:

- Two hosts: one for your models (Ollama) and one for your n8n app. Keeping these separate helps n8n stay snappy while the model server does the heavy lifting.
- Docker everywhere, with persistent volumes for both Ollama and n8n so nothing gets lost across restarts.
- Optional but recommended: an NVIDIA GPU on the model host, configured with the NVIDIA Container Toolkit, to get the most out of inference.

On the model server, we spin up Ollama and pull a small set of targeted models (a Docker sketch for both hosts follows the list):

- deepseek-r1:1.5b for the classifier and general chit-chat
- deepseek-r1:7b for the reasoning agent (this is your "brains-on" model)
- codellama:latest for coding tasks (Python, JSON, Node.js, iRules, etc.)
- llama3.2:3b as an alternative generalist
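For reference, here is a minimal sketch of standing up both hosts with Docker. The image names and ports are the standard public ones for Ollama and n8n; the volume names and GPU flag are assumptions that may differ from the lab's exact steps:

```bash
# Model host: run Ollama with a persistent volume (add --gpus=all if the
# NVIDIA Container Toolkit is installed on the host)
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Pull the targeted models used in this lab
for m in deepseek-r1:1.5b deepseek-r1:7b codellama:latest llama3.2:3b; do
  docker exec ollama ollama pull "$m"
done

# App host: run n8n with its own persistent volume
docker run -d --name n8n \
  -v n8n_data:/home/node/.n8n \
  -p 5678:5678 \
  docker.n8n.io/n8nio/n8n
```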
On the app server, we run n8n. Inside n8n, the flow starts with the "On Chat Message" trigger. I like to immediately send a test prompt so there's data available in the node inspector as I build; it makes mapping inputs easier and speeds up debugging.

Next up is the Text Classifier node. The trick here is a tight system prompt and clear categories:

- Categories: Reasoning and Coding
- Options: When no clear match → send to an "Other" branch
- Optional: You can allow multiple matches if you want the same prompt to hit more than one expert. I've tried both approaches; for certain ambiguous asks, allowing multiple can yield surprisingly strong results.

I attach deepseek-r1:1.5b to the classifier. It's inexpensive and fast, which is exactly what you want for routing. In the System Prompt Template, I tell it:

- If a prompt explicitly asks for coding help, classify it as Coding.
- If it explicitly asks for reasoning help, classify it as Reasoning.
- Otherwise, pass the original chat input to a Generalist.

From there, each classifier output connects to its own AI Agent node:

- Reasoning Agent → deepseek-r1:7b
- Coding Agent → codellama:latest
- Generalist Agent (the "Other" branch) → deepseek-r1:1.5b or llama3.2:3b

I enable "Retry on Fail" on the classifier and each agent. In my environment (cloud and long-lived connections), a few retries smooth out transient hiccups. It's not a silver bullet, but it prevents a lot of unnecessary red Xs while you're iterating.

Does this actually save money? If you're paying per token on hosted models, absolutely: you're deferring the expensive reasoning calls until a small model decides they're justified. Even self-hosted, you'll feel the difference in throughput and latency. CodeLlama crushes most code-related queries without dragging a reasoning model into it, and for general questions ("How do I make this sandwich?") a small generalist is plenty.

A few practical notes from the build:

- Good inputs help. If you know you're asking for code, say so. Your classifier and downstream agent will have an easier time.
- Tuning beats guessing. Spend time on the classifier's system prompt; small changes go a long way.
- Non-determinism is real. You'll see variance run-to-run. Between retries, better prompts, and a firm "When no clear match" path, you can keep that variance sane.
- Bigger models, better answers. If you have the budget or hardware, plugging in something like Claude, GPT, or a higher-parameter DeepSeek will lift quality. The routing pattern stays the same.

Where to take it next:

- Wire this to Slack so an engineering channel can drop prompts and get routed answers in place.
- Add more "experts" (e.g., a data-analysis agent or an internal knowledge agent) and expand your classifier categories.
- Log token counts and latency per branch so you can actually measure savings and adjust thresholds and models over time.

This is a lab, not production, but the pattern is production-worthy with the right guardrails. Start small, measure, tune, and only scale up the heavy models where you're seeing real business value. Let me know what you build, especially if you try multi-class routing and send prompts to more than one expert; some of the combined answers I've seen are pretty great. The lab is available in our Git repository if you'd like to try it out for yourself, and the original post also embeds a video walkthrough if that's more your thing. Thanks for building along, and I'll see you in the next lab.