Adding metadata to certificates objects
Hello. In order to make renewing easier, we'd like to add custom metadata to certificate objects, such as contact information (we have a lot of customer-provided certificates). However, it seems to be impossible via tmsh, i.e.:

# modify /sys crypto cert domain.tld metadata add { foo { value bar }}
Syntax Error: "foo" unknown property
# edit /sys crypto cert domain.tld
Authorization Error: user rousse with role admin doesn't have access to "cert"

Has anyone tried something similar?

Extend visibility - BIG-IP joins forces with CrowdStrike
Introduction
The traditional focus in cybersecurity has prioritized endpoints like laptops and mobiles with EDR, as they are key entry points for intrusions. Modern threats target the full network infrastructure (routers, ADCs, firewalls, servers, VMs, and cloud instances) as interconnected endpoints. All network software is a potential target in today's sprawling attack surface. Summarizing some of those blind spots below:

Servers, including hardware, VMs, and cloud instances: often under-monitored; rapid spin-up creates ephemeral risks for exfiltration and lateral movement.
Network appliances: enable traffic redirection, data sniffing, or backdoors if compromised.
Application delivery components: vulnerable to session hijacking, code injection, or DDoS due to high-traffic processing.

Falcon sensor integration
In this section, we go through the download and installation steps and observe how the solution detects and blocks malicious packages. For more information, follow our KB articles: https://my.f5.com/manage/s/article/K000157015

Related content
K000157015: Getting Started with Falcon sensor for BIG-IP
K000156881: Install Falcon sensor for BIG-IP on the BIG-IP system
K000157014: F5 Support for Falcon for BIG-IP
https://www.f5.com/partners/technology-alliances/crowdstrike
Modern Applications - Demystifying Ingress solutions flavors
Introduction
In this article, we explore the different ingress services provided by F5 and how those solutions fit within our environment. With different ingress service flavors, you gain the ability to interact with your microservices at different points, allowing for flexible, secure deployment. The ingress services can be summarized into two main categories:

Management plane:
NGINX One
BIG-IP CIS

Traffic plane:
NGINX Ingress Controller / Plus / App Protect / Service Mesh
BIG-IP Next for Kubernetes
Cloud Native Functions (CNFs)
F5 Distributed Cloud Kubernetes deployment mode

| Name | Integration type | Licensing | Features |
| --- | --- | --- | --- |
| NGINX One Console | Management plane | Try for free | Access to different NGINX products: NGINX Plus, NGINX Ingress Controller, NGINX Instance Manager, etc. |
| BIG-IP CIS | Management plane | Free, needs to integrate with a licensed BIG-IP | Automatically configure performance, routing, and security services on BIG-IP |
| NGINX OSS | Traffic plane | Free | Features availability |
| NGINX Ingress Controller | Traffic plane | Varies based on the deployment | https://www.f5.com/products/nginx/nginx-ingress-controller#introduction |
| BIG-IP Next for Kubernetes (BNK) | Traffic plane | Paid | Ingress, load balancing, routing, firewall policing |
| BIG-IP Next for Kubernetes Cloud Native Functions (CNFs) | Traffic plane | Paid | CGNAT, FW, DoS, TLS proxy, DNS, IPS, and more upcoming |
| F5 Distributed Cloud Kubernetes deployment mode | Traffic plane | Part of F5 Distributed Cloud | Use F5 Distributed Cloud to integrate and work with your own K8s environment |

Ingress solutions definitions
In this section we go quickly through the ingress services to understand the concept behind each, and then move to the use case comparison.

BIG-IP Next for Kubernetes
Kubernetes' native networking architecture does not inherently support multi-network integration or non-HTTP/HTTPS protocols, creating operational and security challenges for complex deployments.
BIG-IP Next for Kubernetes addresses these limitations by centralizing ingress and egress traffic control, aligning with Kubernetes design principles to integrate with existing security frameworks and broader network infrastructure. This reduces operational overhead by consolidating cross-network traffic management into a unified ingress/egress point, eliminating the need for multiple external firewalls that traditionally require isolated configuration. The solution enables zero-trust security models through granular policy enforcement and provides robust threat mitigation, including DDoS protection, by replacing fragmented security measures with a centralized architecture.

Additionally, BIG-IP Next supports 5G Core deployments by managing North/South traffic flows in containerized environments, facilitating use cases such as network slicing and multi-access edge computing (MEC). These capabilities enable dynamic resource allocation aligned with application-specific or customer-driven requirements, ensuring scalable, secure connectivity for next-generation 5G consumer and enterprise solutions while maintaining compatibility with existing network and security ecosystems.

Cloud Native Functions (CNFs)
BIG-IP Next for Kubernetes enables advanced networking, traffic management, and security functionality; CNFs enable additional advanced services. VNFs and CNFs can be consolidated in the S/Gi-LAN or the N6 LAN in 5G networks. A consolidated approach results in simpler management and operation, reduced operational costs (up to 60% lower TCO), and more opportunities to monetize functions and services. Functions can include DNS, Edge Firewall, DDoS, Policy Enforcer, and more. BIG-IP Next CNFs provide scalable, automated, resilient, manageable, and observable cloud-native functions and applications. They support dynamic elasticity, occupy a smaller footprint with fast restart, and use continuous deployment and automation principles.
NGINX for Kubernetes / NGINX One
NGINX for Kubernetes is a versatile and cloud-native application delivery platform that aligns closely with DevOps and microservices principles. It is built around two primary models:

NGINX Ingress Controller (OSS and Plus): Deployed directly inside Kubernetes clusters, it acts as the primary ingress gateway for HTTP/S, TCP, and UDP traffic. It supports Kubernetes-native CRDs and integrates easily with GitOps pipelines, service meshes (e.g., Istio, Linkerd), and modern observability stacks like Prometheus and OpenTelemetry.
NGINX One/NGINXaaS: This SaaS-delivered, managed service extends the NGINX experience by offloading the operational overhead, providing scalability, resilience, and simplified security configurations for Kubernetes environments across hybrid and multi-cloud platforms.

NGINX solutions prioritize lightweight deployment, fast performance, and API-driven automation. NGINX Plus variants offer extended features like advanced WAF (NGINX App Protect), JWT authentication, mTLS, session persistence, and detailed application-layer observability.

Some under-the-hood differences: BIG-IP Next for Kubernetes/CNFs use F5's own TMM to perform application delivery and security, while NGINX relies on the kernel for some network-level functions like NAT, iptables, and routing. So it's a matter of the architecture of your environment whether to go with one or both options to enhance your application delivery and security experience.

BIG-IP Container Ingress Services (CIS)
BIG-IP CIS operates on the management plane. The CIS service is deployed in the Kubernetes cluster, sending information about created Pods to an integrated BIG-IP external to the Kubernetes environment. This allows BIG-IP to automatically create LTM pools and forward traffic based on pool member health. This service lets application teams focus on microservice development while BIG-IP is updated automatically, allowing for easier configuration management.
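The watch-and-reconcile flow above can be sketched with a toy reconciler: compute the desired pool member set from the cluster's endpoint addresses, diff it against the members BIG-IP currently has, and apply the difference. This is purely conceptual; CIS itself is a controller that talks to BIG-IP over its management APIs, and the function and data below are illustrative only.

```python
# Conceptual sketch of the CIS reconciliation idea, NOT actual CIS code:
# diff the desired LTM pool members (from Kubernetes endpoints) against
# the members BIG-IP currently knows about.

def sync_pool(current_members: set[str], endpoint_addresses: list[str], port: int) -> tuple[set[str], set[str]]:
    """Return (members_to_add, members_to_remove) for an LTM pool."""
    desired = {f"{ip}:{port}" for ip in endpoint_addresses}
    to_add = desired - current_members
    to_remove = current_members - desired
    return to_add, to_remove

# A deployment scales from 2 to 3 replicas, and one old pod went away:
current = {"10.244.1.10:8080", "10.244.1.11:8080"}
endpoints = ["10.244.1.10", "10.244.2.7", "10.244.2.8"]  # fresh Endpoints list
add, remove = sync_pool(current, endpoints, 8080)
print(sorted(add))     # members the controller would add to the BIG-IP pool
print(sorted(remove))  # members it would remove
```

The point of the loose coupling is that application teams only change Kubernetes objects; the controller derives the BIG-IP configuration from them.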
Use cases categorization
Let's talk in use case terms to make it more related to the field and our day-to-day work.

NGINX One
- Access to NGINX commercial products, support for open source, and the option to add WAF.
- Unified dashboard and APIs to discover and manage your NGINX instances.
- Identify and fix configuration errors quickly and easily with the NGINX One configuration recommendation engine.
- Quickly diagnose bottlenecks and act immediately with real-time performance monitoring across all NGINX instances.
- Enforce global security policies across diverse environments.
- Real-time vulnerability management identifies and addresses CVEs in NGINX instances.
- Visibility into compliance issues across diverse app ecosystems.
- Update groups of NGINX systems simultaneously with a single configuration file change.
- Unified view of your NGINX fleet for collaboration, performance tuning, and troubleshooting.
- Automate manual configuration and updating tasks for security and platform teams.

BIG-IP CIS
- Enable self-service ingress HTTP routing and app services selection by subscribing to events to automatically configure performance, routing, and security services on BIG-IP.
- Integrate with the BIG-IP platform to scale apps for availability and enable app services insertion.
- Integrate with the BIG-IP system and NGINX for ingress load balancing.

BIG-IP Next for Kubernetes
- Supports ingress and egress traffic management and routing for seamless integration to multiple networks.
- Enables support for 4G and 5G protocols that are not supported by Kubernetes, such as Diameter, SIP, GTP, SCTP, and more.
- Enables security services applied at ingress and egress, such as firewalling and DDoS.
- Topology hiding at ingress obscures the internal structure within the cluster.
- As a central point of control, per-subscriber traffic visibility at ingress and egress allows traceability for compliance tracking and billing.
- Support for multi-tenancy and network isolation for AI applications, enabling efficient deployment of multiple users and workloads on a single AI infrastructure.
- Optimize AI factory implementations with BIG-IP Next for Kubernetes on NVIDIA DPUs.

F5 Cloud Native Functions (CNFs)
- Add containerized services, for example Firewall, DDoS, and Intrusion Prevention System (IPS) technology based on F5 BIG-IP AFM.
- Ease IPv6 migration and improve network scalability and security with IPv4 address management.
- Deploy as part of a security strategy.
- Support DNS caching and DNS over HTTPS (DoH).
- Support advanced policy and traffic management use cases.
- Improve QoE and ARPU with tools like traffic classification, video management, and subscriber awareness.

NGINX Ingress Controller
- Provide L4-L7 NGINX services within the Kubernetes cluster.
- Manage user and service identities and authorize access and actions with HTTP Basic authentication, JSON Web Tokens (JWTs), OpenID Connect (OIDC), and role-based access control (RBAC).
- Secure incoming and outgoing communications through end-to-end encryption (SSL/TLS passthrough, TLS termination).
- Collect, monitor, and analyze data through prebuilt integrations with leading ecosystem tools, including OpenTelemetry, Grafana, Prometheus, and Jaeger.
- Easy integration with the Kubernetes Ingress API, Gateway API (experimental support), and Red Hat OpenShift Routes.

F5 Distributed Cloud Kubernetes deployment mode
The F5 Distributed Cloud K8s deployment is supported only for Sites running Managed Kubernetes, also known as Physical K8s (PK8s). Deployment of the ingress controller is supported only using Helm. The Ingress Controller manages external access to HTTP services in a Kubernetes cluster using the F5 Distributed Cloud Services Platform. The ingress controller is a K8s deployment that configures the HTTP Load Balancer using the K8s ingress manifest file.
The Ingress Controller automates the creation of the load balancer and other required objects such as the VIP, Layer 7 routes (path-based routing), advertise policy, and certificates (K8s secrets or automatic custom certificates).

Conclusion
As you can see, the diverse ingress controller tools give you more flexibility, tailoring your architecture to organization requirements while maintaining application delivery and security practices across your application ecosystem.

Related Content and Technical demos

NGINX One Console
Experience the power of F5 NGINX One with feature demos | DevCentral
Introducing F5 WAF for NGINX with Intuitive GUI in NGINX One Console and NGINX Instance Manager | DevCentral
F5 NGINX One Console July features | DevCentral
NGINX One

BIG-IP Container Ingress Services (CIS)
F5 CIS, TLS Extensions, and troubleshooting
Use topology labels to reduce cross-AZ ingress traffic with F5 CIS and EKS | DevCentral
Enable Consistent Application Services for Containers with CIS | DevCentral
Configuring ExternalDNS for Kubernetes with F5 CIS, LTM and DNS | DevCentral
My first CRD deployment with CIS | DevCentral
Overview of F5 BIG-IP Container Ingress Services

NGINX Ingress Controller
JWT authorization with NGINX Ingress Controller
Better together - F5 Container Ingress Services and NGINX Plus Ingress Controller Integration | DevCentral
Integrating Hashicorp Vault with Cert Manager and F5 NGINX Ingress Controller | DevCentral
Using F5 NGINX Plus as the Ingress Controller within Nutanix Kubernetes Platform (NKP) | DevCentral
Announcing F5 NGINX Ingress Controller v4.0.0 | DevCentral
F5 NGINX Ingress Controller

BIG-IP Next for Kubernetes (BNK)
BIG-IP Next for Kubernetes Nvidia DPU deployment walkthrough | DevCentral
BIG-IP Next for Kubernetes, addressing today's enterprise challenges | DevCentral
BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
BIG-IP Next for Kubernetes Cloud Native Functions (CNFs)
F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions
F5 Cloud-Native Functions For Modern Demands - Part 2
Deploy F5 Cloud Native Functions in Kubernetes
From virtual to cloud-native, infrastructure evolution | DevCentral

F5 Distributed Cloud Kubernetes deployment mode
Kubernetes architecture options with F5 Distributed Cloud Services

CIS F5 Benchmark Reporter
Code is community submitted, community supported, and recognized as 'Use At Your Own Risk'.

CIS_F5_Benchmark_Reporter.py is a Python script that can be run on an F5 BIG-IP. This script checks whether the configuration of the F5 BIG-IP is compliant with the CIS Benchmark for F5. The script generates a report that can be saved to a file, sent by e-mail, or written to the screen. Just use the appropriate arguments when running the script.

[root@bigipa:Active:Standalone] # ./CIS_F5_Benchmark_Reporter.py
Usage: CIS_F5_Benchmark_Reporter.py [OPTION]...
Mandatory arguments to long options are mandatory for short options too.
  -f, --file=FILE    output report to file.
  -m, --mail         output report to mail.
  -s, --screen       output report to screen.
Report bugs to nvansluis@gmail.com
[root@bigipa:Active:Standalone] #

To receive a daily or weekly report from your F5 BIG-IP, you can create a cron job. Below is a screenshot that shows what the report will look like.

Settings
In the script, there is a section named 'User Options'. These options should be modified to reflect your setup.

#-----------------------------------------------------------------------
# User Options - Configure as desired
#-----------------------------------------------------------------------

E-mail settings
Here the e-mail settings can be configured, so the script will be able to send a report by e-mail.

# e-mail settings
port = 587
smtp_server = "smtp.example.com"
sender_email = "johndoe@example.com"
receiver_email = "johndoe@example.com"
login = "johndoe"
password = "mySecret"

SNMP settings
Here you can add additional SNMP clients. These are necessary to be compliant with control 6.1.

# list containing trusted IP addresses and networks that have access to SNMP (control 6.1)
snmp_client_allow_list = [
    "127.0.0.0/8",
]

Exceptions
Sometimes there are valid circumstances why a specific requirement of a security control can't be met. In this case you can add an exception. See the example below.
# set exceptions (add your own exceptions)
exceptions = {
    '2.1': "Exception in place, because TACACS is used instead of RADIUS.",
    '2.2': "Exception in place, because TACACS is used and there are two TACACS-servers present."
}

Recommendations
Store the script somewhere in the /shared partition. The data stored on this partition will still be available after an upgrade.

Feedback
This script has been tested on F5 BIG-IP version 17.x. If you have any questions, remarks or feedback, just let me know.

Download
The script can be downloaded from github.com.
https://github.com/nvansluis/CIS_F5_Benchmark_Reporter

F5 AWAF/ASM ASM_RESPONSE_VIOLATION event seems to not trigger on 17.1.x
Hey Everyone,

The F5 AWAF/ASM ASM_RESPONSE_VIOLATION event seems to not trigger on 17.1.x. I have enabled iRules support in the WAF policy and I tested in Normal and Compatibility mode, but no luck. The other events trigger without an issue. I created two custom signatures, one for response match and one for request match; the request match one has no issues, so it seems like a bug to me. This can be easily tested with the below iRule that logs to /var/log/asm:

when ASM_REQUEST_DONE {
    log local3. "test request"
}
when ASM_RESPONSE_VIOLATION {
    log local3. "test response"
}

The custom response signature is in the policy, set to just trigger Alarm. I tried string or regex match " (?i)failed " PCRE-style, as F5 15.x and up use this regex style.

Accelerating AI Data Delivery with F5 BIG-IP
Introduction
AI continues to rely heavily on efficient data delivery infrastructures to innovate across industries. S3 is the protocol that AI/ML engineers rely on for data delivery. As AI workloads grow in complexity, ensuring seamless and resilient data ingestion and delivery becomes critical to support massive datasets, robust training workflows, and production-grade outputs. S3 is HTTP-based, so F5 is commonly used to provide advanced capabilities for managing S3-compatible storage pipelines, enforcing policies, and preventing delivery failures. This enables businesses to maintain operational excellence in AI environments.

This article explores three key functions of F5 BIG-IP within AI data delivery through embedded demo videos. From optimizing S3 data pipelines and enforcing granular policies to monitoring traffic health in real time, F5 presents core functions for developers and organizations striving for agility in their AI operations.

The diagram shows a scalable, resilient, and secure AI architecture facilitated by F5 BIG-IP. End-user traffic is directed to the front-end application through F5, ensuring secure and load-balanced access via the "Web and API front door." This traffic interacts with the AI Factory, comprising components like AI agents, inference, and model training, also secured and scaled through F5. Data is ingested into enterprise events and data stores, which are securely delivered back to the AI Factory's model training through F5 to support optimized resource utilization. Additionally, the architecture includes Retrieval-Augmented Generation (RAG), securely backed by AI object storage and connected through F5 for AI APIs. Whether from the front-end applications or the AI Factory, traffic to downstream services like AI agents, databases, websites, or queues is routed via F5 to ensure consistency, security, and high availability across the ecosystem.
This comprehensive deployment highlights F5's critical role in enabling secure, efficient AI-powered operations.

1. Ensure Resilient AI Data and S3 Delivery Pipelines with F5 BIG-IP
Modern AI workflows often rely on S3-compatible storage for high-throughput data delivery. However, a common problem is inefficient resource utilization in clusters due to uneven traffic distribution across storage nodes, causing bottlenecks, delays, and reliability concerns. If you manage your own storage environment, or have spoken to a storage administrator, you'll know that "hot spots" are something to avoid when dealing with disk arrays.

In this demo, F5 BIG-IP demonstrates how a loose-coupling architecture solves these issues. By intelligently distributing traffic across all cluster nodes via a virtual server, BIG-IP ensures balanced load distribution, eliminates bottlenecks, and provides high-performance bandwidth for AI workloads. The demo uses Warp, an S3 benchmarking tool, to highlight how F5 BIG-IP can take incoming S3 traffic and route it efficiently to storage clusters. We use the least-connection load balancing algorithm to minimize latency across the nodes while maximizing resource utilization. We also add new nodes to the load balancing pool, ensuring smooth, scalable, and resilient storage pipelines.

2. Enforce Policy-Driven AI Data Delivery with F5 BIG-IP
AI workloads are susceptible to traffic spikes that can destabilize storage clusters and impact concurrent data workflows. The video demonstrates using iRules to cap connections and stabilize clusters under high request-per-second spikes. Additionally, we use local traffic policies to redirect specific buckets while preserving other ongoing requests. For operational clarity, the study tool visualizes real-time cluster metrics, offering deep insights into how policies influence traffic.
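The least-connection selection used in demo 1 can be illustrated with a small sketch. The node names and connection counts below are hypothetical, and this is not BIG-IP's actual implementation (TMM tracks connections per pool member internally); it only shows the idea of the algorithm.

```python
# Toy illustration of least-connection member selection (not BIG-IP source
# code; node names and connection counts are made up for the example).

def pick_member(active_connections: dict[str, int]) -> str:
    """Select the pool member with the fewest active connections;
    ties are broken by name order for determinism."""
    return min(sorted(active_connections), key=lambda m: active_connections[m])

pool = {"s3-node-a": 42, "s3-node-b": 17, "s3-node-c": 17, "s3-node-d": 90}
chosen = pick_member(pool)
print(chosen)      # the least-loaded node; the new S3 request goes here
pool[chosen] += 1  # the load balancer counts the new connection against it
```

Because each new S3 request lands on the least-loaded node, no single disk array becomes a hot spot even when object sizes and request rates are uneven.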
3. Prevent AI Data Delivery Failures with F5 BIG-IP
AI operations depend on high efficiency and reliable data delivery to maintain optimal training and model fine-tuning workflows. The video demonstrates how F5 BIG-IP uses real-time health monitors to ensure storage clusters remain operational during failure scenarios. By dynamically detecting node health and write quorum thresholds, BIG-IP intelligently routes traffic to backup pools or read quorum clusters without disrupting endpoints. The health monitors also detect partial node failures, which is important to avoid the risk of partial writes when working with S3 storage.

Conclusion
Once again, with AI so reliant on HTTP-based S3 storage, F5 administrators find themselves a critical part of the latest technologies. By enabling loose coupling, enforcing granular policies, and monitoring traffic health in real time, F5 optimizes data delivery for improved AI model accuracy, faster innovation, and future-proof architectures. Whether facing unpredictable traffic surges or handling partial failures in clusters, BIG-IP ensures your applications remain resilient and ready to meet business demands with ease.

Related Resources
AI Data Delivery Use Case
AI Reference Architecture
Enterprise AI delivery and security
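The quorum-aware routing described in section 3 can be sketched roughly as follows. The pool names, thresholds, and three-way decision are assumptions made for illustration; they are not the monitor logic that ships with BIG-IP.

```python
# Illustrative sketch of quorum-based pool selection (made-up pool names and
# thresholds; not a BIG-IP health-monitor implementation).

def select_pool(healthy_nodes: int, total_nodes: int, write_quorum: int) -> str:
    """Route writes only while enough nodes are up to satisfy the write
    quorum; otherwise fail over so partial writes are never attempted."""
    if healthy_nodes >= write_quorum:
        return "primary-pool"       # safe to accept S3 writes
    elif healthy_nodes > 0:
        return "read-quorum-pool"   # degraded: serve reads, refuse writes
    return "backup-pool"            # cluster down: send to backup site

print(select_pool(4, 4, 3))  # all nodes healthy
print(select_pool(2, 4, 3))  # below write quorum
print(select_pool(0, 4, 3))  # total failure
```

The key property is that the decision is made before traffic reaches the cluster, so clients never see a half-completed write when the cluster drops below quorum.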
HTTP Request Smuggling Using Chunk Extensions (CVE-2025-55315)
Executive Summary
HTTP request smuggling remains one of the nastier protocol-level surprises: it happens when different components in the HTTP chain disagree about where one request ends and the next begins. A recent, high-visibility ASP.NET Core disclosure brought one particular flavor of this problem into the spotlight: attackers abusing chunk extensions in chunked transfer encoding to craft ambiguous request boundaries. The vulnerability was assigned a very high severity (CVSS 9.9) by Microsoft, their highest for ASP.NET Core to date.

This article explains what chunk extensions are, why they can be abused for smuggling, how the recent ASP.NET Core issue fits into the bigger picture, and what defenders, implementers, and F5 customers should consider: particularly regarding HTTP normalization, compliance settings, and protection coverage across F5 Advanced WAF, NGINX App Protect, and Distributed Cloud.

Background: What Are Chunk Extensions?
In HTTP/1.1, chunked transfer encoding (via Transfer-Encoding: chunked) allows the body of a message to be sent in a sequence of chunks, each preceded by its size in hex, terminated by a zero-length chunk. The specification also allows chunk extensions to be appended after the chunk size, e.g.:

4;ext=value
Wiki
0

In theory, chunk extensions were meant for metadata or transfer-layer options: for example, integrity checks or special directives. But in practice, they're almost never used by legitimate clients or servers: many HTTP libraries ignore or inconsistently handle them, and this inconsistency across intermediaries (proxies and servers) can serve as a source of request smuggling vulnerabilities. But if a lot of servers and proxies ignore it, why would that even be an issue? Let's see.
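To make the wire format concrete, here is a small sketch of a strict chunk-header parser. It is illustrative only, not any product's parser; it just follows the grammar where the chunk header must end in CRLF and an optional extension follows the hex size after a semicolon.

```python
# Minimal, strict chunk-header parser sketch: "size[;extension]\r\n".
# Purely illustrative; a lenient parser that also accepts a lone LF here
# is exactly what creates the smuggling ambiguity discussed below.

def parse_chunk_header(line: bytes) -> tuple[int, bytes]:
    """Split 'size[;extension]\\r\\n' into (size, raw extension bytes)."""
    if not line.endswith(b"\r\n"):
        raise ValueError("bad chunk header: missing CRLF")
    body = line[:-2]
    size_part, _, ext = body.partition(b";")
    return int(size_part, 16), ext  # hex size, extension after ';' (may be empty)

print(parse_chunk_header(b"4;ext=value\r\n"))  # (4, b'ext=value')
print(parse_chunk_header(b"0\r\n"))            # (0, b'')
```

Note that the size is hexadecimal and the extension bytes are opaque to the transfer layer; everything after the first semicolon up to the CRLF is "extension".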
Root Cause Analysis for CVE-2025-55315
The CVE description reads: "Inconsistent interpretation of HTTP requests ('HTTP request/response smuggling') in ASP.NET Core allows an authorized attacker to bypass a security feature over a network."

Examining the GitHub commit reveals a relatively straightforward fix. In essence, the patch adjusts the chunk-extension parser to correctly handle \r\n line endings and to throw an error if either \r or \n appears unpaired. Additionally, a new flag was introduced for backward compatibility. As expected, the vulnerable logic resides in the ParseExtension function.

The new InsecureChunkedParsing flag preserves legacy behavior - but it must be explicitly enabled, since that mode reflects the prior (and now considered insecure) implementation. Previously, the parser looked only for the carriage return (\r) character to determine the end of a line. In the updated implementation, it now checks for either a line feed (\n) or a carriage return (\r).

Next, we encounter the following condition. The syntax may look a bit dense, but the logic is straightforward. In short, the old insecure behavior is retained when the InsecureChunkedParsing flag is enabled: checking for the presence of \n only after encountering \r. This is problematic because it allows injecting a single \r or \n inside the chunk extension.

In depth, the vulnerable condition, suffixSpan[1] == ByteLF, mirrors the old behavior - it verifies that the second character is \n. We reach this part only if we previously saw \r. The new condition validates that the last two characters of the chunk extension are \r\n. Remember that in the new version, we reach this part when encountering either \r or \n. The fixed condition ensures that if an attacker tries to inject a single \r or \n somewhere within the chunk extension, the check will fail - the condition will evaluate to false.
When that happens, and if the backward-compatibility flag is not enabled, the parser throws an exception: Bad chunk extension.

And what happened before the patch if the character following \r wasn't \n? The parser simply continued, making the following characters part of the chunk extension. That means a chunk extension could include line terminator characters.

The attack affecting unpatched ASP.NET Core applications is HTTP request smuggling via chunk extensions, a technique explained clearly and in depth in this article, which we'll briefly summarize in this post.

Request smuggling using chunk extension variants
Before diving into the different chunk-extension smuggling variants, it's worth recalling the classic Content-Length / Transfer-Encoding (CL.TE and TE.CL) request smuggling techniques. These rely on discrepancies between how proxies and back-end servers interpret message boundaries: one trusts the Content-Length, the other trusts Transfer-Encoding, allowing attackers to sneak an extra request inside a single HTTP message. If you're not familiar with CL.TE, TE.CL, and other variants, this article gives an excellent overview of how these desync vulnerabilities work in practice.

TERM.EXT (terminator - extension mismatch): The proxy treats a line terminator (usually \n) inside a chunk extension as the end of the chunk header, while the backend treats the same bytes as part of the extension.

EXT.TERM (extension - terminator mismatch): The proxy treats only the \r\n sequence as the end of the chunk header, while the backend treats a line terminator character inside the chunk extension as the end of the chunk header.

The ASP.NET Core issue
Previously, ASP.NET Core allowed lone \r or \n characters to appear within a chunk extension if the line ended with \r\n, placing it in the EXT category. If a proxy ahead of it has TERM behavior (treating \n as line end), their parsing mismatch can enable request smuggling.
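The two behaviors can be modeled in a few lines of Python. Both functions below are toy models written for illustration - neither is real proxy or Kestrel code - but they show how a TERM parser and an EXT parser place the end of the same chunk header at different offsets.

```python
# Toy simulation of the TERM vs EXT parsing mismatch. Neither function is a
# real proxy/Kestrel parser; they only model the two interpretations of a
# lone "\n" inside a chunk extension.

RAW = b"2;\nxx\r\n"  # one chunk header as sent on the wire

def term_header_end(data: bytes) -> int:
    """TERM behavior: a lone LF ends the chunk header."""
    return data.index(b"\n") + 1

def ext_header_end(data: bytes) -> int:
    """EXT behavior: only CRLF ends the header; a lone LF is extension data."""
    return data.index(b"\r\n") + 2

print(term_header_end(RAW))  # TERM parser: header is b"2;\n", body starts at "xx"
print(ext_header_end(RAW))   # EXT parser: header runs through the CRLF after "xx"
```

The two results disagree about where the chunk body begins, and that disagreement is exactly the ambiguity a smuggled request rides on: bytes the front end considers body, the back end considers a new chunk header (or vice versa).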
The figure shows an example malicious request that exploits this parsing mismatch. The proxy treats a lone \n as the end of the chunk extension. As a result, the bytes xx become the start of the body and 47 is interpreted as the size of the following chunk. If the proxy forwards the request unchanged (i.e., it does not strip the extension), those next chunks can effectively carry a second, smuggled request destined for an internal endpoint that the proxy would normally block.

When Kestrel (the ASP.NET Core backend) receives that same raw stream, it enforces a strict \r\n terminator for extensions. Because the backend searches specifically for the \r\n sequence, it parses the received stream differently - splitting the forwarded data into two requests (the extension content, 2;\nxx, is treated as a chunk header + chunk body). The end result: a GET /admin request can reach the backend, even though the proxy would have blocked such a request if it had been observed as a separate, external request.

F5 WAF Protections

NGINX App Protect and F5 Distributed Cloud
NGINX App Protect and F5 Distributed Cloud (XC) normalize incoming HTTP requests and do not support chunk extensions. This means that any request arriving at NAP or XC with chunk extensions will have those extensions removed before being forwarded to the backend server. As a result, both NAP and XC are inherently protected against this class of chunk-extension smuggling attacks by design.

To illustrate this, let's revisit the example from the referenced article. NGINX, which treats a lone \n as a valid line terminator, falls under the TERM category. When this request is sent through NAP, it is parsed and normalized accordingly - effectively split into two separate requests.

What does this mean? NAP does not forward the request the same as it arrived.
It normalizes the message by stripping out any chunk extensions, replacing the Transfer-Encoding header with a Content-Length, and ensuring the body is parsed deterministically - leaving no room for ambiguity or smuggling. If a proxy precedes NAP and interprets the traffic as a single request, NAP will safely split and sanitize it. F5 Distributed Cloud (XC) doesn't treat lone \n as a line terminator and also discards chunk extensions entirely.

Advanced WAF
Advanced WAF does not support chunk extensions. Requests containing a chunk header that is too long (more than 10 bytes) are treated as unparsable and trigger an HTTP compliance violation. To improve detection, we've released a new attack signature, "ASP.NET Core Request Smuggling - 200020232", which helps identify and block malicious attempts that rely on chunk extensions.

Conclusions
HTTP request smuggling via chunk extensions remains a very real threat, even in modern stacks. The disclosure of CVE-2025-55315 in the Kestrel web server underlines this: a seemingly small parsing difference (how \r, \n, and \r\n are treated in chunk extensions) can allow an attacker to conceal a second request within a legitimate one, enabling account takeover, code injection, SSRF, and many other severe attacks. This case offers a great reminder: don't assume that because "nobody uses chunk extensions" they cannot be weaponized. And of course, use HTTP/2. Its binary framing model eliminates chunked encoding altogether, removing the ambiguity that makes these attacks possible in HTTP/1.1.

f5 client certificate forwarding
I have a website secured by F5; it requires a client certificate which I need to forward to the server. I don't want F5 to validate the certificate, just pass it to the server. I have set the client certificate to "require" in the SSL profile, and I have added the root CA as Advertised Certificate Authorities because the client will use a self-signed certificate. In the iRule I did the below:

when CLIENTSSL_CLIENTCERT {
    if { [SSL::cert count] > 0 } {
        set client_cert [X509::whole [SSL::cert 0]]
        set session_cert $client_cert
    }
}
when HTTP_REQUEST {
    if { [info exists session_cert] } {
        # note: X509::whole returns PEM data containing newlines;
        # consider b64encode-ing it if the backend rejects the header
        HTTP::header replace "X-Client-Cert" $session_cert
    }
}

Now when I try to access the portal, the certificate popup is displayed, and after choosing the certificate I get "This site can't provide a secure connection, ERR_SSL_PROTOCOL_ERROR". In F5 I see the client certificate is attached to the header. So what might be the issue?

SSL Orchestrator and Layer 2 Service Integration
Has anyone encountered issues with an rSeries BIG-IP tenant and the integration of a layer 2 service? In my case, I cannot make the service come up even though I have the exact VLAN name and tagging set in the bare-metal OS, and exactly the same VLAN and tagging configured in the tenant.