BIG-IP Next for Kubernetes CNF 2.2 what's new
Introduction
BIG-IP Next CNF v2.2.0 offers new enhancements to BIG-IP Next for Kubernetes CNFs, with a focus on analytics capabilities, traffic distribution, subscriber management, and operational improvements that address real-world challenges in high-scale deployments.

High-Speed Logging for Traffic Analysis
The Reporting feature introduces high-speed logging (HSL) capabilities that capture session and flow-level metrics in CSV format. Key data points include subscriber identifiers, traffic volumes, transaction counts, video resolution metrics, and latency measurements, exported via Syslog (RFC5424, RFC3164, or legacy-BIG-IP formats) over TCP or UDP. Fluent-bit handles TMM container log processing, forwarding to Fluentd for external analytics servers. Custom Resources simplify configuration of log publishers, reporting intervals, and enforcement policies, making it straightforward to integrate into existing Kubernetes workflows.

DNS Cache Inspection and Management
New utilities provide detailed visibility into DNS cache operations. The bdt_cli tool supports listing, counting, and selectively deleting cache records using filters for domain names, TTL ranges, response codes, and cache types (RRSet, message, or nameserver). Complementing this, dns-cache-stats delivers performance metrics including hit/miss ratios, query volumes, response time distributions across intervals, and nameserver behavior patterns. These tools enable systematic cache analysis and maintenance directly from debug sidecars.

Stateless and Bidirectional DAG Traffic Distribution
Stateless DAG implements pod-based hashing to distribute traffic evenly across TMM pods without maintaining flow state. This approach embeds directly within the CNE installation, eliminating separate DAG infrastructure. Bidirectional DAG extends this with symmetric routing for client-to-server and return flows, using consistent redirect VLANs and hash tables.
Deployments must align TMM pod counts with self-IP configurations on pod_hash-enabled VLANs to ensure balanced distribution.

Dynamic GeoDB Updates for Edge Firewall Policies
Edge Firewall Geo Location policies now support dynamic GeoDB updates, replacing static country/region lists embedded in container images. The Controller and PCCD components automatically incorporate new locations and handle deprecated entries with appropriate logging. Firewall Policy CRs can reference newly available geos immediately, enabling responsive policy adjustments without container restarts or rebuilds. This maintains policy currency in environments requiring frequent threat intelligence updates.

Subscriber Creation and CGNAT Logging
RADIUS-triggered subscriber creation integrates with distributed session storage (DSSM) for real-time synchronization across TMM pods. Subscriber records capture identifiers like IMSI, MSISDN, or NAI, enabling automated session lifecycle management. CGNAT logging enhancements include Subscriber ID in translation events, providing clear IP-to-subscriber mapping. This facilitates correlation of network activity with individual users, supporting troubleshooting, auditing, and regulatory reporting requirements.

Kubernetes Secrets Integration for Sensitive Configuration
Custom Resources now reference sensitive data through Kubernetes’ native Secrets using secretRef fields (name, namespace, key). The cne-controller fetches secrets securely via mTLS, monitors for updates, and propagates changes to consuming components. This supports certificate rotation through cert-manager without CR reapplication. RBAC controls ensure appropriate access while eliminating plaintext sensitive data from YAML manifests.

Dynamic Log Management and Storage Optimization
REST API endpoints and ConfigMap watching enable runtime log level adjustments per pod without restarts. Changes propagate through pod-specific ConfigMaps monitored by the F5 logging library.
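The core of runtime log-level adjustment, independent of any F5-specific machinery, is mapping a level name delivered as a string (for example, a value from a mounted ConfigMap) onto a live logger without restarting the process. A minimal sketch using Python's standard logging module:

```python
import logging

def apply_log_level(level_name: str, logger_name: str = "") -> None:
    """Map a level name (as a ConfigMap might deliver it) onto a logger
    at runtime, with no process restart."""
    level = getattr(logging, level_name.upper(), None)
    if not isinstance(level, int):
        raise ValueError(f"unknown log level: {level_name!r}")
    logging.getLogger(logger_name).setLevel(level)

apply_log_level("DEBUG")
assert logging.getLogger().level == logging.DEBUG
apply_log_level("warning")   # case-insensitive
assert logging.getLogger().level == logging.WARNING
```

A file watcher on the mounted ConfigMap path would call apply_log_level whenever the value changes, which is conceptually what a logging library watching pod-specific ConfigMaps does.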
An optional Folder Cleaner CronJob automatically removes orphaned log directories, preventing storage exhaustion in long-running deployments with heavy Fluentd usage.

Key Enhancements Overview
Several refinements have improved operational aspects:
- CNE Controller RBAC: Configurable CRD monitoring via ConfigMap eliminates cluster-wide list permissions, with manual controller restart required for list changes.
- CGNAT/DNAT HA: F5Ingress automatically distributes VLAN configurations to standby TMM pods (excluding self-IPs) for seamless failover.
- Memory Optimization: 1GB huge page support via the tmm.hugepages.preferredhugepagesize parameter.
- Diagnostics: QKView requests can be canceled by ID, generating partial diagnostics from collected data.
- Metrics Control: Per-table aggregation modes (Aggregated, Semi-Aggregated, Diagnostic) with configurable export intervals via the f5-observer-operator-config ConfigMap.

Related content
- BIG-IP Next for Kubernetes CNF - latest release
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- BIG-IP Next for Kubernetes CNFs deployment walkthrough | DevCentral
- BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
- F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions

The End of ClientAuth EKU…Oh Mercy…What to do?
If you’ve spent any time recently monitoring the cryptography and/or public key infrastructure (PKI) spaces…beyond that ever-present “post-quantum” thing, you may have read that starting in May of 2026, the Google Chrome Root Program Policy will start requiring public certificate authorities (CAs) to stop issuing certificates with the Client Authentication Extended Key Usage (ClientAuth EKU) extension. While removing ClientAuth EKU from TLS server certificates correctly reduces the scope of these certificates, some internal client-certificate-authenticated TLS machine-to-machine and API workloads could fail when new/renewed certificates are received from a public CA. Read more here for details and options.

The Fast Path to Safer Labs: CycloneDX SBOMs for F5 Practitioners
Quick note up front about my intent with this lab... I built it to quickly help F5 practitioners keep their lab environments safe from hidden threats. Fast, approachable, and useful on day one. We used the bundled Dependency-Track container because it’s trivial to stand up in a lab. In production, please deploy Dependency-Track backed by a production-grade database and tune it for scale and durability. Lab-first, but think ahead to enterprise-ready.

Now, let’s talk about why I chose CycloneDX for the SBOM we generated with Trivy, and why it’s the accepted standard I recommend for modern, AI-heavy workloads.

At a high level, an SBOM is your ingredient list for software. Containers that host LLM apps are layered: base OS, GPU drivers and CUDA, language runtimes, Python packages, app binaries, plus external services you call (hosted inference, embeddings, vector databases). If you don’t know what’s in that stack, you can’t manage risk when new CVEs land. CycloneDX gives you that visibility and does it with a security-first design.

Here’s why CycloneDX is such a good fit:
- Security-first schema. CycloneDX was born into the AppSec world at OWASP. It bakes in identifiers that vulnerability tooling actually uses—package URLs (purls), CPEs, hashes—and a proper dependency graph. That graph matters when the vulnerable thing isn’t your top-level app but the library three layers deep.
- Broad component coverage, including services. Real LLM apps don’t stop at “libraries.” CycloneDX can represent applications, libraries, containers, operating systems, files, and services. That service support is huge: if you depend on an external inference API, a hosted vector DB, or a third-party embedding service, CycloneDX can document that right in your SBOM. Your risk picture is no longer just what’s “in the image,” but what the image calls. 
- VEX support to cut noise. CycloneDX supports VEX (Vulnerability Exploitability eXchange), which lets you annotate “not affected” or “mitigated” when a CVE shows up in your base image but is not exploitable in your specific deployment. That’s how you keep the signal high and the noise low.
- Toolchain adoption. The path we used in the lab—Trivy generates CycloneDX JSON in a single command, Dependency-Track ingests it cleanly—is exactly what you want. Fewer conversions, fewer surprises, more time looking at risk with a project-centric view.

So how does that map to LLM app security, specifically?
- Containers and drivers: CycloneDX captures the full container context—OS packages, runtime layers, GPU driver stacks—so when you rebuild to pick up a CUDA or base image update, your SBOM reflects the change and your risk dashboard stays current.
- Python ecosystems: For model-serving and data pipelines, CycloneDX tracks the Python libraries and their transitive dependencies, so when a popular package pushes a patch for a nasty CVE, you’ll see the impact across your projects.
- Model artifacts and files: CycloneDX can represent file components with hashes. If you pin or verify model files, that checksum data helps you detect drift or tampering.
- External services: Many LLM apps rely on hosted endpoints. CycloneDX’s service component type lets you document those dependencies, so governance isn’t blind to the parts of your “system” that live outside your containers.

Now, let’s compare CycloneDX to other SBOM standards you’ll hear about.

SPDX (Software Package Data Exchange)
- Strengths: It’s a Linux Foundation standard with deep traction, especially for license compliance. Legal and compliance teams love it for moving license information through CI/CD.
- Tradeoffs for AppSec: SPDX can represent dependencies and has added security-relevant fields, but its heritage is compliance rather than vulnerability analysis. Modeling external services is less natural, and a lot of AppSec tooling (like the Trivy -> Dependency-Track workflow we used) is tuned for CycloneDX. If your primary goal is security visibility and CVE triage for containerized AI apps, CycloneDX tends to be the smoother path.

SWID tags (ISO/IEC 19770-2)
- Strengths: Vendor-provided software identification for asset management—who installed what, what version, and how it’s licensed.
- Tradeoffs: Limited open tooling, and not a great fit for layered containers or fast-moving dependency graphs. You won’t get the rich, developer-centric view you need for daily AppSec in LLM environments.

And a quick reality check: package manifests and lockfiles (pip freeze, requirements.txt, package-lock.json) are useful, but they’re not SBOMs. They miss OS packages, drivers, and container layers. CycloneDX gives you the whole picture.

Practically speaking, here’s the loop we ran—and why CycloneDX makes it painless:
- Generate: Use Trivy to scan your AI container and spit out CycloneDX JSON. It’s trivial—one line, usually under a minute.
- Ingest: Push that SBOM into Dependency-Track via the API. You get components, licenses, vulnerability scores, dependency graphs, and a clean project/version history.
- Act: Watch for new CVEs. Use VEX to mark what’s not exploitable in your context. Rebuild, rescan, repeat. Automate it in CI so your SBOM stays fresh without manual babysitting.

Production note again, because it matters: the bundled Dependency-Track container is perfect for labs and demos. In production, deploy Dependency-Track with a production-grade database, persistent storage, backups, and access controls that match your enterprise standards.

Bottom line: SPDX and CycloneDX are both legitimate, widely accepted SBOM standards. If your priority is license compliance, SPDX is an excellent fit. If your priority is application security for modern, service-heavy, containerized LLM apps, CycloneDX gives you security-first modeling, service coverage, VEX, and an ecosystem that lets you move fast without sacrificing visibility.

Voila—grab Trivy, generate CycloneDX, feed Dependency-Track, and start getting signals instead of noise. Fresh installs often look green on day one, but when something changes tomorrow, you’ll see it. That’s the whole game: make hidden threats visible, then make them go away.

If you’d like to try the lab, it’s located here. If you want to check out the video of the lab, instead, try this one:
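The Ingest step is easy to automate. As a sketch (the server URL and API key are placeholders, and this assumes recent Dependency-Track versions, whose PUT /api/v1/bom endpoint accepts a JSON body with the BOM base64-encoded):

```python
import base64
import json
from pathlib import Path

DTRACK_URL = "http://dtrack.example.internal:8081"  # placeholder: your Dependency-Track API host

def build_bom_payload(project_uuid: str, bom_path: str) -> dict:
    """Package a CycloneDX JSON SBOM for Dependency-Track's PUT /api/v1/bom
    endpoint, which expects the BOM base64-encoded inside a JSON body."""
    bom_bytes = Path(bom_path).read_bytes()
    return {
        "project": project_uuid,
        "bom": base64.b64encode(bom_bytes).decode("ascii"),
    }

# A tiny stand-in SBOM so the snippet is self-contained; in practice this
# file is the CycloneDX JSON Trivy produced.
Path("bom-sample.json").write_text(
    json.dumps({"bomFormat": "CycloneDX", "specVersion": "1.5", "components": []})
)
payload = build_bom_payload("00000000-0000-0000-0000-000000000000", "bom-sample.json")

# Uploading (requires the third-party `requests` package and a real API key):
# import requests
# requests.put(f"{DTRACK_URL}/api/v1/bom", json=payload,
#              headers={"X-Api-Key": "odt_..."}, timeout=30)
```

Drop a call like this at the end of your CI image-build job and the project's SBOM stays fresh without manual babysitting.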
Multi‑Cluster Kubernetes App Delivery Made Simple with F5 BIG‑IP CIS & Nutanix Kubernetes Platform
Organizations are increasingly deploying applications across multiple Kubernetes clusters to achieve greater resilience, scalability, and operational flexibility. However, as environments expand, so does the complexity. Managing traffic, ensuring consistent security policies, and delivering applications seamlessly across multiple Kubernetes clusters can quickly become operationally overwhelming. F5 and Nutanix jointly address these challenges by combining the application delivery and security capabilities of F5 BIG-IP with the simplicity and operational consistency of the Nutanix Kubernetes Platform (NKP). See it in action—watch the demo video:

F5 BIG-IP Container Ingress Services (CIS) Overview
F5 BIG‑IP Container Ingress Services (CIS) is a Kubernetes‑native ingress and automation controller that connects F5 BIG‑IP directly to Kubernetes. F5 BIG-IP CIS watches the Kubernetes API in real time and translates native Kubernetes resources—including Ingress, Routes, VirtualServer, TransportServer, and AS3 declarations—into F5 BIG‑IP configurations. This transforms F5 BIG‑IP from an external appliance into a declarative, automated extension of the Kubernetes environment, enabling cloud‑native workflows and eliminating manual, error‑prone configuration. This tight integration ensures that application delivery, security, and traffic management remain consistent and automatically adapt as Kubernetes environments change.

Multi-Cluster Application Delivery with CIS
Multi-cluster architectures are rapidly becoming the enterprise standard. But delivering applications across multiple Kubernetes clusters introduces challenges, including:
- Maintaining consistent security policies
- Automatically routing traffic to the most appropriate cluster as workloads scale or shift
- Avoiding configuration drift and fragmented visibility
- Reducing operational friction caused by manual updates
Without the right tooling, these challenges can lead to operational sprawl and deployment delays.
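To make the aggregation problem concrete: a multi-cluster controller must flatten the endpoints discovered in several clusters into one backend pool behind a single entry point. A toy sketch of that consolidation (the data model and names are invented for illustration, not CIS's actual internals):

```python
def merge_pool_members(clusters: dict, port: int = 8080) -> list:
    """Flatten per-cluster endpoint lists into one de-duplicated
    pool-member list, the kind of consolidation a multi-cluster
    controller performs behind a single virtual server."""
    seen = set()
    members = []
    for cluster in sorted(clusters):               # deterministic ordering
        for addr in clusters[cluster]:
            if addr not in seen:                   # de-duplicate across clusters
                seen.add(addr)
                members.append({"address": addr, "port": port, "cluster": cluster})
    return members

members = merge_pool_members({
    "nkp-cluster-a": ["10.244.1.5", "10.244.1.6"],
    "nkp-cluster-b": ["10.245.2.7"],
})
assert len(members) == 3
```

The hard parts a real controller adds on top are watching each cluster's API for endpoint churn and pushing the recomputed member list to the load balancer declaratively.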
F5 BIG-IP CIS addresses these challenges through its built‑in multi‑cluster capabilities, enabling a single BIG‑IP Virtual Server to front applications that span multiple Kubernetes clusters. This approach:
- Consolidates application access behind one unified entry point
- Automatically updates traffic routing as clusters scale or workloads migrate
- Enforces consistent policies across environments
- Significantly reduces operational overhead by eliminating per‑cluster configuration

F5 BIG-IP CIS supports both standalone mode and high‑availability (HA) mode for multi-cluster environments. In HA mode, the primary F5 BIG-IP CIS instance is responsible for managing F5 BIG‑IP configuration, while a secondary instance continuously monitors its health. If the primary instance becomes unavailable, the secondary automatically takes over, ensuring uninterrupted management and application delivery continuity.

F5 BIG-IP CIS + Nutanix Kubernetes Platform (NKP): Better Together
When F5 BIG‑IP CIS is combined with the Nutanix Kubernetes Platform (NKP), organizations gain a unified and automated approach to delivering, securing, and scaling applications across multiple Kubernetes clusters—a cohesive multi‑cluster application services solution. Key benefits include:
- Unified North–South Control Plane: F5 BIG‑IP acts as the intelligent front door for all Kubernetes clusters, centralizing traffic management and visibility.
- Consistent Security Policies: WAF, DDoS protection, and traffic policies can be applied uniformly across Kubernetes clusters to maintain a consistent security posture.
- Automated Orchestration and Reduced Operational Overhead: F5 BIG-IP CIS’s event‑driven automation aligns with NKP’s streamlined cluster lifecycle management, reducing manual configuration and operational complexity.
- Direct Pod Routing in Cluster Mode: Static route support in cluster mode enables CIS to automatically configure static routes on BIG‑IP using the node subnets assigned to Kubernetes cluster nodes.
This allows BIG‑IP to route directly to Kubernetes pod subnets without requiring any tunnel configuration, greatly simplifying the networking architecture.
- Flexible Deployment Topologies (Standalone or HA): CIS supports both standalone and high‑availability deployment in multi-cluster environments, enabling resilient application exposure across Kubernetes clusters.

Conclusion
As Kubernetes environments continue to expand, the need for consistent, secure, and efficient multi‑cluster application delivery becomes increasingly critical. Together, F5 BIG‑IP CIS and Nutanix Kubernetes Platform (NKP) provide a unified, automated, and future‑ready solution that removes much of the operational complexity traditionally associated with distributed architectures. This joint solution delivers consistent security enforcement, intelligent traffic management, and streamlined operations across any number of Kubernetes clusters. Whether an organization is pursuing modernization, expanding into multi‑cluster architectures, or working to streamline and secure Kubernetes traffic flows, F5 and Nutanix jointly offer a forward-looking path. Multi‑cluster Kubernetes doesn’t have to be complex—and with F5 BIG‑IP CIS and Nutanix Kubernetes Platform (NKP), it’s never been simpler.

Related URLs
- F5 BIG-IP Container Ingress Services (CIS) for Multi-Cluster: https://clouddocs.f5.com/containers/latest/userguide/multicluster/
- Nutanix Kubernetes Platform (NKP): https://www.nutanix.com/products/kubernetes-management-platform
AI Security - LLM-DOS, and predictions of 2025 and beyond
Introduction
Hello again, this article is part of the AI security series. I have been discussing AI security along with the OWASP LLM Top 10. LLM01 and LLM02 were discussed in "AI Security: Prompt Injection and Insecure Output Handling", and LLM03 and its basic concepts were discussed in "Using ChatGPT for security and introduction of AI security". In this article, I am going to discuss LLM04. And, since we are almost at the end of the year 2024, I would also like to present some discussion and predictions for AI security in 2025 and beyond.

LLM04: Model Denial of Service
LLM04 is relatively easy to understand for security engineers who are familiar with conventional cyber attack methods. Denial of Service (DoS) is a common method of cyber attack, in which a large amount of data is sent to a server to make it unable to provide services and/or crash. DoS attacks usually aim to exhaust computational resources and block services rather than steal data, but the disruption they cause can be used as a smokescreen for more malicious activities, such as data breaches or malware installation. A DoS attack against an LLM (LLM-DoS) works the same way: it aims to exhaust the LLM's computational resources (such as CPU/GPU usage) and block its services (such as responding to chat). LLM-DoS can be carried out in two ways. One is a simple LLM-DoS attack that floods the LLM's input with mass requests, similar to a DoS attack against a server. This method, as described in this article, can deplete the LLM's resources, such as CPU/GPU usage. A naive version of this attack would be to instruct the model to keep repeating "Hello", but relying only on natural-language instructions limits the output length, which is bounded by the maximum length of the LLM's Supervised Fine-Tuning (SFT) data. The other method of LLM-DoS is to include code in the input that over-consumes resources. "Denial-of-Service Poisoning Attacks on Large Language Models" discusses this.
In the paper, this is called a poisoning-based DoS (P-DoS) attack, and it demonstrates that the output length limit can be broken by injecting a single poisoning sample designed for DoS purposes. Experiments reveal that an attacker can easily compromise models such as GPT-4o and GPT-4o mini by injecting a single poisoning sample through the OpenAI API at a minimal cost of less than $1. To understand this, think of simple programming: if you put an inescapable loop in your code, it can hang the computer (in fact, many IDEs will warn you before you compile). Likewise, if a network does not run Spanning Tree Protocol, traffic will loop and hang the routers. The same thing can happen with prompt injection. When applying this idea to LLM-DoS, we must consider that such input should be blacklisted, so the simple approach of using an inescapable loop is impossible. Also, even if such attacks are possible against a white-box model, we do not know what kind of attack is possible against a black-box model. However, according to "Crabs: Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings", a prompt sent to a black-box model can generate multiple sub-prompts (e.g., 25 sub-prompts). Its experiments show that the delay could be increased by a factor of 250. Given these serious safety concerns, researchers advocate further research aimed at defending against LLM-DoS threats in custom fine-tuning of aligned LLMs.

What will happen in 2025 and beyond?
Some news sites predict an intensifying AI arms race in the coming year. I would like to share some articles on AI security predictions for the coming year and beyond. According to an article by EG Secure Solutions, generative AI makes it possible to create malware without specialized skills, which makes cyber attacks easier to carry out. Thus, the article predicts that cyber attacks using malware created by generative AI will increase.
The article also points out that LLM-generated applications such as RAGs are being used, but their code may contain vulnerabilities, and that will be another threat in 2025 and beyond. McAfee has released "McAfee Unveils 2025 Cybersecurity Predictions: AI-Powered Scams and Emerging Digital Threats Take Center Stage". According to the article, cyber attacks by malicious actors will be highly optimized by generative AI, and the quality of deepfakes and AI-generated images/videos will increase, making it difficult to determine whether they were created by humans or by generative AI. Thus, it is expected that fake emails generated by generative AI, such as phishing emails, will also become harder to distinguish from real emails. Furthermore, the article points out that malware that uses (or is created by) generative AI will become more sophisticated, potentially breaking through conventional security defense systems and succeeding in extracting personal information and sensitive data. Finally, "Infosec experts divided on AI's potential to assist red teams" discusses the pros and cons of using generative AI for red teaming, one type of security audit. According to the article, the benefit of using generative AI is that it accelerates threat detection by allowing AI to scour multiple data feeds, applications, and other sources of performance data and run them as part of a larger automated workflow. On the other hand, the article also argues that the use of generative AI for red teaming is still limited, because the vulnerability discovery process performed by AI is a black box, so the pen-tester cannot explain to clients how the findings were discovered.

Mitigating OWASP Web Application Risk: Insecure Design using F5 XC platform
Overview:
This article is the last part in a series of articles on mitigation of OWASP Web Application vulnerabilities using the F5 Distributed Cloud platform (F5 XC).

Introduction to Insecure Design:
In an effort to speed up the development cycle, some phases might be reduced in scope, which opens the door to many vulnerabilities. To highlight the risks that get ignored from the design phase through deployment, a new category, "Insecure Design", was added to the OWASP Web Application Top 10 2021 list. Insecure Design represents weaknesses, i.e., the lack of security controls integrated into the website/application throughout the development cycle. If there are no security controls to defend against specific attacks, Insecure Design cannot be fixed by a perfect implementation, while at the same time a secure design can still have an implementation flaw that leads to exploitable vulnerabilities. Hence attackers get broad scope to leverage the vulnerabilities created by insecure design principles. Here are some scenarios that fall under insecure design vulnerabilities:
- Credential Leak
- Authentication Bypass
- Injection vulnerabilities
- Scalper bots etc.
In this article we will see how the F5 XC platform helps to mitigate the scalper bot scenario.

What is Scalper Bot:
In the e-commerce industry, scalping is a practice that leads to denial of inventory. In particular, online scalping uses bots, i.e. automated scripts, that check product availability periodically (in seconds), add the items to the cart, and check out the products in bulk. Hence genuine users do not get a fair chance to grab the deals or discounts offered by the website or company. Alternatively, attackers use these scalper bots to abandon the items added to the cart later, causing losses to the business as well.
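Production bot-defense products use far richer signals (TLS fingerprints, behavioral analysis, challenge-response), but purely as a toy illustration of one rate-based heuristic, a detector might combine a known-automation User-Agent list with a sliding request-rate window. The thresholds and User-Agent signatures below are arbitrary assumptions for the sketch:

```python
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

AUTOMATION_UAS = ("python-requests", "curl", "wget", "libwww-perl", "sqlmap", "nikto")
WINDOW_SECONDS = 10.0   # assumption: tune per application
MAX_REQUESTS = 5        # assumption: tune per application

history: Dict[str, Deque[float]] = defaultdict(deque)

def looks_like_scalper(client_ip: str, user_agent: str,
                       now: Optional[float] = None) -> bool:
    """Toy heuristic: flag well-known automation User-Agents, or clients
    whose add-to-cart rate exceeds MAX_REQUESTS inside WINDOW_SECONDS."""
    now = time.monotonic() if now is None else now
    if any(sig in user_agent.lower() for sig in AUTOMATION_UAS):
        return True
    q = history[client_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()            # drop requests outside the sliding window
    return len(q) > MAX_REQUESTS
```

Note that the scalper script shown below randomizes its User-Agent precisely to evade list-based checks like this, which is why rate and behavior signals matter more than signatures.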
Demonstration:
In this demonstration, we are using an open-source application, "Online Boutique" (refer boutique-repo), which provides an end-to-end online shopping cart facility. A legitimate customer can add any product of their choice to the cart and check out the order.

Customer Page:

Scalper bot with automation script:
The below automation script adds products in bulk to the cart of the e-commerce application and places the order successfully.

import requests
import random

# List of User-Agents
USER_AGENTS = [
    "sqlmap/1.5.2",  # Automated SQL injection tool
    "Nikto/2.1.6",  # Nikto vulnerability scanner
    "nmap",  # Network mapper used in reconnaissance
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",  # Spoofed Search Engine Bot
    "php",  # PHP Command Line Tool
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",  # Old Internet Explorer (suspicious outdated)
    "libwww-perl/6.36",  # Perl-based automation, often found in attacks or scrapers
    "wget/1.20.3",  # Automation tool for downloading files or making requests
    "Python-requests/2.26.0",  # Python automation library
]

# Function to select a random User-Agent
def get_random_user_agent():
    return random.choice(USER_AGENTS)

# Base URL of the API
BASE_URL = "https://insecure-design.f5-hyd-xcdemo.com"

# Perform the API request to add products to the cart
def add_to_cart(product_id, quantity):
    url = f"{BASE_URL}/cart"
    headers = {
        "User-Agent": get_random_user_agent(),  # Random User-Agent
        "Content-Type": "application/x-www-form-urlencoded"
    }
    payload = {
        "product_id": product_id,
        "quantity": quantity
    }
    # Send POST request
    response = requests.post(url, headers=headers, data=payload)
    if response.status_code == 200:
        print(f"Successfully added {quantity} to cart!")
    else:
        print(f"Failed to add to cart. Status Code: {response.status_code}, Response: {response.text}")
    return response

# Perform the API request to place an order
def place_order():
    url = f"{BASE_URL}/cart/checkout"
    headers = {
        "User-Agent": get_random_user_agent(),  # Random User-Agent
        "Content-Type": "application/x-www-form-urlencoded"
    }
    payload = {
        "email": "someone@example.com",
        "street_address": "1600 Amphitheatre Parkway",
        "zip_code": "94043",
        "city": "Mountain View",
        "state": "CA",
        "country": "United States",
        "credit_card_number": "4432801561520454",
        "credit_card_expiration_month": "1",
        "credit_card_expiration_year": "2026",
        "credit_card_cvv": "672"
    }
    # Send POST request
    response = requests.post(url, headers=headers, data=payload)
    if response.status_code == 200:
        print("Order placed successfully!")
    else:
        print(f"Failed to place order. Status Code: {response.status_code}, Response: {response.text}")
    return response

# Main function to execute the API requests
def main():
    # Add product to cart
    product_id = "OLJCESPC7Z"
    quantity = 10
    print("Adding product to cart...")
    add_to_cart_response = add_to_cart(product_id, quantity)
    # If the add_to_cart request is successful, proceed to checkout
    if add_to_cart_response.status_code == 200:
        print("Placing order...")
        place_order()

# Run the main function
if __name__ == "__main__":
    main()

To mitigate this problem, F5 XC provides the ability to identify and block these bots based on the configuration applied under the HTTP load balancer. Here is the procedure to configure bot defense with mitigation action 'block' on the load balancer and associate the backend application, 'evershop', as the origin pool:
- Create an origin pool. Refer pool-creation for more info.
- Create an HTTP load balancer (LB) and associate the above origin pool to it. Refer LB-creation for more info.
- Configure bot defense on the load balancer and add the policy with mitigation action as 'block'.
- Click on "Save and Exit" to save the load balancer configuration.
- Run the automation script, providing the LB domain details, to exploit the items in the application.
- Validate the product availability for the genuine user manually.
- Monitor the logs through F5 XC: navigate to WAAP --> Apps & APIs --> Security Dashboard, select your LB, and click on the 'Security Event' tab.
The above screenshot gives detailed info on the blocked attack along with the mitigation action.

Conclusion:
As seen in the demonstration, F5 Distributed Cloud WAAP (Web Application and API Protection) detected the scalpers with the bot defense configuration applied on the load balancer and mitigated the exploits of the scalper bots. It also provides the mitigation actions of 'allow' and 'redirect' along with 'block'. Please refer to the link for more info.

Reference links:
- OWASP Top 10 - 2021
- Overview of OWASP Web Application Top 10 2021
- F5 Distributed Cloud Services
- F5 Distributed Cloud Platform
- Authentication Bypass
- Injection vulnerabilities

Identity-centric F5 ADSP Integration Walkthrough
In this article we explore the F5 ADSP through the Identity lens, using BIG-IP APM and BIG-IP SSLO, and adding BIG-IP AWAF to the service chain. The F5 ADSP addresses four core areas: deploying at scale, securing against evolving threats, delivering applications reliably, and operating your day-to-day work efficiently. Each comes with its own challenges, but together they define the foundation for keeping systems fast, stable, and safe. Each architecture deployment example is designed to cover at least two of the four core areas: Deployment, Security, Delivery, and XOps.

Runtime API Security: From Visibility to Enforced Protection
This article examines how runtime visibility and enforcement address API security failures that emerge only after deployment. It shows how observing live traffic enables evidence-based risk prioritization, exposes active exploitation, and supports targeted controls such as schema enforcement and scoped protection rules. The focus is on reducing real-world attack impact by aligning specifications, behavior, and enforcement, and integrating runtime protection as a feedback mechanism within a broader API security strategy.
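As a toy sketch of the schema-enforcement idea mentioned above: a runtime control compares each request body against the declared specification and rejects anything undeclared. Real gateways validate against full OpenAPI/JSON Schema documents; the field names and simplified schema shape here are invented for illustration:

```python
def validate_against_schema(body: dict, schema: dict) -> list:
    """Minimal positive-security check: flag fields the schema does not
    declare, missing required fields, and type mismatches. Returns a list
    of error strings; an empty list means the request conforms."""
    errors = []
    allowed = schema.get("properties", {})
    for field in body:
        if field not in allowed:
            errors.append(f"undeclared field: {field}")
    for field in schema.get("required", []):
        if field not in body:
            errors.append(f"missing required field: {field}")
    for field, spec in allowed.items():
        if field in body and not isinstance(body[field], spec["type"]):
            errors.append(f"bad type for {field}")
    return errors

schema = {
    "required": ["user_id"],
    "properties": {"user_id": {"type": str}, "limit": {"type": int}},
}
assert validate_against_schema({"user_id": "u1", "limit": 5}, schema) == []
assert validate_against_schema({"user_id": "u1", "admin": True}, schema) == ["undeclared field: admin"]
```

The value of doing this at runtime is the feedback loop the article describes: live traffic that fails validation either reveals an attack or reveals that the specification has drifted from real behavior.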
BIG-IP Next for Kubernetes CNFs - DNS walkthrough
Introduction

F5 enables advanced DNS implementations across different deployments, whether on hardware, Virtual Functions, or F5 Distributed Cloud. In Kubernetes environments, the F5BigDnsApp Custom Resource Definition (CRD) allows declarative configuration of DNS listeners, pools, monitors, and profiles directly in-cluster. Deploying DNS services such as DNS Express, DNS Cache, and DoH within the Kubernetes cluster using BIG-IP Next for Kubernetes CNF reduces external traffic by resolving queries locally (caching can cut egress to upstream resolvers by up to 80%) and enhances security through in-cluster isolation, mTLS enforcement, and protocol encryption such as DoH, preventing plaintext DNS exposure across cluster boundaries. This article provides a walkthrough of DNS Express, DNS Cache, and DNS-over-HTTPS (DoH) on top of Red Hat OpenShift.

Prerequisites

Deploy BIG-IP Next for Kubernetes CNF following the steps in F5's Cloud-Native Network Functions (CNFs), then verify that the nodes and CNF components are installed:

```
[cloud-user@ocp-provisioner f5-cne-2.1.0]$ kubectl get nodes
NAME                      STATUS   ROLES                         AGE      VERSION
master-1.ocp.f5-udf.com   Ready    control-plane,master,worker   2y221d   v1.29.8+f10c92d
master-2.ocp.f5-udf.com   Ready    control-plane,master,worker   2y221d   v1.29.8+f10c92d
master-3.ocp.f5-udf.com   Ready    control-plane,master,worker   2y221d   v1.29.8+f10c92d
worker-1.ocp.f5-udf.com   Ready    worker                        2y221d   v1.29.8+f10c92d
worker-2.ocp.f5-udf.com   Ready    worker                        2y221d   v1.29.8+f10c92d

[cloud-user@ocp-provisioner f5-cne-2.1.0]$ kubectl get pods -n cne-core
NAME                                          READY   STATUS    RESTARTS       AGE
f5-cert-manager-656b6db84f-dmv78              2/2     Running   10 (15h ago)   19d
f5-cert-manager-cainjector-5cd9454d6c-sc8q2   1/1     Running   21 (15h ago)   19d
f5-cert-manager-webhook-6d87b5797b-954v6      1/1     Running   4              19d
f5-dssm-db-0                                  3/3     Running   13 (18h ago)   15d
f5-dssm-db-1                                  3/3     Running   0              18h
f5-dssm-db-2                                  3/3     Running   4 (18h ago)    42h
f5-dssm-sentinel-0                            3/3     Running   0              14h
f5-dssm-sentinel-1                            3/3     Running   10 (18h ago)   5d8h
f5-dssm-sentinel-2                            3/3     Running   0              18h
f5-rabbit-64c984d4c6-xn2z4                    2/2     Running   8              19d
f5-spk-cwc-77d487f955-j5pp4                   2/2     Running   9              19d

[cloud-user@ocp-provisioner f5-cne-2.1.0]$ kubectl get pods -n cnf-fw-01
NAME                                   READY   STATUS    RESTARTS      AGE
f5-afm-76c7d76fff-5gdhx                2/2     Running   2             42h
f5-downloader-657b7fc749-vxm8l         2/2     Running   0             26h
f5-dwbld-d858c485b-6xfq8               2/2     Running   2             26h
f5-ipsd-79f97fdb9c-zfqxk               2/2     Running   2             26h
f5-tmm-6f799f8f49-lfhnd                5/5     Running   0             18h
f5-zxfrd-d9db549c4-6r4wz               2/2     Running   2 (18h ago)   26h
f5ingress-f5ingress-7bcc94b9c8-zhldm   5/5     Running   6             26h
otel-collector-75cd944bcc-xnwth        1/1     Running   1             42h
```

DNS Express Walkthrough

DNS Express configures BIG-IP to answer queries for a zone authoritatively by pulling the zone via AXFR/IXFR from an upstream server, with optional TSIG authentication. Zone data stays in-cluster for low-latency authoritative resolution.

Step 1: Create an F5BigDnsZone CR for zone transfer (e.g., example.com from upstream 10.1.1.12):

```yaml
# cat 10-cr-dnsxzone.yaml
apiVersion: k8s.f5net.com/v1
kind: F5BigDnsZone
metadata:
  name: example.com
spec:
  dnsxAllowNotifyFrom: ["10.1.1.12"]
  dnsxServer:
    address: "10.1.1.12"
    port: 53
  dnsxEnabled: true
  dnsxNotifyAction: consume
  dnsxVerifyNotifyTsig: false
```

```
kubectl apply -f 10-cr-dnsxzone.yaml -n cnf-fw-01
```

Step 2: Deploy the F5BigDnsApp CR with DNS Express enabled:

```yaml
# cat 11-cr-dnsx-app-udp.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsApp
metadata:
  name: "dnsx-app-listener"
  namespace: "cnf-fw-01"
spec:
  destination:
    address: "10.1.30.100"
    port: 53
  ipProtocol: "udp"
  snat:
    type: "automap"
  dns:
    dnsExpressEnabled: true
  logProfile: "cnf-log-profile"
```

```
kubectl apply -f 11-cr-dnsx-app-udp.yaml -n cnf-fw-01
```

Step 3: Validate with a query from the client pod and TMM statistics:

```
dig @10.1.30.100 www.example.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.30.100 www.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43865
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.example.com.               IN      A

;; ANSWER SECTION:
www.example.com.        604800  IN      A       192.168.1.11

;; AUTHORITY SECTION:
example.com.            604800  IN      NS      ns.example.com.

;; ADDITIONAL SECTION:
ns.example.com.         604800  IN      A       192.168.1.10

;; Query time: 0 msec
;; SERVER: 10.1.30.100#53(10.1.30.100) (UDP)
;; WHEN: Thu Jan 22 11:10:24 UTC 2026
;; MSG SIZE  rcvd: 93
```

```
kubectl exec -it deploy/f5-tmm -c debug -n cnf-fw-01 -- bash
tmctl -id blade tmmdns_zone_stat name=example.com
name        dnsx_queries dnsx_responses dnsx_xfr_msgs dnsx_notifies_recv
----------- ------------ -------------- ------------- ------------------
example.com            2              2             0                  0
```

DNS Cache Walkthrough

DNS Cache reduces latency by storing responses non-authoritatively. The cache is defined in a separate CR and referenced from the DNS profile, cutting repeated upstream queries and external bandwidth use.

Step 1: Create an F5BigDnsCache CR:

```yaml
# cat 13-cr-dnscache.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsCache
metadata:
  name: "cnf-dnscache"
spec:
  cacheType: resolver
  resolver:
    useIpv4: true
    useTcp: false
    useIpv6: false
    forwardZones:
      - forwardZone: "example.com"
        nameServers:
          - ipAddress: 10.1.1.12
            port: 53
      - forwardZone: "."
        nameServers:
          - ipAddress: 8.8.8.8
            port: 53
```

```
kubectl apply -f 13-cr-dnscache.yaml -n cnf-fw-01
```

Step 2: Deploy the F5BigDnsApp CR with DNS Cache enabled:

```yaml
# cat 11-cr-dnsx-app-udp.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsApp
metadata:
  name: "dnsx-app-listener"
  namespace: "cnf-fw-01"
spec:
  destination:
    address: "10.1.30.100"
    port: 53
  ipProtocol: "udp"
  snat:
    type: "automap"
  dns:
    dnsCache: "cnf-dnscache"
  logProfile: "cnf-log-profile"
```

```
kubectl apply -f 11-cr-dnsx-app-udp.yaml -n cnf-fw-01
```

Step 3: Validate with a query from the client pod:

```
dig @10.1.30.100 www.example.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.30.100 www.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18302
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.example.com.               IN      A

;; ANSWER SECTION:
www.example.com.        19076   IN      A       192.168.1.11

;; Query time: 4 msec
;; SERVER: 10.1.30.100#53(10.1.30.100) (UDP)
;; WHEN: Thu Jan 22 11:04:45 UTC 2026
;; MSG SIZE  rcvd: 60
```

DoH Walkthrough

DoH exposes DNS over HTTPS (port 443) for encrypted queries, using BIG-IP's protocol inspection and UDP profiles, securing in-cluster DNS against eavesdropping and man-in-the-middle attacks.
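As background on what the listener will receive: a DoH client sends the same RFC 1035 wire-format query that plain DNS uses, carried over HTTPS as defined by RFC 8484 (dig's +https uses POST by default; the GET form base64url-encodes the query into a `dns` parameter). A minimal Python sketch of the GET-style encoding, for illustration only (the helper names are hypothetical; 10.1.20.100 is the DoH listener address used below):

```python
import base64
import struct

def build_dns_query(qname: str, qtype: int = 1) -> bytes:
    """Build a minimal RFC 1035 wire-format query: header plus one question."""
    header = struct.pack(">HHHHHH",
                         0,        # ID 0 keeps GET URLs cache-friendly (RFC 8484)
                         0x0100,   # flags: standard query, recursion desired
                         1, 0, 0, 0)  # QDCOUNT=1; answer/authority/additional empty
    # QNAME: each label length-prefixed, terminated by a zero byte
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"
    question += struct.pack(">HH", qtype, 1)  # QTYPE (1 = A), QCLASS (1 = IN)
    return header + question

def doh_get_url(server: str, qname: str) -> str:
    """base64url-encode the query (padding stripped) into the 'dns' parameter."""
    wire = build_dns_query(qname)
    b64 = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"https://{server}/dns-query?dns={b64}"

if __name__ == "__main__":
    print(doh_get_url("10.1.20.100", "www.example.com"))
```

This is only a sketch of the on-the-wire encoding, not an F5 API; the walkthrough below uses dig for actual testing.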
Step 1: Ensure the client SSL settings and HTTP profiles exist:

```yaml
# cat 14-tls-clientsslsettings.yaml
apiVersion: k8s.f5net.com/v1
kind: F5BigClientsslSetting
metadata:
  name: "cnf-clientssl-profile"
  namespace: "cnf-fw-01"
spec:
  enableTls13: true
  enableRenegotiation: false
  renegotiationMode: "require"
```

```yaml
# cat 15-http-profiles.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigHttp2Setting
metadata:
  name: http2-profile
spec:
  activationModes: "alpn"
  concurrentStreamsPerConnection: 10
  connectionIdleTimeout: 300
  frameSize: 2048
  insertHeader: false
  insertHeaderName: "X-HTTP2"
  receiveWindow: 32
  writeSize: 16384
  headerTableSize: 4096
  enforceTlsRequirements: true
---
apiVersion: "k8s.f5net.com/v1"
kind: F5BigHttpSetting
metadata:
  name: http-profile
spec:
  oneConnect: false
  responseChunking: "sustain"
  lwsMaxColumn: 80
```

```
kubectl apply -f 14-tls-clientsslsettings.yaml -n cnf-fw-01
kubectl apply -f 15-http-profiles.yaml -n cnf-fw-01
```

Step 2: Create the F5BigDnsApp CR for the DoH service:

```yaml
# cat 16-DNSApp-doh.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsApp
metadata:
  name: "cnf-dohapp"
  namespace: "cnf-fw-01"
spec:
  ipProtocol: "udp"
  dohProtocol: "udp"
  destination:
    address: "10.1.20.100"
    port: 443
  snat:
    type: "automap"
  dns:
    dnsExpressEnabled: false
    dnsCache: "cnf-dnscache"
  clientSslSettings: "cnf-clientssl-profile"
  pool:
    members:
      - address: "10.1.10.50"
    monitors:
      dns:
        enabled: true
        queryName: "www.example.com"
        queryType: "a"
        recv: "192.168.1.11"
```

```
kubectl apply -f 16-DNSApp-doh.yaml -n cnf-fw-01
```

Step 3: Test from the client pod:

```
ubuntu@client:~$ dig @10.1.20.100 -p 443 +https +notls-ca www.google.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.20.100 -p 443 +https +notls-ca www.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4935
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.google.com.                IN      A

;; ANSWER SECTION:
www.google.com.         69      IN      A       142.251.188.103
www.google.com.         69      IN      A       142.251.188.147
www.google.com.         69      IN      A       142.251.188.106
www.google.com.         69      IN      A       142.251.188.105
www.google.com.         69      IN      A       142.251.188.99
www.google.com.         69      IN      A       142.251.188.104

;; Query time: 8 msec
;; SERVER: 10.1.20.100#443(10.1.20.100) (HTTPS)
;; WHEN: Thu Jan 22 11:27:05 UTC 2026
;; MSG SIZE  rcvd: 139

ubuntu@client:~$ dig @10.1.20.100 -p 443 +https +notls-ca www.example.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.20.100 -p 443 +https +notls-ca www.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20401
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.example.com.               IN      A

;; ANSWER SECTION:
www.example.com.        17723   IN      A       192.168.1.11

;; Query time: 4 msec
;; SERVER: 10.1.20.100#443(10.1.20.100) (HTTPS)
;; WHEN: Thu Jan 22 11:27:18 UTC 2026
;; MSG SIZE  rcvd: 60
```

Conclusion

BIG-IP Next DNS CRs turn Kubernetes into a production-grade DNS platform, delivering authoritative resolution, caching efficiency, and encrypted DoH, all while reducing external traffic costs and hardening security boundaries for cloud-native deployments.

Related Content

- BIG-IP Next for Kubernetes CNF guide
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- BIG-IP Next for Kubernetes CNF deployment walkthrough
- BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
- Modern Applications-Demystifying Ingress solutions flavors | DevCentral
