Realtime DoS mitigation with VELOS BX520 Blade
Introduction

F5 VELOS is a rearchitected, next-generation hardware platform that scales application delivery performance and automates application services to address many of today's most critical business challenges. F5 VELOS is a key component of the F5 Application Delivery and Security Platform (ADSP).

Demo Video

DoS attacks are a fact of life

Detect and mitigate large-scale, volumetric network attacks and application-targeted attacks in real time to defend your business and your customers against multi-vector denial-of-service (DoS) activity attempting to disrupt your operations.

Direct DoS impacts include:
- Loss of revenue
- Degradation of infrastructure

Indirect costs often include:
- Negative customer experience
- Damage to brand image

DoS attacks do not need to be massive to be effective.

F5 VELOS: Key Specifications
- Up to 6 Tbps total Layer 4-7 throughput
- 6.4 billion concurrent connections
- Higher density of resources per rack unit than any previous BIG-IP
- Flexible support for multi-tenancy and blade groupings
- API-first architecture / fully automatable
- Future-proof architecture built on Kubernetes
- Multi-terabit security: firewall and real-time DoS

Real-time DoS Mitigation with VELOS: Challenges
- Massive attack volumes are not required to negatively impact "goodput".
- Attacks are often short in duration to evade out-of-band/sampling mitigation.
- BIG-IP inline DoS protection can react quickly and mitigate in real time.
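To build intuition for why inline, per-second visibility matters, here is a simplified sketch of rate-based flood detection: each second's new-connection rate is compared against a multiple of a learned baseline as traffic arrives, rather than after out-of-band sampling. This is an illustrative model only, not BIG-IP AFM's actual detection logic.

```python
from collections import defaultdict

def detect_flood(events, baseline_cps, multiplier=3.0):
    """Flag seconds where the new-connection rate exceeds a multiple of baseline.

    events: iterable of (timestamp_second, src_ip) new-connection events.
    Returns the set of seconds considered anomalous.
    """
    per_second = defaultdict(int)
    for ts, _src in events:
        per_second[ts] += 1
    return {ts for ts, cps in per_second.items() if cps > baseline_cps * multiplier}

# Normal traffic: ~100 CPS for seconds 0-4; second 5 spikes to 1000 CPS.
events = [(t, f"10.0.0.{i % 50}") for t in range(5) for i in range(100)]
events += [(5, f"203.0.113.{i % 200}") for i in range(1000)]
print(detect_flood(events, baseline_cps=100))  # -> {5}
```

Because the check runs on every arriving event window rather than on sampled exports, a short-lived spike is caught within the same second it occurs, which is the property the inline approach above relies on.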
Simulated DoS Attack

Demo environment traffic:
- 600 Gbps
- 1.5 million connections per second (CPS)
- 120 million concurrent flows

Example dashboard without a DoS attack.

Generated Attack
- Flood an IP from many sources: 10 Gb/s with 10 million CPS
- DoS attack launched (<2% increase in traffic)

Impact
- High CPU consumption: 10M+ new CPS
- High memory utilization as concurrent flows increase quickly

Result
- Open connections much higher
- New connections increasing rapidly
- Higher CPU
- Application transaction failures

Enable Network Flood Mitigation

Mitigation is applied by enabling the Flood vector in BIG-IP AFM Device DoS. Observe "goodput" returning to normal in seconds as BIG-IP mitigates the attack.

Conclusion

Distributed denial-of-service (DDoS) attacks continue to see enormous growth across every metric, including the number and frequency of attacks, average peak bandwidth, and overall complexity. As organizations face the continued growth and occurrence of these attacks, F5 provides multiple options for complete, layered protection against DDoS threats across layers 3-4 and 7. F5 enables organizations to maintain critical infrastructure and services, ensuring business continuity under this barrage of evolving and increasing DoS/DDoS threats attempting to disrupt or shut down their business.

Related Articles
- F5 VELOS: A Next-Generation Fully Automatable Platform
- F5 rSeries: Next-Generation Fully Automatable Hardware
- Accelerate your AI initiatives using F5 VELOS
- DEMO: The Next Generation of F5 Hardware is Ready for you
F5 VELOS: A Next-Generation Fully Automatable Platform
What is VELOS?

The F5 VELOS platform is the next generation of F5's chassis-based systems. VELOS can bridge traditional and modern application architectures by supporting a mix of traditional F5 BIG-IP tenants as well as next-generation BIG-IP Next tenants in the future. F5 VELOS is a key component of the F5 Application Delivery and Security Platform (ADSP).

VELOS relies on a Kubernetes-based platform layer (F5OS) that is tightly integrated with F5 TMOS software. Moving to a microservice-based platform layer allows VELOS to provide functionality that was not possible in previous generations of F5 BIG-IP platforms. Customers do not need to learn Kubernetes but still get its benefits: management of the chassis is still done via a familiar F5 CLI, webUI, or API. The added automation capabilities can greatly simplify the process of deploying F5 products, and the time and resources saved through automation translate to more time for critical tasks.

F5OS VELOS UI Demo Video

Why is VELOS important?
- Get more done in less time by using a highly automatable hardware platform that can deploy software solutions in seconds, not minutes or hours.
- Increased performance improves ROI: the VELOS platform is a high-performance, highly scalable chassis with improved processing power.
- Running multiple software versions on the same platform allows for more flexibility than previously possible.
- Significantly reduce the TCO of previous-generation hardware by consolidating multiple platforms into one.
Key VELOS Use-Cases

NetOps Automation
- Shorten time to market by automating network operations and offering cloud-like orchestration with full-stack programmability
- Drive app development and delivery with self-service and faster response times

Business Continuity
- Drive consistent policies across on-prem and public cloud, and across hardware- and software-based ADCs
- Build resiliency with VELOS' superior platform redundancy and failover capabilities
- Future-proof investments by running multiple versions of apps side by side; migrate applications at your own pace

Cloud Migration On-Ramp
- Accelerate cloud strategy by adopting cloud operating models and on-demand scalability with VELOS, and use it as an on-ramp to cloud
- Dramatically reduce TCO with VELOS systems; extend commercial models to migrate from hardware to software or as applications move to cloud

Automation Capabilities

Declarative APIs and integration with automation frameworks (Terraform, Ansible) greatly simplify operations and reduce overhead:
- AS3 (Application Services 3 Extension): a declarative API that simplifies the configuration of application services. With AS3, customers can deploy and manage configurations consistently across environments.
- Ansible Automation: prebuilt Ansible modules for VELOS enable automated provisioning, configuration, and updates, reducing manual effort and minimizing errors.
- Terraform: organizations leveraging Infrastructure as Code (IaC) can use Terraform to define and automate the deployment of VELOS appliances and associated configurations.

(Figures: example JSON file; running the automation playbook; the results.)

More information on automation:
- Automating F5OS on VELOS
- GitHub Automation Repository

Specialized Hardware Performance

VELOS offers more hardware-accelerated performance capabilities, with more FPGA chipsets that are more tightly integrated with TMOS. It also includes the latest Intel processing capabilities.
These hardware capabilities enhance the following:
- SSL and compression offload
- L4 offload for higher performance and reduced load on software
- Hardware-accelerated SYN flood protection
- Hardware-based protection from more than 100 types of denial-of-service (DoS) attacks
- Support for F5 Intelligence Services

(Figures: VELOS CX1610 chassis; VELOS BX520 blade.)

Migration Options (BIG-IP Journeys)

Use BIG-IP Journeys to easily migrate your existing configuration to VELOS. This covers the following:
- The entire L4-L7 configuration can be migrated
- Individual applications can be migrated
- BIG-IP tenant configuration can be migrated
- Automatically identify and resolve migration issues
- Convert UCS files into AS3 declarations if needed
- Post-deployment diagnostics and health

The Journeys tool, available on DevCentral's GitHub, facilitates the migration of legacy BIG-IP configurations to VELOS-compatible formats. Customers can convert UCS files, validate configurations, and highlight unsupported features during the migration process. Multi-tenancy capabilities in VELOS simplify the process of isolating workloads during and after migration.

GitHub repository for F5 Journeys

Conclusion

The F5 VELOS platform addresses the modern enterprise's need for high-performance, scalable, and efficient application delivery and security solutions. By combining cutting-edge hardware capabilities with robust automation tools and flexible migration options, VELOS empowers organizations to seamlessly transition from legacy platforms while unlocking new levels of performance and operational agility. Whether driven by the need for increased throughput or advanced multi-tenancy, the VELOS platform stands as a future-ready solution for securing and optimizing application delivery in an increasingly complex IT landscape.

Related Content
- Cloud Docs VELOS Guide
- F5 VELOS Chassis System Datasheet
- DEMO: The Next Generation of F5 Hardware is Ready for you
Practical Mapping Guide - F5 BIG-IP TMOS Modules to Feature-Scoped CNFs
Introduction

BIG-IP TMOS and BIG-IP CNFs solve similar problems with very different deployment and configuration models. In TMOS you deploy whole modules (LTM, AFM, DNS, etc.), while in CNFs you deploy only the specific features and data-plane functions you need, as cloud-native components.

Modules vs. feature-scoped services

In TMOS, enabling LTM gives you a broad set of capabilities in one module: virtual servers, pools, profiles, iRules, SNAT, persistence, etc., all living in the same configuration namespace. In CNFs, you deploy only the features you actually need, expressed as discrete Custom Resources: for example, a routing CNF for more precise TMM routing, a firewall CNF for policies, or specific CRs for profiles and NAT, rather than bringing the entire LTM or AFM module along "just in case".

Configuration objects vs. Custom Resources

TMOS configuration is organized as objects under modules (for example: LTM virtual, LTM pool, security firewall policy, net vlan), managed via tmsh, the GUI, or iControl REST. CNFs expose those same capabilities through Kubernetes Custom Resources (CRs): routes, policies, profiles, NAT, VLANs, and so on are expressed as YAML and applied with GitOps-style workflows, making individual features independently deployable with a version history.

Coarse-grained vs. precise feature deployment

On TMOS, deploying a use case often means standing up a full BIG-IP instance with multiple modules enabled, even if the application only needs basic load balancing with a single HTTP profile and a couple of firewall rules. With CNFs, you can carve out exactly what you need: for example, only a precise TMM routing function plus the specific TCP/HTTP/SSL profiles and security policies required for a given application or edge segment, reducing blast radius and resource footprint.
BIG-IP Next CNF Custom Resources (CRs) extend Kubernetes APIs to configure the Traffic Management Microkernel (TMM) and Advanced Firewall Manager (AFM), providing declarative equivalents to BIG-IP TMOS configuration objects. This mapping ensures functional parity for L4-L7 services, security, and networking while enabling cloud-native scalability. The focus here covers core mappings, with examples for iRules and profiles.

What are Kubernetes Custom Resources?

Think of the Kubernetes API as a menu of objects you can create: Pods (containers), Services (networking), Deployments (replicas). CRs add new menu items for your unique needs. You first define a CustomResourceDefinition (CRD) (the blueprint), then create CR instances (actual objects using that blueprint).

How CRs Work (Step by Step)
1. Create CRD - define a new object type (e.g., "F5BigFwPolicy")
2. Apply CRD - the Kubernetes API server registers it, adding new REST endpoints like /apis/f5net.com/v1/f5bigfwpolicies
3. Create CR - write YAML instances of your new type and apply with kubectl
4. Controllers watch - custom operators/controllers react to CR changes, creating real resources (Pods, ConfigMaps, etc.)

CR examples

Networking and NAT Mappings

Networking CRs handle interfaces and routing, mirroring TMOS VLANs, Self-IPs, and NAT mappings.

| CNF CR | TMOS Object(s) | Purpose |
| --- | --- | --- |
| F5BigNetVlan | VLANs, Self-IPs, MTU | Interface configuration |
| F5BigCneSnatpool | SNAT Automap | Source NATs |

Example: an F5BigNetVlan CR sets VLAN tags and IPs, applied via kubectl apply, propagating to TMM like tmsh create net vlan.

Security and Protection Mappings

Protection CRs map to AFM and DoS modules for threat mitigation.
| CNF CR | CR Purpose | TMOS Object(s) | Purpose |
| --- | --- | --- | --- |
| F5BigFwPolicy | Applies industry-standard firewall rules to TMM | AFM policies | Stateful ACL filtering |
| F5BigFwRulelist | Consists of an array of ACL rules | AFM rule list | Stateful ACL filtering |
| F5BigSvcPolicy | Allows creation of timer policies and attaching them to firewall rules | AFM policies | Stateful ACL filtering |
| F5BigIpsPolicy | Filters inspected traffic (matched events) by properties such as the inspection profile's host (virtual server or firewall policy), traffic properties, inspection action, or inspection service | AFM IPS policies | Enforce IPS |
| F5BigDdosGlobal | Configures the TMM Proxy Pod to protect applications and the TMM Pod from denial-of-service / distributed denial-of-service (DoS/DDoS) attacks | Global DoS/DDoS | Device DoS protection |
| F5BigDdosProfile | The per-context DDoS CRD configures the TMM Proxy Pod to protect applications from DoS/DDoS attacks | DoS/DDoS per profile | Per-context DoS profile protection |
| F5BigIpiPolicy | Each policy contains a list of categories and actions that can be customized, with the IPRep database being common; feed-list-based policies can also customize the configured IP addresses | IP Intelligence policies | Reputation-based blocking |

Example: F5BigFwPolicy references rule lists and zones, equivalent to creating a security firewall policy with tmsh.

Traffic Management Mappings

Traffic CRs proxy TCP/UDP and support ALGs, akin to LTM virtual servers and pools.

| CNF CR | TMOS Object(s) | Purpose |
| --- | --- | --- |
| Part of the Apps CRs: Virtual Server (Ingress/HTTPRoute/GRPCRoute) | LTM Virtual Servers | Load balancing |
| Part of the Apps CRs: Pool (Endpoints) | LTM Pools | Node groups |
| F5BigAlgFtp | FTP Profile/ALG | FTP gateway |
| F5BigDnsCache | DNS Cache | Resolution/caching |

Example: a GRPCRoute CR defines backend endpoints and profiles, mapping to tmsh create ltm virtual.
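As a concrete sketch of the firewall mapping above, a stateful ACL expressed as an F5BigFwPolicy CR might look roughly like this. The apiVersion and spec layout here are assumptions for illustration; consult the CNF CR reference for the exact schema.

```yaml
# Illustrative only: stateful ACL rules as a CR, the declarative analogue
# of a tmsh-created security firewall policy. spec layout is an assumption.
apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: edge-acl
  namespace: cnf-gateway
spec:
  rule:
    - name: allow-dns
      action: "accept"
      ipProtocol: udp
      destination:
        ports:
          - "53"
    - name: drop-all
      action: "drop"
```

A controller watching this CR renders the rules into TMM/AFM configuration, so policy changes become ordinary Git-reviewed YAML diffs rather than imperative tmsh sessions.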
Profiles Mappings with Examples

Profile CRs customize traffic handling and are referenced by Traffic Management CRs, directly paralleling TMOS profiles.

| CNF CR | TMOS Profile(s) | Example Configuration Snippet |
| --- | --- | --- |
| F5BigTcpSetting | TCP profile | spec: { defaultsFrom: "tcp-lan-optimized" } tunes congestion like tmsh create ltm profile tcp |
| F5BigUdpSetting | UDP profile | spec: { idleTimeout: 300 } sets timeouts |
| F5BigUdpSetting | HTTP profile | spec: { http2Profile: "http2" } enables HTTP/2 |
| F5BigClientSslSetting | Client SSL profile | spec: { certKeyChain: { name: "default" } } for TLS termination |
| F5PersistenceProfile | Persistence profile | Enables source/destination persistence |

Profiles attach to Virtual Servers/Ingresses declaratively, e.g., spec.profiles: [{ name: "my-tcp", context: "tcp" }].

iRules Support and Examples

CNF fully supports iRules for custom logic, integrated into Traffic Management CRs like Virtual Servers or FTP/RTSP. iRules execute in TMM, preserving TMOS scripting.

Example CR snippet:

```yaml
apiVersion: k8s.f5net.com/v1
kind: F5BigCneIrule
metadata:
  name: cnfs-dns-irule
  namespace: cnf-gateway
spec:
  iRule: >
    when DNS_REQUEST {
      if { [IP::addr [IP::remote_addr] equals 10.10.1.0/24] } {
        cname cname.siterequest.com
      } else {
        host 10.20.20.20
      }
    }
```

This mapping facilitates TMOS-to-CNF migrations using tools like the F5 BIG-IP Controller for CR generation from UCS configs. For full CR specs, refer to the official docs. The updated full list of CNF CRs per release can be found here: https://clouddocs.f5.com/cnfs/robin/latest/cnf-custom-resources.html

Conclusion

In this article, we examined how BIG-IP Next CNF redefines the TMOS module model into a feature-scoped, cloud-native service architecture. By mapping TMOS objects such as virtual servers, profiles, and security policies to their corresponding CNF Custom Resources, we illustrated how familiar traffic management and security constructs translate into declarative Kubernetes workflows.
Related content
- F5 Cloud-Native Network Functions configurations guide
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- CNF DNS Express
- BIG-IP Next for Kubernetes CNFs - DNS walkthrough
- BIG-IP Next for Kubernetes CNFs deployment walkthrough | DevCentral
- BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
- Modern Applications-Demystifying Ingress solutions flavors | DevCentral

Visibility for Modern Telco and Cloud-Native Networks
Introduction

As operators transition from 5G to 6G-ready cloud-native architectures, their networks are becoming more disaggregated, dynamic, and intelligent. Functions increasingly span virtualized 5G cores, AI-enhanced 6G control domains, MEC platforms, and hyperscale distributed edge clouds. Traditional visibility tools built for static or centralized topologies can no longer keep pace. Telco and security architects now face the challenge of maintaining real-time, end-to-end observability across highly adaptive, multi-vendor, and multi-cloud infrastructures where workloads may move between 5G and 6G service fabrics in milliseconds. BIG-IP eBPF Observability (EOB) meets this challenge with a high-performance, kernel-level telemetry framework built on Linux's eBPF technology, delivering visibility that scales from 6G microcells to data center cores without adding operational overhead.

Why Legacy Visibility Approaches Fail in 5G/6G Environments

Tools like SPANs, TAPs, and broker appliances were effective for static topologies. But in cloud-native 5G and 6G deployments, where AI dynamically places network functions across cores, edges, and reconfigurable slices, they break down. Common limitations include:
- No reliable physical tap points in cloud-distributed or satellite-connected nodes
- SPAN mirroring constrained by virtual and container limits
- Encryption and service-mesh layers hiding the real traffic context
- Vendor probes exposing only proprietary NFs, limiting multi-domain visibility

These gaps fragment insights across the control plane (SBI, PFCP, NGAP, F1-AP) and the AI-driven management planes now emerging within 6G intelligent network layers (INNs).

eBPF: Core Technology for Adaptive Observability

eBPF (extended Berkeley Packet Filter) allows sandboxed programs to execute inside the Linux kernel, offering in-situ visibility into packet, process, and syscall activities at near-zero latency.
eBPF's key advantages for 5G and 6G include:
- Safe, programmable telemetry without kernel module changes
- Full observability across containers, namespaces, and network functions
- Ultra-low-latency insights, ideal for closed-loop automation and AI inference workflows

Because 6G networks depend on autonomous observability loops, eBPF forms the telemetry foundation that lets those loops sense and adapt to conditions in real time.

BIG-IP EOB Architecture and Data Model

EOB leverages lightweight containerized sensors orchestrated via Kubernetes or OpenShift across core, edge, and RAN domains. Its data model captures:
- Raw packets and deep forensic traces
- Dynamic service topologies reflecting 5G/6G slice relationships
- Plaintext records of TLS 1.3 and service-mesh sessions for deep insight
- Metadata for telco protocols (SBI, PFCP, DNS, HTTP/2, F1-AP, NGAP) and emerging 6G access protocols
- Rich CNFlow telemetry correlating control- and user-plane activity

Telemetry then streams to a message bus or observability fabric, ready for real-time analytics, SIEM integration, or the AI-based fault-prediction systems that are vital to 6G's full-autonomy vision.

Core-to-Edge-to-AI Deployment Model

EOB spans the entire 5G/6G topology, from centralized cores to AI-powered edge clouds:
- 5G core functions: AMF, SMF, NRF, PCF, UDM, UDR
- 6G expansion: cloud-native service networks running AI-orchestrated CNFs and reconfigurable RAN domains
- Edge & MEC: low-latency compute nodes supporting URLLC and industrial AI
- Open RAN: O-RU, O-DU, O-CU, and AI-RAN management functions

A central controller enforces data routing and observability policies, making EOB a unifying visibility plane for multi-band, multi-vendor networks that may straddle both 5G and 6G service layers.

Restoring Visibility in Encrypted and AI-Automated Planes

Modern telco cores encrypt almost everything, including control messages used for orchestration and identity management.
EOB restores inspection capability by extracting essential 5G/6G identifiers and slice attributes from encrypted flows, enabling real-time anomaly detection. Capabilities include:
- Extraction of SUPI, SUCI, GUTI, slice/service IDs, and the new AI-Service Identifiers emerging in 6G
- Node-level contextual threat detection across AMF, SMF, and cognitive NFs
- Direct integration with different security products and AI threat analytics for real-time prevention

This removes "blind spots" that AI-automated security systems would otherwise misinterpret or miss entirely.

Let's go over a demo by Tom Creighton showing how BIG-IP EOB enhances visibility.

Ecosystem and Integration

BIG-IP EOB integrates seamlessly with telco cloud environments:
- Kubernetes and Red Hat OpenShift: certified operator framework, integrated with Red Hat's bpfman for large-scale eBPF management
- AI/ML pipelines: telemetry exported to AIOps, CI/CD, and orchestration frameworks, key for autonomous fault resolution in 6G

While we highlighted multiple use cases for service providers, EOB can extend other capabilities to the enterprise sector:
- Application and data monitoring
- Security and policy assurance
- User experience monitoring
- Cloud-native and infrastructure monitoring

Conclusion

BIG-IP EOB enables a future-proofed observability framework that supports the continuous evolution from 5G to 6G:
- Unified, vendor-neutral visibility across physical, virtual, and AI-driven domains
- Granular kernel-level insight without probe sprawl
- Control- and user-plane correlation for real-time SLA and security validation
- Encrypted and service-mesh traffic observability
- A telemetry foundation for 6G autonomous and cognitive networking

EOB forms the visibility fabric of the self-intelligent network, turning real-time telemetry into adaptive intelligence for secure, resilient, and autonomous telco operations.
Related Content
- F5 eBPF Observability: Real-Time Traffic Visibility Dashboard Demo
- F5 eBPF Observability: Kernel-Level Observability for Modern Applications
- eBPF: It's All About Observability
- eBPF: Revolutionizing Security and Observability in 2023
BIG-IP Next for Kubernetes CNF 2.2 what's new
Introduction

BIG-IP Next CNF v2.2.0 offers new enhancements to BIG-IP Next for Kubernetes CNFs, with a focus on analytics capabilities, traffic distribution, subscriber management, and operational improvements that address real-world challenges in high-scale deployments.

High-Speed Logging for Traffic Analysis

The Reporting feature introduces high-speed logging (HSL) capabilities that capture session- and flow-level metrics in CSV format. Key data points include subscriber identifiers, traffic volumes, transaction counts, video resolution metrics, and latency measurements, exported via Syslog (RFC 5424, RFC 3164, or legacy BIG-IP formats) over TCP or UDP. Fluent Bit handles TMM container log processing, forwarding to Fluentd for external analytics servers. Custom Resources simplify configuration of log publishers, reporting intervals, and enforcement policies, making it straightforward to integrate into existing Kubernetes workflows.

DNS Cache Inspection and Management

New utilities provide detailed visibility into DNS cache operations. The bdt_cli tool supports listing, counting, and selectively deleting cache records using filters for domain names, TTL ranges, response codes, and cache types (RRSet, message, or nameserver). Complementing this, dns-cache-stats delivers performance metrics including hit/miss ratios, query volumes, response-time distributions across intervals, and nameserver behavior patterns. These tools enable systematic cache analysis and maintenance directly from debug sidecars.

Stateless and Bidirectional DAG Traffic Distribution

Stateless DAG implements pod-based hashing to distribute traffic evenly across TMM pods without maintaining flow state. This approach embeds directly within the CNE installation, eliminating separate DAG infrastructure. Bidirectional DAG extends this with symmetric routing for client-to-server and return flows, using consistent redirect VLANs and hash tables.
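The even spread that stateless and bidirectional DAG rely on can be sketched with a toy pod-based hash. This is purely illustrative of the concept, not the actual CNE hashing implementation.

```python
import hashlib

def tmm_pod_for_flow(five_tuple, num_tmm_pods):
    """Pick a TMM pod index by hashing a flow's 5-tuple.

    Sorting the endpoints first makes the function symmetric, so the
    client-to-server and return directions of a flow select the same pod.
    """
    src, sport, dst, dport, proto = five_tuple
    a, b = sorted([(src, sport), (dst, dport)])
    key = f"{a}-{b}-{proto}".encode()
    # Stable hash (unlike Python's randomized hash()), reduced to a pod index.
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % num_tmm_pods

fwd = ("10.0.0.1", 40000, "192.0.2.10", 443, "tcp")
rev = ("192.0.2.10", 443, "10.0.0.1", 40000, "tcp")
assert tmm_pod_for_flow(fwd, 4) == tmm_pod_for_flow(rev, 4)  # symmetric
```

Sorting the endpoints before hashing is what mirrors the bidirectional requirement that forward and return traffic of one flow land on the same TMM pod, without any per-flow state being stored.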
DAG deployments must align TMM pod counts with self-IP configurations on pod_hash-enabled VLANs to ensure balanced distribution.

Dynamic GeoDB Updates for Edge Firewall Policies

Edge Firewall geolocation policies now support dynamic GeoDB updates, replacing the static country/region lists embedded in container images. The Controller and PCCD components automatically incorporate new locations and handle deprecated entries with appropriate logging. Firewall Policy CRs can reference newly available geos immediately, enabling responsive policy adjustments without container restarts or rebuilds. This keeps policies current in environments requiring frequent threat-intelligence updates.

Subscriber Creation and CGNAT Logging

RADIUS-triggered subscriber creation integrates with distributed session storage (DSSM) for real-time synchronization across TMM pods. Subscriber records capture identifiers like IMSI, MSISDN, or NAI, enabling automated session lifecycle management. CGNAT logging enhancements include the Subscriber ID in translation events, providing clear IP-to-subscriber mapping. This facilitates correlating network activity with individual users, supporting troubleshooting, auditing, and regulatory reporting requirements.

Kubernetes Secrets Integration for Sensitive Configuration

Custom Resources now reference sensitive data through Kubernetes' native Secrets using secretRef fields (name, namespace, key). The cne-controller fetches secrets securely via mTLS, monitors for updates, and propagates changes to consuming components. This supports certificate rotation through cert-manager without reapplying CRs. RBAC controls ensure appropriate access while eliminating plaintext sensitive data from YAML manifests.

Dynamic Log Management and Storage Optimization

REST API endpoints and ConfigMap watching enable runtime log-level adjustments per pod without restarts. Changes propagate through pod-specific ConfigMaps monitored by the F5 logging library.
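A hypothetical sketch of such a pod-specific ConfigMap follows; the name, keys, and values here are assumptions for illustration only, since the actual format is defined by the F5 logging library.

```yaml
# Hypothetical example -- ConfigMap name and keys are assumptions, not the
# documented F5 logging library format.
apiVersion: v1
kind: ConfigMap
metadata:
  name: f5-tmm-0-logging
  namespace: cnf-gateway
data:
  log_level: "debug"
```

Editing the ConfigMap (or patching it via the REST API) is picked up by the watcher, so verbosity can be raised for one pod during troubleshooting and lowered again without a restart.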
An optional Folder Cleaner CronJob automatically removes orphaned log directories, preventing storage exhaustion in long-running deployments with heavy Fluentd usage.

Key Enhancements Overview

Several refinements improve day-to-day operations:
- CNE Controller RBAC: configurable CRD monitoring via ConfigMap eliminates cluster-wide list permissions, with a manual controller restart required for list changes.
- CGNAT/DNAT HA: F5Ingress automatically distributes VLAN configurations to standby TMM pods (excluding self-IPs) for seamless failover.
- Memory optimization: 1 GB huge-page support via the tmm.hugepages.preferredhugepagesize parameter.
- Diagnostics: QKView requests can be canceled by ID, generating partial diagnostics from the data collected so far.
- Metrics control: per-table aggregation modes (Aggregated, Semi-Aggregated, Diagnostic) with configurable export intervals via the f5-observer-operator-config ConfigMap.

Related content
- BIG-IP Next for Kubernetes CNF - latest release
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- BIG-IP Next for Kubernetes CNFs deployment walkthrough | DevCentral
- BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
- F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions

Mitigating OWASP Web Application Risk: Insecure Design using F5 XC platform
Overview

This article is the last in a series on mitigating OWASP Web Application vulnerabilities using the F5 Distributed Cloud platform (F5 XC).

Introduction to Insecure Design

In an effort to speed up the development cycle, some phases may be reduced in scope, which opens the door to many vulnerabilities. To focus on the risks that are ignored from the design phase through deployment, a new category, "Insecure Design", was added to the OWASP Web Application Top 10 2021 list. Insecure Design represents weaknesses - a lack of security controls that should be integrated into the website or application throughout the development cycle. If no security controls exist to defend against a specific attack, Insecure Design cannot be fixed by even a perfect implementation; at the same time, a secure design can still have an implementation flaw that leads to exploitable vulnerabilities. Attackers therefore have broad scope to leverage the weaknesses created by insecure design.

Scenarios that fall under insecure design vulnerabilities include:
- Credential leaks
- Authentication bypass
- Injection vulnerabilities
- Scalper bots, etc.

In this article we will see how the F5 XC platform helps to mitigate the scalper-bot scenario.

What is a Scalper Bot?

In the e-commerce industry, scalping is a practice that leads to denial of inventory. Online scalping uses bots - automated scripts that check product availability periodically (every few seconds), add items to the cart, and check out products in bulk. Genuine users therefore never get a fair chance to grab the deals or discounts offered by the website. Alternatively, attackers use scalper bots to later abandon the items added to the cart, causing losses to the business as well.
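For intuition, the kinds of signals a bot-defense layer weighs can be reduced to a toy heuristic: an automation-tool User-Agent, or an inhuman add-to-cart rate per client. This simplification is for illustration only and is not how F5 XC Bot Defense actually classifies clients.

```python
AUTOMATION_UA_MARKERS = ("python-requests", "wget", "libwww-perl", "sqlmap", "nikto")

def looks_like_scalper(user_agent, add_to_cart_per_minute, threshold=10):
    """Toy heuristic: flag a tool-like User-Agent or an inhuman cart rate."""
    ua = user_agent.lower()
    if any(marker in ua for marker in AUTOMATION_UA_MARKERS):
        return True
    return add_to_cart_per_minute > threshold

print(looks_like_scalper("Python-requests/2.26.0", 2))           # True: tool UA
print(looks_like_scalper("Mozilla/5.0 (Windows NT 10.0)", 120))  # True: rate
print(looks_like_scalper("Mozilla/5.0 (Windows NT 10.0)", 1))    # False
```

Real bot defense combines far richer signals (TLS fingerprints, behavioral biometrics, signed client telemetry), which is why simple header spoofing, as the demo script below attempts, is not enough to evade it.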
Demonstration

In this demonstration, we are using an open-source application, "Online Boutique" (refer to boutique-repo), which provides an end-to-end online shopping cart. A legitimate customer can add any product of their choice to the cart and check out the order.

Customer page:

Scalper bot with automation script:

The automation script below adds products in bulk to the cart of the e-commerce application and places the order successfully.

```python
import requests
import random

# List of User-Agents
USER_AGENTS = [
    "sqlmap/1.5.2",        # Automated SQL injection tool
    "Nikto/2.1.6",         # Nikto vulnerability scanner
    "nmap",                # Network mapper used in reconnaissance
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",  # Spoofed search engine bot
    "php",                 # PHP command line tool
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",  # Old Internet Explorer (suspiciously outdated)
    "libwww-perl/6.36",    # Perl-based automation, often found in attacks or scrapers
    "wget/1.20.3",         # Automation tool for downloading files or making requests
    "Python-requests/2.26.0",  # Python automation library
]

# Function to select a random User-Agent
def get_random_user_agent():
    return random.choice(USER_AGENTS)

# Base URL of the API
BASE_URL = "https://insecure-design.f5-hyd-xcdemo.com"

# Perform the API request to add products to the cart
def add_to_cart(product_id, quantity):
    url = f"{BASE_URL}/cart"
    headers = {
        "User-Agent": get_random_user_agent(),  # Random User-Agent
        "Content-Type": "application/x-www-form-urlencoded",
    }
    payload = {
        "product_id": product_id,
        "quantity": quantity,
    }
    # Send POST request
    response = requests.post(url, headers=headers, data=payload)
    if response.status_code == 200:
        print(f"Successfully added {quantity} to cart!")
    else:
        print(f"Failed to add to cart. Status Code: {response.status_code}, Response: {response.text}")
    return response

# Perform the API request to place an order
def place_order():
    url = f"{BASE_URL}/cart/checkout"
    headers = {
        "User-Agent": get_random_user_agent(),  # Random User-Agent
        "Content-Type": "application/x-www-form-urlencoded",
    }
    payload = {
        "email": "someone@example.com",
        "street_address": "1600 Amphitheatre Parkway",
        "zip_code": "94043",
        "city": "Mountain View",
        "state": "CA",
        "country": "United States",
        "credit_card_number": "4432801561520454",
        "credit_card_expiration_month": "1",
        "credit_card_expiration_year": "2026",
        "credit_card_cvv": "672",
    }
    # Send POST request
    response = requests.post(url, headers=headers, data=payload)
    if response.status_code == 200:
        print("Order placed successfully!")
    else:
        print(f"Failed to place order. Status Code: {response.status_code}, Response: {response.text}")
    return response

# Main function to execute the API requests
def main():
    # Add product to cart
    product_id = "OLJCESPC7Z"
    quantity = 10
    print("Adding product to cart...")
    add_to_cart_response = add_to_cart(product_id, quantity)

    # If the add_to_cart request succeeded, proceed to checkout
    if add_to_cart_response.status_code == 200:
        print("Placing order...")
        place_order()

# Run the main function
if __name__ == "__main__":
    main()
```

To mitigate this problem, F5 XC provides the ability to identify and block these bots based on the Bot Defense configuration applied to the HTTP load balancer. The procedure to configure bot defense with the mitigation action "block" on the load balancer, associating the backend application ("evershop") as the origin pool, is:
1. Create an origin pool. Refer to pool-creation for more info.
2. Create an HTTP load balancer (LB) and associate the above origin pool with it. Refer to LB-creation for more info.
3. Configure bot defense on the load balancer and add the policy with the mitigation action "block".
Click on “Save and Exit” to save the load balancer configuration.
Run the automation script against the LB domain to attempt the exploit on the application.
Manually validate product availability as a genuine user.
Monitor the logs through F5 XC: navigate to WAAP --> Apps & APIs --> Security Dashboard, select your LB, and click on the ‘Security Event’ tab.

The above screenshot gives detailed information on the blocked attack along with the mitigation action.

Conclusion: As seen in the demonstration, F5 Distributed Cloud WAAP (Web Application and API Protection) detected the scalpers with the bot defense configuration applied on the load balancer and mitigated the scalper bots’ exploits. Bot defense also provides the mitigation actions “allow” and “redirect” in addition to “block”. Please refer to the link for more info.

Reference links:
OWASP Top 10 - 2021
Overview of OWASP Web Application Top 10 2021
F5 Distributed Cloud Services
F5 Distributed Cloud Platform
Authentication Bypass
Injection vulnerabilities

Identity-centric F5 ADSP Integration Walkthrough
In this article, we explore the F5 ADSP through the identity lens, using BIG-IP APM and BIG-IP SSLO and adding BIG-IP AWAF to the service chain. The F5 ADSP addresses four core areas: deploying at scale, securing against evolving threats, delivering applications reliably, and operating day-to-day work efficiently. Each comes with its own challenges, but together they define the foundation for keeping systems fast, stable, and safe. Each architecture deployment example is designed to cover at least two of the four core areas: Deployment, Security, Delivery, and XOps.

Runtime API Security: From Visibility to Enforced Protection
This article examines how runtime visibility and enforcement address API security failures that emerge only after deployment. It shows how observing live traffic enables evidence-based risk prioritization, exposes active exploitation, and supports targeted controls such as schema enforcement and scoped protection rules. The focus is on reducing real-world attack impact by aligning specifications, behavior, and enforcement, and integrating runtime protection as a feedback mechanism within a broader API security strategy.
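The schema-alignment idea described above can be sketched in a few lines: compare each observed request against the endpoint's declared schema and turn mismatches into enforcement signals. This is a minimal, illustrative sketch only; the spec format, endpoint, and field names are assumptions, not any specific F5 API.

```python
# Minimal sketch of runtime schema enforcement (illustrative; the spec
# format and field names are assumptions, not a specific product API).
# A live request body is checked against the endpoint's declared schema,
# so violations surface as enforcement decisions instead of silent drift.

SPEC = {
    "/orders": {
        "required": {"item_id", "quantity"},
        "types": {"item_id": str, "quantity": int},
    }
}

def validate(path, body):
    """Return a list of violations for a live request body."""
    schema = SPEC.get(path)
    if schema is None:
        # Traffic to an endpoint the spec never declared: shadow API.
        return ["unknown endpoint (shadow API candidate)"]
    violations = [f"missing field: {f}" for f in schema["required"] - body.keys()]
    for field, expected in schema["types"].items():
        if field in body and not isinstance(body[field], expected):
            violations.append(f"type mismatch: {field}")
    # Fields present in traffic but absent from the spec: schema drift.
    violations += [f"undocumented field: {f}"
                   for f in body.keys() - schema["types"].keys()]
    return violations

print(validate("/orders", {"item_id": "A1", "quantity": "ten", "coupon": "X"}))
```

In a runtime-protection pipeline, a non-empty violation list would feed a scoped rule (log, flag, or block) rather than rejecting traffic outright.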
BIG-IP Next for Kubernetes CNFs - DNS walkthrough
Introduction

F5 enables advanced DNS implementations across different deployments, whether hardware, Virtual Functions, or F5 Distributed Cloud. It is also available in Kubernetes environments through the F5BigDnsApp Custom Resource Definition (CRD), allowing declarative configuration of DNS listeners, pools, monitors, and profiles directly in-cluster. Deploying DNS services like Express, Cache, and DoH within the Kubernetes cluster using BIG-IP Next for Kubernetes CNF DNS saves external traffic by resolving queries locally (reducing egress to upstream resolvers by up to 80% with caching) and enhances security through in-cluster isolation, mTLS enforcement, and protocol encryption like DoH, preventing plaintext DNS exposure beyond cluster boundaries. This article provides a walkthrough of DNS Express, DNS Cache, and DNS-over-HTTPS (DoH) on top of Red Hat OpenShift.

Prerequisites

Deploy BIG-IP Next for Kubernetes CNF following the steps in F5’s Cloud-Native Network Functions (CNFs).
Verify the nodes and CNF components are installed:

[cloud-user@ocp-provisioner f5-cne-2.1.0]$ kubectl get nodes
NAME                      STATUS   ROLES                         AGE      VERSION
master-1.ocp.f5-udf.com   Ready    control-plane,master,worker   2y221d   v1.29.8+f10c92d
master-2.ocp.f5-udf.com   Ready    control-plane,master,worker   2y221d   v1.29.8+f10c92d
master-3.ocp.f5-udf.com   Ready    control-plane,master,worker   2y221d   v1.29.8+f10c92d
worker-1.ocp.f5-udf.com   Ready    worker                        2y221d   v1.29.8+f10c92d
worker-2.ocp.f5-udf.com   Ready    worker                        2y221d   v1.29.8+f10c92d

[cloud-user@ocp-provisioner f5-cne-2.1.0]$ kubectl get pods -n cne-core
NAME                                          READY   STATUS    RESTARTS       AGE
f5-cert-manager-656b6db84f-dmv78              2/2     Running   10 (15h ago)   19d
f5-cert-manager-cainjector-5cd9454d6c-sc8q2   1/1     Running   21 (15h ago)   19d
f5-cert-manager-webhook-6d87b5797b-954v6      1/1     Running   4              19d
f5-dssm-db-0                                  3/3     Running   13 (18h ago)   15d
f5-dssm-db-1                                  3/3     Running   0              18h
f5-dssm-db-2                                  3/3     Running   4 (18h ago)    42h
f5-dssm-sentinel-0                            3/3     Running   0              14h
f5-dssm-sentinel-1                            3/3     Running   10 (18h ago)   5d8h
f5-dssm-sentinel-2                            3/3     Running   0              18h
f5-rabbit-64c984d4c6-xn2z4                    2/2     Running   8              19d
f5-spk-cwc-77d487f955-j5pp4                   2/2     Running   9              19d

[cloud-user@ocp-provisioner f5-cne-2.1.0]$ kubectl get pods -n cnf-fw-01
NAME                                   READY   STATUS    RESTARTS      AGE
f5-afm-76c7d76fff-5gdhx                2/2     Running   2             42h
f5-downloader-657b7fc749-vxm8l         2/2     Running   0             26h
f5-dwbld-d858c485b-6xfq8               2/2     Running   2             26h
f5-ipsd-79f97fdb9c-zfqxk               2/2     Running   2             26h
f5-tmm-6f799f8f49-lfhnd                5/5     Running   0             18h
f5-zxfrd-d9db549c4-6r4wz               2/2     Running   2 (18h ago)   26h
f5ingress-f5ingress-7bcc94b9c8-zhldm   5/5     Running   6             26h
otel-collector-75cd944bcc-xnwth        1/1     Running   1             42h

DNS Express Walkthrough

DNS Express configures BIG-IP to authoritatively answer queries for a zone by pulling it via AXFR/IXFR from an upstream server, with optional TSIG authentication, keeping zone data in-cluster for low-latency authoritative resolution.

Step 1: Create an F5BigDnsZone CR for zone transfer (e.g., example.com from upstream 10.1.1.12).

# cat 10-cr-dnsxzone.yaml
apiVersion: k8s.f5net.com/v1
kind: F5BigDnsZone
metadata:
  name: example.com
spec:
  dnsxAllowNotifyFrom: ["10.1.1.12"]
  dnsxServer:
    address: "10.1.1.12"
    port: 53
  dnsxEnabled: true
  dnsxNotifyAction: consume
  dnsxVerifyNotifyTsig: false

# kubectl apply -f 10-cr-dnsxzone.yaml -n cnf-fw-01

Step 2: Deploy the F5BigDnsApp CR with DNS Express enabled.

# cat 11-cr-dnsx-app-udp.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsApp
metadata:
  name: "dnsx-app-listener"
  namespace: "cnf-fw-01"
spec:
  destination:
    address: "10.1.30.100"
    port: 53
  ipProtocol: "udp"
  snat:
    type: "automap"
  dns:
    dnsExpressEnabled: true
  logProfile: "cnf-log-profile"

# kubectl apply -f 11-cr-dnsx-app-udp.yaml -n cnf-fw-01

Step 3: Validate with a query from our client pod and TMM statistics.

dig @10.1.30.100 www.example.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.30.100 www.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43865
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.example.com.              IN      A

;; ANSWER SECTION:
www.example.com.       604800  IN      A       192.168.1.11

;; AUTHORITY SECTION:
example.com.           604800  IN      NS      ns.example.com.

;; ADDITIONAL SECTION:
ns.example.com.        604800  IN      A       192.168.1.10

;; Query time: 0 msec
;; SERVER: 10.1.30.100#53(10.1.30.100) (UDP)
;; WHEN: Thu Jan 22 11:10:24 UTC 2026
;; MSG SIZE  rcvd: 93

kubectl exec -it deploy/f5-tmm -c debug -n cnf-fw-01 -- bash
/tmctl -id blade tmmdns_zone_stat name=example.com
name        dnsx_queries dnsx_responses dnsx_xfr_msgs dnsx_notifies_recv
----------- ------------ -------------- ------------- ------------------
example.com            2              2             0                  0

DNS Cache Walkthrough

DNS Cache reduces latency by storing responses non-authoritatively, referenced via a separate cache CR in the DNS profile, cutting repeated upstream queries and external bandwidth use.

Step 1: Create a DNS Cache CR (F5BigDnsCache).

# cat 13-cr-dnscache.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsCache
metadata:
  name: "cnf-dnscache"
spec:
  cacheType: resolver
  resolver:
    useIpv4: true
    useTcp: false
    useIpv6: false
    forwardZones:
      - forwardZone: "example.com"
        nameServers:
          - ipAddress: 10.1.1.12
            port: 53
      - forwardZone: "."
        nameServers:
          - ipAddress: 8.8.8.8
            port: 53

# kubectl apply -f 13-cr-dnscache.yaml -n cnf-fw-01

Step 2: Deploy the F5BigDnsApp CR with DNS Cache enabled.

# cat 11-cr-dnsx-app-udp.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsApp
metadata:
  name: "dnsx-app-listener"
  namespace: "cnf-fw-01"
spec:
  destination:
    address: "10.1.30.100"
    port: 53
  ipProtocol: "udp"
  snat:
    type: "automap"
  dns:
    dnsCache: "cnf-dnscache"
  logProfile: "cnf-log-profile"

# kubectl apply -f 11-cr-dnsx-app-udp.yaml -n cnf-fw-01

Step 3: Validate with a query from our client pod.

dig @10.1.30.100 www.example.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.30.100 www.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18302
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.example.com.              IN      A

;; ANSWER SECTION:
www.example.com.        19076  IN      A       192.168.1.11

;; Query time: 4 msec
;; SERVER: 10.1.30.100#53(10.1.30.100) (UDP)
;; WHEN: Thu Jan 22 11:04:45 UTC 2026
;; MSG SIZE  rcvd: 60

DoH Walkthrough

DoH exposes DNS over HTTPS (port 443) for encrypted queries, using BIG-IP's protocol inspection and UDP profiles, securing in-cluster DNS from eavesdropping and MITM attacks.
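Before configuring the listener, it may help to see what an encrypted DoH query carries. The sketch below builds a standard DNS query in wire format and encodes it into an RFC 8484 GET URL the way dig +https does. It is illustrative only: the VIP address matches this lab, and /dns-query is the RFC 8484 well-known path, which a given deployment may override.

```python
# Sketch of what a DoH client like `dig +https` sends: a standard DNS
# query in wire format, base64url-encoded (no padding) into the "dns"
# query parameter of an RFC 8484 GET request. Illustrative only; the
# VIP below is this lab's listener and /dns-query is the default path.
import base64
import struct

def build_query(name, qtype=1, qclass=1):
    """Encode a DNS A-record query (ID 0, RD bit set) in wire format."""
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # flags: RD only
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, qclass)

wire = build_query("www.example.com")
# RFC 8484 requires base64url without trailing '=' padding
b64 = base64.urlsafe_b64encode(wire).rstrip(b"=").decode()
print(f"GET https://10.1.20.100/dns-query?dns={b64}")
```

Everything after the TLS handshake, including this query and its answer, travels encrypted, which is what prevents the plaintext DNS exposure mentioned above.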
Step 1: Ensure the TLS secret and HTTP profiles exist.

# cat 14-tls-clientsslsettings.yaml
apiVersion: k8s.f5net.com/v1
kind: F5BigClientsslSetting
metadata:
  name: "cnf-clientssl-profile"
  namespace: "cnf-fw-01"
spec:
  enableTls13: true
  enableRenegotiation: false
  renegotiationMode: "require"

# cat 15-http-profiles.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigHttp2Setting
metadata:
  name: http2-profile
spec:
  activationModes: "alpn"
  concurrentStreamsPerConnection: 10
  connectionIdleTimeout: 300
  frameSize: 2048
  insertHeader: false
  insertHeaderName: "X-HTTP2"
  receiveWindow: 32
  writeSize: 16384
  headerTableSize: 4096
  enforceTlsRequirements: true
---
apiVersion: "k8s.f5net.com/v1"
kind: F5BigHttpSetting
metadata:
  name: http-profile
spec:
  oneConnect: false
  responseChunking: "sustain"
  lwsMaxColumn: 80

# kubectl apply -f 14-tls-clientsslsettings.yaml -n cnf-fw-01
# kubectl apply -f 15-http-profiles.yaml -n cnf-fw-01

Step 2: Create the DNSApp for the DoH service.

# cat 16-DNSApp-doh.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsApp
metadata:
  name: "cnf-dohapp"
  namespace: "cnf-fw-01"
spec:
  ipProtocol: "udp"
  dohProtocol: "udp"
  destination:
    address: "10.1.20.100"
    port: 443
  snat:
    type: "automap"
  dns:
    dnsExpressEnabled: false
    dnsCache: "cnf-dnscache"
  clientSslSettings: "clientssl-profile"
  pool:
    members:
      - address: "10.1.10.50"
    monitors:
      dns:
        enabled: true
        queryName: "www.example.com"
        queryType: "a"
        recv: "192.168.1.11"

# kubectl apply -f 16-DNSApp-doh.yaml -n cnf-fw-01

Step 3: Test from our client pod.

ubuntu@client:~$ dig @10.1.20.100 -p 443 +https +notls-ca www.google.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.20.100 -p 443 +https +notls-ca www.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4935
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.google.com.               IN      A

;; ANSWER SECTION:
www.google.com.            69  IN      A       142.251.188.103
www.google.com.            69  IN      A       142.251.188.147
www.google.com.            69  IN      A       142.251.188.106
www.google.com.            69  IN      A       142.251.188.105
www.google.com.            69  IN      A       142.251.188.99
www.google.com.            69  IN      A       142.251.188.104

;; Query time: 8 msec
;; SERVER: 10.1.20.100#443(10.1.20.100) (HTTPS)
;; WHEN: Thu Jan 22 11:27:05 UTC 2026
;; MSG SIZE  rcvd: 139

ubuntu@client:~$ dig @10.1.20.100 -p 443 +https +notls-ca www.example.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.20.100 -p 443 +https +notls-ca www.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20401
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.example.com.              IN      A

;; ANSWER SECTION:
www.example.com.        17723  IN      A       192.168.1.11

;; Query time: 4 msec
;; SERVER: 10.1.20.100#443(10.1.20.100) (HTTPS)
;; WHEN: Thu Jan 22 11:27:18 UTC 2026
;; MSG SIZE  rcvd: 60

Conclusion

BIG-IP Next DNS CRs transform Kubernetes into a production-grade DNS platform, delivering authoritative resolution, caching efficiency, and encrypted DoH, all while optimizing external traffic costs and hardening security boundaries for cloud-native deployments.

Related Content
BIG-IP Next for Kubernetes CNF guide
BIG-IP Next Cloud-Native Network Functions (CNFs)
BIG-IP Next for Kubernetes CNF deployment walkthrough
BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
Modern Applications-Demystifying Ingress solutions flavors | DevCentral

BIG-IP Next for Kubernetes CNFs deployment walkthrough
Introduction

F5 Application Delivery and Security Platform covers different deployment scenarios to help deliver and secure any application anywhere. The BIG-IP Next for Kubernetes CNF architecture aligns with cloud-native principles by enabling horizontal scaling, ensuring that applications can expand seamlessly without compromising performance. It preserves the deterministic reliability essential for telecom environments, balancing scalability with the stringent demands of real-time processing. F5's CNF implementation enables dynamic traffic steering across CNF pods and optimizes resource utilization through intelligent workload distribution. The architecture supports the horizontal scaling patterns typical of cloud-native applications while maintaining the deterministic performance characteristics required for telecommunications workloads.

Lab environment

The lab components are:
A Kubernetes cluster with a control plane and two worker nodes.
TMM is deployed on worker node 1.
The client connects to the subscriber VLAN via worker node 2.
Grafana is reachable through the internal network.

The lab walkthrough below assumes:
A working Kubernetes cluster with Red Hat OpenShift.
A local repo is configured.
Storage is configured, whether local or NFS.

Below is an overview of the installation steps flow.
Creating our CNF namespaces; in our lab, we are using cne-core and cnf-fw-01:

lab$ oc create namespace cne-core
lab$ oc create namespace cnf-fw-01
lab$ oc get ns
NAME        STATUS   AGE
cne-core    Active   60d
cnf-fw-01   Active   59d

Installing the helm charts with the required values per environment:

lab$ helm install f5-cert-manager oci://repo.f5.com/charts/f5-cert-manager --version 0.23.35-0.0.10 -f cert-manager-values.yaml -n cne-core
lab$ helm install f5-fluentd oci://repo.f5.com/charts/f5-toda-fluentd --version 1.31.30-0.0.7 -f fluentd-values.yaml -n cne-core --wait
lab$ helm install f5-dssm oci://repo.f5.com/charts/f5-dssm --version 1.27.1-0.0.20 -f dssm-values.yaml -n cne-core --wait
lab$ helm install cnf-rabbit oci://repo.f5.com/charts/rabbitmq --version 0.6.1-0.0.13 -f rabbitmq-values.yaml -n cne-core --wait
lab$ helm install cnf-cwc oci://repo.f5.com/charts/cwc --version 0.43.1-0.0.15 -f cwc-values.yaml -n cne-core --wait
lab$ helm install f5ingress oci://repo.f5.com/charts/f5ingress --version v13.7.1-0.3.22 -f values-ingress.yaml -n cnf-fw-01 --wait

lab$ oc get pods -n cne-core
NAME                                          READY   STATUS    RESTARTS   AGE
f5-cert-manager-656b6db84f-t9dhn              2/2     Running   0          3h46m
f5-cert-manager-cainjector-5cd9454d6c-nz46d   1/1     Running   0          3h46m
f5-cert-manager-webhook-6d87b5797b-pmlwv      1/1     Running   0          3h46m
f5-dssm-db-0                                  3/3     Running   0          3h43m
f5-dssm-db-1                                  3/3     Running   0          3h42m
f5-dssm-db-2                                  3/3     Running   0          3h41m
f5-dssm-sentinel-0                            3/3     Running   0          3h43m
f5-dssm-sentinel-1                            3/3     Running   0          3h42m
f5-dssm-sentinel-2                            3/3     Running   0          3h41m
f5-rabbit-64c984d4c6-rjd2d                    2/2     Running   0          3h40m
f5-spk-cwc-77d487f955-7vqtl                   2/2     Running   0          3h39m
f5-toda-fluentd-558cd5b9bd-9cr6w              1/1     Running   0          3h43m

lab$ oc get pods -n cnf-fw-01
NAME                                   READY   STATUS    RESTARTS   AGE
f5-afm-76c7d76fff-8pj4c                2/2     Running   0          3h37m
f5-downloader-657b7fc749-nzfgt         2/2     Running   0          3h37m
f5-dwbld-d858c485b-c6bmf               2/2     Running   0          3h37m
f5-ipsd-79f97fdb9c-dbsqb               2/2     Running   0          3h37m
f5-tmm-7565b4c798-hvsfd                5/5     Running   0          3h37m
f5-zxfrd-d9db549c4-qqhtr               2/2     Running   0          3h37m
f5ingress-f5ingress-7bcc94b9c8-jcbfg   5/5     Running   0          3h37m
otel-collector-75cd944bcc-fsvz4        1/1     Running   0          3h37m

Deploy the subscriber and external (data) VLANs:

lab$ cat 01-cr-vlan.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "subscriber-vlan"
  namespace: "cnf-fw-01"
spec:
  name: subscriber
  interfaces:
    - "1.2"
  selfip_v4s:
    - 10.1.20.100
    - 10.1.20.101
    - 10.1.20.102
  prefixlen_v4: 24
  mtu: 1500
  cmp_hash: SRC_ADDR
---
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "data-vlan"
  namespace: "cnf-fw-01"
spec:
  name: data
  interfaces:
    - "1.1"
  selfip_v4s:
    - 10.1.30.100
    - 10.1.30.101
    - 10.1.30.102
  prefixlen_v4: 24
  mtu: 1500
  cmp_hash: SRC_ADDR

Now we start testing from the client side and observe our monitoring system; OTEL is already configured to send data to Grafana.

Next, let's have a look at our firewall policy CR:

$ cat 06-cr-fw-policy-crd.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: "cnfcop-n6policy"
spec:
  rule:
    - name: deny-53-hsl-log
      ipProtocol: udp
      source:
        addresses:
          - "0.0.0.0/0"
        ports: []
        zones:
          - "subscriber"
      destination:
        addresses:
          - "0.0.0.0/0"
        ports:
          - "53"
        zones:
          - "data"
      action: "drop"
      logging: true
    - name: permit-any-hsl-log
      ipProtocol: any
      source:
        addresses:
          - "0.0.0.0/0"
        ports: []
        zones:
          - "subscriber"
      destination:
        addresses:
          - "0.0.0.0/0"
        ports: []
        zones:
          - "data"
      action: "accept"
      logging: true

Let's observe the logs from our monitoring system.

Conclusion

In conclusion, BIG-IP Next for Kubernetes CNFs optimizes edge environments and AI workloads by providing a consolidated data plane with the capabilities of the market-leading BIG-IP application delivery and security platform across different deployment models. In this article, we explored a CNF implementation centered on the CNF edge firewall; following articles will cover additional CRs (IPS, DNS, etc.).
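The F5BigFwPolicy shown in this walkthrough is an ordered rule list: rules are evaluated top-down and the first match decides the action. A simplified model of that evaluation for the two rules above, not the actual AFM engine, can be sketched as:

```python
# Simplified model of the ordered F5BigFwPolicy rules above: checked
# top-down, first match wins (illustrative sketch, not the AFM engine).
RULES = [
    {"name": "deny-53-hsl-log", "proto": "udp", "src_zone": "subscriber",
     "dst_zone": "data", "dst_ports": {53}, "action": "drop"},
    {"name": "permit-any-hsl-log", "proto": "any", "src_zone": "subscriber",
     "dst_zone": "data", "dst_ports": None, "action": "accept"},  # None = any port
]

def evaluate(proto, src_zone, dst_zone, dst_port):
    """Return (matching rule name, action) for a flow, first match wins."""
    for rule in RULES:
        if rule["proto"] not in ("any", proto):
            continue
        if (rule["src_zone"], rule["dst_zone"]) != (src_zone, dst_zone):
            continue
        if rule["dst_ports"] is not None and dst_port not in rule["dst_ports"]:
            continue
        return rule["name"], rule["action"]
    return None, "drop"  # assumed implicit default for this sketch

# DNS from the subscriber zone is dropped; other subscriber->data traffic passes.
print(evaluate("udp", "subscriber", "data", 53))
print(evaluate("tcp", "subscriber", "data", 443))
```

Because the deny rule precedes the permit-any rule, reversing their order would let DNS through, which is why rule ordering matters when authoring the CR.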
Related Content
BIG-IP Next Cloud-Native Network Functions (CNFs)
CNF Home
F5 BIG-IP Next CGNAT CNF - Redhat Directory
Modern Applications-Demystifying Ingress solutions flavors
BIG-IP Next for Kubernetes Nvidia DPU deployment walkthrough
From virtual to cloud-native, infrastructure evolution | DevCentral