Post-Quantum Cryptography: Building Resilience Against Tomorrow’s Threats
Modern cryptographic systems such as RSA, ECC (Elliptic Curve Cryptography), and DH (Diffie-Hellman) rely on the mathematical difficulty of certain problems, such as factoring large integers or computing discrete logarithms. With the rise of quantum computing, however, algorithms like Shor's and Grover's threaten to break these systems, rendering them insecure. Quantum computers are not yet at the scale required to break these encryption methods in practice, but their rapid development has pushed the cryptographic community to act now. This is where Post-Quantum Cryptography (PQC) comes in: a new wave of algorithms designed to remain secure against both classical and quantum attacks.

Figure 1: Cryptography evolution

Why PQC Matters

Quantum computers exploit quantum-mechanical principles such as superposition and entanglement to perform calculations that would take classical computers millennia. This threatens:

Public-key cryptography: Algorithms like RSA rely on factoring large primes or solving discrete logarithms, problems quantum computers could crack using Shor's algorithm.
Long-term data security: Attackers may already be harvesting encrypted data to decrypt later ("harvest now, decrypt later") once quantum computers mature.

How PQC Works

The National Institute of Standards and Technology (NIST) has led a multi-year standardization effort. Here are the main algorithm families and notable examples.

Lattice-Based Cryptography

Lattice problems are believed to be hard even for quantum computers, and most of the leading candidates come from this category. Notable schemes:

CRYSTALS-Kyber (Key Encapsulation Mechanism)
CRYSTALS-Dilithium (Digital Signatures)

These schemes use complex geometric structures (lattices) in which finding the shortest vector is computationally hard, even for quantum computers. For example, ML-KEM (formerly Kyber) establishes encryption keys using lattices but requires more data transfer (2,272 bytes vs. 64 bytes for elliptic curves).

The figure below illustrates how lattice-based cryptography works. Imagine solving a maze with two maps, one public (twisted paths) and one private (shortest route). Only the private map holder can navigate efficiently.

Code-Based Cryptography

These schemes are based on the difficulty of decoding random linear codes and rely on error-correcting codes. The best-known example is Classic McEliece, considered resistant to quantum attacks and studied for decades. Pros: very well-studied and conservative. Cons: very large public key sizes. Classic McEliece hides messages by adding intentional errors that only the recipient can fix.

How it works:
Key generation: Create a parity-check matrix (public key) and a secret decoder (private key).
Encryption: Encode a message with random errors.
Decryption: Use the private key to correct the errors and recover the message.

Figure 3: Code-Based Cryptography Illustration

Multivariate Cryptography

These schemes are based on solving systems of multivariate quadratic equations over finite fields, a problem believed to be quantum-resistant.

Hash-Based Cryptography

These schemes use hash functions to construct secure digital signatures. SPHINCS+ is a stateless, hash-based scheme well suited for long-term digital signature security.

Challenges and Adoption

Integration: PQC must work within existing TLS, VPN, and hardware stacks.
Key sizes: PQC algorithms often require larger keys. For example, Classic McEliece public keys can exceed 1 MB.
Hybrid schemes: Combining classical and post-quantum methods enables gradual adoption.
Performance: Lattice-based methods are fast but increase bandwidth usage.
Standardization: NIST has finalized three PQC standards (e.g., ML-KEM) and is testing others.

Organizations must start migrating now, as transitions can take decades.
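Hash-based signatures are the easiest family above to build intuition for, because their security rests on nothing but the hash function. The toy Python below implements a Lamport one-time signature; it is a teaching sketch, not SPHINCS+ (which layers hash trees and few-time signatures on top of similar primitives to get a practical many-time scheme):

```python
import hashlib
import os

def keygen():
    # 256 pairs of random 32-byte secrets; the public key is their hashes
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    # Each bit of the message digest selects which secret of a pair to reveal
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    return [sk[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    return all(hashlib.sha256(secret).digest() == pk[i][bit]
               for i, (secret, bit) in enumerate(zip(sig, msg_bits(msg))))

sk, pk = keygen()
sig = sign(b"hello", sk)
print(verify(b"hello", sig, pk))     # True
print(verify(b"tampered", sig, pk))  # False
```

Each keypair must sign exactly one message: revealing secrets for two different digests lets an attacker mix and match a forgery, which is why practical schemes wrap one-time constructions in larger structures.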
Adopting PQC with BIG-IP

As of F5 BIG-IP 17.5, BIG-IP supports the widely implemented ML-KEM cipher group for both client-side and server-side TLS negotiations. Other cipher groups and capabilities will become available in subsequent releases.

Cipher walkthrough

Let's take the cipher supported in v17.5.0, the hybrid X25519_Kyber768, as an example and walk through it.

X25519: a classical elliptic-curve Diffie-Hellman (ECDH) algorithm
Kyber768: a post-quantum Key Encapsulation Mechanism (KEM)

The goal is to securely establish a shared secret key between the two parties using both classical and quantum-resistant cryptography.

Key Exchange

X25519 exchange: Alice and Bob exchange X25519 public keys. Each computes a shared secret using their own private key and the other's public key.
Kyber768 exchange: Alice uses Bob's Kyber768 public key to encapsulate a secret, producing a ciphertext and a shared secret. Bob uses his Kyber768 private key to decapsulate the ciphertext and recover the same shared secret.

Both parties now have:
A classical shared secret
A post-quantum shared secret

They combine them using a KDF (Key Derivation Function).

Why the hybrid approach is being followed:

If quantum computers are not practical yet, X25519 provides strong classical security.
If a quantum computer arrives, Kyber768 keeps communications secure.
It helps organizations migrate gradually from classical to post-quantum systems.

Implementation guide

F5 introduced new enhancements in 17.5.1 (see New Features in BIG-IP Version 17.5.1). BIG-IP now supports the X25519MLKEM768 hybrid key exchange in TLS 1.3 on both the client side and the server side. This mechanism combines the widely used X25519 elliptic-curve key exchange with MLKEM768, providing enhanced protection by ensuring the confidentiality of communications even against future quantum threats.
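The combine step in the walkthrough above can be sketched in Python. This is an illustrative HKDF construction over the concatenated secrets, not the actual TLS 1.3 key schedule that BIG-IP implements, and the salt and info labels are invented for the example:

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def combine_shared_secrets(classical: bytes, post_quantum: bytes) -> bytes:
    # Concatenating both secrets means the output stays secret as long as
    # EITHER input does: break X25519 or Kyber768, but not both, and the
    # derived session key is still safe.
    prk = hkdf_extract(salt=b"hybrid-demo-salt", ikm=classical + post_quantum)
    return hkdf_expand(prk, info=b"session key", length=32)

# Stand-ins for the real X25519 and Kyber768 shared secrets
classical_secret = os.urandom(32)
pq_secret = os.urandom(32)
session_key = combine_shared_secrets(classical_secret, pq_secret)
print(len(session_key))  # 32
```

This "secure if either input is secure" property is the whole argument for hybrid groups during the migration period.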
This enhancement strengthens the application’s cryptographic flexibility and positions it for secure communication in both classical and post-quantum environments. The change does not affect existing configurations but provides an additional option for enhanced security where supported.

Implementation KB provided by F5: K000149577: Enabling Post-Quantum Cryptography in F5 BIG-IP TMOS

NGINX Support for PQC

We are pleased to announce support for Post-Quantum Cryptography (PQC) starting with NGINX Plus R33. NGINX provides PQC support using the Open Quantum Safe provider library for OpenSSL 3.x (oqs-provider), available from the Open Quantum Safe (OQS) project. The oqs-provider library adds support for all post-quantum algorithms supported by the OQS project to network protocols like TLS in OpenSSL 3-reliant applications. All ciphers/algorithms provided by oqs-provider are supported by NGINX.

To configure NGINX with PQC support using oqs-provider, follow these steps:

Install the necessary dependencies:

```shell
sudo apt update
sudo apt install -y build-essential git cmake ninja-build libssl-dev pkg-config
```

Download and install liboqs:

```shell
git clone --branch main https://github.com/open-quantum-safe/liboqs.git
cd liboqs
mkdir build && cd build
cmake -GNinja -DCMAKE_INSTALL_PREFIX=/usr/local -DOQS_DIST_BUILD=ON ..
ninja
sudo ninja install
```

Download and install oqs-provider:

```shell
git clone --branch main https://github.com/open-quantum-safe/oqs-provider.git
cd oqs-provider
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DOPENSSL_ROOT_DIR=/usr/local/ssl ..
make -j$(nproc)
sudo make install
```

Download and install OpenSSL with oqs-provider support:

```shell
git clone https://github.com/openssl/openssl.git
cd openssl
./Configure --prefix=/usr/local/ssl --openssldir=/usr/local/ssl linux-x86_64
make -j$(nproc)
sudo make install_sw
```

Configure OpenSSL for oqs-provider in /usr/local/ssl/openssl.cnf:

```
openssl_conf = openssl_init

[openssl_init]
providers = provider_sect

[provider_sect]
default = default_sect
oqsprovider = oqsprovider_sect

[default_sect]
activate = 1

[oqsprovider_sect]
activate = 1
```

Generate post-quantum certificates:

```shell
export OPENSSL_CONF=/usr/local/ssl/openssl.cnf

# Generate CA key and certificate
/usr/local/ssl/bin/openssl req -x509 -new -newkey dilithium3 -keyout ca.key -out ca.crt -nodes -subj "/CN=Post-Quantum CA" -days 365

# Generate server key and certificate signing request (CSR)
/usr/local/ssl/bin/openssl req -new -newkey dilithium3 -keyout server.key -out server.csr -nodes -subj "/CN=your.domain.com"

# Sign the server certificate with the CA
/usr/local/ssl/bin/openssl x509 -req -in server.csr -out server.crt -CA ca.crt -CAkey ca.key -CAcreateserial -days 365
```

Download and install NGINX Plus.

Configure NGINX to use the post-quantum certificates:

```
server {
    listen 0.0.0.0:443 ssl;

    ssl_certificate /path/to/server.crt;
    ssl_certificate_key /path/to/server.key;
    ssl_protocols TLSv1.3;
    ssl_ecdh_curve kyber768;

    location / {
        return 200 "$ssl_curve $ssl_curves";
    }
}
```

Conclusion

By adopting PQC, we can future-proof encryption against quantum threats while balancing security and practicality. While technical hurdles remain, collaborative efforts among researchers, engineers, and policymakers are accelerating the transition.

Related Content
K000149577: Enabling Post-Quantum Cryptography in F5 BIG-IP TMOS
F5 NGINX Plus R33 Release Now Available | DevCentral
New Features in BIG-IP Version 17.5.1
The State of Post-Quantum Cryptography (PQC) on the Web
Beyond Five Nines: SRE Practices for BIG-IP Cloud-Native Network Functions
Introduction

Five nines (99.999%) availability gets the headline. But any SRE who has been on-call for a telecom user-plane incident knows that uptime percentages don’t capture the full picture. A NAT pool exhausted at 99.98% availability can still affect millions of subscribers. A DNS cache-miss storm at 99.99% uptime can still degrade application performance across an entire region.

This article explores how SRE principles (specifically SLIs, SLOs, error budgets, and toil reduction) apply to cloud-native network functions (CNFs) deployed with F5 BIG-IP Cloud-Native Edition. The goal is practical: give SRE teams and platform engineers the vocabulary and patterns to instrument, operate, and evolve these functions the same way they operate any other Kubernetes workload.

Why subscriber-centric SLIs beat infrastructure metrics

Traditional network operations relies on infrastructure health metrics: CPU utilisation, interface counters, and process uptime. These metrics are necessary, but they answer the wrong question. They tell you the system’s perspective, not the subscriber’s. SRE flips this. An SLI is a direct quantitative measurement of user-visible service behavior. For a CNF in the 5G user plane, subscriber-centric SLIs look like:

GTP-U flow forwarding success rate (not just firewall process uptime)
NAT session establishment latency at P95 (not just CPU idle)
DNS query response rate and cache hit ratio (not just resolver process health)
Packet drop rate at the N6/Gi-LAN boundary (not just interface RX errors)

BIG-IP CNE exposes these metrics natively through Prometheus-compatible endpoints on each CNF pod, meaning your existing Kubernetes observability stack, whether that is Prometheus + Grafana, Datadog, or a vendor-managed observability platform, can consume them without custom instrumentation.

A useful consultant's test: if your monitoring today alerts on CNF pod restarts before it alerts on subscriber-impacting packet drops, your SLI hierarchy is inverted. Fix the SLI definition first, then tune your alerting.

SLIs and SLOs: the measurement-to-promise pipeline

The distinction between SLIs and SLOs is operational, not semantic. An SLI is what you observe; an SLO is what you commit to. Together, they create an error budget: your explicit allowance for controlled unreliability. Table 1 summarizes the relation between SLI and SLO and why it matters to SREs.

Table 1: SLI vs SLO — what each term means operationally

| Aspect | SLI (Measurement) | SLO (Target) | Why it matters to SREs |
|---|---|---|---|
| Purpose | Reports reality | Sets reliability goal | Drives team alignment |
| Example | "99.92% queries succeeded" | "≥99.99% over 30d" | Error budget = 0.01% |
| Burn rate | Changes minute-by-minute | Calculated over window | Feeds alerting cadence |
| Action | Feeds dashboards/alerts | Gates releases | Halts or accelerates rollouts |

The gap between your SLI (what you measure) and your SLO (what you target) is the error budget. For a DNS CNF with an SLO of 99.99% of queries answered within 20ms over 30 days, the error budget is 4.32 minutes of allowable degradation per month. That budget governs rollout velocity: when the budget is healthy, teams can ship faster; when it burns through, all changes halt until the system stabilizes.

Example: Set your SLO as "99.99% of GTP-U flows processed within 2ms." Your error budget is 0.01% of flows, or roughly 52 minutes of allowable impact per year. A CNF upgrade that introduces a 0.005% flow drop during rollout consumes half your annual budget. That's the signal your CI/CD pipeline should be gating on, not deployment success.

Golden signals mapped to BIG-IP CNE metrics

The SRE golden signals (latency, traffic, errors, saturation) map directly to BIG-IP CNE telemetry. Table 2 gives practical SLI examples, SLO targets, and the operator actions each signal should trigger.
Table 2: Golden signals as operational SLIs for BIG-IP CNE

| Golden Signal | BIG-IP CNE SLI Example | SLO Target | Operator Action |
|---|---|---|---|
| Latency | P95 GTP-U at Edge Firewall CNF | ≤ 2ms for 99.99% of flows | Scale pods / tune policy |
| Traffic | Packets/sec per CNF pod | Autoscale to 4M+ pps | HPA trigger or pre-scale |
| Errors | NAT session failure rate | < 0.01% over 30 days | Halt rollout, root-cause |
| Saturation | Port/CPU threshold breach | Proactive alert at 80% | Drain + horizontal scale |

These SLIs flow into the same Prometheus/Grafana stack your Kubernetes platform team already operates. A single dashboard can surface both pod-level Kubernetes metrics and CNF user-plane metrics, creating a shared view of reliability that eliminates the classic "my side is green" response to incidents.

Observability implementation: metrics, logs, and traces

BIG-IP CNE exports telemetry natively into Kubernetes observability pipelines. Here is what that looks like in practice for each pillar of observability:

| Pillar | Description |
|---|---|
| Metrics | Each CNF pod exposes metrics endpoints compatible with Prometheus scraping. Key metric families include flow_processing_latency_seconds (histogram), nat_session_failures_total (counter), dns_cache_hit_ratio (gauge), and pod_packet_drop_total (counter). These feed directly into your SLI calculations. |
| Logs | CNF logs emit structured JSON to stdout, consumable by Fluentd, Fluent Bit, or any log aggregator in your cluster. Event chains like NAT pool exhaustion produce correlated log sequences that enable root-cause analysis without SSH access to the CNF pod. |
| Traces | For distributed request tracing (for example, following a DNS query from the UE through the DNS CNF to upstream resolvers), BIG-IP CNE supports OpenTelemetry trace propagation. This is particularly useful when debugging latency spikes in multi-CNF traffic chains where the delay source is ambiguous. |
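With those metrics flowing, the error-budget arithmetic used throughout this article is simple enough to encode directly in dashboards or pipeline gates. A minimal Python sketch (the function names are illustrative, not a standard API):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowable degradation for an SLO over a rolling window."""
    return (1.0 - slo) * window_days * 24 * 60

def burn_rate(observed_failure_ratio: float, slo: float) -> float:
    """Budget burn multiplier: 1.0 means failing at exactly the budgeted rate."""
    return observed_failure_ratio / (1.0 - slo)

# 99.99% over 30 days -> about 4.3 minutes of monthly budget
print(round(error_budget_minutes(0.9999), 2))   # 4.32
# Observing 0.05% failures against a 99.99% SLO -> burning 5x too fast
print(round(burn_rate(0.0005, 0.9999), 2))      # 5.0
```

A burn rate above 1.0 means the budget will be exhausted before the window ends; the thresholds in the next section (1x, 1-5x, exhausted) are exactly this multiplier.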
Config note: To wire CNF metrics into an existing Prometheus stack, annotate the CNF pod spec with prometheus.io/scrape: "true" and prometheus.io/port matching the CNF metrics port. No custom instrumentation is required.

Error budgets as a deployment gate

SRE uses error budgets to make deployment velocity a function of reliability, not a function of the change calendar. Here is how this applies to CNF operations with BIG-IP CNE:

Healthy budget (burn rate < 1x): Teams can accelerate CNF feature delivery. New CRD configurations, Helm chart upgrades, and policy changes proceed with normal review cycles.
Elevated burn (burn rate 1-5x): All non-emergency CNF changes require additional review. Automated rollback thresholds tighten.
Budget exhausted: CNF changes halt. The SRE team shifts 100% of its focus to reliability work until the budget recovers. This is a policy decision, not a technical one.

In practice, BIG-IP CNE supports this through Kubernetes-native mechanisms: Helm-managed upgrades can be gated by pre-upgrade hooks that query current SLI state; CRD-based configuration changes can be rolled out with canary patterns using standard Kubernetes deployment strategies; and HPA (Horizontal Pod Autoscaler) rules can be tied directly to CNF-emitted metrics rather than generic CPU thresholds.

Toil reduction: from runbooks to controllers

SRE defines toil as manual, repetitive, automatable operational work that scales with traffic volume but produces no enduring value. In telecom CNF operations, toil accumulates fast:

Manual NAT pool expansion during traffic peaks
SSH-based policy pushes for firewall rule updates
Ticket-driven DNS configuration changes
Manual health checks before and after maintenance windows

BIG-IP CNE addresses this through Kubernetes-native control loops. Configuration is declarative: CNF policies are expressed as Custom Resource Definitions (CRDs) applied via kubectl or GitOps pipelines.
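To make the declarative model concrete, here is a hedged sketch of what such a CR change might look like. The apiVersion and field names are illustrative approximations of a CNF SNAT pool resource, not a schema to copy verbatim:

```yaml
# Illustrative only: verify the apiVersion, kind, and field names against
# your CNF CRD reference before applying.
apiVersion: "k8s.f5net.com/v1"
kind: F5BigCneSnatpool
metadata:
  name: subscriber-snat
  namespace: cnf-gateway          # assumed namespace
spec:
  name: subscriber-snat
  addressList:
    - - "203.0.113.0/26"          # expanded range lands via Git review, not SSH
```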
Kubernetes controllers reconcile the actual CNF state to the desired state defined in Git, eliminating configuration drift and manual intervention.

Example: Instead of a runbook step that says "SSH to the CGNAT CNF and add 1000 ports to poolX," your GitOps pipeline applies a CRD update that the CNF controller reconciles automatically. The audit trail is a Git commit, not a change ticket.

SRE teams typically target a 50/50 split between operational work and engineering work. CNF operations that rely on manual runbooks push this ratio toward 70-80% operations. Declarative CNF management via CRDs and Helm shifts it back, freeing SRE capacity for SLO definition, observability improvement, and automation engineering.

Dissolving the platform/network operations boundary

Figure 1: SRE bridges the Kubernetes platform team and the telecom network operations team through shared SLIs and a unified observability stack.

The most persistent operational problem in cloud-native telecom is not technical; it is organizational. Kubernetes platform teams and telecom network operations teams measure different things, escalate through different processes, and use different tooling. When a GTP-U latency spike occurs, Kubernetes teams check pod health and cluster metrics; telecom teams check interface counters and policy logs. Neither has the full picture.

SRE resolves this by requiring both teams to operate against the same SLIs. When CNF and cluster metrics flow into the same observability stack:

A single SLI can span pods, nodes, and network functions
Rollouts, autoscaling, and maintenance windows are gated by shared error budgets rather than siloed change calendars
Kubernetes engineers declare CNF configurations as code; telecom teams define SLOs that consume those functions as building blocks

The result is that when an SLI burns through an error budget (for example, a 0.02% GTP-U drop rate), both teams respond to the same signal.
Kubernetes teams scale pods; telecom teams tune policies. No finger-pointing: shared accountability for the packet-level truth that subscribers experience.

5G N6/Gi-LAN consolidation: a concrete SRE use case

Figure 2: BIG-IP CNE consolidating SGi-LAN/N6 functions (Edge Firewall, CGNAT, DNS) as Kubernetes-native CNFs alongside the 5G core.

A common deployment pattern for BIG-IP CNE is N6/Gi-LAN consolidation, where edge firewalling, CGNAT, DNS, and DDoS protection are deployed as CNFs alongside the 5G core rather than as discrete physical or virtual appliances. From an SRE perspective, this architecture enables composite SLOs that span multiple CNFs in a single traffic chain:

Edge Firewall CNF: SLI = packet drop rate at the N6 boundary. SLO = <0.001% drops over 30 days.
CGNAT CNF: SLI = NAT session establishment success rate. SLO = 99.99% of sessions established within 5ms.
DNS CNF: SLI = query response latency at P95. SLO = P95 < 20ms with >80% cache hit ratio.

Composite SLOs then drive autoscaling and routing decisions based on real service behavior rather than static capacity plans. When the DNS cache hit ratio drops below threshold, the autoscaler adds DNS CNF replicas, driven by the CNF-emitted metric rather than a manual capacity review.

Conclusion: Path to AI-native 6G

The 6G architecture direction (disaggregated, software-defined network functions dynamically placed across distributed edge locations) requires SRE disciplines at the foundation, not bolted on later. Networks that must adapt in near-real time cannot be operated by humans with runbooks.

BIG-IP CNE was designed with this trajectory in mind. The same Kubernetes-native architecture that enables SRE practices for 5G today (declarative configuration, horizontal scaling, native observability) is the foundation for AI-driven traffic steering, dynamic policy enforcement, and intent-based networking in 6G environments.
For platform teams making architecture decisions now: investing in SLO definition and observability instrumentation for current CNF deployments is not just operational hygiene. It is building the data infrastructure that AI-native operations will require.

Key takeaways:

Define SLIs at the subscriber boundary, not the infrastructure boundary
Use error budgets to gate CNF rollout velocity; make it a CI/CD policy, not a manual decision
Consume CNF Prometheus metrics in your existing Kubernetes observability stack; no separate tooling is required
Declarative, CRD-based CNF management via GitOps is the primary toil-reduction lever
Shared SLIs between Kubernetes platform and telecom operations teams eliminate the organizational boundary that causes most major incidents

Related content
BIG-IP Next for Kubernetes CNFs - DNS walkthrough
BIG-IP Next for Kubernetes CNFs deployment walkthrough
From virtual to cloud-native, infrastructure evolution
Visibility for Modern Telco and Cloud‑Native Networks
BIG-IP Next Cloud-Native Network Functions (CNFs)

Realtime DoS mitigation with VELOS BX520 Blade
Introduction

F5 VELOS is a rearchitected, next-generation hardware platform that scales application delivery performance and automates application services to address many of today’s most critical business challenges. F5 VELOS is a key component of the F5 Application Delivery and Security Platform (ADSP).

Demo Video

DoS attacks are a fact of life

Detect and mitigate large-scale, volumetric network and application-targeted attacks in real time to defend your business and your customers against multi-vector denial-of-service (DoS) activity attempting to disrupt your operations.

Direct DoS impacts include:
Loss of revenue
Degradation of infrastructure

Indirect costs often include:
Negative customer experience
Damage to brand image

DoS attacks do not need to be massive to be effective.

F5 VELOS: Key Specifications

Up to 6 Tbps total Layer 4-7 throughput
6.4 billion concurrent connections
Higher resource density per rack unit than any previous BIG-IP
Flexible support for multi-tenancy and blade groupings
API-first architecture / fully automatable
Future-proof architecture built on Kubernetes
Multi-terabit security: firewall and real-time DoS

Real-time DoS Mitigation with VELOS

Challenges:
Massive attack volumes are not required to negatively impact "goodput".
Attacks are often short in duration, to evade out-of-band/sampling mitigation.
BIG-IP inline DoS protection can react quickly and mitigate in real time.
Simulated DoS Attack

Baseline traffic for the demo:
600 Gbps of throughput
1.5 million connections per second (CPS)
120 million concurrent flows

Example dashboard without a DoS attack.

Generated attack:
Flood a single IP from many sources
10 Gb/s with 10 million CPS
DoS attack launched (<2% increase in overall traffic)

Impact:
High CPU consumption from 10M+ new CPS
High memory utilization, with concurrent flows increasing quickly

Result:
Open connections much higher than baseline
New connections increasing rapidly
Higher CPU
Application transaction failures

Mitigation applied: enable network flood mitigation by enabling the flood vector in BIG-IP AFM Device DoS. Observe "goodput" returning to normal in seconds as BIG-IP mitigates the attack.

Conclusion

Distributed denial-of-service (DDoS) attacks continue to see enormous growth across every metric, including the number and frequency of attacks, average peak bandwidth, and overall complexity. As organizations face the relentless growth and occurrence of these attacks, F5 provides multiple options for complete, layered protection against DDoS threats across layers 3-4 and 7. F5 enables organizations to maintain critical infrastructure and services, ensuring overall business continuity under this barrage of evolving and increasing DoS/DDoS threats attempting to disrupt or shut down their business.

Related Articles
F5 VELOS: A Next-Generation Fully Automatable Platform
F5 rSeries: Next-Generation Fully Automatable Hardware
Accelerate your AI initiatives using F5 VELOS
DEMO: The Next Generation of F5 Hardware is Ready for you
F5 VELOS: A Next-Generation Fully Automatable Platform
What is VELOS?

The F5 VELOS platform is the next generation of F5’s chassis-based systems. VELOS can bridge traditional and modern application architectures by supporting a mix of traditional F5 BIG-IP tenants as well as next-generation BIG-IP Next tenants in the future. F5 VELOS is a key component of the F5 Application Delivery and Security Platform (ADSP).

VELOS relies on a Kubernetes-based platform layer (F5OS) that is tightly integrated with F5 TMOS software. Moving to a microservice-based platform layer allows VELOS to provide functionality that was not possible in previous generations of F5 BIG-IP platforms. Customers do not need to learn Kubernetes but still get the benefits of it: management of the chassis is still done via a familiar F5 CLI, webUI, or API. The added automation capabilities can greatly simplify the process of deploying F5 products, and the time and resources saved translate to more time for critical tasks.

F5OS VELOS UI Demo Video

Why is VELOS important?

Get more done in less time by using a highly automatable hardware platform that can deploy software solutions in seconds, not minutes or hours.
Increased performance improves ROI: the VELOS platform is a high-performance, highly scalable chassis with improved processing power.
Running multiple versions on the same platform allows for more flexibility than previously possible.
Significantly reduce the TCO of previous-generation hardware by consolidating multiple platforms into one.
Key VELOS Use-Cases

NetOps Automation
Shorten time to market by automating network operations and offering cloud-like orchestration with full-stack programmability
Drive app development and delivery with self-service and faster response times

Business Continuity
Drive consistent policies across on-prem and public cloud, and across hardware- and software-based ADCs
Build resiliency with VELOS’ superior platform redundancy and failover capabilities
Future-proof investments by running multiple versions of apps side by side; migrate applications at your own pace

Cloud Migration On-Ramp
Accelerate your cloud strategy by adopting cloud operating models and on-demand scalability with VELOS, and use that as an on-ramp to cloud
Dramatically reduce TCO with VELOS systems; extend commercial models to migrate from hardware to software or as applications move to cloud

Automation Capabilities

Declarative APIs and integration with automation frameworks (Terraform, Ansible) greatly simplify operations and reduce overhead:

AS3 (Application Services 3 Extension): a declarative API that simplifies the configuration of application services. With AS3, customers can deploy and manage configurations consistently across environments.
Ansible automation: prebuilt Ansible modules for VELOS enable automated provisioning, configuration, and updates, reducing manual effort and minimizing errors.
Terraform: organizations leveraging Infrastructure as Code (IaC) can use Terraform to define and automate the deployment of VELOS appliances and associated configurations.

Example JSON file:
Example of running the automation playbook:
Example of the results:

More information on Automation:
Automating F5OS on VELOS
GitHub Automation Repository

Specialized Hardware Performance

VELOS offers more hardware-accelerated performance capabilities, with more FPGA chipsets that are more tightly integrated with TMOS. It also includes the latest Intel processing capabilities.
This enhances the following:
SSL and compression offload
L4 offload for higher performance and reduced load on software
Hardware-accelerated SYN flood protection
Hardware-based protection from more than 100 types of denial-of-service (DoS) attacks
Support for F5 Intelligence Services

VELOS CX1610 chassis
VELOS BX520 blade

Migration Options (BIG-IP Journeys)

Use BIG-IP Journeys to easily migrate your existing configuration to VELOS. Journeys covers the following:
The entire L4-L7 configuration can be migrated
Individual applications can be migrated
BIG-IP tenant configuration can be migrated
Migration issues are automatically identified and resolved
UCS files can be converted into AS3 declarations if needed
Post-deployment diagnostics and health checks

The Journeys tool, available on DevCentral’s GitHub, facilitates the migration of legacy BIG-IP configurations to VELOS-compatible formats. Customers can convert UCS files, validate configurations, and highlight unsupported features during the migration process. Multi-tenancy capabilities in VELOS simplify the process of isolating workloads during and after migration.

GitHub repository for F5 Journeys

Conclusion

The F5 VELOS platform addresses the modern enterprise’s need for high-performance, scalable, and efficient application delivery and security solutions. By combining cutting-edge hardware capabilities with robust automation tools and flexible migration options, VELOS empowers organizations to seamlessly transition from legacy platforms while unlocking new levels of performance and operational agility. Whether driven by the need for increased throughput or advanced multi-tenancy, the VELOS platform stands as a future-ready solution for securing and optimizing application delivery in an increasingly complex IT landscape.

Related Content
Cloud Docs VELOS Guide
F5 VELOS Chassis System Datasheet
DEMO: The Next Generation of F5 Hardware is Ready for you
Practical Mapping Guide - F5 BIG-IP TMOS Modules to Feature-Scoped CNFs
Introduction

BIG-IP TMOS and BIG-IP CNFs solve similar problems with very different deployment and configuration models. In TMOS you deploy whole modules (LTM, AFM, DNS, etc.), while in CNFs you deploy only the specific features and data-plane functions you need, as cloud-native components.

Modules vs feature-scoped services

In TMOS, enabling LTM gives you a broad set of capabilities in one module: virtual servers, pools, profiles, iRules, SNAT, persistence, etc., all living in the same configuration namespace. In CNFs, you deploy only the features you actually need, expressed as discrete custom resources: for example, a routing CNF for more precise TMM routing, a firewall CNF for policies, or specific CRs for profiles and NAT, rather than bringing the entire LTM or AFM module along "just in case".

Configuration objects vs custom resources

TMOS configuration is organized as objects under modules (for example: LTM virtual, LTM pool, security firewall policy, net vlan), managed via tmsh, the GUI, or iControl REST. CNFs expose those same capabilities through Kubernetes Custom Resources (CRs): routes, policies, profiles, NAT, VLANs, and so on are expressed as YAML and applied with GitOps-style workflows, making individual features independently deployable with a version history.

Coarse-grained vs precise feature deployment

On TMOS, deploying a use case often means standing up a full BIG-IP instance with multiple modules enabled, even if the application only needs basic load balancing with a single HTTP profile and a couple of firewall rules. With CNFs, you can carve out exactly what you need: for example, only a precise TMM routing function plus the specific TCP/HTTP/SSL profiles and security policies required for a given application or edge segment, reducing blast radius and resource footprint.
BIG-IP Next CNF Custom Resources (CRs) extend the Kubernetes API to configure the Traffic Management Microkernel (TMM) and Advanced Firewall Manager (AFM), providing declarative equivalents to BIG-IP TMOS configuration objects. This mapping ensures functional parity for L4-L7 services, security, and networking while enabling cloud-native scalability. The focus here covers core mappings, with examples for iRules and Profiles.

What are Kubernetes Custom Resources

Think of the Kubernetes API as a menu of objects you can create: Pods (containers), Services (networking), Deployments (replicas). CRs add new menu items for your unique needs. You first define a CustomResourceDefinition (CRD) (the blueprint), then create CR instances (actual objects using that blueprint).

How CRs Work (Step-by-Step)

1. Create CRD - Define a new object type (e.g., "F5BigFwPolicy")
2. Apply CRD - The Kubernetes API server registers it, adding new REST endpoints like /apis/f5net.com/v1/f5bigfwpolicies
3. Create CR - Write YAML instances of your new type and apply with kubectl
4. Controllers Watch - Custom operators/controllers react to CR changes, creating real resources (Pods, ConfigMaps, etc.)

CR examples

Networking and NAT Mappings

Networking CRs handle interfaces and routing, mirroring TMOS VLANs, Self-IPs, and NAT mappings.

| CNF CR | TMOS Object(s) | Purpose |
|---|---|---|
| F5BigNetVlan | VLANs, Self-IPs, MTU | Interface config |
| F5BigCneSnatpool | SNAT Automap | Source NAT |

Example: an F5BigNetVlan CR sets VLAN tags and IPs, applied via kubectl apply, propagating to TMM like tmsh create net vlan.

Security and Protection Mappings

Protection CRs map to AFM and DoS modules for threat mitigation.
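The four steps above can be sketched end to end with a generic pair of manifests. The group, names, and spec fields here are chosen for the example and are not the shipped F5 definitions; they only illustrate the CRD-then-CR workflow.

```yaml
# 1. Define the blueprint (CRD) -- illustrative, not the shipped F5 definition:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: f5bigfwpolicies.f5net.com
spec:
  group: f5net.com
  names:
    kind: F5BigFwPolicy
    plural: f5bigfwpolicies
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
---
# 2-3. Create an instance (CR) of the new type and apply it with kubectl:
apiVersion: f5net.com/v1
kind: F5BigFwPolicy
metadata:
  name: edge-policy
spec:
  rule:
    - name: allow-dns
      action: accept
      ipProtocol: udp
```

Once the CRD is applied, the API server serves the /apis/f5net.com/v1/f5bigfwpolicies endpoint mentioned above, and a watching controller (step 4) can reconcile each CR into real configuration.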
| CNF CR | CR Purpose | TMOS Object(s) | Purpose |
|---|---|---|---|
| F5BigFwPolicy | Applies industry-standard firewall rules to TMM | AFM Policies | Stateful ACL filtering |
| F5BigFwRulelist | Consists of an array of ACL rules | AFM Rule list | Stateful ACL filtering |
| F5BigSvcPolicy | Allows creation of Timer Policies and attaching them to firewall rules | AFM Policies | Stateful ACL filtering |
| F5BigIpsPolicy | Filters inspected traffic (matched events) by properties such as the inspection profile’s host (virtual server or firewall policy), traffic properties, inspection action, or inspection service | IPS AFM policies | Enforce IPS |
| F5BigDdosGlobal | Configures the TMM Proxy Pod to protect applications and the TMM Pod from denial-of-service / distributed denial-of-service (DoS/DDoS) attacks | Global DoS/DDoS | Device DoS protection |
| F5BigDdosProfile | The per-context DDoS CRD configures the TMM Proxy Pod to protect applications from DoS/DDoS attacks | DoS/DDoS per profile | Per-context DoS profile protection |
| F5BigIpiPolicy | Each policy contains a list of categories and actions that can be customized, with the IPRep database being common; feed-list-based policies can also customize the configured IP addresses | IP Intelligence Policies | Reputation-based blocking |

Example: F5BigFwPolicy references rule-lists and zones, equivalent to creating a security firewall policy with tmsh.

Traffic Management Mappings

Traffic CRs proxy TCP/UDP and support ALGs, akin to LTM Virtual Servers and Pools.

| CNF CR | TMOS Object(s) | Purpose |
|---|---|---|
| Part of the Apps CRs: Virtual Server (Ingress/HTTPRoute/GRPCRoute) | LTM Virtual Servers | Load balancing |
| Part of the Apps CRs: Pool (Endpoints) | LTM Pools | Node groups |
| F5BigAlgFtp | FTP Profile/ALG | FTP gateway |
| F5BigDnsCache | DNS Cache | Resolution/caching |

Example: a GRPCRoute CR defines backend endpoints and profiles, mapping to tmsh create ltm virtual.
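As an illustrative sketch of the firewall mapping (the spec fields are simplified; consult the official CNF CR reference for the exact schema), a policy CR might look like this:

```yaml
# Illustrative only -- field names are simplified, not the exact CRD schema.
apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: edge-fw-policy
  namespace: cnf-gateway
spec:
  rule:
    - name: allow-web
      action: accept
      ipProtocol: tcp
      destination:
        ports: ["80", "443"]
    - name: drop-all
      action: drop
```

Declaring accept/drop rules this way parallels building a security firewall policy and its rule lists in tmsh, but as a versioned Kubernetes object.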
Profiles Mappings with Examples

Profiles CRs customize traffic handling and are referenced by Traffic Management CRs, directly paralleling TMOS profiles.

| CNF CR | TMOS Profile(s) | Example Configuration Snippet |
|---|---|---|
| F5BigTcpSetting | TCP Profile | spec: { defaultsFrom: "tcp-lan-optimized" } tunes congestion, like tmsh create ltm profile tcp |
| F5BigUdpSetting | UDP Profile | spec: { idleTimeout: 300 } sets timeouts |
| F5BigUdpSetting | HTTP Profile | spec: { http2Profile: "http2" } enables HTTP/2 |
| F5BigClientSslSetting | ClientSSL Profile | spec: { certKeyChain: { name: "default" } } for TLS termination |
| F5PersistenceProfile | Persistence Profile | Enables source/dest persistence |

Profiles attach to Virtual Servers/Ingresses declaratively, e.g., spec.profiles: [{ name: "my-tcp", context: "tcp" }].

iRules Support and Examples

CNF fully supports iRules for custom logic, integrated into Traffic Management CRs like Virtual Servers or FTP/RTSP. iRules execute in TMM, preserving TMOS scripting.

Example CR snippet:

```yaml
apiVersion: k8s.f5net.com/v1
kind: F5BigCneIrule
metadata:
  name: cnfs-dns-irule
  namespace: cnf-gateway
spec:
  iRule: >
    when DNS_REQUEST {
      if { [IP::addr [IP::remote_addr] equals 10.10.1.0/24] } {
        cname cname.siterequest.com
      } else {
        host 10.20.20.20
      }
    }
```

This mapping facilitates TMOS-to-CNF migrations using tools like the F5 BIG-IP Controller for CR generation from UCS configs. For full CR specs, refer to the official docs. The updated full list of CNF CRs per release can be found here: https://clouddocs.f5.com/cnfs/robin/latest/cnf-custom-resources.html

Conclusion

In this article, we examined how BIG-IP Next CNF redefines the TMOS module model into a feature-scoped, cloud-native service architecture. By mapping TMOS objects such as virtual servers, profiles, and security policies to their corresponding CNF Custom Resources, we illustrated how familiar traffic management and security constructs translate into declarative Kubernetes workflows.
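The attachment pattern in the table can be shown end to end with a hedged sketch (illustrative field names, not the exact CRD schema): a settings CR is created once, then referenced by name and context from a virtual-server-style resource.

```yaml
# Illustrative sketch: a TCP settings CR...
apiVersion: "k8s.f5net.com/v1"
kind: F5BigTcpSetting
metadata:
  name: my-tcp
  namespace: cnf-gateway
spec:
  defaultsFrom: "tcp-lan-optimized"
  idleTimeout: 300
---
# ...and the profile reference a virtual-server-style resource would carry:
spec:
  profiles:
    - name: "my-tcp"
      context: "tcp"
```

Because the profile is its own object, it can be tuned, versioned, and reused across listeners independently, the same decoupling TMOS achieves with named profiles.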
Related content

- F5 Cloud-Native Network Functions configurations guide
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- CNF DNS Express
- BIG-IP Next for Kubernetes CNFs - DNS walkthrough
- BIG-IP Next for Kubernetes CNFs deployment walkthrough | DevCentral
- BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
- Modern Applications-Demystifying Ingress solutions flavors | DevCentral

Visibility for Modern Telco and Cloud‑Native Networks
Introduction

As operators transition from 5G to 6G-ready cloud-native architectures, their networks are becoming more disaggregated, dynamic, and intelligent. Functions increasingly span virtualized 5G cores, AI-enhanced 6G control domains, MEC platforms, and hyperscale distributed edge clouds. Traditional visibility tools built for static or centralized topologies can no longer keep pace. Telco and security architects now face the challenge of maintaining real-time, end-to-end observability across highly adaptive, multi-vendor, and multi-cloud infrastructures where workloads may move between 5G and 6G service fabrics in milliseconds. BIG-IP eBPF Observability (EOB) meets this challenge with a high-performance, kernel-level telemetry framework built on Linux's eBPF technology, delivering visibility that scales from 6G microcells to data center cores without adding operational overhead.

Why Legacy Visibility Approaches Fail in 5G/6G Environments

Tools like SPANs, TAPs, and broker appliances were effective for static topologies. But in cloud-native 5G and 6G deployments, where AI dynamically places network functions across cores, edges, and reconfigurable slices, they break down. Common limitations include:

- No reliable physical tap points in cloud-distributed or satellite-connected nodes
- SPAN mirroring constrained by virtual and container limits
- Encryption and service mesh layers hiding the real traffic context
- Vendor probes exposing only proprietary NFs, limiting multi-domain visibility

These gaps fragment insights across control-plane protocols (SBI, PFCP, NGAP, F1-AP) and the AI-driven management planes now emerging within 6G intelligent network layers (INNs).

eBPF: Core Technology for Adaptive Observability

eBPF (Extended Berkeley Packet Filter) allows sandboxed programs to execute inside the Linux kernel, offering in-situ visibility into packet, process, and syscall activities at near-zero latency.
Its key advantages for 5G and 6G include:

- Safe, programmable telemetry without kernel module changes
- Full observability across containers, namespaces, and network functions
- Ultra-low-latency insights ideal for closed-loop automation and AI inference workflows

Because 6G networks depend on autonomous observability loops, eBPF forms the telemetry foundation that lets those loops sense and adapt to conditions in real time.

BIG-IP EOB Architecture and Data Model

EOB leverages lightweight containerized sensors orchestrated via Kubernetes or OpenShift across core, edge, and RAN domains. Its data model captures:

- Raw packets and deep forensic traces
- Dynamic service topologies reflecting 5G/6G slice relationships
- Plaintext records of TLS 1.3 and service mesh sessions for deep insight
- Metadata for telco protocols (SBI, PFCP, DNS, HTTP/2, F1-AP, NGAP) and emerging 6G access protocols
- Rich CNFlow telemetry correlating control- and user-plane activity

Telemetry then streams to a message bus or observability fabric, ready for real-time analytics, SIEM integration, or AI-based fault prediction systems, a capability vital for 6G's full-autonomy vision.

Core-to-Edge-to-AI Deployment Model

EOB spans the entire 5G/6G topology, from centralized cores to AI-powered edge clouds:

- 5G Core Functions: AMF, SMF, NRF, PCF, UDM, UDR
- 6G Expansion: Cloud-native service networks running AI-orchestrated CNFs and reconfigurable RAN domains
- Edge & MEC: Low-latency compute nodes supporting URLLC and industrial AI
- Open RAN: O-RU, O-DU, O-CU, and AI-RAN management functions

A central controller enforces data routing and observability policies, making EOB a unifying visibility plane for multi-band, multi-vendor networks that may straddle both 5G and 6G service layers.

Restoring Visibility in Encrypted and AI-Automated Planes

Modern telco cores encrypt almost everything, including control messages used for orchestration and identity management.
EOB restores inspection capability by extracting essential 5G/6G identifiers and slice attributes from encrypted flows, enabling real-time anomaly detection. Capabilities include:

- Extraction of SUPI, SUCI, GUTI, slice/service IDs, and the new AI-Service Identifiers emerging in 6G
- Node-level contextual threat detection across AMF, SMF, and cognitive NFs
- Direct integration with various security products and AI threat analytics for real-time prevention

This removes "blind spots" that AI-automated security systems would otherwise misinterpret or miss entirely.

Let's go over a demo by TomCreighton showing how BIG-IP EOB enhances visibility.

Ecosystem and Integration

BIG-IP EOB integrates seamlessly with telco cloud environments:

- Kubernetes and Red Hat OpenShift: Certified operator framework, integrated with Red Hat's bpfman for large-scale eBPF management
- AI/ML Pipelines: Telemetry exported to AIOps, CI/CD, and orchestration frameworks, key for autonomous fault resolution in 6G

While we highlighted multiple use cases for service providers, EOB can extend its capabilities to the enterprise sector:

- Application and data monitoring
- Security and policy assurance
- User experience monitoring
- Cloud-native and infrastructure visibility

Conclusion

BIG-IP EOB enables a future-proofed observability framework that supports the continuous evolution from 5G to 6G:

- Unified, vendor-neutral visibility across physical, virtual, and AI-driven domains
- Granular kernel-level insight without probe sprawl
- Control- and user-plane correlation for real-time SLA and security validation
- Encrypted and service-mesh traffic observability
- Telemetry foundation for 6G autonomous and cognitive networking

EOB forms the visibility fabric of the self-intelligent network, turning real-time telemetry into adaptive intelligence for secure, resilient, and autonomous telco operations.
Related Content

- F5 eBPF Observability: Real-Time Traffic Visibility Dashboard Demo
- F5 eBPF Observability: Kernel-Level Observability for Modern Applications
- eBPF: It's All About Observability
- eBPF: Revolutionizing Security and Observability in 2023
BIG-IP Next for Kubernetes CNF 2.2 what's new
Introduction BIG-IP Next CNF v2.2.0 offers new enhancements to BIG-IP Next for Kubernetes CNFs with a focus on analytics capabilities, traffic distribution, subscriber management, and operational improvements that address real-world challenges in high-scale deployments. High-Speed Logging for Traffic Analysis The Reporting feature introduces high-speed logging (HSL) capabilities that capture session and flow-level metrics in CSV format. Key data points include subscriber identifiers, traffic volumes, transaction counts, video resolution metrics, and latency measurements, exported via Syslog (RFC5424, RFC3164, or legacy-BIG-IP formats) over TCP or UDP. Fluent-bit handles TMM container log processing, forwarding to Fluentd for external analytics servers. Custom Resources simplify configuration of log publishers, reporting intervals, and enforcement policies, making it straightforward to integrate into existing Kubernetes workflows. DNS Cache Inspection and Management New utilities provide detailed visibility into DNS cache operations. The bdt_cli tool supports listing, counting, and selectively deleting cache records using filters for domain names, TTL ranges, response codes, and cache types (RRSet, message, or nameserver). Complementing this, dns-cache-stats delivers performance metrics including hit/miss ratios, query volumes, response time distributions across intervals, and nameserver behavior patterns. These tools enable systematic cache analysis and maintenance directly from debug sidecars. Stateless and Bidirectional DAG Traffic Distribution Stateless DAG implements pod-based hashing to distribute traffic evenly across TMM pods without maintaining flow state. This approach embeds directly within the CNE installation, eliminating separate DAG infrastructure. Bidirectional DAG extends this with symmetric routing for client-to-server and return flows, using consistent redirect VLANs and hash tables. 
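On the receiving side, a consumer of these records is straightforward to write. The sketch below parses an RFC 5424 syslog line whose message body is a CSV record; the field layout used here (subscriber ID, byte counts, transaction count) is an assumption for illustration, not the actual CNF export schema.

```python
import csv
import io
import re

# Illustrative sketch: parse an RFC 5424 syslog line whose MSG part is a CSV
# reporting record. The field names below are assumptions for the example,
# not the actual CNF reporting schema.
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d+)>1 (?P<ts>\S+) (?P<host>\S+) (?P<app>\S+) \S+ \S+ - (?P<msg>.*)$"
)

def parse_reporting_line(line, fields):
    m = SYSLOG_RE.match(line)
    if not m:
        raise ValueError("not an RFC 5424 syslog line")
    # The message body is one CSV row; map its columns onto the field names.
    row = next(csv.reader(io.StringIO(m.group("msg"))))
    return dict(zip(fields, row))

record = parse_reporting_line(
    "<134>1 2024-05-01T12:00:00Z tmm-0 cnf-reporting - - - sub-123,104857600,52428800,42",
    ["subscriber_id", "bytes_in", "bytes_out", "transactions"],
)
print(record["subscriber_id"])  # sub-123
```

From here the dictionaries can be batched toward any analytics backend, mirroring the Fluent-bit to Fluentd pipeline described above.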
Deployments must align TMM pod counts with self-IP configurations on pod_hash-enabled VLANs to ensure balanced distribution. Dynamic GeoDB Updates for Edge Firewall Policies Edge Firewall Geo Location policies now support dynamic GeoDB updates, replacing static country/region lists embedded in container images. The Controller and PCCD components automatically incorporate new locations and handle deprecated entries with appropriate logging. Firewall Policy CRs can reference newly available geos immediately, enabling responsive policy adjustments without container restarts or rebuilds. This maintains policy currency in environments requiring frequent threat intelligence updates. Subscriber Creation and CGNAT Logging RADIUS-triggered subscriber creation integrates with distributed session storage (DSSM) for real-time synchronization across TMM pods. Subscriber records capture identifiers like IMSI, MSISDN, or NAI, enabling automated session lifecycle management. CGNAT logging enhancements include Subscriber ID in translation events, providing clear IP-to-subscriber mapping. This facilitates correlation of network activity with individual users, supporting troubleshooting, auditing, and regulatory reporting requirements. Kubernetes Secrets Integration for Sensitive Configuration Custom Resources now reference sensitive data through Kubernetes’ native Secrets using secretRef fields (name, namespace, key). The cne-controller fetches secrets securely via mTLS, monitors for updates, and propagates changes to consuming components. This supports certificate rotation through cert-manager without CR reapplication. RBAC controls ensure appropriate access while eliminating plaintext sensitive data from YAML manifests. Dynamic Log Management and Storage Optimization REST API endpoints and ConfigMap watching enable runtime log level adjustments per pod without restarts. Changes propagate through pod-specific ConfigMaps monitored by the F5 logging library. 
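The pod-based hashing idea can be sketched in a few lines of Python (purely illustrative, not the actual TMM algorithm): a stable digest of the flow tuple selects a TMM pod, and normalizing the tuple before hashing gives the symmetric, bidirectional property, so forward and return flows land on the same pod.

```python
import hashlib

# Illustrative sketch of stateless, pod-based flow hashing (not the real TMM
# implementation): a stable digest of the flow tuple selects one of N pods.
def pick_tmm_pod(src_ip, src_port, dst_ip, dst_port, num_pods):
    # Sort the endpoints so client->server and server->client flows hash
    # identically -- the "bidirectional" property described above.
    endpoints = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = "|".join(f"{ip}:{port}" for ip, port in endpoints).encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_pods

fwd = pick_tmm_pod("10.0.0.5", 40000, "203.0.113.9", 443, num_pods=4)
ret = pick_tmm_pod("203.0.113.9", 443, "10.0.0.5", 40000, num_pods=4)
assert fwd == ret  # both directions of the flow hit the same pod
print(fwd)
```

Because no per-flow state is kept, any node can compute the same answer independently, which is why the hash table and pod count must stay aligned across the deployment.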
An optional Folder Cleaner CronJob automatically removes orphaned log directories, preventing storage exhaustion in long-running deployments with heavy Fluentd usage.

Key Enhancements Overview

Several refinements have improved operational aspects:

- CNE Controller RBAC: Configurable CRD monitoring via ConfigMap eliminates cluster-wide list permissions, with a manual controller restart required for list changes.
- CGNAT/DNAT HA: F5Ingress automatically distributes VLAN configurations to standby TMM pods (excluding self-IPs) for seamless failover.
- Memory Optimization: 1GB huge page support via the tmm.hugepages.preferredhugepagesize parameter.
- Diagnostics: QKView requests can be canceled by ID, generating partial diagnostics from collected data.
- Metrics Control: Per-table aggregation modes (Aggregated, Semi-Aggregated, Diagnostic) with configurable export intervals via the f5-observer-operator-config ConfigMap.

Related content

- BIG-IP Next for Kubernetes CNF - latest release
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- BIG-IP Next for Kubernetes CNFs deployment walkthrough | DevCentral
- BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
- F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions

Mitigating OWASP Web Application Risk: Insecure Design using F5 XC platform
Overview:

This article is the last part in a series on mitigating OWASP Web Application vulnerabilities using the F5 Distributed Cloud platform (F5 XC).

Introduction to Insecure Design:

In an effort to speed up the development cycle, some phases may be reduced in scope, which opens the door to many vulnerabilities. To spotlight the risks that are overlooked from the design phase through deployment, a new category, "Insecure Design", was added to the OWASP Web Application Top 10 2021 list. Insecure Design represents weaknesses, i.e., missing or ineffective security controls that were never integrated into the website/application during the development cycle. If no security control exists to defend against a specific attack, Insecure Design cannot be fixed by even a perfect implementation; at the same time, a secure design can still have an implementation flaw that leads to exploitable vulnerabilities. Hence attackers get vast scope to leverage the vulnerabilities created by insecure design principles. Here are several scenarios that fall under insecure design vulnerabilities:

- Credential leaks
- Authentication bypass
- Injection vulnerabilities
- Scalper bots, etc.

In this article we will see how the F5 XC platform helps mitigate the scalper bot scenario.

What is a Scalper Bot:

In the e-commerce industry, scalping is a practice that leads to denial of inventory. Online scalping in particular uses bots, i.e., automated scripts that check product availability every few seconds, add items to the cart, and check out products in bulk. Hence genuine users do not get a fair chance to grab the deals or discounts offered by the website or company. Alternatively, attackers use these scalper bots to abandon the items added to the cart later, causing losses to the business as well.
Demonstration:

In this demonstration, we are using an open-source application, "Online Boutique" (refer to boutique-repo), which provides an end-to-end online shopping cart facility. A legitimate customer can add any product of their choice to the cart and check out the order.

Customer Page:

Scalper bot with automation script:

The automation script below adds products in bulk to the cart of the e-commerce application and places the order successfully.

```python
import requests
import random

# List of User-Agents
USER_AGENTS = [
    "sqlmap/1.5.2",  # Automated SQL injection tool
    "Nikto/2.1.6",  # Nikto vulnerability scanner
    "nmap",  # Network mapper used in reconnaissance
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",  # Spoofed search engine bot
    "php",  # PHP command line tool
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",  # Old Internet Explorer (suspiciously outdated)
    "libwww-perl/6.36",  # Perl-based automation, often found in attacks or scrapers
    "wget/1.20.3",  # Automation tool for downloading files or making requests
    "Python-requests/2.26.0",  # Python automation library
]

# Function to select a random User-Agent
def get_random_user_agent():
    return random.choice(USER_AGENTS)

# Base URL of the API
BASE_URL = "https://insecure-design.f5-hyd-xcdemo.com"

# Perform the API request to add products to the cart
def add_to_cart(product_id, quantity):
    url = f"{BASE_URL}/cart"
    headers = {
        "User-Agent": get_random_user_agent(),  # Random User-Agent
        "Content-Type": "application/x-www-form-urlencoded",
    }
    payload = {"product_id": product_id, "quantity": quantity}
    # Send POST request with the form-encoded payload
    response = requests.post(url, headers=headers, data=payload)
    if response.status_code == 200:
        print(f"Successfully added {quantity} to cart!")
    else:
        print(f"Failed to add to cart. Status Code: {response.status_code}, Response: {response.text}")
    return response

# Perform the API request to place an order
def place_order():
    url = f"{BASE_URL}/cart/checkout"
    headers = {
        "User-Agent": get_random_user_agent(),  # Random User-Agent
        "Content-Type": "application/x-www-form-urlencoded",
    }
    payload = {
        "email": "someone@example.com",
        "street_address": "1600 Amphitheatre Parkway",
        "zip_code": "94043",
        "city": "Mountain View",
        "state": "CA",
        "country": "United States",
        "credit_card_number": "4432801561520454",
        "credit_card_expiration_month": "1",
        "credit_card_expiration_year": "2026",
        "credit_card_cvv": "672",
    }
    # Send POST request with the form-encoded payload
    response = requests.post(url, headers=headers, data=payload)
    if response.status_code == 200:
        print("Order placed successfully!")
    else:
        print(f"Failed to place order. Status Code: {response.status_code}, Response: {response.text}")
    return response

# Main function to execute the API requests
def main():
    # Add product to cart
    product_id = "OLJCESPC7Z"
    quantity = 10
    print("Adding product to cart...")
    add_to_cart_response = add_to_cart(product_id, quantity)
    # If the add_to_cart request succeeds, proceed to checkout
    if add_to_cart_response.status_code == 200:
        print("Placing order...")
        place_order()

# Run the main function
if __name__ == "__main__":
    main()
```

To mitigate this problem, F5 XC provides the ability to identify and block these bots based on the configuration applied under the HTTP Load Balancer. Here is the procedure to configure bot defense with the mitigation action 'block' on the load balancer and associate the backend application ('evershop') as the origin pool:

- Create an origin pool (refer to pool-creation for more info).
- Create an HTTP Load Balancer (LB) and associate the above origin pool to it (refer to LB-creation for more info).
- Configure bot defense on the load balancer and add the policy with the mitigation action 'block'.
- Click on "Save and Exit" to save the Load Balancer configuration.
- Run the automation script, providing the LB domain details, to attempt to exploit the items in the application.
- Validate product availability for the genuine user manually.
- Monitor the logs through F5 XC: navigate to WAAP --> Apps & APIs --> Security Dashboard, select your LB, and click on the 'Security Event' tab.

The above screenshot gives detailed info on the blocked attack along with the mitigation action.

Conclusion:

As seen in the demonstration, F5 Distributed Cloud WAAP (Web Application and API Protection) detected the scalpers with the bot defense configuration applied on the load balancer and mitigated the scalper bots' exploits. It also provides the mitigation actions of "allow" and "redirect" along with "block". Please refer to the link for more info.

Reference links:

- OWASP Top 10 - 2021
- Overview of OWASP Web Application Top 10 2021
- F5 Distributed Cloud Services
- F5 Distributed Cloud Platform
- Authentication Bypass
- Injection vulnerabilities

Identity-centric F5 ADSP Integration Walkthrough
In this article we explore F5 ADSP through the identity lens, using BIG-IP APM and BIG-IP SSLO and adding BIG-IP AWAF to the service chain. The F5 ADSP addresses four core areas: deploying at scale, securing against evolving threats, delivering applications reliably, and operating your day-to-day work efficiently. Each comes with its own challenges, but together they define the foundation for keeping systems fast, stable, and safe. Each architecture deployment example is designed to cover at least two of the four core areas: Deployment, Security, Delivery, and XOps.

Runtime API Security: From Visibility to Enforced Protection
This article examines how runtime visibility and enforcement address API security failures that emerge only after deployment. It shows how observing live traffic enables evidence-based risk prioritization, exposes active exploitation, and supports targeted controls such as schema enforcement and scoped protection rules. The focus is on reducing real-world attack impact by aligning specifications, behavior, and enforcement, and integrating runtime protection as a feedback mechanism within a broader API security strategy.
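Schema enforcement of the kind described here can be illustrated with a minimal hand-rolled validator (a sketch only; real deployments derive the schema from the API's OpenAPI document and enforce it at a gateway): payloads that drift from the declared specification, including undeclared fields that often signal probing or mass assignment, are rejected at runtime.

```python
# Minimal illustration of runtime schema enforcement: reject requests whose
# payloads drift from the declared specification. The SPEC layout here is an
# assumption for the example, not a real product schema.
SPEC = {
    "required": {"user_id": int, "amount": float},
    "optional": {"note": str},
}

def enforce_schema(payload, spec):
    errors = []
    # Required fields must be present and correctly typed.
    for field, ftype in spec["required"].items():
        if field not in payload:
            errors.append(f"missing required field: {field}")
        elif not isinstance(payload[field], ftype):
            errors.append(f"wrong type for {field}")
    # Fields outside the specification are rejected outright.
    allowed = set(spec["required"]) | set(spec["optional"])
    for field in payload:
        if field not in allowed:
            errors.append(f"undeclared field: {field}")
    return errors

print(enforce_schema({"user_id": 7, "amount": 9.99}, SPEC))  # []
print(enforce_schema({"user_id": 7, "amount": 9.99, "role": "admin"}, SPEC))  # ['undeclared field: role']
```

Scoping such checks per endpoint, and feeding the violations back into the runtime visibility pipeline, is the feedback loop the article describes.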