Modern Applications - Demystifying Ingress Solutions Flavors
In this article, we explore the different ingress services provided by F5 and how those solutions fit within your environment. With the different ingress service flavors, you gain the ability to interact with your microservices at different points, allowing for flexible, secure deployment. The ingress services can be summarized into two main categories:

Management plane: NGINX One, BIG-IP CIS
Traffic plane: NGINX Ingress Controller / Plus / App Protect / Service Mesh, BIG-IP Next for Kubernetes, Cloud Native Functions (CNFs), F5 Distributed Cloud Kubernetes deployment mode

Ingress solutions definitions

In this section we go quickly through the ingress services to understand the concept behind each one, and then move on to the use-case comparison.

BIG-IP Next for Kubernetes

Kubernetes' native networking architecture does not inherently support multi-network integration or non-HTTP/HTTPS protocols, creating operational and security challenges for complex deployments. BIG-IP Next for Kubernetes addresses these limitations by centralizing ingress and egress traffic control, aligning with Kubernetes design principles to integrate with existing security frameworks and broader network infrastructure. This reduces operational overhead by consolidating cross-network traffic management into a unified ingress/egress point, eliminating the need for multiple external firewalls that traditionally require isolated configuration. The solution enables zero-trust security models through granular policy enforcement and provides robust threat mitigation, including DDoS protection, by replacing fragmented security measures with a centralized architecture. Additionally, BIG-IP Next supports 5G Core deployments by managing North/South traffic flows in containerized environments, facilitating use cases such as network slicing and multi-access edge computing (MEC).
These capabilities enable dynamic resource allocation aligned with application-specific or customer-driven requirements, ensuring scalable, secure connectivity for next-generation 5G consumer and enterprise solutions while maintaining compatibility with existing network and security ecosystems.

Cloud Native Functions (CNFs)

While BIG-IP Next for Kubernetes enables advanced networking, traffic management, and security functionality, CNFs enable additional advanced services. VNFs and CNFs can be consolidated in the S/Gi-LAN or the N6 LAN in 5G networks. A consolidated approach results in simpler management and operation, reduced operational costs (with TCO reductions of up to 60%), and more opportunities to monetize functions and services. Functions can include DNS, Edge Firewall, DDoS, Policy Enforcer, and more. BIG-IP Next CNFs provide scalable, automated, resilient, manageable, and observable cloud-native functions and applications. They support dynamic elasticity, occupy a smaller footprint with fast restart, and use continuous deployment and automation principles.

NGINX for Kubernetes / NGINX One

NGINX for Kubernetes is a versatile and cloud-native application delivery platform that aligns closely with DevOps and microservices principles. It is built around two primary models:

NGINX Ingress Controller (OSS and Plus): Deployed directly inside Kubernetes clusters, it acts as the primary ingress gateway for HTTP/S, TCP, and UDP traffic. It supports Kubernetes-native CRDs and integrates easily with GitOps pipelines, service meshes (e.g., Istio, Linkerd), and modern observability stacks like Prometheus and OpenTelemetry.
NGINX One / NGINXaaS: This SaaS-delivered, managed service extends the NGINX experience by offloading the operational overhead, providing scalability, resilience, and simplified security configurations for Kubernetes environments across hybrid and multi-cloud platforms.

NGINX solutions prioritize lightweight deployment, fast performance, and API-driven automation.
NGINX Plus variants offer extended features like advanced WAF (NGINX App Protect), JWT authentication, mTLS, session persistence, and detailed application-layer observability. A few under-the-hood differences: BIG-IP Next for Kubernetes/CNFs use F5's own TMM to perform application delivery and security, while NGINX relies on the kernel for some network-level functions such as NAT, iptables, and routing. So it's a matter of the architecture of your environment whether to go with one or both options to enhance your application delivery and security experience.

BIG-IP Container Ingress Services (CIS)

BIG-IP CIS works on the management flow. The CIS service is deployed in the Kubernetes cluster and sends information about created Pods to an integrated BIG-IP external to the Kubernetes environment. This allows BIG-IP to automatically create LTM pools and forward traffic based on pool member health. This service lets application teams focus on microservice development while BIG-IP is updated automatically, allowing for easier configuration management.

Use cases categorization

Let's talk in use-case terms to make it more relatable to the field and our day-to-day work.

NGINX One

Access to NGINX commercial products, support for open source, and the option to add WAF.
Unified dashboard and APIs to discover and manage your NGINX instances.
Identify and fix configuration errors quickly and easily with the NGINX One configuration recommendation engine.
Quickly diagnose bottlenecks and act immediately with real-time performance monitoring across all NGINX instances.
Enforce global security policies across diverse environments.
Real-time vulnerability management identifies and addresses CVEs in NGINX instances.
Visibility into compliance issues across diverse app ecosystems.
Update groups of NGINX systems simultaneously with a single configuration file change.
Unified view of your NGINX fleet for collaboration, performance tuning, and troubleshooting.
NGINX One automates manual configuration and updating tasks for security and platform teams.

BIG-IP CIS

Enable self-service ingress HTTP routing and app services selection by subscribing to events to automatically configure performance, routing, and security services on BIG-IP.
Integrate with the BIG-IP platform to scale apps for availability and enable app services insertion. In addition, integrate with the BIG-IP system and NGINX for ingress load balancing.

BIG-IP Next for Kubernetes

Supports ingress and egress traffic management and routing for seamless integration with multiple networks.
Enables support for 4G and 5G protocols that are not supported by Kubernetes, such as Diameter, SIP, GTP, SCTP, and more.
Enables security services applied at ingress and egress, such as firewalling and DDoS protection.
Topology hiding at ingress obscures the internal structure within the cluster.
As a central point of control, per-subscriber traffic visibility at ingress and egress allows traceability for compliance tracking and billing.
Support for multi-tenancy and network isolation for AI applications, enabling efficient deployment of multiple users and workloads on a single AI infrastructure.
Optimize AI factory implementations with BIG-IP Next for Kubernetes on NVIDIA DPUs.

F5 Cloud Native Functions (CNFs)

Add containerized services, for example Firewall, DDoS, and Intrusion Prevention System (IPS) technology based on F5 BIG-IP AFM.
Ease IPv6 migration and improve network scalability and security with IPv4 address management.
Deploy as part of a security strategy.
Support DNS caching and DNS over HTTPS (DoH).
Support advanced policy and traffic management use cases.
Improve QoE and ARPU with tools like traffic classification, video management, and subscriber awareness.

NGINX Ingress Controller

Provide L4-L7 NGINX services within the Kubernetes cluster.
Manage user and service identities and authorize access and actions with HTTP Basic authentication, JSON Web Tokens (JWTs), OpenID Connect (OIDC), and role-based access control (RBAC).
Secure incoming and outgoing communications through end-to-end encryption (SSL/TLS passthrough, TLS termination).
Collect, monitor, and analyze data through prebuilt integrations with leading ecosystem tools, including OpenTelemetry, Grafana, Prometheus, and Jaeger.
Easy integration with the Kubernetes Ingress API, Gateway API (experimental support), and Red Hat OpenShift Routes.

F5 Distributed Cloud Kubernetes deployment mode

The F5 XC Kubernetes deployment is supported only for Sites running Managed Kubernetes, also known as Physical K8s (PK8s). Deployment of the ingress controller is supported only using Helm. The Ingress Controller manages external access to HTTP services in a Kubernetes cluster using the F5 Distributed Cloud Services Platform. The ingress controller is a K8s deployment that configures the HTTP Load Balancer using the K8s ingress manifest file. The Ingress Controller automates the creation of the load balancer and other required objects such as the VIP, Layer 7 routes (path-based routing), advertise policy, and certificates (K8s secrets or automatic custom certificates).

Conclusion

As you can see, the diverse ingress controller tools give you flexibility to tailor your architecture to organizational requirements while maintaining application delivery and security practices across your application ecosystem.
Related Content and Technical demos

BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions
Deploy WAF on any Edge with F5 Distributed Cloud
Announcing F5 NGINX Ingress Controller v4.0.0 | DevCentral
JWT authorization with NGINX Ingress Controller
My first CRD deployment with CIS | DevCentral
BIG-IP Next for Kubernetes
BIG-IP Next for Kubernetes (LA)
BIG-IP Next Cloud-Native Network Functions (CNFs)
CNF Home
F5 NGINX Ingress Controller
Overview of F5 BIG-IP Container Ingress Services
NGINX One

Mitigation of OWASP API Security Risk: BOPLA using F5 XC Platform
Introduction: OWASP API Security Top 10 - 2019 has two categories, "Mass Assignment" and "Excessive Data Exposure", which focus on vulnerabilities that stem from manipulation of, or unauthorized access to, an object's properties. For example, suppose there is user information in JSON format: {"UserName": "apisec", "IsAdmin": "False", "role": "testing", "Email": "apisec@f5.com"}. In this object payload, each detail is considered a property, so vulnerabilities around modifying or exposing sensitive properties like Email/role/IsAdmin fall under these categories. These risks shed light on the hidden vulnerabilities that can appear when object properties are modified, and highlight the importance of a security solution that validates user access to functions/objects while also enforcing access control for specific properties within objects. Role-based access, sanitizing user input, and schema-based validation play a crucial role in safeguarding your data from unauthorized access and modification. Since these two risks are similar, the OWASP community brought them under one radar and merged them as "Broken Object Property Level Authorization" (BOPLA) in the newer version, OWASP API Security Top 10 - 2023.

Mass Assignment: A Mass Assignment vulnerability occurs when client requests are not restricted from modifying immutable internal object properties. Attackers can take advantage of this vulnerability by manually crafting requests to escalate user privileges, bypass security mechanisms, or use other approaches to exploit API endpoints in an illegal/invalid way.
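To make the idea concrete, here is a minimal, illustrative sketch (not F5 code) of how an API handler might defend against mass assignment by allow-listing the mutable properties of the user object from the example above. The function name and the choice of mutable fields are assumptions for illustration only.

```python
# Illustrative only: drop client-supplied fields that are not explicitly
# allow-listed, so immutable properties such as IsAdmin or role cannot be
# overwritten by a crafted request body.
MUTABLE_FIELDS = {"UserName", "Email"}

def apply_update(user, payload):
    # Split the payload into allowed updates and blocked (immutable) fields.
    blocked = set(payload) - MUTABLE_FIELDS
    safe = {k: v for k, v in payload.items() if k in MUTABLE_FIELDS}
    user.update(safe)
    return user, sorted(blocked)

user = {"UserName": "apisec", "IsAdmin": "False", "role": "testing",
        "Email": "apisec@f5.com"}

# A malicious update attempting privilege escalation alongside a valid change:
user, blocked = apply_update(user, {"Email": "new@f5.com", "IsAdmin": "True"})
# user["IsAdmin"] is still "False"; only Email was updated.
```

Schema-based validation, as the article notes, achieves the same effect declaratively: the schema simply omits the immutable properties, so they are rejected before reaching business logic.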
For more details on the F5 Distributed Cloud mitigation solution, check this link: Mitigation of OWASP API6: 2019 Mass Assignment vulnerability using F5 XC

Excessive Data Exposure: Application Programming Interfaces (APIs) sometimes lack restrictions and expose sensitive data such as Personally Identifiable Information (PII), Credit Card Numbers (CCN), and Social Security Numbers (SSN). Because of this, they are among the most exploited building blocks used to gain access to customer information, and identifying sensitive information in these huge chunks of API response data is crucial to data safety. For more details on this risk and the F5 Distributed Cloud mitigation solution, check this link: Mitigating OWASP API Security Risk: Excessive Data Exposure using F5 XC

Conclusion: Wrapping up, this article covers an overview of the newly added BOPLA category in the OWASP Top 10 - 2023 edition. We have also provided details on each section of this risk and reference articles to dig deeper into the F5 Distributed Cloud mitigation solutions.

Reference links to get started:
F5 Distributed Cloud Services
F5 Distributed Cloud WAAP
Introduction to OWASP API Security Top 10 2023

Post-Quantum Cryptography: Building Resilience Against Tomorrow's Threats
Modern cryptographic systems such as RSA, ECC (Elliptic Curve Cryptography), and DH (Diffie-Hellman) rely heavily on the mathematical difficulty of certain problems, like factoring large integers or computing discrete logarithms. However, with the rise of quantum computing, algorithms like Shor's and Grover's threaten to break these systems, rendering them insecure. Quantum computers are not yet at the scale required to break these encryption methods in practice, but their rapid development has pushed the cryptographic community to act now. This is where Post-Quantum Cryptography (PQC) comes in: a new wave of algorithms designed to remain secure against both classical and quantum attacks.

Why PQC Matters

Quantum computers exploit quantum mechanics principles like superposition and entanglement to perform calculations that would take classical computers millennia. This threatens:
Public-key cryptography: Algorithms like RSA rely on factoring large primes or solving discrete logarithms, problems quantum computers could crack using Shor's algorithm.
Long-term data security: Attackers may already be harvesting encrypted data to decrypt later ("harvest now, decrypt later") once quantum computers mature.

Figure 1: Cryptography evolution

How PQC Works

The National Institute of Standards and Technology (NIST) has led a multi-year standardization effort. Here are the main algorithm families and notable examples.

Lattice-Based Cryptography

Lattice problems are believed to be hard for quantum computers, and most of the leading candidates come from this category:
CRYSTALS-Kyber (Key Encapsulation Mechanism)
CRYSTALS-Dilithium (Digital Signatures)
These schemes use complex geometric structures (lattices) where finding the shortest vector is computationally hard, even for quantum computers. For example, ML-KEM (formerly Kyber) establishes encryption keys using lattices but requires more data transfer (2,272 bytes vs.
64 bytes for elliptic curves). The figure below illustrates how lattice-based cryptography works: imagine solving a maze with two maps, one public (twisted paths) and one private (shortest route). Only the private map holder can navigate efficiently.

Code-Based Cryptography

Based on the difficulty of decoding random linear codes.
Classic McEliece: Resistant to quantum attacks for decades.
Pros: Very well-studied and conservative.
Cons: Very large public key sizes.
These schemes rely on error-correcting codes. The Classic McEliece scheme hides messages by adding intentional errors only the recipient can fix. How it works:
Key generation: Create a parity-check matrix (public key) and a secret decoder (private key).
Encryption: Encode a message with random errors.
Decryption: Use the private key to correct the errors and recover the message.

Figure 3: Code-Based Cryptography Illustration

Multivariate & Hash-Based Cryptography

Multivariate: Based on solving systems of multivariate quadratic equations over finite fields, a problem believed to be quantum-resistant.
Hash-Based: Uses hash functions to construct secure digital signatures. SPHINCS+ is stateless and hash-based, good for long-term digital signature security.

Challenges and Adoption

Integration: PQC must work within existing TLS, VPN, and hardware stacks.
Key sizes: PQC algorithms often require larger keys. For example, Classic McEliece public keys can exceed 1 MB.
Hybrid schemes: Combining classical and post-quantum methods allows gradual adoption.
Performance: Lattice-based methods are fast but increase bandwidth usage.
Standardization: NIST has finalized three PQC standards (e.g., ML-KEM) and is testing others. Organizations must start migrating now, as transitions can take decades.
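As a toy illustration of the error-correction idea behind code-based schemes, the sketch below uses a simple repetition code (far weaker than the Goppa codes McEliece actually uses, and with no cryptographic hardness): it encodes a message, injects one deliberate bit error per block, and still recovers the message by majority vote.

```python
import random

def encode(bits, n=5):
    # Toy error-correcting code: repeat each bit n times. This is NOT
    # McEliece; it only illustrates decoding in the presence of errors.
    return [b for bit in bits for b in [bit] * n]

def add_errors(codeword, n=5, seed=0):
    # Flip exactly one bit in every n-bit block (deterministic via seed).
    rng = random.Random(seed)
    out = codeword[:]
    for block in range(len(out) // n):
        i = block * n + rng.randrange(n)
        out[i] ^= 1
    return out

def decode(codeword, n=5):
    # Majority vote per block recovers each original bit despite one error.
    return [int(sum(codeword[i:i + n]) > n // 2)
            for i in range(0, len(codeword), n)]

message = [1, 0, 1, 1]
noisy = add_errors(encode(message))
recovered = decode(noisy)  # → [1, 0, 1, 1]
```

In McEliece the roles are analogous but reversed at encryption time: the sender adds the errors, and only the holder of the private decoder can remove them.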
Adopting PQC with BIG-IP

As of F5 BIG-IP 17.5, the BIG-IP supports the widely implemented X25519Kyber768Draft00 cipher group for client-side TLS negotiations (BIG-IP as a TLS server). Other cipher groups and capabilities will become available in subsequent releases.

Cipher walkthrough

Let's take the cipher supported in v17.5.0 (hybrid X25519_Kyber768) as an example and walk through it.
X25519: A classical elliptic-curve Diffie-Hellman (ECDH) algorithm.
Kyber768: A post-quantum Key Encapsulation Mechanism (KEM).
The goal is to securely establish a shared secret key between the two parties using both classical and quantum-resistant cryptography.

Key Exchange

X25519 exchange: Alice and Bob exchange X25519 public keys. Each computes a shared secret using their own private key and the other's public key.
Kyber768 exchange: Alice uses Bob's Kyber768 public key to encapsulate a secret, producing a ciphertext and a shared secret. Bob uses his Kyber768 private key to decapsulate the ciphertext and recover the same shared secret.
Both parties now have a classical shared secret and a post-quantum shared secret. They combine them using a KDF (Key Derivation Function).

Why the hybrid approach is being followed:
If quantum computers are not practical yet, X25519 provides strong classical security.
If a quantum computer arrives, Kyber768 keeps communications secure.
It helps organizations migrate gradually from classical to post-quantum systems.

Implementation guide

F5 published the article "Enabling Post-Quantum Cryptography in F5 BIG-IP TMOS" on implementing PQC on BIG-IP v17.5.

Create a new Cipher Rule

To create a new Cipher Rule, log in to the BIG-IP Configuration Utility and go to Local Traffic > Ciphers > Rules.
Select Create.
In the Name box, provide a name for the Cipher Rule.
For Cipher Suites, select any of the suites from the provided Cipher Suites list. Use ALL or DEFAULT to list all of the available suites.
For DH Groups, enter X25519KYBER768 to restrict to only this PQC cipher. For example: X25519KYBER768
For Signature Algorithms, select an algorithm. For example: DEFAULT
Select Finished.

Create a new Cipher Group

In the BIG-IP Configuration Utility, go to Local Traffic > Ciphers > Groups.
Select Create.
In the Name box, provide a name for the Cipher Group.
Add the newly created Cipher Rule to the "Allow the following" box or "Restrict the Allowed list to the following" in Group Details. All of the other details, including DH Group, Signature Algorithms, and Cipher Suites, will be reflected in the Group Audit as per the selected rule.
Select Finished.

Configure a Client SSL Profile

In the BIG-IP Configuration Utility, go to Local Traffic > Profiles > SSL > Client.
Create a new client SSL profile or edit an existing one.
For Ciphers, select the Cipher Group radio button and select the created group to enable post-quantum cryptography for this client SSL profile.

NGINX Support for PQC

We are pleased to announce support for Post-Quantum Cryptography (PQC) starting with NGINX Plus R33. NGINX provides PQC support using the Open Quantum Safe provider library for OpenSSL 3.x (oqs-provider), available from the Open Quantum Safe (OQS) project. The oqs-provider library adds support for all post-quantum algorithms supported by the OQS project into network protocols like TLS in OpenSSL 3-reliant applications. All ciphers/algorithms provided by oqs-provider are supported by NGINX. To configure NGINX with PQC support using oqs-provider, follow these steps:

Install the necessary dependencies:
sudo apt update
sudo apt install -y build-essential git cmake ninja-build libssl-dev pkg-config

Download and install liboqs:
git clone --branch main https://github.com/open-quantum-safe/liboqs.git
cd liboqs
mkdir build && cd build
cmake -GNinja -DCMAKE_INSTALL_PREFIX=/usr/local -DOQS_DIST_BUILD=ON ..
ninja
sudo ninja install

Download and install oqs-provider:
git clone --branch main https://github.com/open-quantum-safe/oqs-provider.git
cd oqs-provider
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DOPENSSL_ROOT_DIR=/usr/local/ssl ..
make -j$(nproc)
sudo make install

Download and install OpenSSL with oqs-provider support:
git clone https://github.com/openssl/openssl.git
cd openssl
./Configure --prefix=/usr/local/ssl --openssldir=/usr/local/ssl linux-x86_64
make -j$(nproc)
sudo make install_sw

Configure OpenSSL for oqs-provider in /usr/local/ssl/openssl.cnf:
openssl_conf = openssl_init

[openssl_init]
providers = provider_sect

[provider_sect]
default = default_sect
oqsprovider = oqsprovider_sect

[default_sect]
activate = 1

[oqsprovider_sect]
activate = 1

Generate post-quantum certificates:
export OPENSSL_CONF=/usr/local/ssl/openssl.cnf
# Generate CA key and certificate
/usr/local/ssl/bin/openssl req -x509 -new -newkey dilithium3 -keyout ca.key -out ca.crt -nodes -subj "/CN=Post-Quantum CA" -days 365
# Generate server key and certificate signing request (CSR)
/usr/local/ssl/bin/openssl req -new -newkey dilithium3 -keyout server.key -out server.csr -nodes -subj "/CN=your.domain.com"
# Sign the server certificate with the CA
/usr/local/ssl/bin/openssl x509 -req -in server.csr -out server.crt -CA ca.crt -CAkey ca.key -CAcreateserial -days 365

Download and install NGINX Plus.

Configure NGINX to use the post-quantum certificates:
server {
    listen 0.0.0.0:443 ssl;
    ssl_certificate /path/to/server.crt;
    ssl_certificate_key /path/to/server.key;
    ssl_protocols TLSv1.3;
    ssl_ecdh_curve kyber768;
    location / {
        return 200 "$ssl_curve $ssl_curves";
    }
}

Conclusion

By adopting PQC, we can future-proof encryption against quantum threats while balancing security and practicality. While technical hurdles remain, collaborative efforts between researchers, engineers, and policymakers are accelerating the transition.
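The "combine them using a KDF" step from the hybrid key-exchange walkthrough earlier in this article can be sketched as follows. This is an illustrative HKDF-SHA256 construction over two placeholder secrets, not the exact key schedule TLS 1.3 or BIG-IP uses; the fixed byte values and the info label stand in for the real X25519 and Kyber768 outputs.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): condense input keying material into a PRK.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF-Expand (RFC 5869): derive `length` bytes of output keying material.
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

# Placeholder secrets; in a real handshake these come from the X25519 ECDH
# computation and the Kyber768 decapsulation respectively.
classical_secret = b"\x01" * 32
pq_secret = b"\x02" * 32

# Concatenating both secrets before extraction means the derived session key
# stays safe as long as EITHER component remains unbroken.
prk = hkdf_extract(b"\x00" * 32, classical_secret + pq_secret)
session_key = hkdf_expand(prk, b"hybrid handshake", 32)
```

Compromising only the classical secret (or only the post-quantum one) changes the concatenated input, so the attacker cannot reproduce the session key; that is the core of the hybrid guarantee.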
Related Content

New Features in BIG-IP Version 17.5.0
K000149577: Enabling Post-Quantum Cryptography in F5 BIG-IP TMOS
F5 NGINX Plus R33 Release Now Available | DevCentral

BIG-IP APM integration with Open Policy Agent (OPA)
In this article, we explore a technical deployment where BIG-IP APM integrates with Open Policy Agent (OPA) via the HTTP Connector to fetch client authorization information and enforce access.

Open Policy Agent (OPA)

OPA is a unified policy engine for cloud-native environments, enabling policy-as-code across infrastructure, APIs, and data. It is widely adopted across industries for cloud-native authorization and policy management:
Used by around 50% of Fortune 500 companies (per OPA's creator) in sectors like finance, tech, and healthcare.
Major adopters: Netflix, Goldman Sachs, Airbnb, Uber, Pinterest, and Cisco.
Kubernetes: Integrated with Istio, Kubernetes Gatekeeper, and cloud platforms (EKS, AKS, GKE).

BIG-IP APM HTTP Connector

The HTTP Connector enables BIG-IP APM to post an HTTP request to an external HTTP server. This lets APM make HTTP calls from a per-request policy without the need for an iRule, for example. The typical use for the HTTP Connector is to provide access to an external API or service. For example, you can use the HTTP Connector to check a server against an external blocklist or an external reputation engine, and then use the results in an Access Policy Manager per-request policy.

Lab environment and configurations

Lab setup:
BIG-IP APM v15.1+
OPA server
Backend (API endpoint)
The BIG-IP APM HTTP Connector request shown below uses APM access variables to fetch the authorization level from the OPA server.

Related Content

About the HTTP Connector
Open Policy Agent Introduction

Mitigating OWASP API Security Risk: Excessive Data Exposure using F5 XC Platform
This is part of the OWASP API Security Top 10 mitigation series; you can refer here for an overview of these categories and F5 Distributed Cloud Platform (F5 XC) Web Application and API Protection (WAAP).

Introduction to Excessive Data Exposure

Application Programming Interfaces (APIs) are the foundation stone of the modern, evolving web applications driving the digital world. They are part of all phases of the product development life cycle, from design and testing through to end customers using them in their day-to-day tasks. Since they don't always have restrictions in place, APIs sometimes expose sensitive data such as Personally Identifiable Information (PII), Credit Card Numbers (CCN), and Social Security Numbers (SSN). Because of these issues, they are among the most exploited blocks in cybercrime for gaining access to customer information, which can be sold or further used in other exploits like credential stuffing. Most of the time, the design stage doesn't include this security perspective and relies on third-party tools to sanitize the data before displaying results to customers. Identifying sensitive information in these huge chunks of API response data is sophisticated work, and most of the security tools available in the market don't support this capability. So, instead of relying on third-party tools, it's recommended to follow shift-left strategies and add security as part of the development phase. During this phase, developers must review and ensure that the API returns only the required details instead of unnecessary properties, to avoid sensitive data exposure.
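As a shift-left illustration, a response audit during development could scan API payloads for PII patterns like the SSNs and card numbers named above before they ever reach production. The sketch below uses naive regular expressions; real detection engines, including commercial sensitive-data discovery, are far more rigorous (e.g., Luhn-validating card numbers to cut false positives).

```python
import re

# Deliberately simple detectors for the PII types the article names.
# US SSN: three-two-four digit groups; CCN: 13-16 digits with optional
# space/dash separators. Both are illustrative, not production-grade.
PATTERNS = {
    "CCN": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text: str) -> list[str]:
    # Return the sorted list of PII types detected in a response body.
    return sorted(kind for kind, pat in PATTERNS.items() if pat.search(text))

find_pii("review: great juice! my ssn is 123-45-6789")  # → ["SSN"]
```

Wiring such a check into CI tests against recorded API responses catches accidental exposure before release, which is exactly the development-phase review the paragraph above recommends.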
Excessive data exposure attack scenario 1

To showcase this category, we expose sensitive details like CCN and SSN in one of the product reviews of the Juice Shop application (refer to the links below for more info).

Overview of Data Guard: Data Guard is an F5 XC load balancer feature which shields responses from exposing sensitive information like CCN/SSN by masking these fields with a string of asterisks (*). Depending on requirements, customers can configure multiple rules to apply or skip processing for certain paths and routes.

Preventing excessive data exposure using F5 Distributed Cloud:
Step 1: Create an origin pool - refer here for more information.
Step 2: Create a Web Application Firewall (WAF) policy - refer here for details.
Step 3: Create an HTTPS load balancer (LB) with the above created pool and WAF policy - refer here for more information.
Step 4: Upload your application swagger file and add it to the above load balancer - refer here for more details.
Step 5: Configure Data Guard on the load balancer with the action and path.
Step 6: Validate that the sensitive data is masked. Open Postman or a browser, check the product reviews section/API, and validate that these details are hidden rather than exposed as in the original application. In the Distributed Cloud Console, expand the security event and check the WAF section to understand why these details were masked.

Excessive data exposure attack scenario 2

In this demonstration we use an API-based vulnerable application, VAmPI (a vulnerable API made with Flask that includes vulnerabilities from the OWASP API Security Top 10; for more info follow the repo link). Follow the steps below to bring up the setup:
Step 1: Host the VAmPI application inside a virtual machine.
Step 2: Log in to the XC console, create an HTTP LB, and add the hosted application as an origin server.
Step 3: Access the application to check its availability.
Step 4: Now enable API Discovery and configure a sensitive data discovery policy by adding all the compliance frameworks in your HTTP LB config.
Step 5: Hit the vulnerable API endpoint '/users/v1/_debug', which exposes sensitive data like username, password, etc.
Step 6: Navigate to the security overview dashboard in the XC console and select the API Endpoints tab. Check for vulnerable endpoint details.
Step 7: In the Sensitive Data section, click the ellipsis on the right side to get options for action.
Step 8: Clicking the option 'Add Sensitive Data Exposure Rule' automatically adds the entries for the sensitive data exposure rule to your existing LB configs. Apply the configuration.
Step 9: Now hit the vulnerable API endpoint '/users/v1/_debug' again. You can see masked values in the response: all letters changed to 'a' and numbers converted to '1'.
Step 10: Optionally, you can also manually configure a sensitive data exposure rule by adding details about the vulnerable API endpoint:
Log back in to the XC console.
Start configuring the API Protection rule in the created HTTP LB.
Click Configure in the Sensitive Data Exposure Rules section.
Click Add Item to create the first rule.
In the Target section, enter the path that will respond to the request. Also enter one or more methods with responses containing sensitive information.
In the Values field of the Pattern section, enter the JSON field value you want to mask. For example, to mask all emails in the array users, enter "users[_].email". Note that an underscore between the square brackets indicates the array's elements.
Once the above rule is applied, values in the response will be masked as follows: all letters change to a or A (matching case) and all numbers convert to 1.
Click Apply to save the rule to the list of Sensitive Data Exposure Rules.
Optionally, click Add Item to add more rules.
Click Apply to save the list of rules to your load balancer.
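The masking behavior described above (letters become a or A matching case, digits become 1) can be reproduced with a small sketch. The field-path handling here is simplified to the single "users[_].email" pattern from the example; the actual XC rule engine supports arbitrary JSON field paths.

```python
def mask_value(value: str) -> str:
    # XC-style masking: letters -> a/A (matching case), digits -> 1,
    # punctuation and symbols left unchanged.
    def mask_char(ch: str) -> str:
        if ch.isdigit():
            return "1"
        if ch.islower():
            return "a"
        if ch.isupper():
            return "A"
        return ch
    return "".join(mask_char(c) for c in value)

def apply_rule(doc: dict) -> dict:
    # Hard-coded equivalent of the "users[_].email" pattern: mask the
    # email property of every element in the users array.
    for user in doc.get("users", []):
        if "email" in user:
            user["email"] = mask_value(user["email"])
    return doc

masked = apply_rule({"users": [{"name": "admin", "email": "Apisec01@f5.com"}]})
# masked["users"][0]["email"] == "Aaaaaa11@a1.aaa"
```

Note that masking preserves length, case pattern, and structure, which keeps responses debuggable while hiding the actual values.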
Step 11: After completing Step 10, hit the vulnerable API endpoint again. You can see masked values in the response as per the configuration done in Step 10.

Conclusion

As we have seen in the above use cases, sensitive data exposure occurs when an application does not protect sensitive data like PII, CCN, SSN, auth credentials, etc. Leaking such information may lead to serious consequences, so it is extremely critical for organizations to reduce the risk of sensitive data exposure. As demonstrated above, the F5 Distributed Cloud Platform can help protect against the exposure of such sensitive data with its easy-to-use API security solution offerings.

For further information check the links below:
OWASP API Security - Excessive Data Exposure
OWASP API Security - Overview article
F5 XC Data Guard Overview
OWASP Juice Shop
VAmPI

Mitigating OWASP API Security Risk: Unrestricted Resource Consumption using F5 Distributed Cloud Platform
Introduction: An Unrestricted Resource Consumption vulnerability occurs when an API allows end users to over-utilize resources (e.g., CPU, memory, bandwidth, or storage) without proper limitations being enforced. This can overwhelm the system and lead to performance degradation, denial of service (DoS), or complete unavailability of services for valid users.

Attack Scenario: In this demo, we generate huge traffic and observe the server's behavior along with its response time.

Fig 1: Using Apache JMeter to send an arbitrary number of requests to an API endpoint continuously in a very short span of time
Fig 2: (From left to right) Response time during normal usage and under huge traffic

The above results show a higher response time when abnormal traffic is sent to a single API endpoint, compared to normal usage. With further increases in volume, the server can become unresponsive, deny requests from real users, and suffer a DoS attack.

Fig 3: Attackers performing an arbitrary number of API requests to consume the server's resources

Customer Solution: F5 Distributed Cloud (XC) WAAP helps solve the above vulnerability by rate limiting API requests, thereby preventing complete consumption of memory, file system storage, CPU resources, etc. This protects against traffic surges and DoS attacks. This article provides the F5 XC WAAP configuration to control the rate of requests sent to the origin server.

Steps to configure rate limiting in F5 XC and validate it:
1. Add API endpoints with rate limiter values
2. Validate that the request rate violates the threshold limit
3. Verify blocked requests in the F5 XC console

Step 1: Add API endpoints with rate limiter values
Log in to the F5 XC console and navigate to Home > Load Balancers > Manage > Load Balancers.
Select the load balancer to which API rate limiting should be applied.
Click the menu in the Actions column of the app's load balancer and click Manage Configurations, as shown below, to display the load balancer configuration.

Fig 4: Selecting the menu to manage the load balancer configuration

Once the load balancer configuration is displayed, click the Edit Configuration button at the top right of the page. Navigate to Security Configuration, select "API Rate Limit" in the Rate Limiting dropdown, and click "Add Item" under the API Endpoint section.

Fig 5: Choosing API Rate Limit to configure API endpoints
Fig 6: Configuring a rate limit for an API endpoint

Here a rate limit is applied to GET requests for the API endpoint "/product/OLJCESPC7Z". Click the Apply button at the bottom right of the screen, then click "Save and Exit" to save the configuration to the load balancer.

Step 2: Validate requests that violate the threshold limit

Fig 7: Verifying the first request

The first request sent after configuring the API endpoint succeeds, returning a response with status code 200. Requests to the same API endpoint beyond the threshold limit are blocked, as shown below.

Fig 8: Rate limiting the API request

Step 3: Verify blocked requests in the F5 XC console
From the F5 XC console homepage, navigate to WAAP > Apps & APIs > Security and select the load balancer. Click Requests to view the request logs, as below.

Fig 9: Blocked API request details in the F5 XC console

Requests beyond the configured rate limit are dropped with response code 429.

Conclusion:
In this article, we have seen how F5 XC WAAP protects APIs from being overwhelmed when an application receives an abnormal amount of traffic by rate limiting the requests. XC's rate limiting feature helps prevent DoS attacks and keeps the service available at all times.
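The enforcement behavior demonstrated above can be approximated with a minimal fixed-window counter. This is a conceptual sketch only: F5 XC applies rate limiting at the load balancer, and the class below is illustrative, not part of any F5 product or API.

```python
import time
from collections import defaultdict

# Conceptual sketch of fixed-window rate limiting: allow up to `limit`
# requests per client per window, then answer 429 until the window rolls over.
class FixedWindowRateLimiter:
    def __init__(self, limit, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.counters = defaultdict(int)  # (client, window index) -> request count

    def check(self, client, now=None):
        """Return 200 if the request is allowed, 429 if the limit is exceeded."""
        now = time.time() if now is None else now
        key = (client, int(now // self.window))
        self.counters[key] += 1
        return 200 if self.counters[key] <= self.limit else 429

limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
print([limiter.check("10.0.0.1", now=0) for _ in range(5)])  # -> [200, 200, 200, 429, 429]
```

Note that counts reset when a new window starts, which mirrors the behavior seen in Fig 7 and Fig 8: the first requests succeed with 200, requests beyond the threshold are rejected with 429.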
Related Links:
- API4:2019 Lack of Resources and Rate Limiting
- API4:2023 Unrestricted Resource Consumption
- Creating Load Balancer Steps
- F5 Distributed Cloud Security WAAP
- F5 Distributed Cloud Platform

How To Secure Multi-Cloud Networking with Routing & Web Application and API Protection
Introduction
As intra-cloud networking requirements continue to proliferate, organizations are increasingly leveraging multi-cloud solutions to optimize performance, cost, and resilience. This approach, however, brings the challenge of ensuring robust security across diverse cloud and on-prem environments. Secure multi-cloud networking, combined with advanced web application and API protection services, is essential to safeguard digital assets, maintain compliance, and uphold operational integrity.

Understanding Multi-Cloud Networking
Multi-cloud networking involves orchestrating connectivity between multiple cloud platforms such as AWS, Microsoft Azure, and Google Cloud Platform, as well as on-prem and private data centers. This approach allows organizations to avoid vendor lock-in, enhance redundancy, and tailor services to specific workloads. However, managing networking and web application security across these platforms can be complex due to differing security models, configurations, and interfaces.

Key Components of Multi-Cloud Networking
- Inter-cloud Connectivity: Establishing secure connections between multiple cloud providers to ensure seamless data flow and application interoperability.
- Unified Management: Implementing centralized management tools to oversee network configurations, policies, and security protocols across all cloud environments.
- Automated Orchestration: Utilizing automation to provision, configure, and manage network resources dynamically, reducing manual intervention and potential errors.
- Compliance and Governance: Ensuring adherence to regulatory requirements and best practices for data protection and privacy across all cloud platforms.

Securing Multi-Cloud Environments
Security is paramount in multi-cloud networking. With multiple entry points and varying security measures across different cloud providers, organizations must adopt a comprehensive strategy to protect their assets.
Strategies for Secure Multi-Cloud Networking
- Zero Trust Architecture: Implementing a zero-trust model that continuously verifies and validates every request, irrespective of its source, to mitigate risk.
- Encryption: Utilizing advanced encryption methods for data in transit and at rest to protect against unauthorized access.
- Continuous Monitoring: Deploying monitoring tools to detect, analyze, and respond to threats in real time.
- Application Security: Using a common framework for web application and API security reduces the number of steps needed to identify and remediate security risks and misconfigurations across disparate infrastructures.

Web Application and API Protection Services
Web applications and APIs are critical components of modern digital ecosystems. Protecting these assets from cyber threats is crucial, especially in a multi-cloud environment where they may be distributed across various platforms.

Comprehensive Web Application Protection
Web Application Firewalls (WAFs) play a vital role in safeguarding web applications. They filter and monitor HTTP traffic between a web application and the internet, blocking malicious requests and protecting against common threats such as SQL injection, cross-site scripting (XSS), and DDoS attacks.
- Advanced Threat Detection: Employing machine learning and artificial intelligence to identify and block sophisticated attacks.
- Application Layer Defense: Providing protection at the application layer, where traditional network security measures may fall short.
- Scalability and Performance: Ensuring WAF solutions can scale and perform adequately under varying traffic loads and attack volumes.

Securing APIs in Multi-Cloud Environments
APIs are pivotal for integration and communication between services. Securing APIs involves protecting them from unauthorized access, misuse, and exploitation.
- Authentication and Authorization: Implementing strong authentication mechanisms such as OAuth and JWT to ensure only authorized users and applications can access APIs.
- Rate Limiting: Controlling the number of API calls to prevent abuse and ensure fair usage across consumers.
- Input Validation: Validating input data to prevent injection attacks and ensure data integrity.
- Threat Detection: Monitoring API traffic for anomalies and potential threats, and responding swiftly to mitigate risks.

Best Practices for Secure Multi-Cloud Networking
To effectively manage and secure multi-cloud networks, organizations should adhere to best practices that align with their operational and security objectives.

Adopt a Holistic Security Framework
A holistic security framework encompasses the entire multi-cloud environment, focusing on integration and coordination between the different security measures across cloud platforms.
- Unified Policy Enforcement: Implementing consistent security policies across all cloud environments to ensure uniform protection.
- Regular Audits: Conducting frequent security audits to identify vulnerabilities, assess compliance, and improve security posture.
- Incident Response Planning: Developing and regularly updating incident response plans to handle potential breaches and disruptions efficiently.

Leverage Security Automation
Automation can significantly enhance security in multi-cloud environments by reducing human error and ensuring timely responses to threats.
- Automated Compliance Checks: Using automation to continuously monitor and enforce compliance with security standards and regulations.
- Real-time Threat Mitigation: Implementing automated remediation processes to address security threats as they are detected.

Demo: Bringing it Together with F5 Distributed Cloud
Using services in Distributed Cloud, F5 brings everything together.
Deploying and orchestrating connectivity between hybrid and multi-cloud environments, Distributed Cloud not only connects these environments, it also secures them with universal Web App and API Protection policies. The following solution uses Distributed Cloud Network Connect, App Connect, and Web App & API Protection to connect, deliver, and secure an application whose services exist in different cloud and on-prem environments. This video shows how each of the features in this solution comes together:

Bringing it Together with Automation
Orchestrating this end to end, including the Distributed Cloud components themselves, is straightforward using GitHub Workflow Actions combined with Terraform. The following automation workflow guide and companion article at DevCentral provide all the steps and modular code necessary to build a complete multicloud environment, manage Kubernetes clusters, and deploy the sample functional multi-site application, Arcadia Finance.

GitHub Repository: https://github.com/f5devcentral/f5-xc-terraform-examples/tree/main/workflow-guides/smcn/mcn-distributed-apps-l3

Conclusion
Secure multi-cloud networking, combined with robust web application and API protection services, is vital for organizations seeking to leverage the benefits of a multi-cloud strategy without compromising security. By adopting comprehensive security measures, enforcing best practices, and leveraging advanced technologies, organizations can safeguard their digital assets, ensure compliance, and maintain operational integrity in a dynamic and ever-evolving cloud landscape.
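As an illustration of the automation approach described above, a GitHub Actions job can drive Terraform against the example repository. This is a hedged sketch: the workflow name, step layout, secret name, and credential handling are assumptions for illustration, not the exact contents of the referenced repository or its documented setup.

```yaml
# Hypothetical workflow sketch; names, paths, and secrets are assumptions.
name: deploy-mcn-demo
on:
  workflow_dispatch:

jobs:
  terraform-apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Terraform init and apply
        # Working directory matches the workflow guide path in the repo URL above
        working-directory: workflow-guides/smcn/mcn-distributed-apps-l3
        env:
          # Distributed Cloud API credential; secret name and file handling
          # are simplified here and would follow the repo's own instructions
          XC_API_P12_FILE: ${{ secrets.XC_API_P12_FILE }}
        run: |
          terraform init
          terraform apply -auto-approve
```

The same pattern extends naturally to a `terraform destroy` job for tearing the environment down when the demo is complete.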
Additional Resources
- Introducing Secure MCN features on F5 Distributed Cloud
- Driving Down Cost & Complexity: App Migration in the Cloud
- The App Delivery Fabric with Secure Multicloud Networking
- Scale Your DMZ with F5 Distributed Cloud Services
- Seamless Application Migration to OpenShift Virtualization with F5 Distributed Cloud
- Automate Multicloud Networking w/ Terraform: routing and app connect on F5 Distributed Cloud

BIG-IP Next for Kubernetes, addressing today's enterprise challenges
Enterprises, and not just cloud service providers, have started adopting Kubernetes (K8s), as it offers strategic advantages in agility, cost efficiency, security, and future-proofing:
- Cloud Native Functions account for around 60% TCO savings.
- Easier to deploy, manage, maintain, and scale.
- Easier to add and roll out new services.

Kubernetes complexities
The move from traditional application deployments to microservices and containerized services introduced new complexities.

Networking Challenges with Kubernetes Default Deployments
Kubernetes networking has problems when using default settings. These problems can affect performance, security, and reliability in production environments.

Core Networking Challenges

Flat Network Model
- All pods can communicate with all other pods by default (east-west traffic).
- No network segmentation between applications.
- Potential security risks from excessive inter-pod communication.

Service Discovery Limitations
- DNS-based service discovery has caching behaviors that can delay updates.
- No built-in load-balancing awareness (can route to unhealthy pods during updates).
- Limited traffic-shaping capabilities (all requests treated equally).

Ingress Challenges
- No default ingress controller installed.
- Multiple ingress controllers can conflict if not properly configured.
- SSL/TLS termination requires manual certificate management.

Network Policy Absence
- No network policies applied by default (all traffic allowed).
- Difficult to implement zero-trust networking principles.
- No default segmentation between namespaces.

DNS Issues
- CoreDNS default cache settings may not be optimal.
- Pod DNS policies may not match application requirements.
- NodeLocal DNSCache not enabled by default.

Load-Balancing Problems
- `ClusterIP` is the default Service type (no external access).
- `NodePort` services can conflict on port allocations.
- Cloud provider load balancers can be expensive if overused.

CNI (Container Network Interface) Considerations
- The default CNI plugin may not support required features.
- Network performance varies significantly between CNI choices.
- IP address management is challenging at scale.

Performance-Specific Issues

kube-proxy Inefficiencies
- The default iptables mode becomes slow with many services.
- IPVS (IP Virtual Server) mode requires explicit configuration.
- Service mesh sidecars can double latency.

Pod Network Overhead
- Additional hops for cross-node communication.
- Encapsulation overhead with some CNI plugins.
- No QoS guarantees for network traffic.

Multicluster Communication
- No default solution for cross-cluster networking.
- Complex to establish secure connections between clusters.
- Service discovery doesn't span clusters by default.

Security Challenges
- No default encryption between pods.
- No default authentication for service-to-service communication.
- All namespaces are network-accessible to each other by default.
- External traffic can bypass ingress controllers if misconfigured.

These challenges highlight why most production Kubernetes deployments require significant, complex customization beyond the default configuration. Figure 1 shows those workarounds in place and how complicated the resulting setup becomes, with multiple add-ons required to overcome Kubernetes limitations. In the following section, we explore how BIG-IP Next for Kubernetes simplifies and enhances application delivery and security within a Kubernetes environment.

BIG-IP Next for Kubernetes
Introducing BIG-IP Next for Kubernetes not only reduces complexity, it also moves the main networking components into the TMM pods rather than relying on the host server. Consider where current network functions are applied: the host kernel.
Whether you are applying NAT or firewall services, this requires intervention on the host side, which undermines a zero-trust architecture, and traffic performance is limited by the default kernel IP and routing capabilities.

Deployment overview
Features introduced in the 2.0.0 release include:
- API Gateway CRs (Custom Resources).
- F5 IPAM Controller to manage IP addresses for Gateway resources.
- Seamless firewall policy integration in Gateway API.
- Ingress DDoS protection in Gateway API.
- Enforced access control for Debug and QKView APIs with Admin Token.

In this section, we explore the steps to deploy BIG-IP Next for Kubernetes in your environment.

Infrastructure
Use the flavor that matches your needs and lab type (demo or production); for labs, microk8s, k8s, or kind, for example.

BIG-IP Next for Kubernetes
helm and docker are required packages for this installation. Follow the installation guide; the current BIG-IP Next for Kubernetes 2.0.0 GA release is available. For the objective of this article, you may skip the NVIDIA DOCA portion (that is the focus of a coming article) and go directly to BIG-IP Next for Kubernetes.

Install additional CRDs
Once the licensing and core pods are ready, you can move on to adding additional CRDs (Custom Resource Definitions):
- BIG-IP Next for Kubernetes CRDs: BIG-IP Next for Kubernetes CRDs
- Custom CRDs: Install F5 Use case Custom Resource Definitions

Related Content
- BIG-IP Next for Kubernetes v2.0.0 Release Notes
- System Requirements
- BIG-IP Next for Kubernetes CRDs
- BIG-IP Next for Kubernetes
- BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
- F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
- BIG-IP Next for Kubernetes running in Amazon EKS