security
SNI Routing with BIG-IP

In the previous article, The Three HTTP Routing Patterns, Lori MacVittie covers three methods of routing. Today we will look at Server Name Indication (SNI) routing as an additional method of routing HTTPS, or any protocol that uses TLS and SNI. Using SNI we can route traffic to a destination without having to terminate the SSL connection. This enables several benefits, including:

- Reduced number of public IPs
- Simplified configuration
- More intelligent routing of TLS traffic

Terminating SSL Connections

When you have an SSL certificate and key you can perform the cryptographic actions required to encrypt traffic using TLS. This is what I refer to as "terminating the SSL connection" throughout this article. Routing TLS traffic is a chicken-and-egg problem: you want to route the traffic by inspecting its contents, but that normally requires terminating the SSL connection first. The goal of this article is to layer in traffic routing for TLS traffic without requiring the original SSL certificate and key.

Server Name Indication (SNI)

SNI is a TLS extension that makes it possible to "share" certificates on a single IP address. This works because the client uses a TLS extension to request a specific name before the server responds with an SSL certificate. Prior to SNI, the other options were a wildcard certificate or a Subject Alternative Name (SAN) certificate that allows you to specify multiple names with a single certificate.

SNI with Virtual Servers

It has been possible to use SNI on F5 BIG-IP since TMOS 11.3.0. KB13452 outlines how it can be configured. In that scenario (from the KB article) the BIG-IP is terminating the SSL connection. Not all clients support SNI, so you will always need to specify a "fallback" profile that is used when an SNI name is not sent or not matched. The next example will look at how to use SNI without terminating the SSL connection.

SNI Routing

Occasionally you may need a hybrid configuration: terminating some SSL connections on the BIG-IP and passing others through without terminating SSL. One method is to create two separate virtual servers, one for SSL connections that the BIG-IP will handle (using a clientssl profile) and one where it will not handle SSL (just TCP). This works OK for a small number of backends, but does not scale well if you have many backends (you run out of public IP addresses). Using SNI routing we can handle everything on a single virtual server / public IP address.

There are three methods for performing SNI routing with BIG-IP:

1. iRule with binary scan
   a. Article by Colin Walker, code attributed to Joel Moses
   b. Code Share by Stanislas Piron
2. iRule with SSL::extensions
3. Local Traffic Policy

Option #1 is for folks who prefer complete control of the TLS protocol. It only requires the use of a TCP profile. Options #2 and #3 only require the use of an SSL persistence profile and a TCP profile.

SNI Routing with Local Traffic Policy

We will skip options #1 and #2 in this article and look at using a Local Traffic Policy for SNI routing. For a review of Local Traffic Policies check out the following DevCentral articles:

- LTM Policy (Jan 2015)
- Simplifying Local Traffic Policies in BIG-IP 12.1 (June 2016)

In previous articles about Local Traffic Policies the focus was on routing HTTP traffic, but today we will use it to route SSL connections using SNI.
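As a concrete point of reference for the example that follows, a policy along these lines might look roughly like this when listed in tmsh. The policy, rule, hostname, and pool names are illustrative assumptions, and the exact operand syntax can vary by TMOS version, so treat this as a sketch and verify it against a GUI-built policy before relying on it:

    # sketch of a first-match policy that routes on the TLS server_name (SNI)
    # extension at client hello time, without any clientssl profile involved
    ltm policy sni_routing {
        controls { forwarding }
        requires { ssl-persistence }
        strategy first-match
        rules {
            app1 {
                conditions {
                    0 { ssl-extension server-name ssl-client-hello values { app1.example.com } }
                }
                actions {
                    0 { forward ssl-client-hello select pool app1_pool }
                }
            }
        }
    }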
In the first example, a Local Traffic Policy named "sni_routing" sets a condition on the SSL extension "servername" and sends the traffic to a pool without terminating the SSL connection. The pool member could be another server or another BIG-IP device. The second example forwards the traffic to another virtual server that is configured with a clientssl profile; this uses VIP targeting to send traffic to another virtual server on the same device. In both examples it is important to note that the condition/action has been changed from occurring on "request" (which maps to an HTTP L7 request) to "ssl client hello". By performing the action prior to any L7 processing, we can forward the traffic without terminating the SSL connection.

The example policy, "sni_routing", can be attached to a virtual server that only has a TCP profile and an SSL persistence profile. No HTTP or clientssl profile is required! (A tmsh sketch of such a virtual server appears at the end of this article.) This method can also be used to consolidate multiple SSL virtual servers that have different APM and/or ASM policies behind a single virtual server. This is similar to the architecture used by the Container Connector for Cloud Foundry, which creates a two-tier load-balancing solution on a single device.

Routed Correctly?

TLS 1.3 has interesting proposals on how to obscure the servername (TLS in TLS?), but for now this is a useful and practical method of handling multiple SSL certificates on a single IP. In the future this may still be possible with TLS 1.3; for example, an HTTP fronting service could act as a tier-1 virtual server (this is just my personal speculation, I have not tried it, and at the time of publishing it was still a draft proposal). In other news, it has been demonstrated that a combination of SNI and a different Host header can be used for "domain fronting". A method to enforce consistent policy (and prevent domain fronting) would be to layer in additional conditions that match the requested SNI servername (TLS extension) against the requested Host header (L7 HTTP header). This would help enforce that a tenant is using a certificate that is associated with their application and not "borrowing" the name and certificate used by an adjacent service. We don't think of a TLS extension as an attribute that can be used to route application traffic, but it is useful and possible on BIG-IP.
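To tie the pieces together, a hedged tmsh sketch of the virtual server described above, carrying only a TCP profile, SSL persistence, and the policy, might look like the following. The destination address and object names are illustrative assumptions:

    # no http or clientssl profile: the policy acts on the TLS client hello
    tmsh create ltm virtual vs_sni_443 destination 203.0.113.10:443 ip-protocol tcp \
        profiles add { tcp } persist replace-all-with { ssl } policies add { sni_routing }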
Identity-centric F5 ADSP Integration Walkthrough

In this article we explore the F5 ADSP through an identity lens, using BIG-IP APM and BIG-IP SSLO and adding BIG-IP AWAF to the service chain. The F5 ADSP addresses four core areas: deploying at scale, securing against evolving threats, delivering applications reliably, and operating your day-to-day work efficiently. Each comes with its own challenges, but together they define the foundation for keeping systems fast, stable, and safe. Each architecture deployment example is designed to cover at least two of the four core areas: Deployment, Security, Delivery, and XOps.
BIG-IP Next for Kubernetes Nvidia DPU deployment walkthrough

Introduction

Modern AI factories—hyperscale environments powering everything from generative AI to autonomous systems—are pushing the limits of traditional infrastructure. As these facilities process exabytes of data and demand near-real-time communication between thousands of GPUs, legacy CPUs struggle to balance application logic with infrastructure tasks like networking, encryption, and storage management. Data Processing Units (DPUs) are purpose-built accelerators that offload these housekeeping tasks, freeing CPUs and GPUs to focus on what they do best. DPUs are specialized system-on-chip (SoC) devices designed to handle data-centric operations such as network virtualization, storage processing, and security enforcement. By decoupling infrastructure management from computational workloads, DPUs reduce latency, lower operational costs, and enable AI factories to scale horizontally.

BIG-IP Next for Kubernetes and Nvidia DPU

Because F5 aims to deliver and secure every app, the platform needs to be deployed at multiple levels, a crucial one being the edge and the DPU. Installing F5 BIG-IP Next for Kubernetes on an Nvidia DPU requires Nvidia's DOCA framework to be installed.

What's DOCA? NVIDIA DOCA is a software development kit for NVIDIA BlueField DPUs. BlueField provides data center infrastructure-on-a-chip, optimized for high-performance enterprise and cloud computing. DOCA is the key to unlocking the potential of the NVIDIA BlueField data processing unit (DPU) to offload, accelerate, and isolate data center workloads. With DOCA, developers can program the data center infrastructure of tomorrow by creating software-defined, cloud-native, GPU-accelerated services with zero-trust protection.

Now, let's explore the BIG-IP Next for Kubernetes components. The solution has two main parts: the Data Plane (the Traffic Management Micro-kernel, or TMM) and the Control Plane. The Control Plane watches over the Kubernetes cluster and updates the TMM's configuration. The Data Plane (TMM) manages network traffic both entering and leaving the Kubernetes cluster and proxies that traffic to applications running in the cluster. The Data Plane (TMM) runs on the BlueField-3 Data Processing Unit (DPU) node and uses the DPU's resources to handle traffic, freeing up the host CPU for applications. The Control Plane can run on the CPU or on other nodes in the Kubernetes cluster, ensuring the DPU stays dedicated to processing traffic. (A quick way to verify this placement follows the use-case list below.)

Use-case examples

F5's team recently released some great use cases based on conversations and work from the field. Let's explore those items:

- Protecting MCP servers with F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
- LLM routing with dynamic load balancing with F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
- F5 optimizes GPUs for distributed AI inferencing with NVIDIA Dynamo and KV cache integration
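Before getting into the walkthrough, a couple of generic kubectl checks can confirm that the TMM data plane actually landed on the DPU worker once the components above are installed. The namespace name below is an assumption rather than a product default, so substitute the one used by your installation:

    # list nodes and their labels to identify the DPU worker (labels vary by environment)
    kubectl get nodes --show-labels
    # confirm the TMM data-plane pods are scheduled on the DPU node
    kubectl get pods -n f5-bnk -o wide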
Deployment walk-through

In our demo, we go through the following BIG-IP Next for Kubernetes configurations:

- Main BIG-IP Next for Kubernetes features
- L4 ingress flow
- HTTP/HTTPS ingress flow
- Egress flow
- BGP integration
- Logging and troubleshooting (Qkview, iHealth)

You can find a quick walk-through via BIG-IP Next for Kubernetes - walk-through.

Related Content

- BIG-IP Next for Kubernetes - walk-through
- BIG-IP Next for Kubernetes
- BIG-IP Next for Kubernetes and Nvidia DPU-3 walkthrough
- BIG-IP Next for Kubernetes
- F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
F5 Scalable App Delivery & Security for Hybrid Environments

As enterprises modernize and expand their digital services, they increasingly deploy multiple instances of the same applications across diverse infrastructure environments—such as VMware, OpenShift, and Nutanix—to support distributed teams, regional data sovereignty, redundancy, or environment-specific compliance needs. These application instances often integrate into service chains that span clouds and data centers, introducing both scale and operational complexity. F5 Distributed Cloud provides a unified solution for secure, consistent application delivery and security across hybrid and multi-cloud environments. It enables organizations to add workloads seamlessly—whether for scaling, redundancy, or localization—without sacrificing visibility, security, or performance.
Modern Deployment and Security Strategies for Kubernetes with NGINX Gateway Fabric

Kubernetes has become the foundation for cloud-native applications. However, managing and routing traffic within clusters remains a challenge. The traditional Ingress resource, though helpful for exposing services, has shown its limitations: its loosely defined specification often leads to controller-specific behaviors and complicated annotations, and it hinders portability across environments. These challenges become even more apparent as organizations scale their microservices architectures. Ingress was designed primarily for basic service exposure and routing. While it can be extended with annotations or custom controllers, it lacks first-class support for advanced deployment patterns such as canary or blue-green releases. This forces teams to rely on add-ons or vendor-specific features, which adds complexity and reduces portability.
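For contrast, the Gateway API, which NGINX Gateway Fabric implements, expresses traffic splitting directly in the route resource. The sketch below shows a weighted canary split; the route, gateway, and service names, the namespace, and the weights are illustrative assumptions:

    # save as canary-route.yaml and apply with: kubectl apply -f canary-route.yaml
    # a ~90/10 canary split between two versions of the same backend service
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: demo-canary
    spec:
      parentRefs:
      - name: demo-gateway
      rules:
      - backendRefs:
        - name: demo-v1
          port: 80
          weight: 90
        - name: demo-v2
          port: 80
          weight: 10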
F5 Threat Report - September 10th, 2025

To learn more about the F5 Threat Report click here.

Critical Flaws in NVIDIA NeMo AI Curator Allow System Takeover

NVIDIA has released a critical update for its NeMo Curator software, version 25.07, to address a high-severity code injection vulnerability tracked as CVE-2025-23307. This flaw, affecting all previous versions across Windows, Linux, and macOS, originates from insufficient validation of user-supplied inputs prior to dynamic code evaluation (CWE-94). With a base severity score of 7.8, the vulnerability enables an attacker to achieve remote code execution, privilege escalation, unauthorized information disclosure, or data tampering by crafting a malicious file that the Curator environment processes. While requiring low privileges and local file manipulation, no user interaction is necessary for exploitation. Users are urged to upgrade to Curator version 25.07, which includes input sanitization and stricter evaluation controls, to mitigate this risk.

Severity: Critical

Sources
- https://cyberpress.org/flaws-in-nvidia-nemo-ai-curator-allow-system-takeover/

Threat Details and IOCs
- CVEs: CVE-2025-23307
- Victim Industries: Automotive, Manufacturing, Healthcare, Retail, Financial Services, Technology, Government, Telecommunications
- Victim Technologies: NVIDIA NeMo Curator, Linux, Microsoft Windows, Apple macOS

Mitigation Advice
- Use asset inventory systems, software management tools, or manual checks to identify all instances of NVIDIA NeMo Curator running on company assets, including servers and developer workstations.
- For all identified instances of NVIDIA NeMo Curator, immediately upgrade the software to version 25.07 or newer from the official NVIDIA NeMo GitHub repository.

Compliance Best Practices
- Implement or enhance a software asset management (SAM) program to maintain a continuously updated inventory of all deployed software, including specialized AI/ML frameworks.
- Review and enforce the principle of least privilege for user and service accounts, particularly those associated with data processing and AI/ML environments, to minimize the impact of potential code execution vulnerabilities.
- Establish a formal vulnerability management program that includes subscribing to vendor security advisories (like NVIDIA's PSIRT) and performing regular, authenticated vulnerability scans across all assets.
- Provide secure coding training to development teams that focuses on input validation (CWE-94) and the secure handling of external data, especially within applications that process complex file formats.

s1ngularity Supply Chain Attack Leaks Secrets on GitHub: Everything You Need to Know

On August 26, 2025, multiple malicious versions of the widely used Nx build system package were published to the npm registry, initiating a supply chain attack. These versions, including specific releases of `@nrwl/nx`, `nx`, `@nx/devkit`, `@nx/enterprise-cloud`, `@nx/eslint`, `@nx/js`, `@nx/key`, `@nx/node`, and `@nx/workspace`, contained a post-installation malware script named `telemetry.js`. This payload, active on Linux and macOS systems, systematically harvested sensitive developer assets such as cryptocurrency wallets, GitHub and npm tokens, SSH keys, and `.env` files. A notable aspect of the attack involved weaponizing installed AI command-line tools (including Claude, Gemini, and Q) by prompting them with dangerous flags for reconnaissance. The malware also attempted system lockout by appending `sudo shutdown -h 0` to `~/.bashrc` and `~/.zshrc`.
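As an aside, a few hedged shell checks drawn from the remediation guidance later in this item can quickly surface those artifacts on a Linux or macOS developer machine (run them from the affected project directory; paths are the ones named in the report):

    # check which nx package versions a project currently resolves,
    # to compare against the malicious releases listed in the advisory
    npm ls nx @nrwl/nx
    # look for the lockout line the malware appends to shell startup files
    grep -n 'shutdown -h 0' ~/.bashrc ~/.zshrc 2>/dev/null
    # check for the staged inventory files named in the report
    ls -l /tmp/inventory.txt /tmp/inventory.txt.bak 2>/dev/null
    # if a malicious version was present: remove installed modules and purge the npm cache
    rm -rf node_modules && npm cache clean --force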
Exfiltrated data was triple-base64 encoded and uploaded to publicly accessible attacker-controlled GitHub repositories named `s1ngularity-repository`, `s1ngularity-repository-0`, or `s1ngularity-repository-1` within victims' GitHub accounts, leading to the exposure of over a thousand valid GitHub tokens, dozens of cloud and npm credentials, and approximately twenty thousand files. The compromise affected developer machines, often via the NX VSCode extension, and CI/CD pipelines like GitHub Actions. Immediate remediation requires removing malicious Nx versions, upgrading to clean releases, manually removing malicious shell entries, and deleting `/tmp/inventory.txt` and its backup. Security teams should audit GitHub accounts for the specific repository names, review audit logs for anomalous API usage, and monitor developer endpoints and CI/CD pipelines for suspicious activity. Crucially, all potentially leaked credentials, including GitHub tokens, npm tokens, SSH keys, API keys, and environment variable secrets, must be revoked and regenerated, and cryptocurrency funds transferred if exposed.

Severity: Critical

Sources
- https://www.wiz.io/blog/s1ngularity-supply-chain-attack

Threat Details and IOCs
- Attacker Hashes: 3905475cfd0e0ea670e20c6a9eaeb768169dc33d
- Victim Industries: Financial Services
- Victim Technologies: Nx, Google Gemini, Apple macOS, Microsoft Visual Studio Code, Amazon Q, Anthropic Claude, Node.js, Linux, GitHub, npm

Mitigation Advice
- Scan all developer endpoints and CI/CD environments to identify the malicious versions of the Nx packages listed in the article. Remove them by deleting the 'node_modules' directory and then run 'npm cache clean --force' before installing a safe version.
- On all Linux and macOS developer endpoints, inspect `~/.bashrc` and `~/.zshrc` files for the entry 'sudo shutdown -h 0' and remove it. Also, delete the files `/tmp/inventory.txt` and `/tmp/inventory.txt.bak` if they exist.
- Audit all company-managed GitHub organizations and developer user accounts for any repositories named 's1ngularity-repository', 's1ngularity-repository-0', or 's1ngularity-repository-1'. Review GitHub audit logs for repository creation events by unexpected actors or automation.
- Immediately revoke all GitHub and npm tokens for all developers and service accounts. Force users to regenerate new tokens with the minimum required permissions.
- Initiate a company-wide rotation of all SSH keys and any other API keys or secrets stored in developer environment files that could have been compromised.
- In your SIEM or network monitoring tools, search for and create alerts on outbound API calls from developer endpoints or CI/CD runners to 'api.github.com' targeting '/user/repos' or '/repos/*/contents/results.b64'.

Compliance Best Practices
- Implement a software composition analysis (SCA) tool to automatically scan npm dependencies for known vulnerabilities and malicious packages before they are used in development or build pipelines.
- Configure CI/CD pipelines to run in ephemeral, isolated environments with strict egress filtering that only allows network connections to approved package registries and services, preventing unauthorized data exfiltration.
- Establish and enforce a policy for credential management that mandates the use of short-lived, narrowly-scoped access tokens for CI/CD pipelines and developer environments, instead of long-lived personal access tokens.
- Develop and implement a corporate policy governing the use of AI command-line tools on developer endpoints, specifically restricting or monitoring the use of permissive flags like '--dangerously-skip-permissions' or '--trust-all-tools'.
- Implement a recurring security awareness training program for all developers focusing on supply chain attack risks, recognizing suspicious package behavior, and best practices for credential security.

Citrix Patches Three NetScaler Zero Days as One Sees Active Exploitation

Citrix has released patches for three critical zero-day vulnerabilities in NetScaler ADC and Gateway, identified as CVE-2025-7775 (CVSS 9.2), CVE-2025-7776 (CVSS 8.8), both memory overflows, and CVE-2025-8424 (CVSS 8.7), an improper access control flaw on the management interface. CVE-2025-7775, a pre-authentication remote code execution vulnerability, was actively exploited in the wild to deploy webshells on unmitigated appliances, with campaigns commencing prior to patch availability. As of August 26, 2025, 84% of scanned appliances were vulnerable to CVE-2025-7775, and the Shadowserver Foundation identified at least 28,000 unpatched instances. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-7775 to its Known Exploited Vulnerabilities (KEV) catalog, mandating federal agencies apply patches by August 28. Affected systems include NetScaler ADC and Gateway versions 14.1 before 14.1-47.48, 13.1 before 13.1-59.22, 13.1-FIPS/NDcPP before 13.1-37.241, and 12.1-FIPS/NDcPP before 12.1-55.330, alongside Secure Private Access deployments. Citrix urged users to upgrade to specific patched versions, as no other workarounds exist, and noted that versions 12.1 and 13.0 are now End-of-Life. Security experts caution that patching alone is insufficient, emphasizing the critical need to investigate for signs of prior compromise, as sophisticated actors often exploit such memory corruption vulnerabilities, and future attacks may combine initial access flaws like CVE-2025-7775 with secondary vulnerabilities such as CVE-2025-8424 to compromise management interfaces.

Severity: Critical

Sources
- https://www.infosecurity-magazine.com/news/citrix-patch-netscaler-zero-days/

Threat Details and IOCs
- Malware: Webshell, Backdoor Malware
- CVEs: CVE-2025-6543, CVE-2025-7775, CVE-2025-8424, CVE-2025-7776
- Victim Industries: Government, Healthcare, Financial Services, Information Technology
- Victim Technologies: NetScaler Gateway, NetScaler ADC
- Victim Countries: United States

Mitigation Advice
- Immediately patch all vulnerable Citrix NetScaler ADC and Gateway appliances to the recommended versions (14.1-47.48+, 13.1-59.22+, etc.) to remediate CVE-2025-7775, CVE-2025-7776, and CVE-2025-8424.
- Initiate a threat hunt on all Citrix NetScaler appliances to look for indicators of compromise, such as webshells, unauthorized accounts, or unusual outbound network traffic, to identify and remediate existing backdoors.
- Identify and prioritize the immediate upgrade or decommissioning of all NetScaler appliances running end-of-life (EOL) versions 12.1 and 13.0, as they cannot be patched against these vulnerabilities.

Compliance Best Practices
- Review and reconfigure network firewall rules to ensure that the NetScaler Management Interface is not exposed to the public internet and is only accessible from a secure, isolated management network segment.
- Implement a comprehensive asset lifecycle management program to track all hardware and software, ensuring that systems are upgraded or replaced before they reach end-of-life (EOL) to avoid exposure to unpatchable vulnerabilities.

Docker Desktop Vulnerability Allowed Host Takeover on Windows, macOS

A critical vulnerability, CVE-2025-9074, was identified and patched in Docker Desktop for Windows and macOS, allowing malicious containers to escape their isolated environments and achieve administrator-level control over the host system. Rated 9.3 out of 10 for severity, this flaw stemmed from an unauthenticated exposure of the Docker Engine's internal HTTP API, enabling a malicious container to create new privileged containers and access or modify host files, even when Enhanced Container Isolation (ECI) was active. The vulnerability, which could lead to full system takeover on Windows by overwriting critical files, was resolved in Docker Desktop version 4.44.3, released on August 20, 2025. Users are strongly advised to update to this version immediately, avoid overly permissive container configurations like the `--privileged` command, restrict container access, and maintain continuous system monitoring to mitigate risks.

Severity: Critical

Sources
- https://hackread.com/docker-desktop-vulnerability-host-takeover-windows-macos/

Threat Details and IOCs
- CVEs: CVE-2025-9074
- Victim Industries: Information Technology
- Victim Technologies: Apple macOS, Microsoft Windows, Docker Desktop

Mitigation Advice
- Update all Docker Desktop installations on Windows and macOS endpoints to version 4.44.3 or newer.
- Use asset inventory or vulnerability scanning tools to identify all corporate devices running versions of Docker Desktop vulnerable to CVE-2025-9074.

Compliance Best Practices
- Establish and enforce a security policy that prohibits running Docker containers with the '--privileged' flag, implementing an exception process for documented and approved use cases.
- Implement a container runtime security solution to monitor for and alert on suspicious activities, such as unexpected process execution or network connections originating from containers.
- Enforce a policy of least privilege for all container configurations, ensuring they are granted only the specific capabilities, file system access, and network permissions required for their function.

Widespread Data Theft Campaign Strikes Salesforce via Salesloft Drift

A widespread data theft campaign, active between August 8 and 18, 2025, saw threat actor UNC6395 compromise numerous Salesforce customer instances by leveraging stolen OAuth tokens associated with the Salesloft Drift application. The attackers utilized valid OAuth credentials to execute structured SOQL queries, exfiltrating significant volumes of corporate data from Salesforce objects such as User, Account, Case, and Opportunity, with a specific focus on discovering secrets like AWS access keys, passwords, and Snowflake access tokens. UNC6395 demonstrated operational security by deleting query jobs and employing anonymizing infrastructure, including Tor exit nodes, and automation tools like python-requests/2.32.4 and aiohttp/3.12.15. In response, Salesloft and Salesforce revoked all active tokens for the Drift app on August 20 and temporarily removed it from the Salesforce AppExchange. This incident follows earlier Salesforce-related attacks in June and July 2025 by UNC6040, which used vishing to authorize rogue connected apps, and subsequent extortion by UNC6240 (ShinyHunters).
Organizations using Drift with Salesforce are advised to audit for exposed credentials, revoke and rotate API keys, review logs for suspicious SOQL queries tied to the Drift app, and enforce strict access controls for connected applications, including IP restrictions and limited scopes.

Severity: Critical

Sources
- https://cyberinsider.com/widespread-data-theft-campaign-strikes-salesforce-via-salesloft-drift/

Threat Details and IOCs
- Threat Actors: ShinyHunters, UNC6240, UNC6040, UNC6395
- Attacker Emails: shinycorp@tuta.com
- Victim Industries: Retail, Financial Services, Travel & Hospitality
- Victim Technologies: Salesloft Drift, Salesforce, Snowflake, Amazon Web Services (AWS)
- Victim Countries: United Kingdom, Germany, United States, France, Denmark, Netherlands

Mitigation Advice
- Review all Salesforce logs between August 8 and August 18, 2025, for unusual SOQL queries originating from the Drift connected application, paying special attention to data exports from User, Account, Case, and Opportunity objects.
- Immediately audit all Salesforce objects and custom fields to identify any stored AWS access keys or other cloud service provider credentials.
- Immediately audit all Salesforce objects and custom fields to identify any stored Snowflake tokens or other database credentials.
- Immediately revoke and rotate any secrets, API keys, or passwords discovered during the audit of Salesforce data.
- Follow vendor guidance to securely re-authenticate the Drift to Salesforce integration to restore service with new, secure tokens.

Compliance Best Practices
- For all third-party Salesforce connected applications, configure IP Login Ranges to only permit access from the application vendor's known IP addresses.
- Conduct a comprehensive security review of all Salesforce connected applications to ensure each one operates with the minimum required OAuth scopes and object permissions necessary for its function.
- Modify Salesforce user profiles to remove the 'API Enabled' permission by default, and grant it only to a limited number of dedicated integration user accounts or specific administrators via permission sets.
- Implement a Data Loss Prevention (DLP) policy and toolset to continuously scan Salesforce objects and fields to detect and alert on any hardcoded secrets, passwords, or API keys.
- Implement a recurring security awareness training program that educates employees on identifying and reporting social engineering attempts, specifically including vishing and consent phishing for cloud applications.

Click here to sign up for the F5 Threat Report.
Introducing the F5 Threat Report: Strategic Threat Intelligence with Real-Time Industry and Technology Trends

Challenge widespread assumptions from traditional cybersecurity tools with the latest threat landscape insights including threat movement, threat life-cycles, and more.
BIG-IP BGP Routing Protocol Configuration And Use Cases

Is the F5 BIG-IP a router? Yes! No! Wait what? Can the BIG-IP run a routing protocol? Yes. But should it be deployed as a core router? An edge router? Stay tuned. We'll explore these questions and more through a series of common use cases using BGP on the BIG-IP... And oddly I just realized how close in typing BGP and BIG-IP are, so hopefully my editors will keep me honest. (squirrel!) In part one we will explore the routing components on the BIG-IP and some basic configuration details to help you understand what the appliance is capable of. Please pay special attention to some of the gotchas along the way.

Can I Haz BGP?

Ok. So your BIG-IP comes with ZebOS in order to provide routing functionality, but what happens when you turn it on? What do you need to do to get routing updates into the BGP process? And well, does my licensing cover it? Starting with the last question:

    tmsh show /sys license | grep "Routing Bundle"

The above command will help you determine whether you're going to be able to proceed, or be stymied at the bridge like the Black Knight in the Holy Grail. Fear not! Many licensing options already come with the routing bundle.

Enabling Routing

First and foremost, the routing protocol configuration is tied to the route-domain. What's a route-domain? I'm so glad you asked! Route-domains are separate Layer 3 route tables within the BIG-IP. There is a concept of parent and child route-domains, so while they're similar to another routing concept you may be familiar with, VRFs, they're not quite the same; but in many ways they are, so for this context just think of them that way. You can therefore enable routing protocols on individual route-domains. Each route-domain can have its own set of routing protocols, or run no routing protocols at all. By default the BIG-IP starts with just route-domain 0. And because most router guys live on the CLI, we'll walk through the configuration examples that way on the BIG-IP:

    tmsh modify net route-domain 0 routing-protocol add { BGP }

So great! Now we're off and running BGP. So the world knows we're here, right? Nope.

Consider what you want to advertise. The most common advertisements sourced from the BIG-IP are the IP addresses of virtual servers. Now why would I want to do that? I can just put the BIG-IP on a large subnet and it will respond to ARP requests and send gratuitous ARPs (GARP), so I can reach the virtual servers just fine.

<rant> Author's opinion here: I consider this one of the worst BIG-IP implementation methods. Why? Well for starters, what if you want to expand the number of virtual servers on the BIG-IP? Well then you need to re-IP the network interfaces of all the devices (routers, firewalls, servers) in order to expand the subnet mask. Yuck! Don't even talk to me about secondary subnets. Second: ARP floods! Too many times I see issues where the BIG-IP has to send a flood of GARPs; and the infrastructure, in an attempt to protect its control plane, filters or rate-limits the number of incoming requests it will accept. So engineers are left to troubleshoot the case of the missing GARPs. Third: sometimes you need to migrate applications to another BIG-IP appliance because they outgrew the existing infrastructure, and having them tied to this interface just leads to confusion. I'm sure there are some corner cases where this is the best route. But I would say it's probably in the minority.
</rant>

I can hear you all now… "So what do you propose kind sir?" See? I can hear you... Treat the virtual servers as loopback interfaces. Then they're not tied to a specific interface. To move them you just need to start advertising the /32 from another spot (yes, you could statically route it too, I hear you out there wanting to show your routing chops), and the only GARPs are those from the self-IPs. This still allows you to statically route the entire /24 to the BIG-IP's self IP address, of course, but you can also use one of them fancy routing protocols to announce the routes either individually or through summarization.

Announcing Routes

Hear ye hear ye! I want the world to know about my virtual servers. *ahem* So a quick little tangent on BIG-IP nomenclature: the virtual server does not get announced in the routing protocol. "Well then what does?" Eerie mind reading, isn't it? Remember from BIG-IP 101, a virtual server is an IP address and port combination, and routing protocols don't do well with carrying the port across our network. So what BIG-IP object is solely an IP address construct? The virtual-address! "Wait what?" Yeah… it's a menu item I often forget is there too. But here's where you let the BIG-IP know you want to advertise the virtual-address associated with the virtual server:

    tmsh modify ltm virtual-address 10.99.99.100 route-advertisement all

There are four states a virtual address can be in: Unknown, Enabled, Disabled and Offline. When the virtual address is in Unknown or Enabled state, its route will be added to the kernel routing table. When the virtual address is in Disabled or Offline state, its route will be removed if present and will not be added if not already present. But the best part is, you can use this to only advertise the route when the virtual server and its associated pool members are all up and functioning. In simple terms we call this route health injection: based on the health of the application we will conditionally announce the route into the routing protocol. At this point, if you've followed me this far, you're probably asking what controls those conditions. I'll let the K article expand on the options a bit: https://my.f5.com/manage/s/article/K15923612

"So what does BGP have to do with popcorn?" Popcorn? Ohhhhhhhhhhh….. kernel! I see what you did there! I'm talking about the operating system kernel, silly. So when a virtual-address is in an Unknown or Enabled state and it is healthy, the route gets put in the kernel routing table. But that doesn't get it into the BGP process. Kernel routes are represented in the routing table with a 'K'. This is where the fun begins! You guessed it! Route redistribution? Route redistribution! And to take a step back, I guess we need to get you to the ZebOS interface. To enter the router configuration CLI from the bash command line, simply type imish. In a multi-route-domain configuration you would need to supply the route-domain number, but in this case, since we're just using the default route-domain 0, we're good. It's a very similar interface to many vendors' router and switch configuration, so many of you CCIEs should feel right at home. It even still lets you do a write memory or wr mem without having to create an alias. Clearly dating myself here..
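Once inside imish, a minimal sketch of where this is heading, redistributing kernel routes into BGP behind an explicit filter, might look like the following. The AS numbers, neighbor address, prefix range, and names are illustrative assumptions, and the reasons the filter matters are covered next:

    ! illustrative ZebOS/imish sketch -- adjust ASNs, neighbors, and prefixes
    router bgp 65001
     neighbor 10.0.0.1 remote-as 65000
     ! only advertise the intended VIP range to this neighbor
     neighbor 10.0.0.1 prefix-list ADVERTISE-VIPS out
     ! pull in kernel routes (the advertised virtual-addresses), filtered
     redistribute kernel route-map KERNEL-TO-BGP
    !
    ip prefix-list ADVERTISE-VIPS seq 5 permit 10.99.99.0/24 le 32
    !
    route-map KERNEL-TO-BGP permit 10
     match ip address prefix-list ADVERTISE-VIPS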
I'm not going to get into the full BGP configuration at this point, but the simplest way to get the kernel routes into the BGP process is simply to go under the BGP process and redistribute the kernel routes. BUT WAIT! Thar be dragons in that configuration!

First landmine, and a note about kernel routes: if you manually configure a static route on the BIG-IP via tmsh or the TMUI, it will also show up as a kernel route. Why is that concerning? Well, an example is where engineers configure a static default route on the BIG-IP via tmsh. When you redistribute kernel routes, that default route is now being advertised into BGP. Congrats! And if the BIG-IP is NOT your default gateway, hilarity ensues. And by hilarity I mean the type of laugh that comes out as you're updating your resume. The lesson here: when doing route redistribution, ALWAYS use a route filter to ensure only your intended routes or IP range make it into the routing protocol. This goes for your neighbor statements too, in both directions! You should control what routes come in to and leave the device.

Another way to have some disastrous consequences with BIG-IP routing is through summarization. If you are doing summarization, keep in mind that BGP advertises based on reachability to the networks it wants to advertise. In this case, BGP is receiving them in the form of kernel routes from tmm. But those are /32 addresses, and lots of them! And you want to advertise a /23 summary route. If the lone virtual-address that is configured for route advertisement, and the only one your BGP process knows about within that range, has a monitor that fails, the summary route will be withdrawn, leaving the whole /23 stranded. Be sure to configure all your virtual-addresses within that range for advertisement.

Next: BGP Behavior In High Availability Configurations
Post-Quantum Cryptography, OpenSSH, & s1ngularity supply chain attack

This week in security: PQC by default, and a supply-chain gut check. At F5, we are publishing a forward-looking series of blog posts which help security and IT leaders anticipate tomorrow's risks and capitalize on emerging tech. Think of it as a field guide to future threats—and how to stay resilient as they arrive. We are about halfway through the series; here are some of the highlights from my point of view.
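The "PQC by default" theme, paired with OpenSSH in the title, presumably points at recent OpenSSH releases enabling hybrid post-quantum key exchange out of the box. A quick, hedged way to see whether your local client offers those algorithms (the exact algorithm names vary by version, so the grep pattern below is only a convenience filter):

    # list the key-exchange algorithms the local OpenSSH client supports,
    # filtering for the hybrid post-quantum entries if present
    ssh -Q kex | grep -Ei 'sntrup|mlkem'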
Using F5 NGINX Plus as the Ingress Controller within Nutanix Kubernetes Platform (NKP)

Managing incoming traffic is a critical component of running applications efficiently within Kubernetes clusters. As organizations continue to deploy a growing number of microservices, the need for robust, flexible, and intelligent traffic management solutions becomes more apparent. In this article, we provide an overview of how F5 NGINX Plus, when used as the ingress controller in the Nutanix Kubernetes Platform (NKP), offers a comprehensive approach to traffic optimization, application reliability, and security.
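For readers who want to experiment outside of NKP's own workflow, a hedged sketch of installing NGINX Ingress Controller via its public Helm chart is below. The release name and namespace are illustrative; enabling the Plus data plane (controller.nginxplus=true) additionally requires access to the licensed NGINX Plus image and a registry secret, which are environment-specific and omitted here:

    # add the NGINX Helm repository and install the ingress controller
    helm repo add nginx-stable https://helm.nginx.com/stable
    helm repo update
    helm install nginx-plus-ic nginx-stable/nginx-ingress \
      --namespace nginx-ingress --create-namespace \
      --set controller.nginxplus=true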