F5 NGINX Plus R33 Licensing and Usage Reporting
Beginning with F5 NGINX Plus version R33, all customers are required to deploy a JSON Web Token (JWT) license for each commercial instance of NGINX Plus. Each instance is responsible for validating its own license status. Furthermore, NGINX Plus will report usage either to the F5 NGINX licensing endpoint or to the F5 NGINX Instance Manager for customers who are connected. For customers who are disconnected or operate in an air-gapped environment, usage can be reported directly to the F5 NGINX Instance Manager. To learn more about the latest features of NGINX R33, please check out the recent blog post.

Install or Upgrade NGINX Plus R33

To successfully upgrade to NGINX Plus R33 or perform a fresh installation, begin by downloading the JWT license from your F5 account. Once you have the license, place it in the F5 NGINX directory before proceeding with the upgrade. For a fresh installation, place the JWT license in the NGINX directory after completing the installation. For further details, please refer to the provided instructions. This video provides a step-by-step guide on installing or upgrading to NGINX Plus R33.

Report Usage to F5 in Connected Environment

To report usage data to F5 within a connected environment using NGINX Instance Manager, ensure that port 443 is open. The default configuration directs the usage endpoint to send reports directly to the F5 licensing endpoint at product.connect.nginx.com. Usage reporting is enabled by default, and at least one report must be sent successfully on installation for NGINX to process traffic. However, you can postpone this initial reporting requirement by disabling enforcement in your NGINX configuration, which allows NGINX Plus to handle traffic without immediate reporting during a designated grace period. To report usage to F5 through NGINX Instance Manager instead, update the usage endpoint to the fully qualified domain name (FQDN) of the NGINX Instance Manager. For further details, please refer to the provided instructions. This video shows how to report usage in the connected environment using NGINX Instance Manager.

Report Usage to F5 in Disconnected Environment using NGINX Instance Manager

In a disconnected environment without an internet connection, you need to take certain steps before submitting usage data to F5. First, in NGINX Plus R33, update the `usage_report` directive within the management (`mgmt`) block of your NGINX configuration to point to your NGINX Instance Manager host, and ensure that your NGINX R33 instances can reach the NGINX Instance Manager by setting up the necessary DNS entries. Next, in the NMS configuration in NGINX Instance Manager, change the ‘mode of operation’ to disconnected, save the file, and restart NGINX Instance Manager. There are multiple methods available for adding a license and submitting the initial usage report in a disconnected environment: a Bash script, the REST API, or the web interface. For detailed instructions on each method, please refer to the documentation. This video shows how to report usage in disconnected environments using NGINX Instance Manager.
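As a rough sketch only: assuming the R33 mgmt module directive names (usage_report and enforce_initial_report) and the default license location, a disconnected setup might look like the fragment below. The NIM hostname is a placeholder; verify directive names against your version's documentation.

# /etc/nginx/nginx.conf (illustrative fragment; verify against the R33 docs)
# the JWT license is expected at /etc/nginx/license.jwt by default
mgmt {
    # point usage reporting at your NGINX Instance Manager host (placeholder FQDN)
    usage_report endpoint=nim.example.internal;

    # optionally postpone the initial-report requirement during the grace period
    # enforce_initial_report off;
}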
Conclusion

The transition to NGINX Plus R33 introduces important enhancements in licensing and usage reporting that can greatly improve your management of NGINX instances. With the implementation of JSON Web Tokens (JWT), you can validate your subscription and report telemetry data more effectively. To ensure compliance and optimize performance, it's crucial to understand the best practices for usage reporting, whether you are operating in a connected or disconnected environment. Get started today with a 30-day trial, and contact us if you have any questions.

Resources: NGINX support documentation; blog announcement providing a comprehensive summary of the new features in this release.

F5 NGINX Plus R33 Release Now Available
We're excited to announce the availability of NGINX Plus Release 33 (R33). The release introduces major changes to NGINX licensing, support for post-quantum cryptography, initial support for the QuickJS runtime in NGINX JavaScript, and a lot more.

BIG-IP BGP Routing Protocol Configuration And Use Cases
Is the F5 BIG-IP a router? Yes! No! Wait what? Can the BIG-IP run a routing protocol? Yes. But should it be deployed as a core router? An edge router? Stay tuned. We'll explore these questions and more through a series of common use cases using BGP on the BIG-IP... And oddly I just realized how close in typing BGP and BIG-IP are, so hopefully my editors will keep me honest. (squirrel!)

In part one we will explore the routing components on the BIG-IP and some basic configuration details to help you understand what the appliance is capable of. Please pay special attention to some of the gotchas along the way.

Can I Haz BGP?

Ok. So your BIG-IP comes with ZebOS in order to provide routing functionality, but what happens when you turn it on? What do you need to do to get routing updates in to the BGP process? And does my licensing cover it? Starting with the last question…

tmsh show /sys license | grep "Routing Bundle"

The above command will help you determine if you're going to be able to proceed, or be stymied at the bridge like the Black Knight in the Holy Grail. Fear not! There are many licensing options that already come with the routing bundle.

Enabling Routing

First and foremost, the routing protocol configuration is tied to the route-domain. What's a route-domain? I'm so glad you asked! Route-domains are separate Layer 3 route tables within the BIG-IP. There is a concept of parent and child route-domains, so while they're similar to another routing concept you may be familiar with, VRFs, they're not quite the same, but in many ways they are. Just think of them that way for now; for this context we will just say they are. Therefore, you can enable routing protocols on individual route-domains. Each route-domain can have its own set of routing protocols, or run no routing protocols at all. By default the BIG-IP starts with just route-domain 0. And because most router guys live on the CLI, we'll walk through the configuration examples that way on the BIG-IP.

tmsh modify net route-domain 0 routing-protocol add { BGP }

So great! Now we're off and running BGP. So the world knows we're here, right? Nope.

Consider what you want to advertise. The most common advertisements sourced from the BIG-IP are the IP addresses for virtual servers. Now why would I want to do that? I can just put the BIG-IP on a large subnet and it will respond to ARP requests and send gratuitous ARPs (GARPs), so I can reach the virtual servers just fine.

<rant> Author's opinion here: I consider this one of the worst BIG-IP implementation methods. Why? Well for starters, what if you want to expand the number of virtual servers on the BIG-IP? Then you need to re-IP the network interfaces of all the devices (routers, firewalls, servers) in order to expand the subnet mask. Yuck! Don't even talk to me about secondary subnets. Second: ARP floods! Too many times I see issues where the BIG-IP has to send a flood of GARPs; and the infrastructure, in an attempt to protect its control plane, filters/rate-limits the number of incoming requests it will accept. So engineers are left to troubleshoot the case of the missing GARPs. Third: sometimes you need to migrate applications to another BIG-IP appliance because the app grew too big for the existing infrastructure. Having it tied to this interface just leads to confusion. I'm sure there are some corner cases where this is the best route. But I would say it's probably in the minority. </rant>

I can hear you all now… "So what do you propose kind sir?"
See? I can hear you... Treat the virtual servers as loopback interfaces. Then they're not tied to a specific interface. To move them you just need to start advertising the /32 from another spot. (Yes, you could statically route it too. I hear you out there wanting to show your routing chops.) But also, the only GARPs are those from the self-IPs. This still allows you to statically route the entire /24 to the BIG-IP's self IP address, but you can also use one of them fancy routing protocols to announce the routes either individually or through a summarization.

Announcing Routes

Hear ye hear ye! I want the world to know about my virtual servers. *ahem* So, a quick little tangent on BIG-IP nomenclature: the virtual server does not get announced in the routing protocol. "Well then what does?" Eerie mind reading, isn't it? Remember from BIG-IP 101, a virtual server is an IP address and port combination, and routing protocols don't do well with carrying the port across our network. So what BIG-IP object is solely an IP address construct? The virtual-address! "Wait what?" Yeah… It's a menu item I often forget is there too. But here's where you let the BIG-IP know you want to advertise the virtual-address associated with the virtual server. But… but… but… you can have multiple virtual servers tied to a single IP address (http/https/etc.), and that's where the choices for when to advertise come in.

tmsh modify ltm virtual-address 10.99.99.100 route-advertisement all

There are four states a virtual address can be in: Unknown, Enabled, Disabled and Offline. When the virtual address is in the Unknown or Enabled state, its route will be added to the kernel routing table. When the virtual address is in the Disabled or Offline state, its route will be removed if present and will not be added if not already present. But the best part is, you can use this to only advertise the route when the virtual server and its associated pool members are all up and functioning. In simple terms we call this route health injection: based on the health of the application, we conditionally announce the route into the routing protocol. At this point, if you've followed me this far, you're probably asking what controls those conditions. I'll let the K article expand on the options a bit: https://my.f5.com/manage/s/article/K15923612

"So what does BGP have to do with popcorn?" Popcorn? Ohhhhhhhhhhh….. kernel! I see what you did there! I'm talking about the operating system kernel, silly. So when a virtual-address is in an Unknown or Enabled state and it is healthy, the route gets put in the kernel routing table. But that doesn't get it into the BGP process. Kernel routes are represented in the routing table with a 'K' (see the sketch below). This is where the fun begins! You guessed it! Route redistribution? Route redistribution! To take a step back, we first need to get you to the ZebOS interface. To enter the router configuration CLI from the bash command line, simply type imish. In a multi-route-domain configuration you would need to supply the route-domain number, but since we're just using the default route-domain 0 we're good. It's a very similar interface to many vendors' router and switch configuration, so many of you CCIEs should feel right at home. It even still lets you do a write memory or wr mem without having to create an alias. Clearly dating myself here..
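Purely as an illustration (the prompt and output below are mocked up, and exact ZebOS formatting varies by version), entering imish and looking at the routing table might look like this:

# from the BIG-IP bash prompt (route-domain 0)
imish
bigip1.example.com> enable
bigip1.example.com# show ip route
Codes: K - kernel, C - connected, S - static, R - RIP, B - BGP, O - OSPF
K       10.99.99.100/32 [0/0] is directly connected, tmm
C       10.1.20.0/24 is directly connected, external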
I'm not going to get into the full BGP configuration at this point, but the simplest way to get the kernel routes into the BGP process is to go under the BGP process and redistribute the kernel routes. BUT WAIT! Thar be dragons in that configuration!

First landmine, and a note about kernel routes: if you manually configure a static route on the BIG-IP via tmsh or the TMUI, those will also show up as kernel routes. Why is that concerning? A common example is where engineers configure a static default route on the BIG-IP via tmsh. When you redistribute kernel routes, that default route is now being advertised into BGP. Congrats! And if the BIG-IP is NOT your default gateway, hilarity ensues. And by hilarity I mean the type of laugh that comes out as you're updating your resume. The lesson here: when doing route redistribution, ALWAYS use a route filter to ensure only your intended routes or IP range make it into the routing protocol. This goes for your neighbor statements too. In both directions! You should control what routes come in and leave the device (see the sketch below).

Another way to have some disastrous consequences with BIG-IP routing is through summarization. If you are doing summarization, keep in mind that BGP advertises based on reachability to the networks it wants to advertise. In this case, BGP is receiving them in the form of kernel routes from tmm. But those are /32 addresses, and lots of them! Say you want to advertise a /23 summary route, but the lone virtual-address configured for route advertisement (the only one your BGP process knows about within that range) has a monitor that fails. The summary route will be withdrawn, leaving the entire /23 stranded. Be sure to configure all your virtual-addresses within that range for advertisement.
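Here's a minimal imish sketch of that filtered kernel redistribution. The AS number, prefix range, and object names are hypothetical; adapt them to your addressing and design:

bigip1# configure terminal
bigip1(config)# ip prefix-list VIP-RANGE seq 5 permit 10.99.99.0/24 le 32
bigip1(config)# route-map KERNEL-TO-BGP permit 10
bigip1(config-route-map)# match ip address prefix-list VIP-RANGE
bigip1(config-route-map)# exit
bigip1(config)# router bgp 65001
bigip1(config-router)# redistribute kernel route-map KERNEL-TO-BGP
bigip1(config-router)# end
bigip1# write memory

The route-map acts as the filter: only kernel routes matching the prefix-list make it into BGP, so a stray static default route configured via tmsh never leaks into your routing protocol.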
Next: BGP Behavior In High Availability Configurations

Ridiculously Easy Bot Protection: How to Use BIG-IP APM to Streamline Bot Defense Implementation

Ever imagined how your Bot solution implementation would be with a standard entry page at your application side: a page that's easily referred, with clear parameters and structured customization options? In this article, we explore using F5 BIG-IP Access Policy Manager (BIG-IP APM) alongside F5 Distributed Cloud Bot Defense (XC Bot Defense).

Bot defense solutions' challenges

Implementing bot defense solutions presents several challenges, each with unique considerations:

Evolving Bot Tactics: Bot tactics constantly evolve, demanding adaptive detection methods to avoid both false positives (blocking legitimate users) and false negatives (allowing malicious bots through). Effective solutions must be highly flexible and responsive to these changes.

Multi-Environment Integration: Bot defenses need to be deployed across diverse environments, including web, mobile, and APIs, adding layers of complexity to integration. Ensuring seamless protection across these platforms is critical.

Balancing Security and Performance: Security measures must be balanced with performance to avoid degrading the user experience. A well-calibrated bot defense should secure the application without causing noticeable slowdowns or other disruptions for legitimate users.

Data Privacy Compliance: Bot solutions often require extensive data collection, so adherence to data privacy laws is essential. Ensuring that bot defense practices align with regulatory standards helps avoid legal complications and maintains user trust.

Resource Demands: Integrating bot defense with existing security stacks can be resource-intensive, both in terms of cost and skilled personnel. Proper configuration, monitoring, and maintenance require dedicated resources to ensure long-term effectiveness and efficiency.

What F5 BIG-IP APM brings to the table?

For teams working on bot defense solutions, several operational challenges can arise:

Targeted Implementation Complexity: Identifying the correct application page for applying bot defense is often a complex process. Teams must ensure the solution targets the page containing the specific parameters they want to protect, which can be time-consuming and resource-intensive.

Adaptation to Application Changes: Changes like upgrades or redesigns of the application page often require adjustments to bot defenses. These modifications can translate into significant resource commitments, as teams work to ensure the bot solution remains aligned with the new page structure.

BIG-IP APM simplifies this process by making it easier to identify and target the correct page, reducing the time and resources needed for implementation. This allows technical and business resources to focus on more strategic priorities, such as fine-tuning bot defenses, optimizing protection, and enhancing application performance.

Architecture and traffic flow

In this section, let's explore how F5 XC Bot Defense and BIG-IP APM work together. First, the prerequisites:

F5 XC account with access to Bot Defense.
APM licensed and provisioned.
F5 BIG-IP min. v16.x for native connector integration.
BIG-IP self IP reachability to the Internet to communicate with F5 XC, mainly to reach this domain (ibd-web.fastcache.net).

Now, time to go quickly through our beloved TMM packet order. Because APM Access events take precedence over Bot enforcement, we rely on a simple iRule to apply Bot Defense to the BIG-IP APM logon page.
The roles split as follows: BIG-IP Bot Defense is responsible for inserting the JS and passing traffic between the client and the APM virtual server, back and forth. BIG-IP APM is responsible for the logon page, MFA, and API security or SSO integrations to manage client access to the backend application.

Solution Implementation

Let's start with our solution implementation. The F5 Distributed Cloud Bot Defense connector with BIG-IP was discussed in detail in this article: F5 Distributed Cloud Bot Defense on BIG-IP 17.1. Follow the steps mentioned in that article, with the few changes noted below:

API Hostname Web: ibd-web.fastcache.net. For Per-session policies we use /my.policy as the target URL, while for Per-request and MFA implementations, you also need to add /vdesk/*.

Protection Pool - Web: Create a pool with the FQDN ibd-web.fastcache.net.

Virtual server: Create an LTM virtual server to listen to incoming traffic, perform SSL offloading, attach an HTTP profile, and attach the Bot Defense connector profile.

Forwarding iRule: Attach a forwarding iRule to the Bot virtual server:

when CLIENT_ACCEPTED {
    ## Forwarding to the APM Virtual Server
    virtual Auth_VS
}

BIG-IP APM Policies: In this step we create two deployment options. A Per-session policy, where BIG-IP presents the logon page to the user, and a Per-request policy, which serves the case where the initial logon is handled at a remote IdP and APM handles per-request, MFA authentication, or API security. (A tmsh sketch of the pool and virtual server follows below.)
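For reference, a rough tmsh sketch of those two objects. The object and profile names here are hypothetical, the destination IP is illustrative, and the Bot Defense connector profile itself is created per the linked article:

# pool used by the Bot Defense connector profile to reach the XC API hostname
tmsh create ltm pool xc_bot_defense_pool members add { ibd-web.fastcache.net:443 { fqdn { name ibd-web.fastcache.net autopopulate enabled } } } monitor https

# client-facing virtual server: SSL offload, HTTP profile, connector profile, forwarding iRule
tmsh create ltm virtual bot_defense_vs destination 203.0.113.10:443 ip-protocol tcp profiles add { http clientssl my_bot_defense_connector } rules { bot_forward_to_apm }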
Now it's time to run the traffic and observe the results. From the client browser, we can see the customer1.js inserted, and the corresponding traffic appears on the F5 XC Dashboard.

Conclusion

The primary goal of incorporating BIG-IP APM into the Bot Defense solution is to strike a balance between accelerating application development across web and mobile platforms and enforcing consistent organizational bot policies. By decoupling application login and authentication from the application itself, this approach enables a more streamlined, optimized, and secure bot defense implementation. It allows development teams to concentrate on application performance and feature enhancements, knowing that security measures are robustly managed and seamlessly integrated into the infrastructure.

Related Content: F5 Distributed Cloud Bot Defense on BIG-IP 17.1; Bot Detection and Security: Stop Automated Attacks; 2024 Bad Bots Review

How I Did it - Migrating Applications to Nutanix NC2 with F5 Distributed Cloud Secure Multicloud Networking

In this edition of "How I Did it", we will explore how F5 Distributed Cloud Services (XC) enables seamless application extension and migration from an on-premises environment to Nutanix NC2 clusters.

F5 Distributed Cloud - Mitigation for Cross Tenant Origin Exposure (CTOE)
F5 Distributed Cloud (XC) offers a suite of powerful features designed to simplify the lives of administrators and engineers. A key aspect of this ease of use comes from shared objects, such as Regional Edge Proxies, which utilize well-known public IP addresses. However, while this shared infrastructure enhances scalability and efficiency, it can also present risks if leveraged by attackers; in this case, cross tenant origin exposure (CTOE). For instance:

Customer(x) has tenant(x) in XC with a load balancer pointing to their public IP origin servers. These may be behind a perimeter firewall NAT (as diagrammed below) or be actual public IPs on the servers.

The customer's perimeter firewall is configured to deny all inbound traffic to the public IP for site1.example.com.

The perimeter firewall is configured to allow inbound traffic to the public IP for site1.example.com from XC IPs, which are a well-known and public shared IP range: XC Proxy IP's Reference Doc

This setup is generally considered a minimum best practice because it restricts traffic to only those sources originating from XC. However, depending on the organization's risk appetite, this level of security may be insufficient.

The Risk

Another account/tenant(y) within Distributed Cloud could create a load balancer and point it to the public IP or DNS name of the origin pools for tenant(x). The attacker must know or learn the actual origin server's IP or network segment to perform this attack, and this discovery is fairly trivial; there are many approaches. In addition, what if the origin pool in tenant(x) is pointing to a DNS name that resolves to public IPs? This is common with SaaS API gateways such as AWS and Azure, to name a few, and these gateways all use the same DNS name for the gateway respective to their cloud. Same DNS = Same IPs = Easy to learn or guess origin IPs. For instance, a common flow where a customer is using XC for WAF/WAAP and a third-party SaaS solution for an API gateway may be Client -> XC(LB-WAAP) -> APIGW(pub-ip) -> API. In this default configuration, an attacker could learn the customer's public NAT IP and add it to their origin pool. They can now instantiate attacks from their tenant(y) which will be sourced from the XC IPs and allowed by the customer(x) perimeter firewall.

Mitigation

There are at least 4 ways to mitigate this risk.

1. L7 Header - If the origin servers (on-prem or SaaS) have something in front of them that is "L7 aware", or they themselves can be configured to do header validation, a custom HTTP request header could be injected into the flow by the load balancer in tenant(x). Tenant(y) would not know or be able to see this header (see the sketch after this list). Of course, traffic not containing this header would still make it all the way to the L7-aware service before being dropped. While this would suffice for an L7 DoS or other L7-type attack, it would not help with an L3/4-type attack, which could still make its way through the infrastructure.

2. MTLS - A unique differentiator for F5 XC is our ability to use server-side mTLS. If a customer has the capability on the web server/service, or on something in front of it similar to the previous L7 header example, then we can add an additional layer of source validation by using mutual certificate authentication (mTLS). Even a self-signed cert would add a lot of value here. No cert = no Layer 7 access to the app or service. This does not prevent an L3/4 attack but will prevent unwanted application access.
3. Customer Edge (CE) - Customer Edge proxies are deployable software that creates a private mesh back to our Application Delivery Network (ADN). These come with additional cost and need to be deployed at each location, thus creating a private mesh or overlay network that is unavailable outside of the tenant. In this scenario, the attacker's traffic could potentially make it to the public IP of (or in front of) the CE and be dropped, thus protecting the application itself but still potentially allowing bad L3/4 traffic.

4. Private Link - Private Link is a paid add-on to XC that enables connectivity between XC, clients, and resources. It offers many advantages, particularly when addressing regulatory and other security compliance requirements. Perimeter firewall rules can be simplified to allow traffic exclusively from Private Links, which are accessible only from the designated tenancy. Private Links can mitigate L3-L7 attacks because the link is entirely private by design. XC Private Link Overview
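To make option 1 concrete, here is a minimal sketch of origin-side header validation in NGINX. The header name and secret value are hypothetical; on the XC side you would inject the matching header on the tenant(x) load balancer, and the secret should be long, random, and rotated:

# illustrative nginx fragment on (or in front of) the origin
server {
    listen 443 ssl;
    server_name site1.example.com;

    # drop anything that did not come through our own XC load balancer
    if ($http_x_xc_shared_secret != "replace-with-a-long-random-value") {
        return 403;
    }

    location / {
        proxy_pass http://app_backend;
    }
}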
A Word on L3/4 DDoS

L3/4 attacks were brought up several times above when talking about the technicalities of each mitigation method. While an L3/4 attack is not always distributed by nature, most are. One very important concept to keep in mind is the fact that XC natively provides L3/4 DDoS mitigation at our Regional Edges. Even in the examples above where "attack" traffic could make it all the way to the app, or at least to the perimeter, if it was a true DDoS, it would get picked up by our Regional Edges and automatically mitigated.

Conclusion

In today's interconnected cloud ecosystems, mitigating CTOE attacks is crucial to maintaining service availability and performance. By understanding the vulnerabilities that stem from cross-cloud communications and applying best practices, organizations can safeguard their systems from exploitation. As we continue to expand our cloud footprints, proactive security measures are not only necessary but must evolve alongside the complexity of the environments we manage. Effective CTOE prevention is an essential part of ensuring a resilient, high-performing network in this cloud-driven world. Like this article? Please drop a like or line below!

Create F5 BIG-IP Next Instance on Proxmox Virtual Environment

If you are looking to deploy an F5 BIG-IP Next instance on Proxmox Virtual Environment (henceforth referred to as Proxmox for the sake of brevity), perhaps in your home lab, here's how.

First, download the BIG-IP Next Central Manager and BIG-IP Next QCOW files from MyF5 Downloads. Click on "Copy Download Link".

Copy the QCOW files to your Proxmox host. The example below uses the download links from above.

proxmox $ curl -O -L -J [link for Central Manager from F5 downloads]
proxmox $ curl -O -L -J [link for Next from F5 downloads]

On the Proxmox host, extract the contents of the QCOW files. You will need to rename the Central Manager file from .qcow to .qcow2.

proxmox $ cd ~/
proxmox $ mv BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2
proxmox $ tar -zxvf BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.tar.gz
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512.sig
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512sum.txt.asc
BIG-IP-Next-20.2.1-F5-ca-bundle.cert
BIG-IP-Next-20.2.1-F5-certificate.cert

Then, run the commands below to create a virtual machine (VM) from the extracted QCOW files. Replace the values to match your environment.

#
# Central Manager
#
# use either the DHCP or static IP example
#
# DHCP (change values to match your environment)
proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ide2=local-lvm:cloudinit

# static IP (change values to match your environment)
# proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.5/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ide2=local-lvm:cloudinit

# import disk
proxmox $ qm set 105 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2 --boot order=virtio0

#
# Next instance
#
# Note that you need at least two interfaces, one for management and one for data-plane
#
# use either the DHCP or static IP example
#
# DHCP
proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit

# static IP
# proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.7/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit

# import disk
proxmox $ qm set 107 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2 --boot order=virtio0

You should now see a new VM created on the Proxmox GUI. Finally, start the VM. This will take a few minutes. The BIG-IP Next VM is now ready to be onboarded per the instructions found here.
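For completeness, a small sketch of starting and checking the VMs from the Proxmox CLI instead of the GUI, using the VM IDs from the examples above:

proxmox $ qm start 105   # Central Manager
proxmox $ qm start 107   # Next instance
# confirm the instances are running
proxmox $ qm status 105
proxmox $ qm status 107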