BIG-IP BGP Routing Protocol Configuration And Use Cases
Is the F5 BIG-IP a router? Yes! No! Wait, what? Can the BIG-IP run a routing protocol? Yes. But should it be deployed as a core router? An edge router? Stay tuned. We'll explore these questions and more through a series of common use cases for BGP on the BIG-IP... And oddly, I just realized how close in typing BGP and BIG-IP are, so hopefully my editors will keep me honest. (squirrel!) In part one we will explore the routing components on the BIG-IP and some basic configuration details to help you understand what the appliance is capable of. Please pay special attention to some of the gotchas along the way.

Can I Haz BGP?

Ok. Your BIG-IP comes with ZebOS to provide routing functionality, but what happens when you turn it on? What do you need to do to get routing updates into the BGP process? And does my licensing even cover it? Starting with the last question:

tmsh show /sys license | grep "Routing Bundle"

The above command will help you determine whether you're going to be able to proceed, or be stymied at the bridge like the Black Knight in the Holy Grail. Fear not! Many licensing options already include the routing bundle.

Enabling Routing

First and foremost, routing protocol configuration is tied to the route-domain. What's a route-domain? I'm so glad you asked! Route-domains are separate Layer 3 route tables within the BIG-IP. There is a concept of parent and child route-domains, so while they're not quite the same as a routing concept you may be familiar with, VRFs, they're close enough that for this context we'll just say they are. You can enable routing protocols on individual route-domains; each route-domain can run its own set of routing protocols, or none at all. By default the BIG-IP starts with just route-domain 0. And because most router folks live on the CLI, we'll walk through the configuration examples that way on the BIG-IP:

tmsh modify net route-domain 0 routing-protocol add { BGP }

So great! Now we're off and running BGP. So the world knows we're here, right? Nope.

Consider what you want to advertise. The most common advertisements sourced from the BIG-IP are the IP addresses of virtual servers. Now why would I want to do that? I can just put the BIG-IP on a large subnet and it will respond to ARP requests and send gratuitous ARPs (GARPs), so I can reach the virtual servers just fine.

<rant> Author's opinion here: I consider this one of the worst BIG-IP implementation methods. Why? Well, for starters, what if you want to expand the number of virtual servers on the BIG-IP? Then you need to re-IP the network interfaces of all the devices (routers, firewalls, servers) in order to expand the subnet mask. Yuck! Don't even talk to me about secondary subnets. Second: ARP floods! Too many times I see issues where the BIG-IP has to send a flood of GARPs, and the infrastructure, in an attempt to protect its control plane, filters or rate-limits the number of incoming requests it will accept. So engineers are left to troubleshoot the case of the missing GARPs. Third: sometimes you need to migrate applications to another BIG-IP appliance because they grew too big for the existing infrastructure, and having the addresses tied to this interface just leads to confusion. I'm sure there are some corner cases where this is the best route, but I would say they're in the minority.
</rant>

I can hear you all now... "So what do you propose, kind sir?" See? I can hear you... Treat the virtual servers as loopback interfaces. Then they're not tied to a specific interface, and to move them you just need to start advertising the /32 from another spot. (Yes, you could statically route it too. I hear you out there wanting to show your routing chops.) As a bonus, the only GARPs are those from the self-IPs. This still allows you to statically route the entire /24 to the BIG-IP's self IP address, but you can also use one of them fancy routing protocols to announce the routes, either individually or through summarization.

Announcing Routes

Hear ye, hear ye! I want the world to know about my virtual servers. *ahem* A quick little tangent on BIG-IP nomenclature: the virtual server does not get announced in the routing protocol. "Well then what does?" Eerie mind reading, isn't it? Remember from BIG-IP 101 that a virtual server is an IP address and port combination, and routing protocols don't do well with carrying the port across our network. So what BIG-IP object is solely an IP address construct? The virtual-address! "Wait, what?" Yeah... It's a menu item I often forget is there too. But here's where you let the BIG-IP know you want to advertise the virtual-address associated with the virtual server. But... but... but... you can have multiple virtual servers tied to a single IP address (http/https/etc.), and that's where the choices for when to advertise come in:

tmsh modify ltm virtual-address 10.99.99.100 route-advertisement all

There are four states a virtual-address can be in: Unknown, Enabled, Disabled, and Offline. When the virtual-address is in the Unknown or Enabled state, its route is added to the kernel routing table. When it is in the Disabled or Offline state, its route is removed if present and will not be added if absent. But the best part is that you can use this to advertise the route only when the virtual server and its associated pool members are all up and functioning. In simple terms we call this route health injection: based on the health of the application, we conditionally announce the route into the routing protocol. If you've followed me this far, you're probably asking what controls those conditions. I'll let the K article expand on the options a bit: https://my.f5.com/manage/s/article/K15923612

"So what does BGP have to do with popcorn?" Popcorn? Ohhhhhhhhhhh... kernel! I see what you did there! I'm talking about the operating system kernel, silly. When a virtual-address is in an Unknown or Enabled state and it is healthy, the route gets put in the kernel routing table. But that doesn't get it into the BGP process. Kernel routes are represented in the routing table with a 'K', as in the illustrative output below. This is where the fun begins! You guessed it! Route redistribution? Route redistribution! To take a step back, we first need to get you to the ZebOS interface. To enter the router configuration CLI from the bash command line, simply type imish. In a multi-route-domain configuration you would need to supply the route-domain number, but since we're just using the default route-domain 0, we're good. It's a very similar interface to many vendors' router and switch configurations, so many of you CCIEs should feel right at home. It even still lets you do a write memory or wr mem without having to create an alias. Clearly dating myself here..
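Here's a minimal illustrative sketch of entering imish and spotting a kernel route (the prompt, route entries, and exact output formatting are invented placeholders; your output will differ):

imish
bigip1[0]>show ip route
Codes: K - kernel, C - connected, S - static, R - RIP, B - BGP,
       O - OSPF, IA - OSPF inter area
K       10.99.99.100/32 [0/0] via 10.1.20.1, internal
C       10.1.20.0/24 is directly connected, internal

The 10.99.99.100/32 entry is the virtual-address we enabled route-advertisement on earlier; it shows up flagged with a 'K' because tmm pushed it into the kernel routing table.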
I'm not going to get into the full BGP configuration at this point, but the simplest way to get the kernel routes into the BGP process is to go under the BGP process and redistribute the kernel routes. BUT WAIT! Thar be dragons in that configuration!

First landmine, and a note about kernel routes: if you manually configure a static route on the BIG-IP via tmsh or the TMUI, it will also show up as a kernel route. Why is that concerning? A common example is an engineer configuring a static default route on the BIG-IP via tmsh. When you redistribute kernel routes, that default route is now being advertised into BGP. Congrats! And if the BIG-IP is NOT your default gateway, hilarity ensues. And by hilarity I mean the type of laugh that comes out as you're updating your resume. The lesson here: when doing route redistribution, ALWAYS use a route filter to ensure only your intended routes or IP range make it into the routing protocol. This goes for your neighbor statements too, in both directions! You should control what routes come in and leave the device. (See the sketch below for one way to do this.)

Another way to have some disastrous consequences with BIG-IP routing is through summarization. If you are doing summarization, keep in mind that BGP advertises based on reachability to the networks it wants to advertise. In this case, BGP learns them in the form of kernel routes from tmm. But those are /32 addresses, and lots of them! Say you want to advertise a /23 summary route, but the lone virtual-address configured for route advertisement (the only one your BGP process knows about within that range) has a monitor that fails. The summary route will be withdrawn, leaving everything else in the /23 stranded. Be sure to configure all your virtual-addresses within that range for advertisement.
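Here's a minimal sketch of filtered kernel-route redistribution in imish (the AS number 65001 and the 10.99.99.0/24 range are placeholders for illustration; substitute your own values):

imish
bigip1[0]>enable
bigip1[0]#configure terminal
bigip1[0](config)#ip prefix-list VIP-ROUTES seq 5 permit 10.99.99.0/24 le 32
bigip1[0](config)#route-map KERNEL-TO-BGP permit 10
bigip1[0](config-route-map)#match ip address prefix-list VIP-ROUTES
bigip1[0](config-route-map)#exit
bigip1[0](config)#router bgp 65001
bigip1[0](config-router)#redistribute kernel route-map KERNEL-TO-BGP
bigip1[0](config-router)#end
bigip1[0]#write memory

With the prefix-list in place, a stray kernel default route never makes it into BGP; only the virtual-address /32s inside 10.99.99.0/24 do.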
Next: BGP Behavior In High Availability Configurations

Leverage BIG-IP 17.1 Distributed Cloud Services to Integrate F5 Distributed Cloud Bot Defense

Introduction: The F5 Distributed Cloud (XC) Bot Defense protects web and mobile properties from automated attacks by identifying and mitigating malicious bots. Bot Defense uses JavaScript and API calls to collect telemetry and mitigate malicious users. F5 Distributed Cloud (XC) Bot Defense is available in Standard and Enterprise service levels. At both service levels, Bot Defense is available for web, web scraping, and mobile traffic; web scraping protection is only applicable to web endpoints. This article will show you how to configure and use F5 Distributed Cloud Bot Defense (XC Bot Defense) on BIG-IP version 17.1 and above, and how to monitor the solution on F5 Distributed Cloud Console (XC Console).

Prerequisites: A valid XC Console account. If you don't have an account, visit Create a Distributed Cloud Console Account. An Organization plan. If you don't have an Organization plan, upgrade your plan.

Getting Started: Log in to F5 XC Console. If XC Bot Defense isn't enabled, a Bot Defense landing page appears; select Request Service to enable XC Bot Defense. If XC Bot Defense is enabled, you will see the tiles. Select Bot Defense. Verify you are in the correct Namespace. If your Namespace does not have any Protected Applications, you will see the following page. Click Add Protected Application. When you select a Namespace that has been configured with Protected Applications, you will see this page. Scroll down to Manage. Click Applications. Click Add Application. The Protected Application page is presented. Enter: Name, Labels, Description. Select the Application Region (US in this example) and Connector Type (BIG-IP iApp for this demo; Cloudfront and Custom are other available connectors). Scroll to the bottom and click Save and Exit. That will take you back to the Protected Applications page. Verify your Application is listed with all the metadata you supplied. Click the ellipsis (three dots) to the right, scroll down into the highlighted area, and copy the App ID, Tenant ID, and API Key. Save each value to a location where you can access it in the next steps. That completes the configuration on F5 XC Console.

Log in to your BIG-IP. You will notice that in version 17.1 and above there is a new selection along the left pane called Distributed Cloud Services. Expand it and you will see all the latest integrations F5 provides: Application Traffic Insight, Bot Defense, Client-Side Defense, Account Protection & Authentication Intelligence, and Cloud Services. This article, as stated before, will focus on Bot Defense; look for future articles covering the other integrations. On the Main tab, click Distributed Cloud Services > Bot Defense > Bot Profiles and select Create. This brings up the General Properties page where you will enter required and optional information; mandatory items have a blue line on the edge. Supply a Name, the Application ID (from the previous step), the Tenant ID (from the previous step), the API Hostname (Web is filled in for you), and the API Key (from the previous step). In the JS Injection Configuration section, the "BIG-IP Handles JS Injections" field is checked by default; if you uncheck the field, follow the note given in the web UI. For Protected Endpoint(s) - Web, supply either the URI or IP of the host application along with the path and method you are protecting on the protected endpoint. In the following image, I have selected Advanced to show more detail of what is available. Again, mandatory fields have a blue indicator; here, those are the Protection Pool and SSL Profile. Click Finished when complete.
One final step to complete the setup. Go to the Main tab, Local Traffic > Virtual Servers > Virtual Server List, and select the virtual server you are going to apply the Bot Defense profile to. Click on Distributed Cloud Services in the top banner. Under Service Settings > Bot Defense, set to Enabled, select the Bot Defense profile you created in the steps above, then click Update. You have now successfully integrated the BIG-IP 17.1 Distributed Cloud Services feature with F5 Distributed Cloud Bot Defense. One final visual is the dashboard for F5 Distributed Cloud Bot Defense; this is where you will observe and monitor which bots were seen, and what actions were taken against bots targeting your protected applications.

F5 XC Bot Defense on BIG-IP 17.1 Demo:

Conclusion: I hope you were able to benefit from this tutorial. I was able to show how quick and easy it is to configure F5 Distributed Cloud Bot Defense on BIG-IP v17.1 using the built-in Distributed Cloud Services integration.

Related Links: https://www.f5.com/cloud https://www.f5.com/cloud/products/bot-defense BIG-IP Bot Defense on 14.x-16.x
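As a quick sanity check once everything is in place, you can confirm that the Bot Defense JavaScript is being injected into responses from the protected endpoint. A rough sketch from any client machine (the hostname and path are placeholders for your own protected application):

curl -sk https://app.example.com/login.php | grep -io '<script[^>]*src="[^"]*"'

If the integration is working, the response should contain a script tag that your origin server didn't serve.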
Ridiculously Easy Bot Protection: How to Use BIG-IP APM to Streamline Bot Defense Implementation

Ever imagined how your Bot solution implementation would look with a standard entry page at your application side: a page that's easily referenced, with clear parameters and structured customization options? In this article, we explore using F5 BIG-IP Access Policy Manager (BIG-IP APM) alongside F5 Distributed Cloud Bot Defense (XC Bot Defense).

Bot defense solutions' challenges

Implementing bot defense solutions presents several challenges, each with unique considerations: Evolving Bot Tactics: Bot tactics constantly evolve, demanding adaptive detection methods to avoid both false positives (blocking legitimate users) and false negatives (allowing malicious bots through). Effective solutions must be highly flexible and responsive to these changes. Multi-Environment Integration: Bot defenses need to be deployed across diverse environments, including web, mobile, and APIs, adding layers of complexity to integration. Ensuring seamless protection across these platforms is critical. Balancing Security and Performance: Security measures must be balanced with performance to avoid degrading the user experience. A well-calibrated bot defense should secure the application without causing noticeable slowdowns or other disruptions for legitimate users. Data Privacy Compliance: Bot solutions often require extensive data collection, so adherence to data privacy laws is essential. Ensuring that bot defense practices align with regulatory standards helps avoid legal complications and maintains user trust. Resource Demands: Integrating bot defense with existing security stacks can be resource-intensive, both in terms of cost and skilled personnel. Proper configuration, monitoring, and maintenance require dedicated resources to ensure long-term effectiveness and efficiency.

What does F5 BIG-IP APM bring to the table?

For teams working on bot defense solutions, several operational challenges can arise: Targeted Implementation Complexity: Identifying the correct application page for applying bot defense is often a complex process. Teams must ensure the solution targets the page containing the specific parameters they want to protect, which can be time-consuming and resource-intensive. Adaptation to Application Changes: Changes like upgrades or redesigns of the application page often require adjustments to bot defenses. These modifications can translate into significant resource commitments, as teams work to ensure the bot solution remains aligned with the new page structure. BIG-IP APM simplifies this process by making it easier to identify and target the correct page, reducing the time and resources needed for implementation. This allows technical and business resources to focus on more strategic priorities, such as fine-tuning bot defenses, optimizing protection, and enhancing application performance.

Architecture and traffic flow

In this section, let's explore how F5 XC Bot Defense and BIG-IP APM work together. First, the prerequisites: an F5 XC account with access to Bot Defense; APM licensed and provisioned; F5 BIG-IP v16.x minimum for the native connector integration; and BIG-IP self IP reachability to the Internet to communicate with F5 XC, mainly to reach this domain (ibd-web.fastcache.net). Now, time to go quickly through our beloved TMM packet order. Because BIG-IP APM Access events take precedence over Bot enforcement, we rely on a simple iRule to apply Bot Defense to the BIG-IP APM logon page.
BIG-IP Bot Defense is responsible for inserting the JS and passing traffic back and forth between the client and the APM VS. BIG-IP APM is responsible for the logon page, MFA, API security, or SSO integrations to manage client access to the backend application.

Solution Implementation

Let's start with our solution implementation. The F5 Distributed Cloud Bot Defense connector with BIG-IP was discussed in detail in this article: F5 Distributed Cloud Bot Defense on BIG-IP 17.1. Follow the steps mentioned in that article, with a few changes noted below. API Hostname Web: ibd-web.fastcache.net. For per-session policies we use /my.policy as the target URL, while for per-request and MFA implementations you need to add /vdesk/*. Protection Pool - Web: create a pool with the FQDN ibd-web.fastcache.net. Virtual server: create an LTM virtual server to listen for incoming traffic, perform SSL offloading, apply an HTTP profile, and attach the Bot Defense connector profile. Forwarding iRule: attach a forwarding iRule to the Bot virtual server.

when CLIENT_ACCEPTED {
    ## Forwarding to the APM Virtual Server
    virtual Auth_VS
}

BIG-IP APM Policies: in this step we are creating two deployment options. Per-session policy, where BIG-IP presents the logon page to the user. Per-request policy, which serves cases where the initial logon is handled at a remote IdP and APM handles per-request MFA authentication or API security. Now it's time to run the traffic and observe the results. From the client browser, we can see the customer1.js inserted. From the F5 XC Dashboard, we can observe the corresponding traffic.

Conclusion

The primary goal of incorporating BIG-IP APM into the Bot Defense solution is to strike a balance between accelerating application development across web and mobile platforms and enforcing consistent organizational bot policies. By decoupling application login and authentication from the application itself, this approach enables a more streamlined, optimized, and secure bot defense implementation. It allows development teams to concentrate on application performance and feature enhancements, knowing that security measures are robustly managed and seamlessly integrated into the infrastructure.

Related Content F5 Distributed Cloud Bot Defense on BIG-IP 17.1 Bot Detection and Security: Stop Automated Attacks 2024 Bad Bots Review
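As a postscript to the implementation above, here is a rough tmsh sketch of the Bot virtual server plumbing (the destination address and iRule name are hypothetical placeholders for the objects you created; the Bot Defense connector profile itself is attached through the Distributed Cloud Services settings described earlier, not shown here):

tmsh create ltm virtual Bot_Defense_VS destination 10.1.10.100:443 ip-protocol tcp profiles add { http clientssl } rules { Forward_to_APM_iRule }

The listener terminates TLS, applies the HTTP profile so the JS can be injected, and hands every accepted connection to the APM virtual server via the iRule.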
Mitigating OWASP API Security risks using BIG-IP

The introduction article covered a summary of the OWASP API Security Top 10 categories. In this article, we will focus on how we can protect our applications against some of these vulnerabilities using F5 BIG-IP Advanced Web Application Firewall (Advanced WAF).

Excessive Data Exposure: Problem Statement: As shown below in one of the demo application APIs, Personally Identifiable Information (PII) such as Credit Card Numbers (CCN) and Social Security Numbers (SSN) is exposed. This data is highly sensitive, so we must hide it to prevent personal data exploits. Solution: By configuring DataGuard-related WAF settings in BIG-IP as below, we are able to mask these numbers, thereby preventing data breaches. If needed, we can update the settings to block this vulnerability, after which all incoming requests for this endpoint will be blocked.

Injection: Problem Statement: Customer login pages built without secure coding practices may have flaws, and intruders will use them to subvert credential validation using different types of injection, like SQLi, command injection, etc. In our demo application, attackers were able to bypass validation using SQLi (username "' OR true --" and any password), thereby getting administrative access as shown below: Solution: By configuring Advanced WAF settings in BIG-IP and enabling the appropriate violation blocking settings, we are able to identify and block these types of known injection attacks, as shown below.

Improper Assets Management: Problem Statement: In our demo application, attackers have identified deprecated endpoints with a path starting with "/v1" which are no longer maintained but are still available. Using these undocumented endpoints, attackers can access unwanted data, causing loss of sensitive app information. Solution: To address this use case, we created OpenAPI (Swagger) files for the demo application, uploaded them to BIG-IP, and configured Advanced WAF to allow only these known URLs. If attackers try to access deprecated URLs that are not in the OpenAPI files, the requests will be blocked.

Insufficient Logging & Monitoring: Problem Statement: Appropriate logging and monitoring solutions play a pivotal role in identifying attacks and in finding the root cause of any security issues. Without them, applications are fully exposed to attackers, and operators are completely blind to which users and resources are being accessed. Solution: BIG-IP provides many dashboards, such as Statistics, DoS Visibility, Analytics, and OWASP, for end-to-end visibility of every request, and users can filter requests as per their requirements. By default, the system provides different types of logging profiles; users can also create custom logging profiles and attach them to load balancers to track these data flows. BIG-IP also supports a reporting service to generate timely reports as needed.

Conclusion: As demonstrated above, F5 BIG-IP Advanced WAF can be used to mitigate different OWASP security attacks against modern applications running APIs. Stay tuned for more OWASP videos. For getting started, check the links below: BIG-IP Advanced WAF OWASP API Security Top 10 BIG-IP VE Overview of BIG-IP
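As a postscript for readers who prefer configuration to screenshots, here is a rough declarative WAF policy fragment capturing the Data Guard masking idea discussed above. This is only a sketch: the key names follow the declarative policy schema as I understand it and should be validated against the schema for your BIG-IP version before use.

{
  "policy": {
    "name": "api_dataguard_policy",
    "enforcementMode": "blocking",
    "data-guard": {
      "enabled": true,
      "maskData": true,
      "creditCardNumbers": true,
      "usSocialSecurityNumbers": true
    }
  }
}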
F5 BIG-IP deployment with Red Hat OpenShift - keeping client IP addresses and egress flows

Controlling the egress traffic in OpenShift allows you to use the BIG-IP for several use cases: keeping the source IP of ingress clients, providing highly scalable SNAT for egress flows, and providing security functionality for egress flows.
OWASP Automated Threats - CAPTCHA Defeat (OAT-009)

Introduction: In this OWASP Automated Threat article we'll be highlighting OAT-009 CAPTCHA Defeat with some basic threat information as well as a recorded demo to dive deeper into the concepts. In our demo we'll show how CAPTCHA Defeat works with automation tools, allowing attackers to accomplish their objectives despite a CAPTCHA whose intended purpose is preventing unwanted automation. We'll wrap it up by highlighting F5 Bot Defense to show how we solve this problem for our customers.

CAPTCHA Defeat Description: Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) challenges are used to distinguish normal users from bots. Automation is used in an attempt to analyse and determine the answer to visual and/or aural CAPTCHA tests and related puzzles. Apart from conventional visual and aural CAPTCHA, puzzle-solving mini games or arithmetical exercises are sometimes used. Some of these may include context-specific challenges. The process that determines the answer may utilise tools to perform optical character recognition, match against a prepared database of pre-generated images, use other machine reading, or rely on human solver farms.

OWASP Automated Threat (OAT) Identity Number: OAT-009
Threat Event Name: CAPTCHA Defeat
Summary Defining Characteristics: Solve anti-automation tests.

OAT-009 Attack Demographics:
Sectors Targeted: Education, Entertainment, Financial, Government, Retail, Social Networking
Parties Affected: Application Owners
Data Commonly Misused: Authentication Credentials
Other Names and Examples: Breaking CAPTCHA; CAPTCHA breaker; CAPTCHA breaking; CAPTCHA bypass; CAPTCHA decoding; CAPTCHA solver; CAPTCHA solving; Puzzle solving
Possible Symptoms: High CAPTCHA solving success rate on fraudulent accounts; Suspiciously fast or fixed CAPTCHA solving times

CAPTCHA Defeat Demo: In this demo we will show how it's possible to leverage real human click farms via CAPTCHA-solving services like 2CAPTCHA to bypass reCAPTCHA. We'll then have a look at the same attack with F5 Distributed Cloud Bot Defense protecting the application.

In Conclusion: CAPTCHAs are only a speed bump for motivated attackers while introducing considerable friction for legitimate customers. Today, we're at a point where bots solve CAPTCHAs more quickly and easily than most humans. Check out our additional resource links below to learn more.

OWASP Links: OWASP Automated Threats to Web Applications Home Page OWASP Automated Threats Identification Chart OWASP Automated Threats to Web Applications Handbook

F5 Related Content: Deploy Bot Defense on any Edge with F5 Distributed Cloud (SaaS Console, Automation) F5 Bot Defense Solutions F5 Labs "I Was a Human CAPTCHA Solver" The OWASP Automated Threats Project How Attacks Evolve From Bots to Fraud Part: 1 How Attacks Evolve From Bots to Fraud Part: 2 F5 Distributed Cloud Bot Defense F5 Labs 2021 Credential Stuffing Report
F5 Distributed Cloud Bot Defense (Overview and Demo)

What is Distributed Cloud Bot Defense? Distributed Cloud Bot Defense protects your web properties from automated attacks by identifying and mitigating malicious bots. Bot Defense uses JavaScript and API calls to collect telemetry and mitigate malicious users within the context of the Distributed Cloud global network. Bot Defense can easily be integrated into existing applications in a number of ways. For applications already routing traffic through Distributed Cloud Mesh Service, Bot Defense is natively integrated into your Distributed Cloud Mesh HTTP load balancers. This integration allows you to configure the Bot Defense service through the HTTP load balancer's configuration in the Distributed Cloud Console. For other applications, connectors are available for several common insertion points that likely already exist in modern application architectures. Once Bot Defense is enabled and configured, you can view and filter traffic and transaction statistics on the Bot Defense dashboard in Distributed Cloud Console to see which users are malicious and how they're being mitigated. F5 Distributed Cloud Bot Defense is an advanced add-on security feature included in the first launch of the F5 Web Application and API Protection (WAAP) service, with seamless integration to protect your web apps and APIs from a wide variety of attacks in real time.

High Level Distributed Cloud Security Architecture

Bot Defense Demo: In this technical demonstration video we walk through F5 Distributed Cloud Bot Defense, showing you how quick and easy it is to configure, the insights and visibility you gain, and a couple of real attacks with Selenium and Python browser automation.

"Nature is a mutable cloud, which is always and never the same." - Ralph Waldo Emerson. We might not wax that philosophically around here, but our heads are in the cloud nonetheless! Join the F5 Distributed Cloud user group today and learn more with your peers and other F5 experts. Hope you enjoyed this Distributed Cloud Bot Defense overview and demo. If there are any comments or questions, please feel free to reach us in the comments section. Thanks!

Related Resources: Deploy Bot Defense on any Edge with F5 Distributed Cloud (SaaS Console, Automation) Protecting Your Web Applications Against Critical OWASP Automated Threats Making Mobile SDK Integration Ridiculously Easy with F5 XC Mobile SDK Integrator JavaScript Supply Chains, Magecart, and F5 XC Client-Side Defense (Demo) Bots, Fraud, and the OWASP Automated Threats Project (Overview) Protecting Your Native Mobile Apps with F5 XC Mobile App Shield Enabling F5 Distributed Cloud Client-Side Defense in BIG-IP 17.1 Bot Defense for Mobile Apps in XC WAAP Part 1: The Bot Defense Mobile SDK F5 Distributed Cloud WAAP Distributed Cloud Services Overview Enable and Configure Bot Defense - F5 Distributed Cloud Service
Evolving Financial Services and how to protect against sophisticated cyber threats

In an era where cyber threats evolve as rapidly as digital innovation, financial institutions face unprecedented challenges. Balancing security, performance, and compliance is no longer optional—it's critical to survival. F5 empowers financial organizations to modernize their operations, safeguard customer trust, and stay ahead of competitors through a robust suite of solutions designed to mitigate risks, optimize performance, and ensure regulatory compliance. This article (the first in a series) provides an overview of how F5 addresses the pressing challenges of modern financial services, from securing APIs to neutralizing sophisticated DDoS attacks. Let's explore how F5 enables you to deliver fast, reliable, and secure digital experiences—every time.

What to expect? This series will include technical articles covering related F5 solutions, and an overview of how F5 products handle different aspects of financial services: F5 BIG-IP, F5 Distributed Cloud, and NGINX.

Mitigating Application Vulnerability: Financial institutions are prime targets for cybercriminals, and F5's layered security approach ensures resilience against evolving threats. Protection against OWASP Top 10 vulnerabilities (e.g., injection attacks, broken authentication) has evolved from mere web protection to cover web, API, and LLM traffic. You can explore examples of solutions across BIG-IP Advanced WAF, F5 Distributed Cloud, and NGINX, which actively block exploits while maintaining application performance, through these articles: OWASP top 10 Series F5 Hybrid Security Architectures for DevSecOps: F5's Distributed Cloud WAF and BIG-IP Advanced WAF BIG-IP Advanced WAF. NGINX App Protect.

Encrypted Traffic Inspection: BIG-IP SSL Orchestrator (SSLO) enables organizations to decrypt and inspect encrypted traffic without compromising speed, ensuring threats hidden in SSL/TLS traffic are neutralized. This series of articles shows different integration use cases with BIG-IP SSLO: Implementing SSL Orchestrator - High Level Considerations | DevCentral

Bot Mitigation: Bot attacks lead to fraud, operational disruptions, and reputational damage by enabling account takeovers, credential stuffing, and synthetic fraud. These attacks increase infrastructure costs, cause service downtime through DDoS, and expose institutions to regulatory penalties. Mitigation starts at multiple levels; below are some helpful resources on how to combat bot attacks: An overview of F5 Distributed Cloud Bot Defense Ridiculously Easy Bot Protection: How to Use BIG-IP APM to Streamline Bot Defense Implementation | DevCentral

Securing APIs and Third-Party Integrations: APIs drive innovation but introduce risks like data breaches and downtime. How to tackle API security depends on the applications being protected, whether we rely on BIG-IP, F5 Distributed Cloud, or NGINX, or a hybrid integration of different components. This series on API security is a great start: Use of NGINX Controller to Authenticate API Calls | DevCentral And to understand more about WAAP: What is WAAP? Community Learning Path: Web Application and API Protection (WAAP)

Preventing DDoS Attacks: DDoS attacks can heavily impact a business, whether immediately, by preventing the business from serving its customers, or over the longer term, by damaging the brand's image and customers' confidence in its ability to secure them and their data.
DDoS attack vectors vary: they may target the application layer, bandwidth, resources like CPU and memory, or critical protocols like DNS, TCP, and UDP. You can explore some interesting use cases on F5 DDoS mitigation through the links below: NGINX App Protect. F5 Distributed Cloud DDoS Mitigation Service. DDoS Mitigation with F5 Distributed Cloud How to get started with F5 Distributed Cloud Managed Services How to easily add DoS protection to your F5 Distributed Cloud applications BIG-IP Advanced Firewall Manager. Explanation of F5 DDoS threshold modes | DevCentral Concept of F5 Device DoS and DoS profiles | DevCentral IP-Intelligence and IP-Shunning | DevCentral BIG-IP Advanced WAF. F5 Hybrid Security Architectures for DevSecOps: F5's Distributed Cloud WAAP Bot and DDoS Defense and BIG-IP Advanced WAF F5 BIG-IP Advanced WAF - DOS profile configuration options. | DevCentral F5 Hybrid Security Architectures for DevSecOps: F5's Distributed Cloud WAF and BIG-IP Advanced WAF

Conclusion: In this introductory article, we went through an overview of F5 solutions in financial services; in the following articles, we will dig a bit deeper into each solution. F5 helps not only with security but with maximizing performance as well.

Related Content: Testing the security controls for a notional FDX Open Banking deployment Decoding PCI-DSS v4.0: F5's Ridiculously Easy Guide to Technical Compliance Banking and Financial Services Why Top Financial Services Companies Rely on F5 Overview of WAAP Incidents What is WAAP?
F5 Distributed Cloud – CE High Availability Options: A Comparative Exploration

This article explores an alternative approach to achieving HA across single CE nodes, catering for use cases requiring higher performance and granular control over redundancy and failover management.

Introduction

F5 Distributed Cloud offers different techniques to achieve High Availability (HA) for Customer Edge (CE) nodes in an active-active configuration to provide redundancy, scaling on demand, and simplified management. By default, F5 Distributed Cloud uses a method for clustering CE nodes in which CEs keep track of peers by sending heartbeats and facilitating traffic exchange among themselves. This method also handles the automatic transfer of traffic, virtual IPs, and services between CE peers—excellent for simplified deployment and for running App Stack sites hosting Kubernetes workloads. However, if CE nodes are deployed mainly to manage L3/L7 traffic and application security, this default model might lack the flexibility needed for certain scenarios. Many of our customers tell us that achieving high availability is not so straightforward with the current clustering model. These customers often have a lot of experience in managing redundancy and high availability across traditional network devices. They like to manage everything themselves: from scheduling when to switch over to a redundant pair (planned failover), to choosing how many network paths (tunnels) to use between CEs and REs (Regional Edges) or other CEs. They also want to handle any issues device by device, decide the number of CE nodes in a redundancy group, and be able to direct traffic to different CEs when one is being updated. Their feedback inspired us to write this article, where we explore a different approach to achieving high availability across CEs. The default clustering model is explained in this document: https://docs.cloud.f5.com/docs/ves-concepts/site#cluster-of-nodes

Throughout this article, we will dive into several key areas: An overview of the default CE clustering model, highlighting its inherent challenges and advantages. Introduction to an alternative clustering strategy, Single Node Clustering, including: an analysis of its challenges and benefits; identification of scenarios where this approach is most applicable; a guide to the configuration steps necessary to implement this model; and an exploration of failover behavior within this framework. A comparison table showing how this new method differs from the default clustering method. By the end of this article, readers will gain an understanding of both clustering approaches, enabling informed decisions on the optimal strategy for their specific needs.

Default CE Clustering Overview

In a standard CE clustering setup, a cluster must have at least three Master nodes, with subsequent additions acting as Worker nodes. A CE cluster is configured as a "Site," centralizing operations like pool configuration and software upgrades to simplify management. In this clustering method, frequent communication is required between the control plane components of the nodes on a low-latency network. When a failover happens, the VIPs and services, including the customer's compute workloads, will transition to the other active nodes. As shown in the picture above, a CE cluster is treated as a single site, regardless of the number of nodes it contains. In a Mesh Group scenario, each mesh link is associated with one single tunnel connected to the cluster.
These tunnels are distributed among the master nodes in the cluster, optimizing the total number of tunnels required for a large-scale Mesh Group. It also means that the site will be connected to the REs via only 2 tunnels, one to each RE.

Design Considerations for the Default CE Clustering model:

Best suited for: 1- App Stack Sites: Running Kubernetes workloads necessitates the default clustering method for container orchestration across nodes. 2- Large-scale Site Mesh Groups (SMG). 3- Cluster-wide upgrade preference: Customers who favour managing nodes collectively will find cluster-wide upgrades more convenient, though without control over the upgrade sequence of individual nodes.

Challenges: o Network Bottleneck for Ingress Traffic: A cluster connected to two Regional Edge (RE) sites via only 2 tunnels means only two nodes process external (ingress) traffic, limiting the additional nodes to processing internal traffic only. o Three-master-node requirement: Some customers are accustomed to dual-node HA models and may find the requirement for three master nodes resource-intensive. o Hitless upgrades: Controlled, phased upgrades are preferred by some customers for testing before widespread deployment, which is challenging with cluster-wide upgrades. o Cross-site deployments: High network latency between remote data centers can impact cluster performance due to the latency sensitivity of the etcd daemon, the backbone of cluster state management. If the network connection across the nodes gets disconnected, all nodes will most likely stop operating due to the quorum requirements of etcd. Therefore, F5 recommends deploying separate clusters for different physical sites. o Service Fault Sprawl and limited node fault tolerance: Default clusters can sometimes experience a cascading effect where a fault in a node spreads throughout the cluster. Additionally, a standard 3-node cluster can generally only tolerate the failure of a single node: if a cluster originally configured with three masters is reduced to a single active node, functionality may be lost. These limitations stem from the underlying clustering design and its dependency on etcd for maintaining cluster state.

The Alternative Solution: HA Between Multiple Single Nodes

The good news is that we can achieve the key objectives of clustering (streamlined management and high availability) without depending on the control-plane clustering mechanisms.

Streamlined management using "Virtual Site": F5 Distributed Cloud provides a mechanism called "Virtual Site" to perform operations on a group of sites (site = node or cluster of nodes), reducing the need to repeat the same set of operations for each site. The "Virtual Site" acts as an abstraction layer, grouping nodes tagged with a unique label and allowing these nodes to be addressed collectively as a single entity. Configuration of origin pools and load balancers can reference Virtual Sites instead of individual sites/nodes, to facilitate cluster-like management for two or more nodes and enable controlled day-2 operations. When a node is disassociated from the Virtual Site by removing the label, it's no longer eligible for new connections, and its listeners are simultaneously deactivated. Upgrading nodes is streamlined: simply remove the node's label to exclude it from the Virtual Site, perform the upgrade, and then reapply the label once the node is operational again.
This procedure offers a controlled failover process, ensuring minimal disruption and enhanced manageability by minimizing the blast radius and limiting the scope of downtime. Because traffic is rerouted to the other CEs, services are not impacted if something goes wrong with a node's upgrade.

HA/Redundancy across multiple nodes: Each single node in a Virtual Site connects to dual REs through IPSec or SSL/TLS tunnels, ensuring even load distribution and true active-active redundancy.

External (Ingress) Traffic: In the Virtual Site model, the Regional Edges (REs) distribute external traffic evenly across all nodes. This contrasts with the default clustering approach, where only two CE nodes are actively connected to the REs. The main Virtual Site advantage lies in its true active/active configuration for CEs, increasing the total ingress traffic capacity. If a node becomes unavailable, the REs automatically reroute new connections to another operational node within the Virtual Site, and the services (connections to origin pools) remain uninterrupted.

Internal (East-West) Traffic: For managing internal traffic to a single CE node in a Virtual Site (for example, when LB objects are configured to be advertised within the local site), all network techniques applicable to the default clustering model can be employed, except for the Layer 2 attachment (VRRP) method.

Preferred load distribution method for internal traffic across CEs: Our preferred methods for load balancing across CE nodes are either DNS-based load balancing or Equal-Cost Multi-Path (ECMP) routing utilizing BGP for redundancy.

DNS Load Balancer Behavior: If a node is detached from a Virtual Site, its associated listeners and Virtual IPs (VIPs) are automatically withdrawn. Consequently, the DNS load balancer's health checks will mark those VIPs as down and prevent them from receiving internal network traffic.

Current limitation for custom VIP and BGP: When using BGP, please note a current limitation that prevents configuring a custom VIP address on the Virtual Site. As a workaround, custom VIPs should be advertised on individual sites instead. The F5 product team is actively working to address this gap. For a detailed exploration of traffic routing options to CEs, please refer to the following article: https://community.f5.com/kb/technicalarticles/f5-distributed-cloud---customer-edge-site---deployment--routing-options/319435

Design Considerations for the Single Node HA Model:

Best suited for: 1- Customers with high throughput requirements: This clustering model ensures that all Customer Edge (CE) nodes are engaged in managing ingress traffic from Regional Edges (REs), which allows for scalable expansion by adding additional CEs as required. In contrast, the default clustering model limits ingress traffic processing to only two CE nodes per cluster (more precisely, to a single node from each RE), regardless of the number of worker nodes in the cluster. Consequently, this model is more advantageous for customers who have high throughput demands. 2- Customers who prefer controlled failover and software upgrades: This clustering model enables a sequential upgrade process, where nodes are updated individually to ensure each node upgrades successfully before moving on to the others. The process involves detaching the node from the cluster by removing its site label, which redirects traffic to the remaining nodes during the upgrade.
Once upgraded, the label is reapplied, and this process is repeated for each node in turn. This is a model that customers have known for 20+ years of upgrade procedures, with a little wrinkle in the form of the label. 3- Customers who prefer to distribute the load across remote sites: Nodes are deployed independently and do not require inter-node heartbeat communication, unlike the default clustering method. This independence allows for their deployment across various data centers and availability zones while being managed as a single entity. They are compatible with both Layer 2 (L2) spanned and Layer 3 (L3) spanned data centers, where nodes in different L3 networks utilize distinct gateways. As long as the nodes can access the origin pools, they can be integrated into the same "Virtual Site". This flexibility caters to customers' traditional preferences, such as deploying two CE nodes per location, which is fully supported by this clustering model.

Challenges: Lack of VRRP Support: The primary limitation of this clustering method is the absence of VRRP support for internal VIPs. However, there are alternative methods to distribute internal traffic across CE nodes. These include DNS-based routing, BGP with Equal-Cost Multi-Path (ECMP) routing, or placing the CEs behind another Layer 4 (L4) load balancer capable of distributing traffic without source address alteration, such as F5 BIG-IPs or the standard load balancers provided by Azure or AWS. Limitation on Custom VIP IP Support: Currently, the F5 Distributed Cloud Console has a restriction preventing the configuration of custom virtual IPs for load balancer advertisements on Virtual Sites. We anticipate this limitation will be addressed in future updates to the F5 Distributed Cloud platform. As a temporary solution, you can advertise the LB across multiple individual sites within the Virtual Site; this approach enables the configuration of custom VIPs on those sites. Requires extra steps for upgrading nodes: Unlike the default clustering model, where upgrades can be performed collectively on a group of nodes, this clustering model requires upgrading nodes on an individual basis. This may introduce more steps, especially in larger clusters, but it remains significantly simpler than traditional network device upgrades. Large-Scale Mesh Group: In F5 Distributed Cloud, the "Mesh Group" feature allows for direct connections between sites (whether individual CE sites or clusters of CEs) and other selected sites through IPSec tunnels. For CE clusters, tunnels are established on a per-cluster basis. However, for single-node sites, each node creates its own tunnels to connect with remote CEs. This setup can lead to an increased number of tunnels needed to establish the mesh. For example, in a network of 10 sites configured with dual-CE Virtual Sites, each CE is required to establish 18 IPSec tunnels to connect with other sites, or 19 for a full mesh configuration. Comparatively, a 10-site network using the default clustering method (with a minimum of 3 CEs per site) would only need up to 9 tunnels from each CE for full connectivity. Opting for Virtual Sites with dual CEs, a common choice, effectively doubles the number of required tunnels from each CE when compared to the default clustering setup; the arithmetic is sketched below. However, despite this increase in tunnels, opting for a Mesh configuration with single-node clusters can offer advantages in terms of performance and load distribution.
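To make the tunnel arithmetic concrete (S = number of sites, n = CEs per site; placeholder variables, not product terms):

tunnels per CE, single-node Virtual Sites: n x (S - 1) to remote CEs, plus (n - 1) to local peers for a full mesh
tunnels per CE, default clustering: roughly S - 1, one tunnel per remote cluster

For the 10-site, dual-CE example above: 2 x 9 = 18 tunnels per CE to remote sites, or 19 with the local peer, versus 9 per CE with default 3-node clusters.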
Note: Use DC Cluster Groups as an alternative to Secure Mesh Groups for CE connectivity: For customers with existing private connectivity between their CE nodes, running a Site Mesh Group (SMG) with numerous IPsec tunnels can be less optimal. As a more scalable alternative for these customers, we recommend using a DC Cluster Group (DCG). This method utilizes IP-in-IP tunnels over the existing private network, eliminating the need for individual encrypted IPsec tunnels between each pair of nodes and streamlining communication between CE nodes via IP-in-IP encapsulation.

Configuration Steps

The configuration for creating single-node clusters involves the following steps: Creating a Label. Creating a Virtual Site. Applying the label to the CE nodes (sites). Reviewing and validating the configuration. The detailed configuration guide for the above steps can be found here: https://docs.cloud.f5.com/docs/how-to/fleets-vsites/create-virtual-site

Example Configuration: In this example, you can create a label called "my-vsite" to group CE nodes that belong to the same Virtual Site. Within this label, you can then define different values to represent different environments or clusters, such as a specific Azure region or an on-premises data center. Then a Virtual Site of "CE" type can be created to represent the CE cluster in "Azure-AustraliaEast-vSite" and tied to any CE that is tagged with the label "my-vsite=Azure-AustraliaEast-vSite":
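In the Console, the Virtual Site's site selector is a Kubernetes-style label expression. For the example above it would look roughly like this (a sketch; verify against the selector your Console shows):

my-vsite in (Azure-AustraliaEast-vSite)

Any CE site carrying the label my-vsite=Azure-AustraliaEast-vSite matches the expression and is pulled into the Virtual Site.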
Note: By default, CEs can communicate with each other via the F5 Global Network. This can be customized to use direct connectivity through tunnels if the CEs are members of the same DC Cluster Group (IP-n-IP tunneling) or Secure Mesh Groups (IPSec tunneling). The following picture shows the traffic flow via F5 Global Network. The following picture shows the traffic flow via the IP-n-IP tunnel when a DC Clustering Group (DCG) is configured across the CE nodes. Failover Behaviour When a CE node is tied to a Virtual Site, all internal Load Balancers (VIPs) advertised on that Virtual Site will be deployed in the CE. Additionally, the Regional Edge (RE) begins to use this node as one of the potential next hops for connections to the origin pool. Should the CE become unavailable, or if it lacks the necessary network access to the origin server, the RE will almost seamlessly reroute connections through the other operational CEs in the Virtual Site. Uncontrolled Failover: During instances of uncontrolled failover, such as when a node is unexpectedly shut down from the hypervisor, we have observed a handful of new connections experiencing timeouts. However, these issues were resolved by implementing health checks within the origin pool, which prevented any subsequent connection drops. Note: Irrespective of the clustering model in use, it's always recommended to configure health checks for the origin pool. This practice enhances failover responsiveness and mitigates any additional latency incurred during traffic rerouting. Controlled Failover: The moment a CE node is disassociated from the Virtual Site — by the removal of its label— the CE node will not be used by RE to connect to origin pools anymore. At the same time, all Load Balancer listeners associated with that Virtual Site are withdrawn from the node. This effectively halts traffic processing for those applications, preventing the node from receiving related traffic. During controlled failover scenarios, we have observed seamless service continuity on externally advertised services (to REs). On-Demand Scaling: F5 Distributed Cloud provides a flexible solution that enables customers to scale the number of active CE nodes according to demand. This allows you to easily add more powerful CE nodes during peak periods (such as promotional events) and then remove them when demand subsides. With the Virtual Sites method, you can even mix and match node sizes within your cluster (Virtual Site), providing granular control over resources. It's advisable to monitor CE node performance and implement node related alerts. These alerts notify you when nodes are operating at high capacity, allowing for timely addition of extra nodes as needed. Moreover, you can monitor node’s health in the dashboard. CPU, Memory and Disk utilizations of nodes can be a good factor in determining if more nodes are needed or not. Furthermore, the use of Virtual Sites makes managing this process even easier, thanks to labels. Node Based Alerts: Node-based alerts are essential for maintaining efficient CE operations. Accessing the alerts in the Console: To view alerts, go to Multi-Cloud Network Connect > Notifications > Alerts. Here, you can see both "Active Alerts" and "All Alerts." Alerts related to node health fall under the "infrastructure" alert group. The following screenshot shows alerts indicating high loads on the nodes. Configuring Alert Policies: Alert policies determine the notification process for raised alerts. 
To set up an alert policy, navigate to Multi-Cloud Network Connect > Alerts Management > Alert Policies. An alert policy consists of two main elements: the alert receiver configuration and the policy rules.

Configuring Alert Receiver: The configuration allows for integration with platforms like Slack and PagerDuty, among others, facilitating notifications through commonly used channels.

Configuring Alert Rules: For alert selection, we recommend configuring notifications for alerts with a severity of "Major" or "Critical" at a minimum. Alternatively, the "infrastructure" group, which includes node-based alerts, can be selected.

Comparison Table

| Criteria | Default Cluster | Single Node HA |
| --- | --- | --- |
| Minimum number of nodes in HA | 3 | 2 |
| Upgrade operations | Per cluster | Per node |
| Network redundancy and client-side routing for east-west traffic | VRRP, BGP, DNS, L4/7 LB | DNS, L4/7 LB, BGP* |
| Tunnels to RE | 2 tunnels per cluster | 2 tunnels per node |
| Tunnels to other CEs (SMG or DCG) | 1 tunnel from each cluster | 1 tunnel from each node |
| External traffic processing | Limited to 2 nodes | All nodes active |
| Internal traffic processing | All nodes can be active | All nodes can be active |
| Scale management in Public Cloud Sites | Straightforward, by configuring ingress interfaces in Azure/AWS/GCP sites | Straightforward, by adding or removing labels |
| Scale management in Secure Mesh Sites | Requires reconfiguring the cluster (secure mesh site); may cause interruption | Straightforward, by adding or removing labels |
| Custom VIP IP | Available | Not available (planned for a future release); workaround available |
| Node sizes | All nodes must be the same size; upgrading node size in a cluster is a disruptive operation | Any node sizes or clusters can join the Virtual Site |

* When using BGP, please note a current limitation that prevents configuring a custom VIP address on the Virtual Site.

Conclusion: F5 Distributed Cloud offers a flexible approach to High Availability (HA) across CE nodes, allowing customers to select the redundancy model that best fits their specific use cases and requirements. While we continue to advocate the default clustering approach for its operational simplicity, shared VRRP VIP, and unified network configuration benefits, especially for routine tasks like upgrades, the Virtual Site single-node HA model presents some great use cases. It not only addresses the limitations and challenges of the default clustering model, but also introduces a solution that is both scalable and adaptable. While Virtual Sites offer their own benefits, we recognize they also present trade-offs; the overall benefits, particularly for scenarios demanding high ingress (RE to CE) throughput and controlled failover capabilities, cater to specific customer demands. The F5 product and development team remains committed to addressing the limitations of both default clustering and Virtual Sites discussed throughout this article. Their focus is on continuous improvement and finding the solutions that best serve our customers' needs.
Comparison Table

Criteria | Default Cluster | Single Node HA
Minimum number of nodes in HA | 3 | 2
Upgrade operations | Per cluster | Per node
Network redundancy and client-side routing for east-west traffic | VRRP, BGP, DNS, L4/7 LB | DNS, L4/7 LB, BGP*
Tunnels to RE | 2 tunnels per cluster | 2 tunnels per node
Tunnels to other CEs (SMG or DCG) | 1 tunnel from each cluster | 1 tunnel from each node
External traffic processing | Limited to 2 nodes | All nodes active
Internal traffic processing | All nodes can be active | All nodes can be active
Scale management in Public Cloud Sites | Straightforward: configure ingress interfaces in Azure/AWS/GCP sites | Straightforward: add or remove labels
Scale management in Secure Mesh Sites | Requires reconfiguring the cluster (secure mesh site); may cause interruption | Straightforward: add or remove labels
Custom VIP IP | Available | Not available (planned for a future release); workaround available
Node sizes | All nodes must be the same size; upgrading node size in a cluster is a disruptive operation | Any node sizes or clusters can join the Virtual Site

* When using BGP, please note a current limitation that prevents configuring a custom VIP address on the Virtual Site.

Conclusion: F5 Distributed Cloud offers a flexible approach to High Availability (HA) across CE nodes, allowing customers to select the redundancy model that best fits their specific use cases and requirements. While we continue to advocate the default clustering approach for its operational simplicity and unified network configuration benefits (such as the shared VRRP VIP), especially for routine tasks like upgrades, the Virtual Site and single-node HA model presents some great use cases: it not only addresses the limitations and challenges of the default clustering model, but also introduces a solution that is both scalable and adaptable. While Virtual Sites offer clear benefits, particularly for scenarios demanding high ingress (RE to CE) throughput and controlled failover capabilities, we recognize they also present trade-offs. The F5 product and development team remains committed to addressing the limitations of both default clustering and Virtual Sites discussed throughout this article, with a focus on continuous improvement and finding the solutions that best serve our customers' needs.

References and Additional Links:

- Default Clustering model: https://docs.cloud.f5.com/docs/ves-concepts/site#cluster-of-nodes
- Configuration guide for Virtual Sites: https://docs.cloud.f5.com/docs/how-to/fleets-vsites/create-virtual-site
- Routing Options for CEs: https://community.f5.com/kb/technicalarticles/f5-distributed-cloud---customer-edge-site---deployment--routing-options/319435
- Configuration guide for DC Clustering Group: https://docs.cloud.f5.com/docs/how-to/advanced-networking/configure-dc-cluster-group
F5 Distributed Cloud Security Service Insertion With BIG-IP Advanced WAF

In this article we will show you how to quickly deploy and operate external services of your choice across multiple public clouds. For this article I will select the BIG-IP Advanced WAF (PAYG); future articles will cover additional solutions.

Co-Author: Anitha Mareedu, Sr. Security Engineer, F5

Introduction

F5's Distributed Cloud Security Service Insertion solution allows enterprises to deploy and operate external services of their choice across multiple public clouds. Let's start with a real-world customer example. The enterprise has standardized on an external firewall in its private data center, and the network and security teams are very familiar with BIG-IP AWAF. They want to deploy the same security firewall solution in the public cloud that they use in the private data center. The requirements are:

- a simple operational model to deploy these services
- a unified security policy
- consistency across different clouds
- simple deployments
- unified logging

Challenges

Customers have identified several challenges in moving to the cloud. First, teams that are very familiar with supporting services in their private data center usually lack the expertise to design, deploy, and support those services in public clouds. If the same team is then tasked with deploying to multiple clouds, the gap widens: terminology, architecture, tools, and constructs are all unique to each cloud. Second, the operational models differ across clouds: in AWS you use a VPC or a Transit Gateway (TGW), in Azure you use a VNET, and Google has VPCs.

Solution Description

Let's look at how F5's Distributed Cloud Security Service Insertion solution helps simplify and unify security service deployments in multi-cloud and hybrid-cloud environments:

- Infrastructure as code: Implementation and policy configuration can be automated and run as infrastructure as code across clouds and regions, allowing policies to be repeated in any major public or private cloud.
- Easy setup and management: Simplified setup and management extend across AWS, Azure, and other clouds, as the F5 Distributed Cloud Platform supports AWS Transit Gateway, virtual network peering in Azure, and the use of VPC attachments.
- Define once, replicate anywhere: No extra handcrafting is needed for consistent, straightforward operations and deployment.
- Unified traffic steering rules: With the Distributed Cloud Platform, traffic is rerouted from networks through the security service using the same steering rules across different public and private clouds. Using F5 Distributed Cloud Console, IT pros get granular visibility and single-pane-of-glass management of traffic across clouds and networks.
- Optional policy deployment routes: Policies can be deployed at the network layer (using IP addresses), at the application layer (using APIs), or both.

Diagram

Step by Step Process

This walkthrough assumes you already have an AWS VPC deployed. Have your VPC ID handy.
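If you do not have the VPC ID at hand, you can retrieve it with the AWS CLI. A minimal sketch, assuming your VPC carries a Name tag of my-app-vpc (the tag value is illustrative):

# Look up the VPC ID by its Name tag; prints a single vpc-xxxxxxxx value.
aws ec2 describe-vpcs \
  --filters "Name=tag:Name,Values=my-app-vpc" \
  --query "Vpcs[0].VpcId" --output text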
Log into the F5 Distributed Cloud Console. You are presented with the dashboard, where you choose which deployment option to work with; we will be working with Cloud and Edge Sites.

Select Cloud and Edge Sites > Manage > Site Management > AWS TGW Sites, then click Add AWS TGW Site.

- Under Metadata, give your TGW site a name, label, and description.
- Click Configure under AWS Configuration. This brings up the Services VPC Configuration page.
- Select your AWS region.
- Leave Services VPC set to New; let the system generate a name or choose your own, and supply the primary CIDR block you want to assign to the VPC.
- Leave Transit Gateway set to New TGW.
- Leave BGP set to Automatic.
- Under Site Node Parameters, Ingress/Egress, select "Add Item".
- Move the slider in the upper-right corner to Show Advanced Fields.
- Fill in the required configuration (the AWS Availability Zone name and the CIDR blocks for each of the subnets; you can let the system auto-generate these or assign the desired ranges) and click "Add Item".

This takes you back to the previous screen, where you need to either create or select your cloud credentials. These are programmatic access credentials allowing API access. Click Apply.

Next, connect your existing VPC to the Services VPC you are creating (have your VPC ID available):

- Click Configure under VPC attachments.
- Click Add Item and supply the VPC ID.
- Click Apply.

This takes you back once again to the AWS TGW Site screen. Finish by clicking Save and Exit, then click Apply in the UI. Your new Services VPC is now deploying via Terraform. While it deploys, we will move on to the External Services.

Select Manage > Site Management > External Services > Add External Service.

- Give your service a name, label, and description.
- Click "Configure" under Select NFV Service Provider. For this article we select the F5 BIG-IP Advanced WAF (PAYG).
- Provide the admin username, the admin password, and the public SSH key you will use to access your BIG-IP deployment.
- Select the TGW site you created above.
- Click "Add Item" under Service Nodes. Enter a node name and the Availability Zone you wish to deploy into, then click "Add Item".
- Back on the main screen, enable HTTPS management of nodes and supply a delegated domain that will be used to issue a certificate.
- Under Select Service Type, keep the inside VIP at Automatic and set the outside VIP to "Advertise On Outside Network".
- Finally, click "Save and Exit".

At the end, the external security service is deployed, and you are taken to the list of all External Services. Click the name of the external service you deployed to expand its details. From this screen you can access several items; the two worth calling out are the TGW site stats and the BIG-IP itself, reachable via the Management Dashboard URL.

Click the TGW site you deployed under Site. Here you can see fine-grained stats under the tabs: System Metrics, Application Metrics, Site Status, Nodes, Interfaces, Alerts, Requests, Top Talkers, Connections, TGW Flow Tables, DHCP Status, Objects, and Tools.

Going back, click the hyperlink to the BIG-IP if you wish to review its configuration. F5 Distributed Cloud Service Insertion automatically configured your BIG-IP with the following:

- Interfaces
- Self IPs
- Routes
- Management access and credentials
- VLANs
- IPoIP tunnel between the service insertion site and the BIG-IP
- VIP

Two items still need to be configured manually. The first is the AWAF policy: SecOps can access the familiar BIG-IP UI using the management link provided in F5 Distributed Cloud Console and set up and configure AWAF policies there (a quick tmsh sanity check and an optional policy-as-code sketch follow below). The second is a traffic steering policy, covered next.
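Before configuring the policy, you can confirm the auto-provisioned objects from the BIG-IP command line. A minimal, read-only check using standard tmsh list commands:

tmsh list net vlan
tmsh list net self
tmsh list net route
tmsh list net tunnels tunnel
tmsh list ltm virtual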
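If you would rather manage the AWAF policy as code than build it in the UI, recent BIG-IP versions support a declarative WAF policy format in JSON that can be imported through the policy import workflow. A minimal sketch; the policy name is illustrative, and the available templates and fields depend on your BIG-IP version:

{
  "policy": {
    "name": "service_insertion_waf",
    "description": "Baseline blocking policy for service insertion",
    "template": { "name": "POLICY_TEMPLATE_RATING_BASED" },
    "enforcementMode": "blocking",
    "protocolIndependent": true,
    "signature-settings": { "signatureStaging": false }
  }
}

Once imported and attached to the virtual server that Service Insertion created, traffic steered through the BIG-IP is enforced in blocking mode.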
Define a Traffic Steering Policy: back in the Distributed Cloud Console, define how traffic is diverted to the BIG-IP. Two options are available:

- a network traffic policy, to steer traffic at the network (L3/L4) layer
- a service policy, to steer traffic at the app (L7) layer

The steering controls available at each level are:

- Network level: IP address, port, etc.
- App level: API, method, etc.

At the end of this step, you can see traffic being diverted to the BIG-IP and inspected by it.

Summary

As you can see, F5 Distributed Cloud Security Service Insertion dramatically reduces the operational complexity of deploying external services in public clouds, greatly enhances your security posture, and vastly improves productivity for operations teams such as NetOps, SecOps, and DevOps.