global availability: 3 topics

DNS (GTM) best practice for DR
Hi, I need to set up DR based on the DNS module. After reading a few posts and docs, all I know is that there are plenty of approaches that can be implemented. I have little experience with the DNS module, so I would appreciate any advice on what the optimal solution would be.

Scenario:

- Two data centers: DC1 (main) and DR1 (used only when resources in DC1 are not available)
- Each DC uses non-overlapping subnet ranges
- The DCs are connected via an internal private L2/L3 link
- All DNS queries will come only from devices inside the DCs
- In each data center, a single BIG-IP DNS device
- In each DC, one host (let's call it Main) requiring DNS resolution for resources it has to access
- In each DC, eight hosts (let's call them Slaves) with separate IPs and FQDNs; these are not LTMs but standard servers, defined as Generic Host type (monitored via an HTTP monitor)
- The DNS device should perform DNS resolution of the Slaves' FQDNs for the Main host

DR rules:

- If any Slave in DC1 is down, the DNS request should be resolved to the IP of any working Slave
- If all Slaves in DC1 are down, the DNS request should be resolved to the IP of any Slave in DR1

What would be the best approach? As far as I understand, the Global Availability method should be used, but at which level:

- Pool
- Wide IP

Is it better to create one pool with members from both DCs, or separate pools (one per DC), each containing members from the respective DC?

Now, how do I handle the condition of returning the IP of an active Slave inside one DC? I guess I need to create as many wide IPs as there are Slaves (8): slave1.vip.site.com, slave2.vip.site.com, ..., slave8.vip.site.com, or rather one wildcard wide IP: *.vip.site.com?

Then how do I return the IP of another active Slave when the Slave for which the DNS request was made is down (HA inside the DC)?

Piotr

Declaring disaster when using a BIG-IP DNS Controller driven disaster recovery data center transition
I'm re-asking this question because we still don't have a solution, and I'm hoping that potential answers may have been missed on the first ask. It just "feels" to me like someone who is more experienced with BIG-IP DNS Controllers (GTMs) would know a way to do this. The original posting is here.

In our organization, we're planning on using our GTMs to control disaster recovery. That is, we have a backup data center, which is cold, but we want to spin it up in case of a disaster, and only after it's ready, "flip a switch" to declare a disaster; at that point, all GTM-managed names should start returning the alternate data center's IPs. Up until that switch flip, all wide IPs should continue to return their original data center's IP values (or, potentially, return nothing at all, failing to resolve).

My question is: what are the general recommendations for implementing this manual "switch"? What have people done in terms of creating a manually controlled construct that drives the GTM's logic for declaring disaster and affects a broad array of wide IPs managed by the GTMs? (Sorry, "BIG-IP DNS".)

It was suggested to simply use Global Availability, but that doesn't quite fit, because we don't want the alternate data center's IPs returned by the wide IPs until after this "switch" is thrown. Is there some capability at the data center construct level to effect this behavior? Via distributed applications? I'm hoping for some built-in configurable capability, without having to do significant iRule coding or iControl scripting. At that point, it may simply be easier to manage it all via short TTLs and manual imports to our primary DNS servers (which are not GTMs).

Thank you for any help!

Accelerate Your Initiatives: Secure & Scale Hybrid Cloud Apps on F5 BIG-IP & Distributed Cloud DNS
It's rare now to find an application that runs exclusively in one homogeneous environment. Users are now global, and enterprises must support applications that are always on and available. These applications must also scale to meet demand while continuing to run efficiently, continuously delivering a positive user experience at minimal cost.

Introduction

In F5's 2024 State of Application Strategy Report, hybrid and multicloud deployments are pervasive. With the need for flexibility and resilience, most businesses will deploy applications that span multiple clouds and use complex hybrid environments. In the following solution, we walk through how an organization can expand and scale an application that has matured and now needs to be highly available to internal users while also being accessible to external partners and customers at scale. Enterprises using different form factors, such as F5 BIG-IP TMOS and F5 Distributed Cloud, can quickly right-size and scale legacy and modern applications that were originally available only in an on-prem data center.

Secure & Scale Applications

Let's consider the following example. Bookinfo is an enterprise application running in an on-prem data center that only internal employees use. This application provides product information and details that the business' users access from an on-site call center in another building on the campus. To secure the application and make it highly available, the enterprise has deployed an F5 BIG-IP TMOS device in front of each endpoint. An endpoint is the combination of an IP, port, and service URL. In this scenario, our app has endpoints for the frontend product page and for backend resources that only the product page pulls from. Internal on-prem users access the app with internal DNS on BIG-IP TMOS. GSLB on the device sends another class of internal users, who aren't on campus and access by VPN, to the public cloud frontend in AWS.
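Steering of this kind is commonly expressed on BIG-IP DNS with per-location pools, topology records, and a wide IP. The tmsh sketch below is illustrative only and not the article's actual configuration: every object name, address, the VPN client subnet, and the product-page hostname are hypothetical.

```shell
# Sketch only: names, addresses, and subnets are hypothetical.
# One GSLB pool per location for the product page.
tmsh create gtm pool a pool-onprem members add { bigip-onprem:/Common/productpage-vs }
tmsh create gtm pool a pool-aws members add { aws-site:/Common/productpage-aws }

# Topology records: VPN clients (off campus) prefer the AWS pool,
# campus clients prefer the on-prem pool.
tmsh create gtm topology ldns: subnet 10.200.0.0/16 server: pool pool-aws score 100
tmsh create gtm topology ldns: subnet 10.1.0.0/16 server: pool pool-onprem score 100

# Wide IP that selects a pool by topology.
tmsh create gtm wideip a productpage.on-prem.f5-cloud-demo.com \
    pool-lb-mode topology pools add { pool-onprem pool-aws }
```

With records like these, which pool answers a query depends on the LDNS subnet the query arrives from, so campus and VPN users can receive different frontends for the same name.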
The frontend that runs in AWS can scale with demand, allowing it to expand as needed to serve an influx of external users. Both internal users who are off campus and external users will now always connect to the frontend in AWS through the F5 Global Network and Regional Edges with Distributed Cloud DNS and App Connect.

With the frontend for the app enabled in AWS, it now needs to pull data from backend services that still run on-prem. Expanding the frontend requires additional connectivity, and to provide it we first deploy an F5 Distributed Cloud Customer Edge (CE) in the on-prem data center. The CE connects to the F5 Global Network, and it extends Distributed Cloud services, such as DNS and Service Discovery, WAF, API Security, DDoS, and bot protection, to apps running on BIG-IP. These protections not only secure the app but also help reduce unnecessary traffic to the on-prem data center.

With Distributed Cloud connecting the public cloud and the on-prem data center, Service Discovery is configured on the on-prem CE. This makes a catalog of apps (virtual servers) on the BIG-IP available to Distributed Cloud App Connect. Using App Connect with managed DNS, Distributed Cloud automatically creates the fully qualified domain name (FQDN) for external users to access the app publicly, and it uses Service Discovery to make the backend services running on the BIG-IP available to the frontend in AWS.

Of the virtual servers running on BIG-IP, two, "details" and "reviews," need to be made available to the frontend in AWS while continuing to work for the frontend that's on-prem. To make the virtual servers on BIG-IP available as upstream servers in App Connect, all that's needed is to click "Add HTTP Load Balancer" directly from the Discovered Services menu. To make the details and reviews services that are on-prem available to the frontend product page in AWS, we advertise each of their virtual servers on BIG-IP to only the CE running in AWS.
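For context, the two advertised services could correspond to ordinary LTM virtual servers like the following sketch. This is not the demo's actual configuration: the addresses, ports, and pool names are hypothetical (port 9080 is simply Bookinfo's conventional service port).

```shell
# Illustrative LTM objects for the two on-prem services that Service
# Discovery catalogs and App Connect advertises to the CE in AWS.
# All names, addresses, and ports are hypothetical.
tmsh create ltm pool details-pool members add { 10.1.20.31:9080 } monitor http
tmsh create ltm pool reviews-pool members add { 10.1.20.32:9080 } monitor http
tmsh create ltm virtual details-vs destination 10.1.30.11:80 \
    ip-protocol tcp profiles add { http } pool details-pool
tmsh create ltm virtual reviews-vs destination 10.1.30.12:80 \
    ip-protocol tcp profiles add { http } pool reviews-pool
```

Service Discovery reads objects like these from the BIG-IP, which is why no virtual IPs or ports need to be typed into App Connect by hand.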
The menu below makes this possible with only a few clicks, as Service Discovery eliminates the need to find the virtual IP and port for each virtual server. Because the CE in AWS runs within Kubernetes, the name of the new service being advertised is recognized by the frontend product page and is automatically handled by the CE. This creates a split-DNS situation in which an internal client can resolve and access both the internal on-prem and external AWS versions of the app. The subdomain "external.f5-cloud-demo.com" is now resolved by Distributed Cloud DNS, and "on-prem.f5-cloud-demo.com" is resolved by the BIG-IP. When combined with GSLB, internal users who aren't on campus and use a VPN will be redirected to the external version of the app.

Demo

The following video explains this solution in greater detail, showing how to configure connectivity to each service the app uses, as well as how the app looks to internal and external users. (Note: it looks and works identically! Just the way it should be, and with minimal time needed to configure it.)

Key Takeaways

BIG-IP TMOS has long delivered best-in-class service with high availability and scale for enterprise and complex applications. Integrated with Distributed Cloud, application services can be freely expanded and migrated regardless of the deployment model (on-prem, cloud, or edge). This combination leverages cloud environments for extreme scale and global availability while freeing up on-prem resources that would otherwise be needed to scrub and sanitize traffic.

Conclusion

Using the BIG-IP platform with Distributed Cloud services addresses key challenges that enterprises face today, whether that's making internal apps available globally to workforces in multiple regions or scaling services without purchasing more fixed-cost on-prem resources. F5 has the products to unlock your enterprise's growth potential while keeping resources nimble.
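As a quick client-side sanity check of the split-DNS behavior described in this solution, each subdomain can be queried separately. The hostname and the BIG-IP listener address below are hypothetical; only the two subdomains come from the demo.

```shell
# External path: the name under external.f5-cloud-demo.com is resolved
# by Distributed Cloud DNS via the client's normal resolvers.
dig +short productpage.external.f5-cloud-demo.com

# Internal path: query the on-prem BIG-IP DNS listener directly
# (listener address is hypothetical).
dig +short productpage.on-prem.f5-cloud-demo.com @10.1.1.53
```

Each query should return the address of the corresponding frontend; internal clients can resolve both names, while external clients only resolve the external one.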
Check out the select resources below to explore more about the products and services featured in this solution.

Additional Resources

- Solution Overview: Distributed Cloud DNS
- Solution Overview: One DNS – Four Expressions
- Interactive Demo: Distributed Cloud DNS at F5
- DevCentral: The Power of &: F5 Hybrid DNS solution
- F5 Hybrid Security Architectures: One WAF Engine, Total Flexibility