gslb
17 Topics

Using Distributed Cloud DNS Load Balancer with Geo-Proximity and failover scenarios
Introduction

To have high-performance, responsive apps available on the Internet, you need a cloud DNS that is both scalable and operates at a global level to effectively connect users to the nearest point of presence. The F5 Distributed Cloud DNS Load Balancer combines the best features of GSLB DNS to enable the delivery of hybrid and multi-cloud applications, with compute positioned right at the edge, closest to users.

With Global Server Load Balancing (GSLB) features available in a cloud-based SaaS format, the Distributed Cloud DNS Load Balancer has a number of distinct advantages:

- Speed and simplicity: Integrates with DevOps pipelines, with an automation focus and a rich, intuitive user interface
- Flexibility and scale: Global auto-scale keeps up with demand as the number of apps increases and traffic patterns change
- Security: Built-in DDoS protection, automatic failover, and DNSSEC features help ensure your apps are effectively protected
- Disaster recovery: With automatic detection of site failures, apps dynamically fail over to individual recovery-designated locations without intervention

Adding user-location proximity policies to DNS load balancing rules allows users to be steered to specific instances of an app. This not only improves the overall experience, it safeguards data by effectively siloing user data and keeping it region-specific. In the case of disaster recovery, catch-all rules can be created to send users to alternate destinations where restrictions on data don't apply.

Integrated Solution

This solution uses a cloud-based Distributed Cloud DNS to load balance traffic to VIPs that connect to region-specific pools of servers. When data privacy isn't a requirement, catch-all rules can further distribute traffic should a preferred pool of origin servers become unhealthy or unreachable. This solution covers the following DNS LB scenarios:

- Geo-IP proximity
- Active/standby failover within a region
- Disaster recovery for manually activated failovers
- Autonomous System Number (ASN) lists
- Fallback pool for automated failovers

The configuration for this solution assumes the following:

- The app is in multiple regions
- Users are from different regions
- Distributed Cloud hosts/manages/is delegated the DNS domain or subdomain (optional)
- Failover to another region is allowed

Prerequisite Steps

Distributed Cloud must be providing primary DNS for the domain. Your domain must be registered with a public domain name registrar with the nameservers ns1.f5clouddns.com and ns2.f5clouddns.com. F5 XC automatically validates the domain registration when configured to be the primary nameserver.

Navigate to DNS Management > domain > Manage Configuration > Edit Configuration > DNS Zone Configuration: Primary DNS Configuration > Edit Configuration. Select "Add Item", with Record Set type "DNS Load Balancer". Enter the Record Name and then select "Add Item" to create a new load balancer record. This opens the submenu to create DNS Load Balancer rules.

DNS LB for Geo-Proximity

Name the rule "app-dns-rule" then go to Load Balancing Rules > Configure. Select "Add Item", then under the Load Balancing Rule, within the default Geo Location Selection, expand the "Selector Expression" and select "geoip.ves.io/continent". Select Operator "In" and then the value "EU". Click Apply. Under the Action "Use DNS Load Balancer pool", click "Add Item". Name the pool "eu-pool", and under Pool Type (A) > Pool Members, click "Add Item". Enter a "Public IP", then click "Apply".
Repeat this process to add a second IP Endpoint to the pool. Scroll down to Load Balancing Method and select "Static-Persist". Now click Continue, then Apply to the Load Balancing Rule, and then "Add Item" to add a second rule.

In the new rule, choose the Geo Location Selection value "Geo Location Set selector", and use the default "system/global-users". Click "Add Item". Name this new pool "global-pool", then select "Add Item" with the following pool member: 54.208.44.177. Change the Load Balancing Mode to "Static-Persist", then click Continue, and then "Continue" again.

Now set the Load Balancing Rule Score to 90. This allows the first load balancing rule, specific to EU users, to be returned as the only answer for users of that region unless the regional servers are unhealthy.

Note: The rule with the highest score is returned. When two or more rules match and have the same score, answers for each rule are returned. Although there are legitimate reasons for doing this, matching more than one rule with the same score may produce an unanticipated outcome.

Now click "Apply", "Apply", and "Continue". Click the final "Apply" to create the new DNS Zone Resource Record Set. Now click "Apply" on the DNS Zone configuration to commit the new Resource Record. Click "Save and Exit" to finalize everything and complete the DNS Zone configuration.

To view the status of the services that were just created, navigate to DNS Management > Overview > DNS Load Balancers > app-dns-rule. Clicking on the pool "eu-pool", you can find the status of each individual IP endpoint, showing the overall health of each configured pool's service.

With the DNS Load Balancing rule configured to connect two separate regions, when one of the primary sites in the eu-pool goes down, users will instead be directed to the global-pool. This provides reliability in the context of site failover that spans regions. If data privacy is also a requirement, additional rules can be configured to support more sites in the same region.

DNS LB for Active-Passive Sites

In the previous scenario, two members are configured to be equally active for a single location. We can change the weight of the pool members so that only one of the two is used, with the other taking over when the first is unhealthy or disabled. This creates a backup/passive scenario within a region.

Navigate to DNS Load Balancer Management > DNS Load Balancers. Go to the service name "app-dns-rule", then under Actions, select Manage Configuration. Click Edit Configuration for the DNS rule. Go to the Load Balancing Rules section, and Edit Configuration. On the Load Balancing Rules order menu, go to Actions > Edit for the eu-pool Rule Action. In the Load Balancing Rule menu for eu-pool, under the section Action, click Edit Configuration. In the rule for eu-pool, under Pool Type (A) > Pool Members, click the Edit action. In the IP Endpoint section, change the Load Balancing Priority to 1, then click Apply. Change the Load Balancing Mode to Priority, then exit and save all changes by clicking Continue, Apply, Apply, and then Save and Exit.

DNS LB for Disaster Recovery

Unlike backup/standby, where failover can happen automatically depending on the status of a service's health, disaster recovery (DR) can either happen automatically or be configured to require manual intervention. In the following two scenarios, I'll show how to configure manual DR failover within a region, and also how to manually fail over outside the region.
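As we work through these failover scenarios, it's useful to have a quick way to confirm which pool is actually being returned at each step. Querying the F5 XC nameserver directly with dig works well for this; a minimal check, assuming the record www.f5-cloud-demo.com used in the examples later in this article:

$ dig @ns1.f5clouddns.com www.f5-cloud-demo.com A +short
54.208.44.177

The +short output lists just the A records in the answer, so you can see at a glance whether eu-pool or global-pool members are being handed out; here, the global-pool member is returned because the query originated outside the EU.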
To support east/west manual DR failover within the EU region, use the steps above to create a new Load Balancing Rule with the same label selector as the EU rule (eu-pool) above, then create a new DNS LB pool (name it something like eu-dr-pool) and add the newly designated DR IP pool endpoints. Change the DR Load Balancing Rule Score to 80, and then click Apply. On the Load Balancing Rules page, change the order of the rules and confirm that the scores align to the following image, then click Apply, and then Save and Exit.

In the previous active/standby scenario, the Global rule functions as a backup for EU users when all sites in the EU are down. To force a non-regional failover, you can change F5 XC DNS to send all EU users to the Global DNS rule by disabling each of the two EU DNS rules above.

To disable the EU DNS rules, navigate to DNS Load Balancer Management > DNS Load Balancers, and then under Actions, select Manage Configuration. Click Edit Configuration for the DNS rule. Go to the Load Balancing Rules section, and Edit Configuration. On the Load Balancing Rules order menu, go to Actions > Edit for the eu-pool Rule Action. In the Load Balancing Rule menu for eu-pool, under the section Action, click Edit Configuration. In the top section labeled Metadata, check the box to Disable the rule. Then click Continue, Apply, Apply, and then Save and Exit.

With the EU DNS LB rules disabled, all requests in the EU region are served by the Global Pool. When it's time to restore regional services, all that's needed is to re-enter the configuration rule and uncheck the Disable box on each rule.

DNS LB with ASN Lists

ASN stands for Autonomous System Number. It is a unique identifier assigned to networks on the internet that operate under a single administration or entity. By mapping IP addresses to their corresponding ASN, DNS LB administrators can manage some traffic more effectively.

To configure Distributed Cloud DNS LB to use ASN lists, navigate to DNS Management > DNS Load Balancer Management, then "Manage Configuration" for a DNS LB service. Choose "Add Item", and on the next page, select "ASN List"; enter one or more ASNs that apply to this rule, select a DNS LB pool, and optionally configure the score (weight). When the same ASN exists in multiple DNS LB rules, the rule having the highest score is used.

Note: F5 XC uses ASPlain (4-byte) formatted AS numbers. Multiple numbers are configured one per item line.

DNS LB with IP Prefix Lists and IP Prefix Sets

Intermediate DNS servers are almost always involved in server name resolution. By default, DNS LB doesn't see the originating IP address or subnet prefix of the client making the DNS request. To improve the effectiveness of DNS-based services like DNS LB by making more informed decisions about which server will be closest to the client, RFC 7871 proposes a solution using the EDNS0 field that allows intermediate DNS servers to add the client subnet (EDNS Client Subnet, or ECS) to the DNS request. The IP Prefix List and IP Prefix Set in F5 XC DNS are used when DNS requests contain the client subnet and the prefix falls within one of the prefixes defined in one or more DNS LB rule sets.

To configure an IP Prefix rule, navigate to DNS Management > DNS Load Balancer Management, then "Manage Configuration" of your DNS LB service. Now click "Edit Configuration" at the top left corner, then "Edit Configuration" in the section dedicated to Load Balancing Rules.
Inside the section for Load Balancing Rules, click "Add Item" and in the Client Selection box choose either "IP Prefix List" or "IP Prefix Sets" from the menu. For IP Prefix List, enter the IPv4 CIDR prefixes, one prefix per line. For IP Prefix Sets, you have the option of using a pre-existing set created in the Shared Configuration space of your tenant, or you can Add Item to create a completely new set.

Note: IP Prefix Sets are intended for much larger groups of IP CIDR block prefixes and are also used by additional features in F5 XC, such as L7 WAF and L3 Network Firewall access lists. IP Prefix Sets support the use of both IPv4 and IPv6 CIDR blocks.

In the following example, a request matching the configured IP Prefix rule with client subnet 192.168.1.0/24 gets an answer from our eu-dr-pool (1.1.1.1). Meanwhile, a request that doesn't have a client subnet within the defined prefixes, and is also outside of the EU region, gets an answer from the pool global-poolx (54.208.44.177).

Command line:

; <<>> DiG 9.10.6 <<>> @ns1.f5clouddns.com www.f5-cloud-demo.com +subnet=1.2.3.0/24 in a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44218
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; CLIENT-SUBNET: 1.2.3.0/24/0
;; QUESTION SECTION:
;www.f5-cloud-demo.com.  IN  A

;; ANSWER SECTION:
www.f5-cloud-demo.com.  30  IN  A  54.208.44.177

;; Query time: 73 msec
;; SERVER: 107.162.234.197#53(107.162.234.197)
;; WHEN: Wed Jun 05 21:46:04 PDT 2024
;; MSG SIZE rcvd: 77

; <<>> DiG 9.10.6 <<>> @ns1.f5clouddns.com www.f5-cloud-demo.com +subnet=192.168.1.0/24 in a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48622
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; CLIENT-SUBNET: 192.168.1.0/24/0
;; QUESTION SECTION:
;www.f5-cloud-demo.com.  IN  A

;; ANSWER SECTION:
www.f5-cloud-demo.com.  30  IN  A  1.1.1.1

;; Query time: 79 msec
;; SERVER: 107.162.234.197#53(107.162.234.197)
;; WHEN: Wed Jun 05 21:46:50 PDT 2024
;; MSG SIZE rcvd: 77

Here we're able to see that a different answer is given based on the client subnet provided in the DNS request. Additional use cases apply as well. The ability to make a DNS LB decision using the client subnet improves the ability of the F5 XC nameservers to deliver an optimal response.

DNS LB Fallback Pool (Failsafe)

The scenarios above illustrate how to designate alternate pools, both regional and global, when an individual pool fails. However, in the event of a catastrophic failure that brings all service pools down, F5 XC provides one final mechanism: the fallback pool. Ideally, the fallback pool should be independent from all existing pool-related infrastructure and services, to deliver a truly failsafe service.

To configure the Fallback Pool, navigate to DNS Management > DNS Load Balancer Management, then "Manage Configuration" of your DNS LB service. Click "Edit Configuration", navigate to the "Fallback Pool" box, and choose an existing pool. If no qualified pool exists, the option is available to add a new pool. In my case, I've designated "global-poolx" as my failsafe fallback pool, which already functions as a regional backup.
Best practice for the fallback pool is to use a pool that isn't referenced elsewhere in the DNS LB configuration: one that exists on completely independent resources and isn't regionally bound.

DNS LB Health Checks and Observability

For the sake of simplicity, the scenarios above don't have DNS LB health checks configured, and it's assumed that each pool's IP members are always reachable and healthy. My next article shows how to configure health checks to enable automatic failovers and ensure that users always reach a working server.

Conclusion

Using the Distributed Cloud DNS Load Balancer enables better performance of your apps while also providing greater uptime. With scaling and security automatically built into the service, responding to large volumes of queries without manual intervention is seamless. Layers of security deliver protection and automatic failover. Built-in DDoS protection, DNSSEC, and more make the Distributed Cloud DNS Load Balancer an ideal do-it-all GSLB distributor for multi-cloud and hybrid apps. To see a walkthrough where I configure the first scenario above for Geo-IP proximity, watch the accompanying video.

Additional Resources

Next article: Using Distributed Cloud DNS Load Balancer health checks and DNS observability
More information about Distributed Cloud DNS Load Balancer: https://www.f5.com/cloud/products/dns-load-balancer
Product Documentation: DNS LB Product Documentation, DNS Zone Management
Multi-cluster Kubernetes/Openshift with GSLB-TOOL

Overview

This is article 1 of 2. GSLB-TOOL is an OSS project around the BIG-IP DNS (GTM) and F5 Cloud Services' DNS LB GSLB products that provides GSLB functionality to Openshift/Kubernetes. GSLB-TOOL is a multi-cluster enabler. Doing multi-cluster with GSLB has the following advantages:

- Cross-cloud. Services are published in a coordinated manner while being hosted in any public or private cloud.
- High degree of control. Publishing is done based on service name instead of IP address. Traffic is directed to a specific data center based on operational decisions, such as service load, also allowing canary, blue/green, and A/B deployments across data centers.
- Stickiness. Regardless of topology changes in the network, clients will be consistently directed to the same data center.
- IP Intelligence. Clients can be redirected to the desired data center based on the client's location, and stats can be gathered for analytics.

The use cases covered by GSLB-TOOL are:

- Multi-cluster deployments
- Data center load distribution
- Enhanced customer experience
- Advanced blue/green, A/B, and canary deployment options
- Disaster recovery
- Cluster migrations
- Kubernetes <-> Openshift migrations
- Container platform version migrations, for example OCP 3.x to 4.x, or OCP 4.x to 4.y

GSLB-TOOL is implemented as a set of Ansible scripts and roles that can be used from the CLI or from a Continuous Delivery tool such as Spinnaker or Argo CD. The tool operates as glue between the Kubernetes/Openshift API and the GSLB API. GSLB-TOOL uses GIT as the source of truth to store its state, hence the GSLB state is not kept in any specific cluster. The next figure shows a schema of it.

It is important to emphasize that GSLB-TOOL is cross-vendor as well, since it can use any Ingress Controller or Router implementation. In other words, it is not necessary to use BIG-IP or NGINX for this. Moreover, a given cluster can have several Router/Ingress Controller instances from different vendors. This is thanks to only using the Openshift/Kubernetes APIs when inquiring about the deployed container routes.

Usage

To better understand how GSLB-TOOL operates, it is important to note the following characteristics:

- GSLB-TOOL operates with project/namespace granularity, on a per-cluster basis.
- When operating on a cluster's project/namespace, it operates with all the L7 routes of that project/namespace at once.

For example, the following command:

$ project-retrieve shop onprem

will retrieve all the L7 routes of the namespace shop from the onprem cluster. Having cluster/namespace granularity simplifies management and mimics the behavior of RedHat's Cluster Application Migration tool.

In the next figure we can see the overall operations of GSLB-TOOL. At the top we can see in bold the names of the clusters (onprem and aws). In the figure these are only Openshift (aka OCP) clusters, but they could be any other Kubernetes as well. We can also see two sample projects/namespaces (Project A and Project B). Different clusters can have different namespaces as well.

There are two types of commands/actions:

- The project-* commands operate on the Kubernetes/Openshift API and on the source of truth/GIT repository. These commands operate with project/namespace granularity. GSLB-TOOL doesn't modify your Openshift/K8s cluster; it only performs read-only operations.
- The gslb-* commands operate on the source of truth/GIT repository and on the GSLB API of choice, either BIG-IP or F5 Cloud Services.
These commands operate on all the projects/namespaces of all clusters at once, either submitting or rolling back the changes in the GSLB backends. When GSLB-TOOL pushes the GSLB configuration, it either performs all the changes or doesn't perform any. Thanks to the use of GIT, the gslb-rollback command easily reverts the configuration if desired. Actually, creating the backup of the previous step is only useful when using GSLB-TOOL without GIT, which is possible too.

GSLB-TOOL flexibility

GSLB-TOOL has been designed with flexibility in mind. This is reflected in many of its features:

- It is agnostic of the Router/Ingress Controller implementation.
- In the same GSLB domain, it concurrently supports vanilla Kubernetes and Openshift clusters.
- It is possible to have multiple Routers/Ingress Controllers in the same Kubernetes/Openshift cluster.
- It is possible to define multiple Availability Zones for a given Router/Ingress Controller.
- It can be easily modified, given that it is written in Ansible. Furthermore, the Ansible playbooks make use of template files that can be modified if desired.
- Multiple GSLB backends. At present GSLB-TOOL can use either F5 Cloud Services' DNS LB (a SaaS offering) or F5 BIG-IP DNS (aka GTM) by simply changing the value of the backend configuration option to either f5aas or bigip. All operations, configuration files, etc. remain the same. At present F5 BIG-IP DNS is recommended because it currently offers better monitoring options.
- Ease of PoC. F5 Cloud Services' DNS LB can be used to test the tool, later switching to F5 BIG-IP DNS by simply changing the backend configuration option.

GSLB-TOOL L7 route inquire/config flexibility

It is especially important to have flexibility when configuring the L7 routes in our source of truth. For a given namespace, we might be interested in the following scenarios:

- Homogeneous L7 routes across clusters. Sometimes we expect all clusters to have the same L7 routes for a given namespace. This happens, for example, when all applications are the same in all clusters.
- Heterogeneous L7 routes across clusters. Sometimes we expect each cluster to have different L7 routes for a given namespace, when there are different versions of the applications (or different applications). This happens, for example, when we are testing new versions of the applications in one cluster while other clusters use the previous version.

To handle these scenarios, we have two strategies when populating the routes:

- project-retrieve: we use the information from the cluster's own Route/Ingress API to populate GSLB.
- project-populate: we use the information from another cluster's Route/Ingress API to populate GSLB. The cluster from which we take the L7 routes is referred to as the reference cluster.

We exemplify these strategies in the following figure, using a configuration of two clusters (onprem and aws) and a single project/namespace. The L7 routes (either Ingress or Route resources) in these clusters differ: the cluster onprem has two additional L7 routes (/shop and /checkout). We are going to populate our GSLB backend in three different ways.

In Example 1, we perform the following actions in sequence: with project-retrieve web onprem we retrieve from the cluster onprem the L7 routes of the project web, and these are stored in the Git repository or source of truth. Analogously, with project-retrieve web aws we retrieve from the cluster aws the L7 routes (only one in this case), and these are stored in the Git repository or source of truth.
We submit this configuration to the GSLB backend with gslb-commit. The GSLB backend then expects the onprem cluster to have 3 routes and the aws cluster 1 route. If the services are available, the health check results for both clusters will be green. Therefore, the FQDN will return the IP addresses of both clusters' Routers/Ingress Controllers.

In Example 2, we use the project-populate strategy. We perform the same first action as in Example 1. With project-populate web onprem aws we indicate that we expect the L7 routes defined in onprem to also be available in the aws cluster, which is not the case. In other words, the onprem cluster is used as the reference cluster for aws. After we submit the configuration to GSLB with gslb-commit, the health checks for the onprem cluster will succeed and will fail for aws, because /shop and /checkout don't exist there (an HTTP/404 is returned). Therefore, the FQDN www.f5bddemos.io will return only the IP address of onprem. This will turn green automatically once we update the L7 routes and applications in aws.

In Example 3, we use the project-populate strategy again, but with aws as the reference cluster. Unlike in the previous examples, with project-retrieve web aws we retrieve the routes from the cluster aws. With project-populate web aws onprem we do the reverse of step b in Example 2: we use aws as the reference for onprem instead. After submission of the config with gslb-commit, given that onprem has the L7 route that aws has, the health checking will succeed.

For the sake of simplicity, the examples above show projects/namespaces with only a single FQDN for all their L7 routes, but for a given namespace it is possible to have an arbitrary number of L7 routes and FQDNs. There is no limitation on this either.

Additional information

If you want to see GSLB-TOOL in practice, please check this video. For more information on this tool, please visit the GSLB-TOOL Home Page and its Wiki Page for additional documentation.
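As a compact recap of the workflow from Example 1, a full session from the CLI amounts to three commands (cluster and project names as in the figures above):

$ project-retrieve web onprem
$ project-retrieve web aws
$ gslb-commit

If the pushed state turns out to be wrong, gslb-rollback reverts the GSLB backend to the previous configuration stored in GIT.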
Load-balancing generic hosts between different datacenters

Hi all,

I have an F5 GTM load balancer sitting in my primary Denver (USA) data center. I have two VPN firewalls sitting in the Chennai (India) data center, where there is no F5 load balancer, hence there is no GSLB there. I would like to load balance these two VPN firewalls through the Denver F5 GTM. The public IPs are completely different between the data centers. The expectation is: end host ==> public DNS server ==> F5 GTM listener (Denver) ==> Chennai (India) data center (VPN firewall). Will this be doable?
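For reference, GTM can represent devices it doesn't manage as generic hosts, so a setup like the one described is usually expressed in tmsh along these lines. This is only a sketch: the chennai names and 203.0.113.x addresses are hypothetical placeholders, port 443 is assumed for the VPN service, and the exact pool/wideip syntax varies by TMOS version (v12+ adds a record type, e.g. "gtm pool a"):

$ tmsh create gtm datacenter chennai_dc
$ tmsh create gtm server chennai_fw1 datacenter chennai_dc product generic-host addresses add { 203.0.113.10 } virtual-servers add { vpn_vs1 { destination 203.0.113.10:443 } }
$ tmsh create gtm server chennai_fw2 datacenter chennai_dc product generic-host addresses add { 203.0.113.11 } virtual-servers add { vpn_vs2 { destination 203.0.113.11:443 } }
$ tmsh create gtm pool vpn_pool members add { chennai_fw1:vpn_vs1 chennai_fw2:vpn_vs2 }
$ tmsh create gtm wideip vpn.example.com pools add { vpn_pool }

With objects like these in place, the Denver listener can answer queries for the wide IP with whichever Chennai firewall the pool's LB method and monitors select.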
Adding LTM to GTM with different version

Hi Experts,

I am looking for a KB article that shows the prerequisites or considerations prior to running bigip_add on the GTM. Our goal is to use the GSLB functionality of our GTM. Our GTM is running version 11.6.1, and we will upgrade our LTM from 11.6.1 to 13.0. Could you let us know whether this is possible, or whether there is an issue with this setup?
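For context, the command that exchanges the iQuery SSL certificates and lets the GTM monitor the LTM is bigip_add, run from the GTM's shell (10.2.3.4 below stands in for the LTM's self IP):

$ bigip_add 10.2.3.4

The version caveat documented by F5 is that the big3d agent on every device must be at the same version or newer than the newest GTM in the sync group; since the GTM here stays at 11.6.1 while the LTM moves to 13.0, that combination is worth confirming against the current compatibility matrix before the upgrade.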
DNS to LTM (Server peering for GSLB)

GTM1 (one external self IP)
LTM1 (one external self IP, multiple internal self IPs)

I noticed that the HELP under DNS -> GSLB -> Server List states: "Address: Specifies an external (public) address for the device." In the guides, it is recommended to use the self IPs of the devices to peer. But does it really HAVE to be 'external'? Are there any limitations in simply peering to the LTM using one of the internal self IPs? Thanks for feedback!
GTM health monitor for Standalone Server not being sent

I'm trying to set up a standalone server in a new GTM implementation, and the health monitors don't seem to be working as expected. I've done the following tasks:

- Created a new datacenter (which is showing green/available)
- Created a new server for the GTM devices using the floating IP of the external VLAN (showing green/available)
- Created another new server (Generic Host) with the IP of the webserver, and created a virtual server with the same IP and the http health monitor (showing red/down)

I'm able to successfully curl the webserver from the CLI of the GTM device, so I know routing and network connectivity are good. I've done some tcpdumps, and the GTM device isn't sending any traffic to the virtual server. Can someone give me some pointers? Thanks in advance.
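For this kind of check, filtering the capture on the monitored host shows whether big3d is sourcing any probes at all (10.10.10.10 stands in for the webserver IP; interface 0.0 captures on all BIG-IP interfaces):

$ tcpdump -nni 0.0 host 10.10.10.10 and tcp port 80

One thing worth remembering when nothing shows up: GTM delegates monitoring to big3d agents, so the probes for a virtual server may be sourced from a different BIG-IP than the one you're capturing on (see the prober pool discussion further down this page).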
GTM Topology Load Balancing - Order of Operation

Two-part question:

1.) For wide IP-level topology load balancing, what takes precedence: order, weight, or prefix length? (Assuming topology load balancing is choosing between pools based on source IP subnet.)

2.) This question came about due to a situation in which I'm seeing some unexpected LB results. Given the below topology configuration (11.x):

1. IP subnet is 10.0.1.0/29, pool is West_DC_Pool, weight 1
2. IP subnet is 10.0.1.0/24, pool is West_DC_Pool, weight 150
3. IP subnet is 10.0.0.0/24, pool is East_DC_Pool, weight 1
4. IP subnet is 10.0.0.0/16, pool is East_DC_Pool, weight 100

The LDNS server IP is 10.0.1.5 (there's only one LDNS server at the moment). The East_DC_Pool is being chosen every time. Based on the logs, it seems to be comparing 1 (10.0.1.0/29 with a weight of 1) to 4 (10.0.0.0/16 with a weight of 100), and therefore 4 is winning based on its weight of 100. There is no mention of 2 (10.0.1.0/24 with a weight of 150) in the logs. If I delete 1, then 2 (10.0.1.0/24 with a weight of 150) wins, so traffic is then sent to West_DC_Pool. Re-adding 1 (10.0.1.0/29 with weight 1) causes 4 (East_DC_Pool) to win again. Is this expected behavior??? I would have expected in all cases (with an LDNS IP of 10.0.1.5) that traffic would be routed to the West_DC_Pool based on either longest prefix match (1 would win), weight (2 would win), or order (again 1 would win). But maybe there's something about the order of operation that I'm unaware of.

Thanks in advance,
Dave
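For anyone comparing against their own config, topology records like those above can be created and inspected from tmsh, which makes the scores explicit; a sketch using the question's own pool names (syntax per the 11.x tmsh reference, so double-check on your build):

$ tmsh create gtm topology ldns: subnet 10.0.1.0/29 server: pool West_DC_Pool score 1
$ tmsh create gtm topology ldns: subnet 10.0.1.0/24 server: pool West_DC_Pool score 150
$ tmsh list gtm global-settings load-balancing

The last command shows the topology longest-match setting, which governs whether GTM filters candidate records by longest prefix before comparing scores; that setting is the usual suspect when score and prefix length appear to interact unexpectedly.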
HA GSLB between 2 DC don't work

I have a GSLB environment with 2 DCs. Each datacenter has an F5 DNS and two LTMs, and we have only one wide IP. The test that we are making is trying to resolve with dns2: if we make this test, we are redirected to ltm2, and if we turn off ltm2, it redirects to ltm1. But if we make the same test on dns1, first it sends the client to ltm1, and if we turn off ltm1, it doesn't redirect to ltm2.

On the wide IP I have both pools, dc1 and dc2, with the Topology LB method. Then in each pool I have both ltm1 and ltm2, changing the order depending on the zone. On the pools I have the Topology, Global Availability, and None LB methods. The topology rules are the following:

- IP subnet 10.100.2.0/24: Pool is DC2
- IP subnet 0.0.0.0/0: Pool is DC1

Are the topology rules OK? I don't know why it is not working.
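One quick check that separates GTM behavior from resolver caching is to query each DNS listener directly while ltm1 is down and compare the answers (the listener IPs and wide IP name below are placeholders):

$ dig @<dns1-listener-ip> app.example.com +short
$ dig @<dns2-listener-ip> app.example.com +short

If dns1 still answers with ltm1's virtual server after ltm1 is down, the wide IP/pool decision on dns1 is the problem; if dns1 answers correctly but clients still reach ltm1, an intermediate resolver is likely caching the old answer for the record's TTL.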
prober pool Round Robin with multi health monitors and with multi prober pool members

I have a question about GTM monitors and prober pools. In my case, I have three datacenters, three GTMs (one in each DC), and one prober pool. The prober pool includes all three GTMs and was set to use Round Robin. There are two VSs, vs1 and vs2, in different DCs, and each VS is configured with two health monitors, each monitor with a different probe interval (e.g. vs1's monitors have intervals of 5s and 7s; vs2's monitors have intervals of 9s and 11s). So, my question is: how does the prober pool Round Robin work in this case? Looking forward to your help, thank you.
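For comparison, this is roughly how such a prober pool is defined in tmsh; a sketch with hypothetical server names (prober pool members are ordered, and the pool is then assigned to the servers or datacenters whose resources it should probe):

$ tmsh create gtm prober-pool gtm_probers load-balancing-mode round-robin members add { gtm_dc1 { order 0 } gtm_dc2 { order 1 } gtm_dc3 { order 2 } }
$ tmsh modify gtm server app_server prober-pool gtm_probers

Round robin here applies to which GTM performs the probing, while the monitor intervals (5s/7s, 9s/11s) control how often each probe fires.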