topology
Self-IP & SNAT
We are in the process of replacing our F5s with new ones. One question that came up was self IP vs. SNAT. Can we skip the self IP and use only a SNAT pool for the pool members, with the VIP, SNAT pool, and pool members all in the same subnet, or do we need the self IP?

Proposed example:
VIP = 10.2.2.254
SNAT pool = 10.2.2.250
Pool members = 10.2.2.10 & 10.2.2.20

Current topology:
VIP = 10.10.10.20
Pool members = 10.20.20.10 - 10.20.20.30
Self IP = 10.20.30.10

Can we just use the SNAT pool instead of the self IP scenario, or do we need the self IP? Thanks, and I hope this can be answered!
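For reference, a minimal tmsh sketch of the proposed setup (object names and the 10.2.2.251 self IP address are hypothetical). The usual guidance is that a SNAT pool does not replace a self IP: the BIG-IP still needs a self IP on the directly connected VLAN for ARP, routing, and health monitors toward the pool members.

```
# Sketch of the proposed one-armed setup; names are hypothetical.
# Note the self IP in 10.2.2.0/24 - as far as I know it is still
# required for the unit to reach and monitor the pool members.
tmsh create net self selfip_internal address 10.2.2.251/24 vlan internal
tmsh create ltm snatpool app_snatpool members add { 10.2.2.250 }
tmsh create ltm pool app_pool members add { 10.2.2.10:80 10.2.2.20:80 } monitor http
tmsh create ltm virtual app_vs destination 10.2.2.254:80 ip-protocol tcp \
    profiles add { http } pool app_pool \
    source-address-translation { type snat pool app_snatpool }
```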
GTM Topology Load Balancing - Order of Operation
Two-part question:

1.) For wide IP-level topology load balancing, what takes precedence: order, weight, or prefix length? (Assume topology load balancing is choosing between pools based on the source IP subnet.)

2.) This question came about because of a situation in which I'm seeing some unexpected LB results. Given the topology configuration below (11.x):

1: IP subnet is 10.0.1.0/29, pool is West_DC_Pool, weight 1
2: IP subnet is 10.0.1.0/24, pool is West_DC_Pool, weight 150
3: IP subnet is 10.0.0.0/24, pool is East_DC_Pool, weight 1
4: IP subnet is 10.0.0.0/16, pool is East_DC_Pool, weight 100

The LDNS server IP is 10.0.1.5 (there's only one LDNS server at the moment). The East_DC_Pool is being chosen every time. Based on the logs, GTM seems to be comparing record 1 (10.0.1.0/29, weight 1) to record 4 (10.0.0.0/16, weight 100), so record 4 wins on weight. There is no mention of record 2 (10.0.1.0/24, weight 150) in the logs. If I delete record 1, then record 2 (10.0.1.0/24, weight 150) wins, so traffic is sent to West_DC_Pool. Re-adding record 1 (10.0.1.0/29, weight 1) causes record 4 (East_DC_Pool) to win again. Is this expected behavior? With an LDNS IP of 10.0.1.5, I would have expected traffic to be routed to West_DC_Pool in every case, whether by longest prefix match (record 1 wins), weight (record 2 wins), or order (again record 1 wins). But maybe there's something about the order of operation that I'm unaware of. Thanks in advance, Dave
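The behavior described looks consistent with GTM's Topology Longest Match setting (enabled by default, if memory serves): for each candidate pool, only that pool's longest matching record is considered, and scores are then compared across pools. That would explain why record 2 never appears in the logs while record 1 exists. A sketch of the records and the relevant global setting in tmsh:

```
# The four records above, in tmsh form.
tmsh create gtm topology ldns: subnet 10.0.1.0/29 server: pool West_DC_Pool score 1
tmsh create gtm topology ldns: subnet 10.0.1.0/24 server: pool West_DC_Pool score 150
tmsh create gtm topology ldns: subnet 10.0.0.0/24 server: pool East_DC_Pool score 1
tmsh create gtm topology ldns: subnet 10.0.0.0/16 server: pool East_DC_Pool score 100

# With longest match on, West's best record is the /29 (score 1) and
# East's is the /16 (score 100), so East wins. To rank all matching
# records purely by score, longest match can be turned off:
tmsh modify gtm global-settings load-balancing topology-longest-match no
```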
GTM Topology - LDNS Request
Hello! I have some doubts about how the GTM works with its topology LB records. Say I have the following wide IPs and pools:

www.mydomain.com -> Pool A (Preferred: Topology, Alternate: Round Robin, Fallback: Return to DNS)
www.otherdomain.com -> Pool B (Preferred: Topology, Alternate: Round Robin, Fallback: Return to DNS)
www.onedomain.com -> Pool C (Preferred: Topology, Alternate: Round Robin, Fallback: Return to DNS)
www.lastdomain.com -> Pool D (Preferred: Topology, Alternate: Round Robin, Fallback: Return to DNS)

And these topology records:

Order 1: LDNS request source 10.20.0.0/16, destination Pool A, weight 1
Order 2: LDNS request source 10.30.0.0/16, destination Pool B, weight 1
Order 3: LDNS request source 10.40.0.0/16, destination Pool A, weight 1
Order 4: LDNS request source 10.0.0.0/16, destination Pool D, weight 1

My scenario questions are:
1- If a query is made for www.mydomain.com from source IP 10.20.0.1, is Pool A served?
2- If a query is made for www.mydomain.com from source IP 192.168.10.1, will Round Robin be used?
3- If a query is made for www.lastdomain.com from source IP 10.20.0.1, is Pool D served? Or will the topology logic catch the first record (10.20.0.0/16 -> Pool A) instead?

Thank you, JohnR
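For reference, the records above in tmsh (a sketch; pool names shortened). As I understand it (worth verifying), topology records only score pools that are actually attached to the queried wide IP: for www.lastdomain.com, Pool A is not a candidate, so record 1 should not be able to redirect that query away from Pool D.

```
tmsh create gtm topology ldns: subnet 10.20.0.0/16 server: pool PoolA score 1
tmsh create gtm topology ldns: subnet 10.30.0.0/16 server: pool PoolB score 1
tmsh create gtm topology ldns: subnet 10.40.0.0/16 server: pool PoolA score 1
tmsh create gtm topology ldns: subnet 10.0.0.0/16 server: pool PoolD score 1
```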
BIG-IP DNS Topology LB selection - topology score vs QoS score
I'm trying to figure out when and why the topology score vs. the QoS score is used for DNS LB. When we use topology LB on a wide IP to select a pool, I get the following:

SIN-F5I01 from {my internal src IP} [test.glb.lab.int A] [topology selected pool (AMS-Exchange-GTM_Pool) - topology score (10) is higher] [topology skipped pool (FRA-Exchange-GTM_Pool) - topology score (0) is not higher] [topology selected pool (AMS-Exchange-GTM_Pool) with the highest topology score (10)]

When I do the same on a pool to select a pool member:

SIN-F5I01 from {my internal src IP} [time.glb.lab.int A] [pool member check succeeded (chi-ntp01.lab.int)] [QoS selected pool member (chi-ntp01.lab.int) - QoS score (0) is higher] [pool member check succeeded (sin-ntp01.lab.int)] [QoS selected pool member (sin-ntp01.lab.int) - QoS score (429496729600) is higher] [pool member check succeeded (ams-ntp01.lab.int)] [QoS skipped pool member (ams-ntp01.lab.int) - QoS score is not higher] [QoS selected pool member (sin-ntp01.lab.int)]

In both cases we are using topology LB and don't assign a gtm-score to any VIP. Questions: Is this just the default behavior for pool member selection vs. pool selection? And how exactly is the QoS score calculated? I've noticed that changing the weight in the topology record changes the score (weight 1 = QoS score 42949672900, weight 10 = QoS score 429496729600). Many thanks in advance
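One back-of-the-envelope observation, not an official formula: the two reported member scores differ by almost exactly the topology weight ratio, and the weight-10 score factors neatly as a power of two times a constant, which suggests the QoS path simply multiplies the topology score by a large fixed coefficient. The per-metric QoS coefficients can be inspected on the pool (pool name hypothetical):

```
# Ratio check: 429496729600 / 42949672900 ~= 10, matching the 10x
# weight change, and 429496729600 = 2^32 * 100. This hints that the
# topology weight is scaled by a fixed internal multiplier inside the
# QoS equation (my assumption, not documented arithmetic).
tmsh list gtm pool ntp_pool qos-topology qos-rtt qos-hops qos-hit-ratio
```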
GTM - Data Center destinations in Topology Records and Topology Load Balancing at the Wide IP level
I am hearing that when doing topology-based load balancing at the wide IP level, topology records that use a Data Center as the destination are ignored. In fact, this is what I'm seeing with my own eyes. Why would this be the only destination type that does not work with topology LB at the wide IP level? It would be so very useful. Am I missing something?
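Assuming the behavior is as described, one workaround sketch: since wide IP-level topology is choosing between pools, the datacenter destination can be re-expressed as a pool destination (names and subnet hypothetical):

```
# Ignored at the wide IP level, per the behavior described above:
tmsh create gtm topology ldns: subnet 10.1.0.0/16 server: datacenter DC_West score 100
# The same intent expressed with a pool destination, which wide
# IP-level topology does evaluate:
tmsh create gtm topology ldns: subnet 10.1.0.0/16 server: pool West_Pool score 100
```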
GTM/DNS Topology Record and Local DNS Question
Hello all, a quick question if I may: if a DNS request is sent from a client machine to its local DNS server, which then forwards the query to another DNS server before it is finally delegated to the GTM, which DNS server IP should be used as the local DNS in the topology record in this scenario, the very first or the last? I thought I knew, but now I realise I'm not so confident. Thank you.
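A sketch under one assumption worth verifying: GTM can only evaluate topology against the source IP of the query packet it actually receives, which is the last resolver in the forwarding chain, not the client's first-hop DNS server. So the record would cover the last-hop resolver's subnet (addresses and pool name hypothetical):

```
# Match on the last resolver in the chain - the source IP GTM sees.
tmsh create gtm topology ldns: subnet 192.0.2.0/24 server: pool Site_A_Pool score 100
```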
GTM and Topology? Possible without iRule?
Hi, I wonder if this scenario is possible to achieve without creating a GTM iRule. A host asks for an FQDN to connect to a bunch of servers. This host can be in DC1 or DR1 depending on different factors; it's migrated using VMware HA. It keeps the same IP, the number of hops is the same, etc., so there is no obvious way to figure out which DC the host is actually in. (That will probably be solved by creating some external monitor; it's not important now.)

If the host is in a given DC, it should receive IPs only of target servers in that DC. If there are no active servers left in that DC but there are active servers in the other DC, then IPs from the other DC should be returned.

I have no idea right now how to achieve this using just configuration objects, without an iRule; any ideas are welcome. I can't use topology at the wide IP level, as there is no change that can be detected: the DNS requests always come from the same IP. If Global Availability is used, it would have to dynamically change the order based on which DC the host is running in at the given time. So... Piotr
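Building on the external-monitor idea mentioned above, one config-only sketch (all names hypothetical; wide IP syntax varies by version): two per-DC pools under Global Availability, with the external monitor marking the pool of the DC the host is not in as down, so GA falls through to the correct side.

```
# Two per-DC pools; an external monitor (not shown) detects where the
# host currently runs and marks the other DC's pool down, so Global
# Availability returns only the matching DC's servers.
tmsh create gtm pool pool_dc1 load-balancing-mode round-robin \
    members add { dc1-host:dc1-vs }
tmsh create gtm pool pool_dr1 load-balancing-mode round-robin \
    members add { dr1-host:dr1-vs }
tmsh create gtm wideip app.example.com pool-lb-mode global-availability \
    pools add { pool_dc1 { order 0 } pool_dr1 { order 1 } }
```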
GTM and Topology
Hi, I am quite new to DNS/GTM, so I haven't yet had a chance to dig into how topology LB works and what can be done. I wonder if this configuration is possible.

Setup:
Two DCs: DC1 and DR1
An LDNS in each DC sending requests to the GTM (LDNS_DC1, LDNS_DR1)
Servers in each DC, defined as Generic Hosts
The GTM serving one wide IP pointing to servers in both DCs

I would like to achieve this result. When LDNS_DC1 sends a request, the logic is:
If any server is active in DC1, return the IP of any of them (based on round robin)
If no server is active in DC1, return the IP of any active server in DR1 (based on round robin)

When LDNS_DR1 sends a request, the logic is:
If any server is active in DR1, return the IP of any of them (based on round robin)
If no server is active in DR1, return the IP of any active server in DC1 (based on round robin)

So the logic is switched around depending on which LDNS issued the request. Is that possible without an iRule, using just the LB methods available at the wide IP and pool levels?

Side question: which LB methods will work at all for Generic Hosts? I guess in this case everything is based on analyzing the communication between the LDNS and the GTM (none of the data that is provided when a BIG-IP type of server is used).
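This looks achievable without an iRule, assuming each LDNS sits in a distinct subnet. A sketch (names and subnets hypothetical): one pool per DC with round robin inside, topology at the wide IP level preferring the same-DC pool, and lower-scored cross records providing the fallback when the preferred pool has no available members.

```
# One pool per DC, round robin inside each pool.
tmsh create gtm pool pool_dc1 load-balancing-mode round-robin \
    members add { dc1-srv1:vs dc1-srv2:vs }
tmsh create gtm pool pool_dr1 load-balancing-mode round-robin \
    members add { dr1-srv1:vs dr1-srv2:vs }
# Each LDNS prefers its own DC; the lower-scored cross records take
# over when the preferred pool has no available members.
tmsh create gtm topology ldns: subnet 10.1.1.0/24 server: pool pool_dc1 score 200
tmsh create gtm topology ldns: subnet 10.1.1.0/24 server: pool pool_dr1 score 100
tmsh create gtm topology ldns: subnet 10.2.2.0/24 server: pool pool_dr1 score 200
tmsh create gtm topology ldns: subnet 10.2.2.0/24 server: pool pool_dc1 score 100
```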
Question on GTM load balancing
Dear all, I'm looking for an approach for my client's requirements, which I have explained below. Currently, one DNS wide IP is configured on the GTM with topology as the load balancing mechanism, so that requests from the EMEA/US regions are processed by VS1 on LTM1 and requests from APAC by VS2 on LTM2. Needless to say, VS1 and VS2 are configured with pools of backend servers in their respective locations. However, my client's request is that all requests from the US should now be processed by VS2 on LTM2, instead of the current setup in which they are processed by VS1 on LTM1 as explained above. Can anyone suggest whether such load balancing can be configured on the GTM, or whether it can be done at the LTM level (iRule) while staying transparent at the GTM level? Please advise. Br, MSK
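On the GTM side this looks doable with topology records alone, since topology sources can be geographic as well as subnet-based (pool names hypothetical; worth verifying the country-code form on your version):

```
# Steer US clients to the pool behind VS2 on LTM2; the lower-scored
# record keeps VS1 as a fallback if that pool is unavailable.
tmsh create gtm topology ldns: country US server: pool pool_vs2_ltm2 score 200
tmsh create gtm topology ldns: country US server: pool pool_vs1_ltm1 score 100
```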
F5 GTM Topology LB with multiple wide IPs, different conditions (pool selection)
Hi, I have a new requirement from my client. We previously set up their Exchange servers, and topology LB is working perfectly.

Existing scenario (simplified):
Datacenter A: source IP X = 10.10.x.x, Pool A = DCA_VS
Datacenter B: source IP Y = 20.20.x.x, Pool B = DCB_VS

Topology design:
Source IP X, pool is Pool A, weight 200
Source IP X, pool is Pool B, weight 100
Source IP Y, pool is Pool B, weight 200
Source IP Y, pool is Pool A, weight 100

In the wide IP configuration (wide IP: exchange.abc.com), I selected "Topology" as the load balancing method.

Here comes the new requirement:
Datacenter A: source IP X = 10.10.x.x, Pool C = DCC_VS
Datacenter B: source IP Y = 20.20.x.x, Pool D = DCD_VS

How do I apply these new topology requirements to a new wide IP, for example sharepoint.abc.com? In the wide IP load balancing options, we cannot select which topology records the wide IP should use.

Note: source IP X should always prefer Datacenter A resources, while source IP Y should always prefer Datacenter B resources as their primary. Thanks in advance!
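A sketch based on one understanding worth verifying: topology records are global, but they only influence wide IPs whose attached pools appear as record destinations. So records written against Pool C and Pool D would govern sharepoint.abc.com without changing exchange.abc.com, whose pools are A and B (the /16 subnets below are an assumption for the 10.10.x.x and 20.20.x.x sources):

```
# Mirror the existing pattern for the new pools; these records only
# matter to wide IPs that contain DCC_VS or DCD_VS.
tmsh create gtm topology ldns: subnet 10.10.0.0/16 server: pool DCC_VS score 200
tmsh create gtm topology ldns: subnet 10.10.0.0/16 server: pool DCD_VS score 100
tmsh create gtm topology ldns: subnet 20.20.0.0/16 server: pool DCD_VS score 200
tmsh create gtm topology ldns: subnet 20.20.0.0/16 server: pool DCC_VS score 100
```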