gtm dns
BIG-IP DNS: Check Status Of Multiple Monitors Against Pool Member
Good day, everyone! On the LTM platform, if a pool is configured with a "Min 1 of" availability requirement across multiple monitors, you can check the status per monitor via tmsh show ltm monitor <name>, or click the pool member in the TMUI and it will show you the status of each monitor for that member. I cannot seem to locate a similar function on the GTM/BIG-IP DNS platform. We typically use this approach when transitioning to a new type of monitor, so we can passively test connectivity, without the potential for impact, before removing the previous monitor. Does anyone have a way, through tmsh or the TMUI, to check an individual pool member's status against each of the multiple monitors configured for its pool? Thanks, all!
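A possible starting point for the multiple-monitors question above: the LTM-side command is the one quoted in the question, while on BIG-IP DNS I am only aware of commands that expose the aggregate member status, not a per-monitor breakdown. Treat the following as a sketch; the object names are placeholders and exact syntax varies by TMOS version.

```
# LTM: show the instances of a monitor and their status per member
tmsh show ltm monitor http my_http_monitor

# BIG-IP DNS: aggregate availability of a GSLB pool and its members
tmsh show gtm pool a my_gslb_pool
tmsh show gtm pool a my_gslb_pool members
```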
Hi, we are going to deploy a new rSeries 5k with a single tenant. What is the best-practice setup? I plan to set it up as below; can someone please advise whether it is correct or not? I also have a question on automatic disk space and memory allocation. Thanks in advance!

- Allocate all the disk space to this large single tenant.
- Allocate all the memory to this single tenant.
- Within the tenant, set the "Mgmt" module to "Large"; for the rest of the modules (LTM, GTM, ASM), set "Normal" under Resource Provisioning.

The system seems to allocate disk space and memory to each module automatically. Based on the amounts allocated to these modules, there still seems to be a lot of spare disk space and memory. Will these modules automatically share the remaining spare disk space and memory when necessary?

Any issue if setting up LTM and GTM/DNS on the same F5 appliance cluster?
Hi, we have a pair of F5 appliances and plan to set up an HA cluster. After HA configuration, with both appliances in sync:

- LTM works well in active/standby mode, as expected.
- The GTM/DNS listener is active on the active F5 appliance, as expected, and DNS queries are routed to the active appliance.
- GTM wide-IP pool members show a "down" state on the standby appliance, and the status of the Data Centers/Links is also shown as "down" on the standby appliance. Is this normal?

Both F5 appliances are configured in the same GTM sync group with different external physical links. Can someone please advise? Thanks in advance!

Some questions on device trust certificates?
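One check relevant to the HA/standby-state thread above: wide-IP pool member status on each unit depends on iQuery connectivity between that unit and the GTM servers in the sync group, so comparing the iQuery state on the active and standby units is a reasonable first step. This is a sketch; verify the command on your version.

```
# Show the state of iQuery connections from this unit to its GTM/LTM peers
tmsh show gtm iquery all
```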
Hi, I have two questions on device trust certificates (client certs).

1. Why are there duplicate certificates in the Device Trust Certificate list? I saw duplicate GTM device certificates on LTM devices.
2. Is it true that only the GTM device certificate is sent to the LTM device, and not the reverse, i.e. no LTM device certificate appears in the GTM device's Device Trust Certificate list? I checked the GTM and LTM devices in our different regions, and no LTM device certificate is in any GTM Device Trust Certificate list.

Can someone please help advise? Thanks in advance!

Retrieve GTM pool member addresses (Bigrest)
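On the device-trust thread above: one way to see exactly which certificates each unit holds is to inspect the device trust files directly. The paths below are the standard device identity (dtdi) and trusted device CA (dtca) certificate locations on BIG-IP; treat this as a hedged sketch.

```
# Inspect this unit's device identity certificate and its trusted CA certificate
openssl x509 -in /config/ssl/ssl.crt/dtdi.crt -noout -subject -issuer -dates
openssl x509 -in /config/ssl/ssl.crt/dtca.crt -noout -subject -issuer -dates
```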
A wide-IP has a pool of servers that are virtual servers on an LTM. I would like to retrieve the pool member addresses of the virtual servers used in the wide-IP pool using the Bigrest Python library.

- wide-IP = site.com
- Pool name = site_pool
- Pool member A = site_a_vs (server = ltm_a)
- Pool member B = site_b_vs (server = ltm_b)

I can load the wide-IP, which provides a poolReference. I can then load the pool, which provides a membersReference. The membersReference provides a serverReference (the LTM) and the VS name. From there I can load all virtual servers on the server given by the serverReference, but I am unsure how to retrieve only the virtual servers that are relevant to the wide-IP. There is no virtual-server ID provided by the membersReference or serverReference.

Priority group activation on GTM
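For the Bigrest question above, one workable pattern is to use the member name itself: each A-type pool member is named "<server>:<virtual-server>", which is enough to build the REST path of the matching virtual server under /mgmt/tm/gtm/server and read its destination. The sketch below assumes the object names from the question (site_pool, ltm_a, site_a_vs) and placeholder credentials; it is not a tested, definitive implementation.

```python
def member_vs_path(member_name: str, partition: str = "Common") -> str:
    """Build the REST path of the GTM server virtual server behind a
    pool member named '<server>:<vs>'."""
    server, vs = member_name.split(":", 1)
    return f"/mgmt/tm/gtm/server/~{partition}~{server}/virtual-servers/{vs}"


def pool_member_addresses(device, pool: str, partition: str = "Common") -> dict:
    """Return {member_name: destination} for every member of an A-type pool.

    `device` is any object with a Bigrest-style load() method."""
    members = device.load(f"/mgmt/tm/gtm/pool/a/~{partition}~{pool}/members")
    result = {}
    for member in members:
        name = member.properties["name"]          # e.g. "ltm_a:site_a_vs"
        vs = device.load(member_vs_path(name, partition))
        result[name] = vs.properties["destination"]  # e.g. "10.1.1.11:443"
    return result


def demo():
    # Connection details are placeholders; requires a reachable BIG-IP DNS.
    from bigrest.bigip import BIGIP
    device = BIGIP("gtm.example.net", "admin", "secret", session_verify=False)
    print(pool_member_addresses(device, "site_pool"))
```

The demo() function shows how the helpers would be wired to a live unit; member naming can differ by partition, so adjust member_vs_path if your GTM servers live outside /Common.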
Hello all, I need to configure an active/standby setup at the GTM pool level: only one VS should be up and the second should be standby, and if one VS goes down, traffic should pass to the other VS. I can see there is a Minimum Up Members option, but I do not know how to use it as priority group activation at the GTM level. If anyone has an article or config suggestion, please share. Many thanks in advance for your time and consideration.

Use Fully Qualified Domain Name (FQDN) for GSLB Pool Member with F5 DNS
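On the priority-group question above: GTM does not have LTM-style priority group activation, but the Global Availability load-balancing method gives equivalent active/standby behavior, since it always answers with the first available member in member order. Minimum Up Members is a different knob (it marks the whole pool unavailable when fewer than N members are up). A hedged tmsh-style sketch with placeholder object names:

```
gtm pool a /Common/app_pool {
    # Always return the first available member, in member-order sequence
    load-balancing-mode global-availability
    fallback-mode none
    members {
        /Common/dc1_server:vs_active { member-order 0 }
        /Common/dc2_server:vs_standby { member-order 1 }
    }
}
```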
Normally, we define a specific IP (and port) to be used as a GSLB pool member. This article provides a custom configuration that makes it possible to use a Fully Qualified Domain Name (FQDN) as a GSLB pool member, with all the usual GSLB features: health-check monitoring, load-balancing method, persistence, etc.

Although GSLB as a mechanism for distributing traffic across datacenters has been around for years, it has not become less relevant. Because internet infrastructure still relies heavily on DNS, GSLB continues to be used thanks to its lightweight nature and smooth integration.

When using F5 DNS as the GSLB solution, we usually deal with LTM and its virtual servers as the GSLB server and pool member, respectively. Sometimes we add a non-LTM node as a generic server to provide inter-DC load-balancing capability. Either way, we end up with an IP and port pair representing the application, against which we send health checks.

With the rise of public cloud and CDN, there is a need to use an FQDN as a GSLB pool member instead of an IP and port pair. Some of us may immediately think of using a CNAME-type GSLB pool to accommodate this. However, BIG-IP requires a CNAME-type GSLB pool to use a wideIP-type pool member, so we end up with an IP and port pair (again!). We can use a "static target", but with the side effect that the pool member is always considered available (which raises the question of why we need GSLB in the first place!). Additionally, the TMUI accepts FQDN input when configuring a GSLB server and pool member, but it immediately translates it to an IP based on the configured DNS, so that is not the solution we are looking for.

This is where BIG-IP's power (a.k.a. programmability) comes into play. Enter the realm of customization... We all love customization, but at the same time do not want it to be overly complicated, so that life becomes harder on day 2 🙃.
Thus, the key is to use some customization, but keep it simple enough to avoid unnecessary complication. Here is one idea to solve our FQDN-as-GSLB-pool-member problem. The customized configuration consists of three objects:

1. External health-check monitor:
   - Dynamically resolves DNS to translate the FQDN into an IP address
   - Performs health-check monitoring against the current IP address
   - The result determines the GSLB pool member's availability status

2. DNS iRule:
   - Check #1: checks whether the GSLB pool attached to the wideIP contains only FQDN-type members (e.g. another pool referring to an LTM VS may also be attached to the wideIP). If false, do nothing (let the DNS response refer to the LTM VS); otherwise, perform check #2.
   - Check #2: checks the current health-check status of the requested domain name. If the FQDN is up, modify the DNS response to return the current IP of the FQDN; otherwise, perform a fallback action per your requirements (e.g. return an empty response, return a static IP, use a fallback pool, etc.).

3. Internal datagroup:
   - Stores the current IP of the FQDN, updated on each health-check interval
   - The datagroup record value contains the current IP if the health check succeeds; otherwise the value is empty

Here is some of the code. In this example, the wideIP is gslb.test.com and the GSLB pool member FQDN is arcadia.f5poc.id.

1. External health-check monitor config:

    gtm monitor external gslb_external_monitor {
        defaults-from external
        destination *:*
        interval 10
        probe-timeout 5
        run /Common/gslb_external_monitor_script
        timeout 120
        # define the FQDN here
        user-defined fqdn arcadia.f5poc.id
    }

External health-check monitor script:

    #!/bin/sh
    pidfile="/var/run/$MONITOR_NAME.$1..$2.pid"
    if [ -f $pidfile ]
    then
        kill -9 -`cat $pidfile` > /dev/null 2>&1
    fi
    echo "$$" > $pidfile

    # Obtain the current IP for the FQDN
    resolv=`dig +short ${fqdn}`

    # The actual monitoring action
    curl -fIs -k https://${fqdn}/ --resolve ${fqdn}:443:${resolv} | grep -i HTTP > /dev/null 2>&1
    status=$?
    if [ $status -eq 0 ]
    then
        # Actions when the health check succeeds
        rm -f $pidfile
        tmsh modify ltm data-group internal fqdn { records replace-all-with { $fqdn { data $resolv } } }
        echo "sending monitor to ${fqdn} ${resolv} with result OK" | logger -p local0.info
        echo "up"
    else
        # Actions when the health check fails
        tmsh modify ltm data-group internal fqdn { records replace-all-with { $fqdn { } } }
        echo "sending monitor to ${fqdn} ${resolv} with result NOK" | logger -p local0.info
    fi
    rm -f $pidfile

2. DNS iRule:

    when DNS_REQUEST {
        set qname [DNS::question name]
        # Obtain the current IP for the FQDN
        set currentip [class match -value $qname equals fqdn]
    }
    when DNS_RESPONSE {
        set rname [getfield [lindex [split [DNS::answer]] 4] "\}" 1]
        # Check if the answer is the specially encoded FQDN IP, 10.10.10.10 in this example
        if {$rname eq "10.10.10.10"}{
            # The response came only from the pool with the external monitor,
            # meaning no other pool is attached to the wideIP
            if {$currentip ne ""}{
                # Current FQDN health check succeeded
                DNS::answer clear
                # Use the current IP to construct the DNS answer section
                DNS::answer insert "[DNS::question name]. 123 [DNS::question class] [DNS::question type] $currentip"
            } else {
                # Current FQDN health check failed
                # Define the fallback action to be performed here
                DNS::answer clear
            }
        }
    }

3. Internal datagroup:

    ltm data-group internal fqdn {
        records {
            # Define the FQDN as the record name
            arcadia.f5poc.id {
                # Record data contains the current IP; this is continuously
                # updated by the external monitor script
                data 158.140.176.219
            }
        }
        type string
    }

[Image: GSLB virtual server configuration]

Some testing: the resolution follows whatever the current IP address of the FQDN is. If a CNAME response is required, you can achieve that by modifying the DNS iRule above. The logic and code are open to improvement, so leave your suggestions in the comments if you have any. Thanks!

When a user goes through the LB, the server page has stripped information
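The testing step in the FQDN article above can be reproduced from any client with dig; the names come from the article's example, and the listener address is a placeholder.

```
# Query the wideIP via the BIG-IP DNS listener
dig @192.0.2.10 gslb.test.com +short

# Compare with the current resolution of the FQDN pool member
dig +short arcadia.f5poc.id
```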
I have created a pretty simple round-robin load-balancing setup for a user with three servers. As part of this, I also have DNS load balancing in place that sends the traffic to two VIPs connected to the three nodes in a pool I created on my LTM. The user accesses the LB DNS URL I provide via Https://<>.com > VIP > Pool > Nodes. A certificate is applied to the client-ssl and server-ssl profiles attached to the VIPs. The user is able to reach their backend servers/nodes when going through the load balancer, but we have come across an interesting issue: when the user goes through the F5, the server dashboard page they usually see is stripped of information. Typically there would be tiles shown on the dashboard, but they get just the basic UI and none of the tiles. When the user goes directly to the server, all the information/tiles are shown as normal. I have never experienced this problem before and am not sure how to prove whether the F5 is causing the issue or how it is happening. Any insight would be greatly appreciated! (The attached file shows what I am describing.)

GTM pool is OFFLINE even if pool members are UNKNOWN
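For the stripped-dashboard thread above, one way to prove whether the F5 alters the response is to fetch the same page directly and through the VIP and compare the headers and bodies; the hostnames below are placeholders. Differences in Host-dependent links, compression, or blocked subrequests often explain missing dashboard tiles.

```
# Headers: direct to the server vs through the VIP
curl -sk -D - -o /dev/null https://server1.internal.example/dashboard
curl -sk -D - -o /dev/null https://app.example.com/dashboard

# Diff the bodies to see exactly what is stripped
curl -sk https://server1.internal.example/dashboard > direct.html
curl -sk https://app.example.com/dashboard > via_f5.html
diff direct.html via_f5.html
```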
Hi, maybe someone can clarify this situation for me; I did not find it in the documentation.

- A generic host (with no monitors) has two virtual servers (also with no monitors).
- The state of the server and virtual servers is UNKNOWN (as expected).
- The state of the pool is OFFLINE (why? this is not clear to me), but both members are UNKNOWN.
- The wide IP is OFFLINE because the pool has no available members (yet the members are unknown, not unavailable).
- The DNS response for the wide IP returns two IPs (the addresses of both members). That is OK in this case, because "return code on failure" is disabled by default.
- When I enable "return code on failure", the response is empty.

Note: when one member is disabled (or down, based on a temporary monitor), the DNS response returns only one IP, the IP of the unknown member. That is correct, but the pool state and wide-IP state are OFFLINE.

My question is: why is the pool state OFFLINE when the pool members' states are UNKNOWN? I think it should be UNKNOWN. In the same situation on LTM, the pool state is unknown, not offline. Does GTM behave differently? TMOS version: 17.1.1.3.

Here is a simple test configuration:

    # gslb domain (wide ip)
    gtm wideip a /testTenant/testApp/test.my.local {
        pools {
            /testTenant/testApp/testPool { order 0 }
        }
    }

    # gslb pool
    gtm pool a /testTenant/testApp/testPool {
        alternate-mode global-availability
        fallback-mode none
        load-balancing-mode global-availability
        members {
            /Common/server1:vs1 { member-order 0 }
            /Common/server1:vs2 { member-order 1 }
        }
    }

    # gslb servers
    gtm server /Common/server1 {
        datacenter /Common/testDc
        devices {
            0 {
                addresses {
                    10.1.1.1 { }
                }
            }
        }
        prober-fallback none
        product generic-host
        virtual-servers {
            vs1 { destination 10.1.1.11:0 }
            vs2 { destination 10.1.1.12:0 }
        }
    }
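A hedged note on the configuration above: giving the generic host's virtual servers even a basic monitor gives them a definite up/down state instead of UNKNOWN, which sidesteps the UNKNOWN-members/OFFLINE-pool combination. A sketch reusing the server object from the question with the built-in gateway_icmp monitor (verify the syntax on your version):

```
gtm server /Common/server1 {
    datacenter /Common/testDc
    product generic-host
    virtual-servers {
        vs1 {
            destination 10.1.1.11:0
            monitor /Common/gateway_icmp
        }
        vs2 {
            destination 10.1.1.12:0
            monitor /Common/gateway_icmp
        }
    }
}
```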