GTM DNS
18 Topics

BGP over 2 VLANs to 2 network switches
Hi, I'm testing a new design for LTM where the BIG-IP will have two transit VLANs, one to switch A and one to switch B, and I'll establish BGP over them. The switches will advertise a default route to the BIG-IP, and the BIG-IP will advertise the VIP and SNAT addresses to the switches. I use SNAT for the VIP so that traffic does not drop when a switch fails. The way I'm advertising the SNAT addresses is by creating them as virtual servers of type Forwarding (IP) with loose close and loose initiation enabled, and the VIP is configured as a Standard virtual server using this SNAT. These are advertised into BGP from the kernel routes. I enabled connection mirroring for this VIP and disabled auto-lasthop globally along with VLAN-keyed connections. My expectation is that connections to the VIP do not drop when a switch fails or during BIG-IP failover. On the BGP side, I disabled graceful restart and enabled BFD. This works with what I have done so far. My questions: is there anything I should think about before implementing this in production, or anything I can do to make it better? Is my approach of advertising the SNAT as a Forwarding (IP) virtual server correct? I want to take a similar approach with GTM as well, and I'm wondering if I should create a non-floating self IP (like a loopback) for the listener and SNAT to the backend VIP?

How to monitor the AXFR master response
Hi everyone. Previously I had a zone transfer configuration between GTM (slave) and Microsoft AD (master). Recently I have experienced problems with AXFR access to the server, with intermittent status, while IXFR is still monitored normally. I tested both using the dig tool. Related to this, does anyone have experience with how to monitor the AXFR master response? For example, placing the master server into a pool and monitoring it with a health monitor. Some references point to link monitors and external monitors, but I don't really understand them. Thank you.

BIG-IP DNS: Check status of multiple monitors against a pool member
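One way to approach AXFR monitoring (a sketch, not a tested BIG-IP external monitor): shell out to `dig axfr` against the master and decide up/down from the output. A successful AXFR contains the zone's SOA record, while a refused or failed transfer typically prints a "Transfer failed." message. The decision logic is split into its own function so it can be tested against sample output; the zone, master hostname, and function names below are illustrative, not anything F5-specific.

```python
import subprocess


def axfr_succeeded(dig_output: str) -> bool:
    """Decide whether a `dig axfr` run returned a full zone transfer.

    A successful AXFR contains at least one SOA record line; a failed
    transfer typically contains a "Transfer failed." message instead.
    """
    if "Transfer failed" in dig_output:
        return False
    return any("\tSOA\t" in line or " SOA " in line
               for line in dig_output.splitlines())


def check_axfr(zone: str, master: str, timeout: int = 10) -> bool:
    # Illustrative wrapper: shells out to dig the same way a BIG-IP
    # external monitor script might.
    result = subprocess.run(
        ["dig", "axfr", zone, f"@{master}", "+time=5"],
        capture_output=True, text=True, timeout=timeout)
    return axfr_succeeded(result.stdout)
```

An external monitor built around this would `echo "up"` only when the check returns True, so the pool member goes down whenever the master stops answering AXFR even if IXFR still works.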
Good day, everyone! On the LTM platform, if a pool is configured with "Min 1 of" and multiple monitors, you can check the status per monitor via `tmsh show ltm monitor <name>`, or you can click the pool member in the TMUI and it will show you the status of each monitor for that member. I cannot seem to locate a similar function on the GTM/BIG-IP DNS platform. We'd typically use this methodology when transitioning to a new type of monitor, so we can passively test connectivity without the potential for impact before removing the previous monitor. Does anyone have a way, through tmsh or the TMUI, to check an individual pool member's status against the multiple monitors configured for its pool? Thanks, all!

What is the best practice to deploy a single tenant on F5 rSeries? (Solved)
Hi, we are going to deploy a new rSeries 5k with a single tenant. What is the best practice for the setup? I plan to set it up as below; can someone please advise whether it is correct or not? I also have a question on automatic disk space and memory allocation. Thanks in advance!

- Allocate all the disk space to this large single tenant
- Allocate all the memory to this single tenant
- Within the tenant, set "Large" for the "Mgmt" module; for the remaining modules (LTM, GTM, ASM), set "Normal" under "Resource Provisioning"

It seems the system automatically allocates disk space and memory to each module. Based on the amounts allocated to these modules, there still appears to be a lot of spare disk space and memory. Will these modules automatically share the remaining spare disk space and memory when necessary?

Any issue with setting up LTM and GTM/DNS on the same F5 appliance cluster?
Hi, we have a pair of F5 appliances and plan to set up an HA cluster. After HA configuration, with both appliances in sync:

- LTM works well in active/standby mode, as expected
- The GTM delivery listener is active on the active F5 appliance, as expected, and DNS queries are routed to the active appliance
- GTM wide-IP pool members show a "down" state on the standby appliance, and the status of the Data Centers/Links is also shown as "down" on the standby appliance. Is this normal?

Both F5 appliances are configured in the same GTM sync group with different external physical links. Can someone please advise? Thanks in advance!

Some questions on device Trust Certificates (Solved)
Hi, I have two questions on device trust certificates (client certs):

1. Why are there duplicate certificates in the Device Trust Certificate list? I saw duplicate GTM device certificates on LTM devices.
2. Is it true that only the GTM device certificate is sent to the LTM device, and not the reverse, i.e., no LTM device certificate appears in the GTM Device Trust Certificate list? I checked the GTM and LTM devices in our different regions, and no LTM device certificate is on any GTM Device Trust Certificate list.

Can someone please help advise? Thanks in advance!

Retrieve GTM pool member addresses (Bigrest)
A wide IP has a pool of servers that are virtual servers on an LTM. I would like to retrieve the pool member addresses of the virtual servers used in the wide-IP pool using the Bigrest Python library.

- wide IP = site.com
- Pool name = site_pool
- Pool member A = site_a_vs (server = ltm_a)
- Pool member B = site_b_vs (server = ltm_b)

I can load the wide IP, which provides a poolReference. I can then load the pool, which provides a membersReference. The membersReference provides a serverReference (the LTM) and the vs name. From here, I can load all virtual servers on the server given by the serverReference, but I am unsure how to retrieve only the virtual servers that are relevant to the wide IP. There is no virtual server ID provided by the membersReference or serverReference.

Priority group activation on GTM
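One possible approach (a sketch against the Bigrest library, not verified on a live box, so the REST paths may need adjusting): a GTM pool member's `name` property has the form `server_name:vs_name`, so you can load that server's `virtual-servers` subcollection and match on the vs name to read its `destination`. The matching step is pure logic, split out below so it can be tested with sample data.

```python
from typing import Optional


def member_address(member_name: str, server_vs_list: list[dict]) -> Optional[str]:
    """Given a GTM pool member name ("server:vs") and a list of that
    server's virtual-server property dicts, return the vs destination."""
    _, _, vs_name = member_name.partition(":")
    for vs in server_vs_list:
        if vs.get("name") == vs_name:
            return vs.get("destination")
    return None


# Hypothetical live usage (untested sketch; host and credentials are placeholders):
# from bigrest.bigip import BIGIP
# device = BIGIP("gtm.example.com", "admin", "secret")
# members = device.load("/mgmt/tm/gtm/pool/a/~Common~site_pool/members")
# for m in members:
#     server = m.properties["name"].split(":")[0]
#     vs_list = device.load(f"/mgmt/tm/gtm/server/~Common~{server}/virtual-servers")
#     print(member_address(m.properties["name"], [v.properties for v in vs_list]))
```

The key idea is that the member name itself is the join key between the pool's membersReference and the server's virtual-server list, so no separate virtual server ID is needed.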
Hello all, I need to configure an active/standby setup at the GTM pool level: only one VS should be up and the second should be standby, and if the first VS goes down, traffic should pass to the other VS. I can see there is a "Minimum Up Members" option, but I do not know how to use it as priority group activation at the GTM level. If anyone has an article or configuration suggestion, please share. Many thanks in advance for your time and consideration.

Use Fully Qualified Domain Name (FQDN) for GSLB Pool Member with F5 DNS
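For what it's worth, the usual GTM equivalent of LTM priority groups is the Global Availability load-balancing method on the pool: members are tried in their configured order, and the first available member always answers, so the second VS only receives traffic when the first is down. A minimal model of that selection logic (purely illustrative, not F5 code):

```python
from typing import Optional


def global_availability(members: list[dict]) -> Optional[str]:
    """Return the first available member in configured order, mimicking
    GTM's Global Availability load-balancing method."""
    for member in members:  # members are in configured (priority) order
        if member["available"]:
            return member["name"]
    return None  # no member up; GTM would fall back per pool settings
```

With this method, member order in the pool acts as the priority list, which is closer to the "one active, one standby" behavior than Minimum Up Members (which governs when the whole pool is marked down).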
Normally, we define a specific IP (and port) to be used as a GSLB pool member. This article provides a custom configuration that makes it possible to use a Fully Qualified Domain Name (FQDN) as a GSLB pool member, with all the usual GSLB features such as health-check monitoring, load-balancing method, persistence, etc.

Although GSLB as a mechanism to distribute traffic across datacenters has been around for years, it has not become less relevant. The fact that internet infrastructure still relies heavily on DNS means GSLB continues to be used, thanks to its lightweight nature and smooth integration. When using F5 DNS as the GSLB solution, we usually deal with LTM and its virtual servers as the GSLB server and pool member, respectively. Sometimes we add a non-LTM node as a generic server to provide inter-DC load-balancing capability. Either way, we end up with an IP and port pair representing the application, against which we send health checks.

With the rise of public cloud and CDN, there is a need to use an FQDN as a GSLB pool member instead of an IP and port pair. Some of us may immediately think of using a CNAME-type GSLB pool to accommodate this. However, there is a limitation: BIG-IP requires a CNAME-type GSLB pool to use a wide-IP-type pool member, so we end up with an IP and port pair (again!). We can use a "static target", but this has the side effect that the pool member is always considered available (which raises the question of why we need GSLB in the first place!). Additionally, the BIG-IP TMUI accepts FQDN input when configuring GSLB servers and pool members, but it immediately translates the FQDN to an IP using the configured DNS. Thus, this is not the solution we are looking for.

Now this is where BIG-IP's power (a.k.a. programmability) comes into play. Enter the realm of customization... We all love customization, but at the same time we do not want it to be overly complicated, so that life becomes harder on day 2 🙃.
Thus, the key is to use some customization, but keep it simple enough to avoid unnecessary complication. Here is one idea to solve our FQDN-as-GSLB-pool-member problem.

The customized configuration includes:

1. External health-check monitor
   - Dynamically resolves DNS to translate the FQDN into an IP address
   - Performs health-check monitoring against the current IP address
   - The result is used to determine GSLB pool member availability status

2. DNS iRule
   - Check #1: checks whether the GSLB pool attached to the wide IP contains only FQDN-type members (e.g., another pool referring to an LTM VS may also be attached to the wide IP)
     - If false, do nothing (let the DNS response refer to the LTM VS)
     - Otherwise, perform check #2
   - Check #2: checks the current health-check status of the requested domain name
     - If the FQDN is up, modify the DNS response to return the current IP of the FQDN
     - Otherwise, perform a fallback action per your requirement (e.g., return an empty response, return a static IP, use a fallback pool, etc.)

3. Internal datagroup
   - Stores the current IP of the FQDN, updated according to the health-check interval
   - The datagroup record value contains the current IP if the health check succeeds; otherwise the value contains empty data

Here is some of the code. In this configuration, the wide IP is gslb.test.com and the GSLB pool member FQDN is arcadia.f5poc.id.

1. External health-check monitor configuration

```
gtm monitor external gslb_external_monitor {
    defaults-from external
    destination *:*
    interval 10
    probe-timeout 5
    run /Common/gslb_external_monitor_script
    timeout 120
    # define the FQDN here
    user-defined fqdn arcadia.f5poc.id
}
```

External health-check monitor script:

```shell
#!/bin/sh
pidfile="/var/run/$MONITOR_NAME.$1..$2.pid"
if [ -f $pidfile ]
then
    kill -9 -`cat $pidfile` > /dev/null 2>&1
fi
echo "$$" > $pidfile

# Obtain the current IP for the FQDN
resolv=`dig +short ${fqdn}`

# The actual monitoring action
curl -fIs -k https://${fqdn}/ --resolve ${fqdn}:443:${resolv} | grep -i HTTP 2>&1 > /dev/null
status=$?

if [ $status -eq 0 ]
then
    # Actions when the health check succeeds
    rm -f $pidfile
    tmsh modify ltm data-group internal fqdn { records replace-all-with { $fqdn { data $resolv } } }
    echo "sending monitor to ${fqdn} ${resolv} with result OK" | logger -p local0.info
    echo "up"
else
    # Actions when the health check fails
    tmsh modify ltm data-group internal fqdn { records replace-all-with { $fqdn { } } }
    echo "sending monitor to ${fqdn} ${resolv} with result NOK" | logger -p local0.info
fi
rm -f $pidfile
```

2. DNS iRule

```
when DNS_REQUEST {
    set qname [DNS::question name]
    # Obtain the current IP for the FQDN
    set currentip [class match -value $qname equals fqdn]
}
when DNS_RESPONSE {
    set rname [getfield [lindex [split [DNS::answer]] 4] "\}" 1 ]
    # Check if the answer is the specially encoded FQDN IP, 10.10.10.10 in this example
    if {$rname eq "10.10.10.10" }{
        # The response came only from the pool with the external monitor,
        # meaning no other pool is attached to the wide IP
        if {$currentip ne ""}{
            # The current FQDN health check succeeded
            DNS::answer clear
            # Use the current IP to construct the DNS answer section
            DNS::answer insert "[DNS::question name]. 123 [DNS::question class] [DNS::question type] $currentip"
        } else {
            # The current FQDN health check failed
            # Define the fallback action to be performed here
            DNS::answer clear
        }
    }
}
```

3. Internal datagroup

```
ltm data-group internal fqdn {
    records {
        # Define the FQDN as the record name
        arcadia.f5poc.id {
            # The record data contains the current IP, continuously
            # updated by the external monitor script
            data 158.140.176.219
        }
    }
    type string
}
```

[Screenshot: GSLB virtual server configuration]

Some testing

The resolution follows whichever IP address is currently assigned to the FQDN. If a CNAME response is required, you can achieve that by modifying the DNS iRule above. The logic and code are open to any improvement, so leave your suggestions in the comments if you have any. Thanks!
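To make the iRule's decision flow easier to reason about, here is a small Python model of the same logic (purely illustrative; the function and variable names are mine): given the answer IP returned by the pool and the datagroup lookup for the queried name, decide what the final DNS answer should be.

```python
from typing import Optional

# Specially encoded IP returned by the FQDN-type pool member
SENTINEL_IP = "10.10.10.10"


def resolve_answer(answer_ip: str, datagroup_ip: Optional[str]) -> Optional[str]:
    """Model of the DNS_RESPONSE iRule decision:
    - non-sentinel answer: another (LTM VS) pool answered; leave it alone
    - sentinel answer + healthy FQDN: substitute the current FQDN IP
    - sentinel answer + failed health check: cleared (empty) answer
    """
    if answer_ip != SENTINEL_IP:
        return answer_ip      # leave LTM VS responses untouched
    if datagroup_ip:          # health check succeeded, IP recorded
        return datagroup_ip
    return None               # health check failed: fallback action
```

This also shows why the sentinel IP matters: it is the only signal the iRule has that the FQDN-type pool, rather than a regular LTM VS pool, produced the answer.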