Forum Discussion
F5 DNS external monitor
Hi, we are using an F5 DNS (v12.1.2).
We now have a request to do DNS load balancing for two IPs in separate datacenters (OpenShift setups).
Both IPs are reachable over HTTPS, but they run multiple instances on the same port (so different URIs on the same IP). A standard HTTPS monitor therefore isn't a solution here: it will work, but it won't show the status of the individual services, and we need to know the status of each instance.
I was thinking about defining separate monitors for each URL. The problem is that pools and members are monitored by IP address, whereas I would need to use the FQDN in the monitor.
I can use curl: "curl -kv https://<dns-name>/checkService" works when I use the FQDN. Using the IP doesn't work.
Does anybody have an idea how to use FQDNs in a monitor?
I also tested the external monitors from these articles: "https://devcentral.f5.com/s/articles/http-monitor-curl-basic-get" and "https://devcentral.f5.com/s/articles/https-sni-monitoring-how-to"
But I'm getting some odd results when using them on F5 DNS. I was able to define the TLS-SNI monitor, but when checking the debug logs I saw it picked up all the variables I defined except the "hostname", where F5 DNS kept using its own hostname, resulting in a failure. I'm not really sure whether these external monitors are designed for use on LTM only, or also on DNS?
- cjunior (Nacreous)
Hi,
Have you tried creating a monitor for each FQDN with a simple send string?
e.g.
GET /checkService HTTP/1.1\r\nHost: dns-name\r\nConnection: close\r\n\r\n
See this article:
https://support.f5.com/csp/article/K13397
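If the send string alone does the job, the whole thing fits in one monitor definition; a sketch in tmsh config form (the monitor name and recv string here are my assumptions, and K13397 covers the send-string escaping):

```
gtm monitor https /Common/checkService-https {
    defaults-from /Common/https
    send "GET /checkService HTTP/1.1\r\nHost: dns-name\r\nConnection: close\r\n\r\n"
    recv "200 OK"
}
```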
Regards
- werner_verheyle (Altocumulus)
Yes, but the platform is OpenShift and works over HTTPS with TLS SNI. I don't know all the internals, but the request is accepted on what they call the "router"; no SSL termination happens there, so it is forwarded on to the application. When we put the hostname in the HTTP header, it isn't accepted and we get an HTTP error rather than the expected 200 OK.
Using curl with the FQDN works fine, but I don't see a way to define a node based on an FQDN in F5 DNS.
- Yoann_Le_Corvi1 (Cumulonimbus)
Hi,
First, run your curl test with the -vv option to get full debug output of everything curl sends to the backend to get a 200. Then you can try building a monitor from that information.
I don't quite understand why curl with the FQDN would work but a monitor with a Host header would not, as they are normally the same. It may also be another header (User-Agent, Accept, Content-Type...).
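For example (the hostname and member IP below are the placeholders used later in this thread; --connect-timeout just keeps a dead member from hanging the test):

```shell
# Full verbose output of the working request -- note the SNI line in the
# TLS handshake and every header curl sends:
curl -vk --connect-timeout 3 "https://testname.domain/checkService" || true

# The same request pinned to one member IP while keeping SNI and the Host
# header intact; this is what a working monitor has to reproduce:
curl -vk --connect-timeout 3 \
     --resolve "testname.domain:443:10.100.100.100" \
     "https://testname.domain/checkService" || true
# (|| true: the exit status doesn't matter here, the verbose output does)
```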
Sincerely
- cjunior (Nacreous)
Hi Yoann,
As far as I know, SSL with TLS SNI isn't possible with the built-in HTTPS monitors, at least on BIG-IP version 12.x.
Monitor headers live at OSI layer 7, whereas SNI (Server Name Indication) is sent in the Client Hello during the SSL handshake, right after the L4 TCP handshake.
The curl command applies SNI during the SSL handshake, just as openssl does with the "-servername" parameter.
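A quick way to see the difference with openssl itself (IP and hostname are the placeholders from this thread; timeout guards against an unreachable member):

```shell
# SNI is sent in the TLS Client Hello, before any HTTP header exists.
# With -servername, an SNI-routed server can pick the right certificate:
timeout 5 openssl s_client -connect 10.100.100.100:443 \
        -servername testname.domain </dev/null || true

# Without -servername there is no SNI at all -- often a different
# certificate, or an outright handshake failure on SNI-routed setups:
timeout 5 openssl s_client -connect 10.100.100.100:443 </dev/null || true
```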
Please correct me if I am wrong.
Respectfully
- cjunior (Nacreous)
Hi, sorry for the delay.
OK, the issue is probably the SNI.
Could you try with this external monitor?
I've been using this one in my setup.
Regards
#!/bin/sh
# These arguments supplied automatically for all external monitors:
# $1 = IP (nnn.nnn.nnn.nnn notation)
# $2 = port (decimal, host byte order)
#
# This script expects the following Name/Value pairs:
# HOST = the host name of the SNI-enabled site
# URI = the URI to request
# RECV = the expected response
#
# Remove IPv6/IPv4 compatibility prefix (LTM passes addresses in IPv6 format)
NODE=`echo ${1} | sed 's/::ffff://'`
if [[ $NODE =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
    NODE=${NODE}
else
    NODE=[${NODE}]
fi
PORT=${2}

PIDFILE="/var/run/`basename ${0}`.sni_monitor_${HOST}_${PORT}_${NODE}.pid"

# Kill any still-running instance of this monitor before starting a new probe
if [ -f $PIDFILE ]
then
    echo "EAV exceeded runtime needed to kill ${HOST}:${PORT}:${NODE}" | logger -p local0.error
    kill -9 `cat $PIDFILE` > /dev/null 2>&1
fi
echo "$$" > $PIDFILE

# Probe the member IP while presenting the configured hostname for SNI and Host
curl-apd -k -i --silent --resolve ${HOST}:${PORT}:${NODE} https://${HOST}:${PORT}${URI} | grep -i "${RECV}" > /dev/null 2>&1
STATUS=$?

# Remove the pidfile before the script echoes anything to stdout and is killed by bigd
rm -f $PIDFILE

if [ $STATUS -eq 0 ]
then
    echo "UP"
fi
exit
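A note on the first block: BIG-IP hands external monitors the member address in IPv6-mapped form (::ffff:a.b.c.d), which is why the script strips the prefix before building the curl --resolve argument. The same logic as a standalone sketch (normalize_node is my own name, and I've used grep -E in place of the bash regex):

```shell
# Strip the IPv6-mapped prefix; bracket anything still IPv6 so it can be
# used in curl's --resolve host:port:addr syntax.
normalize_node() {
    node=$(echo "$1" | sed 's/::ffff://')
    if echo "$node" | grep -Eq '^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$'; then
        echo "$node"            # plain IPv4 after stripping
    else
        echo "[$node]"          # still a literal IPv6 address
    fi
}

normalize_node "::ffff:10.100.100.100"   # -> 10.100.100.100
normalize_node "fd00::10"                # -> [fd00::10]
```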
- werner_verheyle (Altocumulus)
I uploaded this script to both of our DNS (GTM) units via System - File Management - External Monitor Files. As we use two F5 DNS units in a sync group, I did this separately on both units.
I then created the monitor based on this script, with the parameters HOST, URI and RECV.
The server equipped with this monitor is marked down. But the weird thing is that when I check with tcpdump, I don't even see a query leaving the box. I even ran tcpdump -i 0.0 to check all interfaces on the box.
I'm puzzled: the external monitor file is present and linked to the server, yet tcpdump shows nothing being sent out, while the monitor still marks it down.
I have worked with external monitors on LTM units before without any issues. But this is not an LTM unit, only a DNS unit (no LTM provisioned on it). I presume external monitors also work on F5 DNS?
- cjunior (Nacreous)
Yes, it should work on DNS just as on an LTM unit.
Did you add the DNS units to the data center server list, as you did for the OpenShift generic hosts?
Did you try running the script in bash to check that the parameters and the script are OK?
If possible, please share your setup here.
- werner_verheyle (Altocumulus)
I'm using two F5 DNS units in a sync group (version 12.1.2).
We have two datacenters (DC1 and DC2); in each datacenter we deployed an F5 DNS.
Below is the config as it was created (I changed DNS names and IPs, but nothing else):
gtm datacenter /Common/DC1 {
description "Datacenter DC1"
}
gtm datacenter /Common/DC2 {
description "Datacenter DC2"
}
gtm prober-pool /Common/GTM-Probers-DC1 {
members {
/Common/unit-dc1 {
order 0
}
/Common/unit-dc2 {
order 1
}
}
}
gtm prober-pool /Common/GTM-Probers-DC2 {
members {
/Common/unit-dc1 {
order 1
}
/Common/unit-dc2 {
order 0
}
}
}
gtm server /Common/DC1-Openshift-virtual-ip {
addresses {
10.100.100.100 {
device-name /Common/DC1-Openshift-virtual-ip
}
}
datacenter /Common/DC1
prober-pool /Common/GTM-Probers-DC1
product generic-host
virtual-servers {
testname.domain {
destination 10.100.100.100:443
monitor /Common/Test
}
}
}
gtm monitor external /Common/Test {
defaults-from /Common/external
destination *:*
interval 30
probe-timeout 5
run /Common/https-sni-monitor-v1
timeout 120
user-defined HOST testname.domain
user-defined RECV 200
user-defined URI /checkService
}
So we create a generic host without a monitor, and then define a virtual server on F5 DNS based on that host on port 443. The monitor "Test" is associated with this virtual server; it is an EAV monitor pointing to the file "https-sni-monitor-v1", which is the file with the script mentioned earlier.
This script was uploaded as a file via "System" - "File Management" - "External Monitor", following the normal procedure described by F5.
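For reference, the GUI upload can also be done from tmsh; a sketch, assuming the script was first copied to /var/tmp on each unit:

```
create sys file external-monitor https-sni-monitor-v1 source-path file:/var/tmp/https-sni-monitor-v1
```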
When the monitor is enabled on the host, we see the host being marked "down" after the monitoring interval.
However, when running a tcpdump, we do not see anything going out to the IP address of the host on which the monitor is defined.
- cjunior (Nacreous)
Hi, this sounds weird to me, since you have already added the BIG-IP units to the server list in the data center.
I'll try to reproduce this behaviour in the lab until someone else can help you here.
Regards.
- werner_verheyle (Altocumulus)
Sorry to say, but this was a stupid mistake on my side.
A # character was missing at the beginning of the script, so that line wasn't commented out, which made the script fail. So it's normal that we saw nothing in tcpdump: the script wasn't being executed correctly.
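For anyone hitting the same symptom (monitor marks the host down, nothing on tcpdump): a mangled shebang means /bin/sh never runs the script at all. A small pre-upload check helps; check_eav below is my own helper, not an F5 tool:

```shell
# Fail fast on the two easy mistakes: a broken "#!" line and plain
# syntax errors. sh -n parses the script without executing it.
check_eav() {
    [ "$(head -c 2 "$1")" = "#!" ] || { echo "missing shebang in $1"; return 1; }
    sh -n "$1" || { echo "syntax error in $1"; return 1; }
    echo "ok"
}
```

Running "check_eav https-sni-monitor-v1" prints "ok" only if both checks pass.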
- cjunior (Nacreous)
Wow, nice to hear that!
BIG-IP still rocks :)
Cheers