monitor
HTTP Monitor cURL Basic POST
Problem this snippet solves: External HTTP monitor script that sends a POST request to the pool member to which it is applied, marking the member up if the expected response is received. The URI, POST data, and response string are user-configurable. cURL uses HTTP/1.1 by default and, since no hostname is specified in the cURL command, inserts the IP address in the Host header.

NOTE: Use external monitors only when a built-in monitor won't do the trick. This monitor is intended as an example of using cURL (which offers a large number of other useful options) to perform a POST. More basic HTTP monitors are much more efficiently configured using the built-in HTTP monitor template instead.

UPDATE: The script below had a logic error whereby it used the NODE and PORT variables to create a PID file before those variables were defined. This meant that if your monitor took long enough to run, the running monitor process was killed before it finished and a new process ran in its place. This gave the appearance of the monitor not functioning correctly. I have corrected this below.

How to use this snippet:
1. Create a new file containing the code below in /usr/bin/monitors on the LTM filesystem. Permissions on the file must be 700 or better, giving root rwx access to the file.
2. Create a monitor profile of type "External" with the following values:
   External Program: the name of the script file created in step 1
   Variables (Name / Value):
   URI: the URI to which the POST will be sent (URI only, no hostname)
   DATA: the POST data to be sent to the server
   RECV: the expected response
3. Adjust the interval and timeout as appropriate for your application.

If the interval and timeout are smaller than the execution time of the script, the monitor marks the element down and logs a message like this in /var/log/ltm:

Jan 3 00:00:00 local/bigip err logger: EAV exceeded runtime needed to kill 10.0.0.10:80

This is a false negative.
To fix this, please increase the interval and timeout accordingly.

Code:

#!/bin/sh
#
# (c) Copyright 1996-2007 F5 Networks, Inc.
#
# This software is confidential and may contain trade secrets that are the
# property of F5 Networks, Inc. No part of the software may be disclosed
# to other parties without the express written consent of F5 Networks, Inc.
# It is against the law to copy the software. No part of the software may
# be reproduced, transmitted, or distributed in any form or by any means,
# electronic or mechanical, including photocopying, recording, or information
# storage and retrieval systems, for any purpose without the express written
# permission of F5 Networks, Inc. Our services are only available for legal
# users of the program, for instance in the event that we extend our services
# by offering the updating of files via the Internet.
#
# @(#) $Id: http_monitor_cURL+POST,v 1.0 2007/06/28 16:36:11 deb Exp $
# (based on sample_monitor,v 1.3 2005/02/04 18:47:17 saxon)
#
# these arguments are supplied automatically for all external monitors:
# $1 = IP (nnn.nnn.nnn.nnn notation)
# $2 = port (decimal, host byte order)
#
# additional command line arguments ($3 and higher) may be specified in the
# monitor template; this example does not expect any additional arguments
#
# Name/Value pairs may also be specified in the monitor template.
# This example expects the following Name/Value pairs:
#   URI  = the URI to which the POST will be sent
#   DATA = the POST data to send to the server
#   RECV = the expected response (not case sensitive)

# remove the IPv6/IPv4 compatibility prefix (LTM passes addresses in IPv6 format)
NODE=`echo ${1} | sed 's/::ffff://'`
PORT=${2}

PIDFILE="/var/run/`basename ${0}`.${NODE}_${PORT}.pid"

# kill off the last instance of this monitor if it is hung, and log the current pid
if [ -f $PIDFILE ]
then
   echo "EAV exceeded runtime needed to kill ${NODE}:${PORT}" | logger -p local0.error
   kill -9 `cat $PIDFILE` > /dev/null 2>&1
fi
echo "$$" > $PIDFILE

# send request & check for expected response
curl -fNs http://${NODE}:${PORT}${URI} -d "${DATA}" | grep -i "${RECV}" > /dev/null 2>&1

# mark node UP if the expected response was received
if [ $? -eq 0 ]
then
   # remove the PID file
   rm -f $PIDFILE
   echo "UP"
else
   # remove the PID file
   rm -f $PIDFILE
fi

exit

F5 mssql health monitor failing
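The corrected ordering in the cURL monitor above matters because the PID file name embeds NODE and PORT; if the path is built before those variables are defined, every member's monitor instance shares one empty-named PID file and concurrent instances kill each other. A minimal sketch of the difference, using illustrative /tmp paths rather than /var/run:

```shell
#!/bin/sh
# Simulate the original bug: PIDFILE built before NODE/PORT are defined.
# NODE and PORT are still empty here, so every member gets the same file.
BROKEN_PIDFILE="/tmp/monitor.${NODE}_${PORT}.pid"

# Corrected ordering: define the variables first, then build the path.
NODE=`echo "::ffff:10.0.0.10" | sed 's/::ffff://'`
PORT=80
PIDFILE="/tmp/monitor.${NODE}_${PORT}.pid"

echo "broken:  $BROKEN_PIDFILE"
echo "correct: $PIDFILE"
```

With the broken ordering, two monitors running against different members would read each other's PIDs from the shared file and issue kill -9 against the wrong process.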
I am having an issue when configuring an mssql health monitor. I see a successful connection in the database, but the advanced debug logs show the receive string not matching:

2025-12-03 09:50:40,269-0500 [id750_DBPinger-1686] - DB connect succeeded.
2025-12-03 09:50:40,269-0500 [id750_DBPinger-1686] - Query message: SELECT @@SERVERNAME AS ServerName, SERVERPROPERTY('ServerName') AS InstanceName, SERVERPROPERTY('ProductVersion') AS ProductVersion
2025-12-03 09:50:40,270-0500 [id750_DBPinger-1686] - Send Query success
2025-12-03 09:50:40,274-0500 [id750_DBPinger-1686] - Response from server: ServerName: 'icscwsql1' , InstanceName: 'icscwsql1' , ProductVersion: '16.0.4215.2'
2025-12-03 09:50:40,277-0500 [id750_DBPinger-1686] - Checking for recv string: ServerName
2025-12-03 09:50:40,279-0500 [id750_DBPinger-1686] - Analyze Response failure

Here is the configuration of the health monitor:

ltm monitor mssql icscwsql.mssql.mon {
    debug no
    defaults-from mssql
    interval 30
    password xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    recv ServerName:
    recv-column 1
    recv-row 1
    send "SELECT @@SERVERNAME AS ServerName, SERVERPROPERTY('ServerName') AS InstanceName, SERVERPROPERTY('ProductVersion') AS ProductVersion"
    time-until-up 0
    timeout 91
    username xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
}

How to configure pool to go down if multiple members are down
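One possible reading of the mssql monitor failure above (an assumption based on the debug output, not a confirmed diagnosis): with recv-row 1 and recv-column 1 set, the monitor compares the recv string against the value of that single result cell, which here is icscwsql1, rather than searching the whole response for the column alias "ServerName:". The comparison can be mimicked offline; the quoting and parsing below are illustrative only:

```shell
#!/bin/sh
# The server response for row 1, as printed by the DBPinger debug log.
RESPONSE="ServerName: 'icscwsql1' , InstanceName: 'icscwsql1' , ProductVersion: '16.0.4215.2'"

# Value of row 1 / column 1: the text between the first pair of single quotes.
CELL=`echo "$RESPONSE" | sed "s/^[^']*'\([^']*\)'.*/\1/"`
echo "cell value: $CELL"

# The recv string is (under this assumption) checked against the cell value,
# so "ServerName:" never matches.
RECV="ServerName:"
case "$CELL" in
  *"$RECV"*) echo "match" ;;
  *)         echo "no match" ;;
esac
```

If that is the cause, setting recv to icscwsql1, or clearing recv-row/recv-column so the full response text is searched, would be worth testing.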
Hello community, I have a requirement related to pool health and its impact on BGP announcements. By default, a pool in BIG-IP is considered up as long as at least one member is still healthy. However, in my case, I need the pool to be marked down if a certain number of members are unhealthy.

For example: suppose I have a pool with 10 nodes. I would like the pool to be considered down if 5 (or more) of those nodes are marked down. The purpose is to ensure that when the pool is in this degraded state, the associated virtual server is also marked down, so that the VIP is no longer advertised via BGP.

In some specific cases, I have already applied monitors at the individual node level and configured the minimum number of monitors that must be available. While this works for isolated scenarios, I am looking for a more generic, scalable, and easy-to-maintain approach that could be applied across pools.

Has anyone implemented this type of behavior? Is there a native configuration option in BIG-IP to achieve this? Or would it require an external monitor script / custom solution? Any guidance or best practices would be appreciated. Thanks in advance!

HTTP Monitor to Check USER-COUNT from Ivanti Node – Regex Issues
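Whatever mechanism ends up answering the "pool down when N members are down" question above, the core of a scripted approach is just counting available members against a threshold. A sketch, assuming member states have already been collected externally (for example, parsed from tmsh pool-member output on a real unit; the values here are mock data):

```shell
#!/bin/sh
# Hypothetical input: one status word per member, as a script might parse it
# from "tmsh show ltm pool <name> members" on a real system.
STATUSES="up up down down down down down up up up"
MIN_UP=6   # pool counts as healthy only with at least this many members up

# Count how many members report "up".
UP_COUNT=0
for s in $STATUSES; do
  [ "$s" = "up" ] && UP_COUNT=`expr $UP_COUNT + 1`
done

# Decide the pool state from the threshold.
if [ $UP_COUNT -ge $MIN_UP ]; then
  STATE=up
else
  STATE=down
fi
echo "pool $STATE ($UP_COUNT of 10 members available)"
```

The same comparison could sit inside an external monitor (echo output for up, silence for down), but treat this as a building block rather than a finished solution.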
Hi everyone, I'm trying to configure an HTTP health monitor on an F5 LTM to check a value returned by an external Ivanti (Pulse Secure) node. The goal is to parse the value of the USER-COUNT field from the HTML response and ensure it's below or equal to 3000 users (based on our license limit). If the value exceeds that threshold, the monitor should mark the node as DOWN.

The Ivanti node returns a page that looks like this:

<!DOCTYPE html ... >
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-US" xml:lang="en-US">
<head>
<title>Cluster HealthCheck</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
</head>
<body>
<h1>Health check details:</h1>
CPU-UTILIZATION=1;
<br>SWAP-UTILIZATION=0;
<br>DISK-UTILIZATION=24;
<br>SSL-CONNECTION-COUNT=1;
<br>PLATFORM-LIMIT=25000;
<br>MAXIMUM-LICENSED-USER-COUNT=0;
<br>USER-COUNT=200;
<br>MAX-LICENSED-USERS-REACHED=NO;
<br>CLUSTER-NAME=CARU-LAB;
<br>VPN-TUNNEL-COUNT=0;
<br>
</body>
</html>

I'm trying to match the USER-COUNT value using the recv string in the monitor, like this:

recv "USER-COUNT=([0-9]{1,3}|[1-2][0-9]{3}|3000);"

I've also tried many others. The issue is: even when the page returns USER-COUNT=5000;, the monitor still reports the node as UP, when it should be DOWN. The regex seems to match incorrectly.

What I need: a working recv regex that matches USER-COUNT values from 0 to 3000 (inclusive), but fails if the value exceeds that limit.

Has anyone successfully implemented this kind of monitor with a numeric threshold check using recv? Is there a reliable pattern that avoids partial matches within larger numbers? Thanks in advance for any insight or working example.

using '--resolve' in the pool monitor health check
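Regarding the Ivanti USER-COUNT regex above, one likely culprit (an assumption worth verifying, and it presumes the recv string is evaluated as an extended regex): the response also contains MAXIMUM-LICENSED-USER-COUNT=0;, and the substring USER-COUNT=0; inside that field satisfies the pattern even when the real USER-COUNT is over the limit. Anchoring the pattern on the preceding <br> avoids the partial match. A grep sketch of both behaviors:

```shell
#!/bin/sh
# Fragment of the Ivanti response when the user count is over the limit.
PAGE="<br>MAXIMUM-LICENSED-USER-COUNT=0; <br>USER-COUNT=5000; <br>"

UNANCHORED='USER-COUNT=([0-9]{1,3}|[1-2][0-9]{3}|3000);'
ANCHORED='br>USER-COUNT=([0-9]{1,3}|[1-2][0-9]{3}|3000);'

# The unanchored pattern still matches, via "USER-COUNT=0;" inside
# MAXIMUM-LICENSED-USER-COUNT, so the monitor would stay UP at 5000 users.
if echo "$PAGE" | grep -qE "$UNANCHORED"; then
  echo "unanchored: match (false UP)"
fi

# The anchored pattern correctly fails to match at 5000 users...
if ! echo "$PAGE" | grep -qE "$ANCHORED"; then
  echo "anchored: no match (DOWN)"
fi

# ...and still matches at a healthy count.
if echo "<br>USER-COUNT=200; <br>" | grep -qE "$ANCHORED" ; then
  echo "anchored: match at 200 (UP)"
fi
```

This only demonstrates the regex logic with grep -E; whether the LTM recv matcher treats {1,3} and alternation the same way should be confirmed on the target version before relying on it.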
Hello, I am checking if it's possible to add the option '--resolve' in the health check monitor and avoid using a custom monitor (which consumes too much memory). For example:

curl -kvs https://some_site_in_the_internet.com/ready --resolve some_site_in_the_internet.com:443:196.196.12.12

I know you can use:

curl -kvs https://196.196.12.12/ready --header "host: some_site_in_the_internet.com"

But the path to the servers has some TLS requirements for which that does not work. Any ideas are welcome. Thanks

Big-IP sending Health Check to not-used Node-IP
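For context on the --resolve question above: --resolve differs from the Host-header approach in that curl also presents the hostname as the TLS SNI value, which is usually the "TLS requirement" that breaks the plain-IP form. On recent BIG-IP versions a built-in https monitor can achieve a similar effect without an external script by attaching a server-ssl profile that sets the SNI server name. A sketch of that idea (object names are placeholders and exact attribute support varies by version, so verify against your release before using):

```
ltm profile server-ssl sni_healthcheck {
    defaults-from serverssl
    server-name some_site_in_the_internet.com
}
ltm monitor https mon_resolve_style {
    defaults-from https
    send "GET /ready HTTP/1.1\r\nHost: some_site_in_the_internet.com\r\nConnection: close\r\n\r\n"
    recv "200 OK"
    ssl-profile sni_healthcheck
}
```

The monitor then connects to each pool member's IP while sending both the Host header and the SNI name, which is what --resolve gives you in curl.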
Hello everyone, my customer recently noticed while checking traffic on his firewall that health checks are sent from the BIG-IP's internal self IP to an IP that fits into the address range of the nodes in use on the F5. This node IP is not known to the customer, and by searching the node table or looking in /var/log/ltm we were unable to find this IP address. So either this node was used a while ago and the node object was deleted, or the BIG-IP tries talking to this IP via 443 for some other reason. Pings & curls sent from the BIG-IP fail. Has anyone noticed something like this before? Or is there another way to see where health checks are sent? Thanks and regards

Standby Has Fewer Online VIPs Than Active – Requires Manual Monitor Reset
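One low-tech way to chase down a mystery monitor target like the one above is to search the stored configuration text for the address, since anything still being monitored (a node, pool member, or monitor destination) should be referenced there. The sketch below runs against a mock config fragment; on a real unit the equivalent would be grepping /config/bigip.conf and /config/bigip_base.conf, or the output of tmsh list (the IP and object names here are illustrative):

```shell
#!/bin/sh
# Build a mock bigip.conf fragment to search.
CONF=/tmp/bigip.conf.sample
cat > "$CONF" <<'EOF'
ltm pool legacy_pool {
    members {
        10.10.20.99:443 {
            address 10.10.20.99
        }
    }
    monitor https
}
EOF

# Any hit names the object that still references the address.
HITS=`grep -c '10.10.20.99' "$CONF"`
echo "references found: $HITS"
rm -f "$CONF"
```

If the config search comes up empty on a real system, capturing the traffic with tcpdump on the BIG-IP would be the next step to see which process originates it.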
Hello F5 community, I’ll preface this by saying that networking has been verified as fully routable between the Active and Standby units. Both devices can ping and SSH to each other’s Self-IPs, and rebooting the Standby did not resolve the issue.

Issue: Discrepancy in Online VIPs Between Active & Standby

Despite being In-Sync, the Active and Standby units show a different number of Online VIPs. If I randomly select one or two VIPs that should be online, remove their monitors, and then re-add them—BOOM, the VIP comes online. The VIPs in question were both HTTPS (443).

Side Note: Frequent TCP Monitor Failures

In my environment, I also frequently see generic ‘TCP’ monitors failing, leading to outages. While I understand that TCP monitoring alone isn’t ideal, my hands are tied, as all changes must go through upper management for approval.

Has anyone encountered a similar issue where VIPs don’t come online until the monitor is manually reset? Any insights into potential root causes or troubleshooting steps would be greatly appreciated! Thanks in advance!

BIG-IP DNS: Check Status Of Multiple Monitors Against Pool Member
Good day, everyone! Within the LTM platform, if a pool is configured with "Min 1 of" with multiple monitors, you can check the status per monitor via tmsh show ltm monitor <name>, or you can click the pool member in the TMUI and it will show you the status of each monitor for that member. I cannot seem to locate a similar function on the GTM/BIG-IP DNS platform.

We'd typically use this methodology when transitioning to a new type of monitor, where we can passively test connectivity without the potential for impact prior to removing the previous monitor.

Does anyone have a way through tmsh or the TMUI where you can check an individual pool member's status against the multiple monitors configured for its pool? Thanks, all!

snmp-check external monitor
Problem this snippet solves: This external monitor script runs an snmpget against pool members and marks the members up or down based upon the result. Specifically created for this GTM/APM use case, but can be modified as needed.

How to use this snippet: copy the contents of this file into /config/monitors/snmp-check, and then in the external monitor configuration, reference the monitor and provide the following variable key/value pairs:

result=<result>
community=<community>
OID=<oid>

Code:

#!/bin/sh
#
# (c) Copyright 1996-2005 F5 Networks, Inc.
#
# This software is confidential and may contain trade secrets that are the
# property of F5 Networks, Inc. No part of the software may be disclosed
# to other parties without the express written consent of F5 Networks, Inc.
# It is against the law to copy the software. No part of the software may
# be reproduced, transmitted, or distributed in any form or by any means,
# electronic or mechanical, including photocopying, recording, or information
# storage and retrieval systems, for any purpose without the express written
# permission of F5 Networks, Inc. Our services are only available for legal
# users of the program, for instance in the event that we extend our services
# by offering the updating of files via the Internet.
#
# @(#) $Id: sample_monitor,v 1.3 2005/02/04 18:47:17 saxon Exp $
#
# these arguments supplied automatically for all external pingers:
# $1 = IP (nnn.nnn.nnn.nnn notation or hostname)
# $2 = port (decimal, host byte order)
# $3 and higher = additional arguments
#
# $MONITOR_NAME = name of the monitor
#
# In this sample script, $3 is the regular expression

# These lines are required to control the process ID of the monitor
pidfile="/var/run/$MONITOR_NAME.$1..$2.pid"
if [ -f $pidfile ]
then
   kill -9 `cat $pidfile` > /dev/null 2>&1
fi
echo "$$" > $pidfile

# Since version 9 uses the IPv6 native form of the IP address, parse that down for usage
node_ip=`echo $1 | sed 's/::ffff://'`

# Log the variables for debugging
#echo IP= $node_ip Port =$2 OID= $OID comm= $community result= $result >> /var/tmp/test

# Create a variable called answer that contains the result of the snmpget
answer=`snmpget $node_ip -c $community -O v $OID | awk '{print $2}'`

# Log the answer for debugging
#echo Answer= $answer >> /var/tmp/test

if [ $answer -lt $result ]
then
   echo "up"
fi

rm -f $pidfile

Tested this on version: No Version Found
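The up/down decision in the snmp-check script above can be exercised without a live SNMP agent by shadowing snmpget with a shell function, since shell functions take precedence over binaries. A sketch (the OID, community string, and returned value are illustrative):

```shell
#!/bin/sh
# Stand-in for the real snmpget binary: with -O v it prints "TYPE: value".
snmpget() {
  echo "INTEGER: 42"
}

# Monitor variables, as they would be supplied in the external monitor config.
community=public
OID=.1.3.6.1.4.1.3375.2.1.1.2.1.8.0   # hypothetical OID for illustration
result=100                            # mark up only while the value stays below this

# Same extraction the monitor performs: second field of the snmpget output.
answer=`snmpget 10.0.0.10 -c $community -O v $OID | awk '{print $2}'`

# Same comparison the monitor uses: any stdout means UP, silence means DOWN.
if [ "$answer" -lt "$result" ]; then
  echo "up"
fi
```

Swapping the mock's value above the threshold (say 150) makes the script print nothing, which an external monitor interprets as DOWN.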