monitor
How to configure pool to go down if multiple members are down
Hello community, I have a requirement related to pool health and its impact on BGP announcements. By default, a pool in BIG-IP is considered up as long as at least one member is still healthy. In my case, however, I need the pool to be marked down once a certain number of members are unhealthy. For example: suppose I have a pool with 10 nodes. I would like the pool to be considered down if 5 (or more) of those nodes are marked down. The purpose is to ensure that when the pool is in this degraded state, the associated virtual server is also marked down, so that the VIP is no longer advertised via BGP.

In some specific cases, I have already applied monitors at the individual node level and configured the minimum number of monitors that must be available. While this works for isolated scenarios, I am looking for a more generic, scalable, and easy-to-maintain approach that could be applied across pools.

Has anyone implemented this type of behavior? Is there a native configuration option in BIG-IP to achieve this, or would it require an external monitor script or another custom solution? Any guidance or best practices would be appreciated. Thanks in advance!
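One caveat worth noting up front: the pool's native min-up-members setting triggers an HA action (such as failover) when membership drops below the threshold, rather than marking the pool itself down, so this usually ends up as an external (EAV) monitor. Below is a minimal sketch, assuming the script is attached to every member of the production pool and counts availability in a separate "shadow" pool that carries only the real health monitor (otherwise the gate monitor would feed back into its own count and latch everything down). All object names, the threshold, and the exact tmsh field-fmt output are assumptions to verify on your version.

#!/bin/bash
# Hypothetical EAV script, e.g. /config/monitors/pool_min_members.sh
# Attach to every member of the production pool; it counts available members
# in a shadow pool (same members, real health monitor only) and goes silent
# (= DOWN) when fewer than MIN_UP are available, taking the whole pool down.

SHADOW_POOL="/Common/my_pool_shadow"   # assumption: shadow pool carrying the real monitor
MIN_UP=5                               # minimum members that must be available

# Count members the real monitor reports as available
# (verify the exact field name/value on your TMOS version)
UP=$(tmsh show ltm pool "$SHADOW_POOL" members field-fmt 2>/dev/null \
     | grep -c "status.availability-state available")

if [ "$UP" -ge "$MIN_UP" ]; then
    echo "up"   # any output on stdout marks this member UP
fi
exit 0          # no output -> member (and eventually the pool) marked DOWN

Once the pool and its virtual server go down, selective route advertisement on the virtual address withdraws the BGP route as usual.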
HTTP Monitor to Check USER-COUNT from Ivanti Node – Regex Issues

Hi everyone, I'm trying to configure an HTTP health monitor on an F5 LTM to check a value returned by an external Ivanti (Pulse Secure) node. The goal is to parse the value of the USER-COUNT field from the HTML response and ensure it's at or below 3000 users (based on our license limit). If the value exceeds that threshold, the monitor should mark the node as DOWN. The Ivanti node returns a page that looks like this:

<!DOCTYPE html ... >
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-US" xml:lang="en-US">
<head>
<title>Cluster HealthCheck</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
</head>
<body>
<h1>Health check details:</h1>
CPU-UTILIZATION=1;
<br>SWAP-UTILIZATION=0;
<br>DISK-UTILIZATION=24;
<br>SSL-CONNECTION-COUNT=1;
<br>PLATFORM-LIMIT=25000;
<br>MAXIMUM-LICENSED-USER-COUNT=0;
<br>USER-COUNT=200;
<br>MAX-LICENSED-USERS-REACHED=NO;
<br>CLUSTER-NAME=CARU-LAB;
<br>VPN-TUNNEL-COUNT=0;
<br>
</body>
</html>

I'm trying to match the USER-COUNT value using the recv string in the monitor, like this:

recv "USER-COUNT=([0-9]{1,3}|[1-2][0-9]{3}|3000);"

I've also tried many other patterns. The issue is that even when the page returns USER-COUNT=5000;, the monitor still reports the node as UP, when it should be DOWN. The regex seems to match incorrectly. What I need: a working recv regex that matches USER-COUNT values from 0 to 3000 (inclusive) but fails if the value exceeds that limit. Has anyone successfully implemented this kind of monitor with a numeric threshold check using recv? Is there a reliable pattern that avoids partial matches within larger numbers? Thanks in advance for any insight or working example.
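The likely culprit is visible in the sample response itself: the page also contains MAXIMUM-LICENSED-USER-COUNT=0;, and the unanchored pattern happily matches the substring USER-COUNT=0; inside that line, so the monitor passes no matter what the real USER-COUNT is. A sketch of a fix (worth verifying against your TMOS regex flavor) is to anchor on the literal <br> that precedes the real field, which the MAXIMUM-LICENSED line cannot satisfy:

recv "<br>USER-COUNT=([0-9]{1,3}|[12][0-9]{3}|3000);"

This still accepts 0 to 3000 inclusive (one to three digits, 1000-2999, or exactly 3000) and fails above that, including longer numbers such as 30000, because the trailing ; must immediately follow the matched digits.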
using '--resolve' in the pool monitor health check

Hello, I am checking whether it's possible to add the '--resolve' option to the health check monitor and avoid using a custom monitor (which consumes too much memory). For example:

curl -kvs https://some_site_in_the_internet.com/ready --resolve some_site_in_the_internet.com:443:196.196.12.12

I know you can use

curl -kvs https://196.196.12.12/ready --header "host: some_site_in_the_internet.com"

but the path to the servers has some TLS requirements, so that approach does not work. Any ideas are welcome. Thanks
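What --resolve adds over the Host-header form is that curl also presents the hostname during the TLS handshake (SNI), which is usually the "TLS requirement" that breaks the plain-IP approach. A built-in monitor already connects to the member IP directly, so the missing piece is just the SNI. A sketch, assuming a recent TMOS (roughly v13.1+) where the https monitor can reference a server-ssl profile; object names are illustrative:

tmsh create ltm profile server-ssl monitor_sni_ssl server-name some_site_in_the_internet.com
tmsh create ltm monitor https ready_check defaults-from https ssl-profile monitor_sni_ssl send "GET /ready HTTP/1.1\r\nHost: some_site_in_the_internet.com\r\nConnection: close\r\n\r\n" recv "200"

The server-name on the profile supplies the SNI, the Host header covers the HTTP side, and no external script (or its memory overhead) is needed.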
Big-IP sending Health Check to not-used Node-IP

Hello everyone, my customer recently noticed, while checking traffic on his firewall, that health checks are sent from the BIG-IP's internal self IP to an IP that fits into the address range of the nodes in use on the F5. This node IP is not known to the customer, and by searching the node table and looking in /var/log/ltm we were unable to find this IP address. So either this node was used a while ago and the node object was deleted, or the BIG-IP tries talking to this IP via 443 for some other reason. Pings and curls sent from the BIG-IP fail. Has anyone noticed something like this before? Or is there another way to see where health checks are sent? Thanks and regards
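Two quick checks that can narrow this down, using a placeholder address and VLAN name:

# Capture the suspicious probes on the wire to confirm they really are monitor traffic:
tcpdump -nni internal_vlan host 10.1.2.3 and port 443

# Search the full saved configuration (all partitions) for any object that
# still references the address: monitors, GTM/DNS objects, iCall scripts, etc.
grep -r "10.1.2.3" /config/

If the grep comes back empty, the capture output (source VLAN, and whether the probe cadence matches a monitor interval) is the more authoritative clue to which subsystem is generating the traffic.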
Standby Has Fewer Online VIPs Than Active – Requires Manual Monitor Reset

Hello F5 community, I'll preface this by saying that networking has been verified as fully routable between the Active and Standby units. Both devices can ping and SSH to each other's Self IPs, and rebooting the Standby did not resolve the issue.

Issue: Discrepancy in Online VIPs Between Active and Standby. Despite being In-Sync, the Active and Standby units show a different number of Online VIPs. If I randomly select one or two VIPs that should be online, remove their monitors, and then re-add them: BOOM, the VIP comes online. The VIPs in question were both HTTPS (443).

Side Note: Frequent TCP Monitor Failures. In my environment, I also frequently see generic 'TCP' monitors failing, leading to outages. While I understand that TCP monitoring alone isn't ideal, my hands are tied, as all changes must go through upper management for approval.

Has anyone encountered a similar issue where VIPs don't come online until the monitor is manually reset? Any insights into potential root causes or troubleshooting steps would be greatly appreciated! Thanks in advance.
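Since monitors probe independently from each unit (config sync replicates the configuration, not the monitor results), comparing live status on both boxes is a reasonable first step. A sketch with placeholder pool and monitor names:

# Run on both Active and Standby and diff the results:
tmsh show ltm pool my_pool members field-fmt | grep "availability-state"

# Inspect the individual monitor instances for members stuck in "checking":
tmsh show ltm monitor tcp my_tcp_monitor

If the Standby's probes fail while the Active's succeed, the usual suspects are return-path routing for traffic sourced from the Standby's non-floating self IPs, or a firewall rule that only permits the Active unit's addresses.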
BIG-IP DNS: Check Status Of Multiple Monitors Against Pool Member

Good day, everyone! Within the LTM platform, if a pool is configured with "Min 1 of" and multiple monitors, you can check the status per monitor via tmsh show ltm monitor <name>, or you can click the pool member in the TMUI and it will show you the status of each monitor for that member. I cannot seem to locate a similar function on the GTM/BIG-IP DNS platform. We'd typically use this methodology when transitioning to a new type of monitor, so we can passively test connectivity, without the potential for impact, prior to removing the previous monitor. Does anyone have a way, through tmsh or the TMUI, to check an individual pool member's status against the multiple monitors configured for its pool? Thanks, all!
health monitor source IP address

Hi there, has somebody ever tried to change the source IP address for the LTM health monitor? To work around a specific design in the network, I do not want to use the egress interface's local self IP address, which is used by default. Regards, Danphil
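Built-in monitors source from the self IP of the egress VLAN, and I am not aware of a monitor-level knob to override that, so controlling the source address generally means an external (EAV) monitor that binds explicitly. A minimal sketch; the source address, URL scheme, and script path are assumptions:

#!/bin/bash
# Hypothetical EAV script: probe the member from a chosen local source address.
# BIG-IP passes the node IP as $1 (possibly ::ffff:-prefixed) and the port as $2.

SRC_IP="10.0.0.99"                    # assumption: local address to source probes from
IP=$(echo "$1" | sed 's/::ffff://')   # strip the IPv6-mapped prefix
PORT="$2"

# --interface accepts an IP address; -k skips certificate validation,
# --max-time keeps a hung server from stalling the monitor
if curl -ks --interface "$SRC_IP" --max-time 5 -o /dev/null "https://${IP}:${PORT}/"; then
    echo "up"    # any stdout output marks the member UP
fi
exit 0           # silence marks it DOWN

The trade-off is the usual one with EAVs: each probe forks a shell, so this costs more than a built-in monitor at scale.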
prober pool Round Robin with multi health monitors and with multi prober pool members

I have a question about GTM monitors and prober pools. In my case, I have three datacenters and three GTMs (one in each DC), plus one prober pool that includes all three GTMs and is set to Round Robin. There are two virtual servers, vs1 and vs2, in different DCs, and each is configured with two health monitors, each monitor with a different probe interval (e.g. vs1's monitors have intervals of 5s and 7s, vs2's monitors have intervals of 9s and 11s). So my question is: how does the prober pool Round Robin work in this setup? Looking forward to your help, thank you.
SNMP DCA based node monitor

Hi, I am trying to implement SNMP-based monitoring on a node, and I am getting the current CPU utilization. When it reaches the threshold, the node status does not go into the down state. Did I miss anything in the configuration? Example: CPU threshold 5%, SNMP poll result is 24%, yet the node is still not going down.

SNMP output:

snmpwalk -c public -v 2c 1xx.xx4.1x.1xx .1.3.6.1.2.1.25.3.3.1.2
HOST-RESOURCES-MIB::hrProcessorLoad.2 = INTEGER: 12
HOST-RESOURCES-MIB::hrProcessorLoad.3 = INTEGER: 25
HOST-RESOURCES-MIB::hrProcessorLoad.4 = INTEGER: 7
HOST-RESOURCES-MIB::hrProcessorLoad.5 = INTEGER: 27
HOST-RESOURCES-MIB::hrProcessorLoad.6 = INTEGER: 8
HOST-RESOURCES-MIB::hrProcessorLoad.7 = INTEGER: 6

Any help or a point in the right direction would be wonderful! Thanks!
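One thing that would explain this: the snmp_dca monitor is designed to feed dynamic-ratio load balancing, i.e. it turns the polled CPU/memory/disk values into a score that shifts traffic away from busy members, rather than enforcing a hard up/down threshold. If you need the member actually marked DOWN above a CPU limit, an external monitor wrapping snmpget is a common workaround. A sketch; the OID index, community string, and threshold are assumptions:

#!/bin/bash
# Hypothetical EAV script: mark the member DOWN when CPU load exceeds a threshold.
# $1 = node IP from BIG-IP (possibly ::ffff:-prefixed).

THRESHOLD=5                         # percent CPU above which the member goes DOWN
COMMUNITY="public"
OID=".1.3.6.1.2.1.25.3.3.1.2.2"     # hrProcessorLoad for one CPU core (assumption)

IP=$(echo "$1" | sed 's/::ffff://')
LOAD=$(snmpget -v2c -c "$COMMUNITY" -Oqv "$IP" "$OID" 2>/dev/null)

# Stay silent (= DOWN) if SNMP fails or the load is above the threshold
if [ -n "$LOAD" ] && [ "$LOAD" -le "$THRESHOLD" ]; then
    echo "up"    # stdout output marks the member UP
fi
exit 0

To average across cores you could snmpwalk the whole hrProcessorLoad table and divide, but the single-OID form keeps the probe cheap.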