Forum Discussion

Ash_Lewis
Sep 05, 2022

Differences between LTM and GTM External SNMP monitor results

Hi all,

I've got a weird issue with an external monitor when running on an LTM vs. a GTM. All I am doing is running the basic SNMP get script from the links below and adjusting the SNMP variables to match what I want (in this instance, the HA status of a Panorama cluster):

https://community.f5.com/t5/crowdsrc/snmp-check-external-monitor/ta-p/285358

https://community.f5.com/t5/technical-articles/monitoring-apm-session-limit-availability-from-gtm/ta-p/279371
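For context, the core of such a script is just an snmpget followed by a string comparison against the expected value. This is a minimal sketch (not the exact snmp_check script), assuming the user-defined variables (OID, community, result) reach the script as environment variables; the snmpget call is commented out and `"active"` below is a stand-in for the agent's reply:

```shell
#!/bin/sh
# Sketch of the comparison step in an external SNMP monitor.
# In a real run the reply would come from the agent, e.g.:
# answer=$(snmpget -v 2c -c "$community" -Oqv "$host:161" "$OID")
answer='"active"'            # example reply, quoted as the agent returns it
result='"active"'            # expected value, as passed in via user-defined result

# Any output on stdout marks the member up; silence marks it down
if [ -n "$answer" ] && [ "$answer" = "$result" ]; then
    echo up
fi
```

The monitor only cares whether the script prints anything, so the whole question comes down to whether `$answer` and `$result` are byte-for-byte identical.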

From the LTM:

ltm monitor external ltm_external_monitor_snmp {
    defaults-from external
    destination *:*
    interval 10
    run /Common/snmp_check
    time-until-up 0
    timeout 31
    user-defined OID .1.3.6.1.4.1.25461.2.1.2.1.11.0
    user-defined community XXXXXX
    user-defined result "\"active\""
}

If I log the results I get the following:

IP= x.x.x.x Port =161 OID= .1.3.6.1.4.1.25461.2.1.2.1.11.0 comm= XXXXXX result= "active"
Answer= "active"


From the GTM:

gtm monitor gtm_external_monitor_snmp {
    defaults-from external
    destination *.snmp
    interval 10
    probe-timeout 5
    run /Common/snmp_check
    timeout 31
    user-defined OID .1.3.6.1.4.1.25461.2.1.2.1.11.0
    user-defined community XXXXXX
    user-defined result "\"active\""
}

If I log the results I get the following:

IP= x.x.x.x Port =161 OID= .1.3.6.1.4.1.25461.2.1.2.1.11.0 comm= XXXXXX result= \"active\"
Answer= "active"

The GTM does not seem to format the user-defined result the same way the LTM does, and I believe this is why the monitor fails on the GTM but not on the LTM.
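If the GTM really is passing the backslashes through literally, one possible workaround (a sketch only, not tested against a real GTM) is to normalize the variable inside the script before comparing, so the same script behaves identically on both modules:

```shell
# The two formats as logged by each module (hypothetical shell values)
ltm_result='"active"'       # LTM: backslashes consumed, quotes survive
gtm_result='\"active\"'     # GTM: literal backslashes left in place

# Strip literal backslashes before comparing
normalize() { printf '%s' "$1" | tr -d '\\'; }

[ "$(normalize "$ltm_result")" = "$(normalize "$gtm_result")" ] && echo match
```

That only papers over the symptom, though; the quoting itself still looks like a defect on the GTM side.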

Has anyone run into this before? Couldn't find much in the bug tracker.

The GUI on the GTM will not let me input "active" into the results field; it states the quotes need to be escaped with a backslash, so I assume it understands the logic but does not format the value correctly when running the monitor.

Both the LTM and the GTM are running 15.1.5.1.


Many thanks,

Ash

1 Reply

  • As an update I have also raised this with F5 support and they have been able to replicate the issue in their lab and are investigating further. I will update this when I know more in case anyone sees this in the future.