25-Jan-2021 06:08
When doing releases, it is not unusual for an application server to be taken down, and when this occurs a health check failure will often be raised. The same happens if a host is decommissioned before it is removed from the load balancer. Is there a way to temporarily suspend or suppress a health check per node and/or per pool member, or to temporarily suppress the SNMP traps issued for a host or pool member?
26-Jan-2021 10:21 - last edited on 04-Jun-2023 21:05 by JimmyPackets
Hello Mahnsc.
You can stop the service during those maintenance works.
## Stop services
tmsh stop sys service snmpd
tmsh stop sys service alertd
## Check service status
tmsh show sys service snmpd
tmsh show sys service alertd
## Start services
tmsh start sys service snmpd
tmsh start sys service alertd
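The stop/check/start steps above can be wrapped in a single maintenance script. This is only a sketch: it assumes it runs on the BIG-IP itself with tmsh on the PATH, and the release steps go where indicated.

```shell
#!/bin/sh
# Hypothetical maintenance wrapper: silence alerting, do the release, restore it.

tmsh stop sys service snmpd    # stop the SNMP agent
tmsh stop sys service alertd   # stop the alerting daemon

# ... perform the release / maintenance work here ...

tmsh start sys service snmpd   # restore the SNMP agent
tmsh start sys service alertd  # restore the alerting daemon

# verify both services came back up
tmsh show sys service snmpd
tmsh show sys service alertd
```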
Regards,
Dario.
27-Jan-2021 01:48
BTW, under SNMP > Traps > Configuration there is an Agent Start / Stop setting, so you can disable those SNMP traps specifically.
REF - https://techdocs.f5.com/kb/en-us/products/big-ip-afm/manuals/product/dns-dos-firewall-implementations-12-1-0/7.html
Regards,
Dario.
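For those who prefer the CLI, the same toggle should be reachable via tmsh. This is a sketch, assuming the `agent-trap` property of `sys snmp` corresponds to the Agent Start / Stop setting in the GUI:

```shell
# Disable agent start/stop traps (assumed tmsh equivalent of the GUI setting)
tmsh modify sys snmp agent-trap disabled

# Re-enable after maintenance
tmsh modify sys snmp agent-trap enabled
```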
01-Feb-2021 15:34
Thanks! I apologize for not responding sooner. I was hoping there was a solution that could be applied more granularly to specific nodes and/or pools, but it doesn't appear so.
02-Feb-2021 11:40
Hello Mahnsc.
Sure, you can. Try it with this:
https://support.f5.com/csp/article/K53142338
If this helps, please mark the answer as 'the best' to help other people to find it.
Regards,
Dario.
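One granular approach is to administratively disable the affected pool member (or force the node offline) before the release, so the monitor state change is expected rather than alerted. A sketch; `my_pool` and `10.1.1.10:80` are hypothetical placeholders for your own pool and member:

```shell
# Disable a specific pool member before maintenance (placeholder names)
tmsh modify ltm pool my_pool members modify { 10.1.1.10:80 { session user-disabled } }

# Or force the whole node down
tmsh modify ltm node 10.1.1.10 state user-down

# Re-enable after the release
tmsh modify ltm pool my_pool members modify { 10.1.1.10:80 { session user-enabled } }
tmsh modify ltm node 10.1.1.10 state user-up
```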
10-May-2023 07:26
Doesn't this article only show how to disable traps for virtual servers? What about a specific pool, node, etc.? Is it the same syntax, and which SNMP OID applies in that case? It would be great if anyone could reply quickly, since I need to disable alerts for specific virtual servers, pools, and nodes.
Thank You
02-Feb-2021 13:22
Thanks a lot!