Forum Discussion
health monitor issue
Hi all
We changed the health monitor option on a per-node basis, from the default to None, for node 10.10.7.100. A few minutes later, the active unit reported that a completely different node (different VLAN, IP, and trunk) had a monitor status of down:
Wed Jul 25 07:29:27 CEST 2015 NODE_ADDRESS modified: name="/Common/10.10.100.7" new_session_enable=2 monitor_rule="/Common/none" update_status=1
Jul 25 07:33:30 slot1/xxx notice mcpd[9264]: 01070640:5: Node /Common/10.10.7.8 address 10.10.7.8 monitor status down. [ /Common/icmp: down ] [ was up for 1250hrs:40mins:20sec ]
Jul 25 07:33:30 slot1/xxx notice mcpd[9264]: 01070638:5: Pool /Common/ra_pool member /Common/10.10.7.8:1813 monitor status node down. [ /Common/udp: up ] [ was up for 1250hrs:40mins:20sec ]
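For reference, the change was made along these lines (a sketch only, assuming the tmsh CLI; it may equally have been done in the GUI, and the node name is taken from my description above):

# hypothetical sketch: set the node's health monitor from the default to none
tmsh modify ltm node /Common/10.10.7.100 monitor none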
Do you have any idea what may have caused this?
Thanks!
2 Replies
- Why do you think they are related? Couldn't it just have been a fluke?
- Ed_Summers
Nimbostratus
I'm not offering much more than an echo of Patrik's thoughts. The provided log indicates the node was marked down by a failed ICMP monitor. Your implementation may differ, but the default timeout for the ICMP monitor is 16 seconds. Four minutes would be unusually long for an ICMP monitor timeout, though verifying the monitor settings would confirm it. Unless you have evidence or reason to suspect the two events are linked, I'd agree the down event is unrelated to the monitor change you made.
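A minimal way to verify, assuming tmsh access (the built-in icmp monitor is shown here; your node may reference a custom monitor with different values):

# which monitor is assigned to the node that went down
tmsh list ltm node /Common/10.10.7.8 monitor
# interval/timeout of the built-in icmp monitor (defaults: interval 5, timeout 16)
tmsh list ltm monitor icmp icmp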