Forum Discussion
HTTP Monitors and Pool Members State issue
I have 6 nodes (each with an ICMP node monitor) that are members of an HTTP pool. The pool's monitor uses a GET send string and expects a 200 OK receive string, and it shows green across all pool members.
However, the nodes randomly appear to enter a disabled state (black diamond icon) on their own, or at least that is how it looks. Once manually re-enabled, the pool members return to an active state and the virtual service continues as required, but after a random period they disable themselves again.
The app builders say no iControl automation is in place and this virtual service was built manually. Does anyone have any insight into why LTM would be disabling pool members on its own? It seems to affect only this service, which is 1 of 15 configured, and the same health monitor is in use on other services without failing.
A direct check of the nodes returns the correct response; it appears to be LTM itself disabling these pool members.
Any suggestions/advice are appreciated.
2 Replies
- nitass
Employee
Have you tried to turn on audit logging?
- smp_86112
Cirrostratus
Let's clarify this a little bit. First you claimed the Node status changes:
> randomly the nodes seem to enter disabled state
Then later you changed that to Pool Member status:
> why LTM is self disabling pool members
Which of these - Node or Pool Member - has their status changed to a black diamond? If you are looking at the status of the Pool Member, what is the Node status at the same time? What is reported in /var/log/ltm at these times?
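To answer those questions, it helps to capture the Pool Member status, the Node status, and the LTM log at the same moment, and (per nitass's suggestion) to enable audit logging so you can see whether a user, script, or iControl call issued the disable. As a rough sketch run on the BIG-IP itself (the pool name and node address below are placeholders, not from the original post):

```shell
# Pool Member status as LTM sees it (placeholder pool name)
tmsh show ltm pool my_http_pool members

# Node status at the same moment (placeholder node address)
tmsh show ltm node 10.0.0.10

# Monitor and state transitions around the time of the change
grep 10.0.0.10 /var/log/ltm

# Enable audit logging of configuration changes; entries then
# appear in /var/log/audit and will show who or what disabled
# the member
tmsh modify sys db config.auditing value enable
```

If the audit log shows a config change at the moment the black diamond appears, something is explicitly disabling the member; if only /var/log/ltm shows monitor transitions, the monitor itself is marking it down.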