Forum Discussion
Monitoring ephemeral pool members & nodes
Last night I deployed a config that used autopopulate on an FQDN node. I came back to it a few hours later and found the following...
1) An unrelated change caused all network connectivity to fail, so every pool member went down, including the members of this new pool.
2) An hour later the change was rolled back (by loading a UCS) and connectivity came back up. At THIS point, the ephemeral pool member (only one A record was coming back anyway) was deleted.
In the logs I see bigd restarts etc., three messages saying this ephemeral pool member "was not found", and one saying the parent node wasn't found either. What does that actually mean??
3) From that point on, the FQDN was never re-resolved. The refresh interval was 300 seconds and the down interval 5 seconds, yet 5 hours elapsed with that pool completely empty, despite the network being fine again.
4) To nudge the code, I changed the refresh interval on the node from 300 to 301, and the pool immediately repopulated with the same member as before (see the sketch after this list).
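For reference, a minimal tmsh sketch of this kind of setup and the interval nudge; the object names and hostname are placeholders, not the real config:

```
# (tmsh, run from the tmsh shell; object names and hostname are placeholders)

# FQDN node with autopopulate enabled and the intervals mentioned above
create ltm node app_fqdn_node fqdn { name app.example.com autopopulate enabled interval 300 down-interval 5 }

# Pool whose member references the FQDN node; BIG-IP creates an ephemeral
# node/pool member for each A record the FQDN resolves to
create ltm pool app_pool members add { app_fqdn_node:80 { fqdn { autopopulate enabled } } }

# Inspect which ephemeral members currently exist
show ltm pool app_pool members

# The "nudge" from point 4: touching the refresh interval forced a re-resolution
modify ltm node app_fqdn_node fqdn { interval 301 }
```

With autopopulate enabled, each A record returned for the FQDN becomes an ephemeral node/pool member, which is why the pool ended up completely empty once that single ephemeral member was deleted.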
So the node was clearly deleted by some piece of logic, but only once network connectivity was restored, and once deleted it was never re-created... Can this behaviour be explained?
I've since recreated the node and pool with autopopulate disabled in case that makes a difference, but I can't see how the observed behaviour could be by design, so I'm not convinced the setting is even relevant in the first place.
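Roughly what the recreated objects look like with autopopulate disabled (same placeholder names as above); as I understand it, this tracks only a single resolved address rather than one ephemeral member per A record:

```
# Recreate with autopopulate disabled (placeholder names as above)
delete ltm pool app_pool
delete ltm node app_fqdn_node
create ltm node app_fqdn_node fqdn { name app.example.com autopopulate disabled interval 300 down-interval 5 }
create ltm pool app_pool members add { app_fqdn_node:80 { fqdn { autopopulate disabled } } }
```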