Forum Discussion
Inquiry on F5's Maintenance Mode Feature for Pool Members
- May 15, 2024
Both "disable" and "force offline" actions may answer the requirement as they both provide some level of smoothness
- When set to Disabled, a node or pool member continues to process persistent and active connections. It can accept new connections only if the connections belong to an existing persistence session.
- When set to Forced Offline, a node or pool member allows existing connections to time out, but no new connections are allowed.
So, for the first option, you disable the node and then wait long enough to be sure there are no active or persistent connections left on it. Typically, you would disable a node 30 minutes to a few hours before the maintenance window.
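As a rough sketch of that first option (the pool name app_pool and the member 1.1.1.1:80 are hypothetical placeholders), the disable step from tmsh might look like this:
tmsh modify ltm pool app_pool members modify { 1.1.1.1:80 { session user-disabled } }
tmsh show ltm pool app_pool members
The show command lets you watch the member's current connection count drop toward zero before the maintenance window actually starts.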
For the second, you set the node to forced offline and wait a few minutes until there are no active connections to it. Note that persistence is lost immediately: any client that is in the persistence table but not actively connected will be load balanced to the next member.
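A corresponding sketch of the second option, using the same hypothetical pool and member (forced offline amounts to marking the member down and disabling its session):
tmsh modify ltm pool app_pool members modify { 1.1.1.1:80 { state user-down session user-disabled } }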
So you're telling me that there is no other way than just to "force offline" a pool member and wait till all the old connections are no longer active? I was hoping there could be a way to move active connections to another pool member...
You can kill those connections from the CLI after you force it offline:
tmsh delete /sys connection ss-server-addr 1.1.1.1
where 1.1.1.1 is your pool member's address.
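If you want to see what would be dropped before running that delete, the matching show filter (again with 1.1.1.1 standing in for your member's address) is:
tmsh show sys connection ss-server-addr 1.1.1.1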
- tomma, May 22, 2024, Nimbostratus
Killing active connections would lead to interruptions in our service. I was talking about moving established connections to a second pool member, but I guess F5 does not have a feature like that.