Forum Discussion
- Kevin_Stewart (Employee)
Just consider that while the user_alert option is viable, it adds complexity that may not be necessary if you only need to remove access to a handful of servers during a defined time period. user_alert will actually disable the pool members in the configuration, while LB::down will simply remove them from the load balancing decision process.
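For context, here's a rough sketch of what the user_alert approach can look like; this is not a drop-in config. The alert name, trigger string, and script path below are all hypothetical, and the pool/member names are borrowed from the example later in this thread. An entry in /config/user_alert.conf matches a string written to the log and runs a script, which in turn disables the member via tmsh:

```
# /config/user_alert.conf -- hypothetical entry; fires when the quoted
# string appears in the log (e.g. written by a scheduled "logger" call)
alert DISABLE_FOO_MEMBER "start foo maintenance window" {
    exec command="/config/disable_foo_member.sh"
}
```

```
#!/bin/bash
# /config/disable_foo_member.sh -- hypothetical helper script.
# Marks the member user-disabled; a mirror-image alert/script using
# "session user-enabled" would restore it at the end of the window.
tmsh modify ltm pool foo members modify { 200.200.200.101:80 { session user-disabled } }
```

You would then need something (a cron job, for example) to write the matching log string at the start and end of the window, which is exactly the kind of extra moving part the comment above is warning about.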
- nitass (Employee)
Just in case you have not seen this:
Note: Calling LB::down in an iRule triggers an immediate monitor probe regardless of the monitor interval settings.
LB::down wiki: https://devcentral.f5.com/wiki/irules.LB__down.ashx

e.g.
```
root@(ve11a)(cfg-sync Changes Pending)(Active)(/Common)(tmos) list ltm rule myrule
ltm rule myrule {
    when HTTP_REQUEST {
        log local0. "LB::down pool foo member 200.200.200.101 80"
        LB::down pool foo member 200.200.200.101 80
    }
}
root@(ve11a)(cfg-sync Changes Pending)(Active)(/Common)(tmos) list ltm pool foo
ltm pool foo {
    members {
        200.200.200.101:80 {
            address 200.200.200.101
            session monitor-enabled
            state up
        }
    }
    monitor myhttp
}
[root@ve11a:Active:Changes Pending] config tail -f /var/log/ltm
Sep 8 16:14:48 ve11a info tmm[16464]: Rule /Common/myrule <HTTP_REQUEST>: LB::down pool foo member 200.200.200.101 80
Sep 8 16:14:48 ve11a err tmm[16464]: 01010028:3: No members available for pool /Common/foo
Sep 8 16:14:48 ve11a notice mcpd[13124]: 01070638:5: Pool /Common/foo member /Common/200.200.200.101:80 monitor status iRule down. [ /Common/myhttp: up ] [ was up for 0hr:1min:56sec ]
Sep 8 16:14:48 ve11a err tmm1[16464]: 01010028:3: No members available for pool /Common/foo
Sep 8 16:14:49 ve11a notice mcpd[13124]: 01070727:5: Pool /Common/foo member /Common/200.200.200.101:80 monitor status up. [ /Common/myhttp: up ] [ was iRule down for 0hr:0min:1sec ]
Sep 8 16:14:49 ve11a err tmm[16464]: 01010221:3: Pool /Common/foo now has available members
Sep 8 16:14:49 ve11a err tmm1[16464]: 01010221:3: Pool /Common/foo now has available members
```
- Kevin_Stewart (Employee)
> Note: Calling LB::down in an iRule triggers an immediate monitor probe regardless of the monitor interval settings.
Fair enough. So then you have a few more options:
- Ignore the log statements
- Turn down logging
- Add another log statement just before LB::down to indicate that the next statements are by design
- Use the user_alert method
- Send users to a different pool during the down period (same members minus the ones you want to remove). Here's an iRule example of what that might look like:
```
when RULE_INIT {
    # user-defined: start of DOWN time
    set static::START_OFF_TIME "06:33 AM"
    # user-defined: end of DOWN time
    set static::END_OFF_TIME "10:00 AM"
    # user-defined: DOWN pool
    set static::DOWN_POOL "local-pool-minus2"
}
when CLIENT_ACCEPTED {
    # get pool assigned to the VIP (and strip off partition)
    set default_pool [string map {"/Common/" ""} [LB::server pool]]
    # recreate the persistence cookie name
    set persistcookie "BIGipServer${default_pool}"
}
when HTTP_REQUEST {
    set start_off_time [clock scan $static::START_OFF_TIME]
    set end_off_time [clock scan $static::END_OFF_TIME]
    set now [clock seconds]
    # if the persistence cookie doesn't exist, and we're inside the DOWN window - send to the DOWN pool
    if { not ( [HTTP::cookie exists $persistcookie] ) and ( $now > $start_off_time ) and ( $now < $end_off_time ) } {
        pool $static::DOWN_POOL
    }
}
```
First, set a default cookie persistence profile on the VIP. If the request is made by a new user (with no existing persistence cookie) during the down period, send them to the designated DOWN pool. If a user has an existing session (has a persistence cookie for the default pool), continue to send them to the default pool. This will allow you to remove the servers without disrupting existing sessions.
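For completeness, here's a hedged sketch of the supporting tmsh configuration this approach assumes; the virtual server name my_vs and the iRule name down_window_rule are hypothetical placeholders, not names from this thread:

```
# Hypothetical names: virtual server "my_vs", iRule "down_window_rule" (the iRule above)
# Attach a default cookie persistence profile -- the iRule's $persistcookie
# check depends on the standard BIGipServer<pool> cookie being set
tmsh modify ltm virtual my_vs persist replace-all-with { cookie { default yes } }
# Attach the iRule to the virtual server
tmsh modify ltm virtual my_vs rules { down_window_rule }
```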