
Forum Discussion

Worker
Nimbostratus
Mar 31, 2026
Solved

Help with an iRule to disconnect active connections to Pool Members that are "offline"

In order to update an application, we put one node out of two offline in the pool. However, existing connections don't get redirected to the node that is still online; they get a 404 error. Is there ...
  • Jeff_Granieri's avatar
    Apr 01, 2026

    Hi Worker,

     

    I'd imagine you'd need frequent state checks for this node. There are probably ways to do this with tables (iRules - Table), but you would still need other event handlers to manage the checks and then kick off the session deletion. You could also potentially do this without an iRule: use a custom alert that watches for the log entry when the node goes offline, then fires off a command to delete /sys connections to that node. Just make sure the node is not used anywhere else; otherwise you'll have to get more specific with the node/port. Here is a rough example:

    This would be configured in your custom alert config (/config/user_alert.conf):

        alert Node_Failure "/Common/app-server1:0 monitor status down" {
            exec command="export REMOTEUSER=admin; /usr/bin/tmsh delete /sys connection ss-server-addr x.x.x.x"
        }
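
    As a rough way to exercise the alert in a lab (the pool and member names below are hypothetical placeholders; substitute your own, and keep x.x.x.x as your member's address), you could reload alertd so it picks up the config change, then force the member down and watch the connection table:

        # Reload alertd so it re-reads user_alert.conf
        bigstart restart alertd

        # Before: list connections pinned to the member's address
        tmsh show /sys connection ss-server-addr x.x.x.x

        # Force the member offline to trigger the monitor-down log line,
        # then re-check that the connections were deleted
        tmsh modify /ltm pool app_pool members modify { app-server1:80 { state user-down } }
        tmsh show /sys connection ss-server-addr x.x.x.x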

     

     

    If you want to go the iRule route, you will have to spend some time testing it out. I leverage the F5 BIG-IP iRules Assistant to help with this (F5 iRules Assistant):

     

    Here is a potential iRule:

    #--------------------------------------------------------------
    # iRule:   drain_offline_member
    # Purpose: Detect disabled/offline pool members and disconnect
    #          active sessions so clients reconnect to healthy nodes
    # Version: BIG-IP 17.x+
    # Profile: HTTP, TCP
    # Notes:   Handles keep-alive and persistence-pinned connections
    #          that survive member disable during app updates
    #--------------------------------------------------------------
    
    when RULE_INIT {
        # Toggle debug logging (0=off, 1=on) — disable in production
        set static::drain_debug 1
    }
    
    when LB_SELECTED {
        # Capture the member chosen by the LB decision
        # This fires on every new LB selection (first request or reselect)
        set member_addr [LB::server addr]
        set member_port [LB::server port]
        set member_pool [LB::server pool]
    
        if { $static::drain_debug } {
            log local0.debug "\[DRAIN\] LB selected $member_addr:$member_port in $member_pool"
        }
    }
    
    when HTTP_REQUEST {
        # On each HTTP request (including keep-alive subsequent requests),
        # verify the pinned member is still active
        if { [info exists member_addr] && [info exists member_pool] } {
    
        # LB::status reports the member's availability; anything other than
        # "up" (down, disabled, forced offline) is treated as draining
            set mbr_status [LB::status pool $member_pool member $member_addr $member_port]
    
            if { $mbr_status ne "up" } {
                if { $static::drain_debug } {
                    log local0.warn "\[DRAIN\] Member $member_addr:$member_port status=$mbr_status — dropping session from [IP::client_addr]"
                }
    
                # Option A: Graceful — send 503 so the client/load-balancer upstream retries
                HTTP::respond 503 content {<html><body><h1>Service Temporarily Unavailable</h1><p>Please retry.</p></body></html>} \
                    "Content-Type" "text/html" \
                    "Connection" "close" \
                    "Retry-After" "1"
    
                # Option B (alternative): Hard reset — uncomment if you want TCP RST instead
                # reject
    
                return
            }
        }
    }
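
    As a variation (a sketch only, untested), instead of returning a 503 you could drop the persistence record and have LTM reselect a healthy member transparently. The LB::detach, persist none, and LB::reselect commands support this pattern, though reselect behavior interacts with OneConnect and needs careful testing against your profiles:

        when HTTP_REQUEST {
            if { [info exists member_addr] && \
                 [LB::status pool $member_pool member $member_addr $member_port] ne "up" } {
                # Drop the server-side connection and the persistence entry,
                # then let the next LB decision pick a healthy member
                LB::detach
                persist none
                LB::reselect
            }
        }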

     

     

    Here is the iRules Assistant in action evaluating this iRule. Always test iRule code in a non-prod environment; this may need further tweaking as well.

     

    AI Assistant 03:55 pm

    1. Objective
      • Initializes a debug flag, records the pool member chosen by load balancing, and on each HTTP request verifies that this member is still “up.”
      • If the member’s status changes to anything other than “up,” it returns a 503 and closes the connection.
      • This forces clients (and any upstream load-balancer) to retry on a healthy node during maintenance or app upgrades.
    2. Execution Flow
      • RULE_INIT
      – Sets static::drain_debug (0=off, 1=on) when the iRule is loaded or the device restarts.
      • LB_SELECTED
      – Fires on each new load-balance decision.
      – Captures the selected server’s address, port, and pool in member_addr, member_port, member_pool.
      – If debug is enabled, logs these values at local0.debug.
      • HTTP_REQUEST
      – On every HTTP request (including keep-alive), checks that member_addr and member_pool exist.
      – Calls LB::status pool $member_pool member $member_addr $member_port to get the member’s status.
      – If status is not “up”:
      • Logs a warning with client IP and status.
      • Sends a 503 response with “Connection: close” and a “Retry-After” header.
      • Returns to terminate further processing.

    Overall summary
    This iRule cleanly drains sessions from disabled or offline pool members by intercepting HTTP requests and responding with a retry-prompting 503. It relies on per-connection variables set in LB_SELECTED and enforces the check in HTTP_REQUEST, ensuring clients reconnect to healthy nodes.
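
    When testing, the debug flag in the iRule writes to the LTM log (log local0.* lands in /var/log/ltm on BIG-IP), so you can watch the drain decisions in real time; set static::drain_debug to 0 before production:

        # Watch for the iRule's [DRAIN] log lines while testing
        tail -f /var/log/ltm | grep DRAIN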