
Forum Discussion

Worker
Mar 31, 2026
Solved

Help with an iRule to disconnect active connections to Pool Members that are "offline"

In order to update an application, we put one of the two nodes in the pool offline.  However, existing connections are not redirected to the node that is still online; instead, clients get a 404 error.

Is there an iRule that can detect that a node is offline, drain its connections, and redirect them to the node that is actually online?

I saw this article, but it does not work for us.

https://clouddocs.f5.com/api/irules/LB__status.html


I have also tried something like the code below.  I added some debug logging to show the status, but I never get a status other than "up" in the logs, even when I force the nodes offline.  I am hoping someone has done this before.

when LB_SELECTED {

    # Extract pool, IP, and port
    set poolname [LB::server pool]
    set ip       [LB::server addr]
    set port     [LB::server port]

    # Get member status
    set status [LB::status pool $poolname member $ip $port]

    log local0. "Selected member $ip:$port in pool $poolname has status $status"

    if { $status eq "down" } {
        log local0. "Member is DOWN (possibly forced down) – reselection triggered"
        LB::reselect
    }
}

  • Hi Worker,

     

    I'd imagine you need frequent state checks for this node.  There are probably ways to do this with tables (iRules - Table), but you would still need other event handlers to manage the checks and then kick off session deletion.  You could also potentially do this without an iRule: use a custom alert to watch for the log entry when the node goes offline, then fire off a command to delete the sys connections to that node.  Make sure the node is not used anywhere else; otherwise you have to get more specific with the node/port.  Here is a rough example:

    This would be configured in your custom alert configuration (/config/user_alert.conf):

        alert Node_Failure "/Common/app-server1:0 monitor status down" {
            exec command="export REMOTEUSER=admin; /usr/bin/tmsh delete /sys connection ss-server-addr x.x.x.x"
        }
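
    If the node serves multiple applications on different ports, you can scope the delete to a single service by adding the server-side port filter that tmsh supports (untested sketch; substitute your real member address and port):

        tmsh delete /sys connection ss-server-addr x.x.x.x ss-server-port 443

    This limits the purge to connections terminating on that one member port rather than everything destined for the node.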

     

     

    If you want to go the iRule route, you will have to spend some time testing this out.  I leverage the F5 iRules Assistant to help with this.

     

    Here is a potential iRule:

    #--------------------------------------------------------------
    # iRule:   drain_offline_member
    # Purpose: Detect disabled/offline pool members and disconnect
    #          active sessions so clients reconnect to healthy nodes
    # Version: BIG-IP 17.x+
    # Profile: HTTP, TCP
    # Notes:   Handles keep-alive and persistence-pinned connections
    #          that survive member disable during app updates
    #--------------------------------------------------------------
    
    when RULE_INIT {
        # Toggle debug logging (0=off, 1=on) — disable in production
        set static::drain_debug 1
    }
    
    when LB_SELECTED {
        # Capture the member chosen by the LB decision
        # This fires on every new LB selection (first request or reselect)
        set member_addr [LB::server addr]
        set member_port [LB::server port]
        set member_pool [LB::server pool]
    
        if { $static::drain_debug } {
            log local0.debug "\[DRAIN\] LB selected $member_addr:$member_port in $member_pool"
        }
    }
    
    when HTTP_REQUEST {
        # On each HTTP request (including keep-alive subsequent requests),
        # verify the pinned member is still active
        if { [info exists member_addr] && [info exists member_pool] } {
    
            # LB::status may return values such as "up", "down", or "session_disabled";
            # check the LB::status docs for your TMOS version
            set mbr_status [LB::status pool $member_pool member $member_addr $member_port]
    
            if { $mbr_status ne "up" } {
                if { $static::drain_debug } {
                    log local0.warn "\[DRAIN\] Member $member_addr:$member_port status=$mbr_status — dropping session from [IP::client_addr]"
                }
    
                # Option A: Graceful — send 503 so the client/load-balancer upstream retries
                HTTP::respond 503 content {<html><body><h1>Service Temporarily Unavailable</h1><p>Please retry.</p></body></html>} \
                    "Content-Type" "text/html" \
                    "Connection" "close" \
                    "Retry-After" "1"
    
                # Option B (alternative): Hard reset — uncomment if you want TCP RST instead
                # reject
    
                return
            }
        }
    }
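
    If you would rather re-route the pinned connection than return a 503, one untested variation is to detach the serverside flow when the member is no longer up, so the next request triggers a fresh load-balancing pick (sketch only; verify LB::detach behavior against your persistence and OneConnect settings):

        when HTTP_REQUEST {
            if { [info exists member_addr] && [info exists member_pool] } {
                set mbr_status [LB::status pool $member_pool member $member_addr $member_port]
                if { $mbr_status ne "up" } {
                    # Drop the pinned serverside connection; the next pool
                    # selection should pick a healthy member
                    LB::detach
                }
            }
        }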

     

     

    Here is the iRule Assistant in action evaluating this iRule.  Always test iRule code in a non-prod environment; this may need further tweaking as well.

     

    AI Assistant 03:55 pm

    1. Objective
      • Initializes a debug flag, records the pool member chosen by load balancing, and on each HTTP request verifies that this member is still “up.”
      • If the member’s status changes to anything other than “up,” it returns a 503 and closes the connection.
      • This forces clients (and any upstream load-balancer) to retry on a healthy node during maintenance or app upgrades.
    2. Execution Flow
      • RULE_INIT
      – Sets static::drain_debug (0=off, 1=on) when the iRule is loaded or the device restarts.
      • LB_SELECTED
      – Fires on each new load-balance decision.
      – Captures the selected server’s address, port, and pool in member_addr, member_port, member_pool.
      – If debug is enabled, logs these values at local0.debug.
      • HTTP_REQUEST
      – On every HTTP request (including keep-alive), checks that member_addr and member_pool exist.
      – Calls LB::status pool $member_pool member $member_addr $member_port to get the member’s status.
      – If status is not “up”:
      • Logs a warning with client IP and status.
      • Sends a 503 response with “Connection: close” and a “Retry-After” header.
      • Returns to terminate further processing.

    Overall summary
    This iRule cleanly drains sessions from disabled or offline pool members by intercepting HTTP requests and responding with a retry-prompting 503. It relies on per-connection variables set in LB_SELECTED and enforces the check in HTTP_REQUEST, ensuring clients reconnect to healthy nodes.

4 Replies

  • Worker

    The reject works better than the 503; the request goes to the other node instead of returning a 503 to the client.

  • Worker

    Preliminary testing shows the iRule will work for what we need, so thank you.

    Is there a way to send the traffic to the other node instead of returning a 503?

    I assume the F5 iRules Assistant is a paid service?

  • Worker

    I don't think the custom alert will work, as it doesn't seem to let you specify a particular pool and node.  I believe the result would be disconnecting users connected to other apps in a different pool but on the same node.

    We have two nodes with many applications installed.  Each application has its own pool, using the same two nodes but different HTTP monitors to detect whether the application is up or down.

    Example of part of the iRule:

    when HTTP_REQUEST {
        if { [HTTP::uri] starts_with "/helloworld1" } {
            pool helloworld1
        } elseif { [HTTP::uri] starts_with "/helloworld2" } {
            pool helloworld2
        } elseif { [HTTP::uri] starts_with "/helloworld3" } {
            pool helloworld3
    ...

    Each pool has the same two nodes on the same port, 443.  They just have different monitors that check a URL to detect whether that particular app is down.
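
    For reference, a per-app monitor of that kind could be created in tmsh roughly like this (hypothetical monitor name and health path; adjust the send/recv strings to each app's endpoint):

        tmsh create ltm monitor http mon_helloworld1 send "GET /helloworld1/health HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n" recv "200 OK"

    Attaching a monitor like this to each app's pool is what lets the same two nodes be marked down in one pool while staying up in the others.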

    I will play around with the iRule example you provided.
