
Forum Discussion

Ethereal_96320
Aug 19, 2013

iRule to disable a node when a health check fails

Dear fellows

 

I have an F5 3900 LTM with 2 nodes (running one instance of the server on each). Basically each instance contains, let's say, 2 processes: one is the login page and the other is the JVM environment for each connected user. Sometimes the login page is unavailable for a few seconds (I still need to fix this) or crashes, so the health check fails. In the meantime, the user sessions related to the JVM are still up and running.

 

My idea was to find a way to disable the node, routing new sessions to the working node (with the login page available) while keeping the existing sessions running on the "failed" node until the login page becomes available. Currently the health check just declares the node down, cutting all the existing sessions.

 

An iRule would probably fix this behavior, but I have no idea how to write it. That's why I'm here, to get any suggestions or help.

 

Thanks

 

Vincent

 

6 Replies

  • Could you not assign both (JVM and login) health monitors to each pool? That way, if the login monitor failed it would also take out the JVM pool. If these are listening on different ports, you could use the Alias settings in the monitor.

     

  • Hamish

    iRules can't affect the pool member state, but you can select a pool member from an iRule (or even dump the existing one and select a new one).
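
    As a rough sketch of that idea (untested, and the pool names main_pool and backup_pool plus the member address are just placeholders), selecting a member from an iRule could look something like this:

      when HTTP_REQUEST {
          # use the normal pool while it still has healthy members
          if { [active_members main_pool] > 0 } {
              pool main_pool
          } else {
              # otherwise pin this request to a specific member of a backup pool
              pool backup_pool member 10.0.0.12 8080
          }
      }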

     

    You could also use an external monitor... Or even a completely external application that does the checking and then uses iControl to affect the pools (e.g. to set different states like down/forced down on the pool members).

     

    (From iControl you can select Down, which affects new sessions only, or Forced Down, which would swing both new and existing sessions.)

     

    H

     

  • iaine, I am using only a customized HTTPS health check, but once the user gets logged in, other processes start and keep the session up. The login page and JVM are on the same server and are seen as a single big process (only one health check). Basically the user logs in and then a specific interface (Java + some Oracle connectors) does the job. That UI remains up and running even if the login page fails. That's why I was looking to disable the node instead of marking it down.

     

  • Hamish, so you suggest using an iRule to select the pool member, right? If the health check (via the iRule) fails, then the working pool member will be selected, keeping the current sessions up. Right?

     

    iControl seems a little complicated for me; also, all of this happens on a production environment.

     

  • Okay, completely untested in the real world, but perhaps something like this:

    1. Create two identical pools (same members) and apply your monitor(s) to only one of them - the "status" pool.

    2. Apply a generic cookie profile to your VIP and this iRule:

      when RULE_INIT {
          # user-defined: debug enable/disable
          set static::lb_debug 1
      
          # user-defined: application pool name
          set static::app_pool "lb_test_pool"
      
          # user-defined: status pool name
          set static::status_pool "lb_test_status_pool"
      }
      when LB_SELECTED {
          if { $static::lb_debug } { log local0. "Selected: [LB::server]" }
          if { [LB::status pool $static::status_pool member [LB::server addr] [LB::server port]] eq "down" } {
              if { $static::lb_debug } { log local0. "Status pool node down - reselecting" }
              LB::reselect pool $static::status_pool
          }
      }
      when HTTP_RESPONSE {
          if { [HTTP::cookie exists "BIGipServer$static::status_pool"] } {
              if { $static::lb_debug } { log local0. "Rewriting persistence cookie" }
              set persistval [HTTP::cookie value "BIGipServer$static::status_pool"]
              HTTP::cookie remove "BIGipServer$static::status_pool"
              HTTP::cookie insert name "BIGipServer$static::app_pool" value $persistval
          }
      }
      

    Change the name of the static::app_pool and static::status_pool variables to the names of your application and status pools, respectively.

    So you have two pools: one applied to the VIP and another exactly identical pool with your health monitor(s) applied. On LB_SELECTED, if a) there's no existing persistence and b) the chosen member from the primary pool is marked "down" in the status pool, reselect a new (known good) node from the status pool. LTM will send a persistence cookie for the status pool and node, which you'll rewrite to re-attach persistence to the app pool.

    Also note that this iRule does not account for complete mid-session node failures, but it may roll over gracefully.
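
    If mid-session member failures need to be covered too, one possible extension (equally untested, reusing the static variables from the iRule above, and in practice you would want to cap how often it retries) is an LB_FAILED handler that reselects from the status pool:

      when LB_FAILED {
          # if the connection to the chosen member fails, try another
          # member from the monitored status pool
          if { $static::lb_debug } { log local0. "LB_FAILED for [LB::server] - reselecting" }
          LB::reselect pool $static::status_pool
      }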