
Tony_Jarvis_132
Jun 23, 2014

Modify priority of pool members using iRules

Hi all,

I have an application with the following specific requirements:

  • Two nodes in the pool.
  • Send all requests to only one node during normal operation (hence using priority group activation; a sketch of this follows the list).
  • If the favoured node fails, send all requests to the second node in the pool.
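
For context, the pool is set up roughly along these lines (addresses and names are placeholders), with the favoured node in the higher priority group and min-active-members 1 so that priority group activation kicks in:

    ltm pool app_pool {
        min-active-members 1
        members {
            10.0.0.1:80 {
                priority-group 10
            }
            10.0.0.2:80 {
                priority-group 5
            }
        }
        monitor http
    }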

I understand there are implications here around availability, capacity, etc., but the application team is aware of this and specifically wants this setup. If the favoured node comes back online in the above example, the application team wants the F5 to continue sending all requests to the second node to avoid outages.

My issue with the above is this:

  • If the favoured node comes back online, it will be ignored and never used.
  • Once this happens, if the second node then fails (after it has become the sole node in use), traffic will not automatically be redirected back to the favoured node, and no node will be selected.

I was thinking there may be some way to achieve the automatic redirection back to the favoured node using something like "Action on service up" (analogous to Action On Service Down), but sadly no such event is available. Alternatively, it might be possible by programmatically modifying the pool member priority values, but I'm not too sure how this might be implemented.
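
For example, I imagine something along these lines from tmsh could flip the priorities (pool and member names are placeholders matching the sketch above):

    tmsh modify ltm pool app_pool members modify { 10.0.0.1:80 { priority-group 1 } 10.0.0.2:80 { priority-group 10 } }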


Is there any way I can achieve the above functionality automatically on the F5?


5 Replies

    • Tony_Jarvis_132
      That's a really interesting link and something I was unaware of, so thanks for posting! After reading through the entirety of the article, it seems this may not be suitable for situations requiring prioritisation of nodes within the pool though? This is the requirement we are facing, so we do need some way to cater for this. Are there any options with this in mind?
  • "it seems this may not be suitable for situations requiring prioritisation of nodes within the pool though?"

    Is LB::status useful here (i.e. manually selecting the pool member for the first request)?

    e.g.

     config
    
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm virtual bar
    ltm virtual bar {
        destination 172.28.24.10:80
        ip-protocol tcp
        mask 255.255.255.255
        pool foo
        profiles {
            http { }
            tcp { }
        }
        rules {
            qux
        }
        source 0.0.0.0/0
        source-address-translation {
            type automap
        }
        vs-index 41
    }
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm pool foo
    ltm pool foo {
        members {
            200.200.200.101:80 {
                address 200.200.200.101
                session monitor-enabled
                state up
            }
            200.200.200.111:80 {
                address 200.200.200.111
                session monitor-enabled
                state up
            }
        }
        monitor gateway_icmp
    }
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm rule qux
    ltm rule qux {
        when CLIENT_ACCEPTED {
            # reuse the persistence record if one already exists
            if { [persist lookup uie 1] ne "" } {
                persist uie 1
            } else {
                # otherwise steer the request to the chosen member while it is up
                if { [LB::status pool foo member 200.200.200.111 80] eq "up" } {
                    pool foo member 200.200.200.111 80
                }
            }
        }
        when SERVER_CONNECTED {
            # record the member that was actually selected under key "1"
            persist add uie 1
        }
    }
    
     persistence
    
    root@(ve11a)(cfg-sync In Sync)(Active)(/Common)(tmos) show ltm persistence persist-records all-properties
    Sys::Persistent Connections
    universal - 172.28.24.10:80 - 200.200.200.111:80
    ------------------------------------------------
      TMM           0
      Mode          universal
      Value         1
      Age (sec.)    6
      Virtual Name  /Common/bar
      Virtual Addr  172.28.24.10:80
      Node Addr     200.200.200.111:80
      Pool Name     /Common/foo
      Client Addr   200.200.200.111
      Local entry
    
    universal - 172.28.24.10:80 - 200.200.200.111:80
    ------------------------------------------------
      TMM           1
      Mode          universal
      Value         1
      Age (sec.)    6
      Virtual Name  /Common/bar
      Virtual Addr  172.28.24.10:80
      Node Addr     200.200.200.111:80
      Pool Name     /Common/foo
      Client Addr   200.200.200.111
      Owner entry
    
    Total records returned: 2
    

    By the way, if you want to change the pool member priority values, I think alertd or iCall may be applicable; a rough sketch follows the links below.

    Acton on Log - using the alertd deamon

    https://devcentral.f5.com/wiki/advdesignconfig.Acton-on-Log-using-the-alertd-deamon.ashx

    iCall - All New Event-Based Automation System by Jason Rahm

    https://devcentral.f5.com/articles/icall-all-new-event-based-automation-system
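
    For example, a rough sketch of the alertd approach in /config/user_alert.conf, assuming the pool uses priority groups with min-active-members 1 as in the original question (the match strings must be adapted to the exact monitor log lines your version writes to /var/log/ltm, and the names are just the ones from the example above). The idea is to swap the priority groups whenever a member goes down, so whichever member is currently taking traffic keeps the highest priority and a recovered member stays idle until it is needed again:

    # assumes pool foo has min-active-members 1 and priority-group
    # set on both members; verify the exact log message text on your version
    alert FOO_MEMBER_101_DOWN "Pool /Common/foo member /Common/200.200.200.101:80 monitor status down" {
        exec command="tmsh modify ltm pool foo members modify { 200.200.200.101:80 { priority-group 1 } 200.200.200.111:80 { priority-group 10 } }"
    }
    alert FOO_MEMBER_111_DOWN "Pool /Common/foo member /Common/200.200.200.111:80 monitor status down" {
        exec command="tmsh modify ltm pool foo members modify { 200.200.200.111:80 { priority-group 1 } 200.200.200.101:80 { priority-group 10 } }"
    }

    Alternatively, the exec command could simply raise an iCall event (tmsh generate sys icall event ...) and keep the priority logic in an iCall script, as described in Jason's article.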