Forum Discussion

Chuck_Adkins_13
Nimbostratus
Nov 15, 2005

priority LB - maybe better suited to an iRule?

In this setup, as long as at least one priority 2 node is available, traffic is directed to the priority 2 nodes. This is correct behavior. If all the priority 2 nodes are down, traffic is directed to the priority 1 node. This is also correct behavior.

When my priority 2 nodes become available again, I want all traffic to be directed to them and no longer to the priority 1 node.

It appears that the cookie persistence is keeping the priority 1 node active even when the priority 2 nodes reappear.

Would I be better suited to use an iRule?

Essentially my priority 1 node serves a page that says "The website is unavailable". When the priority 2 nodes are available, I need traffic to go to them.

Setup:

VIP:

virtual www.domain.com-ssl {
   destination 1.2.3.10:https
   ip protocol tcp
   profile rewrite-ssl tcp www.domain.com
   persist cookie-insert
   pool www.domain.com
}

pool:

pool www.domain.com {
   lb method member least conn
   min up members enable
   min active members 1
   monitor all check
   member 1.2.3.1:80 priority 1
   member 1.2.3.2:80 priority 2
   member 1.2.3.3:80 priority 2
   member 1.2.3.4:80 priority 2
   member 1.2.3.5:80 priority 2
}

persistence:

profile persist cookie-insert {
   defaults from cookie
   mode cookie
   cookie mode insert
   across services enable
   across virtuals enable
   across pools enable
}

profile http rewrite-ssl {
   defaults from http
   redirect rewrite all
}

profile clientssl www.domain.com {
   defaults from clientssl
   key "www.domain.com.2006.key"
   cert "www.domain.com.2006.crt"
}

  • unRuleY_95363
    Historic F5 Account
    Since you don't want persistence when the priority 1 nodes are used, this is likely a case where you'll want to instead split up your priority 1 and priority 2 nodes into different pools.

    Then put the priority 2 pool as the default pool on the virtual server and write an iRule that looks like this:
    when LB_FAILED {
       if { [LB::server pool] eq "prio_2_mbrs" and \
            [active_members prio_2_mbrs] == 0 } {
          pool prio_1_mbrs
          persist none
       }
    }
    What will happen is that by default the priority 2 nodes will be used. If the system fails to connect to a pool member from that pool, it will check whether that was because there are none available and, if so, switch to the priority 1 pool, disabling persistence. (A sketch of the split pools is below.)

    Hope that helps.
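
    For illustration only, the split pools might look something like this, reusing the members from your original pool (the pool names match the iRule above; the priority settings are no longer needed once the pools are separate):
    pool prio_2_mbrs {
       lb method member least conn
       monitor all check
       member 1.2.3.2:80
       member 1.2.3.3:80
       member 1.2.3.4:80
       member 1.2.3.5:80
    }
    pool prio_1_mbrs {
       monitor all check
       member 1.2.3.1:80
    }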
  • Thanks for the quick response.

    I have set up the iRule, but I am not getting the result I am seeking. When all the nodes in main_pool are down (and nodes are up in the secondary pool), my VIP is down and doesn't serve content from the secondary pool.

    iRule:

    rule pool-failover {
       when LB_FAILED {
          if { [LB::server pool] eq "main_pool" and [active_members main_pool] == 0 } {
             pool secondary_pool
             persist none
          }
       }
    }

    My VIP:

    virtual www.domain.com-ssl {
       destination 1.2.3.10:https
       ip protocol tcp
       profile rewrite-ssl tcp www.domain.com
       persist cookie-insert
       pool main_pool
       rule pool-failover
    }
  • unRuleY_95363
    Historic F5 Account
    Are you referring to the status of the VIP in the GUI when you say "my VIP is down"? This status will not actually affect anything on the LTM. It will influence any 3DNS/GTM products, though.

    You could try removing the default pool from the VIP and then adding this to your iRule:
    when CLIENT_ACCEPTED {
       pool main_pool
    }

    This will cause the VIP's status to be blue (undetermined).

    Also, I think the problem might be that you need to add "LB::reselect" after the line "persist none". This causes the system to re-make the load-balancing decision against the new pool. (A sketch is below.)
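
    As a rough, untested sketch using the pool names from your config, the rule with "LB::reselect" added might look like this:
    when LB_FAILED {
       if { [LB::server pool] eq "main_pool" and [active_members main_pool] == 0 } {
          # Switch to the secondary pool, drop persistence, and
          # re-make the load-balancing decision against the new pool
          pool secondary_pool
          persist none
          LB::reselect
       }
    }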

  • Quick recap of my issue: if all nodes in the main pool are down, I want to serve content from another pool. We use cookie persistence, and when the main nodes are available again I do not want customers persisting to the failover nodes. I initially tried this with a priority setting, but persistence was burning me.

    With the iRule, the VIP status shows available when all nodes in its pool are down/disabled, but no content from fail_over_pool is ever served.

    domain.com-failover rule:

    
    when LB_FAILED {
       if { [LB::server pool] eq "main_pool" and [active_members main_pool] == 0 } {
          pool fail_over_pool
          persist none
          LB::reselect
       }
    }

    This is my virtual:

    
    virtual www.domain.com {
       destination 1.2.3.4:https
       ip protocol tcp
       profile rewrite-ssl tcp www.domain.com
       persist cookie-insert
       pool main_pool
       rule domain.com-failover
    }
  • Troubleshooting without logs is tough ... not knowing how to turn on logging is even tougher! If you can clue me in on how to add some logging, that would be great.

    I think LB_FAILED is being triggered; this works:

    
    when CLIENT_ACCEPTED { pool main_pool }
    when LB_FAILED {
       pool failover_pool
       persist none
       LB::reselect
    }

    I must be losing it in the if clause:

    
    if { [LB::server pool] eq "main_pool" and [active_members main_pool] == 0} {
          pool failover_pool
          persist none
          LB::reselect

    I can probably live without that if clause, and it might make the iRule faster, etc.

    I still see the BIGIP cookie get set; however, when my main_pool node members come back, traffic doesn't persist to the failover_pool ... so that works too.
  • unRuleY_95363
    Historic F5 Account
    Ok. Sorry about that. To add logging use the "log" statement.

    So, for example:
    when LB_FAILED {
       log local0. "failed for pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
    }

    This will generate a log message into syslog that will end up in /var/log/ltm.

    Now, on to your problem. Just a hunch, but try adding parentheses () around the two parts of the expression in your if:
    if { ( [LB::server pool] eq "main_pool" ) and ( [active_members main_pool] == 0 ) } {

    If this fixes it, try searching for "operator precedence" and you should find a reasonable explanation that I've already posted elsewhere on DevCentral.

  • Thanks for the tips ... just getting my head around these iRules. With the logging turned on, I can see that the "if statement" is using the failover pool and not the main_pool. I have main_pool defined in my virtual.

    The only way I can get the "begin:" log to show "main_pool" is to define the pool:

    
    when LB_FAILED {
       pool main_pool
       log local0. "begin: pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
       if { ( [LB::server pool] eq "main_pool" ) and ( [active_members main_pool] == 0 ) } {
          pool failover_pool
          persist none
          LB::reselect
          log local0. "end: pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
       }
    }

    With the above rule I see these logs when the nodes in main_pool are all inactive:

    
    Dec  7 08:10:45 tmm tmm[714]: Rule domain.com-failover : begin: pool main_pool, active mbrs = 0
    Dec  7 08:10:45 tmm tmm[714]: Rule domain.com-failover : end: pool failover_pool, active mbrs = 3

    For clarity, this is what I see without defining the pool (and the failover_pool nodes are not served when the main_pool nodes are inactive):

    
    when LB_FAILED {
       log local0. "begin: pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
       if { ( [LB::server pool] eq "main_pool" ) and ( [active_members main_pool] == 0 ) } {
          pool failover_pool
          persist none
          LB::reselect
          log local0. "end: pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
       }
    }

    logs:

    
    Dec  7 08:12:58 tmm tmm[714]: Rule domain.com-failover : begin: pool failover_pool, active mbrs = 3
    Dec  7 08:12:58 tmm tmm[714]: Rule domain.com-failover : begin: pool failover_pool, active mbrs = 3

  • unRuleY_95363
    Historic F5 Account
    I would suggest you add this:

    when HTTP_REQUEST {
       pool main_pool
    }

    This will set/reset the pool to main_pool on each request. The default pool setting on the virtual server only provides the initial pool; once the pool has been changed for a given request, all subsequent requests on the same connection go to the new pool setting. Setting/resetting the pool in the HTTP_REQUEST event ensures each request is first routed to main_pool and only switched to the failover pool after a failure on that request. (A combined sketch of the whole rule follows below.)
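
    Putting the pieces from this thread together, the complete rule might look something like this (a rough, untested sketch using the pool names from the posts above):
    when HTTP_REQUEST {
       # Route every request to the primary pool first
       pool main_pool
    }
    when LB_FAILED {
       # Fail over only when the primary pool has no active members
       if { ( [LB::server pool] eq "main_pool" ) and ( [active_members main_pool] == 0 ) } {
          pool failover_pool
          # Drop cookie persistence so clients return to main_pool once it recovers
          persist none
          # Re-run the load-balancing decision against the failover pool
          LB::reselect
       }
    }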