Forum Discussion
Chuck_Adkins_13
Nimbostratus
Nov 15, 2005
priority LB - maybe better suited to iRule?
In this setup, as long as at least one priority 2 node is available, traffic is directed to the priority 2 nodes - this is the correct behavior. If all the priority 2 nodes are down, traffic is directed to the priority 1 node - also correct.
When my priority 2 nodes become available, I want all traffic to be directed to them - and no longer to the priority 1 node.
It appears that the cookie persistence is keeping clients on the priority 1 node even after the priority 2 nodes reappear.
Would I be better suited to use an iRule?
Essentially my priority 1 node serves a page that says "The website is unavailable" - when the priority 2 nodes are available I need traffic to go to them.
Setup:
VIP:
virtual www.domain.com-ssl {
destination 1.2.3.10:https
ip protocol tcp
profile rewrite-ssl tcp www.domain.com
persist cookie-insert
pool www.domain.com
}
pool:
pool www.domain.com {
lb method member least conn
min up members enable
min active members 1
monitor all check
member 1.2.3.1:80 priority 1
member 1.2.3.2:80 priority 2
member 1.2.3.3:80 priority 2
member 1.2.3.4:80 priority 2
member 1.2.3.5:80 priority 2
}
persistence:
profile persist cookie-insert {
defaults from cookie
mode cookie
cookie mode insert
across services enable
across virtuals enable
across pools enable
}
profile http rewrite-ssl {
defaults from http
redirect rewrite all
}
profile clientssl www.domain.com {
defaults from clientssl
key "www.domain.com.2006.key"
cert "www.domain.com.2006.crt"
}
- unRuleY_95363
Historic F5 Account
Since you don't want persistence when the priority 1 nodes are used, this is likely a case where you'll want to instead split up your priority 1 and priority 2 nodes into different pools.
What will happen is that by default the priority 2 nodes will be used. If a connection to a member of that pool fails, the rule checks whether it failed because none are available and, if so, switches to the priority 1 pool and disables persistence.

when LB_FAILED {
   if { ([LB::server pool] eq "prio_2_mbrs") && ([active_members prio_2_mbrs] == 0) } {
      pool prio_1_mbrs
      persist none
   }
}
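For reference, a rough sketch of what the split pool definitions might look like, reusing the members from the original www.domain.com pool (1.2.3.1 was the priority 1 member). The pool names come from the reply above; treat the layout as an approximation, not a tested config:

# sketch only - members taken from the original www.domain.com pool
pool prio_1_mbrs {
   lb method member least conn
   monitor all check
   member 1.2.3.1:80
}
pool prio_2_mbrs {
   lb method member least conn
   monitor all check
   member 1.2.3.2:80
   member 1.2.3.3:80
   member 1.2.3.4:80
   member 1.2.3.5:80
}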
- Chuck_Adkins_13
Nimbostratus
Thanks for the quick response.
- unRuleY_95363
Historic F5 Account
Are you referring to the status of the VIP in the GUI when you say "my VIP is down"? This status will not actually affect anything on the LTM. It will influence any 3DNS/GTM products though.
when CLIENT_ACCEPTED { pool main_pool }
- Chuck_Adkins_13
Nimbostratus
Quick recap of my issue - if all nodes are down I want to serve content from another pool. We use cookie persistence, and when the main nodes are available again I do not want customers persisting to the failover nodes. I initially tried this with a priority setting, but persistence was burning me.

when LB_FAILED {
   if { ([LB::server pool] eq "main_pool") && ([active_members main_pool] == 0) } {
      pool fail_over_pool
      persist none
      LB::reselect
   }
}
virtual www.domain.com {
   destination 1.2.3.4:https
   ip protocol tcp
   profile rewrite-ssl tcp www.domain.com
   persist cookie-insert
   pool main_pool
   rule domain.com-failover
}
- Chuck_Adkins_13
Nimbostratus
Troubleshooting w/o logs is tough ... not knowing how to turn on logging ... even tougher! If you can clue me in on how to add some logging, that would be great.

when CLIENT_ACCEPTED {
   pool main_pool
}
when LB_FAILED {
   pool failover_pool
   persist none
   LB::reselect
}
if { ([LB::server pool] eq "main_pool") && ([active_members main_pool] == 0) } {
   pool failover_pool
   persist none
   LB::reselect
}
- unRuleY_95363
Historic F5 Account
Ok. Sorry about that. To add logging use the "log" statement.
when LB_FAILED {
   log local0. "failed for pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
}
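(On LTM, messages logged to local0 from an iRule typically end up in /var/log/ltm, which is where the tmm-tagged lines quoted further down come from.)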
if { ([LB::server pool] eq "main_pool") && ([active_members main_pool] == 0) } {
- Chuck_Adkins_13
Nimbostratus
Thanks for the tips ... just getting my head around these iRules. With the logging turned on I can see that the "if statement" is using the failover pool and not the main_pool. I have main_pool defined in my virtual.

when LB_FAILED {
   pool main_pool
   log local0. "begin: pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
   if { ([LB::server pool] eq "main_pool") && ([active_members main_pool] == 0) } {
      pool failover_pool
      persist none
      LB::reselect
      log local0. "end: pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
   }
}
Dec 7 08:10:45 tmm tmm[714]: Rule domain.com-failover : begin: pool main_pool, active mbrs = 0
Dec 7 08:10:45 tmm tmm[714]: Rule domain.com-failover : end: pool failover_pool, active mbrs = 3
when LB_FAILED {
   log local0. "begin: pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
   if { ([LB::server pool] eq "main_pool") && ([active_members main_pool] == 0) } {
      pool failover_pool
      persist none
      LB::reselect
      log local0. "end: pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
   }
}
Dec 7 08:12:58 tmm tmm[714]: Rule domain.com-failover : begin: pool failover_pool, active mbrs = 3
Dec 7 08:12:58 tmm tmm[714]: Rule domain.com-failover : begin: pool failover_pool, active mbrs = 3
- unRuleY_95363
Historic F5 Account
I would suggest you add this:
when HTTP_REQUEST { pool main_pool }
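Putting the pieces from this thread together, a minimal sketch of the full rule might look like the following. The HTTP_REQUEST event resets the default pool on every request so the LB_FAILED check always starts from main_pool; the pool names and log text are taken from the posts above, and this is a sketch assembled from those snippets, not a verified configuration:

when HTTP_REQUEST {
   # Re-select the primary pool for every request so a previous failover
   # (or a persistence record) does not leave the connection stuck on failover_pool.
   pool main_pool
}
when LB_FAILED {
   log local0. "failed for pool [LB::server pool], active mbrs = [active_members [LB::server pool]]"
   if { ([LB::server pool] eq "main_pool") && ([active_members main_pool] == 0) } {
      # All main_pool members are down: send this request to the failover
      # pool and skip persistence so clients do not stick to it later.
      pool failover_pool
      persist none
      LB::reselect
   }
}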