Forum Discussion
GaryZ_31658
Jul 14, 2008 - Historic F5 Account
VIP target VIP
All,
I have a scenario where I want to load balance "pods" of servers behind a master VIP. Each pool could have many nodes, and I do not want to disable each individual node when we take a pod offline. It would be much simpler to disable either the pool or the VIP as a whole rather than each individual node.
iRule logic: In the iRule below, we round-robin connections to a specific VIP (one defined for each pod). Once at the pool, we set a unique cookie and persist back to the pod as required (each VIP inserts a cookie "my_cookiePodx").
Question: Can I monitor the pool (or VIP) in the iRule for availability, so that if a pod is offline the master VIP won't send traffic to the disabled pod virtual? I was thinking of using LB_FAILED, but the docs suggest fail detection takes between 9 and 45 seconds. My thought is that if the VIP is offline, LTM would send a reset and the browser would simply retry. This seems faster, but it also seems a little dirty.
when RULE_INIT {
    # global round-robin counter shared across connections
    set ::cnt 0
}
when HTTP_REQUEST {
    # look for an existing pod cookie on the request
    set reqcookie [findstr [HTTP::cookie names] "my_cookie"]
    if { $reqcookie starts_with "my_cookie" } {
        # extract the pod identifier that follows "my_cookie" (9 characters)
        set podcookie [findstr $reqcookie "my_cookie" 9 " "]
        # map the pod identifier to its virtual server via the pod_vip data group
        set podvip [findclass "$podcookie" $::pod_vip " "]
        virtual $podvip
    } else {
        # no pod cookie yet: round-robin new clients across the pod virtuals,
        # wrapping the counter back to 1 after pod3
        incr ::cnt
        if { $::cnt > 3 } {
            set ::cnt 1
        }
        switch $::cnt {
            1 { virtual pod1 }
            2 { virtual pod2 }
            3 { virtual pod3 }
        }
    }
}
- hooleylist (Cirrostratus): If you know the pool name associated with the virtual server, you could use 'active_members pool_name' to verify that one or more pool members are up before selecting the corresponding VIP for the request.
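A minimal sketch of that check, assuming each pod virtual has a corresponding pool (the pool names pool_pod1 through pool_pod3 here are hypothetical):
when HTTP_REQUEST {
    # only hand the request to a pod virtual whose pool has at least one member up
    if { [active_members pool_pod1] > 0 } {
        virtual pod1
    } elseif { [active_members pool_pod2] > 0 } {
        virtual pod2
    } elseif { [active_members pool_pod3] > 0 } {
        virtual pod3
    } else {
        # all pods are down; refuse the connection rather than pick a dead virtual
        reject
    }
}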
- GaryZ_31658 (Historic F5 Account): Thanks for the reply... I guess the best method is to disable "all nodes" in a pod_pool rather than disabling the VIP itself. I'll clean up the logic a little to include monitoring the pods and post the final iRule after testing.
- hooleylist (Cirrostratus): Yep. There isn't a method for getting the state of a VIP in an iRule. As we've only recently gained the ability to specify a virtual server for a request, this wasn't an issue before. If you'd like to see this functionality considered for addition, you could open a case with F5's support group (https://websupport.f5.com).
- Colin_Walker_12 (Historic F5 Account): Aaron's exactly right here, as usual. There isn't much monitoring of VIP state in iRules yet, but I wouldn't be surprised if we saw more of that soon, now that we're able to use virtuals as targets in iRule logic.
- GaryZ_31658 (Historic F5 Account): We decided to modify the logic slightly... Instead of round-robining connections to the pod VIPs, we allow the first connection (no cookie set) to go to a "master pool" which includes all the nodes contained in each pod. We will use least connections as the LB algorithm on this master pool to ensure the new request goes not only to the most available pod but also to the server with the fewest connections. My thought is that once the logic sends this initial connection to the most available node, we can set a persist cookie on the reply. The cookie name/value is being set correctly based on server info in the podx_class, but I need to include the proper persist value (server:port) inside the cookie.
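A rough sketch of stamping that persist value on the reply, assuming a variable (here called $podnum, hypothetical) already holds the pod number derived from the podx_class lookup:
when HTTP_RESPONSE {
    # record the pool member that served this request as "ip:port"
    # so later requests can be persisted back to the same pod/server
    set member "[LB::server addr]:[LB::server port]"
    HTTP::cookie insert name "my_cookiePod$podnum" value $member path "/"
}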
- Colin_Walker_12 (Historic F5 Account): Are you serving data on more than one port inside the pools? If you're not, then you just need the IP address of the member to reference it once you're at the proper virtual.
- brice (Nimbostratus): Here's what I have come up with. This will only have one VIP, and the logic will maintain pod/pool persistence, but not always to the same server. Please let me know what you all think and where I can improve things. I wanted to create a switch structure in the HTTP_RESPONSE section, but couldn't get through the syntax due to the matchclass requirements (one alternative is sketched after the rule below). Thanks in advance...
Assumptions: One VIP with this iRule assigned and a default pool of pool_all_servers. One pool for each pod of servers, and another for all servers (3 total in this example). There is an address data group for each pod that includes the IPs of every server in that pod. Ramp-up time is set to 300 seconds on each of the pod pools, but not on the pool_all_servers pool.
when HTTP_REQUEST {
    LB::mode leastconns
    set strPoolID [HTTP::cookie "pod"]
    if { $strPoolID != "" } {
        if { [active_members pool_www_pod_$strPoolID] > 0 } {
            # pod cookie identified and used to load balance to that pod/pool
            pool pool_www_pod_$strPoolID
        } else {
            # if the original pool is down, use the default pool.
            # User will lose the session, but will get a page.
            HTTP::cookie remove "pod"
            pool [LB::server pool]
            log local0. "sending [IP::client_addr] coming in on [IP::local_addr] to pool [LB::server pool] because original pool pool_www_pod_$strPoolID is down"
        }
    } else {
        # if no pod cookie is present (i.e. a new session), use the default pool
        # and LB mode least connections
        log local0. "using pool [LB::server pool] since we dont have a pod yet"
        pool [LB::server pool]
    }
}
when HTTP_RESPONSE {
    if { [matchclass [IP::server_addr] equals $::dg_pod_1_ips] } {
        set strPodNumber "1"
    } elseif { [matchclass [IP::server_addr] equals $::dg_pod_2_ips] } {
        set strPodNumber "2"
    } else {
        # no match: skip the cookie insert so we don't reference an unset variable
        log local0. "we have problems getting value of pod cookie or it isnt 1 or 2"
        return
    }
    HTTP::cookie insert name "pod" value $strPodNumber path "/"
}
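One possible way around the matchclass/switch limitation (a sketch, assuming a single string data group, hypothetically named dg_pod_map, whose entries pair each server IP with its pod number, e.g. "10.1.1.11 1"):
when HTTP_RESPONSE {
    # look the serving member's IP up in one combined "ip podnumber" data group;
    # findclass returns the portion after the separator, i.e. the pod number
    set strPodNumber [findclass [IP::server_addr] $::dg_pod_map " "]
    if { $strPodNumber ne "" } {
        HTTP::cookie insert name "pod" value $strPodNumber path "/"
    } else {
        log local0. "no pod mapping found for [IP::server_addr]"
    }
}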
- brice (Nimbostratus): If anyone could provide some feedback, it would be greatly appreciated. We think we have the logic down, but want to verify what kind of impact this will have on CPU, etc. Is there a better (read: more efficient) way of accomplishing this? Thanks in advance.
- GaryZ_31658 (Historic F5 Account): Brice,
- Deb_Allen_18 (Historic F5 Account): This may be a simpler & more efficient approach, especially if the default pool idea is working well and you'd like to keep supporting it: